We are building an image slicer integral field unit (IFU) to go on the IMACS wide-field imaging spectrograph on the Magellan Baade Telescope at Las Campanas Observatory, the Reformatting Optically-Sensitive IMACS Enhancement IFU, or ROSIE IFU. The 50.4" x 53.5" field of view will be pre-sliced into four 12.6" x 53.5" subfields, and then each subfield will be divided into 21 slices, each 0.6" x 53.5". The four main image slicers will produce four pseudo-slits spaced six arcminutes apart across the IMACS f/2 camera field of view, providing a wavelength coverage of 1800 Angstroms at a spectral resolution of 2000. Optics are in-hand, the first image slicer is being aluminized, mounts are being designed and fabricated, and software is being written. This IFU will enable the efficient mapping of extended objects such as nebulae, galaxies, or outflows, making it a powerful addition to IMACS.
astrophysics
We investigate the advantage of coherent superposition of two different coded channels in quantum metrology. In a continuous variable system, we show that the Heisenberg limit $1/N$ can be beaten by the coherent superposition without the help of indefinite causal order. In parameter estimation, we demonstrate that the strategy with the coherent superposition can perform better than the strategy with a quantum \textsc{switch}, which can generate indefinite causal order. We analytically obtain the general form of the estimation precision in terms of the quantum Fisher information and further prove that a nonlinear Hamiltonian can improve the estimation precision and make the measurement uncertainty scale as $1/N^m$ for $m\geq2$. Our results can help in constructing high-precision measurement equipment, which can be applied to the detection of coupling strengths, tests of time dilation, and modifications of the canonical commutation relation.
quantum physics
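To make the scaling claims in the abstract above concrete, here is a minimal numerical sketch (a toy model, not the coded-channel setup of the paper): for a pure probe evolved as $e^{-i\theta H}|\psi_0\rangle$, the quantum Fisher information is $F_Q = 4\,\mathrm{Var}_{\psi_0}(H)$ and the uncertainty obeys $\Delta\theta \geq 1/\sqrt{F_Q}$; swapping the linear generator $\hat{n}$ for the nonlinear $\hat{n}^2$ upgrades the scaling from $1/N$ to $1/N^2$.

```python
import numpy as np

def qfi(psi0, H):
    """QFI for phase encoding exp(-i*theta*H) on a pure state psi0."""
    mean = np.vdot(psi0, H @ psi0).real
    mean_sq = np.vdot(psi0, H @ H @ psi0).real
    return 4.0 * (mean_sq - mean**2)

N = 20
n_op = np.diag(np.arange(N + 1)).astype(float)  # number operator, Fock basis

# NOON-like probe (|0> + |N>)/sqrt(2)
psi0 = np.zeros(N + 1)
psi0[0] = psi0[N] = 1 / np.sqrt(2)

print(qfi(psi0, n_op))         # linear H = n:      F_Q = N^2 -> dtheta ~ 1/N
print(qfi(psi0, n_op @ n_op))  # nonlinear H = n^2: F_Q = N^4 -> dtheta ~ 1/N^2
```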
We show how the renormalization constant of the Higgs vacuum expectation value, fixed by a tadpole condition, is responsible for gauge dependences in various definitions of parameters in the $R_{\xi}$-gauge. Then we show the relationship of this renormalization constant to the Fleischer-Jegerlehner (FJ) scheme, which is used to avoid these gauge dependences. In this way, we also present a viewpoint on the FJ-scheme complementary to the ones already existing in the literature. Additionally, we compare and discuss different approaches to the renormalization of tadpoles by identifying the similarities and relations between them. The relationship to the Higgs background field renormalization is also discussed.
high energy physics phenomenology
In this note we study a quantitative version of Bernstein's approximation problem when the polynomials are dense in weighted spaces on the real line completing a result of S.~N.~Mergelyan (1960). We estimate in the logarithmic scale the error of the weighted polynomial approximation of the Cauchy kernel.
mathematics
The physics of topological materials has attracted much attention from both physicists and mathematicians recently. The index and the fermion number of Dirac fermions play an important role in topological insulators and topological superconductors. A zero-energy mode exists when Dirac fermions couple to objects with soliton-like structure such as kinks, vortices, monopoles, strings and branes. We discuss a system of Dirac fermions interacting with a vortex and a kink. Systems of this kind can be realized on the surface of topological insulators, where Dirac fermions exist. The fermion number is fractionalized, and this is related to the presence of fermion zero-energy excitation modes. A zero-energy mode can be regarded as a Majorana fermion mode when the chemical potential vanishes. Our discussion includes the case where there is a half-flux quantum vortex associated with a kink in a magnetic field in a bilayer superconductor. A normalizable wave function of the fermion zero-energy mode does not exist in the core of the half-flux quantum vortex. The index of the Dirac operator and the fermion number have additional contributions when a soliton scalar field has a singularity.
high energy physics theory
Latent variable models are well-known to suffer from rank deficiencies, causing problems with convergence and stability. Such problems are compounded in the "reduced-group split-ballot multitrait-multimethod model", which omits a set of moments from the estimation through a planned missing data design. This paper demonstrates the existence of rank deficiencies in this model and gives the explicit null space. It also demonstrates that sample size and distance from the rank-deficient point interact in their effects on convergence, causing convergence to improve or worsen depending on both factors simultaneously. Furthermore, it notes that the latent variable correlations in the uncorrelated-methods SB-MTMM model remain unaffected by the rank deficiency. I conclude that methodological experiments should be careful to manipulate both distance to known rank deficiencies and sample size, and to report all results, not only the apparently converged ones. Practitioners may consider that, even in the presence of nonconvergence or so-called "inadmissible" estimates, a subset of parameter estimates may still be consistent and stable.
statistics
Automated cyber threat detection in computer networks is a major challenge in cybersecurity. The cyber domain has inherent challenges that make traditional machine learning techniques problematic, specifically the need to learn continually evolving attacks through global collaboration while maintaining data privacy, and the varying resources available to network owners. We present a scheme to mitigate these difficulties through an architectural approach using community model sharing with a streaming analytic pipeline. Our streaming approach trains models incrementally as each log record is processed, thereby adjusting to concept drift resulting from changing attacks. Further, we designed a community sharing approach which federates learning through merging models without the need to share sensitive cyber-log data. Finally, by standardizing data and Machine Learning processes in a modular way, we provide network security operators with the ability to manage cyber threat events and model sensitivity through community member and analytic method weighting, in ways that are best suited for their available resources and data.
computer science
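Purely to fix ideas on the two mechanisms in the abstract above, the sketch below trains a logistic-regression detector one log record at a time and merges community models by weighted parameter averaging. This is an illustrative toy, not the paper's pipeline; the feature encoding, class names and merge rule are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class StreamingDetector:
    """Logistic model updated incrementally, one record at a time."""
    def __init__(self, n_features, lr=0.05):
        self.w = np.zeros(n_features)
        self.lr = lr

    def update(self, x, y):
        # One SGD step per log record: adapts to concept drift.
        self.w += self.lr * (y - sigmoid(self.w @ x)) * x

    def score(self, x):
        return sigmoid(self.w @ x)

def merge(detectors, weights):
    """Federate by weighted parameter averaging -- no raw logs exchanged."""
    merged = StreamingDetector(len(detectors[0].w))
    merged.w = sum(w * d.w for w, d in zip(weights, detectors))
    return merged

# Two network owners train locally, then share only model parameters.
rng = np.random.default_rng(1)
a, b = StreamingDetector(8), StreamingDetector(8)
for _ in range(5000):
    x = rng.normal(size=8)
    y = float(x[0] + 0.5 * x[1] > 0)   # stand-in for a labelled alert
    a.update(x, y)
    b.update(x, y)
community = merge([a, b], weights=[0.5, 0.5])
```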
Non-volatile memory (NVM) technologies suffer from limited write endurance. To address this challenge, we propose Predict and Write (PNW), a K/V-store that uses a clustering-based machine learning approach to extend the lifetime of NVMs. PNW decreases the number of bit flips for PUT/UPDATE operations by determining the best memory location an updated value should be written to. PNW leverages the indirection level of K/V-stores to freely choose the target memory location for any given write based on its value. PNW organizes NVM addresses in a dynamic address pool clustered by the similarity of the data values they refer to. We show that, by choosing the right target memory location for a given PUT/UPDATE operation, the number of total bit flips and cache lines can be reduced by up to 85% and 56% over the state of the art.
computer science
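A minimal sketch of the selection step described in the abstract above, assuming an in-memory pool mapping candidate NVM addresses to their current contents (the helper names are illustrative; the real PNW first narrows the pool via value-similarity clustering):

```python
# Pick, from a pool of candidate addresses, the one whose current contents
# differ from the new value in the fewest bits.

def bit_flips(old: bytes, new: bytes) -> int:
    """Hamming distance in bits between two equal-length byte strings."""
    return sum(bin(a ^ b).count("1") for a, b in zip(old, new))

def choose_address(address_pool: dict, new_value: bytes) -> int:
    """Return the address whose stored bytes minimize bit flips on write."""
    return min(address_pool, key=lambda addr: bit_flips(address_pool[addr], new_value))

pool = {0x10: b"hello world!", 0x20: b"hello xorld!", 0x30: b"zzzzzzzzzzzz"}
target = choose_address(pool, b"hello yorld!")
print(hex(target), bit_flips(pool[target], b"hello yorld!"))   # 0x20, 1 flip
```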
Multi-contrast magnetic resonance (MR) image registration is useful in the clinic to achieve fast and accurate imaging-based disease diagnosis and treatment planning. Nevertheless, the efficiency and performance of the existing registration algorithms can still be improved. In this paper, we propose a novel unsupervised learning-based framework to achieve accurate and efficient multi-contrast MR image registrations. Specifically, an end-to-end coarse-to-fine network architecture consisting of affine and deformable transformations is designed to improve the robustness and achieve end-to-end registration. Furthermore, a dual consistency constraint and a new prior knowledge-based loss function are developed to enhance the registration performance. The proposed method has been evaluated on a clinical dataset containing 555 cases, and encouraging performance has been achieved. Compared to commonly utilized registration methods, including VoxelMorph, SyN, and LT-Net, the proposed method achieves better registration performance, with a Dice score of 0.8397 in identifying stroke lesions. With regard to registration speed, our method is about 10 times faster than the most competitive method, SyN (Affine), when testing on a CPU. Moreover, we show that our method can still perform well on more challenging tasks where scanning information is lacking, demonstrating high robustness for clinical application.
electrical engineering and systems science
We elaborate on integrable dynamical systems from scalar-gravity Lagrangians that include the leading dilaton tadpole potentials of broken supersymmetry. In the static Dudas-Mourad compactifications from ten to nine dimensions, which rest on these leading potentials, the string coupling and the space-time curvature become unbounded in some regions of the internal space. On the other hand, the string coupling remains bounded in several corresponding solutions of these integrable models. One can thus identify corrected potential shapes that could grant these features generically when supersymmetry is absent or non-linearly realized. Still, large scalar curvatures remain present in all our examples. However, as in other contexts, the combined effects of the higher-derivative corrections of String Theory could tame them.
high energy physics theory
Theories involving localized collapse allow the possibility that classical information could be obtained about quantum states without using POVMs and without allowing superluminal signalling. We can model this by extending quantum theory to include hypothetical devices that read out information about the local quantum state at a given point, defined by considering only collapses in its past light cone. Like Popescu-Rohrlich boxes, these hypothetical devices would have practical and scientific implications if realisable. These include signalling through opaque media, probing the physics of distant or opaque systems without needing a reflected signal, and giving detailed information about collapse dynamics without requiring direct observation of the collapsing system. These potential applications motivate systematic searches for possible signatures of these nonstandard extensions of quantum theory, and in particular for relevant gravitational effects, such as the validity of semi-classical gravity on small scales.
quantum physics
The effectiveness of outer hair cells (OHCs) in amplifying the motion of the organ of Corti, and thereby contributing to the sensitivity of mammalian hearing, depends on the mechanical power output of these cells. Electromechanical coupling in OHCs, which enables these cells to convert electrical energy into mechanical energy, has been analyzed in detail in isolated cells, primarily using static membrane models. In preceding reports, the mechanical output of OHCs was evaluated by developing a kinetic theory based on a simplified one-dimensional (1D) model of OHCs. Here, this kinetic description of OHCs is extended by using the membrane model that has been used for analyzing in vitro experiments. The present theory predicts, for systems without inertial load, that an elastic load enhances the positive shift of the voltage dependence of the membrane capacitance due to turgor pressure. For systems with inertia, the mechanical power output also depends on turgor pressure. The maximal power output is, however, similar to the previous prediction of up to ~10 fW based on the 1D model.
physics
Carr and Wu (2004), henceforth CW, developed a framework that encompasses almost all of the continuous-time models proposed in the option pricing literature. Their main result hinges on the stopping time property of the time changes, yet none of the models CW proposed for the time changes satisfies this assumption. In this paper, when the time changes are adapted, but not necessarily stopping times, we provide results analogous to CW's. We show that our approach can be applied to all models in CW.
mathematics
We report on the fabrication and characterization of 50 Ω, flux-tunable, low-loss, SQUID-based transmission lines. The fabrication process relies on the deposition of a thin dielectric layer (a few tens of nanometers) via Atomic Layer Deposition (ALD) on top of a SQUID array; the whole structure is then covered by a non-superconducting metallic top ground plane. We present experimental results from five different samples. We systematically characterize their microscopic parameters by measuring the propagating phase in these structures. We also investigate losses and discriminate conductor from dielectric losses. This fabrication method offers several advantages. First, the SQUID array fabrication does not rely on a Niobium tri-layer process but on a simpler double-angle evaporation technique. Second, ALD provides a high-quality dielectric, leading to low-loss devices. Further, the SQUID array fabrication is based on a standard, all-aluminum process, allowing direct integration with superconducting qubits. Moreover, our devices are in-situ flux tunable, allowing mitigation of the uncertainties inherent in any fabrication process. Finally, since the unit cell is a single SQUID (no extra ground capacitance is needed), it is straightforward to modulate the size of the unit cell periodically, allowing band engineering. This fabrication process can be directly applied to traveling-wave parametric amplifiers.
condensed matter
Motion degradation is a central problem in Magnetic Resonance Imaging (MRI). This work addresses the problem of how to obtain higher-quality, super-resolved, motion-free reconstructions from highly undersampled MRI data. We present for the first time a variational multi-task framework that joins three relevant tasks in MRI: reconstruction, registration and super-resolution. Our framework takes a set of multiple undersampled MR acquisitions corrupted by motion into a novel multi-task optimisation model, which is composed of an $L^2$ fidelity term that allows sharing representations between tasks, super-resolution foundations, and hyperelastic deformations to model biological tissue behaviour. We demonstrate that this combination yields significant improvements over sequential models and other bi-task methods. Our results exhibit fine details and compensate for motion, producing sharp and highly textured images compared to state-of-the-art methods.
electrical engineering and systems science
There has been considerable recent interest in Bayesian modeling of high-dimensional networks via latent space approaches. When the number of nodes increases, estimation based on Markov Chain Monte Carlo can be extremely slow and show poor mixing, thereby motivating research on alternative algorithms that scale well in high-dimensional settings. In this article, we focus on the latent factor model, a widely used approach for latent space modeling of network data. We develop scalable algorithms to conduct approximate Bayesian inference via stochastic optimization. Leveraging sparse representations of network data, the proposed algorithms show massive computational and storage benefits, and allow us to conduct inference in settings with thousands of nodes.
statistics
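As a toy illustration of stochastic optimization for a latent factor network model (a bare-bones SGD on a logistic link, not the approximate Bayesian algorithms the article develops), one can ascend the Bernoulli log-likelihood one sampled node pair at a time; with sparse data the same idea samples observed edges plus random non-edges.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a network: P(A_ij = 1) = sigmoid(z_i . z_j + mu)
n, d, mu = 200, 2, -2.0
Z_true = rng.normal(scale=0.8, size=(n, d))
probs = 1 / (1 + np.exp(-(Z_true @ Z_true.T + mu)))
A = np.triu(rng.random((n, n)) < probs, 1).astype(float)
A = A + A.T

# Stochastic optimization of latent positions, one node pair per step.
Z = rng.normal(scale=0.1, size=(n, d))
lr = 0.05
for _ in range(50_000):
    i, j = rng.integers(n, size=2)
    if i == j:
        continue
    zi, zj = Z[i].copy(), Z[j].copy()
    p = 1 / (1 + np.exp(-(zi @ zj + mu)))
    g = A[i, j] - p        # gradient of the Bernoulli log-likelihood
    Z[i] += lr * g * zj
    Z[j] += lr * g * zi
```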
We present a Monte Carlo based analysis of the combined world data on polarized lepton-nucleon deep-inelastic scattering at small Bjorken $x$ within the polarized quark dipole formalism. We show for the first time that double-spin asymmetries at $x<0.1$ can be successfully described using only small-$x$ evolution derived from first-principles QCD, allowing predictions to be made for the $g_1$ structure function at much smaller $x$. Anticipating future data from the Electron-Ion Collider, we assess the impact of electromagnetic and parity-violating polarization asymmetries on $g_1$ and demonstrate an extraction of the individual flavor helicity PDFs at small $x$.
high energy physics phenomenology
Using out-of-band (OOB) side-information has recently been shown to accelerate beam selection in single-user millimeter wave (mmWave) massive MIMO communications. In this paper, we propose a novel OOB-aided beam selection framework for a mmWave uplink multi-user system. In particular, we exploit spatial information extracted from lower (sub-6 GHz) bands in order to assist with an inter-user coordination scheme at mmWave bands. To enforce coordination, we propose an exchange protocol exploiting device-to-device communications, where low-rate beam-related information is exchanged between the mobile terminals. The decentralized coordination mechanism allows the suppression of the so-called co-beam interference which would otherwise lead to irreducible interference at the base station side, thereby triggering substantial spectral efficiency gains.
electrical engineering and systems science
Context: Fast radio bursts are transient radio pulses of extragalactic origin. Their dispersion measure is indicative of the baryon content in the ionized intergalactic medium between the source and the observer. However, inference using unlocalized fast radio bursts is degenerate with the distribution of redshifts of host galaxies. Method: We perform a joint inference of the intergalactic baryon content and the fast radio burst redshift distribution using Bayesian statistics, comparing the likelihoods of different models to reproduce the observed statistics in order to infer the most likely models. In addition to two models of the intergalactic medium, we consider contributions from the local environment of the source, assumed to be a magnetar, as well as a representative ensemble of host and intervening galaxies. Results: Assuming that the missing baryons reside in the ionized intergalactic medium, our results suggest that the redshift distribution of observed fast radio bursts peaks at $z \lesssim 0.6$. However, conclusions from different instruments regarding the intergalactic baryon content diverge and thus require additional changes to the observed distribution of host redshifts, beyond those caused by telescope selection effects.
astrophysics
The Gompertz-Makeham distribution, which is commonly used to represent lifetimes based on laws of mortality, is one of the most popular choices for mortality modelling in the field of actuarial science. This paper investigates ordering properties of the smallest and largest lifetimes arising from two sets of heterogeneous groups of insurees following respective Gompertz-Makeham distributions. Some sufficient conditions are provided in the sense of usual stochastic ordering to compare the smallest and largest lifetimes from two sets of dependent variables. Comparison results on the smallest lifetimes in the sense of hazard rate ordering and ageing-faster ordering are established for two groups of heterogeneous independent lifetimes. Under a similar set-up, a counter-example shows that no reversed hazard rate ordering exists between the largest lifetimes. Finally, we present sufficient conditions to stochastically compare two sets of independent heterogeneous lifetimes under random shocks by means of usual stochastic ordering. Such comparisons for the smallest lifetimes are also carried out in terms of hazard rate ordering.
statistics
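For reference, the distribution named in the abstract above, in one common parameterization (conventions vary across the actuarial literature): with Gompertz parameters $\alpha, \beta > 0$ and Makeham (background) rate $\lambda \ge 0$, the hazard rate and survival function are
\begin{align}
  h(t) &= \lambda + \alpha e^{\beta t}, \\
  \bar{F}(t) &= \exp\!\Big(-\lambda t - \frac{\alpha}{\beta}\big(e^{\beta t} - 1\big)\Big), \qquad t \ge 0 ,
\end{align}
where the second line follows from $\bar{F}(t) = \exp\big(-\int_0^t h(s)\,ds\big)$.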
Concerns have been widely acknowledged about human health effects, e.g., heating of the eyes and skin, from exposure to electromagnetic fields (EMF) produced by wireless transmitters. Mobile telecommunications rely on an extensive network of base stations (BSs) and handheld devices that transmit signals via EMF. There is a chance of aggravation due to two important changes that will be seen in future cellular networks. First, the number of BSs will grow remarkably with the proliferation of small-cell networks, which will expose humans to EMF more often. Second, highly concentrated EMF beams will be generated by employing larger antenna arrays to overcome the faster EMF energy attenuation in higher-frequency bands such as the millimeter wave (mmW) spectrum, which will increase damage if the main beam points at the human body. However, the two changes can be exploited as leverage for (i) a wider selection of alternative BSs and (ii) more precise beamforming to a desired user equipment (UE) with less EMF leakage in other directions, respectively. Harnessing the two changes, we have been investigating the human health impacts of 5G wireless systems. This extended abstract summarizes our findings thus far.
electrical engineering and systems science
As a part of the celebration of 50 years of the Standard Model of particle physics, I present a brief history of the precision theory of electroweak interactions. I emphasize in particular the theoretical preparations for the LEP program and the prediction of m_t and m_h from the electroweak precision data. [to appear in the proceedings of the SM@50 Symposium, Case Western Reserve University, June 1-4, 2018]
high energy physics phenomenology
The addition of a weak oscillating field modifying strongly dressed spins enhances and enriches the system's quantum dynamics. Through low-order harmonic mixing, the bichromatic driving generates an additional rectified static field acting on the spin system. This secondary field allows for fine tuning of the atomic response and produces effects not accessible with a single dressing field, such as a spatial triaxial anisotropy of the spin coupling constants and an acceleration of the spin dynamics. This tuning-dressed configuration introduces an extra handle for full engineering of the system in quantum control applications. The tuning amplitude, harmonic content, spatial orientation and phase relation are control parameters. A theoretical analysis based on a perturbative approach is experimentally validated by applying a bichromatic radiofrequency field to an optically pumped Cs atomic vapour. We measure the resonance shifts produced by tuning fields up to the third harmonic.
quantum physics
Despite the potential of online sharing economy platforms such as Uber, Lyft, or Foodora to democratize the labor market, these services are often accused of fostering unfair working conditions and low wages. These problems have been recognized by researchers and regulators, but the size and complexity of these socio-technical systems, combined with the lack of transparency about algorithmic practices, make it difficult to understand system dynamics and large-scale behavior. This paper combines approaches from complex systems and algorithmic fairness to investigate the effect of algorithm design decisions on wage inequality in ride-hailing markets. We first present a computational model that includes conditions about locations of drivers and passengers, traffic, the layout of the city, and the algorithm that matches requests with drivers. We calibrate the model with parameters derived from empirical data. Our simulations show that small changes in the system parameters can cause large deviations in the income distributions of drivers, leading to a highly unpredictable system which often distributes vastly different incomes to identically performing drivers. As suggested by recent studies about feedback loops in algorithmic systems, these initial income differences can result in enforced and long-term wage gaps.
physics
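A stripped-down illustration of the modeling idea in the abstract above, assuming a one-dimensional "city", identically performing drivers, and a greedy nearest-driver matching rule (the real model adds traffic, city layout and empirically calibrated parameters):

```python
import numpy as np

rng = np.random.default_rng(7)
n_drivers, n_requests, fare = 20, 2000, 1.0
pos = rng.uniform(0, 1, n_drivers)       # driver positions on [0, 1)
income = np.zeros(n_drivers)

for _ in range(n_requests):
    pickup, dropoff = rng.uniform(0, 1, 2)
    d = np.argmin(np.abs(pos - pickup))  # matching rule: closest driver wins
    income[d] += fare
    pos[d] = dropoff                     # winner relocates to the drop-off

print(income.min(), income.max())        # identical drivers, unequal incomes
```

Even this toy shows the matching rule alone splitting incomes among statistically identical drivers, the qualitative effect the paper quantifies at scale.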
Quasars are ideal targets for the astrometric calibration of large-scale astronomical surveys, as they have negligible proper motion and parallax. The forthcoming 4-m International Liquid Mirror Telescope (ILMT) will survey a strip of sky about 27 arcminutes wide. To carry out the astrometric calibration of the ILMT observations, we aimed to compile a list of quasars with accurate equatorial coordinates falling in the ILMT stripe. Towards this, we cross-correlated all quasars known to date with the sources in the Gaia-DR2 catalogue, as the Gaia-DR2 sources have position uncertainties as small as a few milliarcseconds (mas). We present here the result of this cross-correlation: a catalogue of 6738 quasars suitable for the astrometric calibration of the ILMT fields. This catalogue of quasars can also be used to study quasar variability over diverse time scales once the ILMT starts its observations. While preparing this catalogue, we also confirmed that quasars in the ILMT stripe have proper motions and parallaxes smaller than 20 mas/yr and 10 mas, respectively.
astrophysics
In this work we provide a method to study the entanglement entropy for non-Gaussian states that minimize the energy functional of interacting quantum field theories at arbitrary coupling. To this end, we build a class of non-Gaussian variational trial wavefunctionals with the help of exact nonlinear canonical transformations. The calculability \emph{bonanza} shown by these variational \emph{ans\"atze} allows us to compute the entanglement entropy using the prescription for the ground state of free theories. In free theories, the entanglement entropy is determined by the two-point correlation functions. For the interacting case, we show that these two-point correlators can be replaced by their nonperturbatively corrected counterparts. After giving some general formulae for general interacting models, we calculate the entanglement entropy of half space and of compact regions for the $\phi^4$ scalar field theory in 2D. Finally, we analyze the r\^ole played by higher-order correlators in our results and show that strong subadditivity is satisfied.
high energy physics theory
In this paper, we introduce the ID-Conditioned Auto-Encoder for unsupervised anomaly detection. Our method is an adaptation of the Class-Conditioned Auto-Encoder (C2AE) designed for open-set recognition. Assuming that non-anomalous samples belong to distinct IDs, we apply the Conditioned Auto-Encoder with labels provided by these IDs. As opposed to C2AE, our approach omits the classification subtask and reduces the learning process to a single run. We simplify the learning process further by fixing a constant vector as the target for non-matching labels. We apply our method in the context of sounds for machine condition monitoring. We evaluate our method on the ToyADMOS and MIMII datasets from the DCASE 2020 Challenge Task 2. We conduct an ablation study to indicate which steps of our method influence the results the most.
electrical engineering and systems science
A ballean $\mathcal{B}$ (or a coarse structure) on a set $X$ is a family of subsets of $X$ called balls (or entourages of the diagonal in $X\times X$) defined in such a way that $\mathcal{B}$ can be considered as the asymptotic counterpart of a uniform topological space. The aim of this paper is to study two concrete balleans defined by the ideals in the Boolean algebra of all subsets of $X$ and their hyperballeans, with particular emphasis on their connectedness structure, more specifically the number of their connected components.
mathematics
In this paper we present a new verification theorem for optimal stopping problems for Hunt processes. The approach is based on the Fukushima-Dynkin formula, and its advantage is that it allows us to verify that a given function is the value function without using the viscosity solution argument. Our verification theorem works in any dimension. We illustrate our results with some examples of optimal stopping of reflected diffusions and absorbed diffusions.
mathematics
In this paper, we propose several solutions to the committee selection problem among participants of a DAG distributed ledger. Our methods are based on a ledger-intrinsic reputation model that serves as a selection criterion. The main difficulty arises from the fact that the DAG ledger is a priori not totally ordered and that the participants need to reach a consensus on participants' reputations. Furthermore, we outline applications of the proposed protocols, including: (i) a self-contained decentralized random number beacon; (ii) selection of oracles in smart contracts; (iii) applications in consensus protocols and sharding solutions. We conclude with a discussion on the security and liveness of the proposed protocols by modeling reputation with a Zipf law.
computer science
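As a rough sketch of one ingredient from the abstract above, the snippet below assumes reputations follow a Zipf law and draws a committee by reputation-weighted sampling without replacement; the shared seed stands in for whatever common randomness the protocol provides (e.g. the decentralized beacon in (i)).

```python
import numpy as np

def zipf_reputation(n_participants, s=1.0):
    """Reputation of the rank-i participant proportional to 1 / i^s."""
    r = 1.0 / np.arange(1, n_participants + 1) ** s
    return r / r.sum()

def select_committee(reputation, k, seed):
    """Committee of size k, sampled without replacement, weighted by reputation."""
    rng = np.random.default_rng(seed)
    return rng.choice(len(reputation), size=k, replace=False, p=reputation)

rep = zipf_reputation(1000, s=1.1)
committee = select_committee(rep, k=30, seed=42)   # seed: shared randomness
print(sorted(committee))
```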
The Lagrangian that defines quantum chromodynamics (QCD), the strong interaction piece of the Standard Model, appears very simple. Nevertheless, it is responsible for an astonishing array of high-level phenomena with enormous apparent complexity, e.g., the existence, number and structure of atomic nuclei. The source of all these things can be traced to emergent mass, which might itself be QCD's self-stabilising mechanism. A background to this perspective is provided, presenting, inter alia, a discussion of the gluon mass and QCD's process-independent effective charge and highlighting an array of observable expressions of emergent mass, ranging from its manifestations in pion parton distributions to those in nucleon electromagnetic form factors.
high energy physics phenomenology
We investigate the transverse momentum dependent parton distributions (TMDs) in the quasi-parton-distribution framework. The long-standing hurdle of the so-called pinch pole singularity from the space-like gauge links in the TMD definitions can be resolved by the finite length of the gauge link along the hadron moving direction. In addition, with the soft factor subtraction, the quasi-TMD is free of linear divergence. We further demonstrate that the energy evolution equation of the quasi-TMD, a.k.a. the Collins-Soper evolution, depends only on the hadron momentum. This leads to a clear matching between the quasi-TMD and the standard TMDs.
high energy physics phenomenology
We compute the damping rate of a fermion propagating in a chiral plasma when there is an imbalance between the densities of left- and right-handed fermions, after generalizing the hard thermal loop resummation techniques for these systems. In the ultradegenerate limit, for very high energies the damping rate of this external fermion approaches a constant value. Closer to the two Fermi surfaces, however, we find that the rate depends on both the energy and the chirality of the fermion, being higher for the predominant chirality. This comes out as a result of its scattering with the particles of the plasma, mediated by the exchange of Landau-damped photons. In particular, we find that the chiral imbalance is responsible for a different propagation of the left- and right-circularly polarised transverse modes of the photon, and that a chiral fermion interacts differently with these two transverse modes. We argue that spontaneous radiation of energetic fermions is kinematically forbidden, and discuss the time regime where our computation is valid.
high energy physics phenomenology
We discuss the requirement of single-valuedness and periodicity of the eigenfunctions of the third component of the angular momentum operator. This condition, imposed on a non-observable, is often used to derive that the eigenvalues of angular momentum can only be integers. We reexamine the arguments based on this requirement and on an alternative condition imposed by Pauli, and show that they do not follow from first principles; therefore these constraints can be dropped. Consequently, we arrive at the same conclusion as in [1]: there exist regular, normalizable eigenfunctions with non-integer eigenvalues, so a non-integer angular momentum is perfectly admissible from the theoretical viewpoint. The issue of the nature of the eigenvalues forming the spectrum of the angular momentum remains open. What can be derived from first principles is that to a fixed value of the angular momentum L corresponds a discrete spectrum of eigenvalues of the third component of the angular momentum, m, defined by the relation |m| = L - k, k = 0, 1, ..., [L], where [L] is the integer part of L. As a mathematical byproduct of our analysis of the eigenfunctions, we present an alternative definition of a power of a complex number, allowing us to retain the initial translational invariance of the base.
physics
An extra CP-violating source for electroweak baryogenesis can appear dynamically at finite temperature in the complex two-Higgs-doublet model, which might help to alleviate the strong constraints from electric dipole moment experiments. In this scenario, we study the detailed phase transition dynamics and the corresponding gravitational wave signals in synergy with collider signals at future lepton colliders. For some parameter spaces, various phase transition patterns can occur, such as multi-step phase transitions and supercooling. Gravitational waves, complementary to collider signals, can help to pin down the underlying phase transition dynamics and distinguish different phase transition patterns.
high energy physics phenomenology
Phylodynamics is an area of population genetics that uses genetic sequence data to estimate past population dynamics. Modern state-of-the-art Bayesian nonparametric methods for recovering population size trajectories of unknown form use either change-point models or Gaussian process priors. Change-point models suffer from computational issues when the number of change-points is unknown and needs to be estimated. Gaussian process-based methods lack local adaptivity and cannot accurately recover trajectories that exhibit features such as abrupt changes in trend or varying levels of smoothness. We propose a novel, locally-adaptive approach to Bayesian nonparametric phylodynamic inference that has the flexibility to accommodate a large class of functional behaviors. Local adaptivity results from modeling the log-transformed effective population size a priori as a horseshoe Markov random field, a recently proposed statistical model that blends together the best properties of the change-point and Gaussian process modeling paradigms. We use simulated data to assess model performance, and find that our proposed method results in reduced bias and increased precision when compared to contemporary methods. We also use our models to reconstruct past changes in genetic diversity of human hepatitis C virus in Egypt and to estimate population size changes of ancient and modern steppe bison. These analyses show that our new method captures features of the population size trajectories that were missed by the state-of-the-art methods.
statistics
Measurements on cluster states can be used to process quantum information. But errors in cluster states naturally accrue as error-prone inter-particle interactions entangle qubits. We consider one-dimensional cluster states built from controlled phase, Ising, and XY interactions with slow two-qubit error in the interaction strength, consistent with error models of interactions found in a variety of qubit architectures. We focus on measurement protocols designed to implement perfect teleportation wherein quantum information moves across a cluster state intact. Deviations from perfect teleportation offer a proxy for entanglement that can be degraded by two-qubit gate errors. We detail an experimentally viable teleportation fidelity that offers a measure of the impact of error on the cluster state as a whole. Our fidelity calculations show that the error has a distinctly different impact depending on the underlying interaction used for the two-qubit entangling gate. In particular, the Ising and XY interactions can allow perfect teleportation through the cluster state even with large errors, but the controlled phase interaction does not. Nonetheless, we find that teleportation through cluster state chains of size $N$ has a maximum two-qubit error for teleportation along a quantum channel that decreases as $N^{-1/2}$. To allow construction of larger cluster states, we also design lowest-order refocusing pulses for correcting slow errors in the interaction strength. Our work generalizes to higher-dimensional cluster states and sets the stage for experiments to monitor the growth of entanglement in cluster states built from error-prone interactions.
quantum physics
We investigate general features of the evolution of holographic subregion complexity (HSC) in a Vaidya-AdS metric of general form. The spacetime is dual to a sudden quench process in a quantum system, and HSC is a measure of the ``difference'' between two mixed states. Based on the subregion CV (Complexity equals Volume) conjecture and in the large size limit, we extract three distinct stages during the evolution of HSC: a stage of linear growth at early times, a stage of linear growth with a slightly smaller rate at intermediate times, and a stage of linear decrease at late times. The growth rates of the first two stages are compared with the Lloyd bound. We find that with some choices of certain parameters, the Lloyd bound is always saturated at early times, while at the intermediate stage the growth rate is always less than the Lloyd bound. Moreover, the behaviors of the CV conjecture and its subregion version in Vaidya spacetime imply that the two differ even in the large size limit.
high energy physics theory
We compute the sheaf homology of the intersection lattice of a hyperplane arrangement with coefficients in the graded exterior sheaf of the natural sheaf. This builds on the results of our previous paper, where this homology was computed for the natural sheaf, itself a generalisation of an old result of Lusztig. The computational machinery we develop in this paper is quite different though: sheaf homology is lifted to what we call Boolean covers, where we instead compute homology cellularly. A number of tools are given for the cellular homology of these Boolean covers, including a deletion-restriction long exact sequence.
mathematics
We dare to make use of a possible analogy between neurons in a brain and people in society, asking ourselves whether individual intelligence is necessary in order for collective wisdom to emerge and, most importantly, what sort of individual intelligence is conducive to greater collective wisdom. We review insights and findings from connectionism, agent-based modeling, group psychology, economics and physics, casting them in terms of the changing structure of the system's Lyapunov function. Finally, we apply these insights to the sorts and degrees of intelligence of prey and predators in the Lotka-Volterra model, explaining why certain individual understandings lead to the co-existence of the two species whereas other uses of their individual intelligence cause global extinction.
electrical engineering and systems science
Hydropower plants are one of the most convenient options for power generation: they generate energy from a renewable source, they have relatively low operating and maintenance costs, and they may be used to provide ancillary services by exploiting the large reservoirs of available water. The recent advances in Information and Communication Technologies (ICT) and in machine learning methodologies are seen as fundamental enablers for upgrading and modernizing the current operation of most hydropower plants, in terms of condition monitoring, early diagnostics and eventually predictive maintenance. While very few works, or running technologies, have been documented so far for the hydro case, in this paper we propose a novel Key Performance Indicator (KPI) that we have recently developed and tested on operating hydropower plants. In particular, we show that after more than one year of operation it has been able to identify several faults and to support the operation and maintenance tasks of plant operators. We also show that the proposed KPI outperforms conventional multivariable process control charts, such as the Hotelling $T^2$ index.
electrical engineering and systems science
The Fermilab E989 experiment has recently reported its result for the muon anomalous magnetic moment ($g-2$). Combined with the E821 result, the discrepancy with the Standard Model (SM) reaches $4.2\sigma$, which may indicate light new physics related to the electroweak interactions. On the other hand, the observed Galactic Center GeV Excess (GCE) and the anti-proton excess can also be explained by light weakly interacting massive particle dark matter. In this work, we attempt to pin down a common origin of these anomalies in the Next-to-Minimal Supersymmetric Standard Model. By considering various constraints, we find that the electroweakinos and sleptons have to be lighter than about 1 TeV, and the geometric mean of their masses needs to be less than about 375 GeV, to interpret the muon $g-2$ within the $2\sigma$ range. In order to accommodate both the muon $g-2$ and the GCE, a bino-like neutralino DM is needed and has to annihilate resonantly through the $Z$ boson or a Higgs boson to give the correct relic density. Besides, the DM annihilation cross section for the GCE can be achieved via a singlet-like Higgs boson in the $s$-channel. If the anti-proton excess is explained together, only the Higgs funnel is feasible. We point out that the parameter space favored to explain all these anomalies can be probed by future direct detection experiments.
high energy physics phenomenology
In a recent paper we have shown how to optimally compute the differential and cumulative cross sections for massive event-shapes at $\mathcal{O}(\alpha_s)$ in full QCD. In the present article we complete our study by obtaining resummed expressions for non-recoil-sensitive observables to N$^2$LL + $\mathcal{O}(\alpha_s)$ precision. Our results can be used for thrust, heavy jet mass and C-parameter distributions in any massive scheme, and are easily generalized to angularities and other event shapes. We show that the so-called E- and P-schemes coincide in the collinear limit, and compute the missing pieces to achieve this level of accuracy: the P-scheme massive jet function in Soft-Collinear Effective Theory (SCET) and boosted Heavy Quark Effective Theory (bHQET). The resummed expression is subsequently matched into fixed-order QCD to extend its validity towards the tail and far-tail of the distribution. The computation of the jet function cannot be cast as the discontinuity of a forward-scattering matrix element, and involves phase space integrals in $d=4-2\varepsilon$ dimensions. We show how to analytically solve the renormalization group equation for the P-scheme SCET jet function, which is significantly more complicated than its 2-jettiness counterpart, and derive rapidly-convergent expansions in various kinematic regimes. Finally, we perform a numerical study to pin down when mass effects become more relevant.
high energy physics phenomenology
Automatic fact-checking systems detect misinformation, such as fake news, by (i) selecting check-worthy sentences for fact-checking, (ii) gathering related information to the sentences, and (iii) inferring the factuality of the sentences. Most prior research on (i) uses hand-crafted features to select check-worthy sentences, and does not explicitly account for the recent finding that the top weighted terms in both check-worthy and non-check-worthy sentences are actually overlapping [15]. Motivated by this, we present a neural check-worthiness sentence ranking model that represents each word in a sentence by \textit{both} its embedding (aiming to capture its semantics) and its syntactic dependencies (aiming to capture its role in modifying the semantics of other terms in the sentence). Our model is an end-to-end trainable neural network for check-worthiness ranking, which is trained on large amounts of unlabelled data through weak supervision. Thorough experimental evaluation against state of the art baselines, with and without weak supervision, shows our model to be superior at all times (+13% in MAP and +28% at various Precision cut-offs from the best baseline with statistical significance). Empirical analysis of the use of weak supervision, word embedding pretraining on domain-specific data, and the use of syntactic dependencies of our model reveals that check-worthy sentences contain notably more identical syntactic dependencies than non-check-worthy sentences.
computer science
Ion transport through nanopores permeates through many areas of science and technology, from cell behavior to sensing and separation to catalysis and batteries. Two-dimensional materials, such as graphene, molybdenum disulfide (MoS$_2$), and hexagonal boron nitride (hBN), are recent additions to these fields. Low-dimensional materials present new opportunities to develop filtration, sensing, and power technologies, encompassing ion exclusion membranes, DNA sequencing, single molecule detection, osmotic power generation, and beyond. Moreover, the physics of ionic transport through pores and constrictions within these materials is a distinct realm of competing many-particle interactions (e.g., solvation/dehydration, electrostatic blockade, hydrogen bond dynamics) and confinement. This opens up alternative routes to creating biomimetic pores and may even give analogues of quantum phenomena, such as quantized conductance, in the classical domain. These prospects make membranes of 2D materials -- i.e., 2D membranes -- fascinating. We will discuss the physics and applications of ionic transport through nanopores in 2D membranes.
condensed matter
The dynamics of a traditional toy, the yoyo, is investigated theoretically and experimentally using a smartphone's sensors. In particular, the angular velocity is measured using the gyroscope. The experimental results are complemented by a digital video analysis. The concordance between theoretical and experimental results is discussed. As the yoyo is a ubiquitous, simple and traditional toy, this proposal could encourage students to experiment with everyday objects and modern technologies.
physics
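For reference, the textbook model that the gyroscope measurement tests: for an ideal yoyo of mass $m$, axle radius $r$ and moment of inertia $I$ unrolling without slipping ($v = \omega r$), Newton's second law combined with the torque equation gives the descent acceleration $a = \frac{g}{1 + I/(m r^{2})}$, so the measured angular velocity should grow linearly in time, $\omega(t) = \frac{a\,t}{r}$.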
The quantum vacuum has long been known to be characterized by field correlations between spacetime points. These correlations can be swapped with a pair of particle detectors, modelled as simple two-level quantum systems (Unruh-DeWitt detectors) via a process known as entanglement harvesting. We study this phenomenon in the presence of a rotating BTZ black hole, and find that rotation can significantly amplify the harvested vacuum entanglement. Concurrence between co-rotating detectors is amplified by as much as an order of magnitude at intermediate distances from the black hole relative to that at large distances. The effect is most pronounced for near-extremal small mass black holes, and allows for harvesting at large spacelike detector separations. We also find that the entanglement shadow -- a region near the black hole from which entanglement cannot be extracted -- is diminished in size as the black hole's angular momentum increases.
high energy physics theory
Quantum walks are powerful tools for quantum applications and for designing topological systems. Although they are simulated in a variety of platforms, genuine two-dimensional realizations are still challenging. Here we present an innovative approach to the photonic simulation of a quantum walk in two dimensions, where walker positions are encoded in the transverse wavevector components of a single light beam. The desired dynamics is obtained by means of a sequence of liquid-crystal devices, which apply polarization-dependent transverse "kicks" to the photons in the beam. We engineer our quantum walk so that it realizes a periodically-driven Chern insulator, and we probe its topological features by detecting the anomalous displacement of the photonic wavepacket under the effect of a constant force. Our compact, versatile platform offers exciting prospects for the photonic simulation of two-dimensional quantum dynamics and topological systems.
quantum physics
We present a novel next-to-next-to-leading order (NNLO) QCD calculation matched to parton shower for the production of a pair of $Z$ bosons decaying to four massless leptons, $p p \to \ell^+ \ell^- \ell'^+ \ell'^- + X$, at the LHC. Spin correlations, interferences and off-shell effects are included throughout. Our result is based on the resummed beam-thrust spectrum, which we evaluate at next-to-next-to-leading-logarithmic (NNLL$'_{\mathcal{T}_0}$) accuracy for the first time for this process, and makes use of the GENEVA Monte Carlo framework for the matching to PYTHIA8 shower and hadronisation models. We compare our predictions with data from the ATLAS and CMS experiments at 13 TeV, finding a good agreement.
high energy physics phenomenology
A data center (DC) contains both IT devices and facility equipment, and the operation of a DC requires a high-quality monitoring (anomaly detection) system. There are many sensors in computer rooms for the DC monitoring system, and they are inherently related. This work proposes a data-driven pipeline (ts2graph) to build a DC graph of things (sensor graph) from the time series measurements of the sensors. The sensor graph is an undirected weighted property graph, where sensors are the nodes, sensor features are the node properties, and sensor connections are the edges. The sensor node properties are defined by features that characterize the sensor events (behaviors), instead of the original time series. The sensor connection (edge weight) is defined by the probability of concurrent events between two sensors. A graph of things prototype is constructed from the sensor time series of a real data center, and it successfully reveals meaningful relationships between the sensors. To demonstrate the use of the DC sensor graph for anomaly detection, we compare the performance of a graph neural network (GNN) and existing standard methods on synthetic anomaly data. The GNN outperforms existing algorithms by a factor of 2 to 3 (in terms of precision and F1 score) because it takes into account the topological relationships between DC sensors. We expect that the DC sensor graph can serve as the infrastructure for the DC monitoring system, since it represents the sensor relationships.
computer science
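A toy version of the ts2graph construction described above, assuming a simple z-score event detector (the paper's event features are richer) and estimating concurrence as the Jaccard index of the event masks; the sensor names are hypothetical stand-ins for real DC telemetry channels.

```python
import itertools
import numpy as np
import networkx as nx

def events(series, z=3.0):
    """Boolean per-timestep event mask from a crude z-score threshold."""
    s = (series - series.mean()) / (series.std() + 1e-9)
    return np.abs(s) > z

def build_sensor_graph(measurements: dict) -> nx.Graph:
    g = nx.Graph()
    ev = {name: events(ts) for name, ts in measurements.items()}
    for name, e in ev.items():
        g.add_node(name, event_rate=float(e.mean()))      # node property
    for a, b in itertools.combinations(ev, 2):
        both, either = ev[a] & ev[b], ev[a] | ev[b]
        if either.any() and both.any():
            # Edge weight: estimated probability of concurrent events.
            g.add_edge(a, b, weight=float(both.sum() / either.sum()))
    return g

rng = np.random.default_rng(0)
base = rng.normal(size=10_000)
data = {"chiller_temp": base + rng.normal(scale=0.1, size=10_000),
        "rack_inlet_temp": base + rng.normal(scale=0.1, size=10_000),
        "ups_load": rng.normal(size=10_000)}
print(build_sensor_graph(data).edges(data=True))   # correlated pair connected
```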
We analyse predictions of future recruitment to a multi-centre clinical trial based on a maximum-likelihood fitting of a commonly used hierarchical Poisson-Gamma model for recruitments at individual centres. We consider the asymptotic accuracy of quantile predictions in the limit as the number of recruitment centres grows large and find that, in an important sense, the accuracy of the quantiles does not improve as the number of centres increases. When predicting the number of further recruits in an additional time period, the accuracy degrades as the ratio of the additional time to the census time increases, whereas when predicting the amount of additional time to recruit a further $n^+_\bullet$ patients, the accuracy degrades as the ratio of $n^+_\bullet$ to the number recruited up to the census period increases. Our analysis suggests an improved quantile predictor. Simulation studies verify that the predicted pattern holds for typical recruitment scenarios in clinical trials and verify the much improved coverage properties of prediction intervals obtained from our quantile predictor. In the process of extending the applicability of our methodology, we show that in terms of the accuracy of all integer moments it is always better to approximate the sum of independent gamma random variables by a single gamma random variable matched on the first two moments than by the moment-matched Gaussian available from the central limit theorem.
statistics
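The closing claim about gamma moment-matching is easy to check numerically: a sum of independent $\mathrm{Gamma}(\alpha_i, \beta_i)$ (shape-rate) variables has mean $\sum_i \alpha_i/\beta_i$ and variance $\sum_i \alpha_i/\beta_i^2$, so the matched gamma has shape $\mathrm{mean}^2/\mathrm{var}$ and rate $\mathrm{mean}/\mathrm{var}$. A minimal sketch comparing its quantiles with the CLT Gaussian (parameter values are arbitrary):

```python
import numpy as np
from scipy import stats

shapes = np.array([2.0, 0.5, 7.0])
rates = np.array([1.0, 3.0, 0.8])

mean = np.sum(shapes / rates)
var = np.sum(shapes / rates**2)
shape_m, rate_m = mean**2 / var, mean / var   # gamma matched on two moments

rng = np.random.default_rng(3)
samples = sum(rng.gamma(a, 1 / b, size=200_000) for a, b in zip(shapes, rates))

q = [0.05, 0.5, 0.95]
print(np.quantile(samples, q))                          # Monte Carlo "truth"
print(stats.gamma.ppf(q, a=shape_m, scale=1/rate_m))    # matched gamma
print(stats.norm.ppf(q, loc=mean, scale=np.sqrt(var)))  # CLT Gaussian
```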
Crystallization often proceeds through successive stages that lead to a gradual increase in organization. Using molecular simulation, we determine the nucleation pathway for solid solutions of copper and gold. We identify a new nucleation mechanism (liquid$\to$$L1_2$~precursor$\to$solid solution), involving a chemically ordered intermediate that is more organized than the end product. This nucleation pathway arises from the low formation energy of $L1_2$ clusters which, in turn, promote crystal nucleation. We also show that this mechanism is composition-dependent since the high formation energy of other ordered phases precludes them from acting as precursors.
condensed matter
We consider the Euler--Darboux equation with parameters of absolute value 1/2 and its generalization to the three-dimensional analogue. Since the Cauchy problem in its classical formulation is ill-posed for such parameter values, the authors propose formulations and solutions of modified Cauchy-type problems with the parameter values: a) $\alpha=\beta=\displaystyle\frac{1}{2}$, b) $\alpha=-\,\displaystyle\frac{1}{2}$, $\beta=+\,\displaystyle\frac{1}{2}$, c) $\alpha=\beta=-\,\displaystyle\frac{1}{2}$. The obtained result is used to formulate an analogue of the $\Delta_1$ problem in the first quadrant, with boundary conditions with displacement on the coordinate axes and non-standard conjugation conditions on the line $y=x$, where the coefficients of the equation are singular. The first of these conditions glues the normal derivatives of the desired solution; the second contains the limit values of a combination of the solution and its normal derivatives. The problem is reduced to a uniquely solvable system of integral equations.
mathematics
It is important to draw causal inference from observational studies, which, however, becomes challenging if the confounders have missing values. Generally, causal effects are not identifiable if the confounders are missing not at random. We propose a novel framework to nonparametrically identify causal effects with confounders subject to an outcome-independent missingness, that is, the missing data mechanism is independent of the outcome, given the treatment and possibly missing confounders. We then propose a nonparametric two-stage least squares estimator and a parametric estimator for causal effects.
statistics
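For orientation only: the classical linear two-stage least squares estimator, $\hat{\beta} = (X^{\top} P_Z X)^{-1} X^{\top} P_Z y$ with $P_Z = Z(Z^{\top}Z)^{-1}Z^{\top}$, shown here on synthetic data with an unobserved confounder. The paper's estimator is a nonparametric analogue adapted to confounders missing not at random; this sketch only fixes the two-stage structure.

```python
import numpy as np

def tsls(y, X, Z):
    """Linear 2SLS: regress X on instruments Z, then y on the fitted values."""
    Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)      # first stage
    return np.linalg.solve(Xhat.T @ X, Xhat.T @ y)    # second stage

rng = np.random.default_rng(0)
n = 50_000
z = rng.normal(size=n)                 # instrument
u = rng.normal(size=n)                 # unobserved confounder
x = z + u + rng.normal(size=n)         # treatment, confounded by u
y = 2.0 * x + u + rng.normal(size=n)   # outcome; true effect = 2

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
print(tsls(y, X, Z))                   # ~[0, 2]; naive OLS is biased upward
```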
The phase separation of the ferromagnetic (FM) and paramagnetic (PM) phases in the superconducting (SC) state of UCoGe at the FM critical region was investigated using $^{59}$Co nuclear quadrupole resonance (NQR) technique by taking advantage of its site-selective feature. The NQR measurements revealed that the first-order quantum phase transition occurs between the FM and the PM states. The nuclear spin-lattice relaxation rate $1/T_1$ exhibited a clear drop at the SC state in the PM phase, whereas it was not detected in the FM phase, which indicates that the superconductivity in the FM phase becomes weaker at the FM critical region due to the presence of the PM SC state. This result suggests that the SC condensation energy of the PM SC state is equal or larger than that of the FM SC state in this region. The pressure-temperature phase diagram of UCoGe was modified by taking the results from this study into account.
condensed matter
We introduce the Southern Stellar Stream Spectroscopy Survey (${S}^5$), an on-going program to map the kinematics and chemistry of stellar streams in the Southern Hemisphere. The initial focus of ${S}^5$ has been spectroscopic observations of recently identified streams within the footprint of the Dark Energy Survey (DES), with the eventual goal of surveying streams across the entire southern sky. Stellar streams are composed of material that has been tidally stripped from dwarf galaxies and globular clusters and hence are excellent dynamical probes of the gravitational potential of the Milky Way, as well as providing a detailed snapshot of its accretion history. Observing with the 3.9-m Anglo-Australian Telescope's 2-degree-Field fibre positioner and AAOmega spectrograph, and combining the precise photometry of DES DR1 with the superb proper motions from $Gaia$ DR2, allows us to conduct an efficient spectroscopic survey to map these stellar streams. So far ${S}^5$ has mapped 9 DES streams and 3 streams outside of DES; the former are the first spectroscopic observations of these recently discovered streams. In addition to the stream survey, we use spare fibres to undertake a Milky Way halo survey and a low-redshift galaxy survey. This paper presents an overview of the ${S}^5$ program, describing the scientific motivation for the survey, target selection, observation strategy, data reduction and survey validation. Finally, we describe early science results on stellar streams and Milky Way halo stars drawn from the survey. Updates on ${S}^5$, including future public data release, can be found at \url{http://s5collab.github.io}.
astrophysics
The identification and use of structure property relationships lies at the heart of the chemical sciences. Quantum mechanics forms the basis for the unbiased virtual exploration of chemical compound space (CCS), imposing substantial compute needs if chemical accuracy is to be reached. In order to accelerate predictions of quantum properties without compromising accuracy, our lab has been developing quantum machine learning (QML) based models which can be applied throughout CCS. Here, we briefly explain, review, and discuss the recently introduced operator formalism which substantially improves the data efficiency for QML models of common response properties.
physics
Plasmonic photocatalysis has facilitated rapid progress in enhancing photocatalytic efficiency under visible light irradiation. The scarcity of visible-light-responsive photocatalytic materials and low photocatalytic efficiency remain major challenges. Plasmonic metal-semiconductor heterostructures in which both the metal and the semiconductor are photosensitive are promising for light-harvesting catalysis, as both components can absorb solar light. The efficiency of photon capture can be further improved by structuring the catalyst as a photonic crystal. Here we report the synthesis of photonic crystal plasmonic photocatalysts using Au nanoparticle-functionalized inverse opal (IO) photonic crystals. A catalyst prepared using a visible-light-responsive semiconductor (V2O5) displayed over an order of magnitude increase in reaction rate under green light excitation ($\lambda$=532 nm) compared to no illumination. The superior performance of Au-V2O5 IO was attributed to the spectral overlap of the electronic band gap, the localized surface plasmon resonance (LSPR) and the incident light source. Comparing the photocatalytic performance of Au-V2O5 IO with a conventional Au-TiO2 IO catalyst, where the semiconductor band gap is in the UV, revealed that optimal photocatalytic activity is observed under different illumination conditions depending on the nature of the semiconductor. For the Au-TiO2 catalyst, coupling the LSPR to the excitation source at $\lambda$=532 nm was not as effective in enhancing photocatalytic activity as carrying out the reaction under broadband visible light. This is attributed to improved photon absorption in the visible enabled by the photonic band gap and by slow light in the photonic crystal, which together create this synergistic type of photocatalyst.
physics
In this paper, we introduce a method for computing rigorous local inclusions of solutions of Cauchy problems for nonlinear heat equations for complex time values. Using a solution map operator, we construct a simplified Newton operator and show that it has a unique fixed point. The fixed point together with its rigorous bounds provides the local inclusion of the solution of the Cauchy problem. The local inclusion technique is then applied iteratively to compute solutions over long time intervals. This technique is used to prove the existence of a branching singularity in the nonlinear heat equation. Finally, we introduce an approach based on the Lyapunov-Perron method to calculate part of a center-stable manifold and prove that an open set of solutions of the Cauchy problem converge to zero, hence yielding the global existence of the solutions in the complex plane of time.
mathematics
We consider a two-parameter family of Drinfeld twists generated from a simple Jordanian twist further twisted by 1-cochains. Twists from this family interpolate between two simple Jordanian twists. Relations between them are constructed and discussed. It is proved that there exists a one-parameter family of twists identical to a simple Jordanian twist. The twisted coalgebra, star product and coordinate realizations of the $\kappa$-Minkowski noncommutative spacetime are presented. Real forms of Jordanian deformations are also discussed. The method of similarity transformations is applied to the Poincar\'e-Weyl Hopf algebra, and two types of one-parameter families of dispersion relations are constructed. Mathematically equivalent deformations, which are related to nonlinear changes of symmetry generators and linked with similarity maps, may nevertheless lead to differences in the description of physical phenomena.
high energy physics theory
We study to what extent quantum algorithms can speed up solving convex optimization problems. Following the classical literature we assume access to a convex set via various oracles, and we examine the efficiency of reductions between the different oracles. In particular, we show how a separation oracle can be implemented using $\tilde{O}(1)$ quantum queries to a membership oracle, which is an exponential quantum speed-up over the $\Omega(n)$ membership queries that are needed classically. We show that a quantum computer can very efficiently compute an approximate subgradient of a convex Lipschitz function. Combining this with a simplification of recent classical work of Lee, Sidford, and Vempala gives our efficient separation oracle. This in turn implies, via a known algorithm, that $\tilde{O}(n)$ quantum queries to a membership oracle suffice to implement an optimization oracle (the best known classical upper bound on the number of membership queries is quadratic). We also prove several lower bounds: $\Omega(\sqrt{n})$ quantum separation (or membership) queries are needed for optimization if the algorithm knows an interior point of the convex set, and $\Omega(n)$ quantum separation queries are needed if it does not.
quantum physics
The relation between the Virasoro constraints and KP integrability (determinant formulas) for matrix models is a long-standing mystery. We elaborate on the claim that the situation improves when integrability is enhanced to superintegrability, i.e. to explicit formulas for Gaussian averages of characters. In this case, the Virasoro constraints are equivalent to simple recursive formulas, which have appropriate combinations of characters as their solutions. Moreover, one can easily separate the dependence on the size of the matrix and deduce superintegrability from the Virasoro constraints. We describe one way to do so for the Gaussian Hermitian matrix model. The result is a spectacularly elegant reformulation of the Virasoro constraints as identities for the Schur functions evaluated at appropriate loci in the space of time-variables.
high energy physics theory
Feedback-driven winds from star formation or active galactic nuclei might be a relevant channel for the abrupt quenching of star formation in massive galaxies. However, both observations and simulations support the idea that these processes co-evolve and self-regulate without conflict. Furthermore, evidence of disruptive events capable of fast quenching is rare, and constraints on their statistical prevalence are lacking. Here we present a massive starburst galaxy at z=1.4 which is ejecting $46 \pm 13$\% of its molecular gas mass at a startling rate of $\gtrsim 10,000$ M$_{\odot}{\rm yr}^{-1}$. A broad component that is red-shifted from the galaxy emission is detected in four (low- and high-J) CO and [CI] transitions and in the ionized phase, ensuring a robust estimate of the expelled gas mass. The implied statistics suggest that similar events are potentially a major star-formation quenching channel. However, our observations provide compelling evidence that this is not a feedback-driven wind, but rather material from a merger that has probably been tidally ejected. This finding challenges some literature studies in which the role of feedback-driven winds might be overstated.
astrophysics
We consider membership inference attacks, one of the main privacy issues in machine learning. These recently developed attacks have been proven successful in determining, with confidence better than a random guess, whether a given sample belongs to the dataset on which the attacked machine learning model was trained. Several approaches have been developed to mitigate this privacy leakage, but the performance trade-offs of these defensive mechanisms (i.e., the accuracy and utility of the defended machine learning model) are not yet well studied. We propose a novel approach of privacy leakage avoidance with switching ensembles (PASE), which protects against current membership inference attacks with a very small accuracy penalty, while requiring an acceptable increase in training and inference time. We test our PASE method, along with the current state-of-the-art PATE approach, on three calibration image datasets and analyze their trade-offs.
computer science
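The abstract does not spell out the PASE construction, so the following is only a minimal, hypothetical sketch of a switching-ensemble defence: sub-models are trained on disjoint shards, and each query is answered by the sub-model chosen by a stable hash of the input, so an attacker never consistently probes one fixed model. The class name, the sharding scheme and the hash-based switching rule are all illustrative assumptions, not the paper's algorithm.

```python
import hashlib

import numpy as np
from sklearn.ensemble import RandomForestClassifier

class SwitchingEnsemble:
    """Hypothetical switching-ensemble defence (illustrative only)."""

    def __init__(self, n_models=5, seed=0):
        self.n_models = n_models
        self.rng = np.random.default_rng(seed)
        self.models = []

    def fit(self, X, y):
        # Train each sub-model on its own disjoint shard of the data.
        shards = np.array_split(self.rng.permutation(len(X)), self.n_models)
        self.models = [
            RandomForestClassifier(n_estimators=50, random_state=0).fit(X[idx], y[idx])
            for idx in shards
        ]
        return self

    def predict(self, X):
        # Route each query to one sub-model via a stable hash of the input,
        # so no single model's confidence profile is exposed consistently.
        preds = np.empty(len(X), dtype=int)
        for i, x in enumerate(X):
            h = int.from_bytes(hashlib.sha1(x.tobytes()).digest()[:4], "little")
            preds[i] = self.models[h % self.n_models].predict(x[None, :])[0]
        return preds
```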
In this letter, we propose a joint resource allocation algorithm for an OFDM-based multi-user system assisted by an improved Decode-and-Forward (DF) relay. We aim to maximize the sum rate of the system by jointly optimizing subcarrier pairing, subcarrier pair-user assignment, and power allocation in such a single-DF-relay system. When the relay does not perform any transmission on some subcarriers in the second phase, we further allow the source to transmit new symbols on these inactive subcarriers. We effectively solve the formulated mixed integer programming problem using continuous relaxation and dual minimization methods. Numerical results verify the theoretical analysis and illustrate the remarkable gains resulting from the extra direct-link transmissions.
electrical engineering and systems science
Graph Convolutional Networks (GCNs) are powerful models for node representation learning tasks. However, the node representations in existing GCN models are usually generated by performing recursive neighborhood aggregation across multiple graph convolutional layers with certain sampling methods, which may lead to redundant feature mixing, needless information loss, and extensive computation. Therefore, in this paper, we propose a novel architecture named Non-Recursive Graph Convolutional Network (NRGCN) to improve both the training efficiency and the learning performance of GCNs in the context of node classification. Specifically, NRGCN represents different hops of neighbors for each node based on inner-layer aggregation and layer-independent sampling. In this way, each node can be directly represented by concatenating the information extracted independently from each hop of its neighbors, thereby avoiding recursive neighborhood expansion across layers. Moreover, the layer-independent sampling and aggregation can be precomputed before model training, so the training process can be accelerated considerably. Extensive experiments on benchmark datasets verify that our NRGCN outperforms state-of-the-art GCN models in terms of node classification performance and reliability.
computer science
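A minimal sketch of the non-recursive idea, assuming a symmetrically normalized adjacency as the propagation operator: hop-wise neighborhood features are precomputed once and concatenated, so no recursive expansion happens during training. The paper's inner-layer aggregation and layer-independent sampling are richer than this, and the helper names are illustrative.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def precompute_hop_features(A, X, n_hops=3):
    """Concatenate [X, SX, S^2 X, ...]; computed once, before any training."""
    S = normalized_adjacency(A)
    feats, H = [X], X
    for _ in range(n_hops):
        H = S @ H                      # one further hop of aggregation
        feats.append(H)
    return np.concatenate(feats, axis=1)
```

The concatenated features can then be fed to any plain classifier (e.g. logistic regression or an MLP), which is what makes the one-off precomputation pay off during training.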
We explore the Gravitational Waves (GW) phenomenology of a simple class of supergravity models that can explain and unify inflation and Primordial Black Holes (PBH) as Dark Matter (DM). Our (modified) supergravity models naturally lead to a two-field attractor-type double inflation, whose first stage is driven by the Starobinsky scalaron and whose second stage is driven by another scalar belonging to a supergravity multiplet. PBH formation in our supergravity models is efficient, compatible with all observational constraints, and predicts a stochastic GW background. We compute the PBH-induced GW power spectrum and show that the GW signals can be detected within the sensitivity curves of future space-based GW interferometers such as the LISA, DECIGO, TAIJI and TianQin projects, thus showing the predictive power of supergravity in GW physics and the compatibility of these models with upcoming experiments.
high energy physics theory
Motivated by the mode estimation problem of an unknown multivariate probability density function, we study the problem of identifying the point with the minimum k-th nearest neighbor distance in a given dataset of n points. We study the case where the pairwise distances are a priori unknown, but we have access to an oracle which we can query to get noisy information about the distance between any pair of points. For two natural oracle models, we design a sequential learning algorithm, based on the idea of confidence intervals, which adaptively decides which queries to send to the oracle and is able to correctly solve the problem with high probability. We derive instance-dependent upper bounds on the query complexity of our proposed scheme and also demonstrate significant improvement over the performance of other baselines via extensive numerical evaluations.
statistics
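A generic sketch of this style of adaptive querying under an assumed Gaussian noise model (a bandit-flavoured illustration, not the paper's algorithm; `D_true` and `sigma` stand in for the unknown distances and the oracle's noise): distance estimates are refined where they matter most, around the current best candidate.

```python
import numpy as np

def estimate_min_knn_point(D_true, k, sigma=0.1, budget=5000, seed=0):
    """Return the index of the point whose estimated k-th nearest-neighbour
    distance is smallest, from adaptively repeated noisy distance queries.
    Assumes D_true is symmetric with a zero diagonal."""
    n = D_true.shape[0]
    rng = np.random.default_rng(seed)
    counts = np.ones((n, n))                          # one initial query per pair
    sums = D_true + sigma * rng.standard_normal((n, n))
    sums = (sums + sums.T) / 2                        # symmetrize the noisy pass
    np.fill_diagonal(sums, 0.0)
    for _ in range(budget):
        means = sums / counts
        np.fill_diagonal(means, np.inf)               # exclude self-distances
        knn = np.sort(means, axis=1)[:, k - 1]        # k-th NN estimate per point
        best = int(np.argmin(knn))                    # current candidate
        # Refine the least-sampled distance involving the candidate.
        c = counts[best].copy(); c[best] = np.inf
        j = int(np.argmin(c))
        d = D_true[best, j] + sigma * rng.standard_normal()
        sums[best, j] += d; sums[j, best] += d
        counts[best, j] += 1; counts[j, best] += 1
    means = sums / counts
    np.fill_diagonal(means, np.inf)
    return int(np.argmin(np.sort(means, axis=1)[:, k - 1]))
```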
We estimate the axion properties, i.e. its mass, topological susceptibility and self-coupling, within the framework of the Polyakov loop enhanced Nambu-Jona-Lasinio (PNJL) model at finite temperature and quark chemical potential. The PNJL model, in which quarks couple simultaneously to the chiral condensate and to a background temporal quantum chromodynamics (QCD) gauge field, includes two important features of the QCD phase transition, i.e. deconfinement and chiral symmetry restoration. The Polyakov loop in the PNJL model plays an important role near the critical temperature. We show significant differences between the axion properties calculated in the PNJL model and those obtained using the Nambu-Jona-Lasinio (NJL) model. We find that both the mass of the axion and its self-coupling are correlated with the chiral transition as well as with the confinement-deconfinement transition. We also estimate the axion properties at finite chemical potential. Across the QCD transition temperature and/or quark chemical potential, the axion mass and its self-coupling change significantly. Since the PNJL model includes both the fermionic sector and the gauge fields, it can give reliable estimates of the axion properties, i.e. its mass and self-coupling, in a hot and dense QCD medium. We also compare our results with lattice QCD results whenever available.
high energy physics phenomenology
Quantum effects such as the environment-assisted quantum transport (ENAQT) displayed in the photosynthetic Fenna-Matthews-Olson (FMO) complex have been simulated on analog quantum simulators. Digital quantum simulations offer greater universality and flexibility over analog simulations. However, digital quantum simulations of open quantum systems face a theoretical challenge: one does not know the solutions of the continuous-time master equation for developing quantum gate operators. We give a theoretical framework for digital quantum simulation of ENAQT by introducing new quantum evolution operators. We develop the dynamical equation for the operators and prove that it is an analytical solution of the master equation. As an example, using the dynamical equations, we simulate the FMO complex in the digital setting, reproducing theoretical and experimental evidence of the dynamics. The framework gives an optimal method for quantum circuit implementation, giving a logarithmic reduction in complexity over known methods. The generic framework can be extrapolated to study other open quantum systems.
quantum physics
Over-parametrization is an important technique in training neural networks. In both theory and practice, training a larger network allows the optimization algorithm to avoid bad local optimal solutions. In this paper we study a closely related tensor decomposition problem: given an $l$-th order tensor in $(R^d)^{\otimes l}$ of rank $r$ (where $r\ll d$), can variants of gradient descent find a rank $m$ decomposition where $m > r$? We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least $m = \Omega(d^{l-1})$, while a variant of gradient descent can find an approximate tensor when $m = O^*(r^{2.5l}\log d)$. Our results show that gradient descent on over-parametrized objective could go beyond the lazy training regime and utilize certain low-rank structure in the data.
statistics
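A minimal sketch of the over-parametrized setting described above, assuming a symmetric third-order tensor ($l=3$) and Adam on the squared reconstruction error as a stand-in for the paper's variant of gradient descent; the small initialization keeps the run away from the lazy regime that the lower bound concerns.

```python
import torch

torch.manual_seed(0)
d, r, m = 20, 3, 30                                  # m > r: over-parametrized

A_true = torch.randn(d, r)
T = torch.einsum('ia,ja,ka->ijk', A_true, A_true, A_true)  # symmetric rank-r tensor

A = (0.01 * torch.randn(d, m)).requires_grad_()      # small ("non-lazy") init
opt = torch.optim.Adam([A], lr=1e-2)

for step in range(5000):
    T_hat = torch.einsum('ia,ja,ka->ijk', A, A, A)   # rank-m symmetric CP model
    loss = (T_hat - T).pow(2).mean()                 # squared reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()

print(f'relative error: {(loss / T.pow(2).mean()).item():.2e}')
```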
We consider the problem of estimating the maximum posterior probability (MAP) state sequence for a hidden Markov model (HMM) with finite state and emission alphabets in the Bayesian setup, where both the emission and transition matrices have Dirichlet priors. We study a training set consisting of thousands of protein alignment pairs. The training data are used to set the prior hyperparameters for Bayesian MAP segmentation. Since the Viterbi algorithm is no longer applicable, there is no simple procedure for finding the MAP path, so several iterative algorithms are considered and compared. The main goal of the paper is to test the Bayesian setup against the frequentist one, where the parameters of the HMM are estimated using the training data.
statistics
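One plausible iterative scheme of the general kind such comparisons include (an illustrative assumption, not necessarily one of the paper's algorithms): alternate a Viterbi pass under the current parameter estimates with a posterior-mean parameter update from the Dirichlet priors plus the counts induced by the current path.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Standard Viterbi decoding in log space."""
    T, K = len(obs), log_A.shape[0]
    delta = np.zeros((T, K)); psi = np.zeros((T, K), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int); path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

def bayes_map_path(obs, K, M, alpha_A=1.0, alpha_B=1.0, n_iter=20, seed=0):
    """Illustrative iterative MAP path estimate with Dirichlet priors.
    K states, M emission symbols; the initial distribution is kept uniform."""
    rng = np.random.default_rng(seed)
    A = rng.dirichlet(np.full(K, alpha_A), size=K)    # random initial parameters
    B = rng.dirichlet(np.full(M, alpha_B), size=K)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        path = viterbi(obs, np.log(pi), np.log(A), np.log(B))
        # Posterior-mean update: Dirichlet prior counts + path-induced counts.
        cA = np.full((K, K), alpha_A); cB = np.full((K, M), alpha_B)
        for t in range(len(obs) - 1):
            cA[path[t], path[t + 1]] += 1
        for t, o in enumerate(obs):
            cB[path[t], o] += 1
        A = cA / cA.sum(axis=1, keepdims=True)
        B = cB / cB.sum(axis=1, keepdims=True)
    return path
```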
We present a data analysis methodology for a model-independent reconstruction of the spectral shape of a stochastic gravitational wave background with LISA. We improve a previously proposed reconstruction algorithm that relied on a single Time-Delay-Interferometry (TDI) channel by including a complete set of TDI channels. As in the earlier work, we assume an idealized equilateral configuration. We test the improved algorithm with a number of case studies, including reconstruction in the presence of two different astrophysical foreground signals. We find that including additional channels helps in different ways: it reduces the uncertainties on the reconstruction; it makes the global likelihood maximization less prone to falling into local extrema; and it efficiently breaks degeneracies between the signal and the instrumental noise.
astrophysics
With the purpose of holographically describing flows from a large family of four dimensional ${\cal N}=1$ and ${\cal N}=2$ conformal field theories, we discuss truncations of seven dimensional supergravity to five dimensions. We write explicitly the reduced gauged supergravity and find BPS equations for simple configurations. Lifting these flows to eleven dimensions or Massive IIA supergravity, we present string duals to RG flows from strongly coupled conformal theories when deformed by marginal and/or relevant operators. We further discuss observables common to infinite families of ${\cal N}=1$ and ${\cal N}=2$ QFTs in this context.
high energy physics theory
Mobile health (mHealth) applications (apps) have gained significant popularity over the last few years due to their tremendous benefits, such as lowering healthcare costs and increasing patient awareness. However, the sensitivity of healthcare data makes the security of mHealth apps a serious concern. In this review, we aim to identify and analyse the reported challenges that developers of mHealth apps face concerning security. Additionally, our study aims to develop a conceptual framework of the challenges that mHealth app development organizations face in developing secure apps. Knowledge of such challenges can help to reduce the risk of developing insecure mHealth apps. We followed the Systematic Literature Review method for this review, selecting studies published between January 2008 and October 2020. We selected 32 primary studies using predefined criteria and used the thematic analysis method to analyse the extracted data. We identified nine challenges that can affect the development of secure mHealth apps, such as (1) a lack of security guidelines and regulations for developing secure mHealth apps, (2) developers' lack of knowledge and expertise for secure mHealth app development, and (3) a lack of stakeholder involvement during mHealth app development. Based on our analysis, we present a conceptual framework which highlights the correlations between the identified challenges. We conclude that our findings can help development organizations identify their weaknesses and improve their security practices. Similarly, mHealth app developers can identify the challenges they face in developing mHealth apps that do not pose security risks for users. Our review is a step towards providing insights into the development of secure mHealth apps, and our proposed conceptual framework can act as a practice guideline for practitioners to enhance secure mHealth app development.
computer science
Since the historical experiments of Crookes, the direct manipulation of matter by light has been both a challenge and a source of scientific debate. Here we show that laser illumination makes it possible to displace a vial of nanoparticle solution over centimetre-scale distances. Cantilever-based force measurements show that the movement is due to millisecond-long force spikes, which are synchronised with a sound emission. We observe that the nanoparticles undergo negative thermophoresis, while ultrafast imaging reveals that the force spikes are followed by the explosive growth of a bubble in the solution. We propose a mechanism accounting for the propulsion based on a thermophoretic instability of the nanoparticle cloud, analogous to the Jeans instability that occurs in gravitational systems. Our experiments demonstrate a new type of laser propulsion and a remarkably violent actuation of soft matter, reminiscent of the strategy used by certain plants to propel their spores.
condensed matter
The proposed India-based Neutrino Observatory will host a 50 kton magnetized iron calorimeter (ICAL) with resistive plate chambers as its active detector element. Its primary focus is to study charged-current interactions of atmospheric muon neutrinos via the reconstruction of muons in the detector. We present the first study of the energy and direction reconstruction of the final-state lepton and hadrons produced in charged-current interactions of atmospheric electron neutrinos at ICAL, and of the sensitivity of these events to the neutrino oscillation parameters $\theta_{23}$ and $\Delta m_{32}^2$. However, the signatures of these events are similar to those from neutral-current interactions and from charged-current muon neutrino events in which the muon track is not reconstructed. On including the entire set of events that do not produce a muon track, we find that reasonably good sensitivity to $\theta_{23}$ is obtained, with a relative $1\sigma$ precision of 15% on the mixing parameter $\sin^2\theta_{23}$, which worsens to 21% when systematic uncertainties are considered.
physics
This pedagogical article is aimed at beginning graduate students interested in the broad field of frustrated magnetism. We introduce and present some of the exact results obtained for the Kitaev model. The Kitaev model embodies unusual two-spin interactions, yet it is an exactly solvable model in two dimensions. This exact solvability allows it to realize exactly many emergent many-body phenomena, such as a $Z_2$ gauge field, spin-liquid states, spin fractionalization and topological order. First we present the exact solution of the Kitaev model using Majorana fermionisation and elaborate in detail on the $Z_2$ gauge structure. Following this, we discuss the exact calculation of the magnetization and the spin-spin correlation function, establishing its spin-liquid character. Spin fractionalization and the deconfinement of Majorana fermions are explained in detail. The existence of long-range multi-spin correlation functions and topological degeneracy is discussed to elucidate the entangled and topological nature of any eigenstate. Elementary questions are provided in appropriate places to help the reader assimilate the technical details.
condensed matter
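For reference, the model discussed above is the standard Kitaev honeycomb Hamiltonian, quoted here in its textbook form:

$$H \;=\; -J_x \sum_{\langle jk\rangle \,\in\, x\text{-links}} \sigma_j^x \sigma_k^x \;-\; J_y \sum_{\langle jk\rangle \,\in\, y\text{-links}} \sigma_j^y \sigma_k^y \;-\; J_z \sum_{\langle jk\rangle \,\in\, z\text{-links}} \sigma_j^z \sigma_k^z .$$

In the Majorana representation $\sigma_j^\alpha = i b_j^\alpha c_j$, the link operators $\hat{u}_{jk} = i b_j^\alpha b_k^\alpha$ commute with $H$ and act as a static $Z_2$ gauge field, leaving the $c_j$ fermions free; this is the structure such pedagogical treatments unpack.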
In a multipass energy recovery linac (ERL), each cavity must regain all energy expended from beam acceleration during beam deceleration, and the beam should achieve specific energy targets during each loop that returns it to the linac. For full energy recovery, and for every returning beam to meet loop energy requirements, we must specify and maintain the phase and voltage of cavity fields in addition to selecting adequate flight times. These parameters are found with a full scale numerical optimization program. If we impose symmetry in time and energy during acceleration and deceleration, fewer parameters are needed, simplifying the optimization. As an example, we present symmetric models of the Cornell BNL ERL Test Accelerator (CBETA) with solutions that satisfy the optimization targets of loop energy and zero cavity loading. An identical cavity design and nearly uniform linac layout make CBETA a potential candidate for symmetric operation.
physics
Connected and automated vehicles (CAVs) have attracted more and more attention recently. Their fast actuation times give them the potential to improve the efficiency and safety of the whole transportation system. Due to technical challenges, only a proportion of vehicles can be equipped with automation, while the remaining vehicles stay human-driven. Instead of learning a reliable behavior for the ego automated vehicle alone, we focus on how to improve the outcomes of the total transportation system by allowing each automated vehicle to learn to cooperate with the others and to regulate human-driven traffic flow. One state-of-the-art approach is to use reinforcement learning to learn an intelligent decision-making policy. However, a direct reinforcement learning framework cannot improve the performance of the whole system. In this article, we demonstrate that considering the problem in a multi-agent setting with a shared policy can achieve better system performance than a non-shared policy in the single-agent setting. Furthermore, we find that applying an attention mechanism to interaction features can capture the interplay between agents and thereby boost cooperation. To the best of our knowledge, while previous automated driving studies have mainly focused on enhancing an individual vehicle's driving performance, this work serves as a starting point for research on system-level multi-agent cooperation performance using graph information sharing. We conduct extensive experiments in car-following and unsignalized intersection settings. The results demonstrate that CAVs controlled by our method achieve the best performance against several state-of-the-art baselines.
statistics
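A minimal sketch of a shared policy with attention over neighbouring agents' features, to make the idea concrete; the module sizes and the use of multi-head attention are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SharedAttentionPolicy(nn.Module):
    """Every agent evaluates the same network; attention weights capture the
    interplay between the ego vehicle and its neighbours."""

    def __init__(self, obs_dim, hidden=64, n_actions=3):
        super().__init__()
        self.embed = nn.Linear(obs_dim, hidden)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, ego_obs, neighbour_obs):
        # ego_obs: (B, obs_dim); neighbour_obs: (B, N, obs_dim)
        q = self.embed(ego_obs).unsqueeze(1)          # (B, 1, hidden) query
        kv = self.embed(neighbour_obs)                # (B, N, hidden) keys/values
        ctx, _ = self.attn(q, kv, kv)                 # attend over neighbours
        return self.head(ctx.squeeze(1))              # action logits

policy = SharedAttentionPolicy(obs_dim=8)
logits = policy(torch.randn(2, 8), torch.randn(2, 5, 8))
```

Because every agent runs the same network, the parameter count is independent of the number of agents, which is what a shared policy buys in the multi-agent setting.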
The area law for entanglement provides one of the most important connections between information theory and quantum many-body physics. It is not only related to the universality of quantum phases, but also to efficient numerical simulations in the ground state. Various numerical observations have led to a strong belief that the area law is true for every non-critical phase in short-range interacting systems. However, the area law for long-range interacting systems is still elusive, as the long-range interaction results in correlation patterns similar to those in critical phases. Here, we show that for generic non-critical one-dimensional ground states with locally bounded Hamiltonians, the area law robustly holds without any corrections, even under long-range interactions. Our result guarantees an efficient description of ground states by the matrix-product state in experimentally relevant long-range systems, which justifies the density-matrix renormalization algorithm.
quantum physics
We analyze the sensitivity of hadronic tau decays to non-standard interactions within the model-independent framework of the Standard Model Effective Field Theory (SMEFT). Both exclusive and inclusive decays are studied, using the latest lattice data and QCD dispersion relations. We show that there are enough theoretically clean channels to disentangle all the effective couplings contributing to these decays, with the $\tau \to \pi\pi\nu_\tau$ channel representing an unexpectedly powerful New Physics probe. We find that the ratios of non-standard couplings to the Fermi constant are bound at the sub-percent level. These bounds are complementary to the ones from electroweak precision observables and $p p \to \tau \nu_\tau$ measurements at the LHC. The combination of tau decay and LHC data puts tighter constraints on lepton universality violation in the gauge boson-lepton vertex corrections.
high energy physics phenomenology
This work examines the problem of using finite Gaussian mixtures (GM) probability density functions in recursive Bayesian peer-to-peer decentralized data fusion (DDF). It is shown that algorithms for both exact and approximate GM DDF lead to the same problem of finding a suitable GM approximation to a posterior fusion pdf resulting from the division of a `naive Bayes' fusion GM (representing direct combination of possibly dependent information sources) by another non-Gaussian pdf (representing removal of either the actual or estimated `common information' between the information sources). The resulting quotient pdf for general GM fusion is naturally a mixture pdf, although the fused mixands are non-Gaussian and are not analytically tractable for recursive Bayesian updates. Parallelizable importance sampling algorithms for both direct local approximation and indirect global approximation of the quotient mixture are developed to find tractable GM approximations to the non-Gaussian `sum of quotients' mixtures. Practical application examples for multi-platform static target search and maneuverable range-based target tracking demonstrate the higher fidelity of the resulting approximations compared to existing GM DDF techniques, as well as their favorable computational features.
electrical engineering and systems science
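A toy sketch of the core step described above, in one dimension: the quotient pdf $q(x) \propto p_{\mathrm{fused}}(x)/p_{\mathrm{common}}(x)$ is approximated by importance sampling from the naive-Bayes fusion GM, resampling by the weights $1/p_{\mathrm{common}}$, and refitting a tractable GM. The paper's direct local and indirect global algorithms are more sophisticated (and parallelizable); all densities below are made up for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def sample_gm(weights, means, covs, n):
    """Draw n samples from a Gaussian mixture."""
    comps = rng.choice(len(weights), size=n, p=weights)
    return np.array([rng.multivariate_normal(means[k], covs[k]) for k in comps])

# Toy 1-D setup: a two-component "naive Bayes" fusion GM and a single Gaussian
# standing in for the (generally non-Gaussian) common-information pdf.
w_f = np.array([0.6, 0.4])
mu_f = [np.array([0.0]), np.array([3.0])]
cov_f = [np.eye(1), 0.5 * np.eye(1)]
mu_c, cov_c = np.array([1.0]), 2.0 * np.eye(1)

X = sample_gm(w_f, mu_f, cov_f, 5000)            # proposal: the fusion GM itself
w = 1.0 / mvn.pdf(X, mean=mu_c, cov=cov_c)       # importance weights ~ 1/p_common
w /= w.sum()
X_res = X[rng.choice(len(X), size=len(X), p=w)]  # weighted resampling
gm_approx = GaussianMixture(n_components=2, random_state=0).fit(X_res)
```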
Skyrmion-containing devices have been proposed as a promising solution for low-energy data storage. These devices include racetrack or logic structures and require skyrmions to be confined in regions with dimensions comparable to the size of a single skyrmion. Here we examine Bloch skyrmions in FeGe device shapes using Lorentz transmission electron microscopy (LTEM) to reveal the consequences of skyrmion confinement in a device structure. Dumbbell-shaped devices were created by focused ion beam (FIB) milling to provide regions where single skyrmions are confined adjacent to areas containing a skyrmion lattice. Simple block shapes of equivalent dimensions were prepared within the specimen to allow a direct comparison with skyrmion formation in a less complex, yet still confined, device geometry. The impact of an applied external magnetic field and of varying temperature on skyrmion formation within the shapes was examined. This revealed that it is not just confinement within a small device structure that controls the position and number of skyrmions: a complex device geometry changes the skyrmion behaviour, including allowing the formation of skyrmions at lower applied magnetic fields than in simple shapes. This could allow experimental methods to be developed to control the positioning and number of skyrmions within device shapes.
condensed matter
A detailed investigation of the incommensurate magnetic ordering in a single crystal of multiferroic NdMn2O5 has been performed using both non-polarized and polarized neutron diffraction techniques. Below TN = 30.5 K, magnetic Bragg reflections corresponding to a non-chiral magnetic structure with propagation vector k1 = (0.5 0 kz1) occur. Below about 27 K, a new distorted magnetic modulation with a similar vector kz2 occurs, which is attributed to the magnetization of the Nd3+ ions by the Mn sub-lattice. A strong temperature hysteresis in the occurrence of the incommensurate magnetic phases in NdMn2O5 was observed, depending on the cooling or heating history of the sample. Below about 20 K the magnetic structure became chiral. From spherical neutron polarimetry measurements, the resulting low-temperature magnetic structure kz3 was approximated by a general elliptic helix. The parameters of the magnetic helix, such as its ellipticity and the orientation of the helical plane with respect to the crystal structure, were determined. A reorientation of the helix occurs at an intermediate temperature between 4 K and 18 K. A difference of about 0.2 between the populations of right- and left-handed chiral domains was observed in the as-grown crystal when cooling without an external electric field. The magnetic chiral ratio can be changed by the application of an external electric field of a few kV/cm, revealing strong magnetoelectric coupling. A linear dependence of the magnetic chirality on the applied electric field in NdMn2O5 was found. The results are discussed within the frame of the antisymmetric super-exchange model for the Dzyaloshinskii-Moriya interaction.
condensed matter
Predictive modeling based on genomic data has gained popularity in biomedical research and clinical practice by allowing researchers and clinicians to identify biomarkers and tailor treatment decisions more efficiently. Analysis incorporating pathway information can boost discovery power and better connect new findings with biological mechanisms. In this article, we propose a general framework, Pathway-based Kernel Boosting (PKB), which incorporates clinical information and prior knowledge about pathways for the prediction of binary, continuous and survival outcomes. We introduce appropriate loss functions and optimization procedures for the different outcome types. Our prediction algorithm incorporates pathway knowledge by constructing kernel function spaces from the pathways and using them as base learners in the boosting procedure. Through extensive simulations and case studies on drug response and cancer survival datasets, we demonstrate that PKB can substantially outperform competing methods, better identify biological pathways related to drug response and patient survival, and provide novel insights into cancer pathogenesis and treatment response.
statistics
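A minimal sketch of the boosting loop for the continuous-outcome (squared-error) case, assuming RBF kernel machines on each pathway's genes as the base learners; the paper's loss functions, kernel choices and selection rules are more general, and the function names here are illustrative.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def pkb_fit(X, y, pathways, n_rounds=50, nu=0.1, ridge=1.0):
    """Pathway-kernel boosting sketch for a continuous outcome.
    pathways: list of column-index arrays, one per pathway."""
    F = np.full(len(y), y.mean())                      # initial prediction
    learners = []
    for _ in range(n_rounds):
        resid = y - F                                  # gradient of squared loss
        best = None
        for p, idx in enumerate(pathways):
            # Base learner: kernel machine restricted to one pathway's genes.
            m = KernelRidge(kernel='rbf', alpha=ridge).fit(X[:, idx], resid)
            sse = np.sum((resid - m.predict(X[:, idx])) ** 2)
            if best is None or sse < best[0]:
                best = (sse, p, m)
        _, p, m = best
        F += nu * m.predict(X[:, pathways[p]])         # shrunken update
        learners.append((p, m))
    return F, learners
```

Prediction on new data follows by adding `nu * m.predict(...)` over the stored learners to the initial mean, so the selected pathways are directly readable from the fitted model.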
Promising searches for new physics beyond the current Standard Model (SM) of particle physics are feasible through isotope-shift spectroscopy, which is sensitive to a hypothetical fifth force between the neutrons of the nucleus and the electrons of the shell. Such an interaction would be mediated by a new particle which could in principle be associated with dark matter. In so-called King plots, the mass-scaled frequency shifts of two optical transitions are plotted against each other for a series of isotopes. Subtle deviations from the expected linearity could reveal such a fifth force. Here, we study experimentally and theoretically six transitions in highly charged ions of Ca, an element with five stable isotopes of zero nuclear spin. Some of the transitions are suitable for upcoming high-precision coherent laser spectroscopy and optical clocks. Our results provide a sufficient number of clock transitions for -- in combination with those of singly charged Ca$^+$ -- application of the generalized King plot method. This will allow future high-precision measurements to remove higher-order SM-related nonlinearities and open a new door to yet more sensitive searches for unknown forces and particles.
physics
We study the clustering of the model cyanobacterium \textit{Synechocystis} into microcolonies. The bacteria are allowed to diffuse onto surfaces of different hardness and to interact with one another through aggregation and detachment. We find that soft surfaces give rise to more microcolonies than hard ones. This effect is related to the degree of heterogeneity in the bacteria's dynamics, as given by the proportion of motile cells. A kinetic model that emphasizes specific interactions between cells, complemented by extensive numerical simulations considering various amounts of motility, describes the experimental results adequately. A high proportion of motile cells enhances dispersion rather than aggregation.
physics
Growing observational evidence confirms the existence of massive black holes ($M_{BH} \sim 10^9 M_{\odot}$), accreting at rates close to the Eddington limit, at very high redshifts ($z \gtrsim 6-7$) in the early Universe. Recent observations indicate that the host galaxies of the first quasars are chemically evolved systems, containing unexpectedly large amounts of dust. Such a combination of high luminosities and large dust content should form favourable physical conditions for radiative dusty feedback. We explore the impact of the active galactic nucleus (AGN) feedback, driven by radiation pressure on dust, on the early growth of massive black holes. Assuming Eddington-limited exponential black hole growth, we find that the dynamics and energetics of the radiation pressure-driven outflows also follow exponential trends at late times. We obtain modest outflow energetics (with momentum flux $\dot{p} \lesssim L/c$ and kinetic power $\dot{E}_{k} \lesssim 10^{-3} L$), comparable with available observations of quasar-driven outflows at very high redshifts, but significantly lower than typically observed in local quasars and predicted by wind energy-driven models. AGN radiative dusty feedback may thus play an important role in powering galactic outflows in the first quasars in the early Universe.
astrophysics
It is proved that the rank of an elliptic curve is one less than the arithmetic complexity of the corresponding non-commutative torus. As an illustration, we consider a family of elliptic curves with complex multiplication.
mathematics
We study the properties of the dark matter component of the radially anisotropic stellar population recently identified in the Gaia data, using magneto-hydrodynamical simulations of Milky Way-like halos from the Auriga project. We identify 10 simulated galaxies that approximately match the rotation curve and stellar mass of the Milky Way. Four of these have an anisotropic stellar population reminiscent of the Gaia structure. We find an anti-correlation between the dark matter mass fraction of this population in the Solar neighbourhood and its orbital anisotropy. We estimate the local dark matter density and velocity distribution for halos with and without the anisotropic stellar population, and use them to simulate the signals expected in future xenon and germanium direct detection experiments. We find that a generalized Maxwellian distribution fits the dark matter halo integrals of the Milky Way-like halos containing the radially anisotropic stellar population. For dark matter particle masses below approximately 10 GeV, direct detection exclusion limits for the simulated halos with the anisotropic stellar population show a mild shift towards smaller masses compared to the commonly adopted Standard Halo Model.
astrophysics
We analyze the performance of a quantum Otto cycle, employing a time-dependent harmonic oscillator as the working fluid, undergoing sudden expansion and compression strokes during the adiabatic stages, and coupled to a squeezed reservoir. First, we show that the maximum efficiency that our engine can achieve is only 1/2, in contrast with earlier studies claiming unit efficiency under the effect of a squeezed reservoir. Then, we obtain analytic expressions for the upper bound on the efficiency as well as on the coefficient of performance of the Otto cycle. The obtained bounds are independent of the parameters of the system and depend on the reservoir parameters only. Additionally, with a hot squeezed thermal bath, we obtain an analytic expression for the efficiency at maximum work which satisfies the derived upper bound. Further, in the presence of squeezing in the cold reservoir, we specify an operational regime for the Otto refrigerator otherwise forbidden in the standard case.
quantum physics
We analyze the problem of estimating past quantum states of a monitored system from a mathematical perspective, in order to ensure self-consistency with the principle of quantum non-demolition. Despite several claims of ``measuring noncommuting observables'' in the physics literature, we show that we are always measuring commuting processes. Our main interest is in the notion of quantum smoothing or retrodiction. In particular, we examine proposals to estimate the result of an external measurement made on an open quantum system during a period when it is also undergoing continuous monitoring. A full analysis shows that the non-demolition principle is not actually violated, so the estimation can be formulated as a well-posed statistical inference problem. We extend the formalism to consider multiple independent external measurements made on the system over the course of a continuous period of monitoring.
quantum physics
We define a new class of Bayesian point estimators, which we refer to as risk averse. Using this definition, we formulate axioms that provide natural requirements for inference, e.g. in a scientific setting, and show that for well-behaved estimation problems the axioms uniquely characterise an estimator. Namely, for estimation problems in which some parameter values have a positive posterior probability (such as, e.g., problems with a discrete hypothesis space), the axioms characterise Maximum A Posteriori (MAP) estimation, whereas elsewhere (such as in continuous estimation) they characterise the Wallace-Freeman estimator. Our results provide a novel justification for the Wallace-Freeman estimator, which previously was derived only as an approximation to the information-theoretic Strict Minimum Message Length estimator. By contrast, our derivation requires neither approximations nor coding.
statistics
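For concreteness, the two estimators characterised by the axioms have the following standard forms (quoted from the general literature, with $\pi$ the prior, $f$ the likelihood and $I(\theta)$ the Fisher information matrix):

$$\hat{\theta}_{\mathrm{MAP}} \;=\; \arg\max_{\theta}\; \pi(\theta)\, f(x \mid \theta), \qquad \hat{\theta}_{\mathrm{WF}} \;=\; \arg\max_{\theta}\; \frac{\pi(\theta)\, f(x \mid \theta)}{\sqrt{\det I(\theta)}} .$$

The $\sqrt{\det I(\theta)}$ factor is what makes the Wallace-Freeman estimator invariant under reparametrization, unlike MAP.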
A collection of quantum channels is called incompatible if they cannot be obtained as marginals of a single channel. The no-cloning theorem is the most prominent instance of the incompatibility of quantum channels. We show that every collection of incompatible channels can be more useful than compatible ones as preprocessings in a state discrimination task with multiple ensembles of states. This is done by showing that the robustness of channel incompatibility, a measure of the incompatibility of channels, exactly quantifies the maximum advantage in state discrimination. We also show that the incompatibility of a quantum measurement and a channel has a similar operational interpretation. Finally, we demonstrate that our result on channel incompatibility includes all other kinds of incompatibility as special cases.
quantum physics
The dielectric and magnetic polarizations of quantum paraelectric and paramagnetic materials have in many cases been found to initially increase with increasing thermal disorder and hence to exhibit peaks as a function of temperature. A quantitative description of these examples of 'order-by-disorder' phenomena has remained elusive in nearly ferromagnetic metals and in dielectrics on the border of displacive ferroelectric transitions. Here we present an experimental study of the evolution of the dielectric susceptibility peak as a function of pressure in the nearly ferroelectric material strontium titanate, which reveals that the peak position collapses towards absolute zero as the ferroelectric quantum critical point is approached. We show that this behaviour can be described in detail, without the use of adjustable parameters, in terms of the Larkin-Khmelnitskii-Shneerson-Rechester (LKSR) theory, first introduced nearly 50 years ago, of the hybridization of polar and acoustic modes in quantum paraelectrics, in contrast to alternative models that have been proposed. Our study allows us to construct for the first time a detailed temperature-pressure phase diagram of a material on the border of a ferroelectric quantum critical point, comprising ferroelectric, quantum critical paraelectric and hybridized polar-acoustic regimes. Furthermore, at the lowest temperatures, below the susceptibility maximum, we observe a new regime characterized by a linear temperature dependence of the inverse susceptibility that differs sharply from the quartic temperature dependence predicted by the LKSR theory. We find that this non-LKSR low-temperature regime cannot be accounted for by any detailed model reported in the literature, and its interpretation poses a new empirical and conceptual challenge.
condensed matter
In this paper, we introduce the concepts of a weaknorm and a quasi-weaknorm on real vector spaces. Using these concepts, we introduce the concept of quasi-locally convex topological vector spaces, which include locally convex topological vector spaces as special cases. Using the Fan-KKM theorem, we prove a fixed point theorem in quasi-locally convex topological vector spaces, which is a natural extension of the Tychonoff fixed point theorem in locally convex topological vector spaces. We then provide an example showing that this extension is a proper one.
mathematics
The strong coupling between two subsystems consisting of quantum emitters and photonic modes, at which level splitting of the mixed quantum states occurs, has been a central subject of quantum physics and nanophotonics due to its various important applications. The spectral Rabi splitting of photon emission or absorption has been adopted to characterize the strong coupling experimentally, under the equality assumption that it is identical to the level splitting. Here we reveal, for the first time, that this equality assumption is not valid. It is this invalidity that makes the strong coupling, as characterized by the spectral Rabi splitting, relative to and dependent on the measured subsystem, in a way that is highly correlated with the subsystems' dissipative decays. The strong coupling is easier to observe in the subsystem with the larger decay, and can be classified into pseudo-, dark-, middle-, and super-strong interaction regimes. We also suggest a prototype coupled plasmon-exciton system for possible future experimental observation of these novel predictions. Our work brings new fundamental insight into light-matter interaction in nanostructures, which will stimulate further research in this field.
physics
Text editors represent one of the fundamental tools that writers use - software developers, book authors, mathematicians. A text editor must work as intended in that it should allow its users to do their job. We start by introducing a small subset of a text editor - a line editor. Next, we will give a concrete definition (specification) of what a complete text editor means. Afterward, we will provide an implementation of a line editor in Coq, and then we will prove that it is a complete text editor.
computer science
In order to classify linearly non-separable data, neurons are typically organized into multi-layer neural networks that are equipped with at least one hidden layer. Inspired by some recent discoveries in neuroscience, we propose a new neuron model along with a novel activation function enabling the learning of non-linear decision boundaries using a single neuron. We show that a standard neuron followed by the novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy. Furthermore, we conduct experiments on three benchmark data sets from computer vision and natural language processing, i.e. Fashion-MNIST, UTKFace and MOROCO, showing that the ADA and leaky ADA functions provide superior results to Rectified Linear Units (ReLU) and leaky ReLU for various neural network architectures, e.g. one- and two-hidden-layer multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs) such as LeNet, VGG, ResNet and character-level CNN. We also obtain further improvements when we replace the standard neuron model with our pyramidal neuron with apical dendrite activations (PyNADA).
computer science
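The exact ADA formula is defined in the paper rather than in this summary, so the sketch below uses a generic non-monotonic activation (a Gaussian bump, an assumption purely for illustration) to show how a single neuron can separate XOR once the activation is non-monotonic.

```python
import numpy as np

# Illustrative non-monotonic activation; NOT the paper's ADA function.
phi = lambda z: np.exp(-z ** 2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                 # XOR targets

w, b = np.array([1.0, 1.0]), -1.0          # hand-chosen single-neuron weights
out = phi(X @ w + b)                        # pre-activations -1, 0, 0, 1
pred = (out > 0.5).astype(int)
assert (pred == y).all()                    # one neuron, 100% on XOR
```

The point is that a non-monotonic activation lets one hyperplane's pre-activation map both XOR-negative corners (pre-activations -1 and +1) to the same low response, something no monotonic activation can do with a single neuron.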