text | label
---|---|
A series of Mn$_x$Zn$_{1-x}$O (x = 0.03, 0.05) nanostructures has been grown via the solution-based chemical spray pyrolysis technique. Electron-beam-induced modifications of the structural, linear and nonlinear optical, and surface morphological properties have been studied and elaborated. GXRD (glancing angle X-ray diffraction) patterns show sharp diffraction peaks matching the hexagonal wurtzite structure of ZnO thin films. Increasing the electron-beam dosage shifted the (101) and (002) XRD peaks towards lower angles and increased the FWHM values. Gaussian deconvolution of the PL spectra reveals the quenching of defect centers, implying that electron beam irradiation regulates luminescence and defect centers in the nanostructures. Irradiation-induced spatial confinement and phonon localization effects have been observed in the films via micro-Raman studies; the latter are evident from spectral peak shifts and broadening. Detailed investigations of the effect of electron beam irradiation on the third-order nonlinear optical properties under both continuous-wave and pulsed laser operation are presented. The third-order absorptive nonlinearity of the nanostructures, evaluated using the open-aperture Z-scan technique in both continuous and pulsed laser regimes, shows a strong nonlinear absorption coefficient $\beta_{\rm eff}$ of the order of $10^{-4}$ cm/W, confirming their suitability for passive optical limiting applications in intense radiation environments. Laser-induced third harmonic generation (LITHG) experiments support the significant variation in nonlinearities upon electron beam irradiation, an effect that can be utilized for frequency conversion in high-power laser sources and UV light emitters.
|
physics
|
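A minimal sketch of how an effective nonlinear absorption coefficient of the kind reported above ($\beta_{\rm eff} \sim 10^{-4}$ cm/W) can be extracted from an open-aperture Z-scan trace, using the standard weak-absorption model $T(z) = 1 - q_0/[2\sqrt{2}\,(1 + z^2/z_0^2)]$ with $q_0 = \beta_{\rm eff} I_0 L_{\rm eff}$. The intensity, thickness, and synthetic data below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

I0 = 1.0e6       # peak on-axis intensity at focus, W/cm^2 (assumed)
L_eff = 1.0e-4   # effective sample thickness, cm (assumed)

def transmittance(z, beta_eff, z0):
    q0 = beta_eff * I0 * L_eff                  # peak two-photon absorbance
    return 1.0 - q0 / (2.0 * np.sqrt(2.0) * (1.0 + (z / z0) ** 2))

z = np.linspace(-20, 20, 81)                    # sample position, mm
true = transmittance(z, 2.0e-4, 4.0)            # stand-in for a measured trace
data = true + 2e-3 * np.random.default_rng(0).normal(size=z.size)

(beta_fit, z0_fit), _ = curve_fit(transmittance, z, data, p0=[1e-4, 5.0])
print(f"beta_eff ~ {beta_fit:.2e} cm/W, Rayleigh range z0 ~ {z0_fit:.2f} mm")
```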
We present a sample of 86,111 star-forming galaxies (SFGs) selected from the catalogue of the MPA-JHU emission-line measurements for SDSS DR7 to investigate the evolution of the mass-metallicity (MZ) relation. We find that, under the $\rm log(L_{H \alpha})>41.0$ threshold, the $0.04<z\leqslant0.06$ SFGs with $9.2<$ log($M_{\star}/M_{\sun})<9.5$ always have higher metallicities ($\sim 0.1$ dex) than the $0.10<z<0.12$ SFGs when using the closely-matched control sample method; under the $\rm log(L_{[OIII]})>39.7$ threshold, the $0.04<z\leqslant0.06$ SFGs with $9.2<$ log($M_{\star}/M_{\sun})<9.5$ do not exhibit evolution of the MZ relation, in contrast to the $0.10<z<0.12$ SFGs. We find that the metallicity tends to be lower in galaxies with a higher concentration, higher S\'{e}rsic index, or higher SFR. In addition, the stellar mass and metallicity are usually higher in galaxies with a higher $D_{n}4000$ or higher log(N/O) ratio. Moreover, for the two galaxy populations with log($M_{\star}/M_{\sun}$) below $10.0$ or above $10.5$, the MZ relation clearly shows an anticorrelation and a positive correlation, respectively, between specific star formation rate and 12+log(O/H).
|
astrophysics
|
We present the design, manufacturing technique, and characterization of a 3D-printed broadband graded index millimeter wave absorber. The absorber is additively manufactured using a fused filament fabrication (FFF) 3D printer out of a carbon-loaded high impact polystyrene (HIPS) filament and is designed using a space-filling curve to optimize manufacturability using said process. The absorber's reflectivity is measured from 63 GHz to 115 GHz and from 140 GHz to 215 GHz and is compared to electromagnetic simulations. The intended application is for terminating stray light in Cosmic Microwave Background (CMB) telescopes, and the absorber has been shown to survive cryogenic thermal cycling.
|
astrophysics
|
We present a revision of predictions for nuclear shadowing in deep-inelastic scattering at small Bjorken $x_{Bj}$ corresponding to kinematic regions accessible by the future experiments at electron-ion colliders. The nuclear shadowing is treated within the color dipole formalism based on the rigorous Green function technique. This allows incorporating naturally the color transparency and coherence length effects, which are not consistently and properly included in present calculations. For the lowest $|q\bar q\rangle$ Fock component of the photon, our calculations are based on an exact numerical solution of the evolution equation for the Green function. Here the magnitude of shadowing is tested using a realistic form for the nuclear density function, as well as various phenomenological models for the dipole cross section. The corresponding variation of the transverse size of the $q\bar q$ photon fluctuations is important for $x_{Bj}\gtrsim 10^{-4}$, in contrast to most other models, which frequently use only the eikonal approximation with a "frozen" transverse size. At $x_{Bj}\lesssim 0.01$ we also calculate, within the same formalism, a shadowing correction for the higher Fock component of the photon containing gluons. The corresponding magnitudes of the gluon shadowing correction are compared across different phenomenological dipole models. Our results are tested against available data from the E665 and NMC collaborations. Finally, the magnitude of nuclear shadowing is predicted for various kinematic regions that should be scanned by the future experiments at electron-ion colliders.
|
high energy physics phenomenology
|
The partially observable card game Hanabi has recently been proposed as a new AI challenge problem due to its dependence on implicit communication conventions and apparent necessity of theory of mind reasoning for efficient play. In this work, we propose a mechanism for imbuing Reinforcement Learning agents with a theory of mind to discover efficient cooperative strategies in Hanabi. The primary contributions of this work are threefold: First, a formal definition of a computationally tractable mechanism for computing hand probabilities in Hanabi. Second, an extension to conventional Deep Reinforcement Learning that introduces reasoning over finitely nested theory of mind belief hierarchies. Finally, an intrinsic reward mechanism enabled by theory of mind that incentivizes agents to share strategically relevant private knowledge with their teammates. We demonstrate the utility of our algorithm against Rainbow, a state-of-the-art Reinforcement Learning agent.
|
computer science
|
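As a companion to the first contribution above, here is a minimal sketch of a tractable hand-probability computation for a single hidden Hanabi card: count the unseen copies of each (color, rank), mask out identities excluded by hints, and normalize. The function name and the simplification to one card are assumptions for illustration; the paper's formal mechanism covers full hands and nested beliefs.

```python
from collections import Counter

COLORS = ["R", "G", "B", "Y", "W"]
COPIES = {1: 3, 2: 2, 3: 2, 4: 2, 5: 1}   # standard Hanabi deck composition

def hand_probabilities(visible, allowed_colors, allowed_ranks):
    """visible: (color, rank) cards seen elsewhere (discards, fireworks,
    teammates' hands); allowed_*: identities consistent with hints so far."""
    seen = Counter(visible)
    counts = {}
    for c in COLORS:
        for r, n in COPIES.items():
            remaining = n - seen[(c, r)]
            if remaining > 0 and c in allowed_colors and r in allowed_ranks:
                counts[(c, r)] = remaining
    total = sum(counts.values())
    return {card: k / total for card, k in counts.items()}

# example: a card hinted "red" when two red 1s are already visible elsewhere
p = hand_probabilities(visible=[("R", 1), ("R", 1)],
                       allowed_colors={"R"}, allowed_ranks={1, 2, 3, 4, 5})
print(p[("R", 1)])   # one red 1 left among 8 remaining red cards -> 0.125
```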
Research in functional regression has made great strides in expanding to non-Gaussian functional outcomes, but exploration of ordinal functional outcomes remains limited. Motivated by a study of computer-use behavior in rhesus macaques (Macaca mulatta), we introduce the Ordinal Probit Functional Outcome Regression model (OPFOR). OPFOR models can be fit using one of several basis functions including penalized B-splines, wavelets, and O'Sullivan splines -- the last of which typically performs best. Simulation using a variety of underlying covariance patterns shows that the model performs reasonably well in estimation under multiple basis functions with near nominal coverage for joint credible intervals. Finally, in application, we use Bayesian model selection criteria adapted to functional outcome regression to best characterize the relation between several demographic factors of interest and the monkeys' computer use over the course of a year. In comparison with a standard ordinal longitudinal analysis, OPFOR outperforms a cumulative-link mixed-effects model in simulation and provides additional and more nuanced information on the nature of the monkeys' computer-use behavior.
|
statistics
|
We introduce fusion-based quantum computing (FBQC), a model of universal quantum computation in which entangling measurements, called fusions, are performed on the qubits of small constant-sized entangled resource states. We introduce a stabilizer formalism for analyzing fault tolerance and computation in these schemes. This framework naturally captures the error structure that arises in certain physical systems for quantum computing, such as photonics. FBQC can offer significant architectural simplifications, enabling hardware made up of many identical modules, requiring an extremely low depth of operations on each physical qubit, and reducing classical processing requirements. We present two pedagogical examples of fault-tolerant schemes constructed in this framework and numerically evaluate their threshold under a hardware-agnostic fusion error model including both erasure and Pauli error. We also study an error model of linear optical quantum computing with probabilistic fusion and photon loss. In FBQC the non-determinism of fusion is dealt with directly by the quantum error correction protocol, along with other errors. We find that tailoring the fault-tolerance framework to the physical system allows the scheme to have a higher threshold than schemes reported in the literature. We present a ballistic scheme which can tolerate a 10.4% probability of suffering photon loss in each fusion.
|
quantum physics
|
We investigate properties of minimizers of a variational model describing the shape of charged liquid droplets. The model, proposed by Muratov and Novaga, takes into account the regularizing effect due to the screening of free counterions in the droplet. In particular we prove partial regularity of minimizers, a first step toward the understanding of further properties of minimizers.
|
mathematics
|
FLAME is a software package to perform a wide range of atomistic simulations for exploring the potential energy surfaces (PES) of complex condensed matter systems. The range of methods includes molecular dynamics simulations to sample free energy landscapes, saddle point searches to identify transition states, and gradient relaxations to find dynamically stable geometries. In addition to such common tasks, FLAME implements a structure prediction algorithm based on the minima hopping method (MHM) to identify the ground state structure of any system given solely the chemical composition, and a framework to train a neural network potential to reproduce the PES from $\textit{ab initio}$ calculations. The combination of neural network potentials with the MHM in FLAME enables a highly efficient and reliable identification of the ground state as well as metastable structures of molecules and crystals, as well as nanostructures, including surfaces, interfaces, and two-dimensional materials. In this manuscript, we provide detailed descriptions of the methods implemented in the FLAME code and its capabilities, together with several illustrative examples.
|
physics
|
The newly discovered Coronavirus Disease 2019 (COVID-19) has been spreading globally and causing hundreds of thousands of deaths around the world since its first emergence in late 2019. Computed tomography (CT) scans have shown distinctive features and higher sensitivity compared to other diagnostic tests, in particular the current gold standard, i.e., the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. Current deep learning-based algorithms are mainly developed based on Convolutional Neural Networks (CNNs) to identify COVID-19 pneumonia cases. CNNs, however, require extensive data augmentation and large datasets to identify detailed spatial relations between image instances. Furthermore, existing algorithms utilizing CT scans either extend slice-level predictions to patient-level ones using a simple thresholding mechanism, or rely on a sophisticated infection segmentation to identify the disease. In this paper, we propose a two-stage fully-automated CT-based framework for identification of COVID-19 positive cases, referred to as "COVID-FACT". COVID-FACT utilizes Capsule Networks as its main building blocks and is, therefore, capable of capturing spatial information. In particular, to make the proposed COVID-FACT independent of sophisticated segmentation of the area of infection, slices demonstrating infection are detected in the first stage, and the second stage is responsible for classifying patients into COVID and non-COVID cases. COVID-FACT detects slices with infection and identifies positive COVID-19 cases using an in-house CT scan dataset containing COVID-19, community-acquired pneumonia, and normal cases. Based on our experiments, COVID-FACT achieves an accuracy of 90.82%, a sensitivity of 94.55%, a specificity of 86.04%, and an Area Under the Curve (AUC) of 0.98, while depending on far less supervision and annotation than its counterparts.
|
electrical engineering and systems science
|
We show that it is possible to simulate an anyon by a trapped atom that possesses an induced electric dipole moment in the background of electromagnetic fields with a specific configuration. The applied electromagnetic fields comprise one magnetic field and two electric fields. We find that when the atom is cooled down to the limit of negligibly small kinetic energy, it behaves like an anyon because its angular momentum takes fractional values. The fractional part of the angular momentum is determined by both the magnetic field and one of the electric fields. The roles played by the two electromagnetic fields are analyzed.
|
quantum physics
|
Migration and replication of virtual network functions (VNFs) are well-known mechanisms for handling dynamic resource requests in Internet Service Provider (ISP) edge networks. They are used not only to reallocate resources in carrier networks, but also, in case of excessive traffic churn, to offload VNFs to third-party cloud providers. We propose to study how traffic forecasting can help to reduce the number of required migrations and replications when the traffic dynamically changes in the network. We analyze and compare three scenarios for VNF migrations and replications based on: (i) only the currently observed traffic demands, (ii) the maximum traffic demand value observed in the past, or (iii) predicted traffic values. For the prediction of traffic demand values, we use an LSTM model, which has proven to be one of the most accurate methods for time series forecasting problems. Based on the traffic prediction model, we then use a Mixed-Integer Linear Programming (MILP) model as well as a greedy algorithm to solve this optimization problem, which considers migrations and replications of VNFs. The results show that LSTM-based traffic prediction can reduce the number of migrations by up to 45\% when there are enough available resources to allocate replicas, while requiring less cloud-based offloading compared to overprovisioning.
|
computer science
|
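A minimal sketch of an LSTM traffic-demand forecaster of the kind that drives the migration/replication decisions described above, written in PyTorch on a synthetic periodic series. The window length, hidden size, and training setup are assumptions; the paper's data, features, and hyperparameters are not specified here.

```python
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # one-step-ahead demand forecast

# synthetic daily-periodic traffic with noise (288 5-minute slots per day)
t = torch.arange(0, 2000, dtype=torch.float32)
series = torch.sin(2 * torch.pi * t / 288) + 0.1 * torch.randn_like(t)

window = 48                                # look-back window (assumed)
X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X, y = X.unsqueeze(-1), y.unsqueeze(-1)

model = TrafficLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                       # full-batch training for brevity
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```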
Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive {\em uncertainty}. Quantifying uncertainty is especially critical in real-world settings, which often involve input distributions that are shifted from the training distribution due to a variety of factors including sample bias and non-stationarity. In such settings, well-calibrated uncertainty estimates convey information about when a model's output should (or should not) be trusted. Many probabilistic deep learning methods, including Bayesian and non-Bayesian methods, have been proposed in the literature for quantifying predictive uncertainty, but to our knowledge there has not previously been a rigorous large-scale empirical comparison of these methods under dataset shift. We present a large-scale benchmark of existing state-of-the-art methods on classification problems and investigate the effect of dataset shift on accuracy and calibration. We find that traditional post-hoc calibration does indeed fall short, as do several other previous methods. However, some methods that marginalize over models give surprisingly strong results across a broad spectrum of tasks.
|
statistics
|
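A minimal sketch of the expected calibration error (ECE), the standard metric behind calibration comparisons like the benchmark above: bucket predictions by confidence and average the gap between confidence and accuracy. The 15-bin choice is conventional, and the synthetic "overconfident model" below is an illustrative assumption.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """confidences: predicted max-probabilities; correct: 0/1 accuracy flags."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap       # bin weight times calibration gap
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 10_000)
correct = (rng.uniform(size=10_000) < conf * 0.9).astype(float)  # overconfident
print(f"ECE = {expected_calibration_error(conf, correct):.3f}")
```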
Methodological development of the Model-implied Instrumental Variable (MIIV) estimation framework has proved fruitful over the last three decades. Major milestones include Bollen's (1996) original development of the MIIV estimator and its robustness properties for continuous endogenous variable SEMs, the extension of the MIIV estimator to ordered categorical endogenous variables (Bollen \& Maydeu-Olivares, 2007), and the introduction of a Generalized Method of Moments (GMM) estimator (Bollen, Kolenikov \& Bauldry, 2014). This paper furthers these developments by making several unique contributions not present in the prior literature: (1) we use matrix calculus to derive the analytic derivatives of the PIV estimator, (2) we extend the PIV estimator to apply to any mixture of binary, ordinal, and continuous variables, (3) we generalize the PIV model to include intercepts and means, (4) we devise a method to input known threshold values for ordinal observed variables, and (5) we enable a general parameterization that permits the estimation of means, variances, and covariances of the underlying variables to use as input into a SEM analysis with PIV. An empirical example illustrates a mixture of continuous variables and ordinal variables with fixed thresholds. We also include a simulation study to compare the performance of this novel estimator to WLSMV.
|
statistics
|
Labelfree nanoscopy encompasses optical imaging with resolution in the 100 nm range using visible wavelengths. Here, we present a labelfree nanoscopy method that combines Fourier ptychography with waveguide microscopy to realize a 'super-condenser' featuring maximally inclined coherent darkfield illumination with artificially stretched wave vectors due to the large refractive indices of the employed Si$_3$N$_4$ waveguide material. We produce the required coherent plane wave illumination for Fourier ptychography over imaging areas 400 $\mathrm{\mu}$m$^2$ in size via adiabatically tapered single-mode waveguides and tackle the overlap constraints of the Fourier ptychography phase retrieval algorithm two-fold: firstly, the directionality of the illumination wave vector is changed sequentially via a multiplexed input structure of the waveguide chip layout, and secondly, the wave vector modulus is shortened via step-wise increases of the illumination light wavelength over the visible spectrum. We validate the method via in silico and in vitro experiments and provide details on the underlying image formation theory as well as the reconstruction algorithm.
|
physics
|
We compute, model, and predict the drag reduction of an actuated turbulent boundary layer at a momentum-thickness-based Reynolds number of $Re_{\theta} = 1000$. The actuation is performed using spanwise traveling transversal surface waves parameterized by wavelength, amplitude, and period. The drag reduction over the set of actuation parameters is modeled using 71 large-eddy simulations (LES). This drag model allows extrapolation outside the actuation domain to larger wavelengths and amplitudes. The modeling novelty lies in combining support vector regression for interpolation, a parameterized ridgeline leading out of the data domain, the scaling of Tomiyama and Fukagata (2013), and a discovered self-similar structure of the actuation effect. The model yields high prediction accuracy outside the training data range.
|
physics
|
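A minimal sketch of the support-vector-regression interpolation ingredient named above: fit predicted drag reduction over the (wavelength, amplitude, period) actuation space from a small number of samples, then query unseen parameter combinations. The toy response surface and the 71 synthetic samples below stand in for the paper's LES results.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# 71 samples of (wavelength, amplitude, period), ranges assumed for illustration
X = rng.uniform([0.5, 0.01, 20], [3.0, 0.08, 120], size=(71, 3))
dr = 10 * X[:, 1] / X[:, 0] * np.exp(-((X[:, 2] - 60) / 50) ** 2)  # toy surface
dr += 0.01 * rng.normal(size=71)                                    # LES "noise"

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, dr)
print(model.predict([[2.0, 0.05, 60.0]]))   # interpolated drag reduction
```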
Dark matter is the generally accepted paradigm in astrophysics and cosmology as a solution to the higher-than-expected rotation rates of galaxies, among many other reasons. But since the standard dark matter paradigm still encounters some problems at the galactic scale, we have resorted to an alternative solution, similar to Milgrom's Modified Newtonian Dynamics (MOND). Here, we have assumed that: (i) either the gravitational constant, G, is a function of distance (scale), G = G(r), or (ii) the gravitational-to-inertial mass ratio, $m_g/m_i$, is a function of distance (scale), f(r). We have used a linear approximation of each function, which introduces two new parameters to be determined: $G_1$, the first-order coefficient of gravitational coupling, and $C_1$, the first-order coefficient of the gravitational-to-inertial mass ratio. In the current part of this research, we have generated simplified theoretical rotation curves for some hypothetical galaxies by varying these parameters. We have concluded that our model gives qualitatively and quantitatively acceptable behavior of the galactic rotation curves for some values of these parameters. The values of the first-order coefficients that give a quantitatively acceptable description of galactic rotation curves are: $G_1$ between around $10^{-31}$ and $10^{-30}$ m$^2$ s$^{-2}$ kg$^{-1}$, and $C_1$ between $10^{-21}$ and $10^{-20}$ m$^{-1}$. Furthermore, our model implies the existence of a critical distance at which the MOND effects become significant, rather than a critical acceleration. In fact, Milgrom's MOND converges with our model if the critical acceleration is not a constant but a linear function of the galactic baryonic mass.
|
astrophysics
|
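To make the quoted $G_1$ scaling concrete, here is the point-mass rotation-curve relation implied by the linear approximation $G(r) = G_0 + G_1 r$; the baryonic mass $M_b$ and the numerical example are illustrative assumptions, not values from the paper beyond the quoted $G_1$ range:

$$\frac{v^2(r)}{r} = \frac{G(r)\,M_b}{r^2} \;\Longrightarrow\; v^2(r) = \frac{G_0 M_b}{r} + G_1 M_b ,$$

so the curve flattens at $v_\infty = \sqrt{G_1 M_b}$; with $G_1 \sim 10^{-30}\ \mathrm{m^2\,s^{-2}\,kg^{-1}}$ and $M_b \sim 10^{41}$ kg this gives $v_\infty \sim 300$ km/s. Equating with the MOND relation $v_\infty^4 = G_0 M_b a_0$ yields an effective $a_0 = G_1^2 M_b / G_0$, linear in the baryonic mass, consistent with the abstract's closing remark.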
This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed. In particular, a "sketch" is first constructed by computing carefully chosen nonlinear random features (e.g., random Fourier features) and averaging them over the whole dataset. Parameters are then learned from the sketch, without access to the original dataset. This article surveys the current state-of-the-art in sketched learning, including the main concepts and algorithms, their connections with established signal-processing methods, existing theoretical guarantees -- on both information preservation and privacy preservation -- and important open problems.
|
statistics
|
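A minimal numpy sketch of the sketching operation described above: average random Fourier features over the dataset in a single streaming pass, so that learning can later proceed from one short vector. The Gaussian bandwidth, sketch size, and synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 10))         # stand-in for a large dataset
m, sigma = 512, 1.0                        # sketch size and kernel bandwidth

W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], m))  # random frequencies

def featurize(batch):
    proj = batch @ W
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1) / np.sqrt(m)

sketch = np.zeros(2 * m)                   # one 2m-dim summary of all points
for i in range(0, len(X), 10_000):         # single streaming pass, batched
    sketch += featurize(X[i:i + 10_000]).sum(axis=0)
sketch /= len(X)
print(sketch.shape)                        # (1024,)
```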
The nonlinear Shannon capacity limit has been identified as the fundamental barrier to the maximum rate of transmitted information in optical communications. In long-haul high-bandwidth optical networks, this limit is mainly attributed to deterministic Kerr-induced fiber nonlinearities and to the interaction of amplified spontaneous emission noise from cascaded optical amplifiers with fiber nonlinearity: the stochastic parametric noise amplification. Unlike earlier impractical approaches that compensate solely for deterministic nonlinearities, here we demonstrate a novel electronic-based deep neural network with multiple inputs and outputs (MIMO) that tackles the interplay of deterministic and stochastic nonlinearity manifestations in coherent optical signals. Our demonstration shows that MIMO deep learning can compensate nonlinear inter-carrier crosstalk effects even in the presence of stochastic frequency variations, which has hitherto been considered impossible. Our solution significantly outperforms conventional machine learning and gold-standard nonlinear equalizers without sacrificing computational complexity, leading to record-breaking transmission performance for up to 40 Gbit/sec high-spectral-efficiency optical signals.
|
electrical engineering and systems science
|
Parametric amplification of attosecond coherent pulses around 100 eV at the single-atom level is demonstrated for the first time by using the 3D time-dependent Schr{\"o}dinger equation in high-harmonic generation processes from excited states of He$^+$. We present the attosecond dynamics of the amplification process far from the ionization threshold and resolve the physics behind it. The amplification of a particular central photon energy requires the seed XUV pulses to be perfectly synchronized in time with the driving laser field for stimulated recombination to the He$^+$ ground state, and is only produced in a few specific laser cycles, in agreement with the experimental measurements. Our simulations show that the amplified photon energy region can be controlled by varying the peak intensity of the laser field. Our results pave the way to the realization of compact, intense attosecond XUV lasers with broad applications.
|
physics
|
We perform a general reduction of an M5-brane on a spacetime that admits a null Killing vector, including couplings to background 4-form fluxes and possible twisting of the normal bundle. We give the non-abelian extension of this action and present its supersymmetry transformations. The result is a class of supersymmetric non-lorentzian gauge theories in 4+1 dimensions, which depend on the geometry of the six-dimensional spacetime. These can be used for DLCQ constructions of M5-branes reduced on various manifolds.
|
high energy physics theory
|
In this work, we present a novel neural network to generate high-resolution images. We replace the decoder of a VAE with a discriminator while using the encoder as it is. The encoder is fed data from a normal distribution, while the generator is fed samples from a Gaussian distribution. The combined output of both is given to a discriminator, which tells whether the generated image is correct or not. We evaluate our network on 3 different datasets: the MNIST, LSUN and CelebA datasets. Our network beats the previous state of the art using MMD, SSIM, log likelihood, reconstruction error, ELBO and KL divergence as the evaluation metrics, while generating much sharper images. This work is potentially very exciting as we are able to combine the advantages of generative models and inference models in a principled Bayesian manner.
|
electrical engineering and systems science
|
In this work, we consider the distributed stochastic optimization problem of minimizing a non-convex function $f(x) = \mathbb{E}_{\xi \sim \mathcal{D}} f(x; \xi)$ in an adversarial setting, where the individual functions $f(x; \xi)$ can also be potentially non-convex. We assume that at most an $\alpha$-fraction of a total of $K$ nodes can be Byzantine. We propose a robust stochastic variance-reduced gradient (SVRG)-like algorithm for the problem, where the batch gradients are computed at the worker nodes (WNs) and the stochastic gradients are computed at the server node (SN). For the non-convex optimization problem, we show that we need $\tilde{O}\left( \frac{1}{\epsilon^{5/3} K^{2/3}} + \frac{\alpha^{4/3}}{\epsilon^{5/3}} \right)$ gradient computations on average at each node (SN and WNs) to reach an $\epsilon$-stationary point. The proposed algorithm guarantees convergence via the design of a novel Byzantine filtering rule which is independent of the problem dimension. Importantly, we capture the effect of the fraction of Byzantine nodes $\alpha$ present in the network on the convergence performance of the algorithm.
|
mathematics
|
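For intuition about the server-side filtering step, here is a minimal numpy sketch of robust gradient aggregation under Byzantine workers using a coordinate-wise median. This is a generic stand-in chosen for brevity; the paper's own filtering rule is norm-based and dimension-independent, and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, alpha = 20, 50, 0.2                   # workers, dimension, Byzantine frac
true_grad = rng.normal(size=d)

grads = true_grad + 0.1 * rng.normal(size=(K, d))     # honest batch gradients
n_byz = int(alpha * K)
grads[:n_byz] = 100.0 * rng.normal(size=(n_byz, d))   # arbitrary corruptions

naive = grads.mean(axis=0)                  # ruined by the Byzantine workers
robust = np.median(grads, axis=0)           # coordinate-wise median survives
print(f"naive error:  {np.linalg.norm(naive - true_grad):.2f}")
print(f"robust error: {np.linalg.norm(robust - true_grad):.2f}")
```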
A small fraction of millicharged dark matter (DM) is considered in the literature to give an interpretation about the enhanced 21-cm absorption at the cosmic dawn. Here we focus on the case that the main component of DM is self-interacting dark matter (SIDM), motivated by the small scale problems. For self interactions of SIDM being compatible from dwarf to cluster scales, velocity-dependent self interactions mediated by a light scalar $\phi$ is considered. To fermionic SIDM $\Psi$, the main annihilation mode $\Psi \bar{\Psi} \to \phi \phi$ is a $p -$wave process. The thermal transition of SIDM $\rightleftarrows \phi \rightleftarrows$ standard model (SM) particles in the early universe sets a lower bound on couplings of $\phi$ to SM particles, which has been excluded by DM direct detections, and here we consider SIDM in the thermal equilibrium via millicharged DM. For $m_\phi >$ twice millicharged DM mass, $\phi$ could decay quickly and avoid excess energy injection to the big bang nucleosynthesis. Thus, the $\phi -$SM particle couplings could be very tiny and evade DM direct detections. The picture of weakly interacting massive particle (WIMP)-nucleus scattering with contact interactions fails for SIDM-nucleus scattering with a light mediator, and a method is explored in this paper, with which a WIMP search result can be converted into the hunt for SIDM in direct detections.
|
high energy physics phenomenology
|
We consider the problem of assessing the importance of multiple variables or factors from a dataset when side information is available. In principle, using side information can allow the statistician to pay attention to variables with greater potential, which, in turn, may lead to more discoveries. We introduce an adaptive knockoff filter, which generalizes the knockoff procedure (Barber and Cand\`es, 2015; Cand\`es et al., 2018) in that it uses both the data at hand and side information to adaptively order the variables under study and focus on those that are most promising. Adaptive knockoffs control the finite-sample false discovery rate (FDR), and we demonstrate their power by comparing them with other structured multiple testing methods. We also apply our methodology to real genetic data in order to find associations between genetic variants and various phenotypes such as Crohn's disease and lipid levels. Here, adaptive knockoffs make more discoveries than reported in previous studies on the same datasets.
|
statistics
|
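A minimal sketch of the knockoff+ selection step that the adaptive filter above builds on (Barber and Cand\`es, 2015): given feature statistics $W_j$, choose the smallest threshold whose estimated false discovery proportion falls below the target level $q$. The adaptive, side-information-driven ordering from the paper is not shown.

```python
import numpy as np

def knockoff_plus_select(W, q=0.1):
    """W: knockoff statistics, large positive = evidence for that variable."""
    for t in np.sort(np.abs(W[W != 0])):            # candidate thresholds
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return np.flatnonzero(W >= t)           # selected variable indices
    return np.array([], dtype=int)

rng = np.random.default_rng(0)
signals = rng.normal(3, 1, 30)                      # 30 true signals
nulls = rng.normal(0, 1, 470) * rng.choice([-1, 1], 470)   # symmetric nulls
W = np.concatenate([signals, nulls])
print(len(knockoff_plus_select(W, q=0.1)), "variables selected")
```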
We consider compactifications of rank $Q$ E-string theory on a genus zero surface with no punctures but with flux for various subgroups of the $\text{E}_8\times \text{SU}(2)$ global symmetry group of the six dimensional theory. We first construct a simple Wess-Zumino model in four dimensions corresponding to the compactification on a sphere with one puncture and a particular value of flux, the cap model. Using this theory and theories corresponding to two punctured spheres with flux, one can obtain a large number of models corresponding to spheres with a variety of fluxes. These models exhibit interesting IR enhancements of global symmetry as well as duality properties. As an example we will show that constructing sphere models associated to specific fluxes related by an action of the Weyl group of $\text{E}_8$ leads to the S-confinement duality of the $\text{USp}(2Q)$ gauge theory with six fundamentals and a traceless antisymmetric field. Finally, we show that the theories we discuss possess an $\text{SU}(2)_{\text{ISO}}$ symmetry in four dimensions that can be naturally identified with the isometry of the two-sphere. We give evidence in favor of this identification by computing the `t Hooft anomalies of the $\text{SU}(2)_{\text{ISO}}$ in 4d and comparing them with the predicted anomalies from 6d.
|
high energy physics theory
|
Cosmological relaxation of the electroweak scale is improved by using particle production to trap the relaxion. We combine leptogenesis with such a relaxion model that has no extremely small parameters or large e-foldings. Scanning happens after inflation--now allowed to be at a high scale--over a sub-Planckian relaxion field range for an $\mathcal{O}(100)$ TeV cut-off scale of new physics. Particle production by the relaxion also reheats the universe and generates the baryonic matter-antimatter asymmetry. We propose a realisation in which out-of-equilibrium leptons, produced by the relaxion, scatter with the thermal bath through interactions that violate CP and lepton number via higher-dimensional operators. Such a minimal effective field theory setup, with no new physics below the cut-off, naturally decouples new physics while linking leptogenesis to relaxion particle production; the baryon asymmetry of the universe can thus be intrinsically tied to a weak scale hierarchy.
|
high energy physics phenomenology
|
We present our experience with a data science problem in Public Health, where researchers use social media (Twitter) to determine whether the public shows awareness of HIV prevention measures offered by Public Health campaigns. To help the researchers, we develop an investigative exploration system called boutique that allows a user to perform a multi-step visualization and exploration of data through a dashboard interface. Unique features of boutique include its ability to handle heterogeneous types of data provided by a polystore, and its ability to use computation as part of the investigative exploration process. In this paper, we present the design of the boutique middleware and walk through an investigation process for a real-life problem.
|
computer science
|
The pair cluster (dimer) is studied within the framework of the extended Hubbard model in the grand canonical ensemble. The elastic interatomic interactions and the thermal vibrational energy of the atoms are taken into account. The total grand potential is constructed, from which the equation of state is derived. In the equilibrium state, the deformation of the cluster size, as well as its derivatives, is studied as a function of temperature and the external magnetic and electric fields. In particular, the thermal expansion, magnetostriction and electrostriction effects are examined for arbitrary temperature, in a wide range of Hamiltonian parameters.
|
condensed matter
|
This note presents a simple and unified formulation of the most fundamental structures used in quantum information with qubits, arbitrary dimension qudits, and quantum continuous variables. This \emph{general quantum variables} construction provides a succinct language for formulating many results in quantum computation and information so that they are applicable in all dimensions. The structures included within this formalism include: a generalization to arbitrary dimension of the three Pauli operators, and the associated mutually unbiased bases; the Pauli and Clifford groups; many important quantum gates; standard sets of generators for the Clifford group; and simple universal gate sets. This formalism provides a convenient, intuitive and extensible language for easily generalizing results that were originally derived for a single type of system (often qubits or quantum continuous variables), and that rely on only those structures listed above, to apply in all dimensions.
|
quantum physics
|
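A minimal numpy sketch of the dimension-$d$ generalization of the Pauli $X$ and $Z$ operators mentioned above: the shift and clock matrices, which satisfy the Weyl relation $ZX = \omega XZ$ with $\omega = e^{2\pi i/d}$ and reduce to the qubit Paulis at $d = 2$.

```python
import numpy as np

def shift_clock(d):
    omega = np.exp(2j * np.pi / d)
    X = np.roll(np.eye(d), 1, axis=0)     # shift: X|j> = |j+1 mod d>
    Z = np.diag(omega ** np.arange(d))    # clock: Z|j> = omega^j |j>
    return X, Z, omega

d = 5
X, Z, omega = shift_clock(d)
print(np.allclose(Z @ X, omega * X @ Z))  # True: the qudit Weyl relation
print(np.allclose(np.linalg.matrix_power(X, d), np.eye(d)))  # X^d = identity
```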
The main aim of the ESS$\nu$SB proposal is the discovery of the leptonic CP phase $\delta_{CP}$ with high significance ($5\sigma$ for 50\% of the $\delta_{CP}$ values) by utilizing the physics at the second oscillation maximum of the $P_{\mu e}$ channel. It can achieve $3\sigma$ sensitivity to the mass hierarchy for all values of $\delta_{CP}$. In this work, we concentrate on the hierarchy and octant sensitivity of the ESS$\nu$SB experiment. We show that combining the ESS$\nu$SB experiment with the atmospheric neutrino data from the proposed India-based Neutrino Observatory (INO) experiment can result in an increased sensitivity to the mass hierarchy. In addition, we also combine the results from the ongoing experiments T2K and NO$\nu$A, assuming their full runtime, and present the combined sensitivity of ESS$\nu$SB + ICAL@INO + T2K + NO$\nu$A. We show that while ESS$\nu$SB by itself can have up to $3\sigma$ hierarchy sensitivity, the combination of all the experiments can give up to $5\sigma$ sensitivity depending on the true hierarchy-octant combination. The octant sensitivity of ESS$\nu$SB by itself is low. However, the combined sensitivity of all the above experiments can reach up to $3\sigma$ depending on the choice of true hierarchy and octant. We discuss the various degeneracies and the synergies that lead to the enhanced sensitivity when combining different experimental data.
|
high energy physics phenomenology
|
We find the most general solution to Chern-Simons AdS$_3$ gravity in the Fefferman-Graham gauge. The connections are equivalent to geometries that have a non-trivial curved boundary, characterized by a 2-dimensional vielbein and a spin connection. We define a variational principle for Dirichlet boundary conditions and find the boundary stress tensor in the Chern-Simons formalism. Using this variational principle as the departure point, we show how to treat other choices of boundary conditions in this formalism, such as the mixed boundary conditions corresponding to a $T \bar T$-deformation.
|
high energy physics theory
|
We propose a radiative seesaw model based on a modular $A_4$ symmetry, which has good predictability in the lepton sector. We perform a numerical analysis to search for parameters that satisfy experimental constraints such as those from neutrino oscillation data and lepton flavor violation. We then present several predictions of our model that originate from the modular symmetry at a fixed point as well as in the fundamental region of $\tau$.
|
high energy physics phenomenology
|
Intrinsic dimension and differential entropy estimators are studied in this paper, including their systematic bias. A pragmatic approach for the joint estimation and bias correction of these two fundamental measures is proposed. Shared steps in both estimators are highlighted, along with their useful consequences for data analysis. It is shown that both estimators can be complementary parts of a single approach, and that the simultaneous estimation of differential entropy and intrinsic dimension gives meaning to each other, where estimates at different observation scales convey different perspectives of the underlying manifolds. Experiments with synthetic and real datasets are presented to illustrate how to extract meaning from visual inspections, and how to compensate for biases.
|
statistics
|
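For context on the estimators discussed above, here is a minimal sketch of a standard kNN-based intrinsic-dimension estimator (the Levina-Bickel MLE) applied to a curve embedded in 3D. The paper's joint bias-corrected estimator is not reproduced, and the choice $k = 10$ is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def mle_intrinsic_dimension(X, k=10):
    dists, _ = cKDTree(X).query(X, k=k + 1)    # column 0 is the point itself
    dists = dists[:, 1:]
    # Levina-Bickel per-point MLE: (k-1) / sum_{j<k} log(r_k / r_j)
    denom = np.log(dists[:, -1:] / dists[:, :-1]).sum(axis=1)
    return float(np.mean((k - 1) / denom))

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)        # a 1D circle embedded in R^3
X = np.c_[np.cos(theta), np.sin(theta), np.zeros_like(theta)]
X += 1e-4 * rng.normal(size=X.shape)           # tiny noise avoids exact ties
print(f"estimated intrinsic dimension: {mle_intrinsic_dimension(X):.2f}")  # ~1
```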
Self-folding origami has emerged as a tool for making functional objects in materials science. The common idea is to pattern a sheet with creases and activate them so the object folds spontaneously into a desired configuration. This article shows that collinear quadrilateral metasheets are able to fold into the Miura-Ori configuration if we impose strain on only part of their creases. In this study, we define and determine the optimal pattern of strain (OPS) on a collinear quadrilateral metasheet, that is, the pattern of minimum "functional" creases with which the self-folding metasheet can fold stably into the Miura-Ori state. By comparing the energy evolution along the folding pathway of each possible folded state under the OPS, we conclude that the energy predominance of the desired Miura-Ori pathway during the initial period of time accounts for why the OPS works. Furthermore, we measure the projected force of the OPS on the initial flat metasheet and give insights into how to determine the OPS using only local information of the initial flat state.
|
condensed matter
|
We propose a novel approach for deformation-aware neural networks that learn the weighting and synthesis of dense volumetric deformation fields. Our method specifically targets the space-time representation of physical surfaces from liquid simulations. Liquids exhibit highly complex, non-linear behavior under changing simulation conditions such as different initial conditions. Our algorithm captures these complex phenomena in two stages: a first neural network computes a weighting function for a set of pre-computed deformations, while a second network directly generates a deformation field for refining the surface. Key for successful training runs in this setting is a suitable loss function that encodes the effect of the deformations, and a robust calculation of the corresponding gradients. To demonstrate the effectiveness of our approach, we showcase our method with several complex examples of flowing liquids with topology changes. Our representation makes it possible to rapidly generate the desired implicit surfaces. We have implemented a mobile application to demonstrate that real-time interactions with complex liquid effects are possible with our approach.
|
computer science
|
We design, test, and analyze fiber-optic voltage sensors based on optical reflection from a piezoelectric transducer. By controlling the physical dimensions of the device, we can tune the frequency of its natural resonance to achieve a desired sensitivity and bandwidth combination. In this work, we fully characterize sensors designed with a 2 kHz characteristic resonance, experimentally verifying a readily usable frequency range from approximately 10 Hz to 3 kHz. Spectral noise measurements indicate detectable voltage levels down to 300 mV rms at 60 Hz, along with a full-scale dynamic range of 60 dB, limited currently by the readout electronics, not the inherent performance of the transducer in the sensor. Additionally, we demonstrate a digital signal processing approach to equalize the measured frequency response, enabling accurate retrieval of short-pulse inputs. Our results suggest the value and applicability of intensity-modulated fiber-optic voltage sensors for measuring both steady-state waveforms and broadband transients which, coupled with the straightforward and compact design of the sensors, should make them effective tools in electric grid monitoring.
|
electrical engineering and systems science
|
In this note we present explicit canonical forms for all the elements of the two-qubit CNOT-Dihedral group, with minimal numbers of controlled-S (CS) and controlled-X (CX) gates, using the generating set of quantum gates [X, T, CX, CS]. We provide an algorithm to successively construct the n-qubit CNOT-Dihedral group, ensuring an optimal number of controlled-X (CX) gates. These results are needed to estimate gate errors via non-Clifford randomized benchmarking and may have further applications to circuit optimization over fault-tolerant gate sets.
|
quantum physics
|
In this work we introduce a new methodology to infer from gene expression data the complex interactions associated with polygenic diseases, which remain a major frontier in understanding factors in human health. In many cases disease may be related to the covariance of several genes, rather than simply the variance of a single gene, making network inference crucial to the development of potential treatments. Specifically, we investigate the network of factors and associations involved in developing breast cancer from gene expression data. Our approach is information theoretic, but a major obstacle has been the discrete nature of such data, which is well described as a multivariate Poisson process. In fact, although mutual information is generally a well-regarded approach for developing networks of association in the data science of complex systems across many disciplines, until now a good method to accurately and efficiently compute entropies from such processes has been lacking. Nonparametric methods such as the popular k-nearest neighbors (KNN) methods converge slowly and thus require unrealistic amounts of data. We use the causation entropy (CSE) principle, together with the associated greedy search algorithm, optimal CSE (oCSE), as a network inference method to deduce the actual structure, with our multivariate Poisson estimator developed here as the core computational engine. We show that the Poisson version of oCSE outperforms both the Kraskov-St\"ogbauer-Grassberger (KSG) oCSE method (which is a KNN method for estimating the entropy) and the Gaussian oCSE method on synthetic data. We present results for a breast cancer gene expression data set.
|
statistics
|
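As a small illustration of the univariate building block behind entropy estimation for Poisson-distributed counts, here is a plug-in sketch that estimates $\lambda$ from data and evaluates $H = -\sum_k p_k \log p_k$ with $p_k = e^{-\lambda}\lambda^k/k!$ by truncating the series. The paper's multivariate Poisson estimator and the oCSE search are substantially more involved and are not shown.

```python
import numpy as np
from scipy.special import gammaln

def poisson_entropy(lam):
    kmax = int(lam + 10 * np.sqrt(lam) + 25)   # series truncation point
    k = np.arange(kmax + 1)
    logp = -lam + k * np.log(lam) - gammaln(k + 1)   # log p_k, exact
    p = np.exp(logp)
    return float(-(p * logp).sum())            # entropy in nats

rng = np.random.default_rng(0)
counts = rng.poisson(4.2, size=5000)           # synthetic expression counts
lam_hat = counts.mean()                        # plug-in rate estimate
print(f"lambda_hat = {lam_hat:.3f}, entropy ~ {poisson_entropy(lam_hat):.3f} nats")
```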
Mostly acyclic directed networks, treated mathematically as directed graphs, arise in machine learning, biology, social science, physics, and other applications. Newman [1] has noted the mathematical challenges of such networks. In this series of papers, we study their connectivity properties, focusing on three types of phase transitions that affect horizon sizes for typical nodes. The first two types involve the familiar emergence of giant components as average local connectivity increases, while the third type involves small-world horizon growth at variable distance from a typical node. In this first paper, we focus on qualitative behavior, simulations, and applications, leaving formal considerations for subsequent papers. We explain how such phase transitions distinguish deep neural networks from shallow machine learning architectures, and propose hybrid local/random network designs with surprising connectivity advantages. We also propose a small-world approach to the horizon problem in the cosmology of the early universe as a novel alternative to the inflationary hypothesis of Guth and Linde.
|
physics
|
We study the offline recommender learning problem in the presence of selection bias in rating feedback. A current promising solution to address the bias is to use the propensity score. However, the performance of the existing propensity-based methods can significantly suffer from propensity estimation bias. To solve the problem, we formulate the recommendation with selection bias as unsupervised domain adaptation and derive a propensity-independent generalization error bound. We further propose a novel algorithm that minimizes the bound via adversarial learning. Our theory and algorithm do not depend on propensity scores, and thus can result in a well-performing rating predictor without requiring the true propensity information. Empirical evaluation demonstrates the effectiveness and real-world applicability of the proposed approach.
|
statistics
|
Functional magnetic resonance imaging (fMRI) data have become increasingly available and are useful for describing functional connectivity (FC), the relatedness of neuronal activity in regions of the brain. This FC of the brain provides insight into certain neurodegenerative diseases and psychiatric disorders, and thus is of clinical importance. To help inform physicians regarding patient diagnoses, unsupervised clustering of subjects based on FC is desired, allowing the data to inform us of groupings of patients based on shared features of connectivity. Since heterogeneity in FC is present even between patients within the same group, it is important to allow subject-level differences in connectivity, while still pooling information across patients within each group to describe group-level FC. To this end, we propose a random covariance clustering model (RCCM) to concurrently cluster subjects based on their FC networks, estimate the unique FC networks of each subject, and to infer shared network features. Although current methods exist for estimating FC or clustering subjects using fMRI data, our novel contribution is to cluster or group subjects based on similar FC of the brain while simultaneously providing group- and subject-level FC network estimates. The competitive performance of RCCM relative to other methods is demonstrated through simulations in various settings, achieving both improved clustering of subjects and estimation of FC networks. Utility of the proposed method is demonstrated with application to a resting-state fMRI data set collected on 43 healthy controls and 61 participants diagnosed with schizophrenia.
|
statistics
|
The search for earth abundant, efficient and stable electrocatalysts that can enable the chemical reduction of CO2 to value-added chemicals and fuels at an industrially relevant scale, is a high priority for the development of a global network of renewable energy conversion and storage systems that can meaningfully impact greenhouse gas induced climate change. Here we introduce a straightforward, low cost, scalable and technologically relevant method to manufacture an all-carbon, electroactive, nitrogen-doped nanoporous carbon-carbon nanotube composite membrane, dubbed "HNCM-CNT". The membrane is demonstrated to function as a binder-free, high-performance electrode for the electrocatalytic reduction of CO2 to formate. The Faradaic efficiency for the production of formate is 81%. Furthermore, the robust structural and electrochemical properties of the membrane endow it with excellent long-term stability.
|
physics
|
In many applications, there is a need to predict the effect of an intervention on different individuals from data. For example, which customers are persuadable by a product promotion? Which patients should be treated with a certain type of treatment? These are typical causal questions involving the effect of, or the change in outcomes made by, an intervention. These questions cannot be answered with traditional classification methods, as those only use associations to predict outcomes. For personalised marketing, these questions are often answered with uplift modelling. The objective of uplift modelling is to estimate causal effects, but its literature does not discuss when the uplift represents a causal effect. Causal heterogeneity modelling can solve the problem, but its assumption of unconfoundedness is untestable in data. Practitioners therefore need guidelines for their applications when using these methods. In this paper, we use causal classification for a set of personalised decision making problems, and differentiate it from classification. We discuss the conditions under which causal classification can be resolved by uplift (and causal heterogeneity) modelling methods. We also propose a general framework for causal classification that uses off-the-shelf supervised methods for flexible implementations. Experiments show that two instantiations of the framework work for causal classification and for uplift (causal heterogeneity) modelling, and are competitive with other uplift (causal heterogeneity) modelling methods.
|
computer science
|
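A minimal sketch of the two-model ("T-learner") instantiation compatible with the off-the-shelf framework described above: fit one outcome model on treated units and one on controls, and score individuals by the predicted difference. The synthetic randomized data and logistic models are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
X = rng.normal(size=(n, 5))
t = rng.integers(0, 2, n)                         # randomized treatment flag
base = 0.2 + 0.2 * (X[:, 0] > 0)                  # baseline response rate
lift = 0.15 * (X[:, 1] > 0)                       # persuadable subgroup
y = (rng.uniform(size=n) < base + t * lift).astype(int)

m1 = LogisticRegression().fit(X[t == 1], y[t == 1])   # treated-outcome model
m0 = LogisticRegression().fit(X[t == 0], y[t == 0])   # control-outcome model
uplift = m1.predict_proba(X)[:, 1] - m0.predict_proba(X)[:, 1]
print("mean predicted uplift (X1 > 0): ", round(uplift[X[:, 1] > 0].mean(), 3))
print("mean predicted uplift (X1 <= 0):", round(uplift[X[:, 1] <= 0].mean(), 3))
```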
We present a comparison of the physical properties of the ionized gas in the circumgalactic (CGM) and intergalactic (IGM) media at $z\sim0$ between observations and four cosmological hydrodynamical simulations: Illustris, TNG300 of the IllustrisTNG project, EAGLE, and one of the Magneticum simulations. For the observational data, we use the gas properties that are inferred from cross-correlating the Sunyaev-Zel'dovich effect (SZE) from the {\it Planck} CMB maps with the haloes and the large-scale structure reconstructed from Sloan Digital Sky Survey data. Both the observational and simulation results indicate that the integrated gas pressure in haloes deviates from the self-similar case, showing that feedback impacts haloes with $M_{500}\sim 10^{12-13}\,{\rm M_\odot}$. The simulations predict that more than half the baryons are displaced from haloes, while the gas fraction inferred from our observational data roughly equals the cosmic baryon fraction throughout the $M_{500}\sim 10^{12-14.5}\,{\rm M_\odot}$ halo mass range. All simulations tested here predict that the mean gas temperature in haloes is about the virial temperature, while that inferred from the SZE is up to one order of magnitude lower than that from the simulations (and also from X-ray observations). While a remarkable agreement is found for the average properties of the IGM between the observation and some simulations, we show that their dependence on the large-scale tidal field can break the degeneracy between models that show similar predictions otherwise. Finally, we show that the gas pressure and the electron density profiles from simulations are not well described by a generalized NFW (GNFW) profile. Instead, we present a new model with a mass-dependent shape that fits the profiles accurately.
|
astrophysics
|
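For reference, the generalized NFW (GNFW) pressure profile that the abstract reports as a poor fit is conventionally written (following Nagai et al. 2007) as

$$P(x) = \frac{P_0}{(c_{500}\,x)^{\gamma}\,\left[1+(c_{500}\,x)^{\alpha}\right]^{(\beta-\gamma)/\alpha}}, \qquad x = r/r_{500},$$

with fixed inner, intermediate, and outer slopes $\gamma$, $\alpha$, $\beta$; the paper's proposed alternative instead lets the profile shape depend on halo mass.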
Synchrosqueezing transform (SST) is a useful tool for vibration signal analysis due to its high time-frequency (TF) concentration and reconstruction properties. However, existing SST requires much processing time for large-scale data. In this paper, fast implementation methods of SST based on the downsampled short-time Fourier transform (STFT) are proposed. By controlling the downsampling factor both in time and frequency, combined with the proposed selective reassignment and frequency subdivision scheme, one can keep a balance between efficiency and accuracy according to practical needs. Moreover, the reconstruction property remains available, accomplished by an approximate but direct inverse formula under downsampling. The effects of the parameters on concentration, computing efficiency, and reconstruction accuracy are also investigated quantitatively, followed by a mathematical model of the reassignment behavior as a function of the decimation factors. Experimental results on an aero-engine and a spindle show that the fast implementation of SST can effectively characterize the non-stationary characteristics of large-scale vibration signals to reveal the mechanism of mechanical systems.
|
electrical engineering and systems science
|
Electronic transport in Weyl semimetals is quite extraordinary due to the topological property of the chiral anomaly generating the charge pumping between two distant Weyl nodes with opposite chiralities under parallel electric and magnetic fields. Here, we develop a full nonequilibrium quantum transport theory of the chiral anomaly, based on the fact that the chiral charge pumping is essentially nothing but the Bloch oscillation. Specifically, by using the Keldysh nonequilibrium Green function method, it is shown that there is a rich structure in the chiral anomaly transport, including the negative magnetoresistance, the non-Ohmic behavior, the Esaki-Tsu peak, and finally the resonant oscillation of the DC electric current as a function of electric field, called the electric quantum oscillation. We argue that, going beyond the usual behavior of linear response, the non-Ohmic behavior observed in BiSb alloys can be regarded as a precursor to the occurrence of electric quantum oscillation, which is both topologically and energetically protected in Weyl semimetals.
|
condensed matter
|
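For orientation, the Esaki-Tsu peak invoked above is the textbook current-field characteristic of Bloch oscillations in a periodic potential of period $a$ with scattering time $\tau$:

$$j(E) \propto \frac{\omega_B \tau}{1+(\omega_B \tau)^2}, \qquad \omega_B = \frac{eEa}{\hbar},$$

which is Ohmic for $\omega_B\tau \ll 1$, peaks at $\omega_B\tau = 1$, and decreases beyond it; the abstract's point is that chiral-anomaly transport in Weyl semimetals inherits this structure, up to the resonant electric-quantum-oscillation regime at high fields.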
The technique of dispersive gate sensing (DGS) uses a single electrode to read out a qubit by detecting the change in quantum capacitance due to single-electron tunnelling. Here, we extend DGS from the detection of discrete tunnel events to the open regime, where many electrons are transported via partially- or fully-transmitting quantum modes. Comparing DGS with conventional transport shows that the technique can resolve the Van Hove singularities of a one-dimensional ballistic system, and also probe aspects of the potential landscape that are not easily accessed with dc transport. Beyond readout, these results suggest that gate sensing can also be of use in tuning up qubits or probing the charge configuration of open quantum devices in the regime where electrons are delocalized.
|
condensed matter
|
In this work we present a novel computational method for embedding arbitrary curved one-dimensional (1D) fibers into three-dimensional (3D) solid volumes, as found, e.g., in fiber-reinforced materials. The fibers are explicitly modeled with highly efficient 1D geometrically exact beam finite elements, based on various types of geometrically nonlinear beam theories. The surrounding solid volume is modeled with 3D continuum (solid) elements. An embedded mortar-type approach is employed to enforce the kinematic coupling constraints between the beam elements and solid elements on non-matching meshes. This allows for very flexible mesh generation and simple material modeling procedures in the solid, since it can be discretized without having to resolve the reinforcements, while still being able to account for complex nonlinear effects due to the embedded fibers. Several numerical examples demonstrate the consistency, robustness and accuracy of the proposed method, as well as its applicability to rather complex fiber-reinforced structures of practical relevance.
|
computer science
|
I argue that the ten-dimensional non-supersymmetric tachyonic superstrings may serve as good starting points for the construction of viable phenomenological vacua, thus enlarging the space of possible solutions that may address some of the outstanding problems in string phenomenology. A tachyon-free six-generation Standard-like Model is presented, which can be regarded as an orbifold of the $SO(16)\times E_8$ heterotic string in ten dimensions. I propose that any $(2,0)$ heterotic string in four dimensions can be connected to a $(2,2)$ one via an orbifold or by interpolations, and provide some evidence for this conjecture. It suggests that any Effective Field Theory (EFT) model that cannot be connected to a $(2,2)$ theory is necessarily in the swampland, and will simplify the analysis of the moduli spaces of $(2,0)$ string compactifications.
|
high energy physics theory
|
Manipulation of micro- and nanoscale particles suspended in a fluidic medium is one of the defining goals of modern nanotechnology. Speckle tweezers (ST), which incorporate randomly distributed light fields, have been used to impose detectable limits on the Brownian motion of micro-particles with refractive indices higher than that of their medium. Indeed, compared to periodic potentials, ST offer wider possibilities for such tasks. Here, we extend the usefulness of ST to low-index micro-particles. Repulsion of such particles by high-intensity regions into lower-intensity regions causes them to be locally confined, and the confinement can be tuned by changing the average grain intensity and size of the speckle patterns. Experiments on polystyrenes and liposomes validate the procedure. Moreover, we show that ST can be applied to nano-particle (NP)-loaded liposomes. Interestingly, the different interactions of NP-loaded and empty liposomes with ST enable collective manipulation of their mixture using the same speckle pattern, which may be explained by the inclusion of the photophoretic forces on NPs in NP-loaded liposomes. Our results on the different behavior of empty and non-empty vesicles may open a new window on controlling the collective transportation of drug micro-containers, with wide applications in soft matter.
|
condensed matter
|
Row-sparse principal component analysis (rsPCA), also known as principal component analysis (PCA) with global support, is the problem of finding the top-$r$ leading principal components such that all these principal components are linear combinations of a subset of $k$ variables. rsPCA is a popular dimension reduction tool in statistics that enhances interpretability compared to regular principal component analysis (PCA). Popular methods for solving rsPCA in the literature are either greedy heuristics (in the special case of $r = 1$) where guarantees on the quality of the solution found can be verified under restrictive statistical models, or algorithms with stationary-point convergence guarantees for some regularized reformulation of rsPCA. There are no known good heuristics when $r >1$, and more importantly none of the existing computational methods can efficiently verify the quality of the solutions via comparing objective values of feasible solutions with dual bounds, especially in a statistical-model-free setting. We propose: (a) a convex integer programming relaxation of rsPCA that gives upper (dual) bounds for rsPCA, and; (b) a new local search algorithm for finding primal feasible solutions for rsPCA in the general case where $r >1$. We also show that, in the worst case, the dual bounds provided by the convex IP are within an affine function of the global optimal value. Numerical results are reported to demonstrate the advantages of our method.
|
mathematics
|
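A minimal sketch of the kind of local search described in the rsPCA abstract above: choose a support of $k$ variables, score it by the sum of the top-$r$ eigenvalues of the corresponding principal submatrix of the covariance, and keep swapping variables while the objective improves. This is an illustration under assumed details (greedy swap rule, largest-variance initialization), not the authors' algorithm.

```python
import numpy as np

def rspca_local_search(cov, k, r, max_iter=100):
    """Greedy local search for row-sparse PCA: find a support S (|S| = k)
    maximizing the sum of the top-r eigenvalues of cov[S, S]."""
    n = cov.shape[0]

    def score(S):
        w = np.linalg.eigvalsh(cov[np.ix_(S, S)])  # eigenvalues, ascending
        return w[-r:].sum()

    S = list(np.argsort(-np.diag(cov))[:k])  # init: k largest variances
    best = score(S)
    for _ in range(max_iter):
        improved = False
        for i in list(S):
            for j in set(range(n)) - set(S):
                T = [j if s == i else s for s in S]
                val = score(T)
                if val > best + 1e-12:  # accept strictly improving swaps
                    S, best, improved = T, val, True
        if not improved:
            break
    return sorted(S), best
```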
We develop a systematic approach for constructing symmetry-based indicators of a topological classification for superconducting systems. The topological invariants constructed in this work form a complete set of symmetry-based indicators that can be computed from knowledge of the Bogoliubov-de Gennes Hamiltonian on high-symmetry points in the Brillouin zone. After excluding topological invariants corresponding to the phases without boundary signatures, we arrive at a natural generalization of symmetry-based indicators [H. C. Po, A. Vishwanath, and H. Watanabe, Nature Comm. 8, 50 (2017)] to Hamiltonians of Bogoliubov-de Gennes type.
|
condensed matter
|
Studying the morphology of a large sample of active galaxies at different wavelengths and comparing it with active galactic nuclei (AGN) properties, such as black hole mass ($M_{BH}$) and Eddington ratio ($\lambda_{Edd}$), can help us better understand the connection between AGN and their host galaxies and the role of nuclear activity in galaxy formation and evolution. By using the BAT-SWIFT hard X-ray public data and by extracting those parameters measured for AGN, and by using other public catalogues for parameters such as stellar mass ($M_*$), star formation rate (SFR), bolometric luminosity ($L_{bol}$), etc., we studied the multiwavelength morphological properties of host galaxies of ultra-hard X-ray detected AGN and their correlation with other AGN properties. We found that ultra-hard X-ray detected AGN can be hosted by all morphological types, but in the largest fraction (42%) they seem to be hosted by spirals in the optical, to be radio-quiet, and to have compact morphologies in X-rays. When comparing morphologies with other galaxy properties, we found that ultra-hard X-ray detected AGN follow previously obtained relations. On the SFR vs. stellar mass diagram, we found that although the majority of sources are located below the main sequence (MS) of star formation (SF), a non-negligible number of sources, with diverse morphologies, is still located on and/or above the MS, suggesting that AGN feedback might have a more complex influence on the SF in galaxies than simply quenching it, as suggested in some previous studies.
|
astrophysics
|
Motivated by multiphase flow in reservoirs, we propose and study a two-species sandpile model in two dimensions. A pile of particles becomes unstable and topples if at least one of the following two conditions is fulfilled: 1) the number of particles of one species in the pile exceeds a given threshold, or 2) the total number of particles in the pile exceeds a second threshold. The latter mechanism leads to the invasion of one species through regions dominated by the other species. We studied numerically the statistics of the avalanches and identified two different regimes. For large avalanches the statistics is consistent with the ordinary Bak-Tang-Wiesenfeld model, whereas for small avalanches we find a regime with different exponents. In particular, the fractal dimension of the external perimeter of avalanches is $D_f=1.47\pm 0.02$ and the exponent of their size distribution is $\tau_s=0.95\pm 0.03$, which are significantly different from $D_f=1.25\pm 0.01$ and $\tau_s=1.26\pm 0.04$, observed for large avalanches.
|
condensed matter
|
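To make the two toppling conditions above concrete, here is a small simulation sketch. The thresholds mirror the abstract's two rules; the redistribution rule (an unstable site sends one particle to each of its four neighbours, drawn from its locally more abundant species, with open boundaries) is a hypothetical choice, since the abstract does not specify one.

```python
import numpy as np

def topple(a, b, k1=4, k2=6):
    """Relax a two-species sandpile. A site is unstable if one species
    reaches k1 or the total reaches k2. Each unstable site sends one
    particle to each of its 4 neighbours, taken from its more abundant
    species; grains falling off the open boundary are lost.
    Returns the avalanche size (number of topplings)."""
    L = a.shape[0]
    size = 0
    while True:
        unstable = np.argwhere((a >= k1) | (b >= k1) | (a + b >= k2))
        if len(unstable) == 0:
            return size
        for x, y in unstable:
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                src = a if a[x, y] >= b[x, y] else b  # majority species
                src[x, y] -= 1
                if 0 <= x + dx < L and 0 <= y + dy < L:
                    src[x + dx, y + dy] += 1
```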
The two main research threads in computer-based music generation are: the construction of autonomous music-making systems, and the design of computer-based environments to assist musicians. In the symbolic domain, the key problem of automatically arranging a piece of music was extensively studied, while relatively fewer systems tackled this challenge in the audio domain. In this contribution, we propose CycleDRUMS, a novel method for generating drums given a bass line. After converting the waveform of the bass into a mel-spectrogram, we are able to automatically generate original drums that follow the beat, sound credible and can be directly mixed with the input bass. We formulated this task as an unpaired image-to-image translation problem, and we addressed it with CycleGAN, a well-established unsupervised style transfer framework, originally designed for treating images. The choice to deploy raw audio and mel-spectrograms enabled us to better represent how humans perceive music, and to potentially draw sounds for new arrangements from the vast collection of music recordings accumulated in the last century. In the absence of an objective way of evaluating the output of both generative adversarial networks and music generative systems, we further defined a possible metric for the proposed task, partially based on human (and expert) judgement. Finally, as a comparison, we replicated our results with Pix2Pix, a paired image-to-image translation network, and we showed that our approach outperforms it.
|
electrical engineering and systems science
|
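The bass-to-mel-spectrogram preprocessing described above is a standard step and can be reproduced with librosa; the parameter values below (sample rate, FFT size, number of mel bands) are placeholders, not necessarily those used by CycleDRUMS.

```python
import librosa
import numpy as np

def bass_to_mel(path, sr=22050, n_fft=2048, hop_length=512, n_mels=128):
    """Load a bass track and convert it to a log-scaled mel-spectrogram,
    the image-like input a CycleGAN-style model can translate to drums."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)  # shape: (n_mels, frames)
```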
We derive the entanglement entropy of chiral fermions on the circle at arbitrary temperature. The spin-sector contribution depends only on the total length of the entangling region, regardless of the configuration of the intervals. Thus three-partite information provides a global indicator for the spin boundary conditions. Together with the modular Hamiltonian, our results provide a systematic way of obtaining relative entropy on the torus.
|
high energy physics theory
|
We report the serendipitous detection of two 3 mm continuum sources found in deep ALMA Band 3 observations to study intermediate redshift galaxies in the COSMOS field. One is near a foreground galaxy at 1.3", but is a previously unknown dust-obscured star-forming galaxy (DSFG) at probable $z_{CO}=3.329$, illustrating the risk of misidentifying shorter wavelength counterparts. The optical-to-mm spectral energy distribution (SED) favors a grey $\lambda^{-0.4}$ attenuation curve and results in significantly larger stellar mass and SFR compared to a Calzetti starburst law, suggesting caution when relating progenitors and descendants based on these quantities. The other source is missing from all previous optical/near-infrared/sub-mm/radio catalogs ("ALMA-only"), and remains undetected even in stacked ultradeep optical ($>29.6$ AB) and near-infrared ($>27.9$ AB) images. Using the ALMA position as a prior reveals faint $SNR\sim3$ measurements in stacked IRAC 3.6+4.5, ultradeep SCUBA2 850$\mu$m, and VLA 3GHz, indicating the source is real. The SED is robustly reproduced by a massive $M^*=10^{10.8}$M$_\odot$ and $M_{gas}=10^{11}$M$_\odot$, highly obscured $A_V\sim4$, star forming $SFR\sim300$ M$_{\odot}$yr$^{-1}$ galaxy at redshift $z=5.5\pm$1.1. The ultrasmall 8 arcmin$^{2}$ survey area implies a large yet uncertain contribution to the cosmic star formation rate density CSFRD(z=5) $\sim0.9\times10^{-2}$ M$_{\odot}$ yr$^{-1}$ Mpc$^{-3}$, comparable to all ultraviolet-selected galaxies combined. These results indicate the existence of a prominent population of DSFGs at $z>4$, below the typical detection limit of bright galaxies found in single-dish sub-mm surveys, but with larger space densities $\sim3 \times 10^{-5}$ Mpc$^{-3}$, higher duty cycles $50-100\%$, contributing more to the CSFRD, and potentially dominating the high-mass galaxy stellar mass function.
|
astrophysics
|
We incorporate the next-to-leading order (NLO) and the next-to-next-to-leading order (NNLO) effects in the models of the singlet structure function $F_2^S(x,t)$ and the gluon distribution $G(x,t)$ using DGLAP equations approximated at small $x$. Analytical solutions at both NLO and NNLO are obtained. We then make comparisons with exact results.
|
high energy physics phenomenology
|
In recent years information on the transversity distribution $h_1$ has been obtained by combining the Collins asymmetry results from semi-inclusive deep inelastic scattering (SIDIS) data on transversely polarized nucleon targets with the information on the fragmentation function of a transversely polarized quark from the asymmetries measured in $e^+e^-$ annihilation into hadrons. An alternative method was proposed a long time ago, which does not require the $e^+e^-$ data, but allows one to get ratios of the $u$ and $d$ quark transversity distributions from the SIDIS data alone. The method utilizes the ratio of the difference of the Collins asymmetries of positively and negatively charged hadrons produced on transversely polarized proton and deuteron targets. We have applied this method to the COMPASS proton and deuteron data, and extracted the ratio $h_1^d/h_1^u$. The results are compared to those obtained in a previous point--by--point extraction based both on SIDIS and $e^+e^-$ data.
|
high energy physics phenomenology
|
Searching for new methods to enhance the stability of an antiferromagnetic (AFM) skyrmion during its motion is an important issue for AFM spintronic devices. Herein, the dynamics of a distorted AFM skyrmion induced by a spin-polarized current is numerically studied, based on Landau-Lifshitz-Gilbert simulations of a model with an anisotropic Dzyaloshinskii-Moriya (DM) interaction. It is demonstrated that the DM interaction anisotropy induces the skyrmion deformation, which suppresses the distortion during the motion and enhances the stability of the skyrmion. Moreover, the effect of the DM interaction anisotropy on the skyrmion velocity is investigated in detail, and the simulated results are further explained by Thiele theory. This work unveils a promising strategy to enhance the stability and the maximum velocity of AFM skyrmions, benefiting future spintronic applications.
|
physics
|
We consider the decay $B\to\ell\ell\ell^{\prime}\nu$, taking into account the leading $1/m_b$ and $q^2$ corrections calculated in the QCD factorization framework as well as the soft corrections calculated employing dispersion relations and quark-hadron duality. We extend the existing results for the radiative decay $B\to\gamma\ell\nu$ to the case of non-zero (but small) $q^2$, the invariant mass squared of the dilepton pair $\ell^+\ell^-$. This restricts us to the case $\ell\neq\ell'$ as otherwise the same sign $\ell$ and $\ell'$ cannot be distinguished. We further study the sensitivity of the results to the leading moment of the $B$-meson distribution amplitude and discuss the potential to extract this quantity at LHCb and the Belle II experiment.
|
high energy physics phenomenology
|
We present the analytic computation of the master integrals associated to certain two-loop non-planar topologies, which are needed to complete the evaluation of the last two color coefficients for top-pair production in the quark-annihilation channel, the only ones not yet known analytically. The master integrals have been computed exploiting the differential equations method in canonical form. The solution is given as a series expansion in the dimensional regularization parameter through to weight four; the expansion coefficients are given in terms of multiple polylogarithms.
|
high energy physics phenomenology
|
We propose an alpha-fair routing and spectrum allocation (RSA) framework for reconfigurable elastic optical networks under modeled tidal traffic, based on the maximization of the social welfare function parameterized by a scalar alpha (the inequality aversion parameter). The objective is to approximate an egalitarian spectrum allocation (SA) that maximizes the minimum possible SA over all connections contending for the network resources, shifting from the widely used utilitarian SA that merely maximizes the network efficiency. A set of existing metrics are examined (i.e., connection blocking, resource utilization, coefficient of variation (CV) of utilities), and a set of new measures are also introduced (i.e., improvement on connection over- (COP) and under-provisioning (CUP), CV of unserved traffic), allowing a network operator to derive and evaluate in advance a set of alpha-fair RSA solutions and select the one that best fits the performance requirements of both the individual connections and the overall network. We show that an egalitarian SA better utilizes the network resources by significantly improving both COP (up to 20%) and CUP (up to 80%), compared to the utilitarian allocation, while attaining zero blocking. Importantly, the CVs of utilities and unserved traffic indicate that an SA that is fairest with respect to the amount of utilities allocated to the connections does not imply that the SA is also fairest with respect to the achievable QoS of the connections, while an egalitarian SA better approximates a fairest QoS-based SA.
|
computer science
|
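For reference, the alpha-fair social welfare function maximized above is the standard one: $W_\alpha(x) = \sum_i x_i^{1-\alpha}/(1-\alpha)$ for $\alpha \neq 1$ and $\sum_i \log x_i$ for $\alpha = 1$, so $\alpha = 0$ recovers the utilitarian objective and $\alpha \to \infty$ approaches the egalitarian max-min allocation. A minimal evaluation routine:

```python
import numpy as np

def alpha_fair_welfare(x, alpha):
    """Alpha-fair social welfare of an allocation vector x (entries > 0).
    alpha=0: utilitarian sum; alpha=1: proportional fairness (log sum);
    alpha -> inf: approaches the egalitarian max-min allocation."""
    x = np.asarray(x, dtype=float)
    if np.isclose(alpha, 1.0):
        return np.log(x).sum()
    return (x ** (1.0 - alpha)).sum() / (1.0 - alpha)
```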
Small-angle scattering of x-rays and neutrons is a routine method for the determination of nanoparticle sizes. The so-called Guinier law represents the low-q approximation for the small-angle scattering curve from an assembly of particles. The Guinier law was originally derived for nonmagnetic particle-matrix-type systems, and it is successfully employed for the estimation of particle sizes in various scientific domains (e.g., soft matter physics, biology, colloidal chemistry, materials science). An important prerequisite for it to apply is the presence of a discontinuous interface separating particles and matrix. Here, we introduce the Guinier law for the case of magnetic small-angle neutron scattering (SANS) and experimentally demonstrate its applicability for the example of nanocrystalline cobalt. It is well-known that the magnetic microstructure of nanocrystalline ferromagnets is highly nonuniform on the nanometer length scale and characterized by a spectrum of continuously varying long-wavelength magnetization fluctuations, i.e., these systems do not manifest sharp interfaces in their magnetization profile. The magnetic Guinier radius depends on the applied magnetic field, on the magnetic interactions (exchange, magnetostatics), and on the magnetic anisotropy-field radius, which characterizes the size over which the magnetic anisotropy field is coherently aligned into the same direction. In contrast to the nonmagnetic conventional Guinier law, the magnetic version can be applied to fully dense random-anisotropy-type ferromagnets.
|
condensed matter
|
The non-linear energy response of the plastic scintillator EJ-260 is measured with the MicroCHANDLER detector, using neutron beams of energy 5 to 27 MeV at the Triangle Universities Nuclear Laboratory. The first and second order Birks' constants are extracted from the data, and found to be $k_B = (8.70 \pm 0.93)\times 10^{-3}\ {\rm g/cm^2/MeV}$ and $k_C = (1.42 \pm 1.00) \times 10^{-5}\ {\rm (g/cm^2/MeV)^2}$. This result covers a unique energy range that is of direct relevance for fast neutron backgrounds in reactor inverse beta decay detectors. These measurements will improve the energy non-linearity modeling of plastic scintillator detectors. In particular, the updated energy response model will lead to an improvement of fast neutron modeling for detectors based on the CHANDLER reactor neutrino detector technology.
|
physics
|
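The two fitted constants above enter the extended (second-order) Birks' law for the scintillation light yield, $dL/dx = S\,(dE/dx) / \left(1 + k_B\,(dE/dx) + k_C\,(dE/dx)^2\right)$. A small sketch of the resulting quenching factor, using the EJ-260 values quoted in the abstract (the example stopping power is illustrative only):

```python
def birks_quenching(dEdx, kB=8.70e-3, kC=1.42e-5):
    """Quenching factor dL/dE from the extended (second-order) Birks' law.
    dEdx in MeV cm^2/g (mass stopping power); kB in g/cm^2/MeV and
    kC in (g/cm^2/MeV)^2 are the EJ-260 values from the abstract above."""
    return 1.0 / (1.0 + kB * dEdx + kC * dEdx**2)

# e.g. a recoil proton with dE/dx ~ 100 MeV cm^2/g yields about half
# the light a gamma of the same energy deposit would:
print(birks_quenching(100.0))  # ~0.50
```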
Bilinear time-frequency representations (TFRs) provide high-resolution time-varying frequency characteristics of nonstationary signals. However, they suffer from crossterms due to their bilinear nature. Existing crossterm-reduced TFRs focus on optimized kernel design, which amounts to low-pass weighting or masking in the ambiguity function domain. Optimization of fixed and adaptive kernels is difficult, particularly for complicated signals whose autoterms and crossterms overlap in the ambiguity function. In this letter, we develop a new method to offer high-resolution TFRs of nonstationary signals with crossterms effectively suppressed. The proposed method exploits a deep convolutional neural network which is trained to construct crossterm-free TFRs. The effectiveness of the proposed method is verified by simulation results which clearly show desirable autoterm preservation and crossterm mitigation capabilities. The proposed technique significantly outperforms state-of-the-art time-frequency analysis algorithms based on adaptive kernels and compressive sensing techniques.
|
electrical engineering and systems science
|
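For context, the bilinear structure responsible for the crossterms discussed above is easy to exhibit: the discrete Wigner-Ville distribution below multiplies the signal against its own conjugate at mirrored lags before Fourier transforming. This generic sketch is for illustration only and is unrelated to the network proposed in the letter.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete Wigner-Ville distribution W[f, t] of a real signal.
    The product z[t+tau] * conj(z[t-tau]) is the bilinear kernel that
    creates crossterms between separated signal components."""
    z = hilbert(x)          # analytic signal reduces aliasing artifacts
    N = len(z)
    W = np.zeros((N, N))
    for t in range(N):
        m = min(t, N - 1 - t)
        tau = np.arange(-m, m + 1)
        acf = np.zeros(N, dtype=complex)
        acf[tau % N] = z[t + tau] * np.conj(z[t - tau])
        W[:, t] = np.fft.fft(acf).real  # FFT over the lag variable
    return W
```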
Understanding collective human behavior and dynamics at the urban scale has drawn broad interest in physics, engineering, and social sciences. Social physics often adopts a statistical perspective and treats individuals as interactive elementary units, while the economics perspective sees individuals as strategic decision makers. Here we provide a microscopic mechanism of city-scale dynamics, interpret the collective outcome in a thermodynamic framework, and verify its various implications empirically. We capture the decisions of taxi drivers in a game-theoretic model, and prove the existence, uniqueness, and global asymptotic stability of the Nash equilibrium. We offer a macroscopic view of this equilibrium with laws of thermodynamics. With 870 million trips of over 50k drivers in New York City, we verify this equilibrium in space and time, estimate an empirical constitutive relation, and examine the learning process at individual and collective levels. Connecting the two perspectives, our work shows a promising approach to understand the collective behavior of subpopulations.
|
physics
|
We consider accelerated black hole horizons with and without defects. These horizons appear in the $C$-metric solution to Einstein equations and in its generalization to the case where external fields are present. These solutions realize a variety of physical processes, from the decay of a cosmic string by a black hole pair nucleation to the creation of a black hole pair by an external electromagnetic field. Here, we show that such geometries exhibit an infinite set of symmetries in their near horizon region, generalizing in this way previous results for smooth isolated horizons. By considering the limit close to both the black hole and the acceleration horizons, we show that a sensible set of asymptotic boundary conditions gets preserved by supertranslation and superrotation transformations. By acting on the geometry with such transformations, we derive the superrotated, supertranslated version of the $C$-metric and compute the associated conserved charges.
|
high energy physics theory
|
We study the thermodynamic performance of the finite-time non-regenerative Stirling cycle used as a quantum heat engine. We consider specifically the case in which the working substance (WS) is a two-level system. The Stirling cycle is made of two isochoric transformations separated by a compression and an expansion stroke, during which the working substance is in contact with a thermal reservoir. To describe these two strokes we derive a non-Markovian master equation which allows us to study the dynamics of a driven open quantum system with arbitrarily fast driving. We found that the finite-time dynamics and thermodynamics of the cycle depend non-trivially on the different time scales at play. In particular, driving the WS at a time scale comparable to the resonance time of the bath enhances the performance of the cycle and allows for an efficiency higher than that of the slow adiabatic cycle, but still below the Carnot bound. Interestingly, the performance of the cycle depends asymmetrically on the compression and expansion speeds. This suggests new freedom in optimizing quantum heat engines. We further show that the maximum output power and the maximum efficiency can be achieved almost simultaneously, although the net extractable work declines by speeding up the drive.
|
quantum physics
|
This is a chapter of the planned monograph "Out of Nowhere: The Emergence of Spacetime in Quantum Theories of Gravity", co-authored by Nick Huggett and Christian W\"uthrich and under contract with Oxford University Press. (More information at www.beyondspacetime.net.) This chapter sketches how spacetime emerges in causal set theory and demonstrates how this question is deeply entangled with genuinely philosophical concerns.
|
physics
|
In this paper, we study symmetry and existence of solutions of minimal gradient graph equations on punctured space $\mathbb R^n\setminus\{0\}$, which include the Monge-Amp\`ere equation, inverse harmonic Hessian equation and the special Lagrangian equation. This extends the classification results of Monge-Amp\`ere equations. Under some conditions, we also give the characterization of the solvability on exterior Dirichlet problem in terms of their asymptotic behaviors.
|
mathematics
|
We study a class of perturbative scalar and gravitational quantum field theories where dynamics is characterized by Lorentz-invariant or Lorentz-breaking non-local operators of fractional order and the underlying spacetime has a varying spectral dimension. These theories are either ghost free or power-counting renormalizable but they cannot be both at the same time. However, some of them are one-loop unitary and finite, and possibly unitary and finite at all orders. One of the theories is unitary and infrared-finite and can serve as a ghost-free model with large-scale modifications of general relativity.
|
high energy physics theory
|
A new conceptual foundation for the notion of "information" is proposed, based on the concept of a "distinction graph": a graph in which two nodes are connected iff they cannot be distinguished by a particular observer. The "graphtropy" of a distinction graph is defined as the average connection probability of two nodes; in the case where the distinction graph is composed of disconnected components that are fully connected subgraphs, this is equivalent to Ellerman's logical entropy, which has straightforward relationships to Shannon entropy. Probabilistic distinction graphs and probabilistic graphtropy are also considered, as well as connections between graphtropy and thermodynamic and quantum entropy. The semantics of the Second Law of Thermodynamics and the Maximum Entropy Production Principle are unfolded in a novel way, via analysis of the cognitive processes underlying the making of distinction graphs. This evokes an interpretation in which complex intelligence is seen to correspond to states of consciousness with intermediate graphtropy, which are associated with memory imperfections that violate the assumptions leading to derivation of the Second Law. In the case where nodes of a distinction graph are labeled by computable entities, graphtropy is shown to be monotonically related to the average algorithmic information of the nodes (relative to the algorithmic information of the observer). A quantum-mechanical version of distinction graphs is considered, in which distinctions can exist in a superposed state; this leads to graphtropy as a measure of the impurity of a mixed state, and to a concept of "quangraphtropy." Finally, a novel computational model called Dynamic Distinction Graphs (DDGs) is formulated, via enhancing distinction graphs with additional links expressing causal implications, enabling a distinction-based model of "observers."
|
computer science
|
We compute the three-loop scattering amplitude of four gravitons in ${\mathcal N}=8$ supergravity. Our results are analytic formulae for a Laurent expansion of the amplitude in the regulator of dimensional regularisation. The coefficients of this series are closed formulae in terms of well-established harmonic polylogarithms. Our results display a remarkable degree of simplicity and represent an important stepping stone in the exploration of the structure of scattering amplitudes. In particular, we observe that to this loop order the four graviton amplitude is given by uniform weight $2L$ functions, where $L$ is the loop order.
|
high energy physics theory
|
Generative Adversarial Networks (GANs) are a class of artificial neural network that can produce realistic, but artificial, images that resemble those in a training set. In typical GAN architectures these images are small, but a variant known as Spatial-GANs (SGANs) can generate arbitrarily large images, provided training images exhibit some level of periodicity. Deep extragalactic imaging surveys meet this criterion due to the cosmological tenet of isotropy. Here we train an SGAN to generate images resembling the iconic Hubble Space Telescope eXtreme Deep Field (XDF). We show that the properties of 'galaxies' in generated images have a high level of fidelity with galaxies in the real XDF in terms of abundance, morphology, magnitude distributions and colours. As a demonstration we have generated a 7.6-billion pixel 'generative deep field' spanning 1.45 degrees. The technique can be generalised to any appropriate imaging training set, offering a new purely data-driven approach for producing realistic mock surveys and synthetic data at scale, in astrophysics and beyond.
|
astrophysics
|
The Grassmannian is a disjoint union of open positroid varieties $P_v$, certain smooth irreducible subvarieties whose definition is motivated by total positivity. The coordinate ring of $P_v$ is a cluster algebra, and each reduced plabic graph $G$ for $P_v$ determines a cluster. We study the effect of relabeling the boundary vertices of $G$ by a permutation $r$. Under suitable hypotheses on the permutation, we show that the relabeled graph $G^r$ determines a cluster for a different open positroid variety $P_w$. As a key step of the proof, we show that $P_v$ and $P_w$ are isomorphic by a nontrivial twist isomorphism. Our constructions yield many cluster structures on each open positroid variety $P_w$, given by plabic graphs with appropriately relabeled boundary. We conjecture that the seeds in all of these cluster structures are related by a combination of mutations and Laurent monomial transformations involving frozen variables, and establish this conjecture for (open) Schubert and opposite Schubert varieties. As an application, we also show that for certain reduced plabic graphs $G$, the "source" cluster and the "target" cluster are related by mutation and Laurent monomial rescalings.
|
mathematics
|
The visible Universe is largely characterised by a single mass-scale; namely, the proton mass, $m_p$. Contemporary theory suggests that $m_p$ emerges as a consequence of gluon self-interactions, which are a defining characteristic of quantum chromodynamics (QCD), the theory of strong interactions in the Standard Model. However, the proton is not elementary. Its mass appears as a corollary of other, more basic emergent phenomena latent in the QCD Lagrangian, e.g. generation of nuclear-size gluon and quark mass-scales, and a unique effective charge that may describe QCD interactions at all accessible momentum scales. These remarks are explained herein; and focusing on the distribution amplitudes and functions of $\pi$ and $K$ mesons, promising paths for their empirical verification are elucidated. Connected therewith, in anticipation that production of $J/\psi$-mesons using $\pi$ and $K$ beams can provide access to the gluon distributions in these pseudo-Nambu-Goldstone modes, predictions for all $\pi$ and $K$ distribution functions are provided at the scale $\zeta=m_{J/\psi}$.
|
high energy physics phenomenology
|
Cranial implant design is a challenging task, whose accuracy is crucial in the context of cranioplasty procedures. This task is usually performed manually by experts using computer-assisted design software. In this work, we propose and evaluate alternative automatic deep learning models for cranial implant reconstruction from CT images. The models are trained and evaluated using the database released by the AutoImplant challenge, and compared to a baseline implemented by the organizers. We employ a simulated virtual craniectomy to train our models using complete skulls, and compare two different approaches trained with this procedure. The first one is a direct estimation method based on the UNet architecture. The second method incorporates shape priors to increase the robustness when dealing with out-of-distribution implant shapes. Our direct estimation method outperforms the baselines provided by the organizers, while the model with shape priors shows superior performance when dealing with out-of-distribution cases. Overall, our methods show promising results in the difficult task of cranial implant design.
|
electrical engineering and systems science
|
Purpose: Summarize ICU delirium prediction models published within the past five years. Methods: Electronic searches were conducted in April 2019 using PubMed, Embase, Cochrane Central, Web of Science, and CINAHL to identify peer reviewed studies published in English during the past five years that specifically addressed the development, validation, or recalibration of delirium prediction models in adult ICU populations. Data were extracted using CHARMS checklist elements for systematic reviews of prediction studies, including the following characteristics: study design, participant descriptions and recruitment methods, predicted outcomes, a priori candidate predictors, sample size, model development, model performance, study results, interpretation of those results, and whether the study included missing data. Results: Twenty studies featuring 26 distinct prediction models were included. Model performance varied greatly, as assessed by AUROC (0.68-0.94), specificity (56.5%-92.5%), and sensitivity (59%-90.9%). Most models used data collected from a single time point or window to predict the occurrence of delirium at any point during hospital or ICU admission, and lacked mechanisms for providing pragmatic, actionable predictions to clinicians. Conclusions: Although most ICU delirium prediction models have relatively good performance, they have limited applicability to clinical practice. Most models were static, making predictions based on data collected at a single time-point, failing to account for fluctuating conditions during ICU admission. Further research is needed to create clinically relevant dynamic delirium prediction models that can adapt to changes in individual patient physiology over time and deliver actionable predictions to clinicians.
|
statistics
|
We develop a deep convolutional neural network (CNN) to deal with the blurry artifacts caused by the defocus of the camera using dual-pixel images. Specifically, we develop a double attention network which consists of attentional encoders, triple local and global local modules to effectively extract useful information from each image in the dual-pixel pair and synthesize the final output image. We demonstrate the effectiveness of the proposed deblurring algorithm in terms of both qualitative and quantitative aspects by evaluating on the test set in the NTIRE 2021 Defocus Deblurring using Dual-pixel Images Challenge. The code and trained models are available at https://github.com/tuvovan/ATTSF.
|
electrical engineering and systems science
|
We consider the type 0A matrix model in the presence of a spacelike D-brane, localized at an arbitrary point in the matter direction. It appears that in order to have an appropriate string/MQM correspondence we must impose a constraint on the matrix model side which is equivalent to an operator constraint on the matter part of the boundary state that arises from the open string boundary condition. This condition constrains the Hilbert space generated by the macroscopic loop operator, but the bulk matrix model remains unaffected, thereby describing a situation parallel to string theory. We have analyzed the constrained theory with uncompactified as well as compactified time and have shown that it is in good agreement with string theory. Without this constraint, the grand canonical partition function for MQM on a circle in the presence of the brane diverges, while the constrained partition function is finite and corresponds to that of a deformed Fermi surface, as expected within the compactified theory. The matrix model path integral on a circle in the presence of the brane is expressed as a Fredholm determinant. We then consider the matrix model in the presence of the brane perturbed by momentum modes. We obtain the expression of the MQM wave function in this background from a collective field theory analysis. We have shown with the help of this constraint that the grand canonical partition function has an integrable structure and can be expressed as a tau function of the Toda hierarchy. Finally, we analyze the dispersionless limit.
|
high energy physics theory
|
We show that exponential sums (ES) of the form \begin{equation*} S(f, N)= \sum_{k=0}^{N-1} \sqrt{w_k} e^{2 \pi i f(k)}, \end{equation*} can be efficiently carried out with a quantum computer (QC). Here $N$ can be exponentially large, $w_k$ are real numbers such that the sum $S_w(M)=\sum_{k=0}^{M-1} w_k$ can be calculated in a closed form for any $M$, $S_w(N)=1$, and $f(x)$ is a real function that is assumed to be easily implementable on a QC. As an application of the technique, we show that the Riemann zeta (RZ) function, $\zeta(\sigma+ i t)$ in the critical strip, $\{0 \le \sigma <1, t \in \mathbb{R} \}$, can be obtained in polyLog(t) time. In another setting, we show that the RZ function can be obtained with a scaling $t^{1/D}$, where $D \ge 2$ is any integer. These methods provide a vast improvement over the best known classical algorithms, the best of which is known to scale as $t^{4/13}$. We present alternative methods to find $\lvert S(f,N) \rvert$ on a QC directly. This method relies on finding the magnitude $A=\lvert \sum_0^{N-1} a_k \rvert$ of an $n$-qubit quantum state with $a_k$ as amplitudes in the computational basis. We present two different ways to obtain $A$. Finally, a brief discussion of phase/amplitude estimation methods is presented.
|
quantum physics
|
Quantum computing technologies pose a significant threat to the currently employed public-key cryptography protocols. In this paper, we discuss the impact of the quantum threat on public key infrastructures (PKIs), which are used as a part of security systems for protecting production environments. We analyze security issues of existing models with a focus on requirements for a fast transition to post-quantum solutions. Although our primary focus is on the attacks with quantum computing, we also discuss some security issues that are not directly related to the used cryptographic algorithms but are essential for the overall security of the PKI. We attempt to provide a set of security recommendations regarding the PKI from the viewpoints of attacks with quantum computers.
|
quantum physics
|
The conjecture of Brown, Erd\H{o}s and S\'os from 1973 states that, for any $k \ge 3$, if a $3$-uniform hypergraph $H$ with $n$ vertices does not contain a set of $k+3$ vertices spanning at least $k$ edges then it has $o(n^2)$ edges. The case $k=3$ of this conjecture is the celebrated $(6,3)$-theorem of Ruzsa and Szemer\'edi which implies Roth's theorem on $3$-term arithmetic progressions in dense sets of integers. Solymosi observed that, in order to prove the conjecture, one can assume that $H$ consists of triples $(a, b, ab)$ of some finite quasigroup $\Gamma$. Since this problem remains open for all $k \geq 4$, he further proposed to study triple systems coming from finite groups. In this case he proved that the conjecture holds also for $k = 4$. Here we completely resolve the Brown-Erd\H{o}s-S\'os conjecture for all finite groups and values of $k$. Moreover, we prove that the hypergraphs coming from groups contain sets of size $\Theta(\sqrt{k})$ which span $k$ edges. This is best possible and goes far beyond the conjecture.
|
mathematics
|
Observational healthcare data offer the potential to estimate causal effects of medical products on a large scale. However, the confidence intervals and p-values produced by observational studies only account for random error and fail to account for systematic error. As a consequence, operating characteristics such as confidence interval coverage and Type I error rates often deviate sharply from their nominal values and render interpretation impossible. While there is longstanding awareness of systematic error in observational studies, analytic approaches to empirically account for systematic error are relatively new. Several authors have proposed approaches using negative controls (also known as "falsification hypotheses") and positive controls. The basic idea is to adjust confidence intervals and p-values in light of the bias (if any) detected in the analyses of the negative and positive control. In this work, we propose a Bayesian statistical procedure for posterior interval calibration that uses negative and positive controls. We show that the posterior interval calibration procedure restores nominal characteristics, such as 95% coverage of the true effect size by the 95% posterior interval.
|
statistics
|
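The mechanics of negative-control calibration can be illustrated with a simple empirical (non-Bayesian) shortcut: estimate the bias and excess variance of the negative-control estimates, whose true log effect is zero, and fold them into a new estimate's interval. The paper's posterior-interval procedure is more sophisticated; this method-of-moments sketch only shows the idea.

```python
import numpy as np
from scipy import stats

def calibrate_interval(beta, se, nc_betas, nc_ses, alpha=0.05):
    """Widen and shift a log-effect estimate using negative controls.
    nc_betas: estimates for negative controls (true log effect = 0);
    their spread beyond their standard errors estimates systematic error."""
    mu = np.mean(nc_betas)                               # average bias
    extra = max(np.var(nc_betas) - np.mean(np.square(nc_ses)), 0.0)
    total_se = np.sqrt(se**2 + extra)                    # random + systematic
    z = stats.norm.ppf(1 - alpha / 2)
    b = beta - mu                                        # bias-corrected
    return b - z * total_se, b + z * total_se
```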
Although pre-trained contextualized language models such as BERT achieve significant performance on various downstream tasks, current language representation still only focuses on a linguistic objective at a specific granularity, which may not be applicable when multiple levels of linguistic units are involved at the same time. Thus this work introduces and explores universal representation learning, i.e., embeddings of different levels of linguistic unit in a uniform vector space. We present a universal representation model, BURT (BERT-inspired Universal Representation from learning meaningful segmenT), to encode different levels of linguistic unit into the same vector space. Specifically, we extract and mask meaningful segments based on point-wise mutual information (PMI) to incorporate different granular objectives into the pre-training stage. We conduct experiments on datasets for English and Chinese including the GLUE and CLUE benchmarks, where our model surpasses its baselines and alternatives on a wide range of downstream tasks. We present our approach of constructing analogy datasets in terms of words, phrases and sentences and experiment with multiple representation models to examine geometric properties of the learned vector space through a task-independent evaluation. Finally, we verify the effectiveness of our unified pre-training strategy in two real-world text matching scenarios. As a result, our model significantly outperforms existing information retrieval (IR) methods and yields universal representations that can be directly applied to retrieval-based question-answering and natural language generation tasks.
|
computer science
|
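The PMI-based segment extraction mentioned above uses the standard pointwise mutual information score $\mathrm{PMI}(a,b) = \log\left[p(a,b)/(p(a)\,p(b))\right]$: adjacent tokens with high PMI are treated as a meaningful segment to be masked together. A minimal bigram version (the merge threshold is an arbitrary placeholder, and BURT's actual segmenter may differ):

```python
from collections import Counter
from math import log

def pmi_segments(tokens, threshold=3.0):
    """Merge adjacent string tokens whose PMI exceeds a threshold."""
    uni = Counter(tokens)
    bi = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)

    def pmi(a, b):
        p_ab = bi[(a, b)] / (n - 1)
        p_a, p_b = uni[a] / n, uni[b] / n
        return log(p_ab / (p_a * p_b) + 1e-12)  # -inf guard for unseen pairs

    segments, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and pmi(tokens[i], tokens[i + 1]) > threshold:
            segments.append(tokens[i] + " " + tokens[i + 1])
            i += 2
        else:
            segments.append(tokens[i])
            i += 1
    return segments
```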
We study tree lengths in $\Lambda$-coalescents without a dust component from a sample of $n$ individuals. For the total length of all branches and the total length of all external branches we present laws of large numbers in full generality. The other results treat regularly varying coalescents with exponent 1, which cover the Bolthausen-Sznitman coalescent. The theorems contain laws of large numbers for the total length of all internal branches and of internal branches of order $a$ (i.e. branches carrying $a$ individuals out of the sample). These results transform immediately to sampling formulas in the infinite sites model. In particular, we obtain the asymptotic site frequency spectrum of the Bolthausen-Sznitman coalescent. The proofs rely on a new technique to obtain laws of large numbers for certain functionals of decreasing Markov chains.
|
mathematics
|
Water slip at solid surfaces is important for a wide range of micro/nano-fluidic applications. While it is known that water slip behavior depends on surface functionalization, how it impacts the molecular-level dynamics and mass transport at the interface is still not thoroughly understood. In this paper, we use nonequilibrium molecular dynamics simulations to investigate the slip behavior of water confined between gold surfaces functionalized by self-assembled monolayer (SAM) molecules with different polar functional groups. We observe a positive-to-negative slip transition from hydrophobic to hydrophilic SAM functionalizations, which is found related to the stronger interfacial interaction between water molecules and more hydrophilic SAM molecules. The stronger interaction increases the surface friction and local viscosity, making water slip more difficult. More hydrophilic functionalization also slows down the interfacial water relaxation and leads to more pronounced water trapping inside the SAM layer, which both impede water slip. The results from this work will provide useful insights into the understanding of water slip at functionalized surfaces and design guidelines for various applications.
|
physics
|
Group buying, as an emerging form of purchase in social e-commerce websites, such as Pinduoduo, has recently achieved great success. In this new business model, a user, the initiator, can launch a group and share products with their social networks, and when enough friends, the participants, join it, the deal is clinched. Group-buying recommendation for social e-commerce, which recommends an item list when users want to launch a group, plays an important role in the group success ratio and sales. However, designing a personalized recommendation model for group buying is an entirely new problem that is seldom explored. In this work, we take the first step to approach the problem of group-buying recommendation for social e-commerce and develop a GBGCN method (short for Group-Buying Graph Convolutional Network). Considering there are multiple types of behaviors (launch and join) and structured social network data, we first propose to construct directed heterogeneous graphs to represent the behavioral data and social networks. We then develop a graph convolutional network model with multi-view embedding propagation, which can extract the complicated high-order graph structure to learn the embeddings. Last, since a failed group-buying implies rich preferences of the initiator and participants, we design a double-pairwise loss function to distill such preference signals. We collect a real-world dataset of group buying and conduct experiments to evaluate the performance. Empirical results demonstrate that our proposed GBGCN can significantly outperform baseline methods by 2.69%-7.36%. The codes and the dataset are released at https://github.com/Sweetnow/group-buying-recommendation.
|
computer science
|
The realization of topological quantum phases of matter remains a key challenge to condensed matter physics and quantum information science. In this work, we demonstrate that progress in this direction can be made by combining concepts of tensor network theory with Majorana device technology. Considering the topological double semion string-net phase as an example, we exploit the fact that the representation of topological phases by tensor networks can be significantly simpler than their description by lattice Hamiltonians. The building blocks defining the tensor network are tailored to realization via simple units of capacitively coupled Majorana bound states. In the case under consideration, this yields a remarkably simple blueprint of a synthetic double semion string-net, and one may be optimistic that the required device technology will be available soon. Our results indicate that the implementation of tensor network structures via mesoscopic quantum devices opens up a powerful novel avenue toward the realization and quantum simulation of synthetic topological quantum matter.
|
condensed matter
|
For abelian groups $A, B$, $A$ is called $B$-small if the covariant functor $Hom(A,-)$ commutes with all direct sums $B^{(\kappa)}$, and $A$ is self-small provided it is $A$-small. The paper characterizes self-small products by applying newly developed closure properties of the classes of relatively small groups. As a consequence, self-small products of finitely generated abelian groups are described.
|
mathematics
|
In the envelope-function approximation, interband transitions produced by electric fields are neglected. However, electric fields may lead to a spatially local ($k$-independent) coupling of band (internal, pseudospin) degrees of freedom. Such a coupling exists between heavy-hole and light-hole (pseudo-)spin states for holes in III-V semiconductors, such as GaAs, or in group IV semiconductors (germanium, silicon, ...) with broken inversion symmetry. Here, we calculate the electric-dipole (pseudospin-electric) coupling for holes in GaAs from first principles. We find a transition dipole of $0.5$ debye, a significant fraction of that for the hydrogen-atom $1s\to2p$ transition. In addition, we derive the Dresselhaus spin-orbit coupling that is generated by this transition dipole for heavy holes in a triangular quantum well. A quantitative microscopic description of this pseudospin-electric coupling may be important for understanding the origin of spin splitting in quantum wells, spin coherence/relaxation ($T_2^*/T_1$) times, spin-electric coupling for cavity-QED, electric-dipole spin resonance, and spin non-conserving tunneling in double quantum dot systems.
|
condensed matter
|
We propose a linear model of opinion dynamics that builds on, and extends, the celebrated DeGroot model. Its primary innovation is that it concerns relative opinions, in that two opinion vectors that differ only by a multiplicative constant are considered equivalent. An agent's opinion is represented by a number in R, and evolves due to positive and negative impacts resulting from the other agents' opinions. As a consequence of the relative interpretation, the model exhibits nonlinear dynamics despite its DeGroot-like linear foundation. This means that, even when agents are "well-connected", the dynamics of the relative opinion model can reproduce phenomena such as polarization, consensus formation, and periodic behavior. An important additional advantage of remaining in a DeGroot-type framework is that it allows for extensive analytical investigation using elementary matrix algebra. A few specific themes are covered: (i) We demonstrate how stable patterns in opinion dynamics are identified which are hidden when only absolute opinions are considered, such as stable fragmentation of a population despite a continuous shift in absolute opinions. (ii) For the two-agent case, we provide an exhaustive description of the model's asymptotic (long-term, that is) behavior. (iii) We explore group dynamics, in particular providing a non-trivial condition under which a group's asymptotic behavior carries over to the entire population.
|
physics
|
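A concrete way to read the relative-opinion model above: apply a (possibly signed) DeGroot-style linear update, then renormalize so that opinion vectors differing by a multiplicative constant are identified. The unit-norm representative below is one convenient choice and an assumption on our part.

```python
import numpy as np

def relative_degroot(W, x0, steps=100):
    """Iterate x <- Wx, renormalizing each step so only the relative
    opinion (the direction of x) matters. W may contain negative
    entries (negative influence); x0 is the initial opinion vector."""
    x = np.asarray(x0, dtype=float)
    history = [x / np.linalg.norm(x)]
    for _ in range(steps):
        x = W @ history[-1]
        history.append(x / np.linalg.norm(x))
    return np.array(history)

# Mutual negative influence drives two agents to opposite relative
# opinions (polarization): the state approaches [1, -1]/sqrt(2).
W = np.array([[1.0, -0.8], [-0.8, 1.0]])
print(relative_degroot(W, [1.0, 0.5], steps=10)[-1])
```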
Precise control over the electronic and optical properties of defect centers in solid-state materials is necessary for their applications as quantum sensors, transducers, memories, and emitters. In this study, we demonstrate, from first principles, how to tune these properties via the formation of defect polaritons. Specifically, we investigate three defect types -- CH$_B$, C$_B$-C$_B$, and C$_B$-V$_N$ -- in monolayer hexagonal boron nitride (hBN). The lowest-lying electronic excitation of these systems is coupled to an optical cavity where we explore the strong light-matter coupling regime. For all defect systems, we show that the polaritonic splitting that shifts the absorption energy of the lower polariton is much higher than can be expected from a Jaynes-Cummings interaction. In addition, we find that the absorption intensity of the lower polariton increases by several orders of magnitude, suggesting a possible route toward overcoming phonon-limited single photon emission from defect centers. Finally, we find that initially localized electronic transition densities can become delocalized across the entire material under strong light-matter coupling. These findings are a result of an effective continuum of electronic transitions near the lowest-lying electronic transition for both pristine hBN and hBN with defect centers that dramatically enhances the strength of the light-matter interaction. We expect our findings to spur experimental investigations of strong light-matter coupling between defect centers and cavity photons for applications in quantum technologies.
|
quantum physics
|
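The Jaynes-Cummings benchmark invoked above is the single-excitation polariton doublet: with cavity frequency $\omega_c$, transition frequency $\omega_a$, detuning $\delta = \omega_c - \omega_a$ and coupling $g$, the polaritons sit at $E_\pm = \tfrac{1}{2}(\omega_c+\omega_a) \pm \sqrt{g^2 + \delta^2/4}$, so at resonance the lower polariton shifts down by exactly $g$. A one-liner to compute the doublet (the numbers are illustrative):

```python
import numpy as np

def jc_polaritons(omega_c, omega_a, g):
    """Single-excitation Jaynes-Cummings polariton energies (hbar = 1).
    Returns (lower, upper); at resonance the splitting is exactly 2g."""
    delta = omega_c - omega_a
    rabi = np.sqrt(g**2 + 0.25 * delta**2)
    mean = 0.5 * (omega_c + omega_a)
    return mean - rabi, mean + rabi

# Resonant cavity: a 50 meV coupling shifts the lower polariton by 50 meV.
print(jc_polaritons(2.0, 2.0, 0.05))  # (1.95, 2.05) eV
```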
The main goal of disease mapping is to estimate disease risk and identify high-risk areas. Such analyses are hampered by the limited geographical resolution of the available data. Typically the available data are counts per spatial unit and the common approach is the Besag--York--Molli{\'e} (BYM) model. When precise geocodes are available, it is more natural to use Log-Gaussian Cox processes (LGCPs). In a simulation study mimicking childhood leukaemia incidence using actual residential locations of all children in the canton of Z\"urich, Switzerland, we compare the ability of these models to recover risk surfaces and identify high-risk areas. We then apply both approaches to actual data on childhood leukaemia incidence in the canton of Z\"urich during 1985-2015. We found that LGCPs outperform BYM models in almost all scenarios considered. Our findings suggest that there are important gains to be made from the use of LGCPs in spatial epidemiology.
|
statistics
|
We analyze the holographic entanglement entropy in a soliton background with Wilson lines and derive a relation analogous to the first law of thermodynamics. The confinement/deconfinement phase transition occurs due to the competition of two minimal surfaces. The entropic c-function probes the confinement/deconfinement phase transition. It is sensitive to degrees of freedom (DOF) smaller than the size of a spatial circle. When the Wilson line becomes large, the entropic c-function becomes non-monotonic as a function of the size and does not satisfy the usual c-theorem. We analyze the entanglement entropy for a small subregion and the relation analogous to the first law of thermodynamics. For a small amount of Wilson lines, the excited amount of the entanglement entropy decreases relative to the ground state. This reflects the fact that confinement decreases the degrees of freedom. We finally discuss the second-order correction of the holographic entanglement entropy.
|
high energy physics theory
|
The Exact Foldy-Wouthuysen transformation (EFWT) method is generalized here. In principle, it is not possible to construct the EFWT for an arbitrary Hamiltonian. The transformation conditions are the same, but the involution operator has a new form. We take a specific example and explicitly construct the new involution operator that allows one to perform the transformation. Using this new technique, we treat the case of a Hamiltonian with 160 possible CPT-Lorentz breaking terms. The transformation is performed and a physical analysis of the equations of motion is presented.
|
high energy physics theory
|
We construct, by a procedure involving a dimensional reduction from a Chern-Simons theory with borders, an effective theory for a 1+1 dimensional superconductor. That system can be either in an ordinary phase or in a topological one, depending on the value of two phases, corresponding to complex order parameters. Finally, we argue that the original theory and its dimensionally reduced one can be related to the effective action for a quantum Dirac field in a slab geometry, coupled to a gauge field.
|
high energy physics theory
|
Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.
|
computer science
|
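The RoBERTa models and code referenced above were released publicly; as a usage illustration (the Hugging Face transformers API and checkpoint name are external to the abstract), loading the pretrained model takes a few lines:

```python
from transformers import AutoTokenizer, AutoModel

# Load the released RoBERTa-base checkpoint and embed a sentence.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

inputs = tokenizer("Hello world!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```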