Columns: text (string, 11 to 9.77k characters); label (string, 2 to 104 characters)
M dwarfs with masses 0.1 <= M/M_sol <= 0.3 are under increasing scrutiny because these fully convective stars pose interesting astrophysical questions regarding their magnetic activity and angular momentum history. They also afford the most accessible near-future opportunity to study the atmospheres of terrestrial planets. Because they are intrinsically low in luminosity, the identification of the nearest examples of these M dwarfs is essential for progress. We present the volume-complete, all-sky list of 512 M dwarfs with masses 0.1 <= M/M_sol <= 0.3 and with trigonometric distances placing them within 15 pc (parallax >= 66.67 mas) from which we have created a sample of 413 M dwarfs for spectroscopic study. We present the mass function for these 512 M dwarfs, which increases with decreasing stellar mass in linear mass space, but is flat in logarithmic mass space. As part of this sample, we present new VRI photometry for 17 targets, measured as a result of the RECONS group's long-term work at the CTIO/SMARTS 0.9m telescope. We also note the details of targets that are known to be members of multiple systems and find a preliminary multiplicity rate of 21 +/- 2% for the primary M dwarfs in our sample, when considering known stellar and brown dwarf companions at all separations from their primaries. We further find that 43 +/- 2% of all M dwarfs with masses 0.1 <= M/M_sol <= 0.3 are found in multiple systems with primary stars of all masses within 15 pc.
astrophysics
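Two of the numbers quoted in this abstract follow from one-line calculations; below is a minimal Python sketch using only the sample sizes stated above (the 15 pc horizon and the 413 spectroscopic primaries), with the standard binomial error formula as an assumed model for the quoted uncertainty.

```python
import math

# Volume limit: distance (pc) = 1000 / parallax (mas),
# so a 15 pc horizon corresponds to parallax >= 1000/15 mas.
parallax_cut_mas = 1000.0 / 15.0
print(f"parallax cutoff: {parallax_cut_mas:.2f} mas")  # 66.67 mas

# Binomial standard error on a multiplicity fraction,
# e.g. ~21% multiplicity among the 413 primaries in the sample.
n = 413
k = round(0.21 * n)
p = k / n
sigma = math.sqrt(p * (1 - p) / n)
print(f"multiplicity: {100*p:.0f} +/- {100*sigma:.0f} %")  # ~21 +/- 2 %
```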
Values of quaternionic modular forms are related to twisted central $L$-values via periods and a theorem of Waldspurger. In particular, certain twisted $L$-values must be non-vanishing for forms with no zeroes. Here we study, theoretically and computationally, zeroes of definite quaternionic modular forms of trivial weight. Local sign conditions force certain forms to have trivial zeroes, but we conjecture that almost all forms have no nontrivial zeroes. In particular, almost all forms with appropriate local signs should have no zeroes. We show these conjectures follow from a conjecture on the average number of Galois orbits, and give applications to (non)vanishing of $L$-values.
mathematics
To conduct Bayesian inference with large data sets, it is often convenient or necessary to distribute the data across multiple machines. We consider a likelihood function expressed as a product of terms, each associated with a subset of the data. Inspired by global variable consensus optimisation, we introduce an instrumental hierarchical model associating auxiliary statistical parameters with each term, which are conditionally independent given the top-level parameters. One of these top-level parameters controls the unconditional strength of association between the auxiliary parameters. This model leads to a distributed MCMC algorithm on an extended state space yielding approximations of posterior expectations. A trade-off between computational tractability and fidelity to the original model can be controlled by changing the association strength in the instrumental model. We further propose the use of an SMC sampler with a sequence of association strengths, allowing both the automatic determination of appropriate strengths and the application of a bias correction technique. In contrast to similar distributed Monte Carlo algorithms, this approach requires few distributional assumptions. The performance of the algorithms is illustrated with a number of simulated examples.
statistics
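As a concrete illustration of the instrumental hierarchical model described in this abstract, here is a toy sketch for the analytically convenient all-Gaussian case: each data shard j receives an auxiliary parameter z_j, conditionally independent given the top-level parameter theta, with the association strength lam controlling how tightly the z_j track theta. The shard layout, the value of lam, and the flat prior on theta are illustrative assumptions, not the paper's setup; a simple Gibbs sampler stands in for the paper's more general MCMC/SMC machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data split across J "machines" (shards).
J, n_per, s2 = 8, 250, 4.0
theta_true = 1.5
shards = [rng.normal(theta_true, np.sqrt(s2), n_per) for _ in range(J)]
ybar = np.array([s.mean() for s in shards])

lam = 0.05        # association strength: smaller -> closer to the exact model
n_iter = 5000
theta = 0.0
theta_trace = np.empty(n_iter)

for t in range(n_iter):
    # Local, embarrassingly parallel step: z_j | theta, shard j.
    # Prior z_j ~ N(theta, lam); likelihood ybar_j ~ N(z_j, s2/n_per).
    prec = 1.0 / lam + n_per / s2
    mean = (theta / lam + n_per * ybar / s2) / prec
    z = rng.normal(mean, np.sqrt(1.0 / prec))
    # Global step: theta | z, with a flat prior on theta.
    theta = rng.normal(z.mean(), np.sqrt(lam / J))
    theta_trace[t] = theta

print(f"posterior mean ~ {theta_trace[1000:].mean():.3f}  (truth {theta_true})")
```

Shrinking lam tightens the z_j around theta, trading the good mixing of independent local updates against fidelity to the original single-machine posterior, which is exactly the trade-off the abstract describes.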
Complexity in quantum physics measures how difficult it is to reach a given state from a reference state; more precisely, it is the number of fundamental unitary gates one must apply to transform the reference state into the state under consideration. In the holographic context, it has been conjectured, based on several explicit calculations and arguments, that a certain bulk volume computes the boundary field theory subregion complexity. In this paper, we show that the $T\bar{T}$ deformation provides a strong signal of the correctness of this "complexity equals volume" conjecture. As a bonus, the reversibility of the $T\bar{T}$ deformation lets one view it as a unitary quantum circuit which prepares states in quantum field theory.
high energy physics theory
Lithium ion batteries have been a central part of consumer electronics for decades. More recently, they have also become critical components in the rapidly emerging technological fields of electric mobility and intermittent renewable energy storage. However, many fundamental principles and mechanisms are not yet understood to a sufficient extent to fully realize the potential of the incorporated materials. The vast majority of current lithium ion batteries make use of graphite anodes. Their working principle is based on intercalation---the embedding and ordering of (lithium-) ions in the two-dimensional spaces between the graphene sheets. This important process---it yields the upper bound to a battery's charging speed and plays a decisive role for its longevity---is characterized by multiple phase transitions, ordered and disordered domains, as well as non-equilibrium phenomena, and is therefore quite complex. In this work, we provide a simulation framework for the purpose of better understanding lithium intercalated graphite and its behaviour during use in a battery. In order to address the large system sizes and long time scales required to investigate said effects, we identify the highly efficient, but semi-empirical Density Functional Tight Binding (DFTB) method as a suitable approach and combine particle swarm optimization (PSO) with the machine learning (ML) procedure Gaussian Process Regression (GPR) to obtain the necessary parameters. Using the resulting parametrization, we are able to reproduce experimental reference structures at a level of accuracy which is in no way inferior to much more costly ab initio methods. We finally present structural properties and diffusion barriers for some exemplary system states.
physics
The training of Deep Neural Networks (DNN) is costly, and thus a trained DNN can be considered the intellectual property (IP) of its model owner. To date, most existing protection works focus on verifying ownership after the DNN model has been stolen, which cannot resist piracy in advance. To this end, we propose an active DNN IP protection method based on adversarial examples against DNN piracy, named ActiveGuard. ActiveGuard aims to achieve authorization control and users' fingerprint management through adversarial examples, and can provide ownership verification. Specifically, ActiveGuard exploits elaborate adversarial examples as users' fingerprints to distinguish authorized users from unauthorized ones. Legitimate users can enter their fingerprints into the DNN for identity authentication and authorized usage, while unauthorized users will obtain poor model performance due to an additional control layer. In addition, ActiveGuard enables the model owner to embed a watermark into the weights of the DNN. When the DNN is illegally pirated, the model owner can extract the embedded watermark and perform ownership verification. Experimental results show that, for authorized users, the test accuracies of the LeNet-5 and Wide Residual Network (WRN) models are 99.15% and 91.46%, respectively, while for unauthorized users, the test accuracies of the two DNNs are only 8.92% (LeNet-5) and 10% (WRN), respectively. Besides, each authorized user can pass the fingerprint authentication with a high success rate (up to 100%). For ownership verification, the embedded watermark can be successfully extracted, while the normal performance of the DNN model is not affected. Further, ActiveGuard is demonstrated to be robust against fingerprint forgery, model fine-tuning and pruning attacks.
computer science
We describe possibilities of spontaneous, degenerate four-wave mixing (FWM) processes in spin-orbit coupled Bose-Einstein condensates. Phase matching conditions (i.e., energy and momentum conservation laws) in such systems allow one to identify four different configurations, characterized by the involvement of distinct spinor states, in which such a process can take place. We derive these conditions from first principles and then illustrate the dynamics with direct numerical simulations. We find, among others, a unique configuration in which both probe waves have smaller group velocities than the pump wave, and prove numerically that it can be observed experimentally under a proper choice of the parameters. We also report a case in which two different FWM processes can occur simultaneously. The described resonant interactions of matter waves are expected to play an important role in experiments on BECs with artificial gauge fields. Beams created by FWM processes are an important source of correlated particles and can be used in experiments testing quantum properties of atomic ensembles.
quantum physics
The separability detection problem for mixed states is one of the fundamental problems in quantum information theory. In the last 20 years, almost all methods have been based on sufficient or necessary conditions for entanglement. In this paper, however, we need only one algorithm to solve the problem. We propose a tensor optimization method to check whether an $m$-partite quantum mixed state is separable or not, and to give a decomposition for it if it is. We first convert the separability discrimination problem for mixed states to the positive Hermitian decomposition problem for Hermitian tensors. Then, employing the $E$-truncated $K$-moment method, we obtain an optimization model for discriminating separability. Moreover, applying the semidefinite relaxation method, we get a hierarchy of semidefinite relaxation optimization models and propose an $E$-truncated $K$-moment and semidefinite relaxations (ETKM-SDR) algorithm for detecting the separability of mixed states. The algorithm can also be used for symmetric and non-symmetric decomposition of separable mixed states. Through numerical examples, we find that not all symmetric separable states have symmetric decompositions. The algorithm can be used for studying properties of mixed states in the future.
quantum physics
It is well known that evaporative cooling of Earth's surface water reduces the amount of radiation that goes into sensible heat, namely the portion of radiation that produces higher temperatures. However, long-term hydrologic measurements and the related theories of hydrologic partitioning have not yet been fully exploited to quantify these effects on climate. Here, we show that the Budyko curve, a well-known and efficient framework for water balance estimation, can be effectively utilized to partition the surface energy fluxes by expressing the long-term evaporative fraction as a function of the dryness index. The combination of this energy partitioning method with hydrological observations allows us to estimate the surface energy components at watershed and continental scales. Analyzing climate model outputs through this new lens reveals energy biases due to inaccurate parameterization of hydrological and atmospheric processes, offering insight into model parameterization and providing useful information for improving climate projections.
physics
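A minimal sketch of the partitioning idea in this abstract: a Budyko-type curve gives the long-term evaporative ratio E/P as a function of the dryness index DI = PET/P, from which an evaporative fraction of the surface energy flux follows. The specific curve (Budyko's classical form) and the sample watershed numbers below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def budyko_evap_ratio(di):
    """Budyko's classical curve: long-term E/P as a function of
    the dryness index DI = PET/P (potential ET over precipitation)."""
    return np.sqrt(di * np.tanh(1.0 / di) * (1.0 - np.exp(-di)))

# Illustrative watershed: annual P = 800 mm, PET = 1200 mm.
P, PET = 800.0, 1200.0
di = PET / P
E = budyko_evap_ratio(di) * P      # long-term evaporation (mm/yr)

# Long-term evaporative fraction: the share of available energy going
# into latent rather than sensible heat. With energy expressed in
# evaporation-equivalent units (PET ~ available energy), EF ~ E/PET.
EF = budyko_evap_ratio(di) / di
print(f"DI = {di:.2f}, E = {E:.0f} mm/yr, evaporative fraction = {EF:.2f}")
```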
Quantum Key Distribution (QKD) is a technology that allows secure key exchange between two distant users. Widespread adoption of QKD requires the development of simple, low-cost, and stable systems. However, current QKD implementations require a complex self-alignment process during the initial stage and additional hardware to compensate for environmental disturbances. In this study, we present a simple QKD implementation based on a stable transmitter-receiver scheme, which simplifies the self-alignment and is robust enough to withstand environmental disturbances. In stability tests, the system remained stable for 48 hours and exhibited an average quantum bit error rate of less than 1\% without any feedback control. The scheme was also tested over a fiber spool, obtaining a stable and secure finite key rate of 7.32 kbits per second over fiber spools extending up to 75 km. The demonstrated long-term stability and the obtained secure key rate prove that our implementation is a promising alternative for practical QKD systems, in particular for CubeSat platforms and satellite applications.
quantum physics
Brownian motion is a ubiquitous physical phenomenon across the sciences. Since its discovery by Brown and its intensive study beginning in the first half of the 20th century, many different aspects of Brownian motion and stochastic processes in general have been addressed in statistical physics, and a very large range of applications of stochastic processes has emerged across various disciplines. Here we summarize some of the recent advances in stochastic processes prompted by novel experimental methods such as superresolution microscopy, highlighting both the experimental findings and the theoretical frameworks.
condensed matter
Pulsar timing data have hitherto provided upper limits on a possible stochastic gravitational wave background (SGWB). However, the NANOGrav Collaboration has recently reported strong evidence for a stochastic common-spectrum process, which we interpret as an SGWB in the framework of cosmic strings. The possible NANOGrav signal would correspond to a string tension $G\mu \in (4 \times 10^{-11}, 10^{-10})$ at the 68% confidence level, with a frequency dependence different from that of supermassive black hole mergers. The SGWB produced by cosmic strings with such values of $G\mu$ would be beyond the reach of LIGO, but could be measured by other planned and proposed detectors such as SKA, LISA, TianQin, AION-1km, AEDGE, Einstein Telescope and Cosmic Explorer.
astrophysics
Every second flat Reidemeister move of knot projections can be decomposed into two types through an inverse or direct self-tangency modification, respectively called strong or weak, when orientations of the knot projections are arbitrarily provided. Further, we introduce the notions of strong and weak (1, 2) homotopies; we define that two knot projections are strongly (resp. weakly) (1, 2) homotopic if and only if the two knot projections are related by a finite sequence of first and strong (resp. weak) second flat Reidemeister moves. This paper gives a new necessary and sufficient condition that two knot projections are not strongly (1, 2) homotopic. Similarly, we obtain a new necessary and sufficient condition in the weak (1, 2) homotopy case. We also define a new integer-valued strong (1, 2) homotopy invariant. Using it, we show that the set of the non-trivial prime knot projections without 1-gons that can be trivialized under strong (1, 2) homotopy is disjoint from that of weak (1, 2) homotopy. We also investigate topological properties of the new invariant and give its generalization, a comparison of our invariants and Arnold invariants, and a table of invariants.
mathematics
We present a sample of 21 hydrogen-free superluminous supernovae (SLSNe-I), and one hydrogen-rich SLSN (SLSN-II), detected during the five-year Dark Energy Survey (DES). These SNe, located in the redshift range 0.220 < z < 1.998, represent the largest homogeneously-selected sample of SLSN events at high redshift. We present the observed g, r, i, z light curves for these SNe, which we interpolate using Gaussian Processes. The resulting light curves are analysed to determine the luminosity function of SLSN-I, and their evolutionary timescales. The DES SLSN-I sample significantly broadens the distribution of SLSN-I light curve properties when combined with existing samples from the literature. We fit a magnetar model to our SLSNe, and find that this model alone is unable to replicate the behaviour of many of the bolometric light curves. We search the DES SLSN-I light curves for the presence of initial peaks prior to the main light-curve peak. Using a shock breakout model, our Monte Carlo search finds that 3 of our 14 events with pre-max data display such initial peaks. However, 10 events show no evidence for such peaks, in some cases down to an absolute magnitude of < -16, suggesting that such features are not ubiquitous to all SLSN-I events. We also identify a red pre-peak feature within the light curve of one SLSN, which is comparable to that observed within SN 2018bsz.
astrophysics
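The light-curve interpolation step mentioned in this abstract lends itself to a short sketch: fit a Gaussian Process to sparse single-band photometry and evaluate it on a dense time grid with uncertainties. The kernel choice and the synthetic epochs below are illustrative assumptions; the DES analysis choices are not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Toy single-band light curve: sparse epochs (days) and magnitudes.
t_obs = np.sort(rng.uniform(-20, 80, 15))[:, None]
mag = (22.0 - 2.0 * np.exp(-0.5 * (t_obs.ravel() / 15.0) ** 2)
       + rng.normal(0, 0.05, t_obs.shape[0]))

# Squared-exponential kernel plus a white-noise term for photometric errors.
kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=0.05 ** 2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t_obs, mag)

# Interpolate onto a dense grid, with predictive uncertainties.
t_grid = np.linspace(-30, 90, 200)[:, None]
mu, sd = gp.predict(t_grid, return_std=True)
# Brighter = smaller magnitude, so the peak is the minimum of mu.
print(f"interpolated peak: mag {mu.min():.2f} at day {t_grid[mu.argmin()][0]:.1f}")
```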
Plant response is not only dependent on the atmospheric evaporative demand due to the combined effects of wind speed, air temperature, humidity, and solar radiation, but is also dependent on the water transport within the leaf-xylem-root system. Therefore, a detailed understanding of such dynamics is key to the development of appropriate mitigation strategies and numerical models. In this study, we unveil the diurnal dynamics of the microclimate of a Buxus sempervirens plant using multiple high-resolution non-intrusive imaging techniques. The wake flow field is measured using stereoscopic particle image velocimetry, the spatiotemporal leaf temperature history is obtained using infrared thermography, and additionally, the plant porosity is obtained using X-ray tomography. We find that the wake velocity statistics are not directly linked with the distribution of the porosity but depend mainly on the geometry of the plant foliage, which generates the shear flow. The interaction between the shear regions and the upstream boundary layer profile is seen to have a dominant effect on the wake turbulent kinetic energy distribution. Furthermore, the leaf area density distribution has a direct impact on the short-wave radiative heat flux absorption inside the foliage, where 50% of the radiation is absorbed in the top 20% of the foliage. This localized radiation absorption results in high local leaf and air temperatures. Furthermore, a comparison of the diurnal variation of leaf temperature and the net plant transpiration rate enabled us to quantify the diurnal hysteresis resulting from the stomatal response lag. The plant's day is seen to comprise four stages of climatic conditions: no-cooling, high-cooling, equilibrium, and decaying-cooling stages.
physics
A method that can precisely know earthquakes beforehand and generalize across them has still not been developed. Nevertheless, numerous methods have been tried for predicting earthquakes. One of these, artificial neural networks, gives appropriate outputs for different patterns by learning the relationship between given inputs and outputs. In this study, a feedforward backpropagation artificial neural network connected to the Gutenberg-Richter relationship and based on the b value used in earthquake prediction was developed. The artificial neural network was trained using earthquake data from four regions with intense seismic activity in the west of Turkey. After the training process, earthquake data from later dates in the same regions were used for testing, and the performance of the network was evaluated. Examining the prediction results of the developed network, the network's success in predicting that an earthquake is not going to occur is quite high in all regions, whereas its success in predicting that an earthquake is going to occur differs to some extent among the studied regions.
computer science
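A sketch of the two ingredients this abstract combines: the Gutenberg-Richter b value, estimated here with Aki's maximum-likelihood formula b = log10(e)/(mean(M) - M_c), and a small feedforward backpropagation network trained on seismicity features. The synthetic catalogue, feature set, label rule, and network size are all illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

def b_value(mags, m_c):
    """Aki's maximum-likelihood estimate of the Gutenberg-Richter b value
    for magnitudes above the completeness threshold m_c."""
    m = mags[mags >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Synthetic catalogue windows: the GR law implies exponential magnitudes
# above the completeness threshold m_c = 3.0, with scale log10(e)/b.
def make_window(b):
    mags = 3.0 + rng.exponential(np.log10(np.e) / b, 100)
    return np.array([b_value(mags, 3.0), mags.max(), len(mags)])

# Toy label ("large event follows") tied here to low-b windows,
# loosely mimicking the b-value-based prediction idea.
X = np.array([make_window(b) for b in rng.uniform(0.7, 1.3, 400)])
y = (X[:, 0] < 1.0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X[:300], y[:300])
print(f"test accuracy: {clf.score(X[300:], y[300:]):.2f}")
```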
In this paper, the Einstein AdS black brane solution in the presence of a string cloud in the context of d-dimensional massive gravity is introduced. By applying the Dirichlet boundary condition and regularity on the horizon, we find that the ratio of shear viscosity to entropy density for this solution violates the KSS bound. Our result shows that this ratio is independent of the string cloud in any arbitrary dimension.
high energy physics theory
We give the hyperasymptotic expansion of the energy of a static quark-antiquark pair with a precision that includes the effects of the subleading renormalon. The terminants associated to the first and second renormalon are incorporated in the analysis when necessary. In particular, we determine the normalization of the leading renormalon of the force and, consequently, of the subleading renormalon of the static potential. We obtain $Z_3^F(n_f=3)=2Z_3^V(n_f=3)=0.37(17)$. The precision we reach in strict perturbation theory is next-to-next-to-next-to-leading logarithmic resummed order both for the static potential and for the force. We find that the resummation of large logarithms and the inclusion of the leading terminants associated to the renormalons are compulsory to get accurate determinations of $\Lambda_{\overline{\rm MS}}$ when fitting to short-distance lattice data of the static energy. We obtain $\Lambda_{\overline{\rm MS}}^{(n_f=3)}=338(12)$ MeV and $\alpha(M_z)=0.1181(9)$. We have also found strong consistency checks that the ultrasoft correction to the static energy can be computed at weak coupling in the energy range we have studied.
high energy physics phenomenology
We report a decreased surface wettability when polymer films on a glass substrate are treated by ultra-fast laser pulses in a back-illumination geometry. We propose that back-illumination through the substrate confines chemical changes beneath the surface of polymer films, leaving the surface blistered but chemically intact. To confirm this hypothesis, we measure the phase contrast of the polymer when observed with a focused ion beam. We observe a void at the polymer-quartz interface that results from the expansion of an ultrafast laser-induced plasma. A modified polymer layer surrounds the void, but otherwise the film seems unmodified. We also use X-ray photoelectron spectroscopy to confirm that there is no chemical change to the surface. When patterned with partially overlapping blisters, our polymer surface shows increased hydrophobicity. The increased hydrophobicity of back-illuminated surfaces can only result from the morphological change. This contrasts with the combined chemical and morphological changes of the polymer surface caused by a front-illumination geometry.
physics
We present the results of the X-ray flaring activity of 1ES 1959+650 during October 25-26, 2017 using AstroSat observations. The source was variable in the X-ray. We investigated the evolution of the X-ray spectral properties of the source by dividing the total observation period ($\sim 130$ ksec) into time segments of 5 ksec, and fitting the SXT and LAXPC spectra for each segment. Synchrotron emission from a broken power-law particle density model provided a better fit than the log-parabola one. The X-ray flux and the normalised particle density at energies below the break were found to anti-correlate with the index before the break. However, a stronger correlation between the density and the index was obtained when a delay of $\sim 60$ ksec was introduced. The amplitude of the normalised particle density variation, $|\Delta n_\gamma/n_\gamma| \sim 0.1$, was found to be less than that of the index, $\Delta \Gamma \sim 0.5$. We model the amplitudes and the time delay in a scenario where the particle acceleration time-scale varies on a time-scale comparable to itself. In this framework, the rest frame acceleration time-scale is estimated to be $\sim 1.97\times10^{5}$ secs and the emission region size to be $\sim 6.73\times 10^{15}$ cms.
astrophysics
In its standard formulation, quantum backflow is a classically impossible phenomenon in which a free quantum particle in a positive-momentum state exhibits a negative probability current. Recently, Miller et al. [Quantum 5, 379 (2021)] have put forward a new, "experiment-friendly" formulation of quantum backflow that aims at extending the notion of quantum backflow to situations in which the particle's state may have both positive and negative momenta. Here, we investigate how the experiment-friendly formulation of quantum backflow compares to the standard one when applied to a free particle in a positive-momentum state. We show that the two formulations are not always compatible. We further identify a parametric regime in which the two formulations appear to be in qualitative agreement with one another.
quantum physics
The coherent nonlinear process where a single photon simultaneously excites two or more two-level systems (qubits) in a single-mode resonator has recently been theoretically predicted. Here we explore the case where the two qubits are placed in different resonators in an array of two or three weakly coupled resonators. Investigating different setups and excitation schemes, we show that this process can still occur with a probability approaching one under specific conditions. The obtained results provide interesting insights into subtle causality issues underlying the simultaneous excitation processes of qubits placed in different resonators.
quantum physics
Policy evaluation studies, which intend to assess the effect of an intervention, face some statistical challenges: in real-world settings treatments are not randomly assigned and the analysis might be further complicated by the presence of interference between units. Researchers have started to develop novel methods that allow one to manage spillover mechanisms in observational studies; recent works focus primarily on binary treatments. However, many policy evaluation studies deal with more complex interventions. For instance, in political science, evaluating the impact of policies implemented by administrative entities often implies a multivariate approach, as a policy towards a specific issue operates at many different levels and can be defined along a number of dimensions. In this work, we extend the statistical framework of causal inference under network interference in observational studies, allowing for a multi-valued individual treatment and an interference structure shaped by a weighted network. The estimation strategy is based on a joint multiple generalized propensity score and allows one to estimate direct effects, controlling for both individual and network covariates. We follow the proposed methodology to analyze the impact of national immigration policy on the crime rate. We define a multi-valued characterization of political attitudes towards migrants, and we assume that the extent to which each country can be influenced by another is modeled by an appropriate indicator summarizing their cultural and geographical proximity. Results suggest that implementing a highly restrictive immigration policy leads to an increase in the crime rate, and the estimated effect is larger when interference from other countries is taken into account.
statistics
This paper focuses on the analysis and synthesis of hypo and hyperarticulated speech in the framework of HMM-based speech synthesis. First of all, a new French database matching our needs was created, which contains three identical sets, pronounced with three different degrees of articulation: neutral, hypo and hyperarticulated speech. On that basis, acoustic and phonetic analyses were performed. It is shown that the degrees of articulation significantly influence, on one hand, both vocal tract and glottal characteristics, and on the other hand, speech rate, phone durations, phone variations and the presence of glottal stops. Finally, neutral, hypo and hyperarticulated speech are synthesized using HMM-based speech synthesis and both objective and subjective tests aiming at assessing the generated speech quality are performed. These tests show that synthesized hypoarticulated speech seems to be less naturally rendered than neutral and hyperarticulated speech.
electrical engineering and systems science
We analyze the quantum mechanics of anyons on the sphere in the presence of a constant magnetic field. We introduce an operator method for diagonalizing the Hamiltonian and derive a set of exact anyon energy eigenstates, in partial correspondence with the known exact eigenstates on the plane. We also comment on possible connections of this system with integrable systems of the Calogero type.
high energy physics theory
While data science is battling to extract information from an enormous explosion of data, many estimators and algorithms are being developed for better prediction. Researchers and data scientists often introduce new methods and evaluate them on various aspects of data. However, studies of prediction models with multiple response variables are limited. This study compares some newly developed (envelope) and well-established (PLS, PCR) prediction methods on real data and on simulated data specifically designed by varying properties such as multicollinearity, the correlation between multiple responses, and the position of the relevant principal components of the predictors. The study aims to give some insight into these methods and to help researchers understand and use them in further studies.
statistics
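A minimal sketch of the kind of comparison this abstract describes, using scikit-learn's PLS and a PCA-plus-OLS pipeline for PCR on collinear synthetic data with two correlated responses; the data-generating choices are assumptions for illustration, and the envelope estimators, which have no standard scikit-learn implementation, are omitted.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Collinear predictors driven by a few latent components,
# and two correlated responses driven by the same components.
n, p, k = 200, 30, 3
latent = rng.normal(size=(n, k))
X = latent @ rng.normal(size=(k, p)) + 0.1 * rng.normal(size=(n, p))
Y = latent @ rng.normal(size=(k, 2)) + 0.5 * rng.normal(size=(n, 2))

pls = PLSRegression(n_components=k)
pcr = make_pipeline(PCA(n_components=k), LinearRegression())

for name, model in [("PLS", pls), ("PCR", pcr)]:
    score = cross_val_score(model, X, Y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")
```

Varying the strength of the latent structure, the noise level, and which principal components carry the signal reproduces, in miniature, the design dimensions the study manipulates.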
A novel basis signal optimization method is proposed for reducing the interference in the N-continuous orthogonal frequency division multiplexing (NC-OFDM) system. Compared to conventional NC-OFDM, the proposed scheme is capable of improving the transmission performance while maintaining identical sidelobe suppression performance, achieved through the linear combination of two groups of basis signals. Our performance results demonstrate that, with a low complexity overhead, the proposed scheme strikes a better trade-off among the bit error rate (BER), complexity, and sidelobe suppression performance than its conventional counterpart.
electrical engineering and systems science
We analyse 850um continuum observations of eight massive X-ray detected galaxy clusters at z~0.8-1.6 taken with SCUBA-2 on the James Clerk Maxwell Telescope. We find an average overdensity of 850um-selected sources of a factor of 4+/-2 per cluster within the central 1Mpc compared to the field. We investigate the multi-wavelength properties of these sources and identify 34 infrared counterparts to 26 SCUBA-2 sources. Their colours suggest that the majority of these counterparts are probable cluster members. We use the multi-wavelength far-infrared photometry to measure the total luminosities and total cluster star-formation rates, demonstrating that they are roughly three orders of magnitude higher than those of local clusters. We predict the H-band luminosities of the descendants of our cluster submillimetre galaxies and find that their stellar luminosity distribution is consistent with that of passive elliptical galaxies in z~0 clusters. Together, the faded descendants of the passive cluster population already in place at z~1 and the cluster submillimetre galaxies are able to account for the total luminosity function of early-type cluster galaxies at z~0. This suggests that the majority of the luminous passive population in z~0 clusters are likely to have formed at z>>1 through an extreme, dust-obscured starburst event.
astrophysics
An existence result is presented for the dynamical low rank (DLR) approximation for random semi-linear evolutionary equations. The DLR solution approximates the true solution at each time instant by a linear combination of products of deterministic and stochastic basis functions, both of which evolve over time. A key to our proof is to find a suitable equivalent formulation of the original problem. The so-called Dual Dynamically Orthogonal formulation turns out to be convenient. Based on this formulation, the DLR approximation is recast to an abstract Cauchy problem in a suitable linear space, for which existence and uniqueness of the solution in the maximal interval are established.
mathematics
By employing the method of moving planes in a novel way we extend some classical symmetry and rigidity results for smooth minimal surfaces to surfaces that have singularities of the sort typically observed in soap films.
mathematics
The thermal conductivity of B-form double-stranded DNA (dsDNA) of the Drew-Dickerson sequence d(CGCGAATTCGCG) is computed using classical Molecular Dynamics (MD) simulations. In contrast to previous studies, which focus on a simplified 1D model or a coarse-grained model of DNA to improve simulation times, full atomistic simulations are employed to understand the thermal conduction in B-DNA. The thermal conductivity at different temperatures from 100 to 400 K is investigated using the Einstein-Green-Kubo equilibrium and Müller-Plathe non-equilibrium formalisms. The thermal conductivity of B-DNA at room temperature is found to be 1.5 W/m$\cdot$K from the equilibrium approach and 1.225 W/m$\cdot$K from the non-equilibrium approach. In addition, the denaturation regime of B-DNA is obtained from the variation of thermal conductivity with temperature. It is in agreement with previous works using the Peyrard-Bishop-Dauxois (PBD) model, at a temperature of around 350 K. The quantum heat capacity ($C_{vq}$) provides additional clues regarding the Debye and denaturation temperatures of 12-bp B-DNA.
physics
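A schematic of the equilibrium Green-Kubo step named in this abstract: the thermal conductivity follows from the time integral of the heat-flux autocorrelation function (HCACF), kappa = V/(3 k_B T^2) times the integral of <J(0).J(t)> dt. The synthetic flux series below merely stands in for MD output, and all numbers are placeholders rather than values from the DNA simulations.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder "heat flux" series standing in for MD output: an AR(1)
# process per Cartesian component, giving an exponentially decaying HCACF.
n_steps, dt = 200_000, 1e-15                 # MD timestep (s), placeholder
phi = 0.95
J = np.zeros((n_steps, 3))
for t in range(1, n_steps):
    J[t] = phi * J[t - 1] + rng.normal(size=3)

def autocorr(x, n_lags):
    """Per-component autocorrelation <J_a(0) J_a(t)>, averaged over a, via FFT."""
    x = x - x.mean(axis=0)
    f = np.fft.rfft(x, n=2 * len(x), axis=0)        # zero-pad -> linear ACF
    acf = np.fft.irfft(f * f.conj(), axis=0)[:n_lags]
    acf /= (len(x) - np.arange(n_lags))[:, None]    # unbiased normalization
    return acf.mean(axis=1)

hcacf = autocorr(J, n_lags=2000)

# Green-Kubo: kappa = V/(3 kB T^2) * int <J(0).J(t)> dt. With isotropy the
# dot product is 3x the per-component ACF, so the factors of 3 cancel.
kB, T, V = 1.380649e-23, 300.0, 1e-26        # SI placeholders for T and volume
kappa = V / (kB * T**2) * np.trapz(hcacf, dx=dt)
print(f"kappa (placeholder numbers): {kappa:.3e} W/(m K)")
```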
We consider a model of quartic inflation where the inflaton is coupled non-minimally to gravity and the self-induced radiative corrections to its effective potential are dominant. We perform a comparative analysis considering two different formulations of gravity, metric or Palatini, and two different choices for the renormalization scale, widely known as prescription I and II. Moreover, we comment on the possible compatibility of the results with the final data release of the Planck mission.
high energy physics phenomenology
Atmospheric modelling has recently experienced a surge with the advent of deep learning. Most of these models, however, predict concentrations of pollutants following a data-driven approach in which the physical laws that govern their behaviors and relationships remain hidden. With the aid of real-world air quality data collected hourly in different stations throughout Madrid, we present a case study using a series of data-driven techniques with the following goals: (1) find systems of ordinary differential equations that model the concentration of pollutants and their changes over time; (2) assess the performance and limitations of our models using stability analysis; (3) reconstruct the time series of chemical pollutants not measured in certain stations using delay coordinate embedding.
statistics
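Goal (1) of this abstract, finding systems of ODEs from measured time series, is commonly attacked with sparse regression of numerical derivatives on a library of candidate terms (sequential thresholded least squares, as in SINDy). Below is a sketch under that assumption, on a two-variable toy system rather than the Madrid air-quality data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "pollutant" pair obeying a known linear system dx/dt = A x,
# standing in for hourly concentration time series.
dt, n = 0.01, 2000
A_true = np.array([[-0.5, 0.8], [-0.8, -0.5]])
x = np.zeros((n, 2)); x[0] = [2.0, 0.0]
for t in range(n - 1):
    x[t + 1] = x[t] + dt * (A_true @ x[t])          # forward Euler
x += 0.001 * rng.normal(size=x.shape)               # measurement noise

dxdt = np.gradient(x, dt, axis=0)                   # numerical derivative

# Candidate library: [1, x1, x2, x1^2, x1*x2, x2^2].
Theta = np.column_stack([np.ones(n), x[:, 0], x[:, 1],
                         x[:, 0]**2, x[:, 0]*x[:, 1], x[:, 1]**2])

# Sequential thresholded least squares (the SINDy workhorse).
Xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
for _ in range(10):
    Xi[np.abs(Xi) < 0.1] = 0.0                      # prune small terms
    for j in range(2):                              # refit the active terms
        active = np.abs(Xi[:, j]) > 0
        Xi[active, j] = np.linalg.lstsq(
            Theta[:, active], dxdt[:, j], rcond=None)[0]

# Expect rows ~ [0, -0.5, 0.8, 0, 0, 0] and [0, -0.8, -0.5, 0, 0, 0].
print(np.round(Xi.T, 2))
```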
Working with images, one often faces problems with incomplete or unclear information. Image inpainting can be used to restore missing image regions; however, it focuses on low-level image features such as pixel intensity, pixel gradient orientation, and color. This paper aims to recover semantic image features (objects and positions) in images. Based on published gated PixelCNNs, we demonstrate a new approach referred to as quadro-directional PixelCNN to recover missing objects and return probable positions for objects based on the context. We call this approach context-based image segment labeling (CBISL). The results suggest that our four-directional model outperforms one-directional models (gated PixelCNN) and returns human-comparable performance.
computer science
In the present article, we define the squeezing function corresponding to the polydisk and study its properties. We investigate the relationship between the squeezing function and the squeezing function corresponding to the polydisk.
mathematics
Braiding operators can be used to create entangled states out of product states, thus establishing a correspondence between topological and quantum entanglement. This is well-known for maximally entangled Bell and GHZ states and their equivalent states under Stochastic Local Operations and Classical Communication, but so far a similar result for W states was missing. Here we use generators of extraspecial 2-groups to obtain the W state in a four-qubit space and partition algebras to generate the W state in a three-qubit space. We also present a unitary generalized Yang-Baxter operator that embeds the W$_n$ state in a $(2n-1)$-qubit space.
quantum physics
A nonuniform in-plane Zeeman field can induce spontaneous supercurrents of spin-orbit coupled electrons in superconducting two-dimensional systems and thin films. In this work it is shown that current vortices can be created at the ends of a long, homogeneously magnetized strip of a ferromagnetic insulator deposited on the surface of a three-dimensional topological insulator. The s-wave superconductivity on its surface is assumed to have an intrinsic origin, or to be induced by the proximity effect. It is shown that vortices with odd vorticity can localize Majorana zero modes. The latter may also be induced by sufficiently narrow domain walls inside the strip, which opens a way for manipulating these modes by moving the walls. It is shown that the vorticity can be tuned by varying the magnetization and width of the strip. The stability of the strip magnetization with respect to the Berezinsky-Kosterlitz-Thouless transition has been analyzed.
condensed matter
We consider the Johnson noise of a two-dimensional, two-terminal electrical conductor for which the electron system obeys the Wiedemann-Franz law. We derive two simple and generic relations between the Johnson noise temperature and the heat flux into the electron system. First, we consider the case where the electron system is heated by Joule heating from a DC current, and we show that there is a universal proportionality coefficient between the Joule power and the increase in Johnson noise temperature. Second, we consider the case where heat flows into the sample from an external source, and we derive a simple relation between the Johnson noise temperature and the heat flux across the boundary of the sample.
condensed matter
In this work, we investigate proactive Hybrid Automatic Repeat reQuest (HARQ) using link-level simulations for multiple packet sizes, modulation orders, BLock Error Rate (BLER) targets and two delay budgets of 1 ms and 2 ms, in the context of Industrial Internet of Things (IIoT) applications. In particular, we propose an enhanced proactive HARQ protocol using a feedback prediction mechanism. We show that the enhanced protocol achieves a significant gain over classical proactive HARQ in terms of energy efficiency for almost all evaluated BLER targets, at least for sufficiently large feedback delays. Furthermore, we demonstrate that the proposed protocol clearly outperforms classical proactive HARQ in all scenarios when a processing delay reduction due to the less complex prediction approach is taken into account, achieving an energy efficiency gain in the range of 11% up to 15% for very stringent latency budgets of 1 ms at $10^{-2}$ BLER and from 4% up to 7.5% for less stringent latency budgets of 2 ms at $10^{-3}$ BLER. Finally, we show that power-constrained proactive HARQ with prediction even outperforms unconstrained reactive HARQ for sufficiently large feedback delays.
electrical engineering and systems science
Two dimensional field theories invariant under the Bondi-Metzner-Sachs (BMS) group are conjectured to be dual to asymptotically flat spacetimes in three dimensions. In this paper, we continue our investigations of the modular properties of these field theories. In particular, we focus on the BMS torus one-point function. We use two different methods to arrive at expressions for asymptotic structure constants for general states in the theory utilising modular properties of the torus one-point function. We then concentrate on the BMS highest weight representation, and derive a host of new results, the most important of which is the BMS torus block. In a particular limit of large weights, we derive the leading and sub-leading pieces of the BMS torus block, which we then use to rederive an expression for the asymptotic structure constants for BMS primaries. Finally, we perform a bulk computation of a probe scalar in the background of a flatspace cosmological solution based on the geodesic approximation to reproduce our field theoretic results.
high energy physics theory
This paper investigates antenna selection at a base station with large antenna arrays and low-resolution analog-to-digital converters. For downlink transmit antenna selection for narrowband channels, we show (1) a selection criterion that maximizes sum rate with zero-forcing precoding equivalent to that of a perfect quantization system; (2) maximum sum rate increases with number of selected antennas; (3) derivation of the sum rate loss function from using a subset of antennas; and (4) unlike high-resolution converter systems, sum rate loss reaches a maximum at a point of total transmit power and decreases beyond that point to converge to zero. For wideband orthogonal-frequency-division-multiplexing (OFDM) systems, our results hold when entire subcarriers share a common subset of antennas. For uplink receive antenna selection for narrowband channels, we (1) generalize a greedy antenna selection criterion to capture tradeoffs between channel gain and quantization error; (2) propose a quantization-aware fast antenna selection algorithm using the criterion; and (3) derive a lower bound on sum rate achieved by the proposed algorithm based on submodular functions. For wideband OFDM systems, we extend our algorithm and derive a lower bound on its sum rate. Simulation results validate theoretical analyses and show increases in sum rate over conventional algorithms.
electrical engineering and systems science
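The uplink part of this abstract describes a quantization-aware greedy selection with submodularity-based guarantees. The sketch below shows the plain greedy template such criteria build on, iteratively adding the antenna with the largest marginal gain in a log-det sum-rate proxy; the low-resolution-ADC correction terms that define the paper's actual criterion are omitted, and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

N_rx, K, L, snr = 32, 4, 8, 1.0   # antennas, users, antennas to select, SNR
# Rayleigh-fading uplink channel: one column per user.
H = (rng.normal(size=(N_rx, K)) + 1j * rng.normal(size=(N_rx, K))) / np.sqrt(2)

def sum_rate(rows):
    """log-det sum-rate proxy for the antenna subset `rows`."""
    Hs = H[rows, :]
    G = np.eye(K) + snr * (Hs.conj().T @ Hs)
    return np.log2(np.linalg.det(G).real)

selected, remaining = [], list(range(N_rx))
for _ in range(L):
    # Greedy step: pick the antenna with the largest marginal rate gain.
    best = max(remaining, key=lambda r: sum_rate(selected + [r]))
    selected.append(best)
    remaining.remove(best)

print("selected antennas:", sorted(selected))
print(f"sum-rate proxy: {sum_rate(selected):.2f} bit/s/Hz")
```

The log-det objective is submodular in the selected set, which is what makes greedy selection of this kind amenable to the lower-bound analysis the abstract mentions.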
The quantum version of Wheeler's delayed-choice experiment challenges interpretations of the complementarity principle based on post-quantum variables. Based on the visibility at the output of a quantum controlled interferometer, a conceptual framework has been put forward which detaches the notions of wave and particle from the quantum state and allows for the existence of hybrid wave-particle behaviours, thus calling for a critical review of the complementarity principle. Here, we propose and implement a contrasting experimental setup which, upon analysis of an operational criterion of physical reality, proves to yield a dramatically different state of affairs. We show that, in contrast with previous proposals, our setup ensures a formal link between the visibility and elements of reality within the interferometer and predicts the existence of hybrid elements of reality interpolating between wave and particle. An experimental proof of principle is provided for a two-spin-1/2 system in an interferometric setup implemented on a nuclear magnetic resonance platform. Furthermore, our results validate, to a great extent, Bohr's original formulation of the complementarity principle.
quantum physics
Efficient manipulation of magnetic order with electric current pulses is desirable for achieving fast spintronic devices. The Rashba-Edelstein effect, wherein a spin polarization is electrically induced in noncentrosymmetric systems, provides a means to achieve current-induced staggered spin-orbit torques. The effect was initially predicted for spin, and its orbital counterpart has been disregarded up to now. Here, we present a generalized Rashba-Edelstein effect, which generates not only spin polarization but also orbital polarization, which we find to be far from negligible and which could play a crucial role in the magnetization dynamics. We show that the orbital Rashba-Edelstein effect does not require spin-orbit coupling to exist. We present first-principles calculations of the frequency-dependent spin and orbital Rashba-Edelstein susceptibility tensors for the noncentrosymmetric antiferromagnets CuMnAs and Mn2Au. We show that the electrically induced local magnetization has both staggered in-plane components and non-staggered out-of-plane components, and can exhibit Rashba-like or Dresselhaus-like symmetries, depending on the magnetic configuration. Furthermore, there is an induced local magnetization on the nonmagnetic atoms as well, which is smaller in Mn2Au than in CuMnAs. We compute sizable induced magnetizations at optical frequencies, which suggests that electric-field driven switching could be achieved at much higher frequencies.
condensed matter
A review of the tomographic probability representation of quantum states is presented, both for oscillator systems with continuous variables and for spin systems with discrete variables. New entropic-information inequalities are obtained for Franck-Condon factors. Density matrices of qudit states are expressed in terms of probabilities of artificial qubits, and the quantum suprematism approach to the geometry of these states, using the triads of Malevich squares, is developed. Examples of qubits, qutrits and ququarts are considered.
quantum physics
Schwarzschild black-hole interiors border on space-like singularities representing classical information leaks. We show that local quantum physics is decoupled from these leaks due to dynamically generated boundaries, called Zeno borders. Beyond Zeno borders black-hole interiors become asymptotically silent, and quantum fields evolve freely towards the geodesic singularity with vanishing probability measure for populating the geodesic boundary. Thus Zeno borders represent a probabilistic completion of Schwarzschild black holes within the semiclassical framework.
high energy physics theory
Unsupervised domain adaptation, which dispenses with the annotation process for unlabeled target data, has attracted appealing interest in semantic segmentation. However, 1) existing methods neglect that not all semantic representations across domains are transferable, which cripples domain-wise transfer with untransferable knowledge; 2) they fail to narrow the category-wise distribution shift due to category-agnostic feature alignment. To address the above challenges, we develop a new Critical Semantic-Consistent Learning (CSCL) model, which mitigates the discrepancy of both domain-wise and category-wise distributions. Specifically, a critical transfer based adversarial framework is designed to highlight transferable domain-wise knowledge while neglecting untransferable knowledge. A transferability-critic guides the transferability-quantizer to maximize the positive transfer gain in a reinforcement learning manner, even when negative transfer of untransferable knowledge occurs. Meanwhile, with the help of a confidence-guided pseudo-label generator for target samples, a symmetric soft divergence loss is presented to explore inter-class relationships and facilitate category-wise distribution alignment. Experiments on several datasets demonstrate the superiority of our model.
computer science
Efficient algorithms for the sparse solution of under-determined linear systems $Ax = b$ are known for matrices $A$ satisfying suitable assumptions like the restricted isometry property (RIP). Without such assumptions little is known, and without any assumptions on $A$ the problem is $NP$-hard. A common approach is to replace $\ell_1$ by $\ell_p$ minimization for $0 < p < 1$, which is no longer convex and typically requires some form of local initial values for provably convergent algorithms. In this paper, we consider an alternative, where instead of suitable initial values we are provided with extra training problems $Ax = B_l$, $l=1, \dots, p$ that are related to our compressed sensing problem. They allow us to find the solution of the original problem $Ax = b$ with high probability in the range of a one-layer linear neural network with comparatively few assumptions on the matrix $A$.
computer science
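For context on the $\ell_p$ baseline this abstract mentions, here is the standard IRLS (FOCUSS-type) iteration for min $\|x\|_p^p$ subject to $Ax = b$ with $0 < p < 1$; the paper's learned alternative based on training problems is not reproduced. All problem sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Under-determined system with a sparse ground truth.
m, n, s = 40, 100, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
b = A @ x_true

def irls_lp(A, b, p=0.5, n_iter=50, eps=1e-3):
    """IRLS for min ||x||_p^p s.t. Ax = b (FOCUSS-type, 0 < p < 1).
    Each step solves a weighted least-norm problem in closed form:
    x = W^{-1} A^T (A W^{-1} A^T)^{-1} b with W = diag((x^2+eps)^{p/2-1})."""
    x = A.T @ np.linalg.solve(A @ A.T, b)        # least-norm start
    for _ in range(n_iter):
        w = (x**2 + eps) ** (1 - p / 2)          # diag entries of W^{-1}
        AW = A * w                               # = A @ diag(w)
        x = w * (A.T @ np.linalg.solve(AW @ A.T, b))
        eps = max(eps / 10, 1e-12)               # anneal the smoothing
    return x

x_hat = irls_lp(A, b)
print(f"recovery error: {np.linalg.norm(x_hat - x_true):.2e}")
```

The annealed smoothing eps is one common device for avoiding poor local minima; the paper's point is precisely that, without structure like RIP, such local schemes need good initial values, which its training problems are designed to supply.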
The next generation electron-positron colliders are designed for precision studies of the Standard Model and its extensions, in particular in the Higgs sector. We consider the potential for discovery of composite Higgs models in Higgs pair production through photon collisions. This process is loop-generated, thus it provides access to all Higgs couplings and can show new physics effects in polarized and unpolarized cross-sections starting at relatively low collider energies. It is, therefore, relevant for all electron-positron colliders planned or in preparation. Sizeable deviations from the Standard Model predictions are present in a general class of composite Higgs models, as couplings of one or more Higgs bosons to fermions, or fermionic and scalar resonances, modify the destructive interference present in the Standard Model. In particular, large effects are due to the new quartic coupling of the Higgs to tops and to the presence of a light scalar resonance.
high energy physics phenomenology
We develop a comprehensive theoretical model of relativistic collisionless pair shocks mediated by the current filamentation instability. We notably characterize the noninertial frame in which this instability is of a mostly magnetic nature, and describe at a microscopic level the deceleration and heating of the incoming background plasma through its collisionless interaction with the electromagnetic turbulence. Our model compares well to large-scale 2D3V PIC simulations, and provides an important touchstone for the phenomenology of such plasma systems.
astrophysics
The $\mu^-e^-\to e^-e^-$ process in a muonic atom is one of the promising probes to study charged lepton flavor violation (CLFV). We have investigated the angular distribution of electrons emitted from the polarized muon in the atomic bound state. The parity-violating asymmetric distribution of electrons is analyzed using lepton wave functions under the Coulomb interaction of a finite nuclear charge distribution. It is found that the asymmetry parameters of the electrons are very sensitive to the chiral structure of the CLFV interaction and to its contact/photonic nature. Therefore, together with the atomic number dependence of the decay rate studied in our previous work, the angular distribution of electrons from a polarized muon should be a very useful tool to constrain models beyond the standard model.
high energy physics phenomenology
We propose a four-state quantum system, or quantum unit, that can be realized in superconducting heterostructures. The unit combines the states of a spin and an Andreev qubit, providing the opportunity for quantum superpositions of their states. This functionality is achieved by tunnel coupling between a 4-terminal superconducting heterostructure housing a Weyl point and a quantum dot. The quantum states in the vicinity of the Weyl point are extremely sensitive to small changes of the superconducting phase, which gives rich opportunities for quantum manipulation. We establish an effective Hamiltonian for the setup and describe the peculiarities of the resulting spectrum. We concentrate on the 4-state subspace and explain how to make a double qubit in this setup. We review various ways to achieve quantum manipulation in the unit, including resonant, adiabatic and diabatic manipulation and combinations thereof. We provide detailed illustrations of designing arbitrary quantum gates in the unit.
condensed matter
The future Facility for Antiproton and Ion Research (FAIR), currently under construction in Darmstadt, Germany, is one of the largest research projects worldwide. The Compressed Baryonic Matter (CBM) experiment is one of the main pillars of FAIR, studying the quantum chromodynamics (QCD) phase diagram at high baryon densities with unprecedented interaction rates in heavy ion collisions of up to 10 MHz. This requires a new data-driven readout chain, new data analysis methods and high-rate capable detector systems. The task of the CBM Time of Flight wall (CBM-TOF) is charged particle identification. Multi-gap Resistive Plate Chambers (MRPCs) with different rate capabilities will be used in the corresponding regions of CBM-TOF. To reduce the commissioning time for CBM, a CBM full-system test setup called mini-CBM (mCBM) was installed and tested with beams at the GSI SIS18 facility in 2019. The high-rate MRPC prototypes developed at Tsinghua University, called MRPC2, were selected to be implemented in the mTOF modules for mCBM. Additional thin float glass MRPCs from USTC, called MRPC3, foreseen for the lower-rate region of CBM, were also tested in the mCBM experiment. Performance results of the two kinds of MRPCs, analyzed with the so-called tracking method, are shown.
physics
We present an algorithm for L1-norm kernel PCA and provide a convergence analysis for it. While an optimal solution of L2-norm kernel PCA can be obtained through matrix decomposition, finding that of L1-norm kernel PCA is not trivial due to its non-convexity and non-smoothness. We provide a novel reformulation through which an equivalent, geometrically interpretable problem is obtained. Based on the geometric interpretation of the reformulated problem, we present a fixed-point type algorithm that iteratively computes a binary weight for each observation. As the algorithm requires only inner products of data vectors, it is computationally efficient and the kernel trick is applicable. In the convergence analysis, we show that the algorithm converges to a local optimal solution in a finite number of steps. Moreover, we provide a rate-of-convergence analysis, which has never been done for any L1-norm PCA algorithm, proving that the sequence of objective values converges at a linear rate. In numerical experiments, we show that the algorithm is robust in the presence of entry-wise perturbations and computationally scalable, especially in a large-scale setting. Lastly, we introduce an application to outlier detection where the model based on the proposed algorithm outperforms the benchmark algorithms.
statistics
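A sketch of the fixed-point iteration this abstract describes, written so that the binary-weight structure is visible; it touches the data only through the Gram (kernel) matrix, so swapping X Xᵀ for any kernel matrix K gives the kernelized variant. This is a schematic reconstruction under those assumptions, not the authors' reference implementation.

```python
import numpy as np

rng = np.random.default_rng(8)

# Data with one dominant direction plus a few corrupted rows (outliers).
X = rng.normal(size=(100, 5)) @ np.diag([3, 1, 1, 1, 1])
X[:5] += 20 * rng.normal(size=(5, 5))          # entry-wise perturbations

K = X @ X.T                                    # Gram matrix (linear kernel);
                                               # swap in any kernel matrix here

def l1_kernel_pca(K, n_iter=100):
    """Fixed-point iteration for the first L1-norm (kernel) PC:
    maximize sum_i |<phi(x_i), w>| over unit-norm w in feature space,
    where w = sum_i b_i phi(x_i) with binary weights b_i in {-1, +1}."""
    b = np.sign(rng.normal(size=K.shape[0]))   # random binary start
    for _ in range(n_iter):
        b_new = np.sign(K @ b)                 # b_i = sign(<phi(x_i), w>)
        b_new[b_new == 0] = 1
        if np.array_equal(b_new, b):           # converges in finitely many steps
            break
        b = b_new
    return b / np.sqrt(b @ K @ b)              # coefficients of unit-norm w

c = l1_kernel_pca(K)
scores = K @ c                                 # projections onto the L1 PC
print("objective (sum of |scores|):", np.abs(scores).sum().round(2))
```

Because each observation enters the objective through an absolute value rather than a square, the corrupted rows cannot dominate the solution, which is the robustness property the numerical experiments probe.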
We report on the optical properties of mist-CVD-grown alpha gallium oxide, with the observation of excitonic absorption in spectral responsivity measurements. 163 nm of gallium oxide was grown on sapphire using gallium acetylacetonate as the starting solution at a substrate temperature of 450 °C. The film was found to be crystalline and of alpha phase, with an on-axis full width at half maximum of 92 arcsec as confirmed from X-ray diffraction scans. The Tauc plot extracted from absorption spectroscopy exhibited two transitions in the UV regime, at 5.3 eV and 5.6 eV, corresponding to excitonic absorption and direct band-to-band transition respectively. The exciton binding energy was extracted to be 114 meV from spectral responsivity measurements. Further, metal-semiconductor-metal photodetectors with lateral interdigitated geometry were fabricated on the film. A sharp band edge was observed at 230 nm in the spectral response, with a peak responsivity of around 1 A/W at a bias of 20 V. The UV-to-visible rejection ratio was found to be around 100, while the dark current was measured to be around 0.1 nA.
physics
The channel modeling of unmanned aerial vehicle (UAV)-based free-space optical (FSO) links with nonzero boresight pointing error is the subject of this paper. In particular, utilizing the log-normal turbulence model, we propose a novel closed-form statistical channel model for UAV-based FSO links that takes into account the effect of nonzero boresight pointing errors. Subsequently, utilizing the Gamma-Gamma turbulence model, we propose a novel channel characterization for such links that is valid under moderate-to-strong turbulence conditions. The accuracy of the proposed models is verified via Monte-Carlo simulations. The proposed models are more tractable and suitable for the analysis of such UAV-based FSO links.
electrical engineering and systems science
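A sketch of the Monte-Carlo baseline against which such closed-form channel models are typically verified: composite gain = (log-normal turbulence) x (Gaussian-beam pointing loss), with the radial displacement drawn around a nonzero boresight offset. The Farid-Hranilovic pointing-loss form is assumed here, and all numerical parameters are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 1_000_000

# Log-normal turbulence h_a = exp(2X), normalized so E[h_a] = 1.
sigma_x = 0.25                      # log-amplitude std (weak turbulence)
h_a = np.exp(2 * rng.normal(-sigma_x**2, sigma_x, n))

# Gaussian-beam pointing loss with nonzero boresight:
# h_p = A0 * exp(-2 r^2 / w_eq^2), r = radial displacement at the receiver.
A0, w_eq = 0.8, 2.5                 # collected fraction at r = 0; eq. beamwidth (m)
boresight = (0.5, 0.3)              # fixed offset (m) -> "nonzero boresight"
sigma_j = 0.4                       # jitter std per axis (m), e.g. UAV hovering
dx = rng.normal(boresight[0], sigma_j, n)
dy = rng.normal(boresight[1], sigma_j, n)
h_p = A0 * np.exp(-2 * (dx**2 + dy**2) / w_eq**2)

h = h_a * h_p                       # composite channel gain (path loss omitted)
print(f"E[h] = {h.mean():.3f},  P(h < 0.1) = {(h < 0.1).mean():.3f}")
```

Histogramming h from such a simulation and overlaying the proposed closed-form density is the usual verification step the abstract refers to.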
I review the parametrisation of the full set of $\Lambda_b\to\Lambda^* (1520)$ form factors in the framework of Heavy Quark Expansion, including next-to-leading-order $\mathcal{O}(\alpha_s)$ and, for the first time, next-to-leading-power $\mathcal{O}(1/m_b)$ corrections. The unknown hadronic parameters are obtained by performing a fit to recent lattice QCD calculations. I investigate the compatibility of the Heavy Quark Expansion and the current lattice data, finding tension between these two approaches in the case of tensor and pseudo-tensor form factors, whose origin could come from an underestimation of the current lattice QCD uncertainties and higher order terms in the Heavy Quark Expansion.
high energy physics phenomenology
Fenyves BCI-algebras are BCI-algebras that satisfy the Bol-Moufang identities. In this paper, the holomorphy of BCI-algebras is studied. It is shown that whenever a loop and its holomorph are BCI-algebras, the former is p-semisimple if and only if the latter is p-semisimple. Whenever a loop and its holomorph are BCI-algebras, it is established that the former is a BCK-algebra if and only if the latter has a BCK-subalgebra. Moreover, the holomorphy of the associative and some non-associative Fenyves BCI-algebras is also studied.
mathematics
The electromagnetic form factors of octet baryons are investigated with nonlocal chiral effective theory. The nonlocal interaction generates both the regulator, which makes the loop integral convergent, and the $Q^2$ dependence of the form factors at tree level. Both octet and decuplet intermediate states are included in the one-loop calculation. The momentum dependence of the baryon form factors is studied up to 1 GeV$^2$ with the same number of parameters as for the nucleon form factors. The obtained magnetic moments of all the octet baryons, as well as the radii, are in good agreement with the experimental data and/or lattice simulations.
high energy physics phenomenology
Confinement is a ubiquitous mechanism in nature, whereby particles feel an attractive force that increases without bound as they separate. A prominent example is color confinement in particle physics, in which baryons and mesons are produced by quark confinement. Analogously, confinement can also occur in low-energy quantum many-body systems when elementary excitations are confined into bound quasiparticles. Here, we report the first observation of magnetic domain wall confinement in interacting spin chains with a trapped-ion quantum simulator. By measuring how correlations spread, we show that confinement can dramatically suppress information propagation and thermalization in such many-body systems. We are able to quantitatively determine the excitation energy of domain wall bound states from non-equilibrium quench dynamics. Furthermore, we study the number of domain wall excitations created for different quench parameters, in a regime that is difficult to model with classical computers. This work demonstrates the capability of quantum simulators for investigating exotic high-energy physics phenomena, such as quark collision and string breaking.
quantum physics
Bars inhabit the majority of local-Universe disk galaxies and may be important drivers of galaxy evolution through the redistribution of gas and angular momentum within disks. We investigate the star formation and gas properties of bars in galaxies spanning a wide range of masses, environments, and star formation rates using the MaNGA galaxy survey. Using a robustly-defined sample of 684 barred galaxies, we find that fractional (or scaled) bar length correlates with the host's offset from the star-formation main sequence. Considering the morphology of the H$\alpha$ emission we separate barred galaxies into different categories, including barred, ringed, and central configurations, together with H$\alpha$ detected at the ends of a bar. We find that only low-mass galaxies host star formation along their bars, and that this is located predominantly at the leading edge of the bar itself. Our results are supported by recent simulations of massive galaxies, which show that the position of star formation within a bar is regulated by a combination of shear forces, turbulence and gas flows. We conclude that the physical properties of a bar are mostly governed by the existing stellar mass of the host galaxy, but that they also play an important role in the galaxy's ongoing star formation.
astrophysics
The possibility that primordial black holes (PBHs) form a part of dark matter has long been considered, but it is poorly constrained in the $1-100~M_{\odot}$ (stellar) mass range. However, renewed interest in PBHs in this mass window was triggered by the discovery at LIGO of merger events of black-hole binaries. Fast radio bursts (FRBs) are bright radio transients with millisecond duration and a high all-sky occurrence rate. The lensing effect of these bursts has been proposed as one of the cleanest probes for constraining the presence of PBHs in the stellar mass window. In this paper, we first investigate constraints on the abundance of PBHs from the latest FRB observations, for both the monochromatic mass distribution and three other popular extended mass distributions (EMDs). We find that constraints from currently public FRB observations are relatively weaker than those from existing gravitational wave detections. Furthermore, we forecast the constraining power of future FRB observations on the abundance of PBHs, taking into account different mass distributions of PBHs and different redshift distributions of FRBs. Finally, we find that constraints on the EMD parameter space from $\sim10^5$ FRBs with $\overline{\Delta t}\leq1 ~\rm ms$ would be comparable with what can be constrained from gravitational wave events. It is foreseen that upcoming complementary multi-messenger observations will yield considerable constraints on the possibilities of PBHs in this intriguing mass window.
astrophysics
Two phase transitions in the tetragonal strongly correlated electron system CeNiAsO were probed by neutron scattering and zero field muon spin rotation. For $T <T_{N1}$ = 8.7(3) K, a second order phase transition yields an incommensurate spin density wave with wave vector $\textbf{k} = (0.44(4), 0, 0)$. For $T < T_{N2}$ = 7.6(3) K, we find co-planar commensurate order with a moment of $0.37(5)~\mu_B$, reduced to $30 \%$ of the saturation moment of the $|\pm\frac{1}{2}\rangle$ Kramers doublet ground state, which we establish by inelastic neutron scattering. Muon spin rotation in $\rm CeNiAs_{1-x}P_xO$ shows the commensurate order only exists for x $\le$ 0.1 so the transition at $x_c$ = 0.4(1) is from an incommensurate longitudinal spin density wave to a paramagnetic Fermi liquid.
condensed matter
We calculate the signal rate of hypothetical heavy neutral leptons (HNL or sterile neutrinos) from kaon decays expected in the framework of the SHiP experiment. The kaons are produced in the hadronic shower initiated in the beam-dump mode by 400 GeV protons from CERN SPS. For a sufficiently light HNL (when the decays are kinematically allowed) we find kaon decays to be a noticeably richer source of HNL as compared to $D$-meson decays adopted in previous studies of the HNL phenomenology at SHiP. In particular, SHiP is capable of fully exploring the central part of the kinematically allowed region of the HNL mass and mixing with electron and muon neutrinos down to the lower cosmological bound. The latter is associated with HNL decays in the early Universe to energetic products rescattering off and thus destroying light nuclei produced at the primordial nucleosynthesis. A consistency of the HNL model with smaller mixing would require either a hierarchy -- much larger mixing of all the HNL with tau neutrino -- or non-standard cosmology and new ingredients in the HNL sector, closing the room for the minimal non-seesaw type I model with sterile neutrinos lighter than kaons.
high energy physics phenomenology
The log-rank test is most powerful under proportional hazards (PH). In practice, non-PH patterns are often observed in clinical trials, such as in immuno-oncology; therefore, alternative methods are needed to restore the efficiency of statistical testing. Three categories of testing methods were evaluated, including weighted log-rank tests, Kaplan-Meier curve-based tests (including weighted Kaplan-Meier and Restricted Mean Survival Time, RMST), and combination tests (including the Breslow test, Lee's combo test, and the MaxCombo test). Nine scenarios representing PH and various non-PH patterns were simulated. The power, type I error, and effect estimates of each method were compared. In general, all tests control type I error well. No single test is most powerful across all scenarios. In the absence of prior knowledge regarding the PH or non-PH patterns, the MaxCombo test is relatively robust across patterns. Since the treatment effect changes over time under non-PH, the overall profile of the treatment effect may not be represented comprehensively by a single measure. Thus, multiple measures of the treatment effect should be pre-specified as sensitivity analyses to evaluate the totality of the data.
statistics
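As a concrete reference for the tests being compared, the sketch below computes Fleming-Harrington $G(\rho,\gamma)$ weighted log-rank statistics and a MaxCombo statistic as their maximum in absolute value. It assumes the usual $G(0,0)$, $G(1,0)$, $G(0,1)$, $G(1,1)$ combination and omits the multivariate-normal p-value calculation; the paper's exact implementation may differ.

```python
import numpy as np

def fh_logrank_z(time, event, group, rho=0.0, gamma=0.0):
    """Fleming-Harrington G(rho, gamma) weighted log-rank Z statistic.

    time: survival/censoring times; event: 1 = event, 0 = censored;
    group: 0/1 arm indicator. Weights use the left-continuous pooled
    Kaplan-Meier estimate S(t-).
    """
    num = den = 0.0
    S = 1.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = S**rho * (1.0 - S)**gamma
        num += w * (d1 - d * n1 / n)                       # observed - expected
        if n > 1:
            den += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        S *= 1.0 - d / n                                   # update KM after using S(t-)
    return num / np.sqrt(den)

def maxcombo(time, event, group):
    zs = [fh_logrank_z(time, event, group, r, g)
          for r, g in [(0, 0), (1, 0), (0, 1), (1, 1)]]
    return max(abs(z) for z in zs)
```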
Atherosclerosis, hardening of the arteries, originates from small plaque in the arteries; it is a major cause of disability and premature death in the United States and worldwide. In this paper, we study the bifurcation of a highly nonlinear and highly coupled PDE model describing the growth of arterial plaque in the early stage of atherosclerosis. The model involves LDL and HDL cholesterols, macrophage cells as well as foam cells, with the interface separating the plaque and blood flow regions being a free boundary. We establish finite branches of symmetry-breaking stationary solutions which bifurcate from the radially symmetric solution. Since plaque in reality is unlikely to be strictly radially symmetric, our result would be useful to explain the asymmetric shapes of plaque.
mathematics
In this paper, we consider the existence and uniqueness of global weak solutions for the two-component Novikov system. We first recall some results and definitions concerning strong and weak solutions of the system; then, using the method of approximation by smooth solutions, we prove the existence and uniqueness of global weak solutions of the system.
mathematics
We discuss the determination of electroweak parameters from hadron collider observables, focusing on the $W$ boson mass measurement. We revise the procedures adopted in the literature to include, in the experimental analysis, the uncertainty due to our imperfect knowledge of the proton structure. We show how the treatment of the proton parton density function (PDF) uncertainty as a source of systematic error leads to the automatic inclusion in the fit of the bin-bin correlation of the kinematic distributions with respect to PDF variations. In the case of the determination of $M_W$ from the charged lepton transverse momentum distribution, we observe that the inclusion of this correlation factor yields a strong reduction of the PDF uncertainty, given a sufficiently good control over all the other error sources. This improvement depends on a systematic accounting of the features of the QCD-based PDF model, and it is achieved relying only on the information available in current PDF sets. While a realistic quantitative estimate requires taking into account the details of the experimental systematics, we argue that, in perspective, the proton PDF uncertainty will not be a bottleneck for precision measurements.
high energy physics phenomenology
We identify and characterize compact dwarf starburst (CDS) galaxies in the RESOLVE survey, a volume-limited census of galaxies in the local universe, to probe whether this population contains any residual ``blue nuggets,'' a class of intensely star-forming compact galaxies first identified at high redshift $z$. Our 50 low-$z$ CDS galaxies are defined by dwarf masses (stellar mass $M_* < 10^{9.5}$ M$_{\odot}$), compact bulged-disk or spheroid-dominated morphologies (using a quantitative criterion, $\mu_\Delta > 8.6$), and specific star formation rates above the defining threshold for high-$z$ blue nuggets ($\log$ SSFR [Gyr$^{-1}] > -0.5$). Across redshifts, blue nuggets exhibit three defining properties: compactness relative to contemporaneous galaxies, abundant cold gas, and formation via compaction in mergers or colliding streams. Those with halo mass below $M_{\rm halo} \sim 10^{11.5}$ M$_{\odot}$ may in theory evade permanent quenching and cyclically refuel until the present day. Selected only for compactness and starburst activity, our CDS galaxies generally have $M_{\rm halo} \lesssim 10^{11.5}$ M$_{\odot}$ and gas-to-stellar mass ratio $\gtrsim$1. Moreover, analysis of archival DECaLS photometry and new 3D spectroscopic observations for CDS galaxies reveals a high rate of photometric and kinematic disturbances suggestive of dwarf mergers. The SSFRs, surface mass densities, and number counts of CDS galaxies are compatible with theoretical and observational expectations for redshift evolution in blue nuggets. We argue that CDS galaxies represent a maximally-starbursting subset of traditional compact dwarf classes such as blue compact dwarfs and blue E/S0s. We conclude that CDS galaxies represent a low-$z$ tail of the blue nugget phenomenon formed via a moderated compaction channel that leaves open the possibility of disk regrowth and evolution into normal disk galaxies.
astrophysics
We report on the critical properties of minimally polydisperse crystals, hexagonal in two dimensions and face-centered cubic in three, at the isostatic jamming point. The force and gap distributions display power-law tails at small values. The vibrational density of states (VDOS) is flat. The scaling behavior of the forces of extended floppy modes and the VDOS are universal and in agreement with an infinite-dimensional mean-field theory and with maximally amorphous packings down to two dimensions. The distributions of gaps and of forces of localized floppy modes of near-crystals appear non-universal. A small fraction of normal modes exhibit partial localization at low frequency. The majority of normal modes are delocalized, exhibiting a characteristic scaling of the inverse participation ratio with frequency. The packing fraction and the order at jamming decay linearly and quadratically, respectively, with polydispersity down to the maximally amorphous state.
condensed matter
Motion is one of the main sources for artifacts in magnetic resonance (MR) images. It can have significant consequences on the diagnostic quality of the resultant scans. Previously, supervised adversarial approaches have been suggested for the correction of MR motion artifacts. However, these approaches suffer from the limitation of required paired co-registered datasets for training which are often hard or impossible to acquire. Building upon our previous work, we introduce a new adversarial framework with a new generator architecture and loss function for the unsupervised correction of severe rigid motion artifacts in the brain region. Quantitative and qualitative comparisons with other supervised and unsupervised translation approaches showcase the enhanced performance of the introduced framework.
electrical engineering and systems science
QA models based on pretrained language models have achieved remarkable performance on various benchmark datasets. However, QA models do not generalize well to unseen data that falls outside the training distribution, due to distributional shifts. Data augmentation (DA) techniques which drop or replace words have been shown to be effective in regularizing the model from overfitting to the training data. Yet, they may adversely affect QA tasks since they incur semantic changes that may lead to wrong answers. To tackle this problem, we propose a simple yet effective DA method based on a stochastic noise generator, which learns to perturb the word embeddings of the input questions and context without changing their semantics. We validate the performance of QA models trained with our word embedding perturbation on a single source dataset, across five different target domains. The results show that our method significantly outperforms the baseline DA methods. Notably, the model trained with ours outperforms the model trained with more than 240K artificially generated QA pairs.
computer science
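A hedged PyTorch sketch of the core idea, a learnable zero-mean noise generator acting on word embeddings, is shown below. The module name, the per-dimension parameterization, and the training-only gating are illustrative assumptions; the paper's generator may be conditioned differently and trained with its own objective.

```python
import torch
import torch.nn as nn

class EmbeddingNoiser(nn.Module):
    """Learnable zero-mean Gaussian perturbation of word embeddings.

    Per-dimension noise scales are learned; sampling is active only in
    training mode, so evaluation sees the clean embeddings unchanged.
    """
    def __init__(self, dim, init_log_sigma=-3.0):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.full((dim,), init_log_sigma))

    def forward(self, emb):                    # emb: (batch, seq_len, dim)
        if not self.training:
            return emb
        eps = torch.randn_like(emb)            # reparameterized sampling
        return emb + eps * self.log_sigma.exp()

# usage sketch: noised = EmbeddingNoiser(768)(embedding_layer(input_ids))
```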
We present a high-precision temporal-spatial phase-demodulation algorithm for phase-shifting interferometry (PSI) affected by random/systematic phase-stepping errors. Laser interferometers in standard optical shops suffer from several error sources, including random phase-shift deviations. Even calibrated phase-shifters do not achieve floating-point linear accuracy, as routinely obtained in multimedia video-projectors for fringe-projection profilometry. In standard optical shops, calibrated phase-shifting interferometers suffer from nonlinearities due to vibrations, turbulence, and environmental fluctuations (temperature, pressure, humidity, air composition), even under controlled laboratory conditions. These random phase-step errors, even if small, increase the uncertainty of the phase measurement. This is particularly significant if the wavefront tolerance is tightened for high-precision optics. We show that these phase-step errors preclude high-precision wavefront measurements because the uncertainty increases to around lambda/10. We develop an analytical expression based on optical-wavefront formalism showing that these phase-step nonlinearities appear as a spurious conjugate signal degrading the desired wavefront. Removing this spurious conjugate constitutes the central objective of the proposed nonlinear phase-shifting algorithm (nPSA). Using this nPSA algorithm we demodulate experimental interferograms subject to small vibrations and phase-shifter nonlinearities, obtaining a high-precision, spurious-free demodulated wavefront. We show that our artifact-free temporal-spatial quadrature filtering accomplishes a wavefront precision equivalent to that obtained from floating-point linear phase-shifting interferometry.
electrical engineering and systems science
The principal goal of Group Testing (GT) is to identify a small subset of "defective" items from a large population, by grouping items into as few test pools as possible. The test outcome of a pool is positive if it contains at least one defective item, and is negative otherwise. GT algorithms are utilized in numerous applications, and in many of them maintaining the privacy of the tested items, namely, keeping secret whether they are defective or not, is critical. In this paper, we consider a scenario where there is an eavesdropper (Eve) who is able to observe a subset of the GT outcomes (pools). We propose a new non-adaptive Secure Group Testing (SGT) scheme based on information-theoretic principles. The new proposed test design keeps the eavesdropper ignorant regarding the items' status. Specifically, when the fraction of tests observed by Eve is $0 \leq \delta <1$, we prove that with the naive Maximum Likelihood (ML) decoding algorithm the number of tests required for both correct reconstruction at the legitimate user (with high probability) and negligible information leakage to Eve is $\frac{1}{1-\delta}$ times the number of tests required with no secrecy constraint for the fixed $K$ regime. By a matching converse, we completely characterize the Secure GT capacity. Moreover, we consider the Definitely Non-Defective (DND) computationally efficient decoding algorithm, proposed in the literature for non-secure GT. We prove that with the new secure test design, for $\delta < 1/2$, the number of tests required, without any constraint on $K$, is at most $\frac{1}{1/2-\delta}$ times the number of tests required with no secrecy constraint.
computer science
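The two overhead factors quoted in the abstract translate directly into a small helper; the function below simply evaluates $1/(1-\delta)$ for ML decoding and $1/(1/2-\delta)$ for the DND decoder.

```python
def secure_gt_overhead(delta, decoder="ML"):
    """Multiplicative blow-up in the number of tests, relative to
    non-secure group testing, when Eve observes a fraction delta of pools."""
    if decoder == "ML":               # naive maximum-likelihood decoding
        assert 0.0 <= delta < 1.0
        return 1.0 / (1.0 - delta)
    if decoder == "DND":              # efficient definitely-non-defective decoding
        assert 0.0 <= delta < 0.5
        return 1.0 / (0.5 - delta)
    raise ValueError(f"unknown decoder: {decoder}")

# Eve sees 25% of the pools: ML needs ~1.33x the tests, DND at most 4x
print(secure_gt_overhead(0.25), secure_gt_overhead(0.25, "DND"))
```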
It was recently pointed out that the existence of dark energy imposes highly restrictive constraints on effective field theories that satisfy the Swampland conjectures. We provide a critical confrontation of these constraints with the cosmological framework emerging from the Salam-Sezgin model and its string realization by Cvetic, Gibbons, and Pope. We also discuss the implication of the constraints for string model building.
high energy physics theory
We formulate an optimization problem of Hamiltonian design based on the variational principle. Given a variational ansatz for a Hamiltonian, we construct a loss function to be minimised as a weighted sum of relevant Hamiltonian properties, thereby specifying the search query. Using the fractional quantum Hall effect as a test system, we illustrate how the framework can be used to determine a generating Hamiltonian of a finite-size model wavefunction (Moore-Read Pfaffian and Read-Rezayi states), find optimal conditions for an experiment, or "extrapolate" given wavefunctions in a certain universality class from smaller to larger system sizes. We also discuss how the search for approximate generating Hamiltonians may be used to find simpler and more realistic models implementing a given exotic phase of matter via experimentally accessible interaction terms.
quantum physics
We consider the problem of extracting a common structure from multiple tensor datasets. For this purpose, we propose multilinear common component analysis (MCCA), based on Kronecker products of mode-wise covariance matrices. MCCA constructs a common basis, represented by linear combinations of the original variables, which loses as little information from the multiple tensor datasets as possible. We also develop an estimation algorithm for MCCA that guarantees mode-wise global convergence. Numerical studies are conducted to show the effectiveness of MCCA.
statistics
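To make the mode-wise construction concrete, the sketch below pools mode-$k$ covariance (scatter) matrices over a list of same-shape tensors via unfoldings; the simple averaging and the eigenvector-based basis hinted at in the comments are assumptions, not necessarily the paper's estimator.

```python
import numpy as np

def mode_covariance(tensors, mode):
    """Pooled mode-wise covariance (scatter) matrix of same-shape tensors.

    Each tensor is unfolded along `mode` (rows indexed by that mode)
    and the scatter matrices of the unfoldings are averaged.
    """
    C = 0.0
    for T in tensors:
        Tk = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        C = C + Tk @ Tk.T
    return C / len(tensors)

# toy usage: a common basis per mode could be read off from eigenvectors,
# with the Kronecker product of the mode bases spanning the joint space
tensors = [np.random.default_rng(i).standard_normal((4, 5, 6)) for i in range(3)]
C0 = mode_covariance(tensors, 0)                  # (4, 4) mode-0 covariance
eigvals, basis0 = np.linalg.eigh(C0)
```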
Millimeter-wave (mmWave) and Terahertz (THz) will be used in the sixth-generation (6G) wireless systems, especially for indoor scenarios. This paper presents an indoor three-dimensional (3-D) statistical channel model for mmWave and sub-THz frequencies, which is developed from extensive channel propagation measurements conducted in an office building at 28 GHz and 140 GHz in 2014 and 2019. Over 15,000 power delay profiles (PDPs) were recorded to study channel statistics such as the number of time clusters, cluster delays, and cluster powers. All the parameters required in the channel generation procedure are derived from empirical measurement data for 28 GHz and 140 GHz line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios. The channel model is validated by showing that the simulated root mean square (RMS) delay spread and RMS angular spread yield good agreements with measured values. An indoor channel simulation software is built upon the popular NYUSIM outdoor channel simulator, which can generate realistic channel impulse response, PDP, and power angular spectrum.
electrical engineering and systems science
Classical subleading soft graviton theorem in four space-time dimensions determines the gravitational wave-form at late and early retarded time, generated during a scattering or explosion, in terms of the four momenta of the ingoing and outgoing objects. This result was `derived' earlier by taking the classical limit of the quantum soft graviton theorem, and making some assumptions about how to deal with the infrared divergences of the soft factor. In this paper we give a direct proof of this result by analyzing the classical equations of motion of gravity coupled to matter. We also extend the result to the electromagnetic wave-form generated during scattering of charged particles, and present a new conjecture on subsubleading corrections to the gravitational wave-form at early and late retarded time.
high energy physics theory
Magnetic resonance imaging (MRI) is a widely used medical imaging modality. However, due to the limitations in hardware, scan time, and throughput, it is often clinically challenging to obtain high-quality MR images. The super-resolution approach is potentially promising to improve MR image quality without any hardware upgrade. In this paper, we propose an ensemble learning and deep learning framework for MR image super-resolution. In our study, we first enlarged low-resolution images using five commonly used super-resolution algorithms and obtained differentially enlarged image datasets with complementary priors. Then, a generative adversarial network (GAN) is trained with each dataset to generate super-resolution MR images. Finally, a convolutional neural network is used for ensemble learning that synergizes the outputs of the GANs into the final MR super-resolution images. According to our results, the ensemble learning results outperform any single GAN output. Compared with some state-of-the-art deep learning-based super-resolution methods, our approach is advantageous in suppressing artifacts and keeping more image details.
electrical engineering and systems science
In this article, we give a description of the closed cone of curves of the projective bundle $\mathbb{P}(E)$ over a smooth projective surface $X$. Using this, we calculate the nef cone of $\mathbb{P}(E)$ over $X$ in some cases, under suitable assumptions on $X$ as well as on the vector bundle $E$. As an application, we also calculate the Seshadri constants of ample vector bundles in the following cases: (1) for a completely decomposable ample vector bundle on $\mathbb{P}^2$ at a closed point in $\mathbb{P}^2$; (2) for a semistable ample vector bundle with vanishing discriminant on some special ruled surfaces at special points. In all other cases, we give bounds on the Seshadri constants of ample vector bundles on these spaces.
mathematics
We briefly review and expand our recent analysis for all three invariant A,B,D gravitational form factors of the nucleon in holographic QCD. They compare well to the gluonic gravitational form factors recently measured using lattice QCD simulations. The holographic A-term is fixed by the tensor $T=2^{++}$ (graviton) Regge trajectory, and the D-term by the difference between the tensor $T=2^{++}$ (graviton) and scalar $S=0^{++}$ (dilaton) Regge trajectories. The B-term is null in the absence of a tensor coupling to a Dirac fermion in bulk. A first measurement of the tensor form factor A-term is already accessible using the current GlueX data, and therefore the tensor gluonic mass radius, pressure and shear inside the proton, thanks to holography. The holographic A-term and D-term can be expressed exactly in terms of harmonic numbers. The tensor mass radius from the holographic threshold is found to be $\langle r^2_{GT}\rangle \approx (0.57-0.60\,{\rm fm})^2$, in agreement with $\langle r^2_{GT}\rangle \approx (0.62\,{\rm fm})^2$ as extracted from the overall numerical lattice data, and empirical GlueX data. The scalar mass radius is found to be slightly larger $\langle r^2_{GS}\rangle \approx (0.7\,{\rm fm})^2$.
high energy physics phenomenology
Tidal interactions are important in driving spin and orbital evolution in planetary and stellar binary systems, but the fluid dynamical mechanisms responsible remain incompletely understood. One key mechanism is the interaction between tidal flows and convection. Turbulent convection is thought to act as an effective viscosity in damping large-scale tidal flows, but there is a long-standing controversy over the efficiency of this mechanism when the tidal frequency exceeds the turnover frequency of the dominant convective eddies. This high frequency regime is relevant for many applications, such as for tides in stars hosting hot Jupiters. We explore the interaction between tidal flows and convection using hydrodynamical simulations within a local Cartesian model of a small patch of a convection zone of a star or planet. We adopt the Boussinesq approximation and simulate Rayleigh-B\'enard convection, modelling the tidal flow as a background oscillatory shear flow. We demonstrate that the effective viscosity of both laminar and turbulent convection is approximately frequency-independent for low frequencies. When the forcing frequency exceeds the dominant convective frequency, the effective viscosity scales inversely with the square of the tidal frequency. We also show that negative effective viscosities are possible, particularly for high frequency tidal forcing, suggesting the surprising possibility of tidal anti-dissipation. These results are supported by a complementary high frequency asymptotic analysis that extends prior work by Ogilvie & Lesur. We discuss the implications of these results for interpreting the orbital decay of hot Jupiters, and for several other astrophysical problems.
astrophysics
The vacuum expectation values of conserved currents play an essential role in the generalized hydrodynamics of integrable quantum field theories. We use analytic continuation to extend these results for the excited state expectation values in a finite volume. Our formulas are valid for diagonally scattering theories and incorporate all finite size corrections.
high energy physics theory
Discovering and exploiting the causality in deep neural networks (DNNs) are crucial challenges for understanding and reasoning about causal effects (CE) in an explainable visual model. "Intervention" has been widely used for recognizing a causal relation ontologically. In this paper, we propose a causal inference framework for visual reasoning via do-calculus. To study the intervention effects on pixel-level features for causal reasoning, we introduce pixel-wise masking and adversarial perturbation. In our framework, CE is calculated using features in a latent space and perturbed predictions from a DNN-based model. We further provide a first look into the characteristics of the discovered CE of adversarially perturbed images generated by gradient-based methods \footnote{~~https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvImg}. Experimental results show that CE is a competitive and robust index for understanding DNNs when compared with conventional methods such as class-activation mappings (CAMs) on the Chest X-Ray-14 dataset for human-interpretable feature (e.g., symptom) reasoning. Moreover, CE holds promise for detecting adversarial examples, as it possesses distinct characteristics in the presence of adversarial perturbations.
computer science
A D3-D5 intersection gives rise to a defect CFT, wherein the rank of the gauge group jumps by k units across a domain wall. The one-point functions of local operators in this set-up map to overlaps between on-shell Bethe states in the underlying spin chain and a boundary state representing the D5 brane. Focussing on the k=1 case, we extend the construction to gluonic and fermionic sectors, which was prohibitively difficult to achieve for k>1. As a byproduct, we test an all-loop proposal for the one-point functions in the su(2) sector at the half-wrapping order of perturbation theory.
high energy physics theory
Multicast services, whereby a common valuable message needs to reach a whole population of user equipments (UEs), are gaining attention on account of new applications such as vehicular networks. As it proves challenging to guarantee decodability by every UE in a large population, service reliability is indeed the Achilles' heel of multicast transmissions. To circumvent this problem, a two-phase protocol capitalizing on device-to-device (D2D) links between UEs has been proposed, which overcomes the vanishing behavior of the multicast rate. In this paper, we revisit such D2D-aided protocol in the new light of precoding capabilities at the base station (BS). We obtain an enhanced scheme that aims at selecting a subset of UEs who cooperate to spread the common message across the rest of the network via D2D retransmissions. With the objective of maximizing the multicast rate under some outage constraint, we propose an algorithm with provable convergence that jointly identifies the most pertinent relaying UEs and optimizes the precoding strategy at the BS.
electrical engineering and systems science
Humans can only interact with part of the surrounding environment due to biological restrictions. Therefore, we learn to reason about the spatial relationships across a series of observations to piece together the surrounding environment. Inspired by such behavior and the fact that machines also have computational constraints, we propose \underline{CO}nditional \underline{CO}ordinate GAN (COCO-GAN), in which the generator generates images by parts, based on their spatial coordinates as the condition. On the other hand, the discriminator learns to justify realism across multiple assembled patches by global coherence, local appearance, and edge-crossing continuity. Although the full images are never generated during training, we show that COCO-GAN can produce \textbf{state-of-the-art-quality} full images during inference. We further demonstrate a variety of novel applications enabled by teaching the network to be aware of coordinates. First, we perform extrapolation to the learned coordinate manifold and generate off-the-boundary patches. Combining these with the originally generated full image, COCO-GAN can produce images that are larger than the training samples, which we call "beyond-boundary generation". We then showcase panorama generation within a cylindrical coordinate system that inherently preserves horizontally cyclic topology. On the computation side, COCO-GAN has a built-in divide-and-conquer paradigm that reduces memory requisition during training and inference, provides high parallelism, and can generate parts of images on demand.
computer science
We report the first $BV$ light curves and high-resolution spectra of the post-mass transfer binary star WASP 0131+28 to study the absolute properties of extremely low-mass white dwarfs. From the observed spectra, the double-lined radial velocities were derived, and the effective temperature and rotational velocity of the brighter, more massive primary were found to be $T_{\rm eff,1} = 10,000 \pm 200$ K and $v_1\sin$$i$ = 55 $\pm$ 10 km s$^{-1}$, respectively. The combined analysis of the {\it TESS} archive data and ours yielded the accurate fundamental parameters of the program target. The masses were derived to about 1.0 \% accuracy and the radii to 0.6 \%, or better. The secondary component's parameters of $M_2 = 0.200 \pm 0.002$ M$_\odot$, $R_2 = 0.528 \pm 0.003$ R$_\odot$, $T_{\rm eff,2}$ = 11,186 $\pm$ 235 K, and $L_2 = 3.9 \pm 0.3$ L$_\odot$ are in excellent agreement with the evolutionary sequence for a helium-core white dwarf of mass 0.203 M$_\odot$, and indicate that this star is halfway through the constant-luminosity phase. The results presented in this article demonstrate that WASP 0131+28 is an EL CVn eclipsing binary in a thin disk, which is formed through the stable Roche-lobe overflow channel and composed of a main-sequence dwarf with a spectral type A0 and a pre-He white dwarf.
astrophysics
Consistency checks of cosmological data sets are an important tool because they may suggest systematic errors or the type of modifications to $\Lambda$CDM necessary to resolve current tensions. In this work, we derive an analytic method for calculating the level of correlations between model parameters from two correlated cosmological data sets, which complements more computationally expensive simulations. This method is an extension of the Fisher analysis that assumes a Gaussian likelihood and a known data covariance matrix. We apply this method to the SPTpol temperature and polarization CMB spectra (TE and EE). We find weak correlations between $\Lambda$CDM parameters with a 9$\%$ correlation between the TE-only and EE-only constraints on $H_0$ and a 25$\%$ and 32$\%$ correlation for log($A_s$) and $n_s$ respectively. Despite the negative correlations between the TE and EE power spectra, the correlations in the parameters are positive. The TE-EE parameter differences are consistent with zero, with a PTE of 0.53, in contrast to the PTE of 0.017 reported by SPTpol for the consistency of the TE and EE power spectra with $\Lambda$CDM. Using simulations we find that the results of these two tests are independent and that this difference can arise simply from statistical fluctuations. Ignoring correlations in the TT-TE and TE-EE comparisons biases the $\chi^2$ low, artificially making parameters look more consistent. Therefore, we conclude that these correlations need to be accounted for when performing internal consistency checks of the TT vs TE vs EE power spectra for future CMB analyses.
astrophysics
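One standard Fisher-style propagation consistent with this description is sketched below: for generalized-least-squares estimates from two data vectors with known cross-covariance, the parameter cross-correlation follows from the Jacobians and the data covariance blocks. This is a generic textbook construction, not necessarily the paper's exact derivation.

```python
import numpy as np

def param_cross_correlation(J1, J2, C11, C22, C12):
    """Correlation between GLS parameter estimates from two correlated
    data vectors d1, d2 with Cov(d_a, d_b) = C_ab and Jacobians J_a.

    Uses p_hat_a = F_a^{-1} J_a^T C_aa^{-1} d_a with F_a = J_a^T C_aa^{-1} J_a,
    so Cov(p1, p2) = F1^{-1} J1^T C11^{-1} C12 C22^{-1} J2 F2^{-1}.
    """
    F1 = J1.T @ np.linalg.solve(C11, J1)         # Fisher matrices
    F2 = J2.T @ np.linalg.solve(C22, J2)
    cross = np.linalg.solve(
        F1, J1.T @ np.linalg.solve(C11, C12 @ np.linalg.solve(C22, J2))
    ) @ np.linalg.inv(F2)
    s1 = np.sqrt(np.diag(np.linalg.inv(F1)))     # per-parameter standard deviations
    s2 = np.sqrt(np.diag(np.linalg.inv(F2)))
    return cross / np.outer(s1, s2)
```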
There is a growing interest in understanding how humans initiate and hold conversations. The affective understanding of conversations focuses on the problem of how speakers use emotions to react to a situation and to each other. In the CL-Aff Shared Task, the organizers released Get it #OffMyChest dataset, which contains Reddit comments from casual and confessional conversations, labeled for their disclosure and supportiveness characteristics. In this paper, we introduce a predictive ensemble model exploiting the finetuned contextualized word embeddings, RoBERTa and ALBERT. We show that our model outperforms the base models in all considered metrics, achieving an improvement of $3\%$ in the F1 score. We further conduct statistical analysis and outline deeper insights into the given dataset while providing a new characterization of impact for the dataset.
computer science
In this paper we study spin 2 fluctuations around a warped $AdS_3 \times S^2 \times T^4 \times \mathcal{I}_{\rho}$ background in type IIA supergravity with small $\mathcal{N} = (0,4)$ supersymmetry. We find a class of fluctuations, which will be called \textit{universal}, that is independent of the background data and corresponds to operators with scaling dimension $\Delta = 2l +2$, where $l$ is the angular-momentum quantum number on the $S^2$ which realises the $SU(2)_R$ symmetry. We compute the central charge for $\mathcal{N} = (0,4)$ two-dimensional superconformal theories from the action of the spin 2 fluctuations.
high energy physics theory
We introduce a unified framework for the study of multilevel mixed integer linear optimization problems and multistage stochastic mixed integer linear optimization problems with recourse. The framework highlights the common mathematical structure of the two problems and allows for the development of a common algorithmic framework. Focusing on the two-stage case, we investigate, in particular, the nature of the value function of the second-stage problem, highlighting its connection to dual functions and the theory of duality for mixed integer linear optimization problems, and summarize different reformulations. We then present two main solution techniques, one based on a Benders-like decomposition to approximate either the risk function or the value function, and the other one based on cutting plane generation.
mathematics
We present the results of modeling and simulating the Hamamatsu R5912 photomultiplier tube that is used in most of the sites of the Latin American Giant Observatory (LAGO). The model was compared with data from in-operation water Cherenkov detectors (WCD) installed at Bucaramanga-Colombia and Bariloche-Argentina. The LAGO project is an international experiment that spans Latin America at different altitudes, joining more than 35 institutions from 11 countries. It is mainly oriented to basic research on gamma-ray bursts and space weather phenomena. The LAGO network consists of single or small arrays of WCDs composed mainly of a photomultiplier tube and readout electronics that acquire single-particle or extensive air shower events triggered by the interaction of cosmic rays with the Earth's atmosphere.
physics
Thompson sampling is one of the most widely used algorithms for many online decision problems, due to its simplicity in implementation and superior empirical performance over other state-of-the-art methods. Despite its popularity and empirical success, it has remained an open problem whether Thompson sampling can match the minimax lower bound $\Omega(\sqrt{KT})$ for $K$-armed bandit problems, where $T$ is the total time horizon. In this paper, we solve this long-standing open problem by proposing a variant of Thompson sampling called MOTS that adaptively clips the sampling instance of the chosen arm at each time step. We prove that this simple variant of Thompson sampling achieves the minimax optimal regret bound $O(\sqrt{KT})$ for finite time horizon $T$, as well as the asymptotic optimal regret bound for Gaussian rewards when $T$ approaches infinity. To our knowledge, MOTS is the first Thompson sampling type algorithm that achieves the minimax optimality for multi-armed bandit problems.
computer science
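A schematic rendering of "clipping the sampling instance" is given below for unit-variance Gaussian rewards: each Thompson sample is truncated at a UCB-style threshold before the argmax. The specific threshold form and the constant alpha are illustrative assumptions; the paper's tuned quantities may differ.

```python
import numpy as np

def mots(arm_means, T, alpha=2.0, seed=0):
    """Schematic clipped Thompson sampling for unit-variance Gaussian rewards."""
    rng = np.random.default_rng(seed)
    K = len(arm_means)
    n = np.zeros(K)                  # pull counts
    mu = np.zeros(K)                 # empirical means
    for t in range(T):
        if t < K:
            a = t                    # initialize: pull each arm once
        else:
            theta = rng.normal(mu, 1.0 / np.sqrt(n))
            # clip each sample at a UCB-style threshold (illustrative form)
            tau = mu + np.sqrt(alpha / n * np.log(np.maximum(T / (K * n), 1.0)))
            a = int(np.argmax(np.minimum(theta, tau)))
        r = rng.normal(arm_means[a], 1.0)
        n[a] += 1
        mu[a] += (r - mu[a]) / n[a]
    return mu, n

mu_hat, pulls = mots(np.array([0.1, 0.2, 0.5]), T=10_000)
```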
A theory of superexchange interactions between $J$-multiplets in $f$-metal compounds is developed for an adequate description of the Goodenough mechanism induced by electron transfer between the magnetic $f$ orbitals and empty $d$ and $s$ orbitals. The developed multipolar exchange model is applied to ferromagnetic NdN using first-principles microscopic parameters. The multipolar interaction parameters up to ninth order are quantitatively determined, and the significance of the higher-order interactions is confirmed. The basic magnetic properties, such as the magnetization and magnetic susceptibility of NdN, are well reproduced by the model. In the calculated ferromagnetic phase, it is shown that the quadrupole and octupole moments develop (within the ground $\Gamma_8$ multiplets), unambiguously indicating the significance of the multipolar interactions.
condensed matter
Platinum diselenide (PtSe${_2}$) is a two-dimensional (2D) material with outstanding electronic and piezoresistive properties. The material can be grown at low temperatures in a scalable manner which makes it extremely appealing for many potential electronics, photonics, and sensing applications. Here, we investigate the nanocrystalline structure of different PtSe${_2}$ thin films grown by thermally assisted conversion (TAC) and correlate them with their electronic and piezoresistive properties. We use scanning transmission electron microscopy for structural analysis, X-ray photoelectron spectroscopy (XPS) for chemical analysis, and Raman spectroscopy for phase identification. Electronic devices are fabricated using transferred PtSe${_2}$ films for electrical characterization and piezoresistive gauge factor measurements. The variations of crystallite size and their orientations are found to have a strong correlation with the electronic and piezoresistive properties of the films, especially the sheet resistivity and the effective charge carrier mobility. Our findings may pave the way for tuning and optimizing the properties of TAC-grown PtSe${_2}$ towards numerous applications.
condensed matter
Database platform-as-a-service (dbPaaS) is developing rapidly and a large number of databases have been migrated to run on Clouds for the low cost and flexibility. Emerging Clouds rely on tenants to provide the resource specifications for their database workloads. However, tenants tend to over-estimate the resource requirements of their databases, resulting in unnecessarily high cost and low Cloud utilization. A methodology that automatically suggests the "just-enough" resource specification that fulfills the performance requirement of every database workload is therefore profitable. To this end, we propose URSA, a capacity planning and workload scheduling system for dbPaaS Clouds. URSA comprises an online capacity planner, a performance interference estimator, and a contention-aware scheduling engine. The capacity planner identifies the most cost-efficient resource specification for a database workload to achieve the required performance online. The interference estimator quantifies the pressure on the shared resources and the sensitivity of each database workload to shared-resource contention. The scheduling engine schedules the workloads across Cloud nodes carefully to eliminate unfair performance interference between co-located workloads. Our real-system experimental results show that URSA reduces CPU usage by 27.5% and memory usage by 53.4% for database workloads while satisfying their performance requirements. Meanwhile, URSA reduces the performance unfairness between co-located workloads by 42.8% compared with Kubernetes.
computer science
In randomized clinical trials, adjustments for baseline covariates at both design and analysis stages are highly encouraged by regulatory agencies. A recent trend is to use a model-assisted approach for covariate adjustment to gain credibility and efficiency while producing asymptotically valid inference even when the model is incorrect. In this article we present three principles for model-assisted inference in simple or covariate-adaptive randomized trials: (1) guaranteed efficiency gain principle, a model-assisted method should often gain but never hurt efficiency; (2) validity and universality principle, a valid procedure should be universally applicable to all commonly used randomization schemes; (3) robust standard error principle, variance estimation should be heteroscedasticity-robust. To fulfill these principles, we recommend a working model that includes all covariates utilized in randomization and all treatment-by-covariate interaction terms. Our conclusions are based on asymptotic theory with a generality that has not appeared in the literature, as most existing results are about linear contrasts of outcomes rather than the joint distribution and most existing inference results under covariate-adaptive randomization are special cases of our theory. Our theory also reveals distinct results between cases of two arms and multiple arms.
statistics
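The recommended working model is easy to state in code. The statsmodels sketch below fits all covariates plus all treatment-by-covariate interactions on simulated data and reports heteroscedasticity-robust (HC3) standard errors; the simulated data and variable names are of course illustrative, not from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "x1": rng.standard_normal(n),     # covariates used in randomization
    "x2": rng.standard_normal(n),
})
df["y"] = 1.0 * df.treat + 0.5 * df.x1 - 0.3 * df.x2 + rng.standard_normal(n)

# center covariates so the `treat` coefficient targets the average effect
df[["x1", "x2"]] -= df[["x1", "x2"]].mean()

# working model: all covariates + all treatment-by-covariate interactions,
# with heteroscedasticity-robust (HC3) standard errors
fit = smf.ols("y ~ treat * (x1 + x2)", data=df).fit(cov_type="HC3")
print(fit.summary().tables[1])
```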
We study signatures of the energy landscape's evolution through the crystal-to-glass transition by compressing 2D finite aggregates of oil droplets. Droplets of two distinct sizes are used to compose small aggregates in an aqueous environment. Aggregates range from perfectly ordered monodisperse single crystals to disordered bidisperse glasses. The aggregates are compressed between two parallel boundaries, with one acting as a force sensor. The compression force provides a signature of the aggregate composition and gives insight into the energy landscape. In particular, crystals dissipate all the stored energy through single catastrophic fracture events whereas the glassy aggregates break step-by-step. Remarkably, the yielding properties of the 2D aggregates are strongly impacted by even a small amount of disorder.
condensed matter
Kilonovae produced by the coalescence of compact binaries with at least one neutron star are promising standard sirens for an independent measurement of the Hubble constant ($H_0$). Through their detection via follow-up of gravitational-wave (GW) events, short gamma-ray bursts (sGRBs), or optical surveys, a large sample of kilonovae (even without GW data) can be used for $H_0$ constraints. Here, we show a measurement of $H_0$ using light curves associated with four sGRBs, assuming these are attributable to kilonovae, combined with GW170817. Including a systematic uncertainty on the models that is as large as the statistical ones, we find $H_0 = 73.8^{+6.3}_{-5.8}$\,$\mathrm{km}$ $\mathrm{s}^{-1}$ $\mathrm{Mpc}^{-1}$ and $H_0 = 71.2^{+3.2}_{-3.1}$\,$\mathrm{km}$ $\mathrm{s}^{-1}$ $\mathrm{Mpc}^{-1}$ for two different kilonova models that are consistent with the local and inverse-distance ladder measurements. For a given model, this measurement is about a factor of 2-3 more precise than the standard-siren measurement for GW170817 using only GWs.
astrophysics