text: string (11 to 9.77k chars) · label: string (2 to 104 chars)
Clusters of galaxies are used to map the large-scale structure of the universe and as probes of its evolution. They can be observed through the Sunyaev-Zel'dovich (SZ) effect. In this respect, spectro-imaging at low frequency resolution is today an important tool for the study of clusters of galaxies. We have developed KISS (KIDs-Interferometer-Spectrum-Survey), a spectrometric imager dedicated to the secondary anisotropies of the Cosmic Microwave Background (CMB). The multi-frequency approach makes it possible to improve component separation with respect to predecessor experiments. In this paper we first provide a description of the scientific context and the state of the art of SZ observations. We then describe the KISS instrument. Finally, we show preliminary results of the ongoing commissioning campaign.
astrophysics
The development of ultra-intense laser-based sources of high-energy ions is an important goal, with a variety of potential applications. One of the barriers to achieving this goal is the need to maximize the conversion efficiency from laser energy to ion energy. We apply a new approach to this problem, in which we use an evolutionary algorithm to optimize conversion efficiency by exploring variations of the target density profile with thousands of one-dimensional particle-in-cell (PIC) simulations. We then compare this "optimal" target identified by the one-dimensional PIC simulations to more conventional choices, such as with an exponential scale length pre-plasma, with fully three-dimensional PIC simulations. The optimal target outperforms the conventional targets in terms of maximum ion energy by 20% and shows a significant enhancement of conversion efficiency to high-energy ions. This target geometry enhances laser coupling to the electrons, while still allowing the laser to strongly reflect from an effectively thin target. These results underscore the potential for this statistics-driven approach to guide research into optimizing laser-plasma simulations and experiments.
physics
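As a toy illustration of the statistics-driven optimization loop described in the abstract above, the sketch below evolves a discretized target density profile with selection, crossover, and mutation. The `fitness` surrogate, profile resolution, and population settings are hypothetical stand-ins; in the actual study each evaluation would be a one-dimensional PIC simulation.

```python
# Minimal sketch of an evolutionary search over target density profiles.
# A cheap analytic surrogate stands in for the 1D particle-in-cell run.
import numpy as np

rng = np.random.default_rng(0)

N_CELLS = 32          # density profile resolution (illustrative)
POP, GENS = 40, 60    # population size and generations (illustrative)

def fitness(profile):
    # Hypothetical surrogate: reward density concentrated near the rear
    # surface while penalizing total mass (NOT the paper's objective).
    weights = np.linspace(0.0, 1.0, N_CELLS)
    return float(np.dot(profile, weights) - 0.5 * profile.sum() ** 2 / N_CELLS)

pop = rng.uniform(0.0, 1.0, size=(POP, N_CELLS))
for gen in range(GENS):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-POP // 4:]]          # keep best quarter
    parents = elite[rng.integers(0, len(elite), (POP, 2))]
    cut = rng.integers(1, N_CELLS, POP)
    mask = np.arange(N_CELLS) < cut[:, None]             # one-point crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    pop += rng.normal(0.0, 0.05, pop.shape)              # Gaussian mutation
    pop = np.clip(pop, 0.0, 1.0)

best = pop[np.argmax([fitness(p) for p in pop])]
print("best surrogate fitness:", fitness(best))
```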
Many real-world time series, such as in health, have changepoints where the system's structure or parameters change. Since changepoints can indicate critical events such as the onset of illness, it is highly important to detect them. However, existing methods for changepoint detection (CPD) often require user-specified models and cannot recognize changes that occur gradually or at multiple timescales. To address both, we show how CPD can be treated as a supervised learning problem, and propose a new deep neural network architecture to efficiently identify both abrupt and gradual changes at multiple timescales from multivariate data. Our proposed pyramid recurrent neural network (PRN) provides scale-invariance using wavelets and pyramid analysis techniques from multi-scale signal processing. Through experiments on synthetic and real-world datasets, we show that PRN can detect abrupt and gradual changes with higher accuracy than the state of the art and can extrapolate to detect changepoints at novel scales not seen in training.
computer science
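A minimal sketch of the multi-scale idea in the abstract above: coarsened copies of a multivariate window are scored by a recurrent network with shared weights, so a change at any timescale can trigger detection. The average-pooling pyramid and single-GRU readout are illustrative simplifications of the paper's wavelet-based PRN, not its actual architecture.

```python
# Toy pyramid-of-scales recurrent detector: each level halves the time
# resolution (a crude stand-in for a wavelet decomposition), a shared GRU
# scores every level, and levels are pooled so a change at any scale fires.
import torch
import torch.nn as nn

class PyramidRNN(nn.Module):
    def __init__(self, n_channels, hidden=32, levels=3):
        super().__init__()
        self.levels = levels
        self.pool = nn.AvgPool1d(kernel_size=2)       # coarsening operator
        self.gru = nn.GRU(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)              # changepoint score

    def forward(self, x):                             # x: (batch, time, chan)
        scores = []
        level = x
        for _ in range(self.levels):
            h, _ = self.gru(level)                    # shared weights across scales
            scores.append(self.head(h[:, -1]))        # score from last hidden state
            level = self.pool(level.transpose(1, 2)).transpose(1, 2)
        return torch.sigmoid(torch.stack(scores, 0).max(0).values)

model = PyramidRNN(n_channels=4)
window = torch.randn(8, 64, 4)                        # 8 windows, 64 steps, 4 channels
print(model(window).shape)                            # torch.Size([8, 1])
```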
This paper studies the downlink of a cloud radio access network (C-RAN) in which a centralized processor (CP) communicates with mobile users through base stations (BSs) that are connected to the CP via finite-capacity fronthaul links. Information theoretically, the downlink of a C-RAN is modeled as a two-hop broadcast-relay network. Among the various transmission and relaying strategies for such model, this paper focuses on the compression strategy, in which the CP centrally encodes the signals to be broadcasted jointly by the BSs, then compresses and sends these signals to the BSs through the fronthaul links. This paper characterizes an achievable rate region for a generalized compression strategy with Marton's multicoding for broadcasting and multivariate compression for fronthaul transmission. We then compare this rate region with the distributed decode-forward (DDF) scheme, which achieves the capacity of the general relay networks to within a constant gap, and show that the difference lies in that DDF performs Marton's multicoding and multivariate compression jointly as opposed to successively as in the compression strategy. A main result of this paper is that under the assumption that the fronthaul links are subject to a \emph{sum} capacity constraint this difference is immaterial, so the successive encoding based compression strategy can already achieve the capacity region of the C-RAN to within a constant gap, where the gap is independent of the channel parameters and the power constraints at the BSs. For the special case of the Gaussian network, we further establish that under individual fronthaul constraints, the compression strategy achieves to within a constant gap to the \emph{sum} capacity of the C-RAN.
computer science
We propose an adaptive sampling approach for multiple testing which aims to maximize statistical power while ensuring anytime false discovery control. We consider $n$ distributions whose means are partitioned by whether they are below or equal to a baseline (nulls), versus above the baseline (actual positives). In addition, each distribution can be sequentially and repeatedly sampled. Inspired by the multi-armed bandit literature, we provide an algorithm that takes as few samples as possible to exceed a target true positive proportion (i.e. the proportion of actual positives discovered) while giving anytime control of the false discovery proportion (nulls predicted as actual positives). Our sample complexity results match known information-theoretic lower bounds, and through simulations we show a substantial performance improvement over uniform sampling and an adaptive elimination-style algorithm. Given the simplicity of the approach, and its sample efficiency, the method has promise for wide adoption in the biological sciences, clinical testing for drug discovery, and online A/B/n testing problems.
statistics
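The following toy loop conveys the bandit flavor of the abstract above: each arm keeps an anytime confidence interval, sampling favors undecided arms optimistically, and an arm is declared a discovery once its lower bound clears the baseline. The Hoeffding-style radius, stopping rule, and Gaussian arms are simplified assumptions rather than the paper's calibrated procedure.

```python
# Toy adaptive sampler: confidence intervals per arm, optimistic sampling
# of undecided arms, discoveries = arms whose lower bound clears the baseline.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.0, 0.0, 0.0, 0.3, 0.5])    # last two are actual positives
n_arms, baseline, delta = len(true_means), 0.0, 0.05

counts = np.ones(n_arms)
sums = rng.normal(true_means, 1.0)                  # one initial pull per arm

def radius(t):
    # Anytime Hoeffding-style confidence radius (union bound over arms, time).
    return np.sqrt(2 * np.log(4 * n_arms * t ** 2 / delta) / t)

for _ in range(5000):                               # total sampling budget
    means = sums / counts
    rad = radius(counts)
    undecided = (means - rad <= baseline) & (means + rad >= baseline)
    if not undecided.any():
        break
    # Optimistically sample the undecided arm with the largest upper bound.
    arm = int(np.argmax(np.where(undecided, means + rad, -np.inf)))
    sums[arm] += rng.normal(true_means[arm], 1.0)
    counts[arm] += 1

discoveries = np.where(sums / counts - radius(counts) > baseline)[0]
print("discovered arms:", discoveries, "after", int(counts.sum()), "samples")
```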
Forming the Moon by a high-angular momentum impact may explain the Earth-Moon isotopic similarities; however, the post-impact angular momentum needs to be reduced by a factor of 2 or more to the current value (1 L_EM) after the Moon forms. Capture into the evection resonance, occurring when the lunar perigee precession period equals one year, could remove the angular momentum excess. However, the appropriate angular momentum removal appears sensitive to the tidal model and chosen tidal parameters. In this work, we use a constant-time-delay tidal model to explore the Moon's orbital evolution through evection. We find that exit from formal evection occurs early and that subsequently, the Moon enters a quasi-resonance regime, in which evection still regulates the lunar eccentricity even though the resonance angle is no longer librating. Although not in resonance proper, during quasi-resonance angular momentum is continuously removed from the Earth-Moon system and transferred to Earth's heliocentric orbit. The final angular momentum, set by the timing of quasi-resonance escape, is a function of the ratio of tidal strength in the Moon and Earth and the absolute rate of tidal dissipation in the Earth. We consider a physically motivated model for tidal dissipation in the Earth as the mantle cools from a molten to a partially molten state. We find that as the mantle solidifies, increased terrestrial dissipation drives the Moon out of quasi-resonance. For post-impact systems that contain >2 L_EM, final angular momentum values after quasi-resonance escape remain significantly higher than the current Earth-Moon value.
astrophysics
Magnetic impurities on superconductors induce discrete bound levels inside the superconducting gap, known as Yu-Shiba-Rusinov (YSR) states. YSR levels are fully spin-polarized, such that the tunneling between YSR states depends on their relative spin orientation. Here, we use scanning tunneling spectroscopy to resolve the spin dynamics in the tunneling process between two YSR states by experimentally extracting the angle between the spins. To this end, we exploit the ratio of thermally activated and direct spectral features in the measurement to directly extract the relative spin orientation between the two YSR states. We find freely rotating spins down to 7 mK, indicating a purely paramagnetic nature of the impurities. Such a non-collinear spin alignment is essential not only for producing Majorana bound states but also, as an outlook, for manipulating and moving the Majorana state onto the tip.
condensed matter
Stress-induced martensitic transformations enable metastable alloys to exhibit enhanced strain hardening capacity, leading to improved formability and toughness. As is well known from transformation-induced plasticity (TRIP) steels, however, the resulting martensite can limit ductility and fatigue life due to its intrinsic brittleness. In this work, we explore an alloy design strategy that utilizes stress-induced martensitic transformations but does not retain the martensite phase. This strategy is based on the introduction of superelastic nano-precipitates, which exhibit reverse transformation after the initial stress-induced forward transformation. To this end, utilizing ab-initio simulations and thermodynamic calculations, we designed and produced a V45Ti30Ni25 (at%) alloy. In this alloy, TiNi is present as nano-precipitates uniformly distributed within a ductile V-rich body-centered cubic (bcc) beta matrix, as well as being present as a larger matrix phase. We characterized the microstructure of the produced alloy using various scanning electron microscopy (SEM) and transmission electron microscopy (TEM) methods. The bulk mechanical properties of the alloy are demonstrated through tensile tests, and the reversible transformation in each of the TiNi morphologies was confirmed by in-situ TEM micro-pillar compression experiments, in-situ high-energy synchrotron diffraction cyclic tensile tests, indentation experiments, and differential scanning calorimetry experiments. The observed transformation pathways and the variables impacting phase stability are critically discussed.
condensed matter
This paper proposes LANTERN, an approach that learns an analysis transform network for dynamic magnetic resonance imaging from a small dataset. Integrating the strengths of CS-MRI and deep learning, the proposed framework is highlighted in three components: (i) the spatial and temporal domains are sparsely constrained using adaptively trained CNNs; (ii) we introduce an end-to-end framework to learn the parameters in LANTERN, overcoming the difficulty of parameter selection in traditional methods; (iii) compared to existing deep learning reconstruction methods, our reconstruction accuracy is better when the amount of data is limited. Our model is able to fully exploit the redundancy in the spatial and temporal domains of dynamic MR images. We performed quantitative and qualitative analyses of cardiac datasets at different acceleration factors (2x-11x) and different undersampling modes. In comparison with state-of-the-art methods, extensive experiments show that our method achieves consistently better reconstruction performance in terms of three quantitative metrics (PSNR, SSIM and HFEN) under different undersampling patterns and acceleration factors.
electrical engineering and systems science
The capability of making interpretable and self-explanatory decisions is essential for developing responsible machine learning systems. In this work, we study the learning-to-explain problem in the scope of inductive logic programming (ILP). We propose Neural Logic Inductive Learning (NLIL), an efficient differentiable ILP framework that learns first-order logic rules that can explain the patterns in the data. In experiments, compared with the state-of-the-art methods, we find that NLIL can search for rules that are 10 times longer while remaining 3 times faster. We also show that NLIL can scale to large image datasets, i.e. Visual Genome, with 1M entities.
computer science
Personalization and active learning are key aspects of successful learning. These aspects are important to address in intelligent educational applications, as they help systems to adapt and close the gap between students with varying abilities, which becomes increasingly important in the context of online and distance learning. We run a comparative head-to-head study of learning outcomes for two popular online learning platforms: Platform A, which follows a traditional model delivering content over a series of lecture videos and multiple-choice quizzes, and Platform B, which creates a personalized learning environment and provides problem-solving exercises and personalized feedback. We report on the results of our study using pre- and post-assessment quizzes with participants taking courses on an introductory data science topic on the two platforms. We observe a statistically significant increase in the learning outcomes on Platform B, highlighting the impact of well-designed and well-engineered technology supporting active learning and problem-based learning in online education. Moreover, the results of the self-assessment questionnaire, where participants reported on perceived learning gains, suggest that participants using Platform B improve their metacognition.
computer science
As an optical machine learning framework, Diffractive Deep Neural Networks (D2NN) take advantage of data-driven training methods used in deep learning to devise light-matter interaction in 3D for performing a desired statistical inference task. Multi-layer optical object recognition platforms designed with this diffractive framework have been shown to generalize to unseen image data, achieving, e.g., >98% blind inference accuracy for hand-written digit classification. The multi-layer structure of diffractive networks offers significant advantages in terms of their diffraction efficiency, inference capability and optical signal contrast. However, the use of multiple diffractive layers also brings practical challenges for the fabrication and alignment of these diffractive systems for accurate optical inference. Here, we introduce and experimentally demonstrate a new training scheme that significantly increases the robustness of diffractive networks against 3D misalignments and fabrication tolerances in the physical implementation of a trained diffractive network. By modeling the undesired layer-to-layer misalignments in 3D as continuous random variables in the optical forward model, diffractive networks are trained to maintain their inference accuracy over a large range of misalignments; we term this diffractive network design vaccinated D2NN (v-D2NN). We further extend this vaccination strategy to the training of diffractive networks that use differential detectors at the output plane as well as to jointly-trained hybrid (optical-electronic) networks to reveal that all of these diffractive designs improve their resilience to misalignments by taking into account possible 3D fabrication variations and displacements during their training phase.
electrical engineering and systems science
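An abstracted sketch of the "vaccination" idea from the abstract above: each layer's transmission mask is randomly shifted during every training forward pass, so the learned masks must perform well over a distribution of misalignments. Free-space diffraction between layers is omitted and the shift model is discretized, so this is a conceptual stand-in, not a D2NN simulator.

```python
# Each layer's mask is jittered by a random lateral shift before being
# applied, forcing robustness to misalignment during training.
import torch
import torch.nn as nn

class JitteredMaskNet(nn.Module):
    def __init__(self, size=32, n_layers=3, max_shift=2):
        super().__init__()
        self.masks = nn.ParameterList(
            [nn.Parameter(torch.randn(size, size) * 0.1) for _ in range(n_layers)])
        self.max_shift = max_shift

    def forward(self, x, train_jitter=True):          # x: (batch, size, size)
        for mask in self.masks:
            m = mask
            if train_jitter:                          # random misalignment draw
                dx, dy = torch.randint(-self.max_shift, self.max_shift + 1, (2,))
                m = torch.roll(mask, shifts=(int(dx), int(dy)), dims=(0, 1))
            x = x * torch.sigmoid(m)                  # simplified modulation
        return x.flatten(1).softmax(-1)               # toy "detector" readout

net = JitteredMaskNet()
out = net(torch.rand(4, 32, 32))
print(out.shape)                                      # torch.Size([4, 1024])
```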
In the AdS$_3$/CFT$_2$ correspondence, physical interest attaches to understanding Virasoro conformal blocks at large central charge and in a kinematical regime of large Lorentzian time separation, $t\sim c$. However, almost no analytical information about this regime is presently available. By employing the Wilson line representation we derive new results on conformal blocks at late times, effectively resumming all dependence on $t/c$. This is achieved in the context of "light-light" blocks, as opposed to the richer, but much less tractable, "heavy-light" blocks. The results exhibit an initial decay, followed by erratic behavior and recurrences. We also connect this result to gravitational contributions to anomalous dimensions of double trace operators by using the Lorentzian inversion formula to extract the latter. Inverting the stress tensor block provides a pedagogical example of inversion formula machinery.
high energy physics theory
Ultralong-range Rydberg trimer molecules are spectroscopically observed in an ultracold gas of Cs($nd_{3/2}$) atoms. The atomic Rydberg state anisotropy allows for the formation of angular trimer states, whose energies may not be obtained from integer multiples of dimer energies. These nonadditive trimers are predicted to coexist with Rydberg dimer lines. The existence of such effective three-body interactions is confirmed by the observation of asymmetric line profiles and interpreted by a theoretical approach which includes relativistic spin interactions. Simulations of the observed spectra with and without angular trimer lines lend convincing support to the existence of effective three-body interactions.
physics
In the context of large spectroscopic surveys of stars, data-driven methods are key in deducing physical parameters for millions of spectra in a short time. Convolutional neural networks (CNNs) enable us to connect observables (e.g. spectra, stellar magnitudes) to physical properties (atmospheric parameters, chemical abundances, or labels in general). We trained a CNN, adopting stellar atmospheric parameters and chemical abundances from APOGEE DR16 (resolution R=22500) data as training-set labels. As input, we used parts of the intermediate-resolution RAVE DR6 spectra (R~7500) overlapping with the APOGEE DR16 data, as well as broad-band ALL_WISE and 2MASS photometry, together with Gaia DR2 photometry and parallaxes. We derived precise atmospheric parameters Teff, log(g), and [M/H] along with the chemical abundances of [Fe/H], [alpha/M], [Mg/Fe], [Si/Fe], [Al/Fe], and [Ni/Fe] for 420165 RAVE spectra. The precision typically amounts to 60 K in Teff, 0.06 in log(g) and 0.02-0.04 dex for individual chemical abundances. Incorporating photometry and astrometry as additional constraints substantially improves the results in terms of the accuracy and precision of the derived labels. We provide a catalogue of CNN-trained atmospheric parameters and abundances along with their uncertainties for 420165 stars in the RAVE survey. CNN-based methods provide a powerful way to combine spectroscopic, photometric, and astrometric data without the need to apply any priors in the form of stellar evolutionary models. The developed procedure can extend the scientific output of RAVE spectra beyond DR6 to ongoing and planned surveys such as Gaia RVS, 4MOST, and WEAVE. We call on the community to place particular collective emphasis on efforts to create unbiased training samples for such future spectroscopic surveys.
astrophysics
Depleted CMOS sensors are emerging as one of the main candidate technologies for future tracking detectors in high-luminosity colliders. Their capability of integrating the sensing diode into the CMOS wafer hosting the front-end electronics allows for reduced noise and higher signal sensitivity. They are suitable for high-radiation environments due to the possibility of applying a high depletion voltage and the availability of relatively high-resistivity substrates. The use of a commercial CMOS fabrication process reduces their cost and allows faster construction of large-area detectors. A general perspective on the state of the art of these devices is given in this contribution, as well as a summary of the main developments carried out with regard to these devices in the framework of the CERN RD50 collaboration.
physics
Direct state measurement (DSM) is a tomography method that allows for retrieving the wave functions of quantum states directly. However, a shortcoming of current studies on DSM is that they do not provide access to noisy quantum systems. Here, we attempt to fill this gap by investigating the measurement precision of DSM under state-preparation-and-measurement (SPAM) errors. We manipulate a quantum controlled measurement framework with various configurations and compare the efficiency between them. Under the effect of the state-preparation error, the state to be measured deviates slightly from the true state, while the measurement error affects the postselection process and thus results in a less accurate tomography. Our study could provide a reliable tool for SPAM tomography and thus contribute to understanding and resolving an urgent demand of current quantum technologies.
quantum physics
The resistance of Fe$_{1-x}$Ni$_x$ (x=0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7 and 0.9) has been measured using the four-probe method from 5 K to 300 K, with and without a longitudinal magnetic field of 8 T. In the zero-field resistivity of the x=0.1 and 0.9 alloys, the predominant contribution near room temperature and above is electron-phonon scattering, whereas for the x=0.5 and 0.7 alloys electron-magnon scattering dominates. Alloys with x=0.1 and 0.9 exhibit positive magnetoresistance (MR) from 5 K to 300 K. For the x=0.5 and 0.7 alloys, the magnetoresistance changes sign from positive to negative with increasing temperature, and the temperature at which the sign changes increases with the Ni concentration of the alloy. The field-dependent magnetoresistance is positive for the x=0.1, 0.7 and 0.9 alloys, whereas it is negative for the x=0.5 alloy. The MR is linear in field for the x=0.1 alloy; the MR of all other alloys follows a second-order polynomial in field.
condensed matter
Using the numerical bootstrap method, we determine the critical exponents of the minimal three-dimensional N = 1 Wess-Zumino models with cubic superpotential $W\sim d_{ijk}\Phi_i\Phi_j\Phi_k$. The tensor $d_{ijk}$ is taken to be the invariant tensor of either the permutation group $S_N$, the special unitary group $SU(N)$, or a series of groups called the F4 family of Lie groups. Due to the equation of motion, at the Wess-Zumino fixed point the operator $d_{ijk}\Phi_i\Phi_j\Phi_k$ is a (super)descendant of $\Phi_i$. We observe such super-multiplet recombination in the numerical bootstrap, which allows us to determine the scaling dimension of the super-field $\Phi_i$.
high energy physics theory
The coronavirus disease 2019 (COVID-19) global pandemic has led many countries to impose unprecedented lockdown measures in order to slow down the outbreak. Questions on whether governments have acted promptly enough, and whether lockdown measures can be lifted soon have since been central in public discourse. Data-driven models that predict COVID-19 fatalities under different lockdown policy scenarios are essential for addressing these questions and informing governments on future policy directions. To this end, this paper develops a Bayesian model for predicting the effects of COVID-19 lockdown policies in a global context -- we treat each country as a distinct data point, and exploit variations of policies across countries to learn country-specific policy effects. Our model utilizes a two-layer Gaussian process (GP) prior -- the lower layer uses a compartmental SEIR (Susceptible, Exposed, Infected, Recovered) model as a prior mean function with "country-and-policy-specific" parameters that capture fatality curves under "counterfactual" policies within each country, whereas the upper layer is shared across all countries, and learns lower-layer SEIR parameters as a function of a country's features and its policy indicators. Our model combines the solid mechanistic foundations of SEIR models (Bayesian priors) with the flexible data-driven modeling and gradient-based optimization routines of machine learning (Bayesian posteriors) -- i.e., the entire model is trained end-to-end via stochastic variational inference. We compare the projections of COVID-19 fatalities by our model with other models listed by the Center for Disease Control (CDC), and provide scenario analyses for various lockdown and reopening strategies highlighting their impact on COVID-19 fatalities.
statistics
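The mechanistic backbone referred to in the abstract above is the compartmental SEIR system. A minimal forward-Euler integration, with illustrative rate constants (not the paper's learned, country- and policy-specific parameters), looks as follows:

```python
# Minimal SEIR forward simulation; beta is the policy-dependent knob.
import numpy as np

def seir(beta, sigma=1/5.2, gamma=1/10, N=1e7, E0=100, days=180, dt=0.1):
    """beta: transmission rate; sigma, gamma: illustrative incubation/recovery rates."""
    S, E, I, R = N - E0, E0, 0.0, 0.0
    infected = []
    for _ in range(int(days / dt)):
        dS = -beta * S * I / N
        dE = beta * S * I / N - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt
        infected.append(I)
    return np.array(infected)

# A lockdown can be mimicked by lowering beta; the paper instead learns
# country- and policy-specific parameters inside a two-layer GP.
print("peak infections, beta=0.5: ", seir(0.5).max())
print("peak infections, beta=0.25:", seir(0.25).max())
```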
We deal with a class of fractional magnetic Schr\"odinger equations on the whole real line with exponential critical growth. Under a local condition on the potential, we use penalization methods and Ljusternik-Schnirelmann category theory to investigate the existence, multiplicity and concentration of nontrivial weak solutions.
mathematics
We explore asymptotic safety of gravity-matter systems, discovering indications for a near-perturbative nature of these systems in the ultraviolet. Our results are based on the dynamical emergence of effective universality at the asymptotically safe fixed point. Our findings support the conjecture that an asymptotically safe completion of the Standard Model with gravity could be realized in a near-perturbative setting.
high energy physics theory
Frequent metering of electricity consumption is crucial for demand side management in smart grids. However, metered data can be processed fairly easily by employing well-established nonintrusive appliance load monitoring techniques to infer appliance usage, which reveals information about consumers' private lives. Existing load shaping techniques for privacy primarily focus only on altering metered real power, whereas smart meters collect reactive power consumption data as well for various purposes. This study addresses consumer privacy preservation via load shaping in a demand response scheme, considering both real and reactive power. We build a multi-objective optimization framework that enables us to characterize the interplay between privacy maximization, user cost minimization, and user discomfort minimization objectives. Our results reveal that minimizing information leakage due to a single component, e.g., real power, would suffer from overlooking information leakage due to the other component, e.g., reactive power, causing sub-optimal decisions. In fact, joint shaping of real and reactive power components results in the best possible privacy preservation performance, which leads to more than a twofold increase in privacy in terms of mutual information.
electrical engineering and systems science
Items in a test are often used as a basis for making decisions and such tests are therefore required to have good psychometric properties, like unidimensionality. In many cases the sum score is used in combination with a threshold to decide between pass or fail, for instance. Here we consider whether such a decision function is appropriate, without a latent variable model, and which properties of a decision function are desirable. We consider reliability (stability) of the decision function, i.e., does the decision change upon perturbations, or changes in a fraction of the outcomes of the items (measurement error). We are concerned with questions of whether the sum score is the best way to aggregate the items, and if so why. We use ideas from test theory, social choice theory, graphical models, computer science and probability theory to answer these questions. We conclude that a weighted sum score has desirable properties that (i) fit with test theory and is observable (similar to a condition like conditional association), (ii) has the property that a decision is stable (reliable), and (iii) satisfies Rousseau's criterion that the input should match the decision. We use Fourier analysis of Boolean functions to investigate whether a decision function is stable and to figure out which (set of) items has proportionally too large an influence on the decision. To apply these techniques we invoke ideas from graphical models and use a pseudo-likelihood factorisation of the probability distribution.
statistics
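A small numerical illustration of the stability diagnostics discussed in the abstract above: for a decision that thresholds a weighted sum of binary item outcomes, the influence of an item can be estimated as the probability that flipping that item flips the decision. The weights, threshold, and uniform response model below are toy assumptions.

```python
# Monte Carlo estimate of item influence on a weighted-sum threshold decision.
import numpy as np

rng = np.random.default_rng(2)
n_items = 10
weights = np.ones(n_items); weights[0] = 4.0     # item 0 deliberately over-weighted
threshold = 6.0

def decision(x):                                  # x: (..., n_items) in {0, 1}
    return (x @ weights >= threshold).astype(int)

X = rng.integers(0, 2, size=(20000, n_items))    # uniform random response patterns
base = decision(X)
for i in range(n_items):
    flipped = X.copy(); flipped[:, i] ^= 1        # flip item i's outcomes
    influence = np.mean(decision(flipped) != base)
    print(f"item {i}: influence ~ {influence:.3f}")
# The over-weighted item shows a disproportionate influence on the decision,
# which is what the Fourier-analytic diagnostics are designed to flag.
```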
Large-scale Multi-label Text Classification (LMTC) has a wide range of Natural Language Processing (NLP) applications and presents interesting challenges. First, not all labels are well represented in the training set, due to the very large label set and the skewed label distributions of LMTC datasets. Also, label hierarchies and differences in human labelling guidelines may affect graph-aware annotation proximity. Finally, the label hierarchies are periodically updated, requiring LMTC models capable of zero-shot generalization. Current state-of-the-art LMTC models employ Label-Wise Attention Networks (LWANs), which (1) typically treat LMTC as flat multi-label classification; (2) may use the label hierarchy to improve zero-shot learning, although this practice is vastly understudied; and (3) have not been combined with pre-trained Transformers (e.g. BERT), which have led to state-of-the-art results in several NLP benchmarks. Here, for the first time, we empirically evaluate a battery of LMTC methods from vanilla LWANs to hierarchical classification approaches and transfer learning, on frequent, few, and zero-shot learning on three datasets from different domains. We show that hierarchical methods based on Probabilistic Label Trees (PLTs) outperform LWANs. Furthermore, we show that Transformer-based approaches outperform the state-of-the-art in two of the datasets, and we propose a new state-of-the-art method which combines BERT with LWANs. Finally, we propose new models that leverage the label hierarchy to improve few and zero-shot learning, considering on each dataset a graph-aware annotation proximity measure that we introduce.
computer science
Clinical deployment of deep learning algorithms for chest x-ray interpretation requires a solution that can integrate into the vast spectrum of clinical workflows across the world. An appealing approach to scaled deployment is to leverage the ubiquity of smartphones by capturing photos of x-rays to share with clinicians using messaging services like WhatsApp. However, the application of chest x-ray algorithms to photos of chest x-rays requires reliable classification in the presence of artifacts not typically encountered in digital x-rays used to train machine learning models. We introduce CheXphoto, a dataset of smartphone photos and synthetic photographic transformations of chest x-rays sampled from the CheXpert dataset. To generate CheXphoto we (1) automatically and manually captured photos of digital x-rays under different settings, and (2) generated synthetic transformations of digital x-rays targeted to make them look like photos of digital x-rays and x-ray films. We release this dataset as a resource for testing and improving the robustness of deep learning algorithms for automated chest x-ray interpretation on smartphone photos of chest x-rays.
electrical engineering and systems science
We study the curves of coherence for Bell-diagonal states, including the $l_{1}$-norm of coherence and the relative entropy of coherence, under Markovian channels applied once to the first subsystem. For a special Bell-diagonal state under the bit-phase flip channel, we find that frozen coherence occurs under the $l_{1}$ norm, while the relative entropy of coherence decreases. This illustrates that the occurrence of frozen coherence depends on the chosen measure of coherence. We study the coherence evolution of Bell-diagonal states under Markovian channels applied to the first subsystem $n$ times and find that coherence under the depolarizing channel decreases initially, then increases for small $n$, and tends to zero for large $n$. We discuss the dynamics of coherence of the Bell-diagonal state under two independent local Markovian channels of the same type. Finally, we depict the dynamic behavior of the relative entropy of coherence for the Bell-diagonal state under different Markovian channels acting on the two sides.
quantum physics
In this work we obtain classical solutions of the bosonic sector of the supermembrane theory with two-form fluxes associated with a quantized constant $C_{\pm}$ background. This theory satisfies a flux condition on the worldvolume that induces monopoles over it. Classically it is stable, as it does not contain string-like spikes with zero energy, in contrast to the general case. At the quantum level the bosonic membrane has a purely discrete spectrum, and, importantly, the same property holds for its supersymmetric spectrum. We find spinning membrane solutions for this theory in different approximations, some of them including the presence of a non-vanishing symplectic gauge connection defined on the worldvolume. By using the duality found between this theory and the so-called supermembrane with central charges, rotating membrane solutions found in that case are also solutions of the M2-brane with $C_{\pm}$ fluxes. We generalize this result to other embeddings and find new, distinctive rotating membrane solutions. We obtain numerical and analytical solutions in different approximations characterizing the dynamics of the membrane with $C_{\pm}$ fluxes for different ans\"atze of the dynamical degrees of freedom. Finally, we discuss the physical admissibility of some of these ans\"atze to model the components of the symplectic gauge field.
high energy physics theory
The optical properties and morphology of laser-generated aluminum and copper phthalocyanine nanoparticles (nAlPc and nCuPc) in water are studied experimentally. A near-infrared laser source of nanosecond pulse duration was used for fragmentation of Pc micro-powder suspended in H2O. Extinction spectra in the visible and near-IR range of the colloidal NP solutions in MQ water were acquired by means of optical spectroscopy. The optical density of both nCuPc and nAlPc increases with laser fragmentation time. Transmission electron microscopy was used to characterize nanoparticle morphology and for size analysis. It is found that nCuPc consists of short (100 nm) rectangular bars interconnected at various angles with other bars. Similar experiments were carried out for a colloidal solution that is a mixture of Au and AlPc nanoparticles. It turned out that, in the presence of nAlPc, Au NPs form large agglomerates.
physics
Four-dimensional gauge theories with matter can have regions in parameter space, often dubbed conformal windows, where they flow in the infrared to non-trivial conformal field theories. It has been conjectured that conformality can be lost because of merging of two nearby fixed points that move into the complex plane, and that a walking dynamics governed by scaling dimensions of operators defined at such complex fixed points can occur. We find controlled, parametrically weakly coupled, and ultraviolet-complete 4D gauge theories that explicitly realize this scenario. We show how the walking dynamics is controlled by the coupling of a double-trace operator that crosses marginality. The walking regime ends when the renormalization group flow of this coupling leads to a (weak) first-order phase transition with Coleman-Weinberg symmetry breaking. A light dilaton-like scalar particle appears in the spectrum, but it is not parametrically lighter than the other excitations.
high energy physics theory
The quantum Cram\'er-Rao bound is a cornerstone of modern quantum metrology, as it provides the ultimate precision in parameter estimation. In the multiparameter scenario, this bound becomes a matrix inequality, which can be cast to a scalar form with a properly chosen weight matrix. Multiparameter estimation thus elicits tradeoffs in the precision with which each parameter can be estimated. We show that, if the information is encoded in a unitary transformation, we can naturally choose the weight matrix as the metric tensor linked to the geometry of the underlying algebra $\mathfrak{su}(n)$. This ensures an intrinsic bound that is independent of the choice of parametrization.
quantum physics
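For reference, the scalarization mentioned in the abstract above takes the standard textbook form below, where $F_Q$ is the quantum Fisher information matrix, $\Sigma(\hat{\theta})$ the estimator covariance, and $W$ the weight matrix (in the paper, the metric tensor of $\mathfrak{su}(n)$); this is the general form of the bound, not a formula quoted from the paper.

```latex
% Matrix quantum Cramér-Rao bound and its weighted scalar form (standard result):
\Sigma(\hat{\theta}) \;\succeq\; F_Q^{-1}(\theta)
\quad\Longrightarrow\quad
\operatorname{Tr}\!\left[W\,\Sigma(\hat{\theta})\right]
\;\geq\;
\operatorname{Tr}\!\left[W\,F_Q^{-1}(\theta)\right]
\quad\text{for any weight matrix } W \succeq 0.
```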
We introduce a persistent random walk model with finite velocity and self-reinforcing directionality, which explains how exponentially distributed runs self-organize into the truncated L\'evy walks observed in active intracellular transport by Chen et al. [\textit{Nat. Mat.}, 2015]. We derive the hyperbolic PDE, non-homogeneous in space and time, for the probability density function (PDF) of the particle position. This PDF exhibits a bimodal density (an aggregation phenomenon) in the superdiffusive regime, which is not observed in classical linear hyperbolic and L\'evy walk models. We find exact solutions for the first and second moments and criteria for the transition to superdiffusion.
condensed matter
Current end-to-end semantic role labeling is mostly accomplished via graph-based neural models. However, these are all first-order models, where each decision for detecting any predicate-argument pair is made in isolation with local features. In this paper, we present a high-order refining mechanism to perform interaction between all predicate-argument pairs. Based on the baseline graph model, our high-order refining module learns higher-order features between all candidate pairs via attention calculation, which are later used to update the original token representations. After several iterations of refinement, the underlying token representations can be enriched with globally interacted features. Our high-order model achieves state-of-the-art results on Chinese SRL data, including CoNLL09 and the Universal Proposition Bank, while relieving long-range dependency issues.
computer science
Thermal dendrites (fractal-like structures) on the surface of water and some water solutions are found with an infrared camera. They are observed with specific sizes and temperature differences in the liquid and are not associated with the movement of the liquid.
condensed matter
Cascade decays of new scalars into final states with multiple photons and possibly quarks may lead to distinctive experimental signatures at high-energy colliders. Such signals are even more striking if the scalars are highly boosted, as when produced from the decay of a much heavier resonance. We study this type of events within the framework of the minimal stealth boson model, an anomaly-free $\text{U}(1)_{Y'}$ extension of the Standard Model with two complex scalar singlets. It is shown that, while those signals may have cross sections that might render them observable with LHC Run 2 data, they have little experimental coverage. We also establish a connection with a CMS excess observed in searches for new scalars decaying into diphoton final states near 96 GeV. In particular, we conclude that the predicted multiphoton signatures are compatible with such excess.
high energy physics phenomenology
In this paper we propose a practical quantum key distribution protocol based on geometrically uniform states and a standard decoy state technique. The protocol extends the ideas used in SARG04 to the limit where the core quantum communication is secure against unambiguous state discrimination and provides some level of inherent resistance to photon number splitting attacks. An additional decoy state overlay ensures its conformity to the conventional security requirements. The protocol security is analyzed by an explicit construction of the optimal unitary attack, which is known to reach the tight security bound in the case of BB84.
quantum physics
We investigate the proposal that for weakly coupled two-dimensional magnets the transition temperature scales with a critical exponent which is equivalent to that of the susceptibility in the underlying two-dimensional model, $ \gamma $. Employing the exact diagonalization of transfer matrices we can determine the critical temperature for Ising models accurately and then fit to approximate this critical exponent. We find an additional logarithm is required to predict the transition temperature, stemming from the fact that the heat capacity exponent $ \alpha $ tends to zero for this Ising model, complicating the elementary prediction. We believe that the excitations of the transfer matrix correspond to thermalized topological excitations of the model and find that even the simplest model exhibits significant changes of behavior for the most relevant of these excitations as the temperature is varied.
condensed matter
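A compact sketch of the transfer-matrix computation underlying the abstract above: build the $2^L \times 2^L$ transfer matrix of a width-$L$ Ising strip and read the free energy and correlation length off the two largest eigenvalues. The width, coupling, and periodic boundary conditions are illustrative choices; the paper's weakly coupled two-dimensional setting is richer.

```python
# Transfer matrix of a width-L Ising strip, diagonalized exactly.
import numpy as np
from itertools import product

L, beta = 6, 0.44                                    # strip width, inverse temperature

configs = np.array(list(product([-1, 1], repeat=L)))  # (2^L, L) spin columns

def row_energy(s):                                    # intra-column bonds (periodic)
    return np.sum(s * np.roll(s, -1, axis=-1), axis=-1)

inter = configs @ configs.T                           # bonds between adjacent columns
intra = row_energy(configs)                           # (2^L,)
T = np.exp(beta * (inter + 0.5 * intra[:, None] + 0.5 * intra[None, :]))

evals = np.linalg.eigvalsh(T)[::-1]                   # T is symmetric by construction
f = -np.log(evals[0]) / (beta * L)                    # free energy per site
xi = 1.0 / np.log(evals[0] / evals[1])                # correlation length (in columns)
print(f"f per site = {f:.4f}, correlation length = {xi:.2f}")
```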
Nanophotonic circuits using group III-nitrides on silicon are still lacking one key component: efficient electrical injection. In this paper we demonstrate an electrical injection scheme using a metal microbridge contact in thin III-nitride on silicon mushroom-type microrings that is compatible with integrated nanophotonic circuits with the goal of achieving electrically injected lasing. Using a central buried n-contact to bypass the insulating buffer layers, we are able to underetch the microring, which is essential for maintaining vertical confinement in a thin disk. We demonstrate direct current room-temperature electroluminescence with 440 mW/cm$^2$ output power density at 20 mA from such microrings with diameters of 30 to 50 $\mu$m. The first steps towards achieving an integrated photonic circuit are demonstrated.
physics
It was shown in arXiv:1410.7168 that compactifying $D=6$, $\mathcal{N}=(1,0)$ ungauged supergravity coupled to a single tensor multiplet on S$^3$, one gets a particular $D=3$, $\mathcal{N}=4$ gauged supergravity which is a consistent reduction. We construct two supersymmetric black string solutions in this 3-dimensional model, with one and two active scalars respectively. Uplifting the first, one gets a dyonic string solution in $D=6$ that has been known for a long time. Uplifting the second solution, by contrast, one finds a very interesting configuration where magnetic strings are located uniformly on a circle in a plane within the 4-dimensional flat transverse space and electric strings are distributed homogeneously inside this circle. Both solutions have $\mathrm{AdS}_3 \times$ S$^3$ limits.
high energy physics theory
This paper studies the Graph-Connected Clique-Partitioning Problem (GCCP), a clustering optimization model in which units are characterized by both individual and relational data. For this problem, introduced by Benati et al. (2017) under the name Connected Partitioning Problem, the combination of the two data types improves the clustering quality in comparison with other methodologies. Nevertheless, the resulting optimization problem is difficult to solve; only small-sized instances can be solved exactly, while large-sized instances require the application of heuristic algorithms. In this paper we improve on the exact and heuristic algorithms previously proposed. We provide a new Integer Linear Programming (ILP) formulation that solves larger instances, but at the cost of using an exponential number of variables. In order to limit the number of variables necessary to calculate the optimum, the new ILP formulation is solved by implementing a branch-and-price algorithm. The resulting pricing problem is itself a new combinatorial model: the Maximum-weighted Graph-Connected Single-Clique problem (MGCSC), which we solve by testing various Mixed Integer Linear Programming (MILP) formulations and by proposing a new, fast "random shrink" heuristic. In this way, we are able to improve the previous algorithms: the branch-and-price method outperforms the computational times of the previous MILP algorithms, and the new random shrink heuristic, when applied to GCCP, is both faster and more accurate than the previous heuristic methods. Moreover, the combination of column generation and random shrink is itself a new MILP-relaxed matheuristic that can be applied to large instances too. Its main advantage is that all heuristic local optima are combined together in a restricted MILP, consisting of the application of the exact branch-and-price method while solving the pricing problem heuristically.
mathematics
Gain tuning is given for the twisting controller to ensure that the closed-loop trajectories of the perturbed double integrator, initialized within a bounded domain and affected by uniformly bounded disturbances, settle at the origin in prescribed time.
electrical engineering and systems science
This paper focuses on distributed learning-based control of decentralized multi-agent systems where the agents' dynamics are modeled by Gaussian Processes (GPs). Two fundamental problems are considered: the optimal design of experiment for concurrent learning of the agents' GP models, and the distributed coordination given the learned models. Using a Distributed Model Predictive Control (DMPC) approach, the two problems are formulated as distributed optimization problems, where each agent's sub-problem includes both local and shared objectives and constraints. To solve the resulting complex and non-convex DMPC problems efficiently, we develop an algorithm called Alternating Direction Method of Multipliers with Convexification (ADMM-C) that combines a distributed ADMM algorithm and a Sequential Convexification method. The computational efficiency of our proposed method comes from the facts that the computation for solving the DMPC problem is distributed to all agents and that efficient convex optimization solvers are used at the agents for solving the convexified sub-problems. We also prove that, under some technical assumptions, the ADMM-C algorithm converges to a stationary point of the penalized optimization problem. The effectiveness of our approach is demonstrated in numerical simulations of a multi-vehicle formation control example.
electrical engineering and systems science
Energy-aware architectures provide applications with a mix of low- (LITTLE) and high-frequency (big) cores. Choosing the best hardware configuration for a program running on such an architecture is difficult, because program parts benefit differently from the same hardware configuration. State-of-the-art techniques to solve this problem adapt the program's execution to dynamic characteristics of the runtime environment, such as energy consumption and throughput. We claim that these purely dynamic techniques can be improved if they are aware of the program's syntactic structure. To support this claim, we show how to use the compiler to partition source code into program phases: regions whose syntactic characteristics lead to similar runtime behavior. We use reinforcement learning to map pairs formed by a program phase and a hardware state to the configuration that best fits this setup. To demonstrate the effectiveness of our ideas, we have implemented the Astro system. Astro uses Q-learning to associate syntactic features of programs with hardware configurations. As a proof of concept, we provide evidence that Astro outperforms GTS, the ARM-based Linux scheduler tailored for heterogeneous architectures, on the parallel benchmarks from Rodinia and Parsec.
computer science
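A toy tabular version of the mapping described in the abstract above: states are (program phase, hardware state) pairs, actions are hardware configurations, and Q-learning associates each state with the configuration maximizing a reward. The reward model and all sizes are synthetic placeholders, not Astro's actual instrumentation.

```python
# Tabular Q-learning over (phase, hardware-state) pairs with synthetic rewards.
import numpy as np

rng = np.random.default_rng(3)
n_phases, n_hw_states, n_configs = 4, 3, 5
Q = np.zeros((n_phases * n_hw_states, n_configs))
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(phase, config):
    # Hypothetical: each phase has one config that balances speed and energy.
    best = phase % n_configs
    return 1.0 - 0.3 * abs(config - best) + rng.normal(0, 0.05)

state = 0
for _ in range(20000):
    a = rng.integers(n_configs) if rng.random() < eps else int(Q[state].argmax())
    phase = state // n_hw_states
    r = reward(phase, a)
    next_state = int(rng.integers(n_phases * n_hw_states))  # synthetic phase change
    Q[state, a] += alpha * (r + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

print("learned config per (phase, hw) state:", Q.argmax(axis=1))
```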
Trapped charged particles, and in particular trapped ions, are among the leading candidates for quantum computing technologies. However, our ability to interconnect arrays of ions in different traps is a significant hurdle in scaling trapped ion quantum computing. One approach to overcome this problem is using a solid state conducting wire to mediate the Coulomb interaction between particles in different traps. Additionally, there is strong interest in interfacing trapped ion qubits with solid state superconducting qubits to develop hybrid systems which benefit from the complementary strengths of the two technologies. For studies related to these fields, a trapped charged particle inducing charge on a conductor has long been modeled using equivalent circuit elements. The equivalent circuit element approach is popular partly due to the appeal of a model for analyzing systems using simple electronic components. As a result, a body of theoretical work is founded on this approach. However, careful consideration of the model leads to inconsistencies. Here, we identify the origin of the inconsistencies and their implications. Our result removes a potential road-block for future studies aiming to use conductors to connect independent arrays of ions, or to interface charged particles with solid-state qubit technologies. In addition, for the specific case of two trapped ions interacting via a conducting coupling system, we introduce an alternative way to use linear relationships, which reproduces results from other works that are not based on the circuit element model. This method is useful in trouble-shooting real experimental designs and assessing the accuracy of different theoretical models.
quantum physics
This document summarises the current theoretical and experimental status of di-Higgs boson production searches, and of the direct and indirect constraints on the Higgs boson self-coupling, with the wish to serve as a useful guide for the next years. The document discusses the theoretical status, including state-of-the-art predictions for di-Higgs cross sections, developments on the effective field theory approach, and studies on specific new physics scenarios that can show up in the di-Higgs final state. The status of di-Higgs searches and the direct and indirect constraints on the Higgs self-coupling at the LHC are presented, with an overview of the relevant experimental techniques, and covering all the variety of relevant signatures. Finally, the capabilities of future colliders in determining the Higgs self-coupling are addressed, comparing the projected precision that can be obtained at such facilities. The work started as the proceedings of the Di-Higgs workshop at Colliders, held at Fermilab from the 4th to the 9th of September 2018, but it went beyond the topics discussed at that workshop and includes further developments.
high energy physics phenomenology
Applying machine learning to molecules is challenging because of their natural representation as graphs rather than vectors. Several architectures have been recently proposed for deep learning from molecular graphs, but they suffer from information bottlenecks because they only pass information from a graph node to its direct neighbors. Here, we introduce a more expressive route-based multi-attention mechanism that incorporates features from routes between node pairs. We call the resulting method Graph Informer. A single network layer can therefore attend to nodes several steps away. We show empirically that the proposed method compares favorably against existing approaches in two prediction tasks: (1) 13C Nuclear Magnetic Resonance (NMR) spectra, improving the state-of-the-art with an MAE of 1.35 ppm, and (2) predicting drug bioactivity and toxicity. Additionally, we develop a variant called injective Graph Informer that is provably as powerful as the Weisfeiler-Lehman test for graph isomorphism. Furthermore, we demonstrate that the route information allows the method to be informed about the nonlocal topology of the graph and, thus, even go beyond the capabilities of the Weisfeiler-Lehman test.
statistics
Plasmonic excitations such as surface-plasmon-polaritons (SPPs) and graphene-plasmons (GPs), carry large momenta and are thus able to confine electromagnetic fields to small dimensions. This property makes them ideal platforms for subwavelength optical control and manipulation at the nanoscale. The momenta of these plasmons are even further increased if a scheme of metal-insulator-metal and graphene-insulator-metal are used for SPPs and GPs, respectively. However, with such large momenta, their far-field excitation becomes challenging. In this work, we consider hybrids of graphene and metallic nanostructures and study the physical mechanisms behind the interaction of far-field light with the supported high momenta plasmon modes. While there are some similarities in the properties of GPs and SPPs, since both are of the plasmon-polariton type, their physical properties are also distinctly different. For GPs we find two different physical mechanism related to either GPs confined to isolated cavities, or large area collective grating couplers. Strikingly, we find that although the two systems are conceptually different, under specific conditions they can behave similarly. By applying the same study to SPPs, we find a different physical behavior, which fundamentally stems from the different dispersion relations of SPPs as compared to GPs. Furthermore, these hybrids produce large field enhancements that can also be electrically tuned and modulated making them the ideal candidates for a variety of plasmonic devices.
physics
Promising results have driven a recent surge of interest in continuous optimization methods for Bayesian network structure learning from observational data. However, there are theoretical limitations on the identifiability of underlying structures obtained from observational data alone. Interventional data provides much richer information about the underlying data-generating process. However, the extension and application of methods designed for observational data to include interventions is not straightforward and remains an open problem. In this paper we provide a general framework based on continuous optimization and neural networks to create models for the combination of observational and interventional data. The proposed method is even applicable in the challenging and realistic case that the identity of the intervened-upon variable is unknown. We examine the proposed method in the setting of graph recovery both de novo and from a partially-known edge set. We establish strong benchmark results on several structure learning tasks, including structure recovery of both synthetic graphs as well as standard graphs from the Bayesian Network Repository.
statistics
Modern wearable devices are embedded with a range of noninvasive biomarker sensors that hold promise for improving detection and treatment of disease. One such sensor is the single-lead electrocardiogram (ECG), which measures electrical signals in the heart. The benefits of the sheer volume of ECG measurements with rich longitudinal structure made possible by wearables come at the price of potentially noisier measurements compared to clinical ECGs, e.g., due to movement. In this work, we develop a statistical model to simulate a structured noise process in ECGs derived from a wearable sensor, design a beat-to-beat representation that is conducive for analyzing variation, and devise a factor analysis-based method to denoise the ECG. We study synthetic data generated using a realistic ECG simulator and a structured noise model. At varying levels of signal-to-noise, we quantitatively measure an upper bound on performance and compare estimates from linear and non-linear models. Finally, we apply our method to a set of ECGs collected by wearables in a mobile health study.
statistics
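A minimal sketch of the factor-analysis denoising step from the abstract above: aligned beats are stacked as rows, a low-rank factor model is fitted, and its reconstruction serves as the denoised beat. The synthetic Gaussian-bump "beats" and the choice of three factors are illustrative assumptions.

```python
# Low-rank factor-analysis denoising of stacked, aligned beats.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 200)
template = np.exp(-((t - 0.5) / 0.03) ** 2)            # toy R-wave-like beat
beats = (template[None, :] * rng.uniform(0.8, 1.2, (300, 1))  # amplitude variation
         + 0.2 * rng.normal(size=(300, 200)))          # additive noise

fa = FactorAnalysis(n_components=3, random_state=0)
z = fa.fit_transform(beats)                            # per-beat latent factors
denoised = z @ fa.components_ + fa.mean_               # low-rank reconstruction

mse_before = np.mean((beats - template) ** 2)
mse_after = np.mean((denoised - template) ** 2)
print(f"MSE before: {mse_before:.4f}, after: {mse_after:.4f}")
```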
We review and systematize two (analytic) bootstrap techniques in two-dimensional conformal field theories using the S-modular transformation. The first one gives universal results in asymptotic regimes by relating extreme temperatures. Along with the presentation of known results, we use this technique to also derive asymptotic formulae for the Zamolodchikov recursion coefficients which match previous conjectures from numerics and from Regge asymptotic analysis. The second technique focuses on intermediate temperatures. We use it to sketch a methodology to derive a bound on off-diagonal squared OPE coefficients, as well as to improve existing bounds on the spectrum in case of non-negative diagonal OPE coefficients.
high energy physics theory
The use of kernel functions is a common technique to extract important features from data sets. A quantum computer can be used to estimate kernel entries as transition amplitudes of unitary circuits. It can be shown that quantum kernels exist that, subject to computational hardness assumptions, cannot be computed classically. It is an important challenge to find quantum kernels that provide an advantage in the classification of real-world data. Here we introduce a class of quantum kernels that are related to covariant quantum measurements and can be used for data that has a group structure. The kernel is defined in terms of a single fiducial state that can be optimized by using a technique called kernel alignment. Quantum kernel alignment optimizes the kernel family to minimize the upper bound on the generalisation error for a given data set. We apply this general method to a specific learning problem we refer to as labeling cosets with error, and implement the learning algorithm on $27$ qubits of a superconducting processor.
quantum physics
Both coronal plumes and network jets are rooted in network lanes. The relationship between the two, however, has yet to be addressed. For this purpose, we perform an observational analysis using images acquired with the Atmospheric Imaging Assembly (AIA) 171{\AA} passband to follow the evolution of coronal plumes, observations taken by the Interface Region Imaging Spectrograph (IRIS) slit-jaw 1330{\AA} to study the network jets, and line-of-sight magnetograms taken by the Helioseismic and Magnetic Imager (HMI) to overview the photospheric magnetic features in the regions. Four regions in the network lanes are identified and labeled ``R1--R4''. We find that coronal plumes are clearly seen only in ``R1''&``R2'' but not in ``R3''&``R4'', even though network jets abound in all these regions. Furthermore, while the magnetic features in all these regions are dominated by positive polarity, they are more compact (suggesting stronger convergence) in ``R1''&``R2'' than in ``R3''&``R4''. We develop an automated method to identify and track the network jets in the regions. We find that the network jets rooted in ``R1''&``R2'' are higher and faster than those in ``R3''&``R4'', indicating that network regions producing stronger coronal plumes also tend to produce more dynamic network jets. We suggest that the stronger convergence in ``R1''&``R2'' might provide a condition for faster shocks and/or more small-scale magnetic reconnection events that power more dynamic network jets and coronal plumes.
astrophysics
Using ideas from brane world cosmological perturbation theory, we perform a linear stability analysis of dynamic thin shell wormholes constructed by cutting and pasting two building-block spacetimes at arbitrary joining-shell radii. We observe that, for appropriate parameter choices, dynamical thin shell wormholes following from such a cut-and-paste procedure can be kept stable during the whole evolution process towards the final extreme point at which the joining-shell radius arrives at a static value. Our work forms a valuable complement to previous analyses based on virtual radial perturbations around the initially static value of the joining-shell radius, which allow no real evolution of the wormhole.
high energy physics theory
The Sachdev-Ye-Kitaev (SYK) model incorporates rich physics, ranging from exotic non-Fermi liquid states without quasiparticle excitations, to holographic duality and quantum chaos. However, its experimental realization remains a daunting challenge due to various unnatural ingredients of the SYK Hamiltonian such as its strong randomness and fully nonlocal fermion interaction. At present, constructing such a nonlocal Hamiltonian and exploring its dynamics is best done through digital quantum simulation, where state-of-the-art techniques can already handle a moderate number of qubits. Here we demonstrate a first step towards simulation of the SYK model on a nuclear-spin-chain simulator. We observe the fermion pairing instability of the non-Fermi liquid state and the chaotic-nonchaotic transition at simulated temperatures, as predicted by previous theories. As a practical realization of the SYK model, our experiment opens a new avenue towards investigating the key features of non-Fermi liquid states, as well as quantum chaotic systems and the AdS/CFT duality.
quantum physics
In this paper, we propose an efficient blood vessel segmentation method for eye fundus images using adversarial learning with multiscale features and kernel factorization. In the generator network of the adversarial framework, spatial pyramid pooling, kernel factorization and a squeeze-excitation block are employed to enhance the feature representation in the spatial domain on different scales with reduced computational complexity. In turn, the discriminator network of the adversarial framework is formulated by combining convolutional layers with an additional squeeze-excitation block to differentiate the generated segmentation mask from its respective ground truth. Before feeding the images to the network, we pre-process them using edge sharpening and Gaussian regularization to reach an optimized solution for vessel segmentation. The output of the trained model is post-processed using morphological operations to remove small speckles of noise. The proposed method qualitatively and quantitatively outperforms state-of-the-art vessel segmentation methods on the DRIVE and STARE datasets.
electrical engineering and systems science
We demonstrate that loop integrands of (super-)gravity scattering amplitudes possess surprising properties in the ultraviolet (UV) region. In particular, we study the scaling of multi-particle unitarity cuts for asymptotically large momenta and expose an improved UV behavior of four-dimensional cuts through seven loops as compared to standard expectations. For N=8 supergravity, we show that the improved large momentum scaling combined with the behavior of the integrand under BCFW deformations of external kinematics uniquely fixes the loop integrands in a number of non-trivial cases. In the integrand construction, all scaling conditions are homogeneous. Therefore, the only required information about the amplitude is its vanishing at particular points in momentum space. This homogeneous construction gives indirect evidence for a new geometric picture for graviton amplitudes similar to the one found for planar N=4 super Yang-Mills theory. We also show how the behavior at infinity is related to the scaling of tree-level amplitudes under certain multi-line chiral shifts which can be used to construct new recursion relations.
high energy physics theory
We analyze mesons in constant magnetic fields ($B$) within a non-relativistic constituent quark model. Our quark model contains a harmonic oscillator type confining potential at leading order, and we perturbatively add short range correlations to account for spin-flavor energy splittings. We study both neutral and charged mesons, taking into account the internal quark dynamics. The neutral states are labelled by two-dimensional momenta for magnetic translations, while the charged states are labelled by two discrete indices related to angular momenta. For $B \ll \Lambda_{\rm QCD}^2$ ($\Lambda_{\rm QCD} \sim 200$ MeV: the QCD scale), the analyses proceed as in usual quark models, while special care is needed for strong fields, $B \sim \Lambda_{\rm QCD}^2$, especially when we treat short range correlations such as the Fermi-Breit-Pauli interactions. We compute the energy spectra of mesons up to energies of $\sim 2.5$ GeV and construct the meson resonance gas. Under the assumption that the constituent quark masses are insensitive to magnetic fields, the phase space enhancement for mesons significantly increases the entropy, assisting a transition from a hadron gas to a quark gluon plasma.
high energy physics phenomenology
Generating intense ultrashort pulses with high-quality spatial modes is crucial for ultrafast and strong-field science. This can be accomplished by controlling the propagation of femtosecond pulses under the influence of Kerr nonlinearity and achieving stable propagation at high intensity. In this work, we propose that the generation of spatial solitons in periodic layered Kerr media provides an optimum condition for supercontinuum generation and pulse compression using multiple thin plates. With both experimental and theoretical investigations, we successfully identify these solitary modes and reveal a universal relationship between the beam size and the critical nonlinear phase. Space-time coupling is shown to strongly influence the spectral, spatial and temporal profiles of femtosecond pulses. Taking advantage of the unique character of these solitary modes, we demonstrate single-stage supercontinuum generation and compression of femtosecond pulses from initially 170 fs down to 22 fs with an efficiency of ~90%. We also provide evidence of efficient mode self-cleaning, which suggests rich spatio-temporal self-organization processes of laser beams in a nonlinear resonator.
physics
In the last two decades several biclustering methods have been developed as new unsupervised learning techniques to simultaneously cluster rows and columns of a data matrix. These algorithms play a central role in contemporary machine learning and in many applications, e.g. in computational biology and bioinformatics. The H-score is the evaluation score underlying the seminal biclustering algorithm by Cheng and Church, as well as many subsequent biclustering methods. In this paper, we characterize a potentially troublesome bias in this score that can distort biclustering results. We prove, both analytically and by simulation, that the average H-score increases with the number of rows/columns in a bicluster. This makes the H-score, and hence all algorithms based on it, biased towards small clusters. Based on our analytical proof, we are able to provide a straightforward way to correct this bias, allowing users to accurately compare biclusters.
statistics
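A small simulation makes the reported bias concrete. Assuming the H-score is the Cheng-Church mean squared residue, the sketch below evaluates it on pure Gaussian noise for increasing bicluster sizes; the average score grows with size, so smaller biclusters look spuriously better.

```python
import numpy as np

def h_score(A):
    """Mean squared residue (Cheng & Church) of a submatrix A."""
    row = A.mean(axis=1, keepdims=True)
    col = A.mean(axis=0, keepdims=True)
    resid = A - row - col + A.mean()
    return np.mean(resid ** 2)

rng = np.random.default_rng(1)
for n in (2, 5, 10, 50, 200):
    scores = [h_score(rng.standard_normal((n, n))) for _ in range(200)]
    print(f"{n:>3} x {n:<3} noise bicluster: mean H = {np.mean(scores):.3f}")
# On N(0,1) noise the average H grows with bicluster size towards the
# noise variance, matching the size bias characterized in the paper.
```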
We find the threshold for the existence of a collection of edge-disjoint copies of $K_r$ that form a cyclic structure and span all vertices of $G_{n,p}$. We use a recent result of Riordan to give a two-line proof of the main result.
mathematics
One of the primary concerns of product quality control in the automotive industry is the automated detection of small defects on specular car body surfaces. A new statistical learning approach is presented for surface finish defect detection, based on a spline smoothing method for feature extraction and a $k$-nearest neighbour probabilistic classifier. Since the surfaces are specular, a structured lighting reflection technique is applied for image acquisition. Reduced-rank cubic regression splines are used to smooth the pixel values, while the effective degrees of freedom of the obtained smooths serve as components of the feature vector. A key advantage of the approach is that it allows reaching a near-zero misclassification error rate when applying standard learning classifiers. We also propose probability-based performance evaluation metrics as alternatives to the conventional metrics. Their usage provides the means for uncertainty estimation of the predictive performance of a classifier. Experimental classification results on images obtained from the pilot system located at the Volvo GTO Cab plant in Ume{\aa}, Sweden, show that the proposed approach is much more efficient than the compared methods.
computer science
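The feature construction can be sketched as follows, with the caveat that the basis, the smoothing-parameter grid, and the toy data are assumptions; the paper's reduced-rank splines and probabilistic kNN variant may differ in detail. Each intensity profile is smoothed by a penalized cubic spline with the penalty weight chosen by generalized cross-validation, and the effective degrees of freedom (the trace of the hat matrix) of the selected smooth serve as a feature for a kNN classifier.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def smooth_edf(y, n_knots=8):
    """Effective degrees of freedom of a penalized cubic spline smooth,
    with the penalty weight selected by generalized cross-validation."""
    n = len(y)
    x = np.linspace(0, 1, n)
    knots = np.linspace(0, 1, n_knots + 2)[1:-1]
    X = np.column_stack([np.ones(n), x, x**2, x**3] +
                        [np.clip(x - k, 0, None) ** 3 for k in knots])
    P = np.eye(X.shape[1]); P[:4, :4] = 0        # penalize knot terms only
    best_gcv, best_edf = np.inf, 0.0
    for lam in np.logspace(-4, 4, 25):
        H = X @ np.linalg.solve(X.T @ X + lam * P, X.T)
        edf = np.trace(H)
        gcv = n * np.sum((y - H @ y) ** 2) / (n - edf) ** 2
        if gcv < best_gcv:
            best_gcv, best_edf = gcv, edf
    return best_edf

# Toy profiles: smooth "defect-free" curves vs curves with a narrow bump,
# which demands extra degrees of freedom from the data-driven smooth.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 128)
ok = [np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)
      for _ in range(25)]
bad = [np.sin(2 * np.pi * x)
       + np.exp(-((x - rng.uniform(0.2, 0.8)) ** 2) / 1e-3)
       + 0.05 * rng.standard_normal(x.size) for _ in range(25)]
feats = np.array([[smooth_edf(p)] for p in ok + bad])
labels = np.array([0] * 25 + [1] * 25)
knn = KNeighborsClassifier(n_neighbors=5).fit(feats, labels)
print("training accuracy:", knn.score(feats, labels))
```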
Dependency solving is a hard (NP-complete) problem in all non-trivial component models, due to either mutually incompatible versions of the same packages or explicitly declared package conflicts. As such, software upgrade planning needs to rely on highly specialized dependency solvers, lest it fall into pitfalls such as incompleteness: a combination of package versions that satisfies the dependency constraints does exist, but the package manager is unable to find it. In this paper we look back at proposals from dependency solving research dating back a few years. Specifically, we review the idea of treating dependency solving as a separate concern in package manager implementations, relying on generic dependency solvers based on tried and tested techniques such as SAT solving, PBO, MILP, etc. By conducting a census of dependency solving capabilities in state-of-the-art package managers, we conclude that some proposals are starting to take off (e.g., SAT-based dependency solving) while, with few exceptions, others have not (e.g., out-sourcing dependency solving to reusable components). We reflect on why that has been the case and look at novel challenges for dependency solving that have emerged since.
computer science
Measuring the photoluminescence of defects in crystals is a common experimental technique for analysis and identification. However, current theoretical approaches typically require simulating a large number of atoms to eliminate finite size effects, which discourages computationally expensive excited state methods. We show how to extract the room-temperature photoluminescence spectra of defect centres in bulk from an $\mathrm{\textit{ab-initio}}$ simulation of a defect in small clusters. The finite size effect of small clusters manifests as strong coupling to low frequency vibrational modes. We find that removing vibrations below a cutoff frequency, determined by constrained optimization, recovers the main features of the solid state photoluminescence spectrum. This strategy is illustrated for an NV$^{-}$ defect in diamond, presenting a connection between defects in the solid state and in clusters; the first vibrationally resolved $\mathrm{\textit{ab-initio}}$ photoluminescence spectrum of an NV$^{-}$ defect in a nanodiamond; and an alternative technique for simulating photoluminescence of solid state defects utilizing more accurate excited state methods.
condensed matter
Quantile regression relates the quantile of the response to a linear predictor. For discrete response distributions, like the Poisson, binomial and negative binomial, this approach is not feasible as the quantile function is not bijective. We argue for using a continuous, model-aware interpolation of the quantile function, allowing for proper quantile inference while retaining model interpretation. This approach allows for proper uncertainty quantification and mitigates the issue of quantile crossing. Our reanalysis of hospitalisation data considered in Congdon (2017) shows the advantages of our proposal, as well as introducing a novel method to exploit quantile regression in the context of disease mapping.
statistics
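One concrete way to build such an interpolation is to interpolate the discrete CDF linearly between adjacent support points and then invert it, which yields a bijective quantile function; a sketch for a Poisson response is below. The linear scheme is an assumption for illustration, whereas the model-aware interpolation advocated in the paper preserves the parametric form rather than interpolating generically.

```python
import numpy as np
from scipy.stats import poisson

def continuous_poisson_quantile(p, mu):
    """Quantile of a continuous interpolation of the Poisson(mu) CDF.

    The CDF is interpolated linearly between consecutive support points,
    making the quantile function bijective on (0, 1)."""
    k = poisson.ppf(p, mu)                 # smallest k with F(k) >= p
    lo = poisson.cdf(k - 1, mu)            # F(k-1) < p <= F(k)
    hi = poisson.cdf(k, mu)
    return (k - 1) + (p - lo) / (hi - lo)  # linear inverse on [k-1, k]

for p in (0.1, 0.5, 0.9):
    print(p, round(continuous_poisson_quantile(p, mu=4.0), 3))
```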
In this work we study the semileptonic decays of $B_c$ meson. We evaluated $B_{c}\rightarrow D(D^{\ast})$, $B_{c}\rightarrow D_s(D_s^{\ast})$ and $B_{c}\rightarrow \eta_{c}(J/\psi)$ transitions form factors in the full kinematical region within the covariant quark model. The calculated form factors are used to evaluate the semileptonic decays of $B_c$ meson and it was defined ratios ($R_{\eta_{c}}$, $R_{J/\psi}$, $R_{ D}$ ,$R_{ D^{\ast}}$) of the branching ratios, which will be hopefully tested on LHC experiments.We compare the obtained results with the results from other theoretical approaches.
high energy physics phenomenology
Quarterback performance can be difficult to rank, and much effort has been spent on creating new rating systems. However, the input statistics for such ratings are subject to randomness and factors outside the quarterback's control. To investigate this variance, we perform a sensitivity analysis of three quarterback rating statistics: the Traditional 1971 rating by Smith, the Burke rating, and the Wages of Wins rating. The comparisons are made at the team level for the 32 NFL teams from 2002-2015, thus giving each case an even 16 games. We compute quarterback ratings for each offense with 1-5 additional touchdowns, 1-5 fewer interceptions, 1-5 additional sacks, and a 1-5 percent increase in the passing completion rate. Our sensitivity analysis provides insight into whether an elite passing team could seem mediocre or vice versa based on random outcomes. The results indicate that the Traditional rating is the most sensitive statistic with respect to touchdowns, interceptions, and completions, whereas the Burke rating is most sensitive to sacks. The analysis suggests that team passing offense rankings are highly sensitive to aspects of football that are out of the quarterback's hands (e.g., deflected passes that lead to interceptions). Thus, on the margins, we show that arguments about whether a specific quarterback has entered the elite or remains mediocre are irrelevant.
statistics
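For the Traditional statistic the sensitivity computation is easy to reproduce, since the 1971 NFL passer rating has a closed form: four per-attempt terms, each clamped to [0, 2.375], averaged and scaled by 100. The sketch below perturbs touchdowns and interceptions for a made-up team season; note that sacks do not enter this particular rating, so only the Burke and Wages of Wins statistics respond to the sack perturbation.

```python
def passer_rating(att, cmp, yds, td, ints):
    """Traditional (1971) NFL passer rating."""
    clamp = lambda v: max(0.0, min(v, 2.375))
    a = clamp((cmp / att - 0.3) * 5)       # completion percentage term
    b = clamp((yds / att - 3) * 0.25)      # yards per attempt term
    c = clamp(td / att * 20)               # touchdown rate term
    d = clamp(2.375 - ints / att * 25)     # interception rate term
    return (a + b + c + d) / 6 * 100

base = dict(att=520, cmp=340, yds=4000, td=28, ints=12)  # hypothetical season
print("baseline:", round(passer_rating(**base), 1))
for k in range(1, 6):
    more_td = passer_rating(base["att"], base["cmp"], base["yds"],
                            base["td"] + k, base["ints"])
    fewer_int = passer_rating(base["att"], base["cmp"], base["yds"],
                              base["td"], base["ints"] - k)
    print(f"+{k} TD -> {more_td:5.1f} | -{k} INT -> {fewer_int:5.1f}")
```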
We present a doubly holographic prescription for computing entanglement entropy on a gravitating brane. It involves a Ryu-Takayanagi surface with a Dirichlet anchoring condition. In braneworld cosmology, a related approach was used previously in arXiv:2007.06551. There, the prescription naturally computed a co-moving entanglement entropy, and was argued to resolve the information paradox for a black hole living in the cosmology. In this paper, we show that the Dirichlet prescription leads to reasonable results when applied to a recently studied wedge holography setup with a gravitating bath. The nature of the information paradox and its resolution in our Dirichlet problem have a natural understanding in terms of the strength of gravity on the two branes and at the anchoring location. By sliding the anchor to the defect, we demonstrate that the limit where gravity decouples from the anchor is continuous -- in other words, as far as island physics is concerned, weak gravity on the anchor is identical to no gravity. The weak and (moderately) strong gravity regions on the brane are separated by a "Dirichlet wall". We find an intricate interplay between various extremal surfaces, with an island coming to the rescue whenever there is an information paradox. This is despite the presence of massless gravitons in the spectrum. The overall physics is consistent with the slogan that gravity becomes "more holographic" as it gets stronger. Our observations strengthen the case that the conventional Page curve is indeed of significance when discussing the information paradox in flat space. We work in high enough dimensions so that the graviton is non-trivial, and our results are in line with the previous discussions on gravitating baths in arXiv:2005.02993 and arXiv:2007.06551.
high energy physics theory
We investigate the Mott insulating states of the SU(4) Hubbard model on the square lattice with a staggered pattern of flux by employing large-scale sign-problem-free quantum Monte-Carlo simulations. As the flux $\phi$ is varied, the low energy fermions evolve from a nested Fermi surface at zero flux to isotropic Dirac cones at $\pi$-flux, exhibiting anisotropic Dirac cones in between. The simulations show the competition among the Dirac semi-metal, antiferromagnetic and valence-bond-solid phases. The phase diagram features a tri-critical point where these three phases meet. In the strong coupling limit, only the antiferromagnetic phase appears. The quantum phase transition between the antiferromagnetic phase and the valence-bond-solid phase is found to be continuous, and the critical exponents are numerically determined. We also find that, inside the valence-bond-solid phase, there exists a region in which the single-particle gap vanishes but the spin gap remains finite, consistent with a plaquette valence-bond ordering pattern.
condensed matter
We report on the first detection of very high-energy (VHE) gamma-ray emission from the Crab Nebula by a Cherenkov telescope in dual-mirror Schwarzschild-Couder (SC) configuration. The result has been achieved by means of the 4 m size ASTRI-Horn telescope, operated on Mt. Etna (Italy) and developed in the context of the Cherenkov Telescope Array Observatory preparatory phase. The dual-mirror SC design is aplanatic and characterized by a small plate scale, allowing us to implement large field of view cameras with small-size pixel sensors and a high compactness. The curved focal plane of the ASTRI camera is covered by silicon photo-multipliers (SiPMs), managed by an unconventional front-end electronics based on a customized peak-sensing detector mode. The system includes internal and external calibration systems, hardware and software for control and acquisition, and the complete data archiving and processing chain. The observations of the Crab Nebula were carried out in December 2018, during the telescope verification phase, for a total observation time (after data selection) of 24.4 h, equally divided into on- and off-axis source exposure. The camera system was still under commissioning and its functionality was not yet completely exploited. Furthermore, due to recent eruptions of the Etna Volcano, the mirror reflection efficiency was reduced. Nevertheless, the observations led to the detection of the source with a statistical significance of 5.4 sigma above an energy threshold of ~3 TeV. This result provides an important step towards the use of dual-mirror systems in Cherenkov gamma-ray astronomy. A pathfinder mini-array based on nine large field-of-view ASTRI-like telescopes is under implementation.
astrophysics
We propose an extension of the regular Cox proportional hazards model which allows the estimation of the probabilities of rare events. It is known that when the data are heavily censored at the upper end of the survival distribution, the estimation of the tail of the survival distribution is not reliable. To estimate the distribution beyond the last observed data point, we suppose that the survival data are in the domain of attraction of the Fr\'echet distribution conditionally on covariates. Under this condition, by the Fisher-Tippett-Gnedenko theorem, the tail of the baseline distribution can be adjusted by a Pareto distribution with parameter $\theta$ beyond a threshold $\tau$. The survival distributions conditioned on the covariates are easily computed from the baseline. We also propose an aggregated estimate of the survival probabilities. A procedure allowing an automatic choice of the threshold and an application to two data sets are given.
statistics
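Under the Fr\'echet domain-of-attraction assumption, the baseline survival beyond the threshold behaves like a Pareto tail, $S(t) \approx S(\tau)(t/\tau)^{-1/\gamma}$ for $t > \tau$, where $\gamma$ is the extreme-value index (related to the paper's $\theta$, up to parameterization conventions). The tail index can be estimated from exceedances, for instance with the Hill estimator; the sketch below checks this on synthetic uncensored Pareto data. It is a building block only, not the paper's full Cox-regression procedure or its automatic threshold selection.

```python
import numpy as np

def hill_estimator(data, tau):
    """Hill estimate gamma = mean log-exceedance over the threshold tau."""
    d = np.asarray(data)
    exc = d[d > tau]
    return np.mean(np.log(exc / tau))

def tail_survival(t, tau, s_tau, gamma):
    """Pareto-adjusted survival beyond tau: S(t) = S(tau)(t/tau)^(-1/gamma)."""
    return s_tau * (t / tau) ** (-1.0 / gamma)

rng = np.random.default_rng(3)
alpha = 2.5                                 # true tail index, gamma = 0.4
sample = rng.pareto(alpha, 20_000) + 1.0    # classical Pareto on [1, inf)
gamma_hat = hill_estimator(sample, tau=2.0)
print("true gamma:", 1 / alpha, "| estimate:", round(gamma_hat, 3))
print("S(10) =", tail_survival(10.0, 2.0, np.mean(sample > 2.0), gamma_hat))
```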
Certain six-dimensional (1,0) supersymmetric little string theories, when compactified on $T^3$, have moduli spaces of vacua given by smooth K3 surfaces. Using ideas of Gaiotto-Moore-Neitzke, we show that this provides a systematic procedure for determining the Ricci-flat metric on a smooth K3 surface in terms of BPS degeneracies of (compactified) little string theories.
high energy physics theory
Roman-era concrete is the iconic embodiment of long-term physicochemical resilience. We investigated the basis of this behavior across scales of observation by coupling time-lapse (4-D) tomographic imaging of macroscopic mechanical stressing with structural microscopy and chemical spectroscopy on Roman marine concrete (RMC) from ancient harbors in Italy and Israel. Stress-strain measurements revealed that RMC creeps and exhibits a ductile deformation mode. The permeability of specimens from Italy was found to be low due to increased matrix-aggregate bonding. Structural and chemical imaging shows the presence of well-developed sulfur-rich, fibrous minerals that are intertwined and embedded in a crossbred matrix having the chemical traits of both a calcium-aluminum-silicate-hydrate and a geopolymer. The latter likely reflects the ultra-alkaline volcanic nature of the primary source materials. We hypothesize that the fine interweave of sulfur-rich fibers within this crossbred matrix enhances aggregate bonding, which altogether contributes to the durability of RMC.
physics
We investigate the effect of Ni doping on the Fe site in single crystals of the magnetic superconductor RbEuFe$_4$As$_4$ for doping concentrations of up to 4%. A clear suppression of the superconducting transition temperature is observed in specific heat, resistivity and magnetization measurements. Upon Ni doping, the resistivity curves shift up in a parallel fashion, indicating a strong increase of the residual resistivity due to scattering by charged dopant atoms, while the shape of the curve, and thus the electronic structure, appears largely unchanged. The observed step $\Delta C/T_c$ at the superconducting transition decreases strongly with increasing Ni doping, in agreement with expectations based on a model of multi-band superconductivity and strong inter-band pairing. The upper critical field slopes are reduced upon Ni doping for in-plane as well as out-of-plane fields, leading to a small reduction of the superconducting anisotropy. The specific heat measurements of the magnetic transition reveal the same BKT behavior close to the transition temperature $T_m$ for all doping levels. The transition temperature is essentially unchanged upon doping. The in-plane to out-of-plane anisotropy of the Eu magnetism observed at small magnetic fields is unaltered compared to the undoped compound. All of these observations indicate a decoupling of the Eu magnetism from superconductivity and essentially no influence of Ni doping on the Eu magnetism in this compound.
condensed matter
Most e-commerce product feeds provide blended results of advertised products and recommended products to consumers. The underlying advertising and recommendation platforms share a similar, if not exactly the same, set of candidate products. Consumers' behaviors on the advertised results constitute part of the recommendation model's training data and can therefore influence the recommended results. We refer to this process as Leverage. Considering this mechanism, we propose a novel perspective: advertisers can strategically bid through the advertising platform to optimize their recommended organic traffic. By analyzing real-world data, we first explain the principles of the Leverage mechanism, i.e., the dynamic models of Leverage. Then we introduce a novel Leverage optimization problem and formulate it as a Markov Decision Process. To deal with the sample complexity challenge in model-free reinforcement learning, we propose a novel Hybrid Training Leverage Bidding (HTLB) algorithm which combines real-world samples and emulator-generated samples to boost learning speed and stability. Our offline experiments as well as results from the online deployment demonstrate the superior performance of our approach.
statistics
DESI will precisely constrain cosmic expansion and the growth of structure by collecting $\sim$35 million redshifts across $\sim$80% of cosmic history and one third of the sky to study Baryon Acoustic Oscillations (BAO) and Redshift Space Distortions (RSD). We present a preliminary target selection for an Emission Line Galaxy (ELG) sample, which will comprise about half of all DESI tracers. The selection consists of a $g$-band magnitude cut and a $(g-r)$ vs. $(r-z)$ color box, which we validate using HSC/PDR2 photometric redshifts and DEEP2 spectroscopy. The ELG target density should be $\sim$2400 deg$^{-2}$, with $\sim$65% of ELG redshifts reliably within a redshift range of $0.6<z<1.6$. ELG targeting for DESI will be finalized during a `Survey Validation' phase.
astrophysics
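Operationally, such a selection is just a pair of boolean masks over the photometric catalog, as in the hedged sketch below; the magnitude and color thresholds shown are placeholders, not the published DESI values.

```python
import numpy as np

# Hypothetical catalog: g, r, z band magnitudes for mock galaxies.
rng = np.random.default_rng(4)
g = rng.uniform(20, 25, 100_000)
r = g - rng.normal(0.4, 0.3, g.size)
z = r - rng.normal(0.3, 0.3, g.size)

G_FAINT = 23.5                                 # placeholder cut values
mask = (
    (g < G_FAINT)                              # g-band magnitude cut
    & (0.1 < (g - r)) & ((g - r) < 1.0)        # color-box edges in (g-r)
    & (0.2 < (r - z)) & ((r - z) < 1.3)        # and in (r-z)
)
print(f"selected fraction of mock catalog: {mask.mean():.2%}")
```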
We demonstrate a modeling and computational framework that allows for rapid screening of thousands of potential network designs for particular dynamic behavior. To illustrate this capability we consider the problem of hysteresis, a prerequisite for construction of robust bistable switches and hence a cornerstone for construction of more complex synthetic circuits. We evaluate and rank all three node networks according to their ability to robustly exhibit hysteresis. Focusing on the highest ranked networks, we demonstrate how additional robustness and design constraints can be applied. We compare our results to more traditional methods based on specific parameterization of ordinary differential equation models and demonstrate a strong qualitative match at a small fraction of the computational cost.
mathematics
Multi-channel short-time Fourier transform (STFT) domain-based processing of reverberant microphone signals commonly relies on power-spectral-density (PSD) estimates of early source images, where early refers to reflections contained within the same STFT frame. State-of-the-art approaches to multi-source early PSD estimation, given an estimate of the associated relative early transfer functions (RETFs), conventionally minimize the approximation error defined with respect to the early correlation matrix, requiring non-negative inequality constraints on the PSDs. Instead, we here propose to factorize the early correlation matrix and minimize the approximation error defined with respect to the early-correlation-matrix square root. The proposed minimization problem -- constituting a generalization of the so-called orthogonal Procrustes problem -- seeks a unitary matrix and the square roots of the early PSDs up to an arbitrary complex argument, making non-negative inequality constraints redundant. A solution is obtained iteratively, requiring one singular value decomposition (SVD) per iteration. The estimated unitary matrix and early PSD square roots further allow the RETF estimate to be updated recursively, which is not inherently possible in the conventional approach. An estimate of the early-correlation-matrix square root itself is obtained by means of the generalized eigenvalue decomposition (GEVD), where we further propose to restore non-stationarities by desmoothing the generalized eigenvalues in order to compensate for inevitable recursive averaging. Simulation results indicate fast convergence of the proposed multi-source early PSD estimation approach in only one iteration if initialized appropriately, and better performance compared to the conventional approach.
electrical engineering and systems science
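The classical orthogonal Procrustes step that the proposed minimization generalizes is solved by a single SVD: the unitary $Q$ minimizing $\|A - BQ\|_F$ is $Q = UV^H$, where $B^H A = U\Sigma V^H$. A minimal sketch of that step follows; the paper's generalized problem additionally optimizes the early PSD square roots and therefore iterates such an SVD step.

```python
import numpy as np

def procrustes_unitary(A, B):
    """argmin_Q ||A - B Q||_F over unitary Q, via one SVD of B^H A."""
    U, _, Vh = np.linalg.svd(B.conj().T @ A)
    return U @ Vh

rng = np.random.default_rng(5)
B = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
Q_true, _ = np.linalg.qr(rng.standard_normal((3, 3))
                         + 1j * rng.standard_normal((3, 3)))
A = B @ Q_true                         # noiseless model: exact recovery
Q_hat = procrustes_unitary(A, B)
print("recovered Q:", np.allclose(Q_hat, Q_true))
```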
A novel approach for unsupervised domain adaptation for neural networks is proposed. It relies on metric-based regularization of the learning process. The metric-based regularization aims at domain-invariant latent feature representations by means of maximizing the similarity between domain-specific activation distributions. The proposed metric results from modifying an integral probability metric such that it becomes less translation-sensitive on a polynomial function space. The metric has an intuitive interpretation in the dual space as the sum of differences of higher order central moments of the corresponding activation distributions. Under appropriate assumptions on the input distributions, error minimization is proven for the continuous case. As demonstrated by an analysis of standard benchmark experiments for sentiment analysis, object recognition and digit recognition, the outlined approach is robust regarding parameter changes and achieves higher classification accuracies than comparable approaches. The source code is available at https://github.com/wzell/mann.
statistics
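Consistent with the dual-space description above, a moment-matching regularizer can be sketched as the difference of sample means plus order-wise differences of marginal central moments, each scaled by a power of the activation range. The range constants, the moment order K, and the toy activations below are assumptions for illustration; the exact normalization in the released code may differ.

```python
import numpy as np

def cmd(X, Y, K=5, a=0.0, b=1.0):
    """Central-moment-discrepancy-style distance between two samples,
    with activations assumed to lie in the interval [a, b]."""
    span = b - a
    d = np.linalg.norm(X.mean(0) - Y.mean(0)) / span
    cx, cy = X - X.mean(0), Y - Y.mean(0)
    for k in range(2, K + 1):
        d += np.linalg.norm((cx ** k).mean(0) - (cy ** k).mean(0)) / span ** k
    return d

rng = np.random.default_rng(6)
src = rng.beta(2, 5, size=(1000, 16))     # "source" activations in [0, 1]
tgt = rng.beta(5, 2, size=(1000, 16))     # shifted "target" activations
same = rng.beta(2, 5, size=(1000, 16))    # fresh draw from the source law
print("CMD(src, tgt): ", round(cmd(src, tgt), 4))   # large: domains differ
print("CMD(src, same):", round(cmd(src, same), 4))  # small: same domain
```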
The Iron Calorimeter (ICAL) detector at the proposed India-based Neutrino Observatory (INO) aims to detect atmospheric neutrinos and antineutrinos separately in the multi-GeV range of energies and over a wide range of baselines. By utilizing its charge identification capability, ICAL can efficiently distinguish $\mu^-$ and $\mu^+$ events. Atmospheric neutrinos passing long distances through Earth can be detected at ICAL with good resolution in energy and direction, which enables ICAL to see the density-dependent matter oscillations experienced by upward-going neutrinos in the multi-GeV range of energies. In this work, we explore the possibility of utilizing neutrino oscillations in the presence of matter to extract information about the internal structure of Earth complementary to seismic studies. Using good directional resolution, ICAL would be able to observe 331 $\mu^-$ and 146 $\mu^+$ core-passing events with 500 kt$\cdot$yr exposure. With this exposure, we show for the first time that the presence of Earth's core can be independently confirmed at ICAL with a median $\Delta \chi^2$ of 7.45 (4.83) assuming normal (inverted) mass ordering by ruling out the simple two-layered mantle-crust profile in theory while generating the prospective data with the PREM profile. If we generate the data with the simple three-layered core-mantle-crust profile of the Earth, the above-mentioned $\Delta \chi^2$ changes to 6.31 (3.92) assuming normal (inverted) mass ordering.
high energy physics phenomenology
We complete the classification of globally generated vector bundles with small $c_1$ on projective spaces by treating the case $c_1 = 5$ on $\mathbb{P}^n$, $n \geq 4$ (the case $c_1 \leq 3$ has been considered by Sierra and Ugaglia, while the cases $c_1 = 4$ on any projective space and $c_1 = 5$ on $\mathbb{P}^2$ and $\mathbb{P}^3$ have been studied in two of our previous papers). It turns out that there are very few indecomposable bundles of this kind: besides some obvious examples there are, roughly speaking, only the (first twist of the) rank 5 vector bundle which is the middle term of the monad defining the Horrocks bundle of rank 3 on $\mathbb{P}^5$, and its restriction to $\mathbb{P}^4$. We recall, in an appendix, from our preprint [arXiv:1805.11336], the main results allowing the classification of globally generated vector bundles with $c_1 = 5$ on $\mathbb{P}^3$. Since there are many such bundles, a large part of the main body of the paper is occupied with the proof of the fact that, except for the simplest ones, they do not extend to $\mathbb{P}^4$ as globally generated vector bundles.
mathematics
We study the space-time symmetries of the actions obtained by expanding the action for a massive free relativistic particle around the Galilean action. We obtain all the point space-time symmetries of the post-Galilean actions by working in canonical space. We also construct an infinite collection of generalized Schr\"odinger algebras parameterized by an integer $M$, with $M=0$ corresponding to the standard Schr\"odinger algebra. We discuss the Schr\"odinger equations associated to these algebras, their solutions and projective phases.
high energy physics theory
A polynomial form is established for the off-shell CHY scattering equations proposed by Lam and Yao. Re-expressing this in terms of independent Mandelstam invariants provides a new expression for the polynomial scattering equations, immediately valid off shell, which makes it evident that they yield the off-shell amplitudes given by massless $\phi^3$ Feynman graphs. A CHY expression for individual Feynman graphs, valid even off shell, is established through a recurrence relation.
high energy physics theory
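For orientation, the rational form of the on-shell CHY scattering equations that the polynomial form repackages reads as follows, with the Mandelstam convention $s_{ij} = (k_i + k_j)^2$ assumed:

```latex
% On-shell CHY scattering equations for n massless momenta k_i,
% with punctures \sigma_i on the Riemann sphere:
f_i(\sigma) \;=\; \sum_{j \neq i} \frac{s_{ij}}{\sigma_i - \sigma_j} \;=\; 0,
\qquad i = 1, \dots, n .
```

In the off-shell setting of Lam and Yao the conditions $k_i^2 = 0$ on the external momenta are relaxed; the polynomial form established in the paper turns the resulting rational constraints into a manifestly algebraic system in the punctures.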
We develop the idea of local duality symmetry (LDS) in gauge field theories. Using Clifford algebra techniques, we construct a dually invariant scalar Lagrangian of electrodynamics in the presence of sources and demonstrate that in tensor formalism it is exactly the same as the usual one. We then localize the duality symmetry, with two possible options for the resulting pseudovector field: massive and massless. The first option might be interpreted as a candidate for dark matter. Perspectives for the application of LDS in QCD are briefly discussed.
high energy physics theory
Using a formulation of quantum mechanics based on orthogonal polynomials in the energy and physical parameters, we present a method that gives the class of potential functions for exactly solvable problems corresponding to a given energy spectrum. In this work, we study the class of problems with a mixed continuous and discrete energy spectrum that are associated with the continuous dual Hahn polynomial. These include the one-dimensional logarithmic potential and the three-dimensional Coulomb-plus-linear potential.
quantum physics
We present a method for the fast computation of the eigenpairs of a bijective positive symmetric linear operator $\mathcal{L}$. The method is based on a combination of operator adapted wavelets (gamblets) with hierarchical subspace correction. First, gamblets provide a raw but fast approximation of the eigensubspaces of $\mathcal{L}$ by block-diagonalizing $\mathcal{L}$ into sparse and well-conditioned blocks. Next, the hierarchical subspace correction method computes the eigenpairs associated with the Galerkin restriction of $\mathcal{L}$ to a coarse (low dimensional) gamblet subspace, and then corrects those eigenpairs by solving a hierarchy of linear problems in the finer gamblet subspaces (from coarse to fine, using multigrid iteration). The proposed algorithm is robust in the presence of multiple (a continuum of) scales and is shown to be of near-linear complexity when $\mathcal{L}$ is an (arbitrary local, e.g.~differential) operator mapping $\mathcal{H}^s_0(\Omega)$ to $\mathcal{H}^{-s}(\Omega)$ (e.g.~an elliptic PDE with rough coefficients).
mathematics
The importance of fluid-elastic forces in tube bundle vibrations can hardly be over-emphasized, in view of their damaging potential. In the last decades, advanced models for representing fluid-elastic coupling have therefore been developed by the community of the domain. Those models are nowadays embedded in the methodologies that are used on a regular basis by both steam generator providers and operators, in order to prevent the risk of a tube failure with adequate safety margins. From an R&D point of view, however, the need still remains for more advanced models of fluid-elastic coupling, in order to fully decipher the physics underlying the observed phenomena. As a consequence, new experimental flow-coupling coefficients are also required to specifically feed and validate those more sophisticated models. Recent experiments performed at CEA-Saclay suggest that the fluid stiffness and damping coefficients depend on further dimensionless parameters beyond the reduced velocity. In this work, the problem of data reduction is first revisited in the light of dimensional analysis. For single-phase flows, it is underlined that the flow-coupling coefficients depend at least on two dimensionless parameters, namely the Reynolds number $Re$ and the Stokes number $Sk$. Therefore, reducing the experimental data in terms of the compound dimensionless quantity $V_r=Re/Sk$ necessarily leads to impoverished results, hence the data dispersion. In a second step, experimental data are presented using the dimensionless numbers $Re$ and $Sk$. We report experiments on a 3x5 square tube bundle subjected to transverse water flow. The bundle is rigid, except for the central tube, which is mounted on a flexible suspension allowing for translation motions in the lift direction.
physics
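The compounding noted above can be made explicit. With $Re = UD/\nu$ and $Sk = fD^2/\nu$ (definitions assumed to follow the paper's usage, with $U$ the flow velocity, $D$ the tube diameter, $f$ the tube frequency and $\nu$ the kinematic viscosity), the reduced velocity is $V_r = U/(fD) = Re/Sk$, so distinct operating points can share one $V_r$. A quick numerical check:

```python
def groups(U, D, f, nu):
    """Dimensionless groups for a flexible tube in cross-flow."""
    Re = U * D / nu          # Reynolds number
    Sk = f * D ** 2 / nu     # Stokes number
    return Re, Sk, Re / Sk   # V_r = Re/Sk = U/(f*D), the reduced velocity

# Two hypothetical operating points with identical V_r but different Re.
for U, D, f, nu in [(1.0, 0.02, 10.0, 1.0e-6),
                    (2.0, 0.02, 20.0, 1.0e-6)]:
    Re, Sk, Vr = groups(U, D, f, nu)
    print(f"Re = {Re:9.0f}   Sk = {Sk:7.0f}   V_r = {Vr:.2f}")
# Same V_r = 5.0, yet Re differs by a factor of 2: reducing data in V_r
# alone conflates these points, consistent with the dispersion noted above.
```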
Inspired by the recent observation of $\chi_{c0}(3930)$, $X(4685)$ and $X(4630)$ by the LHCb Collaboration, and of exotic resonances such as $X(4350)$, $X(4500)$, etc. by several experimental collaborations, the $cs\bar{c}\bar{s}$ tetraquark systems with $IJ^{P}=00^+$, $01^+$ and $02^+$ are systematically investigated in the framework of the quark delocalization color screening model (QDCSM). Two structures, the meson-meson and diquark-antidiquark structures, as well as the channel coupling of all channels of these two configurations, are considered in this work. The numerical results indicate that the molecular bound state $\bar{D}_{s}D_{s}$ with $IJ^{P}=00^+$ can be invoked to explain the $\chi_{c0}(3930)$. Besides, by using the stabilization method, several resonant states are obtained. There are four $IJ^{P}=00^{+}$ states around the resonance masses 4035 MeV, 4385 MeV, 4524 MeV, and 4632 MeV, respectively; one $IJ^{P}=01^{+}$ state around the resonance mass 4327 MeV; and two $IJ^{P}=02^{+}$ states around the resonance masses 4419 MeV and 4526 MeV, respectively. All of them are compact tetraquarks. Among these states, $X(4350)$, $X(4500)$ and $X(4700)$ can be explained as compact tetraquark states with $IJ^{P}=00^{+}$, and the $X(4274)$ is a possible candidate for a compact tetraquark state with $IJ^{P}=01^{+}$. More experimental tests are expected to check the existence of all these possible resonance states.
high energy physics phenomenology
We present a semianalytical theory for exciton transport in organic molecular crystals interacting strongly with a single cavity mode. Based on the Holstein-Tavis-Cummings model and the Kubo formula, we derive an exciton mobility expression in the framework of a temperature-dependent variational canonical transformation, which can cover a wide range of exciton-vibration couplings, exciton-cavity couplings, and temperatures. A closed-form expression for the coherent part of the total mobility is obtained at zeroth order in the exciton-vibration coupling, which demonstrates the significance of vibrationally dressed dark excitons in determining the transport mechanism. By performing numerical simulations on both H- and J-aggregates, we find that the exciton-cavity coupling has significant effects on the total mobility: 1) at low temperatures, there exists an optimal exciton-cavity coupling strength for the H-aggregate at which a maximal mobility is reached, while the mobility in the J-aggregate decreases monotonically with increasing exciton-cavity coupling; 2) at high temperatures, the mobility in both types of aggregates is enhanced by the cavity. We illustrate the above-mentioned low-temperature optimal mobility observed in the H-aggregate by using realistic parameters at room temperature.
condensed matter
In-field demonstrations in real-world scenarios boost the development of a rising technology towards its integration in existing infrastructures. Although quantum key distribution (QKD) devices are already adopted outside the laboratories, current field implementations still suffer from high costs and low performance, preventing this emerging technology from large-scale deployment in telecommunication networks. Here we present a simple, practical and efficient QKD scheme with finite-key analysis, performed over a 21 dB-loss fiber link installed in the metropolitan area of Florence (Italy). Coexistence of quantum and weak classical communication is also demonstrated by transmitting an optical synchronization signal through the same fiber link.
quantum physics
Van der Waals semiconductor heterostructures could be a platform to harness hot photoexcited carriers in the next generation of optoelectronic and photovoltaic devices. The internal quantum efficiency of hot-carrier devices is determined by the relation between photocarrier extraction and thermalization rates. Using \textit{ab-initio} methods we show that the photocarrier thermalization time in single-layer transition metal dichalcogenides strongly depends on the peculiarities of the phonon spectrum and the electronic spin-orbit coupling. In detail, the lifted spin degeneracy in the valence band suppresses the hole scattering on acoustic phonons, slowing down the thermalization of holes by one order of magnitude as compared to electrons. Moreover, the hole thermalization time behaves differently in MoS$_2$ and WSe$_2$ because spin-orbit interactions differ in these seemingly similar materials. We predict that the internal quantum efficiency of a tunneling van der Waals semiconductor heterostructure depends qualitatively on whether MoS$_2$ or WSe$_2$ is used.
condensed matter
Site-specific atom probe tomography (APT) of aluminum alloys has been limited by sample preparation issues. Indeed, Ga, which is conventionally used in focused-ion beam (FIB) preparations, has a high affinity for Al grain boundaries and causes their embrittlement. This leads to high concentrations of Ga at grain boundaries after specimen preparation, unreliable compositional analyses and low specimen yield. Here, to tackle this problem, we propose to use cryo-FIB for APT specimen preparation specifically from grain boundaries in a commercial Al alloy. We demonstrate how this setup, easily implementable on conventional Ga-FIB instruments, efficiently prevents Ga diffusion to grain boundaries. Specimens prepared at room temperature and at cryogenic temperature (below approx. 90 K) are compared, and we confirm that at room temperature a compositional enrichment above 15 at.% of Ga is found at the grain boundary, whereas no enrichment could be detected for the cryo-prepared sample. We propose that this is due to the decrease of the diffusion rate of Ga at low temperature. The present results could have a high impact on the understanding of aluminum and Al alloys.
condensed matter
Self-assembly of granular particles is of great interest in both applied and basic research. It is commonly observed that, when randomly packed into a container, granular particles form disordered structures like glass. As the particles are athermal, the self-assembly of such packings can normally be directed with energy input via vibration or shear. However, here we show that in particular containers, mono-sized spheres and cubes can self-assemble into perfect crystals when randomly dropped in. This is because the favourable microstates for new particles are jammed into the ordered structure by the existing particles and the boundary acting synergistically. Such a self-assembly method has not been reported in the literature. It indicates that disordered packing structures may result from the conflict between the internal structure and the structure shaped by the boundary. Therefore, bridging such inconsistency could be a general principle for directing the self-assembly of different kinds of particles in emerging areas.
condensed matter
There has been increasing interest in deploying IoT devices to study human behaviour in locations such as homes and offices. Such devices can be deployed in a laboratory or `in the wild' in natural environments. The latter allows one to collect behavioural data that is not contaminated by the artificiality of a laboratory experiment. Using IoT devices in ordinary environments also brings the benefits of reduced cost, as compared with lab experiments, and less disturbance to the participants' daily routines which in turn helps with recruiting them into the research. However, in this case, it is essential to have an IoT infrastructure that can be easily and swiftly installed and from which real-time data can be securely and straightforwardly collected. In this paper, we present MakeSense, an IoT testbed that enables real-world experimentation for large scale social research on indoor activities through real-time monitoring and/or situation-aware applications. The testbed features quick setup, flexibility in deployment, the integration of a range of IoT devices, resilience, and scalability. We also present two case studies to demonstrate the use of the testbed, one in homes and one in offices.
computer science
We consider the effect of higher twist operators of the Wilson operator product expansion in the structure function $F_{2}(x,Q^{2})$ at small-$x$, taking into account QCD effective charges whose infrared behavior is constrained by a dynamical mass scale. The higher twist corrections are obtained from the renormalon formalism. Our analysis is performed within the conventional framework of next-to-leading order, with the factorization and renormalization scales chosen to be $Q^{2}$. The infrared properties of QCD are treated in the context of the generalized double-asymptotic-scaling approximation. We show that the corrections to $F_{2}$ associated with twist-four and twist-six are both necessary and sufficient for a good description of the deep infrared experimental data.
high energy physics phenomenology
Large project overruns and overtime work have been reported in the software industry, resulting in additional expense for companies and personal issues for developers. The present work aims to provide an overview of studies related to time pressure in software engineering; specifically, existing definitions, possible causes, and metrics relevant to time pressure were collected, and a mapping of the studies to software processes and approaches was performed. Moreover, we synthesize the results of existing quantitative studies on the effects of time pressure on software development, and offer practical takeaways for practitioners and researchers, based on empirical evidence. Our search strategy examined 5,414 sources, found through repository searches and snowballing. Applying inclusion and exclusion criteria resulted in the selection of 102 papers, which made relevant contributions related to time pressure in software engineering. The majority of high quality studies report increased productivity and decreased quality under time pressure. Frequent categories of studies focus on quality assurance, cost estimation, and process simulation. It appears that time pressure is usually caused by errors in cost estimation. The effect of time pressure is most often identified during software quality assurance. The majority of empirical studies report increased productivity under time pressure, while most cost estimation and process simulation models assume that compressing the schedule increases the total hours needed. We also find evidence of a mediating effect of knowledge on the effects of time pressure, and that tight deadlines impact tasks of an algorithmic nature more severely. Future research should better contextualize quantitative studies to account for the existing conflicting results and to provide an understanding of situations in which time pressure is either beneficial or harmful.
computer science
The XENON1T collaboration reported an excess of low-energy electron recoil events between 1 and 7 keV. We explore the possibility of explaining this anomaly by MeV-scale dark matter (DM) heated by the interior of the Sun through the same DM-electron interaction as in the detector. The kinetic energies of heated DM particles can reach a few keV and can potentially account for the excess signals detected by XENON1T. We study different form factors for the DM-electron interactions, $F(q)\propto q^i$ with $i=0,1,2$ and $q$ being the momentum exchange, and find that for all these cases the inclusion of the Sun-heated DM component improves the fit to the XENON1T data. The inferred DM-electron scattering cross section (at $q=\alpha m_e$, where $\alpha$ is the fine structure constant and $m_e$ is the electron mass) ranges from $\sim 10^{-38}$~cm$^2$ (for $i=0$) to $\sim 10^{-42}$~cm$^2$ (for $i=2$). We also derive constraints on the DM-electron cross sections for different form factors, which are stronger than previous results obtained with similar assumptions. We emphasize that the Sun-heated DM scenario relies on minimal assumptions about DM models and thus serves as a general explanation of the XENON1T anomaly via DM-electron interactions. The spectrum of Sun-heated DM is typically soft compared to other boosted DM, so small recoil events are expected to be abundant in this scenario. More sensitive direct detection experiments with lower thresholds could distinguish this scenario from other boosted DM models or solar axion models.
high energy physics phenomenology
Systems powered by artificial intelligence are being developed to be more user-friendly by communicating with users in a progressively human-like conversational way. Chatbots, also known as dialogue systems, interactive conversational agents, or virtual agents, are an example of such systems used in a wide variety of applications, ranging from customer support in the business domain to companionship in the healthcare sector. It is becoming increasingly important to develop chatbots that can best respond to the personalized needs of their users so that they can be as helpful to the user as possible in a real human way. This paper investigates and compares three popular existing chatbot API offerings and then proposes and develops a voice-interactive and multilingual chatbot that can effectively respond to users' mood, tone, and language using IBM Watson Assistant, Tone Analyzer, and Language Translator. The chatbot was evaluated using a use case targeted at responding to users' needs regarding exam stress, based on university student survey data generated using Google Forms. The results of measuring the chatbot's effectiveness at analyzing responses regarding exam stress indicate that the chatbot responded appropriately to user queries about how they are feeling about exams 76.5% of the time. The chatbot could also be adapted for use in other application areas such as student info-centers, government kiosks, and mental health support systems.
computer science
A nonlinear analytical model for the pressure dynamics in a vacuum chamber, pumped with a sputter ion pump (SIP), is proposed, discussed and experimentally evaluated. The model describes the physics of the pumping mechanism of SIPs in the context of a cold-atom experiment. By using this model, we fit pump-down curves of our vacuum system to extract the relevant physical parameters characterizing its pressure dynamics. The aim of this investigation is the optimization of cold-atom experiments in terms of reducing the dead time for quantum sensing using atom interferometry. We develop a calibration method to improve the precision in pressure measurements via the ion current in SIPs. Our method is based on a careful analysis of the gas conductance and pumping in order to reliably link the pressure readings at the SIP with the actual pressure in the vacuum (science) chamber. Our results are in agreement with the existence of essentially two pumping regimes determined by the pressure level in the system. In particular, we find our results in agreement with the well-known fact that for a given applied voltage, at low pressures, the discharge current efficiently sputters pumping material from the pump's electrodes. This process sets the leading pumping mechanism in this limit. At high pressures, the discharge current drops and the pumping is mainly performed by the already sputtered material.
physics
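As a minimal, hedged sketch of pump-down fitting, the snippet below fits the simplest lumped model $V\,dP/dt = Q - SP$ (constant effective pumping speed $S$ and outgassing rate $Q$), whose solution approaches the ultimate pressure $Q/S$ exponentially; the paper's nonlinear SIP model refines this with a pressure-dependent pumping speed. All numbers are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def pump_down(t, p0, p_ult, s_over_v):
    """Solution of V dP/dt = Q - S P with constant S and Q:
    P(t) = p_ult + (p0 - p_ult) exp(-(S/V) t), with p_ult = Q/S."""
    return p_ult + (p0 - p_ult) * np.exp(-s_over_v * t)

# Synthetic "measurement" with hypothetical parameters and 3% noise.
rng = np.random.default_rng(7)
t = np.linspace(0, 600, 80)                      # seconds
truth = (1e-6, 5e-9, 0.02)                       # p0 [mbar], p_ult, S/V [1/s]
p = pump_down(t, *truth) * (1 + 0.03 * rng.standard_normal(t.size))

popt, _ = curve_fit(pump_down, t, p, p0=(1e-6, 1e-9, 0.01))
print("fitted (p0, p_ult, S/V):", popt)
```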
We recover PH3 in the atmosphere of Venus in data taken with ALMA, using three different calibration methods. The whole-planet signal is recovered with 5.4{\sigma} confidence using Venus bandpass self-calibration, and two simpler approaches are shown to yield example 4.5-4.8{\sigma} detections of the equatorial belt. Non-recovery by Villanueva et al. is attributable to (a) including areas of the planet with high spectral artefacts and (b) retaining all antenna baselines, which raises the noise by a factor of ~2.5. We release a data-processing script that enables our whole-planet result to be reproduced. The JCMT detection of PH3 remains robust, with the alternative SO2 attribution proposed by Villanueva et al. appearing inconsistent both in line-velocity and with millimetre-wavelength SO2 monitoring. SO2 contamination of the ALMA PH3 line is minimal. Net abundances for PH3, in the gas column above ~55 km, are up to ~20 ppb planet-wide with JCMT, and ~7 ppb with ALMA (but with signal loss possible on scales approaching planetary size). Derived abundances will differ if PH3 occupies restricted altitudes - molecules in the clouds will contribute significantly less absorption at line-centre than equivalent numbers of mesospheric molecules - but in the latter zone, PH3 lifetime is expected to be short. Given that we recover phosphine, we suggest possible solutions (requiring substantial further testing): a small collisional broadening coefficient could give narrow lines from lower altitude, or a high eddy diffusion coefficient could allow molecules to survive longer at higher altitudes. Alternatively, PH3 could be actively produced by an unknown mechanism in the mesosphere, but this would need to be in addition to cloud-level PH3 detected retrospectively by Pioneer-Venus.
astrophysics
Most models for the connection between galaxies and their haloes ignore the possibility that galaxy properties may be correlated with halo properties other than mass, a phenomenon known as galaxy assembly bias. Yet, it is known that such correlations can lead to systematic errors in the interpretation of survey data. At present, the degree to which galaxy assembly bias may be present in the real Universe, and the best strategies for constraining it remain uncertain. We study the ability of several observables to constrain galaxy assembly bias from redshift survey data using the decorated halo occupation distribution (dHOD), an empirical model of the galaxy--halo connection that incorporates assembly bias. We cover an expansive set of observables, including the projected two-point correlation function $w_{\mathrm{p}}(r_{\mathrm{p}})$, the galaxy--galaxy lensing signal $\Delta \Sigma(r_{\mathrm{p}})$, the void probability function $\mathrm{VPF}(r)$, the distributions of counts-in-cylinders $P(N_{\mathrm{CIC}})$, and counts-in-annuli $P(N_{\mathrm{CIA}})$, and the distribution of the ratio of counts in cylinders of different sizes $P(N_2/N_5)$. We find that despite the frequent use of the combination $w_{\mathrm{p}}(r_{\mathrm{p}})+\Delta \Sigma(r_{\mathrm{p}})$ in interpreting galaxy data, the count statistics, $P(N_{\mathrm{CIC}})$ and $P(N_{\mathrm{CIA}})$, are generally more efficient in constraining galaxy assembly bias when combined with $w_{\mathrm{p}}(r_{\mathrm{p}})$. Constraints based upon $w_{\mathrm{p}}(r_{\mathrm{p}})$ and $\Delta \Sigma(r_{\mathrm{p}})$ share common degeneracy directions in the parameter space, while combinations of $w_{\mathrm{p}}(r_{\mathrm{p}})$ with the count statistics are more complementary. Therefore, we strongly suggest that count statistics should be used to complement the canonical observables in future studies of the galaxy--halo connection.
astrophysics