text | label |
---|---|
The integration of large-scale wind farms and large-scale charging stations for electric vehicles with electricity grids necessitates energy storage support for both technologies. Matching the energy variability of wind farms with the demand variability of electric vehicles (EVs) off-grid could potentially eliminate the need for the expensive energy storage technologies required to stabilize the grid. The objective of this paper is to investigate the feasibility of using wind generation as a direct energy source to power EV charging stations. An interval-based approach corresponding to the time slot of EV charging is introduced for wind energy conversion and analyzed using different constraints and criteria, including the wind speed averaging time interval, various turbine manufacturers, and standard high-resolution wind speed data sets. We performed a piecewise recursive analysis of wind turbines' output energy to measure the EV charging efficiency. Wind averaging results show that three-minute intervals increased the total number of EVs by more than 80% compared to one- and two-minute intervals. The potential cost reduction due to decoupling both technologies from the utility grid, energy storage systems, and the associated energy conversion power electronics has merit, and research in this direction is worth pursuing. | electrical engineering and systems science |
We put forward a measure based on Gaussian steering to quantify the non-Markovianity of continuous-variable (CV) Gaussian quantum channels. We employ the proposed measure to assess and compare the non-Markovianity of a quantum Brownian motion (QBM) channel, originating from the interaction with Ohmic and sub-Ohmic environments with spectral densities described by a Lorentz-Drude cutoff, both at high and low temperatures, showing that sub-Ohmic, high temperature environments lead to highly non-Markovian evolution, with cyclic backflows of Gaussian steerability from the environment to the system. Our results add to the understanding of the interplay between quantum correlations and non-Markovianity for CV systems, and could be implemented at the experimental level to quantify non-Markovianity in some physical scenarios. | quantum physics |
We present a comparison between approximated methods for the construction of mock catalogs based on the halo-bias mapping technique. To this end, we use as reference a high-resolution $N$-body simulation of 3840$^3$ dark matter particles on a 400$h^{-1}\rm{Mpc}$ cubic box from the Multidark suite. In particular, we explore parametric versus non-parametric bias mapping approaches and compare them at reproducing the halo distribution in terms of the two- and three-point statistics down to $\sim 10^8\,{\rm M}_{\odot}\,h^{-1}$ halo masses. Our findings demonstrate that the parametric approach remains inaccurate even when including complex deterministic and stochastic components. On the contrary, the non-parametric one is indistinguishable from the reference $N$-body calculation in the power spectrum beyond $k=1\,h\,{\rm Mpc}^{-1}$, and in the bispectrum for typical configurations relevant to baryon acoustic oscillation analysis. We conclude that approaches which extract the full bias information from $N$-body simulations in a non-parametric fashion are ready for the analysis of the new generation of large-scale structure surveys. | astrophysics |
We investigate Majorana dark matter in a new variant of the $U(1)_{L_{\mu}-L_{\tau}}$ gauge extension of the Standard Model, where the scalar sector is enriched with an inert doublet and a $(\bar{3},1,1/3)$ scalar leptoquark. We compute the WIMP-nucleon cross section in the leptoquark portal and the relic density mediated by the inert doublet components, the leptoquark and the new $Z^{\prime}$ boson. We constrain the parameter space consistent with the Planck limit on relic density and the PICO-60 and LUX bounds on the spin-dependent direct detection cross section. Furthermore, we constrain the new couplings from the present experimental data on ${\rm Br}(\tau \to \mu \nu_\tau \bar \nu_\mu)$, ${\rm Br}( B \to X_s \gamma)$, ${\rm Br}( B^0 \to K^0 \mu^+ \mu^-)$, ${\rm Br}(B^+ \to K^+ \tau^+ \tau^-)$ and $B_s-\bar{B_s}$ mixing, which occur at the one-loop level in the presence of the $Z^\prime$ and leptoquark. Using the allowed parameter space, we estimate the form factor independent $P_{4,5}^\prime$ observables and the lepton non-universality parameters $R_{K}$, $R_{K^*}$ and $R_\phi$. We also briefly discuss the neutrino mass generation at the one-loop level and the viable parameter region to explain current neutrino oscillation data. | high energy physics phenomenology |
Entanglement and quantum squeezing have wide applications in quantum technologies due to their non-classical characteristics. Here we study entanglement and quantum squeezing in an open spin-optomechanical system, in which a Rabi model (a spin coupled to the mechanical oscillator) is coupled to an ancillary cavity field via a quadratic optomechanical coupling. We find that their performance can be significantly modulated via the photon number of the ancillary cavity, which comes from the photon-dependent spin-oscillator coupling and detuning. Specifically, a fully switchable spin-oscillator entanglement can be achieved, while a strong mechanical squeezing is also realized. Moreover, we study the environment-induced decoherence and dissipation, and find that they can be mitigated by increasing the number of photons. This work provides an effective way to manipulate entanglement and quantum squeezing and to suppress decoherence in cavity quantum electrodynamics with quadratic optomechanics. | quantum physics |
In this note we discuss the mathematical tools to define trend indicators which are used to describe market trends. We explain the relation between averages and moving averages on the one hand and the so-called exponential moving average (EMA) on the other hand. We present many examples and give the definition of the most frequently used trend indicator, the MACD, and discuss its properties. | statistics |
A graphical tool for investigating unimodality of hyperspherical data is proposed. It is based on the notion of statistical data depth function for directional data, which extends the univariate concept of rank. Firstly, a local version of distance-based depths for directional data, aimed at analyzing the local structure of hyperspherical data, is proposed. Then such a notion is compared to the global version of data depth by means of a two-dimensional scatterplot, i.e. the GLD-plot. The proposal is illustrated on simulated and real data examples. | statistics |
In a recent letter we suggested a natural generalization of the flat-space spinor-helicity formalism in four dimensions to anti-de Sitter space. In the present paper we give some technical details that were left implicit previously. For lower-spin fields we also derive potentials associated with the previously found plane wave solutions for field strengths. We then employ these potentials to evaluate some three-point amplitudes. This analysis illustrates a typical computation of an amplitude without internal lines in our formalism. | high energy physics theory |
We present a physically-motivated topology of a deep neural network, the extensive deep neural network (EDNN), that can efficiently infer extensive parameters (such as energy, entropy, or number of particles) of arbitrarily large systems, doing so with O(N) scaling. We use a form of domain decomposition for training and inference, where each sub-domain (tile) comprises a non-overlapping focus region surrounded by an overlapping context region. The size of these regions is motivated by the physical interaction length scales of the problem. We demonstrate the application of EDNNs to three physical systems: the Ising model and two hexagonal/graphene-like datasets. In the latter, an EDNN was able to make total energy predictions of a 60-atom system, with accuracy comparable to density functional theory (DFT), in 57 milliseconds. Additionally, EDNNs are well suited for massively parallel evaluation, as no communication is necessary during neural network evaluation. We demonstrate that EDNNs can be used to make an energy prediction of a two-dimensional 35.2-million-atom system, over 1 square micrometer of material, at an accuracy comparable to DFT, in under 25 minutes. Such a system exists on a length scale visible with optical microscopy and larger than some living organisms. | physics |
Motivated by a study of acute kidney injury, we consider the setting of biomarker studies involving patients at multiple centers where the goal is to develop a biomarker combination for diagnosis, prognosis, or screening. As biomarker studies become larger, this type of data structure will be encountered more frequently. In the presence of multiple centers, one way to assess the predictive capacity of a given combination is to consider the center-adjusted AUC (aAUC), a summary of the ability of the combination to discriminate between cases and controls in each center. Rather than using a general method, such as logistic regression, to construct the biomarker combination, we propose directly maximizing the aAUC. Furthermore, it may be desirable to have a biomarker combination with similar performance across centers. To that end, we allow for penalization of the variability in the center-specific AUCs. We demonstrate desirable asymptotic properties of the resulting combinations. Simulations provide small-sample evidence that maximizing the aAUC can lead to combinations with improved performance. We also use simulated data to illustrate the utility of constructing combinations by maximizing the aAUC while penalizing variability. Finally, we apply these methods to data from the study of acute kidney injury. | statistics |
We introduce the Scanning Single Shot Detector (ScanSSD) for locating math formulas offset from text and embedded in textlines. ScanSSD uses only visual features for detection: no formatting or typesetting information such as layout, font, or character labels are employed. Given a 600 dpi document page image, a Single Shot Detector (SSD) locates formulas at multiple scales using sliding windows, after which candidate detections are pooled to obtain page-level results. For our experiments we use the TFD-ICDAR2019v2 dataset, a modification of the GTDB scanned math article collection. ScanSSD detects characters in formulas with high accuracy, obtaining a 0.926 f-score, and detects formulas with high recall overall. Detection errors are largely minor, such as splitting formulas at large whitespace gaps (e.g., for variable constraints) and merging formulas on adjacent textlines. Formula detection f-scores of 0.796 (IOU $\geq0.5$) and 0.733 (IOU $\ge 0.75$) are obtained. Our data, evaluation tools, and code are publicly available. | computer science |
The sensitivity of NMR and MRI can be boosted via hyperpolarization of nuclear spins. However, current methods are costly, polarization is relatively low, or applicability is limited. Here, we report a new hyperpolarization method combining the low-cost, high polarization of hydrogenative parahydrogen-induced polarization (PHIP) with the flexibility of polarization transfer via proton exchange. The new method can be used to polarize various molecules, including alcohols, water, lactate, and pyruvate. On average, only $\approx$3 mM of a hyperpolarized transfer agent was sufficient to significantly enhance the signal of $\approx$100 mM of target molecules via proton exchange. Thus, hydrogenative parahydrogen-induced hyperpolarization with proton exchange (PHIP-X) provides a new avenue for NMR applications beyond the limits imposed by thermal polarization. | physics |
Natural convection in porous media is a fundamental process for the long-term storage of CO2 in deep saline aquifers. Typically, details of mass transfer in porous media are inferred from the numerical solution of the volume-averaged Darcy-Oberbeck-Boussinesq (DOB) equations, even though these equations do not account for the microscopic properties of a porous medium. According to the DOB equations, natural convection in a porous medium is uniquely determined by the Rayleigh number. However, in contrast with experiments, DOB simulations yield a linear scaling of the Sherwood number with the Rayleigh number (Ra) for high values of Ra (Ra>>1,300). Here, we perform Direct Numerical Simulations (DNS), fully resolving the flow field within the pores. We show that the boundary layer thickness is determined by the pore size instead of the Rayleigh number, as previously assumed. The mega- and proto-plume sizes increase with the pore size. Our DNS results exhibit a nonlinear scaling of the Sherwood number at high porosity, and for the same Rayleigh number, higher Sherwood numbers are predicted by DNS at lower porosities. It can be concluded that the scaling of the Sherwood number depends on the porosity and the pore-scale parameters, which is consistent with experimental studies. | physics |
The goal of this paper is twofold. First, we introduce DALI, a large and rich multimodal dataset containing 5358 audio tracks with their time-aligned vocal melody notes and lyrics at four levels of granularity. The second goal is to explain our methodology, where dataset creation and learning models interact using a teacher-student machine learning paradigm in which each benefits the other. We start with a set of manual annotations of draft time-aligned lyrics and notes made by non-expert users of Karaoke games. This set comes without audio. Therefore, we need to find the corresponding audio and adapt the annotations to it. To that end, we retrieve audio candidates from the Web. Each candidate is then turned into a singing-voice probability over time using a teacher, a deep convolutional neural network singing-voice detection system (SVD), trained on cleaned data. Comparing the time-aligned lyrics and the singing-voice probability, we detect matches and update the time-aligned lyrics accordingly. From this, we obtain new audio sets. They are then used to train new SVD students, which are used to perform the above comparison again. The process could be repeated iteratively. We show that this allows us to progressively improve the performance of our SVD and to get better audio matching and alignment. | electrical engineering and systems science |
Three-dimensional particle reconstruction with limited two-dimensional projections is an underdetermined inverse problem whose exact solution is often difficult to obtain. In general, approximate solutions can be obtained by optimization methods. In the current work, a practical particle reconstruction method based on a convolutional neural network (CNN) is proposed. The proposed technique can refine the particle reconstruction from a very coarse initial guess of the particle distribution given by any traditional algebraic reconstruction technique (ART) based method. Compared with available ART-based algorithms, the novel technique makes significant improvements in terms of reconstruction quality and is at least an order of magnitude faster at dense particle concentrations. | electrical engineering and systems science |
Adaptive mirrors based on voice-coil technology have force actuators with an internal metrology that closes a local loop for controlling the mirror's shape in position. When actuators are requested to be disabled or slaved, control matrices have to be re-computed. This report describes the algorithms to re-compute the relevant matrices for controlling the mirror without the need for recalibration. This relates in particular to the MMT, LBT, Magellan, VLT, ELT and GMT adaptive mirrors that use voice-coil technology. The technique is successfully used in practice with the LBT and VLT-UT4 adaptive secondary mirror units. | astrophysics |
We investigate genuine multipartite entanglement in general multipartite systems. Based on the norms of the correlation tensors of a multipartite state under various partitions, we present an analytical sufficient criterion for detecting the genuine four-partite entanglement. The results are generalized to arbitrary multipartite systems. | quantum physics |
In this work, we use a span-based approach for Vietnamese constituency parsing. Our method follows the self-attention encoder architecture and a chart decoder using a CKY-style inference algorithm. We present analyses of the experimental results comparing our method using the pre-trained models XLM-Roberta and PhoBERT on two Vietnamese datasets, VietTreebank and NIIVTB1. The results show that our model with XLM-Roberta achieved a significantly better F1-score than the other pre-trained models: 81.19% on VietTreebank and 85.70% on NIIVTB1. | computer science |
We investigate a class of partially device-independent quantum key distribution protocols based on a prepare-and-measure setup which simplifies their implementation. The security of the protocols is based on the assumption that Alice's prepared states have limited overlaps, but no explicit bound on the Hilbert space dimension is required. The protocols are therefore immune to attacks on Bob's device, such as blinding attacks. The users can establish a secret key while continuously monitoring the correct functioning of their devices through observed statistics. We report a proof-of-principle demonstration, involving mostly off-the-shelf equipment, as well as a high-efficiency superconducting nanowire detector. A positive key rate is demonstrated over a 4.8 km low-loss optical fiber with finite-key analysis. The prospects of implementing these protocols over longer distances are discussed. | quantum physics |
Line profiles can provide fundamental information on the physics of active galactic nuclei (AGN). In the case of narrow-line Seyfert 1 galaxies (NLS1s) this is of particular importance since past studies revealed how their permitted line profiles are well reproduced by a Lorentzian function instead of a Gaussian. This has been explained with different properties of the broad-line region (BLR), which may present more pronounced turbulent motions in NLS1s with respect to other AGN. We investigated the line profiles in a recent large NLS1 sample classified using SDSS, and we divided the sources into two subsamples according to their line shapes, Gaussian or Lorentzian. The line profiles clearly separate all the properties of NLS1s. Black hole mass, Eddington ratio, [O III], and Fe II strength are all very different in the Lorentzian and Gaussian samples. We interpret this in terms of evolution within the class of NLS1s. The Lorentzian sources may be the youngest objects, while Gaussian profiles may be typically associated with more evolved objects. Further detailed spectroscopic studies are needed to fully confirm our hypothesis. | astrophysics |
An electroweak baryogenesis (EWBG) mechanism mediated by $\tau$ lepton transport is proposed. We extend the Standard Model with a real singlet scalar $S$ to trigger the strong first-order electroweak phase transition (SFOEWPT), and with a set of leptophilic dimension-5 operators to provide sufficient CP violating source. We demonstrate this model is able to generate the observed baryon asymmetry of the universe. This scenario is experimentally testable via either the SFOEWPT gravitational wave signals at the next-generation space-based detectors, or the $pp\to h^*\to SS\to 4\tau$ process (where $h^*$ is an off-shell Higgs) at the hadron colliders. A detailed collider simulation shows that a considerable fraction of parameter space can be probed at the HL-LHC, while almost the whole parameter space allowed by EWBG can be reached by the 27 TeV HE-LHC. | high energy physics phenomenology |
For a holomorphic function $f$ in the open unit disc $\mathbb{D}$ and $\zeta\in\mathbb{D}$, $S_n(f,\zeta)$ denotes the $n$-th partial sum of the Taylor development of $f$ at $\zeta$. Given an increasing sequence of positive integers $\mu=(\mu_n)$, we consider the classes $\mathcal{U}(\mathbb{D},\zeta)$ (resp. $\mathcal{U}^{(\mu)}(\mathbb{D},\zeta)$) of such functions $f$ such that the partial sums $\{S_n(f,\zeta):n=1,2,\dots\}$ (resp. $\{S_{\mu_n}(f,\zeta):n=1,2,\dots\}$) approximate all polynomials uniformly on the compact sets $K\subset\{z\in\mathbb{C}:\vert z\vert\geq 1\}$ with connected complement. We show that these two classes of universal Taylor series coincide if and only if $\limsup_n\left(\frac{\mu_{n+1}}{\mu_n}\right)<+\infty$. In the same spirit, we prove that, for $\zeta\ne 0,$ we have the equality $\mathcal{U}^{(\mu)}(\mathbb{D},\zeta)=\mathcal{U}^{(\mu)}(\mathbb{D},0)$ if and only if $\limsup_n\left(\frac{\mu_{n+1}}{\mu_n}\right)<+\infty$. Finally we deal with the case of real universal Taylor series. | mathematics |
We extend the formalism of stochastic inflation to the setup of non-attractor inflation with a sound speed $c_s$. We obtain the Langevin equations for the superhorizon perturbations and calculate the stochastic corrections to the curvature perturbation power spectrum. It is shown that the fractional stochastic corrections to the mean number of e-folds and to the power spectrum are of the order of the power spectrum. We also calculate the boundary crossing and the first hitting probabilities in a hypothetical dS space with two boundaries in field space. Furthermore, the stochastic corrections to the power spectrum in a setup akin to eternal inflation with a large diffusion term are calculated. | high energy physics theory |
Large-scale {\it in vitro} drug sensitivity screens are an important tool in personalized oncology to predict the effectiveness of potential cancer drugs. The prediction of the sensitivity of cancer cell lines to a panel of drugs is a multivariate regression problem with high-dimensional heterogeneous multi-omics data as input data and with potentially strong correlations between the outcome variables which represent the sensitivity to the different drugs. We propose a joint penalized regression approach with structured penalty terms which allow us to utilize the correlation structure between drugs with group-lasso-type penalties and at the same time address the heterogeneity between omics data sources by introducing data-source-specific penalty factors to penalize different data sources differently. By combining integrative penalty factors (IPF) with tree-guided group lasso, we create the IPF-tree-lasso method. We present a unified framework to transform more general IPF-type methods to the original penalized method. Because the structured penalty terms have multiple parameters, we demonstrate how the interval-search Efficient Parameter Selection via Global Optimization (EPSGO) algorithm can be used to optimize multiple penalty parameters efficiently. Simulation studies show that IPF-tree-lasso can improve the prediction performance compared to other lasso-type methods, in particular for heterogeneous data sources. Finally, we employ the new methods to analyse data from the Genomics of Drug Sensitivity in Cancer project. | statistics |
Dr. John S. Newman, an expert and pioneer in electrochemical engineering, has studied the electrical characteristics of disk electrodes extensively since the 1960s. Newman and his colleagues published the results in a series of articles in the Journal of the Electrochemical Society. This seminal series is consistent and well-written, and has been cited by many in electrochemistry and closely related fields. However, the articles, especially the later ones in the series, enjoyed less familiarity in other fields, including biomedical engineering, in which electrodes became widely used in neural stimulation. The purpose of this review is therefore to summarize Newman's work on disk electrodes and provide a comprehensive understanding of the original articles. The review mainly focuses on the behaviors of interest to neural stimulation, namely the primary distribution, frequency dispersion, and the current step and voltage step responses. More mathematical details are added to the original calculation to help the readers follow the derivation more easily. Several adjustments are made to Newman's original analyses. First, the equation sets are summarized into matrix form, which demonstrates the underlying structure of the electrode-electrolyte system. This formulation is helpful in showing the similarities and differences between the different inputs discussed. Also, the normalization factors that give dimensionless variables have been slightly scaled by ${\pi}/4$ compared to the original articles, which endows them with the representation of physical quantities. A consistent symbol naming system is used to refer to the results from different articles. Finally, some preliminary analyses are presented on the numeric accuracy of the solutions. The review will provide a comprehensive understanding of the original articles, especially in the context of neuroengineering applications. | physics |
Real-world image recognition is often challenged by the variability of visual styles including object textures, lighting conditions, filter effects, etc. Although these variations have been deemed to be implicitly handled by more training data and deeper networks, recent advances in image style transfer suggest that it is also possible to explicitly manipulate the style information. Extending this idea to general visual recognition problems, we present Batch-Instance Normalization (BIN) to explicitly normalize unnecessary styles from images. Considering certain style features play an essential role in discriminative tasks, BIN learns to selectively normalize only disturbing styles while preserving useful styles. The proposed normalization module is easily incorporated into existing network architectures such as Residual Networks, and surprisingly improves the recognition performance in various scenarios. Furthermore, experiments verify that BIN effectively adapts to completely different tasks like object classification and style transfer, by controlling the trade-off between preserving and removing style variations. BIN can be implemented with only a few lines of code using popular deep learning frameworks. | computer science |
We study the reconstruction of overdensity maps of galaxies as a function of redshift in the range $0 < \mathrm z < 0.8$ using data from the 1-m Schmidt Telescope of the Byurakan Astrophysical Observatory (Armenia) in 16 medium-band ($\sim 250$ Å) and four broad-band (u,g,r,i) filters. The data used in this work homogeneously cover $2.39$ sq. deg with accurate photometric redshifts, down to $\mathrm R < 23$ mag (AB). We reconstructed the density contrast maps for the whole galaxy sample of the HS 47.5-22 ROSAT field in narrow slices over the full range of redshifts. We select groups and clusters of galaxies with an adaptive kernel based on density peaks which are larger than two times the mean density. The reconstructed overdensity field of galaxies consists of cluster-like structures outlining void-like regions over the full redshift range $0 \leq \mathrm z \leq 0.8$. We detect known galaxy clusters in this field with software specially developed for this project. This gives us the possibility to study how star formation properties and galaxy morphology depend on the environments of the galaxies in this field. | astrophysics |
Characterizing habitable exoplanets and/or their moons is of paramount importance. Here we show the results of our magnetic field topological modeling which demonstrate that terrestrial exoplanet-exomoon coupled magnetospheres work together to protect the early atmospheres of both the exoplanet and the exomoon. When exomoon magnetospheres are within the exoplanet's magnetospheric cavity, the exomoon magnetosphere acts like a protective magnetic bubble providing an additional magnetopause confronting the stellar winds when the moon is on the dayside. In addition, magnetic reconnection would create a critical pathway for the atmosphere exchange between the early exoplanet and exomoon. When the exomoon's magnetosphere is outside of the exoplanet's magnetosphere it then becomes the first line of defense against strong stellar winds, reducing the exoplanet's atmospheric loss to space. A brief discussion is given on how this type of exomoon would modify radio emissions from magnetized exoplanets. | astrophysics |
In this paper, we investigate muon pair production in the interaction between two quasireal photons in $e^+e^-$ collisions. The total and differential cross sections of the process $\gamma \gamma \to \mu^+\mu^-$ at photon beam energies from 3 GeV to 40 GeV in the center-of-mass frame, and for different values of the muon transverse momentum, the muon rapidity and the muon angle, are calculated. We also study the total cross section of the process $e^+ +e^- \to e^+ + e^- +\mu^+ + \mu^-$ via the two-photon mechanism, as a function of the $e^+ e^-$ center-of-mass energy $\sqrt {s}$ in the region 5 GeV $\leq \sqrt {s} \leq$ 209 GeV. Our results are in satisfactory agreement with the experimental data. | high energy physics phenomenology |
We present Habitat, a platform for research in embodied artificial intelligence (AI). Habitat enables training embodied agents (virtual robots) in highly efficient photorealistic 3D simulation. Specifically, Habitat consists of: (i) Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, sensors, and generic 3D dataset handling. Habitat-Sim is fast -- when rendering a scene from Matterport3D, it achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU. (ii) Habitat-API: a modular high-level library for end-to-end development of embodied AI algorithms -- defining tasks (e.g., navigation, instruction following, question answering), configuring, training, and benchmarking embodied agents. These large-scale engineering contributions enable us to answer scientific questions requiring experiments that were till now impracticable or 'merely' impractical. Specifically, in the context of point-goal navigation: (1) we revisit the comparison between learning and SLAM approaches from two recent works and find evidence for the opposite conclusion -- that learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations, and (2) we conduct the first cross-dataset generalization experiments {train, test} x {Matterport3D, Gibson} for multiple sensors {blind, RGB, RGBD, D} and find that only agents with depth (D) sensors generalize across datasets. We hope that our open-source platform and these findings will advance research in embodied AI. | computer science |
A key desideratum for inclusive and accessible speech recognition technology is ensuring its robust performance on children's speech. Notably, this includes the rapidly advancing neural network based end-to-end speech recognition systems. Children's speech recognition is more challenging due to the larger intra- and inter-speaker variability in terms of acoustic and linguistic characteristics compared to adult speech. Furthermore, the lack of adequate and appropriate children's speech resources adds to the challenge of designing robust end-to-end neural architectures. This study provides a critical assessment of automatic children's speech recognition through an empirical study of contemporary state-of-the-art end-to-end speech recognition systems. Insights are provided on the aspects of training data requirements, adaptation on children's data, and the effects of children's age, utterance lengths, different architectures and loss functions for end-to-end systems, and the role of language models on the speech recognition performance. | electrical engineering and systems science |
Millimeter wave communication is eminently suitable for high-rate wireless systems, which may be beneficially amalgamated with intelligent reflecting surfaces (IRS), relying on beam-index modulation. Explicitly, we propose three different architectures based on IRSs for beam-index modulation in millimeter wave communication, which circumvent the line-of-sight blockage of millimeter wave frequencies. We conceive both the optimal maximum likelihood detector and a low-complexity compressed sensing detector for the proposed schemes. Finally, the schemes conceived are evaluated through extensive simulations, which are compared to our analytically obtained bounds. | electrical engineering and systems science |
Classical and quantum Tsallis distributions have been widely used in many branches of the natural and social sciences. However, the quantum field theory of the Tsallis distributions is a relatively less explored arena. In this article we derive the expression for the thermal two-point functions for the Tsallis statistics with the help of the corresponding statistical mechanical formulations. We show that the quantum Tsallis distributions used in the literature appear in the thermal part of the propagator in much the same way the Boltzmann-Gibbs distributions appear in conventional thermal field theory. As an application of our findings, the thermal mass of real scalar bosons subject to a $\phi^4$ interaction has been calculated in the Tsallis statistics. | high energy physics phenomenology |
We propose a simple model in which the baryon asymmetry and dark matter are created via the decays and inverse decays of QCD-triplet scalars, at least one of which must be in the TeV mass range. Singlet fermions produced in these decays constitute the dark matter. The singlets never reach equilibrium, and their coherent production, propagation, and annihilation generates a baryon asymmetry. We find that the out-of-equilibrium condition and the dark matter density constraint typically require the lightest scalar to be long-lived, giving good prospects for detection or exclusion in current and upcoming colliders. In generalizing the leptogenesis mechanism of Akhmedov, Rubakov and Smirnov, our model expands the phenomenological possibilities for low-scale baryogenesis. | high energy physics phenomenology |
We present an approach to couple the resolution of Combinatorial Optimization problems with methods from Machine Learning, applied to the single source, capacitated, facility location problem. Our study is framed in the context where a reference facility location optimization problem is given. Assuming there exist data for many variations of the reference problem (historical or simulated) along with their optimal solution, we study how one can exploit these to make predictions about an unseen new instance. We demonstrate how a classifier can be built from these data to determine whether the solution to the reference problem still applies to a new instance. In case the reference solution is partially applicable, we build a regressor indicating the magnitude of the expected change, and conversely how much of it can be kept for the new instance. This insight, derived from a priori information, is expressed via an additional constraint in the original mathematical programming formulation. We present an empirical evaluation and discuss the benefits, drawbacks and perspectives of such an approach. Although presented through the application to the facility location problem, the approach developed here is general and explores a new perspective on the exploitation of past experience in combinatorial optimization. | mathematics |
We investigate out of time ordered correlators in the bulk of dS JT gravity, using Schwarzian perturbation theory, and propose that these out of time ordered correlators are encoded on the second sheet of the gravitational path integral, different sheets corresponding to different gravitational operator orderings. Implementing this in practice, we establish maximal chaos, in agreement with shockwave intuition. | high energy physics theory |
We construct a time-dependent expression of the computational complexity of a quantum system which consists of two conformal complex scalar field theories in d dimensions coupled to constant electric potentials and defined on the boundaries of a charged AdS black hole in (d+1) dimensions. Using a suitable choice of the reference state, Hamiltonian gates and the metric on the manifold of unitaries, we find that the complexity grows linearly for a relatively large interval of time. We also remark that for scalar fields with very small charges the rate of variation of the complexity cannot exceed a maximum value known as the Lloyd bound. | high energy physics theory |
There may exist seven $\bar D^{(*)} \Sigma_c^{(*)}$ hadronic molecular states. We construct their corresponding interpolating currents, and calculate their masses and decay constants using QCD sum rules. Based on these results, we calculate their relative production rates in $\Lambda_b^0$ decays through the current algebra, {\it i.e.}, $\mathcal{B}(\Lambda_b^0 \to P_c K^-):\mathcal{B}(\Lambda_b^0 \to P_c^\prime K^-)$ with $P_c$ and $P_c^\prime$ two different states. We also study their decay properties through the Fierz rearrangement, and further calculate these ratios in the $J/\psi p$ mass spectrum, {\it i.e.}, $\mathcal{B}(\Lambda_b^0 \to P_c K^- \to J/\psi p K^-):\mathcal{B}(\Lambda_b^0 \to P_c^\prime K^- \to J/\psi p K^-)$. Our results suggest that the $\bar D^{*} \Sigma_c^{*}$ molecular states of $J^P = 1/2^-$ and $3/2^-$ are possible to be observed in future experiments. | high energy physics phenomenology |
The structural stability of spherical horizons corresponding to spacetime solutions of Lagrangians including operators quadratic in curvature is studied, both in the usual second order approach and in the first order one. In second order, it is claimed that the generic solution in the asymptotic regime (large radius) can be matched not only with the usual solutions with horizons (like Schwarzschild-de Sitter) but also with a more generic (in the sense that it depends on more arbitrary parameters) horizonless family of solutions. It is however remarkable that these horizonless solutions are absent in the restricted (that is, when the background connection is the metric one) first order approach. | high energy physics theory |
In the game of cricket, the result of the coin toss is assumed to be one of the determinants of the match outcome. The decision to bat first after winning the toss is often taken to make the best use of superior pitch conditions and set a big target for the opponent. However, the opponent may fail to show their natural batting performance in the second innings due to a number of factors, including deteriorated pitch conditions and the excessive pressure of chasing a high target score. The advantage of batting first has been highlighted in the literature and in expert opinions; however, the effect of batting and bowling order on match outcome has not been investigated well enough to recommend a solution to any potential bias. This study proposes a probability theory-based model to study venue-specific scoring and chasing characteristics of teams under different match outcomes. A total of 1117 one-day international matches held in ten popular venues are analyzed to show a substantially higher scoring advantage and likelihood when the winning team bats in the first innings. Results suggest that the same 'bat-first' winning team is very unlikely to score or chase such a high score if they were to bat in the second innings. Therefore, the coin toss decision may favor one team over the other. A Bayesian model is proposed to revise the target score for each venue such that the winning and scoring likelihood is equal regardless of the toss decision. The data and source codes have been shared publicly for future research in creating competitive match outcomes by eliminating the advantage of batting order in run scoring. | statistics |
We fit an isothermal oscillatory density model to the outer disk of TW Hya in which planets have presumably already formed and they are orbiting within four observed dark gaps. At first sight, this small 52 AU disk does not appear to be similar to our solar nebula; it shows several physical properties comparable to those in HL Tau (size $R_{\rm max}=102$ AU) and very few similarities to AS 209 ($R_{\rm max}=144$ AU). We find a power-law density profile with index $k=-0.2$ (radial densities $\rho(R) \propto R^{-1.2}$) and centrifugal support against self-gravity so small that it virtually guarantees dynamical stability for millions of years of evolution to come. Compared to HL Tau, the scale length $R_0$ and the core size $R_1$ of TW Hya are smaller only by factors of $\sim$2, reflecting the disk's half size. On the opposite end, the Jeans frequency $\Omega_J$ and the angular velocity $\Omega_0$ of the smaller core of TW Hya are larger only by factors of $\sim$2. The only striking difference is that the central density ($\rho_0$) of TW Hya is 5.7 times larger than that of HL Tau, which is understood because the core of TW Hya is only half the size ($R_1$) of HL Tau and about twice as heavy ($\Omega_J$). In the end, we compare the protostellar disks that we have modeled so far. | astrophysics |
Recently, techniques have been developed to provably guarantee the robustness of a classifier to adversarial perturbations of bounded L_1 and L_2 magnitudes by using randomized smoothing: the robust classification is a consensus of base classifications on randomly noised samples where the noise is additive. In this paper, we extend this technique to the L_0 threat model. We propose an efficient and certifiably robust defense against sparse adversarial attacks by randomly ablating input features, rather than using additive noise. Experimentally, on MNIST, we can certify the classifications of over 50% of images to be robust to any distortion of at most 8 pixels. This is comparable to the observed empirical robustness of unprotected classifiers on MNIST to modern L_0 attacks, demonstrating the tightness of the proposed robustness certificate. We also evaluate our certificate on ImageNet and CIFAR-10. Our certificates represent an improvement on those provided in a concurrent work (Lee et al. 2019) which uses random noise rather than ablation (median certificates of 8 pixels versus 4 pixels on MNIST; 16 pixels versus 1 pixel on ImageNet.) Additionally, we empirically demonstrate that our classifier is highly robust to modern sparse adversarial attacks on MNIST. Our classifications are robust, in median, to adversarial perturbations of up to 31 pixels, compared to 22 pixels reported as the state-of-the-art defense, at the cost of a slight decrease (around 2.3%) in the classification accuracy. Code is available at https://github.com/alevine0/randomizedAblation/. | computer science |
Optimization-based samplers such as randomize-then-optimize (RTO) [2] provide an efficient and parallelizable approach to solving large-scale Bayesian inverse problems. These methods solve randomly perturbed optimization problems to draw samples from an approximate posterior distribution. "Correcting" these samples, either by Metropolization or importance sampling, enables characterization of the original posterior distribution. This paper focuses on the scalability of RTO to problems with high- or infinite-dimensional parameters. We introduce a new subspace acceleration strategy that makes the computational complexity of RTO scale linearly with the parameter dimension. This subspace perspective suggests a natural extension of RTO to a function space setting. We thus formalize a function space version of RTO and establish sufficient conditions for it to produce a valid Metropolis-Hastings proposal, yielding dimension-independent sampling performance. Numerical examples corroborate the dimension-independence of RTO and demonstrate sampling performance that is also robust to small observational noise. | statistics |
Absorption spectroscopy is traditionally used to determine the average gas temperature and species concentration along the laser line-of-sight by measuring the magnitude of two or more absorption transitions with different temperature dependence. Previous work has shown that the nonlinear temperature dependence of the absorption strength of each transition, set by the lower-state energy, E", can be used to infer temperature variations along the laser line-of-sight. In principle, measuring more absorption transitions with broader bandwidth light sources improves the ability to resolve temperature variations. Here, we introduce a singular value decomposition framework in order to explore the theoretical limits to resolving temperature distributions with single-beam line-of-sight absorption measurements. We show that in the absence of measurement noise or error, only the first ~14 well-selected absorption features improve the temperature resolution, and a Tikhonov regularization method improves the accuracy of the temperature inversion, particularly for recovery of the maximum gas temperature along the laser beam. We use inversion simulations to demonstrate that one can resolve a selection of temperature distributions along a laser beam line-of-sight to within 3% for the sample cases analyzed. In part II of this work, we explore the influence of measurement noise and error, and experimentally demonstrate the technique to show that there is benefit to measuring additional absorption transitions under real conditions. | physics |
We consider the problem of assessing goodness of fit of a single Bayesian model to the observed data in the inverse problem context. A novel procedure of goodness of fit test is proposed, based on construction of reference distributions using the `inverse' part of the given model. This is motivated by an example from palaeoclimatology in which it is of interest to reconstruct past climates using information obtained from fossils deposited in lake sediment. Technically, given a model $f(Y\mid X,\theta)$, where $Y$ is the observed data and $X$ is a set of (non-random) covariates, we obtain reference distributions based on the posterior $\pi(\tilde X\mid Y)$, where $\tilde X$ must be interpreted as the {\it unobserved} random vector corresponding to the {\it observed} covariates $X$. Put simply, if the posterior distribution $\pi(\tilde X\mid Y)$ gives high density to the observed covariates $X$, or equivalently, if the posterior distribution of $T(\tilde X)$ gives high density to $T(X)$, where $T$ is any appropriate statistic, then we say that the model fits the data. Otherwise the model in question is not adequate. We provide decision-theoretic justification of our proposed approach and discuss other theoretical and computational advantages. We demonstrate our methodology with many simulated examples and three complex, high-dimensional, realistic palaeoclimate problems, including the motivating palaeoclimate problem. | statistics |
In this note we present a complete computation of the topological K-theory of the reduced C*-algebra of a semidirect product of the form $\Gamma=\mathbb{Z}^n\rtimes_\rho\mathbb{Z}/2$ with no further assumptions about the conjugacy action $\rho$. For this, we use some results for $\mathbb{Z}/2$-equivariant K-theory proved by Rosenberg and previous results of Davis and Luck for the case when the conjugacy action $\rho$ is free outside the origin. | mathematics |
Autonomous vehicles rely heavily on sensors such as cameras and LiDAR, which provide real-time information about their surroundings for the tasks of perception, planning and control. Typically a LiDAR can only provide a sparse point cloud owing to a limited number of scanning lines. By employing depth completion, a dense depth map can be generated by assigning each camera pixel a corresponding depth value. However, the existing depth completion convolutional neural networks are so complex that they require high-end GPUs for processing, and thus they are not applicable to real-time autonomous driving. In this paper, a light-weight network is proposed for the task of LiDAR point cloud depth completion. With an astonishing 96.2% reduction in the number of parameters, it still achieves comparable performance (9.3% better in MAE but 3.9% worse in RMSE) to the state-of-the-art network. For real-time embedded platforms, the depthwise separable technique is applied to both convolution and deconvolution operations and the number of parameters decreases further by a factor of 7.3, with only a small percentage increase in RMSE and MAE performance. Moreover, a system-on-chip architecture for depth completion is developed on a PYNQ-based FPGA platform that achieves real-time processing for the HDL-64E LiDAR at a speed of 11.1 frames per second. | electrical engineering and systems science |
Low Dose Computed Tomography suffers from a high amount of noise and/or undersampling artefacts in the reconstructed image. In the current article, a Deep Learning technique is exploited as a regularization term for the iterative reconstruction method SIRT. While SIRT minimizes the error in the sinogram space, the proposed regularization model additionally steers intermediate SIRT reconstructions towards the desired output. Extensive evaluations demonstrate the superior outcomes of the proposed method compared to state-of-the-art techniques. Comparing the forward projection of the reconstructed image with the original signal shows a higher fidelity to the sinogram space for the current approach amongst other learning-based methods. | electrical engineering and systems science |
We show that the effects of a non-null Majorana phase, due to the existence of a decoherence environment, can cause a sizable distortion of the \textit{CP} violation phase $\delta_{\mathrm{CP}}$ to be measured at DUNE. These distortions are quantified comparing the latter with an eventual T2HK measurement, which would represent an upgrade in precision of the current best fit value obtained at T2K. For a decoherence magnitude of $\Gamma=1.5(2.5)\times 10^{-24} \mathrm{GeV}$, the discrepancies for $\delta_{\mathrm{CP}}$ are below the $2\sigma$ for null and positive values of the Majorana phase while for negative values, such as $-0.5\pi$, it can reach up to $3.8\sigma(5.8\sigma)$. Therefore, a novel finding of this letter is the possibility to reveal a non-null and negative value for the Majorana phase through oscillation physics. | high energy physics phenomenology |
The availability of mobile technologies has enabled the efficient collection of prospective, longitudinal, ecologically valid self-reported mood data from psychiatric patients. These data streams have potential for improving the efficiency and accuracy of psychiatric diagnosis as well as predicting future mood states, enabling earlier intervention. However, missing responses are common in such datasets and there is little consensus as to how this should be dealt with in practice. A signature-based method was used to capture different elements of self-reported mood alongside missing data to both classify diagnostic group and predict future mood in patients with bipolar disorder, borderline personality disorder and healthy controls. The missing-response-incorporated signature-based method achieves roughly 66\% correct diagnosis, with f1 scores for the three clinical groups of 59\% (bipolar disorder), 75\% (healthy control) and 61\% (borderline personality disorder), respectively. This was significantly more efficient than the naive model which excluded missing data. Accuracies of predicting subsequent mood states and scores were also improved by the inclusion of missing responses. The signature method provided an effective approach to the analysis of prospectively collected mood data where missing data was common and should be considered as an approach in other similar datasets. | statistics |
The designers' tendency to adhere to a specific mental set and heavy emotional investment in their initial ideas often hinder their ability to innovate during the design thinking and ideation process. In the fashion industry, in particular, the growing diversity of customers' needs, the intense global competition, and the shrinking time-to-market (a.k.a., "fast fashion") further exacerbate this challenge for designers. Recent advances in deep generative models have created new possibilities to overcome the cognitive obstacles of designers through automated generation and/or editing of design concepts. This paper explores the capabilities of generative adversarial networks (GAN) for automated attribute-level editing of design concepts. Specifically, attribute GAN (AttGAN)---a generative model proven successful for attribute editing of human faces---is utilized for automated editing of the visual attributes of garments and tested on a large fashion dataset. The experiments support the hypothesized potentials of GAN for attribute-level editing of design concepts, and underscore several key limitations and research questions to be addressed in future work. | computer science |
Deep neural networks (DNNs) achieve impressive results for complicated tasks like object detection on images and speech recognition. Motivated by this practical success, there is now a strong interest in showing good theoretical properties of DNNs. To describe for which tasks DNNs perform well and when they fail, it is a key challenge to understand their performance. The aim of this paper is to contribute to the current statistical theory of DNNs. We apply DNNs on high dimensional data and we show that the least squares regression estimates using DNNs are able to achieve dimensionality reduction in case that the regression function has locally low dimensionality. Consequently, the rate of convergence of the estimate does not depend on its input dimension $d$, but on its local dimension $d^*$ and the DNNs are able to circumvent the curse of dimensionality in case that $d^*$ is much smaller than $d$. In our simulation study we provide numerical experiments to support our theoretical result and we compare our estimate with other conventional nonparametric regression estimates. The performance of our estimates is also validated in experiments with real data. | statistics |
BMS symmetry, which is the asymptotic symmetry at null infinity of flat spacetime, is an important input for flat holography. In this paper, we give a holographic calculation of entanglement entropy and R\'{e}nyi entropy in three dimensional Einstein gravity and Topologically Massive Gravity. The geometric picture for the entanglement entropy is the length of a spacelike geodesic which is connected to the interval at null infinity by two null geodesics. The spacelike geodesic is the fixed points of replica symmetry, and the null geodesics are along the modular flow. Our strategy is to first reformulate the Rindler method for calculating entanglement entropy in a general setup, and apply it for BMS invariant field theories, and finally extend the calculation to the bulk. | high energy physics theory |
A new paradigm for large-scale spectrum occupancy learning based on long short-term memory (LSTM) recurrent neural networks is proposed. Studies have shown that spectrum usage is a highly correlated time series. Moreover, there is a correlation in spectrum occupancy between different frequency channels. Therefore, revealing all these correlations using learning and prediction of one-dimensional time series is not a trivial task. In this paper, we introduce a new framework for representing the spectrum measurements in a tensor format. Next, a time-series prediction method based on CANDECOMP/PARAFAC (CP) tensor decomposition and LSTM recurrent neural networks is proposed. The proposed method is computationally efficient and is able to capture different types of correlation within the measured spectrum. Moreover, it is robust against noise and missing entries of sensed spectrum. The superiority of the proposed method is evaluated over a large-scale synthetic dataset in terms of prediction accuracy and computational efficiency. | electrical engineering and systems science |
We present extensive radial-velocity observations of the intermediate polar DW Cnc during its 2018-2019 low state. We show that the 86-min signal, associated with the orbital period, is strong in our radial velocity analysis and power spectrum search, as well as in our Doppler tomography. However, we find that the velocity modulation associated with the 70-min beat period and the 38-min spin cycle is dramatically weaker than previously observed. We put forward two interpretations for this change. The first is that a sudden drop into a low state detected in 2018-2019 caused an episode of low mass transfer from the companion, thus inhibiting the light-house effect produced by the rebound emission. The second is that this is a consequence of a rare outburst detected in 2007 by Crawford (2008). We find this post-outburst hypothesis to be less likely. If the first scenario is correct, we predict that DW Cnc will recover its intermediate polar characteristics. A new ephemeris is presented by combining the radial velocities of Patterson et al. (2004) with ours. | astrophysics |
We have carried out a search for massive white dwarfs (WDs) in the direction of young open star clusters using the Gaia DR2 database. The aim of this survey was to provide robust data for new and previously known high-mass WDs regarding cluster membership, to highlight WDs previously included in the Initial-Final Mass Relation (IFMR) that are unlikely members of their respective clusters according to Gaia astrometry, and to select an unequivocal WD sample that could then be compared with the host clusters' turnoff masses. All promising WD candidates in each cluster CMD were followed up with spectroscopy from Gemini in order to determine whether they were indeed WDs and to derive their masses, temperatures and ages. In order to be considered cluster members, white dwarfs were required to have proper motions and parallaxes within 2, 3, or 4-$\sigma$ of those of their potential parent cluster (depending on how contaminated the field was in their region of the sky), to have a cooling age less than the cluster age, and to have a mass broadly consistent with the IFMR. A number of WDs included in current versions of the IFMR turned out to be non-members, and a number of apparent members, based on Gaia's astrometric data alone, were rejected as their mass and/or cooling times were incompatible with cluster membership. In this way, we developed a highly selected IFMR sample for high-mass WDs that, surprisingly, contained no precursor masses significantly in excess of ${\sim}$6 $M_{\odot}$. | astrophysics
The soliton resolution for the Harry Dym equation is established for initial conditions in the weighted Sobolev space $H^{1,1}(\mathbb{R})$. Combining the nonlinear steepest descent method with $\bar{\partial}$-derivative techniques, we show that for $\frac{y}{t}<-\epsilon$ ($\epsilon>0$), the solution $q(x,t)$ admits a long-time asymptotic expansion in any fixed cone \begin{equation} C\left(y_{1}, y_{2}, v_{1}, v_{2}\right)=\left\{(y, t) \in R^{2} \mid y=y_{0}+v t, y_{0} \in\left[y_{1}, y_{2}\right], v \in\left[v_{1}, v_{2}\right]\right\} \end{equation} up to a residual error of order $\mathcal{O}(t^{-1})$. The expansion shows that the long-time asymptotic behavior can be described as an $N(I)$-soliton on the discrete spectrum, whose parameters are modulated by a sum of localized soliton-soliton interactions as one moves through the cone, plus a second term coming from soliton-radiation interactions on the continuous spectrum. | mathematics
Tunneling across superconducting junctions proceeds by a rich variety of processes, which transfer single electrons, Cooper pairs, or even larger numbers of electrons by multiple Andreev reflections. Photon-assisted tunneling combined with the venerable Tien-Gordon model has long been a powerful tool to identify tunneling processes between superconductors. Here, we probe superconducting tunnel junctions including an impurity-induced Yu-Shiba-Rusinov (YSR) state by exposing a scanning tunneling microscope with a superconducting tip to microwave radiation. We find that a simple Tien-Gordon description describes tunneling of single electrons and Cooper pairs into the bare substrate, but breaks down for tunneling via YSR states by resonant Andreev reflections. We develop an improved theoretical description which is in excellent agreement with the data. Our results establish photon-assisted tunneling as a powerful tool to analyze tunneling processes at the atomic scale which should be particularly informative for unconventional and topological superconductors. | condensed matter |
The "Cabibbo Angle Anomaly" (CAA) originates from the disagreement between the CKM elements $V_{ud}$ and $V_{us}$ extracted from superallowed beta and kaon decays, respectively, once compared via CKM unitarity. It points towards new physics with a significance of up to $4\,\sigma$, depending on the theoretical input used, and can be explained through modified $W$ couplings to leptons. In this context, vector-like leptons (VLLs) are prime candidates for a corresponding UV completion since they can affect $W\ell\nu$ couplings at tree-level, such that this modification can have the dominant phenomenological impact. In order to consistently asses the agreement with the data, a global fit is necessary which we perform for gauge-invariant dimension-6 operators and all patterns obtained for the six possible representations (under the SM gauge group) of VLLs. We find that even in the lepton flavour universal case, including the measurements of the CKM elements $V_{us}$ and $V_{ud}$ into the electroweak fit has a relevant impact, shifting the best fit point significantly. Concerning the VLLs we discuss the bounds from charged lepton flavour violating processes and observe that a single representation cannot describe experimental data significantly better than the SM hypothesis. However, allowing for several representations of VLLs at the same time, we find that the simple scenario in which $N$ couples to electrons via the Higgs and $\Sigma_1$ couples to muons not only explains the CAA but also improves the rest of the electroweak fit in such a way that its best fit point is preferred by more than $4\,\sigma$ with respect to the SM. | high energy physics phenomenology |
Despite society's strong dependence on electricity, power outages remain prevalent. Standard methods for directly measuring power availability are complex, often inaccurate, and prone to attack. This paper explores an alternative approach to identifying power outages through intelligent monitoring of IP address availability. In finding these outages, we explore the trade-off between detection accuracy and false alarms. We begin by experimentally demonstrating that static, residential Internet connections serve as good indicators of power, as they are mostly active unless power fails and rarely have battery backups. We construct metrics that dynamically score the reliability of each residential IP, where a higher score indicates a higher correlation between that IP's availability and its regional power. We monitor specifically selected subsets of residential IPs and evaluate the accuracy with which they can indicate current county power status. Using data gathered during the power outages caused by Hurricane Florence, we demonstrate that we can track power outages at different granularities, state and county, in both sparse and dense regions. By comparing our detection with the reports gathered from power utility companies, we achieve an average detection accuracy of $90\%$, and we show that some of our false alarms and missed outage events may be due to imperfect ground truth data. Therefore, our method can be used as a complementary technique for power outage detection. | electrical engineering and systems science
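As a toy illustration of the scoring idea described above (not the authors' exact metric — the update rule, decay factor, and thresholds below are our own assumptions), one can keep a per-IP reliability score that tracks historical agreement with known power status and aggregate only the trusted IPs per county:

```python
from collections import defaultdict

class OutageDetector:
    """Per-IP reliability scoring and county-level aggregation (toy sketch)."""
    def __init__(self, decay=0.95):
        self.score = defaultdict(lambda: 0.5)   # per-IP reliability in [0, 1]
        self.decay = decay

    def update_scores(self, responses, power_was_on):
        # responses: {ip: True if the IP answered availability probes}
        for ip, up in responses.items():
            agree = 1.0 if up == power_was_on else 0.0
            self.score[ip] = self.decay * self.score[ip] + (1 - self.decay) * agree

    def county_power_on(self, responses, county_ips, min_score=0.8):
        trusted = [ip for ip in county_ips if self.score[ip] >= min_score]
        if not trusted:
            return None                          # too few reliable vantage points
        up = sum(responses.get(ip, False) for ip in trusted)
        return up / len(trusted) >= 0.5          # majority of trusted IPs responding
```

The exponential decay discounts stale evidence, so an IP that recently stopped tracking its region's power (e.g., after acquiring a battery backup) quickly loses influence on the county-level decision.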
Transported mediation effects may contribute to understanding how and why interventions may work differently when applied to new populations. However, we are not aware of any estimators for such effects. Thus, we propose several estimators of transported stochastic direct and indirect effects: an inverse-probability-of-treatment stabilized weighted estimator, a doubly robust estimator that solves the estimating equation, and a doubly robust substitution estimator in the targeted minimum loss-based framework. We demonstrate their finite sample properties in a simulation study. | statistics
Many fundamental results of pluripotential theory on the quaternionic space $\mathbb{H}^n$ are extended to the Heisenberg group. We introduce notions of a plurisubharmonic function, the quaternionic Monge-Amp\`{e}re operator, differential operators $d_0$ and $d_1$ and a closed positive current on the Heisenberg group. The quaternionic Monge-Amp\`{e}re operator is the coefficient of $ (d_0d_1u)^n$. We establish the Chern-Levine-Nirenberg type estimate, the existence of quaternionic Monge-Amp\`{e}re measure for a continuous quaternionic plurisubharmonic function and the minimum principle for the quaternionic Monge-Amp\`{e}re operator. Unlike the tangential Cauchy-Riemann operator $ \overline{\partial}_b $ on the Heisenberg group which behaves badly as $ \partial_b\overline{\partial}_b\neq -\overline{\partial}_b\partial_b $, the quaternionic counterpart $d_0$ and $d_1$ satisfy $ d_0d_1=-d_1d_0 $. This is the main reason that we have a better theory for the quaternionic Monge-Amp\`{e}re operator than $ (\partial_b\overline{\partial}_b)^n$. | mathematics |
Deep Convolutional Neural Networks (CNNs) have drawn great attention in image super-resolution (SR). Recently, the visual attention mechanism, which exploits both feature importance and contextual cues, has been introduced to image SR and has proven effective in improving CNN-based SR performance. In this paper, we make a thorough investigation of the attention mechanisms in an SR model and shed light on how simple and effective improvements on these ideas advance the state of the art. We further propose a unified approach called "multi-grained attention networks (MGAN)" which fully exploits the advantages of multi-scale and attention mechanisms in SR tasks. In our method, the importance of each neuron is computed according to its surrounding regions in a multi-grained fashion and then used to adaptively re-scale the feature responses. More importantly, the "channel attention" and "spatial attention" strategies in previous methods can be essentially considered as two special cases of our method. We also introduce multi-scale dense connections to extract the image features at multiple scales and capture the features of different layers through dense skip connections. Ablation studies on benchmark datasets demonstrate the effectiveness of our method. In comparison with other state-of-the-art SR methods, our method shows superiority in terms of both accuracy and model size. | electrical engineering and systems science
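Since the paper frames "channel attention" and "spatial attention" as special cases of its multi-grained scheme, a minimal PyTorch sketch of those two special cases helps fix ideas. This is a generic illustration, not the authors' MGAN block; the layer sizes and reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic channel + spatial attention (the two special cases referenced
    above), not the authors' exact block; layer sizes are assumptions."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel = nn.Sequential(          # squeeze-and-excitation style gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(          # per-pixel gate from local context
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel(x)                # re-scale responses per channel
        x = x * self.spatial(x)                # re-scale per spatial location
        return x

feats = torch.randn(2, 64, 32, 32)
print(ChannelSpatialAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```

The multi-grained idea generalizes this by computing the gating signal from surrounding regions at several granularities rather than from a single global pool or a single local window.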
We study the late-time plateau behavior of the spectral form factor in the Gaussian Unitary Ensemble (GUE) random matrix model. The time derivative of the spectral form factor in the plateau regime is not strictly zero; it receives a non-perturbative correction in the $1/N$ expansion. We argue that such a non-perturbative correction comes from the eigenvalue instanton of the random matrix model and we explicitly compute the instanton correction as a function of time. | high energy physics theory
We study the asymptotic behavior of solutions to an ODE - Schr\"{o}dinger type system that models the interaction of a particle with a Bose gas. We show that the particle has a ballistic trajectory asymptotically, and that the wave function describing the Bose gas converges to a soliton in $L^{\infty}.$ | mathematics |
We further the study of the acceleration phenomenon on Riemannian manifolds by introducing the first global first-order method that achieves the same rates as accelerated gradient descent in the Euclidean space for the optimization of smooth and geodesically convex (g-convex) or strongly g-convex functions defined on the hyperbolic space or a subset of the sphere, up to constants and log factors. To the best of our knowledge, this is the first method that is proved to achieve these rates globally on functions defined on a Riemannian manifold $\mathcal{M}$ other than the Euclidean space. As a proxy, we solve a constrained non-convex Euclidean problem, under a condition between convexity and quasar-convexity, of independent interest. Additionally, for any Riemannian manifold of bounded sectional curvature, we provide reductions from optimization methods for smooth and g-convex functions to methods for smooth and strongly g-convex functions and vice versa. | mathematics
Mass-spring networks (MSNs) have long been used as approximate descriptions of many biological and engineered systems, from actomyosin networks to mechanical trusses. In the last decade, MSNs have re-attracted theoretical interest as models for phononic metamaterials with exotic properties such as negative Poisson's ratio, negative effective mass, or gapped vibrational spectra. A practical advantage of MSNs is their tuneability, which allows the inverse design of materials with pre-specified bandgaps. Building on this fact, we demonstrate here that designed MSNs, when subjected to Coriolis forces, can host topologically protected chiral edge modes at predetermined frequencies, thus enabling robust unidirectional transmission of mechanical waves. Similar to other recently discovered topological materials, the topological phases of MSNs can be classified by a Chern invariant related to time-reversal symmetry breaking. | condensed matter |
While data science has emerged as a contentious new scientific field, enormous debate and discussion have arisen over why we need data science and what makes it a science. In reviewing hundreds of pieces of literature which include data science in their titles, we find that the majority of the discussions essentially concern statistics, data mining, machine learning, big data, or broadly data analytics, and that only a limited number of new data-driven challenges and directions have been explored. In this paper, we explore the intrinsic challenges and directions inspired by comprehensively examining the complexities and intelligence embedded in data science problems. We focus on the research and innovation challenges inspired by the nature of data science problems as complex systems, and on the methodologies for handling such systems. | computer science
The H2 molecule has two nuclear spin isomers, the so-called ortho and para isomers. Nuclear spin conversion (NSC) between these states is forbidden in the gas phase. The energy difference between the lowest ortho and para states is as large as 14.7 meV, corresponding to ~170 K. Therefore, each state of H2 differently affects not only the chemistry but also the macroscopic gas dynamics in space, and thus the ortho-to-para abundance ratio (OPR) of H2 has significant impacts on various astronomical phenomena. For a long time, the OPR of nascent H2 upon formation on dust grains has been assumed to have the statistical value of three and to gradually equilibrate in the gas phase at the temperature of the surroundings. Recently, NSC of H2 was experimentally revealed to occur on water ice at very low temperatures and was thus incorporated into gas-dust chemical models. However, H2 molecules should form well before dust grains are coated by water ice. Information about how the OPR of H2 behaves on bare silicate dust before ice-mantle formation is lacking. It is therefore desirable to know whether the OPR of H2 changes even on a bare silicate surface within an astronomically meaningful time scale, and what influence such a change would have. We report the first laboratory measurements of NSC of H2 physisorbed on amorphous silicate (Mg2SiO4) at temperatures up to 18 K. The conversion was found to occur very rapidly. | astrophysics
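For context, the thermal-equilibrium OPR implied by the ~170 K ortho-para energy gap can be computed directly from the rotational level populations. A short sketch follows; the H2 rotational constant B ≈ 85.3 K is an assumed approximate value.

```python
import numpy as np

B_ROT = 85.3  # H2 rotational constant in kelvin (approximate assumed value)

def opr_equilibrium(T, jmax=20):
    """Thermal-equilibrium ortho-to-para ratio of H2 at temperature T [K]."""
    J = np.arange(jmax + 1)
    g_nuc = np.where(J % 2 == 1, 3, 1)        # ortho (odd J): 3; para (even J): 1
    w = g_nuc * (2 * J + 1) * np.exp(-B_ROT * J * (J + 1) / T)
    return w[J % 2 == 1].sum() / w[J % 2 == 0].sum()

for T in (10, 18, 77, 300):
    print(f"T = {T:3d} K  ->  equilibrium OPR = {opr_equilibrium(T):.3g}")
# High T recovers the statistical value of 3; at 10-18 K the equilibrium OPR is
# tiny, which is why conversion on cold grain surfaces matters astrochemically.
```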
In the manufacturing industry, it is often necessary to repeat expensive operational testing of a machine in order to identify the range of input conditions under which the machine operates properly. Since it is often difficult to accurately control the input conditions during the actual usage of the machine, there is a need to guarantee the performance of the machine after properly incorporating the possible variation in input conditions. In this paper, we formulate this practical manufacturing scenario as an Input Uncertain Reliable Level Set Estimation (IU-rLSE) problem and provide an efficient algorithm for solving it. The goal of IU-rLSE is to identify the input range in which outputs smaller/greater than a desired threshold can be obtained with high probability when the input uncertainty is properly taken into consideration. We propose an active learning method to solve the IU-rLSE problem efficiently, theoretically analyze its accuracy and convergence, and illustrate its empirical performance through numerical experiments on artificial and real data. | statistics
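A minimal sketch of the flavor of active learning described above (the surrogate model, acquisition rule, and all names are our own toy assumptions, not the paper's algorithm): model the machine response with a Gaussian process, estimate the reliability Pr[f(x + eps) >= h] under input noise by Monte Carlo over the GP mean, and query where that estimate is most ambiguous.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * x        # black-box machine response (toy)
h, sigma_in, alpha = 0.0, 0.1, 0.95          # threshold, input noise, reliability

X = rng.uniform(-2, 2, (8, 1)); y = f(X).ravel()
grid = np.linspace(-2, 2, 200)[:, None]

for _ in range(20):                          # active-learning loop
    gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-6).fit(X, y)
    eps = rng.normal(0, sigma_in, 64)        # Monte Carlo over input uncertainty
    mu = np.stack([gp.predict(grid + e) for e in eps])   # GP mean at x + eps
    rel = (mu >= h).mean(axis=0)             # estimated reliability per location
    x_next = grid[np.argmin(np.abs(rel - alpha))]        # most ambiguous input
    X = np.vstack([X, [x_next]]); y = np.append(y, f(x_next))

print("fraction of the grid estimated reliable:", (rel >= alpha).mean())
```

This sketch uses only the GP posterior mean; a faithful treatment would also propagate the posterior variance into the reliability estimate and into the theoretical guarantees the paper provides.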
Quantum computing technologies promise to revolutionize calculations in many areas of physics, chemistry, and data science. Their power is expected to be especially pronounced for problems where direct analogs of a quantum system under study can be encoded coherently within a quantum computer. A first step toward harnessing this power is to express the building blocks of known physical systems within the language of quantum gates and circuits. In this paper, we present a quantum calculation of an archetypal quantum system: neutrino oscillations. We define gate arrangements that implement the neutral lepton mixing operation and neutrino time evolution in two-, three-, and four-flavor systems. We then calculate oscillation probabilities by coherently preparing quantum states within the processor, time evolving them unitarily, and performing measurements in the flavor basis, with close analogy to the physical processes realized in neutrino oscillation experiments, finding excellent agreement with classical calculations. We provide recipes for modeling oscillation in the standard three-flavor paradigm as well as beyond-standard-model scenarios, including systems with sterile neutrinos, non-standard interactions, Lorentz symmetry violation, and anomalous decoherence. | quantum physics |
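The logic of such a circuit is easy to mirror classically in the two-flavor case: prepare a flavor state, rotate to the mass basis with the mixing matrix, apply the relative evolution phase, rotate back, and measure. The parameter values below are illustrative, not the paper's.

```python
import numpy as np

theta = 0.59                  # mixing angle [rad]; illustrative value
dm2 = 2.5e-3                  # mass-squared splitting [eV^2]; illustrative value
U = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])      # 2-flavor "mixing gate"

for L_over_E in (0, 250, 500, 1000, 2000):           # km/GeV
    phi = 1.267 * dm2 * L_over_E                     # standard oscillation phase
    evolve = np.diag([1.0, np.exp(2j * phi)])        # relative phase of mass states
    psi = U @ evolve @ U.conj().T @ np.array([1.0, 0.0])   # start in |nu_mu>
    print(f"L/E = {L_over_E:5d} km/GeV   P(mu->mu) = {abs(psi[0])**2:.3f}")
# Analytic check: P(mu->mu) = 1 - sin^2(2*theta) * sin^2(1.267 * dm2 * L/E)
```

On a quantum processor the same three steps become unitary gates acting on qubits encoding the flavor states, with the probabilities read out from repeated measurements in the flavor basis.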
We investigate the large time behavior of compactly supported solutions for a one-dimensional thin-film equation with linear mobility in the regime of partial wetting. We show the stability of steady state solutions. The proof uses the Lagrangian coordinates. Our method is to establish and exploit differential relations between the energy and the dissipation as well as some interpolation inequalities. Our result is different from earlier results because here we consider solutions with finite mass. | mathematics |
In scientific inference problems, the underlying statistical modeling assumptions have a crucial impact on the end results. There exist, however, only a few automatic means for validating these fundamental modelling assumptions. The contribution in this paper is a general criterion to evaluate the consistency of a set of statistical models with respect to observed data. This is achieved by automatically gauging the models' ability to generate data that is similar to the observed data. Importantly, the criterion follows from the model class itself and is therefore directly applicable to a broad range of inference problems with varying data types, ranging from independent univariate data to high-dimensional time-series. The proposed data consistency criterion is illustrated, evaluated and compared to several well-established methods using three synthetic and two real data sets. | statistics |
Deep learning has already demonstrated its power in medical imaging, including denoising, classification, segmentation, etc. All these applications are proposed to automatically analyze medical images beforehand, bringing radiologists more information during clinical assessment and improving accuracy. Recently, many medical denoising methods have shown significant artifact reduction and noise removal, both quantitatively and qualitatively. However, those existing methods are developed around human vision, i.e., they are designed to minimize the noise effects that can be perceived by human eyes. In this paper, we introduce an application-guided denoising framework, which focuses on denoising for the downstream neural networks. In our experiments, we apply the proposed framework to different datasets, models, and use cases. Experimental results show that our proposed framework can achieve better results than human-vision-oriented denoising networks. | electrical engineering and systems science
While deep learning has resulted in major breakthroughs in many application domains, the frameworks commonly used in deep learning remain fragile to artificially-crafted and imperceptible changes in the data. In response to this fragility, adversarial training has emerged as a principled approach for enhancing the robustness of deep learning with respect to norm-bounded perturbations. However, there are other sources of fragility for deep learning that are arguably more common and less thoroughly studied. Indeed, natural variation such as lighting or weather conditions can significantly degrade the accuracy of trained neural networks, showing that such natural variation presents a significant challenge for deep learning. In this paper, we propose a paradigm shift from perturbation-based adversarial robustness toward model-based robust deep learning. Our objective is to provide general training algorithms that can be used to train deep neural networks to be robust against natural variation in data. Critical to our paradigm is first obtaining a model of natural variation which can be used to vary data over a range of natural conditions. Such models may be either known a priori or else learned from data. In the latter case, we show that deep generative models can be used to learn models of natural variation that are consistent with realistic conditions. We then exploit such models in three novel model-based robust training algorithms in order to enhance the robustness of deep learning with respect to the given model. Our extensive experiments show that across a variety of naturally-occurring conditions and across various datasets, deep neural networks trained with our model-based algorithms significantly outperform both standard deep learning algorithms as well as norm-bounded robust deep learning algorithms. | computer science
In this paper, we consider the transmit and receive antenna array gain of massive MIMO systems. In particular, we look at their dependence on the number of antennas in the array and on the antenna spacing, for uniform linear and uniform circular arrays. It is known that the transmit array gain saturates at a certain antenna spacing, but the receive array gain has not been considered. With our physically consistent analysis based on the Multiport Communication Theory, we show that the receive array gain does not saturate, but that there is a peak at a certain antenna spacing when there is no decoupling network at the receiver. As implementing a decoupling network for massive MIMO would be almost impossible, this is a reasonable assumption. Furthermore, we analyze how the array gain changes depending on the antenna spacing and the size of the antenna array, and derive design recommendations. | computer science
We compute entanglement entropy and differential entropy in inhomogeneous holographic quenches in AdS$_3$/CFT$_2$. The quenches are arbitrarily inhomogeneous and modeled by an infalling shell of massless non-rotating matter where the final state is not dual to a static black hole but rather to a black hole with time-dependent stress-energy tensor modes. We study the entanglement entropy of an interval and differential entropy of a family of intervals analytically when the inhomogeneities have a perturbative amplitude and numerically for non-perturbative inhomogeneities. While we are in principle able to study these quantities for any inhomogeneities, we discuss two concrete examples: an oscillatory quench and a bilocal quench. Both cases display saturation towards a steady state but do not fully thermalize. Depending on the location and size of the interval, the entanglement entropy displays a variety of interesting phenomena such as plateau phases, bumps, and discontinuities in its first derivative with respect to time. | high energy physics theory |
Taking a comprehensive view, including a full range of boundary conditions, we reexamine QCD axion star solutions based on the relativistic Klein-Gordon equation (using the Ruffini-Bonazzola approach) and its non-relativistic limit, the Gross-Pitaevskii equation. A single free parameter, conveniently chosen as the central value of the wavefunction of the axion star, or alternatively the chemical potential with range $-m<\mu< 0$ (where $m$ is the axion mass), uniquely determines a spherically-symmetric ground state solution, the axion condensate. We clarify how the interplay of various terms of the Klein-Gordon equation determines the properties of solutions in three separate regions: the structurally stable (corresponding to a local energy minimum) dilute and dense regions, and the intermediate, structurally unstable transition region. From the Klein-Gordon equation, one can derive alternative equations of motion including the Gross-Pitaevskii and Sine-Gordon equations, which have been used previously to describe axion stars in the dense region. In this work, we clarify precisely how and why such methods break down as the binding energy increases, emphasizing the necessity of using the full relativistic Klein-Gordon approach. Finally, we point out that, even after including perturbative axion number violating corrections, solutions to the equations of motion, which assume approximate conservation of axion number, break down completely in the regime with strong binding energy, where the magnitude of the chemical potential approaches the axion mass. | high energy physics phenomenology |
This work is motivated by multimodality breast cancer imaging data, which is quite challenging in that the signals of discrete tumor-associated microvesicles (TMVs) are randomly distributed with heterogeneous patterns. This imposes a significant challenge for conventional imaging regression and dimension reduction models, which assume a homogeneous feature structure. We develop an innovative multilayer tensor learning method to incorporate heterogeneity into a higher-order tensor decomposition and predict disease status effectively by utilizing subject-wise imaging features and multimodality information. Specifically, we construct a multilayer decomposition which leverages an individualized imaging layer in addition to a modality-specific tensor structure. One major advantage of our approach is that we are able to efficiently capture the heterogeneous spatial features of signals that are not characterized by a population structure, while integrating multimodality information simultaneously. To achieve scalable computing, we develop a new bi-level block improvement algorithm. In theory, we investigate the convergence of the algorithm, the tensor signal recovery error bound, and the asymptotic consistency of the prediction model estimation. We also apply the proposed method to simulated and human breast cancer imaging data. Numerical results demonstrate that the proposed method outperforms other existing competing methods. | statistics
The moment-of-fluid method (MOF) is an extension of the volume-of-fluid method with piecewise linear interface construction (VOF-PLIC). In MOF reconstruction, the optimized normal vector is determined from the reference centroid and the volume fraction by iteration. The state-of-the-art work by \citet{milcent_moment--fluid_2020} proposed an analytic gradient of the objective function, which greatly reduces the computational cost. In this study, we further accelerate the MOF reconstruction algorithm by using Gauss-Newton iteration instead of Broyden-Fletcher-Goldfarb-Shanno (BFGS) iteration. We also propose an improved initial guess for MOF reconstruction, which improves the efficiency and robustness of the MOF reconstruction algorithm. Our implementation and test cases are available in our Github repository. | physics
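The optimization core here is a standard Gauss-Newton iteration. Below is a generic, self-contained sketch applied to a toy curve-fitting residual; the real MOF residual is the mismatch between the reference centroid and the centroid obtained by polyhedron truncation, and the analytic gradient of Milcent et al. replaces the finite-difference Jacobian used here.

```python
import numpy as np

def gauss_newton(residual, x0, n_iter=20, h=1e-6):
    """Generic Gauss-Newton for min ||residual(x)||^2 with a forward-difference
    Jacobian (a sketch, not the paper's implementation)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            xp = x.copy(); xp[j] += h
            J[:, j] = (residual(xp) - r) / h
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]   # linearized least squares
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

# Toy example: recover amplitude and decay rate from noisy samples
t = np.linspace(0, 2, 30)
y_obs = 2.0 * np.exp(-1.3 * t) + 0.01 * np.random.default_rng(1).standard_normal(30)
res = lambda p: p[0] * np.exp(-p[1] * t) - y_obs
print("amplitude, rate:", gauss_newton(res, x0=[1.0, 1.0]))   # ~ [2.0, 1.3]
```

Gauss-Newton exploits the least-squares structure of the objective, which is why it can converge in far fewer iterations than a quasi-Newton method like BFGS on this class of problems.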
What is the relationship between linguistic dependencies and statistical dependence? Building on earlier work in NLP and cognitive science, we study this question. We introduce a contextualized version of pointwise mutual information (CPMI), using pretrained language models to estimate probabilities of words in context. Extracting dependency trees which maximize CPMI, we compare the resulting structures against gold dependencies. Overall, we find that these maximum-CPMI trees correspond to linguistic dependencies more often than trees extracted from a non-contextual PMI estimate, but only roughly as often as a simple baseline formed by connecting adjacent words. We also provide evidence that the extent to which the two kinds of dependency align cannot be explained by the distance between words or by the category of the dependency relation. Finally, our analysis sheds some light on the differences between large pretrained language models, specifically in the kinds of inductive biases they encode. | computer science
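Given a matrix of CPMI scores for all word pairs (assumed precomputed from a language model, which is the expensive part), extracting the maximum-CPMI tree is a maximum-weight spanning tree problem. A minimal sketch using Prim's algorithm:

```python
import numpy as np

def max_spanning_tree(score):
    """Prim's algorithm for the maximum-weight undirected spanning tree over a
    symmetric word-pair score matrix (e.g., CPMI estimates, assumed given)."""
    n = score.shape[0]
    in_tree, edges = [0], []
    while len(in_tree) < n:
        best = None                      # best edge leaving the current tree
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or score[i, j] > best[2]):
                    best = (i, j, score[i, j])
        edges.append(best[:2])
        in_tree.append(best[1])
    return edges

words = ["the", "dog", "chased", "a", "cat"]
rng = np.random.default_rng(0)
S = rng.random((5, 5)); S = (S + S.T) / 2    # stand-in for a CPMI score matrix
for i, j in max_spanning_tree(S):
    print(f"{words[i]} -- {words[j]}")
```

The O(n^3) loop is perfectly adequate at sentence length; the extracted edges can then be scored against gold dependency arcs as an unlabeled undirected attachment measure.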
The galaxy bispectrum on scales around and above the equality scale receives contributions from relativistic effects. Some of these arise from lightcone deformation effects, which come from local and line-of-sight integrated contributions. Here we calculate the local contributions from the generated vector and tensor background which is formed as scalar modes couple and enter the horizon. We show that these modes are sub-dominant when compared with other relativistic contributions. | astrophysics |
Passive RFID technology is widely used in user authentication and access control. We propose RF-Rhythm, a secure and usable two-factor RFID authentication system with strong resilience to lost/stolen/cloned RFID cards. In RF-Rhythm, each legitimate user performs a sequence of taps on his/her RFID card according to a self-chosen secret melody. Such rhythmic taps can induce phase changes in the backscattered signals, which the RFID reader can detect to recover the user's tapping rhythm. In addition to verifying the RFID card's identification information as usual, the backend server compares the extracted tapping rhythm with what it acquires in the user enrollment phase. The user passes authentication checks if and only if both verifications succeed. We also propose a novel phase-hopping protocol in which the RFID reader emits Continuous Wave (CW) with random phases for extracting the user's secret tapping rhythm. Our protocol can prevent a capable adversary from extracting and then replaying a legitimate tapping rhythm from sniffed RFID signals. Comprehensive user experiments confirm the high security and usability of RF-Rhythm with false-positive and false-negative rates close to zero. | electrical engineering and systems science |
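A toy sketch of the rhythm-recovery step (the signal model, thresholds, and tolerances below are our assumptions, not the paper's processing chain): treat each tap as an abrupt step in the measured backscatter phase, detect the steps, and compare inter-tap intervals with the enrolled secret rhythm.

```python
import numpy as np

fs = 200.0                                     # phase samples per second (assumed)
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / fs)
true_taps = np.array([0.5, 1.0, 1.25, 2.0, 3.0])   # user's secret rhythm [s]
phase = 0.02 * rng.standard_normal(t.size)
for tap in true_taps:                          # each tap induces a phase step
    phase += 0.6 * (t >= tap)

jumps = np.abs(np.diff(phase))
idx = np.where(jumps > 0.3)[0]                 # threshold chosen by inspection
tap_times = t[idx + 1]
intervals = np.diff(tap_times)

enrolled = np.diff(true_taps)                  # stored at enrollment
ok = len(intervals) == len(enrolled) and np.allclose(intervals, enrolled, atol=0.05)
print("detected taps:", np.round(tap_times, 2), " rhythm match:", ok)
```

Comparing intervals rather than absolute times makes the check invariant to when the user starts tapping, while the tolerance absorbs natural timing jitter.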
Neural-network-based image restoration methods tend to use low-resolution image patches for training. Although higher-resolution image patches can provide more global information, state-of-the-art methods cannot utilize them due to huge GPU memory usage and an unstable training process. However, plenty of studies have shown that global information is crucial for image restoration tasks like image demosaicing and enhancement. In this work, we propose a HighEr-Resolution Network (HERN) to fully learn global information in high-resolution image patches. To achieve this, HERN employs two parallel paths to learn image features at two different resolutions. By combining global-aware features and multi-scale features, our HERN is able to learn global information with feasible GPU memory usage. Besides, we introduce a progressive training method to solve the instability issue and accelerate model convergence. On the task of image demosaicing and enhancement, our HERN achieves state-of-the-art performance on the AIM2019 RAW to RGB mapping challenge. The source code of our implementation is available at https://github.com/MKFMIKU/RAW2RGBNet. | electrical engineering and systems science
Convolutional Neural Networks (CNNs) are the current de-facto models used for many imaging tasks due to their high learning capacity as well as their architectural qualities. The ubiquitous UNet architecture provides an efficient and multi-scale solution that combines local and global information. Despite the success of UNet architectures, the use of upsampling layers can cause artefacts. In this work, a method for assessing the structural biases of UNets and the effects these have on the outputs is presented, characterising their impact in the Fourier domain. A new upsampling module is proposed, based on a novel use of the Guided Image Filter, that provides spectrally consistent outputs when used in a UNet architecture, forming the Guided UNet (GUNet). The GUNet architecture is applied and evaluated for example applications of inverse tone mapping/dynamic range expansion and colourisation from grey-scale images and is shown to provide higher fidelity outputs. | electrical engineering and systems science |
In this article, we study the relationship among maximal green sequences, complete forward hom-orthogonal sequences and stability functions in abelian length categories. We first give a one-to-one correspondence between maximal green sequences and complete forward hom-orthogonal sequences via mutual constructions, and then prove that a maximal green sequence can be induced by a central charge if and only if it satisfies the crossing inequalities. As applications, we show that the crossing inequalities can be computed by $c$-vectors for finite dimensional algebras; finally, we give the Rotation Lemma for finite dimensional Jacobian algebras. | mathematics
We consider a non-supersymmetric domain-wall version of $\mathcal{N} = 4$ SYM theory where five out of the six scalar fields have non-zero classical values on one side of a wall of codimension one. The classical fields have commutators which constitute an irreducible representation of the Lie algebra $\mathfrak{so}(5)$ leading to a highly non-trivial mixing between color and flavor components of the quantum fields. Making use of fuzzy spherical harmonics on $S^4$, we explicitly solve the mixing problem and derive not only the spectrum of excitations at the quantum level but also the propagators of the original fields needed for perturbative quantum computations. As an application, we derive the one-loop one-point function of a chiral primary and find complete agreement with a supergravity prediction of the same quantity in a double-scaling limit which involves a limit of large instanton number in the dual D3-D7 probe-brane setup. | high energy physics theory |
All investigators working on the foundations of quantum mechanics agree that the theory has profoundly modified our conception of reality. But there ends the consensus. The unproblematic formalism of the theory gives rise to a number of very different interpretations, each of which has consequences on the notion of reality. This paper analyses how the Copenhagen interpretation, von Neumann's state vector collapse, Bohm and de Broglie's pilot wave and Everett's many worlds modify, each in its own way, the classical conception of reality, whose local character, in particular, requires revision. | quantum physics |
The present contribution aims at obtaining the energy/dark energy fraction of the universe by starting from the Sitter vacuum only and without involving any additional source of energy. To do so, we consider two different standard solutions of the Einstein vacuum equations with positive cosmological constant. In accordance with [9][10], we initially derive an uncertainty principle for the associated spherical and hyperbolical time-slices to highlight the connection between the classical notion of spatial curvature and the quantum mechanical uncertainty of position and momentum in de Sitter space (Theorem). Based on the positive and negative curvatures of these foliations, an alternative notion of (time-dependent) energy and dark energy of the vacuum is established. This opens the possibility of a formal derivation of Einstein's gravitational constant $\kappa$ by matching the dark energy contribution at the Planck scale at one Planck time after the initial singularity. Moreover, for the fraction of 70\% dark energy, the age of the universe is estimated to be about 13.7 billion years. Finally, we verify that the metric corresponding to a suitable junction of the hyperbolic and the spherical foliation is a solution of Einstein's field equations. This suggests a cosmology given by a de Sitter space embedded into an asymptotic Schwarzschild-Anti-de Sitter background. | physics |
We complete the analysis of the effective field theory at the electroweak scale for minimal models of fundamental partial compositeness. Specifically, we consider fermions in the complex and real representations of the gauge group underlying the composite Higgs dynamics, since the pseudo-real representation was investigated earlier. The minimal models feature the cosets SU(4)xSU(4)/SU(4)_D and SU(5)/SO(5) for the complex and real representations, respectively. We determine the vacuum alignment, the electroweak precision constraints, and additional collider constraints. We finally discuss the main differences among the different models of minimal partial compositeness. | high energy physics phenomenology
We study the shape and evolution of the star formation main sequence in three independently developed semi-analytic models of galaxy formation. We focus, in particular, on the characterization of the model galaxies that are significantly above the main sequence, and that can be identified with galaxies classified as `starburst' in recent observational work. We find that, in all three models considered, star formation triggered by merger events (both minor and major) contributes only a very small fraction of the cosmic density of star formation. While mergers are associated with bursts of star formation in all models, galaxies that experienced recent merger events are not necessarily significantly above the main sequence. On the other hand, `starburst galaxies' are not necessarily associated with merger episodes, especially at the low-mass end. Galaxies that experienced recent mergers can have relatively low levels of star formation when/if the merger is gas-poor, and galaxies with no recent merger can experience starburst episodes due to a combination of a large amount of cold gas available from cooling/accretion events and/or small disk radii which increase the cold gas surface density. | astrophysics
We investigate the nonequilibrium back reaction on the Schwarzschild black hole from the radiation field. The back reactions are characterized by the membrane close to the black hole. When the membrane is thin, we find that a larger temperature difference can lead to a more significant negative surface tension, a larger thermodynamic dissipation cost and back reaction in energy and entropy, as well as a larger black hole area. This may be relevant to primordial black holes in the early universe. Moreover, our nonequilibrium model can resolve the inconsistency issue of the black hole back reaction under the zero mass limit in the equilibrium case. In the thick membrane case, the nonequilibrium back reaction is found to be more significant than in the thin membrane case. The nonequilibrium temperature difference can increase the energy and entropy loss as well as the thermodynamic dissipation of the black hole and the membrane back reactions. The nonequilibrium dissipation cost characterized by the entropy production rate appears to be significant compared to the entropy rate radiated by the black hole under a finite temperature difference. This may shed light on the black hole information paradox due to the information loss from the entropy production rate in the nonequilibrium case. The nonequilibrium thermodynamic fluctuations can also reflect the effects of the back reactions of the Hawking radiation on the evolution of a black hole. | high energy physics theory
The migration behaviors of cancer cells are known to be heterogeneous. However, the interplay between the adhesion interactions, dynamical shape changes and fluid flows in regulating cell migration heterogeneity and plasticity during cancer metastasis is still elusive. To further our quantitative understanding of cell motility and morphology, we develop a theory using the stochastic quantization method that describes the role of biophysical cues in regulating diverse cell motility. We show that the cumulative effect of time-dependent adhesion interactions, which determine the structural rearrangements, and of the self-generated force due to actin remodeling dictates the super-diffusive motion of the mesenchymal phenotype in the absence of flow. Interstitial flows regulate the cell motility phenotype and promote amoeboid over mesenchymal motility through adhesion interactions. Cells exhibit a dynamical slowing down of collective migration, with a decreasing degree of super-diffusion. Mesenchymal cells are more persistent and diffusive compared to amoeboid cells. Our findings suggest a mechanism of interstitial-flow-induced directed motion of cancer cells through adhesion, and provide much-needed insight into a recent experimental observation concerning the diverse motility of breast cancer cells. | condensed matter
We study the vacuum geometry prescribed by the gauge invariant operators of the MSSM via the Plethystic Programme. This is achieved by using several tricks to perform the highly computationally challenging Molien-Weyl integral, from which we extract the Hilbert series, encoding the invariants of the geometry at all degrees. The fully refined Hilbert series is presented as the explicit sum of 1422 rational functions. We found a good choice of weights to unrefine the Hilbert series into a rational function of a single variable, from which we can read off the dimension and the degree of the vacuum moduli space of the MSSM gauge invariants. All data in Mathematica format are also presented. | high energy physics theory |
We describe the structure of the Dirichlet spectrum for Fichera layers and crosses in any dimension $n\ge3$. We also apply the obtained results to the classical problem of Brownian exit times in these domains. | mathematics
We propose an exact map from commuting lattice spin systems with gauge interactions to fermionic models in an arbitrary number of dimensions. | condensed matter |
Multilingual neural machine translation (NMT) enables training a single model that supports translation from multiple source languages into multiple target languages. In this paper, we push the limits of multilingual NMT in terms of the number of languages used. We perform extensive experiments in training massively multilingual NMT models, translating up to 102 languages to and from English within a single model. We explore different setups for training such models and analyze the trade-offs between translation quality and various modeling decisions. We report results on the publicly available TED talks multilingual corpus, where we show that massively multilingual many-to-many models are effective in low-resource settings, outperforming the previous state of the art while supporting up to 59 languages. Our experiments on a large-scale dataset with 102 languages to and from English and up to one million examples per direction also show promising results, surpassing strong bilingual baselines and encouraging future work on massively multilingual NMT. | computer science
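A common recipe for training such a single many-to-many model is to prepend a target-language token to each source sentence so that one shared network learns all translation directions. A minimal data-preparation sketch follows; the exact token format used in this paper's setup may differ.

```python
# Prepend a target-language token to the source; one shared model learns all pairs.
examples = [
    ("en", "de", "How are you?", "Wie geht es dir?"),
    ("fr", "en", "Bonjour tout le monde.", "Hello everyone."),
]

def to_training_pair(src_lang, tgt_lang, src, tgt):
    return f"<2{tgt_lang}> {src}", tgt       # e.g. ("<2de> How are you?", ...)

for ex in examples:
    print(to_training_pair(*ex))
```

Because all directions share parameters and a joint vocabulary, low-resource language pairs can benefit from transfer off the high-resource ones, which is the effect the TED-talks experiments quantify.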
We classify positive solutions to a class of quasilinear equations with Neumann or Robin boundary conditions in convex domains. Our main tool is an integral formula involving the trace of some relevant quantities for the problem. Under a suitable condition on the nonlinearity, a relevant consequence of our results is that we can extend to weak solutions a celebrated result obtained for stable solutions by Casten and Holland and by Matano. | mathematics |
We study the typical behavior of a generalized version of Google's PageRank algorithm on a large family of inhomogeneous random digraphs. This family includes as special cases directed versions of classical models such as the Erd\"os-R\'enyi model, the Chung-Lu model, the Poissonian random graph and the generalized random graph, and is suitable for modeling scale-free directed complex networks where the number of neighbors a vertex has is related to its attributes. In particular, we show that the rank of a randomly chosen node in a graph from this family converges weakly to the attracting endogenous solution to the stochastic fixed-point equation $$\mathcal{R} \stackrel{\mathcal{D}}{=} \sum_{i=1}^{\mathcal{N}} \mathcal{C}_i \mathcal{R}_i + \mathcal{Q},$$ where $(\mathcal{N}, \mathcal{Q}, \{\mathcal{C}_i\}_{i \geq 1})$ is a real-valued vector with $\mathcal{N}\in \{0,1,2,...\}$, the $\{ \mathcal{R}_i\}$ are i.i.d.~copies of $\mathcal{R}$, independent of $(\mathcal{N}, \mathcal{Q}, \{\mathcal{C}_i\}_{i \geq 1})$, with $\{ \mathcal{C}_i\}$ i.i.d.~and independent of $(\mathcal{N}, \mathcal{Q})$; $\stackrel{\mathcal{D}}{=}$ denotes equality in distribution. This result can then be used to provide further evidence of the power-law behavior of PageRank on scale-free graphs. | mathematics |
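The attracting endogenous solution of this stochastic fixed-point equation can be approximated numerically by population dynamics: keep a pool of samples of $\mathcal{R}$ and repeatedly resample the right-hand side. The particular laws of $(\mathcal{N}, \mathcal{Q}, \mathcal{C})$ below are illustrative assumptions, chosen only to satisfy the contraction condition $E[\mathcal{N}]E[\mathcal{C}] < 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
pool = np.ones(20_000)                       # current sample pool for the law of R

for _ in range(20):
    N = rng.poisson(2.0, pool.size)          # number of terms in the sum
    Q = rng.uniform(0.5, 1.5, pool.size)     # "personalization" term
    new = np.empty_like(pool)
    for k in range(pool.size):
        R_i = rng.choice(pool, N[k])         # i.i.d. copies of R: resample the pool
        C_i = rng.uniform(0.0, 0.4, N[k])    # E[N]E[C] = 0.4 < 1 ensures contraction
        new[k] = C_i @ R_i + Q[k]
    pool = new

# Analytic check on the mean: E[R] = E[Q] / (1 - E[N]E[C]) = 1 / 0.6 ~ 1.67
print("mean:", pool.mean().round(3), " tail P(R > 5):", (pool > 5).mean())
```

The empirical tail of the pool is what exhibits the power-law behavior that the theorem predicts for PageRank on scale-free graphs when the weight and degree distributions are heavy-tailed.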
The geometric phase (GP) acquired by a neutron passing through a uniform magnetic field elucidates a subtle interplay between its spatial and spin degrees of freedom. In the standard setup using thermal neutrons, the kinetic energy is much larger than the typical Zeeman split. This causes the spin to undergo nearly perfect precession around the axis of the magnetic field and the GP becomes a function only of the corresponding cone angle. Here, we perform a plane wave analysis of the GP of very slow neutrons, for which the precession feature breaks down. Purely quantum-mechanical matter wave effects, such as resonance, reflection, and tunneling, become relevant for the behavior of the GP in this low energy scattering regime. | quantum physics |