text (stringlengths 11-9.77k) | label (stringlengths 2-104) |
---|---|
Histopathological images of tumors contain abundant information about how tumors grow and how they interact with their micro-environment. A better understanding of tissue phenotypes in these images could reveal novel determinants of pathological processes underlying cancer, and in turn improve diagnosis and treatment options. Advances in deep learning make it well suited to achieving these goals; however, its application is limited by the cost of obtaining high-quality labels from patient data. Unsupervised learning, in particular deep generative models with representation learning properties, provides an alternative path to further understand cancer tissue phenotypes by capturing tissue morphologies. In this paper, we develop a framework which allows GANs to capture key tissue features and uses these characteristics to give structure to its latent space. To this end, we trained our model on two different datasets: H&E colorectal cancer tissue from the National Center for Tumor Diseases (NCT) and H&E breast cancer tissue from the Netherlands Cancer Institute (NKI) and Vancouver General Hospital (VGH), composed of 86 slide images and 576 TMAs, respectively. We show that our model generates high-quality images, with an FID of 16.65 (breast cancer) and 32.05 (colorectal cancer). We further assess the quality of the images with cancer tissue characteristics (e.g. counts of cancer, lymphocyte, or stromal cells), using this quantitative information to calculate the FID and showing a consistent performance of 9.86. Additionally, the latent space of our model shows an interpretable structure and allows semantic vector operations that translate into tissue feature transformations. Furthermore, ratings from two expert pathologists found no significant difference between our generated tissue images and real ones. The code, images, and pretrained models are available at https://github.com/AdalbertoCq/Pathology-GAN | electrical engineering and systems science |
Chronic pain is defined as pain that lasts or recurs for more than 3 to 6 months, often long after the injury or illness that initially caused the pain has healed. The "gold standard" for chronic pain assessment remains self report and clinical assessment via a biopsychosocial interview, since there has been no device that can measure it. A device to measure pain would be useful not only for clinical assessment, but potentially also as a biofeedback device leading to pain reduction. In this paper we propose an end-to-end deep learning framework for chronic pain score assessment. Our deep learning framework splits the long time-course data samples into shorter sequences, and uses Consensus Prediction to classify the results. We evaluate the performance of our framework on two chronic pain score datasets collected from two iterations of prototype Pain Meters that we have developed to help chronic pain subjects better understand their health condition. | electrical engineering and systems science |
Majorana zero modes can appear at the wire ends of a one-dimensional topological superconductor and manifest themselves as a quantized zero-bias conductance peak in the tunnel spectroscopy of normal-superconductor junctions. However, in superconductor-semiconductor hybrid nanowires, zero-bias conductance peaks may arise owing to topologically trivial mechanisms as well, mimicking the Majorana-induced topological peak in many aspects. In this work, we systematically investigate the characteristics of zero-bias conductance peaks for topological Majorana bound states, trivial quasi-Majorana bound states and low-energy Andreev bound states arising from smooth potential variations, and disorder-induced subgap bound states. Our focus is on the conductance peak value (i.e., equal to, greater than, or less than $2e^2/h$), as well as the robustness (plateau- or spike-like) against the tuning parameters (e.g., the magnetic field and tunneling gate voltage) for zero bias peaks arising from the different mechanisms. We find that for Majoranas and quasi-Majoranas, the zero-bias peak values are no more than $2e^2/h$, and a quantized conductance plateau forms generically as a function of parameters. By contrast, for conductance peaks due to low-energy Andreev bound states or disorder-induced bound states, the peak values may exceed $2e^2/h$, and a conductance plateau is rarely observed unless through careful fine-tunings. Our findings should shed light on the interpretation of experimental measurements on the tunneling spectroscopy of normal-superconductor junctions of hybrid Majorana nanowires. | condensed matter |
The RNN-Transducers and improved attention-based encoder-decoder models are widely applied to streaming speech recognition. Compared with these two end-to-end models, the CTC model is more efficient in training and inference. However, it cannot capture the linguistic dependencies between the output tokens. Inspired by the success of two-pass end-to-end models, we introduce a transformer decoder and the two-stage inference method into the streaming CTC model. During inference, the CTC decoder first generates many candidates in a streaming fashion. Then the transformer decoder selects the best candidate based on the corresponding acoustic encoded states. The second-stage transformer decoder can be regarded as a conditional language model. We assume that a sufficiently large number and diversity of candidates generated in the first stage can compensate for the CTC model's lack of language modeling ability. All the experiments are conducted on the Chinese Mandarin dataset AISHELL-1. The results show that our proposed model can implement streaming decoding in a fast and straightforward way. Our model can achieve up to a 20% reduction in the character error rate compared with the baseline CTC model. In addition, our model can also perform non-streaming inference with only a little performance degradation. | electrical engineering and systems science |
The solar wind proton temperature at 1 au has been found to be correlated with small-scale intermittent magnetic structures, i.e., regions with enhanced temperature are associated with coherent structures such as current sheets. Using Parker Solar Probe data from the first encounter, we study this association using measurements of radial proton temperature, employing the Partial Variance of Increments (PVI) technique to identify intermittent magnetic structures. We observe that the probability density functions of high-PVI events have higher median temperatures than those with lower PVI. The regions in space where PVI peaks were also locations that had enhanced temperatures when compared with similar regions in the immediate vicinity, suggesting a heating mechanism in the young solar wind that is associated with intermittency developed by a nonlinear turbulent cascade. | physics |
This is the fourth in a series of papers on low X-ray luminosity galaxy clusters. The sample comprises 45 galaxy clusters with X-ray luminosities fainter than 0.7 $\times$ 10$^{44}$ erg s$^{-1}$ at redshifts lower than 0.2 in the regions of the Sloan Digital Sky Survey. The sample of spectroscopic members of the galaxy clusters was obtained with the criteria r$_p$ $\le$ 1 Mpc and $\Delta V \leq \sigma$ (using our $\sigma$ estimates), and contains 21 galaxy clusters with more than 6 spectroscopic members. We have also defined a sample of photometric members with galaxies that satisfy r$_p \le $ 1 Mpc and $\Delta V \leq $ 6000 km s$^{-1}$, including 45 galaxy clusters with more than 6 cluster members. We have divided the redshift range into three bins: $z \leq 0.065$; 0.065 $<$ z $<$ 0.10; and z $\ge$ 0.10. We have stacked the galaxy clusters using the spectroscopic sub-sample and we have computed the best RS linear fit within 1$\sigma$ dispersion. With the photometric sub-sample, we have added more data to the RS, obtaining the photometric 1$\sigma$ dispersion relative to the spectroscopic RS fit. We have computed the luminosity function using the $1/V_{max}$ method, fitting it with a Schechter function. The obtained parameters for these galaxy clusters with low X-ray luminosities are remarkably similar to those for groups and poor galaxy clusters at these lower redshifts. | astrophysics |
The Thermo Field Dynamics (TFD) formalism is used to investigate regular black holes at finite temperature. Using the Teleparallelism Equivalent to General Relativity (TEGR), the gravitational Stefan-Boltzmann law and the gravitational Casimir effect at zero and finite temperature are calculated. In addition, the first law of thermodynamics is considered. Then the gravitational entropy and the temperature of the event horizon of a class of regular black holes are determined. | high energy physics theory |
We present a photonic integrated circuit architecture for a quantum programmable gate array (QPGA) capable of preparing arbitrary quantum states and operators. The architecture consists of a lattice of phase-modulated Mach-Zehnder interferometers, which perform rotations on path-encoded photonic qubits, and embedded quantum emitters, which use a two-photon scattering process to implement a deterministic controlled-$\sigma_z$ operation between adjacent qubits. By appropriately setting phase shifts within the lattice, the device can be programmed to implement any quantum circuit without hardware modifications. We provide algorithms for exactly preparing arbitrary quantum states and operators on the device and we show that gradient-based optimization can train a simulated QPGA to automatically implement highly compact approximations to important quantum circuits with near-unity fidelity. | quantum physics |
An alternative method is presented for extracting the von Neumann entropy $-\operatorname{Tr} (\rho \ln \rho)$ from $\operatorname{Tr} (\rho^n)$ for integer $n$ in a quantum system with density matrix $\rho$. Instead of relying on direct analytic continuation in $n$, the method uses a generating function $-\operatorname{Tr} \{ \rho \ln [(1-z \rho) / (1-z)] \}$ of an auxiliary complex variable $z$. The generating function has a Taylor series that is absolutely convergent within $|z|<1$, and may be analytically continued in $z$ to $z = -\infty$ where it gives the von Neumann entropy. As an example, we use the method to calculate analytically the CFT entanglement entropy of two intervals in the small cross ratio limit, reproducing a result that Calabrese et al. obtained by direct analytic continuation in $n$. Further examples are provided by numerical calculations of the entanglement entropy of two intervals for general cross ratios, and of one interval at finite temperature and finite interval length. | high energy physics theory |
Over-the-air computation (AirComp)-based federated learning (FL) enables low-latency uploads and the aggregation of machine learning models by exploiting simultaneous co-channel transmission and the resultant waveform superposition. This study aims at realizing secure AirComp-based FL against various privacy attacks where malicious central servers infer clients' private data from aggregated global models. To this end, a differentially private AirComp-based FL is designed in this study, where the key idea is to harness the receiver noise perturbation that is inherently injected into aggregated global models, thereby preventing the inference of clients' private data. However, the variance of the inherent receiver noise is often uncontrollable, which renders the process of injecting an appropriate noise perturbation to achieve a desired privacy level quite challenging. Hence, this study designs transmit power control across clients, wherein the received signal level is adjusted intentionally to control the noise perturbation levels effectively, thereby achieving the desired privacy level. It is observed that a higher privacy level requires lower transmit power, which indicates the tradeoff between the privacy level and signal-to-noise ratio (SNR). To understand this tradeoff more fully, the closed-form expressions of SNR (with respect to the privacy level) are derived, and the tradeoff is analytically demonstrated. The analytical results also demonstrate that among the configurable parameters, the number of participating clients is a key parameter that enhances the received SNR under the aforementioned tradeoff. The analytical results are validated through numerical evaluations. | computer science |
Federated Learning (FL) allows edge devices to collaboratively learn a shared prediction model while keeping their training data on the device, thereby decoupling the ability to do machine learning from the need to store data in the cloud. Despite the algorithmic advancements in FL, the support for on-device training of FL algorithms on edge devices remains poor. In this paper, we present an exploration of on-device FL on various smartphones and embedded devices using the Flower framework. We also evaluate the system costs of on-device FL and discuss how this quantification could be used to design more efficient FL algorithms. | computer science |
This paper proves Buss's hierarchy of bounded arithmetics $S^1_2 \subseteq S^2_2 \subseteq \cdots \subseteq S^i_2 \subseteq \cdots$ does not entirely collapse. More precisely, we prove that, for a certain $D$, $S^1_2 \subsetneq S^{2D+5}_2$ holds. Further, we can allow any finite set of true quantifier free formulas for the BASIC axioms of $S^1_2, S^2_2, \ldots$. By Takeuti's argument, this implies $\mathrm{P} \neq \mathrm{NP}$. Let $\mathbf{Ax}$ be a certain formulation of BASIC axioms. We prove that $S^1_2 \not\vdash \mathrm{Con}(\mathrm{PV}^-_1(D) + \mathbf{Ax})$ for sufficiently large $D$, while $S^{2D+7}_2 \vdash \mathrm{Con}(\mathrm{PV}^-_1(D) + \mathbf{Ax})$ for a system $\mathrm{PV}^-_1(D)$, a fragment of the system $\mathrm{PV}^-_1$, induction free first order extension of Cook's $\mathrm{PV}$, of which proofs contain only formulas with less than $D$ connectives. $S^1_2 \not\vdash \mathrm{Con}(\mathrm{PV}^-_1(D) + \mathbf{Ax})$ is proved by straightforward adaption of the proof of $\mathrm{PV} \not\vdash \mathrm{Con}(\mathrm{PV}^-)$ by Buss and Ignjatovi\'c. $S^{2D+5}_2 \vdash \mathrm{Con}(\mathrm{PV}^-_1(D) + \mathbf{Ax})$ is proved by $S^{2D+7}_2 \vdash \mathrm{Con}(\mathrm{PV}^-_q(D+2) + \mathbf{Ax})$, where $\mathrm{PV}^-_q$ is a quantifier-only extension of $\mathrm{PV}^-$. The later statement is proved by an extension of a technique used for Yamagata's proof of $S^2_2 \vdash \mathrm{Con}(\mathrm{PV}^-)$, in which a kind of satisfaction relation $\mathrm{Sat}$ is defined. By extending $\mathrm{Sat}$ to formulas with less than $D$-quantifiers, $S^{2D+3}_2 \vdash \mathrm{Con}(\mathrm{PV}^-_q(D) + \mathbf{Ax})$ is obtained in a straightforward way. | mathematics |
We present an in-depth analysis of the bright subgiant HR 7322 (KIC 10005473) using Kepler short-cadence photometry, optical interferometry from CHARA, high-resolution spectra from SONG, and stellar modelling using GARSTEC grids and the Bayesian grid-fitting algorithm BASTA. HR 7322 is only the second subgiant with high-quality Kepler asteroseismology for which we also have interferometric data. We find a limb-darkened angular diameter of $0.443 \pm 0.007$ mas, which, combined with a distance derived using the parallax from Gaia DR2 and a bolometric flux, yields a linear radius of $2.00 \pm 0.03$ R$_{\odot}$ and an effective temperature of $6350 \pm 90$ K. HR 7322 exhibits solar-like oscillations, and using the asteroseismic scaling relations and revisions thereof, we find good agreement between the asteroseismic and interferometric stellar radii. The level of precision reached by the careful modelling is to a great extent due to the presence of an avoided crossing in the dipole oscillation mode pattern of HR 7322. We find that the standard models predict a radius systematically smaller than the observed interferometric one and that a sub-solar mixing length parameter is needed to achieve a good fit to individual oscillation frequencies, interferometric temperature, and spectroscopic metallicity. | astrophysics |
The dualities that map hard-to-solve, interacting theories to free, non-interacting ones often trigger a deeper understanding of the systems to which they apply. However, simplifying assumptions such as Lorentz invariance, low dimensionality, or the absence of axial gauge fields, limit their application to a broad class of systems, including topological semimetals. Here we derive several axial field theory dualities in 2+1 and 3+1 dimensions by developing an axial slave-rotor approach capable of accounting for the axial anomaly. Our 2+1-dimensional duality suggests the existence of a dual, critical surface theory for strained three-dimensional non-symmorphic topological insulators. Our 3+1-dimensional duality maps free Dirac fermions to Dirac fermions coupled to emergent U(1) and Kalb-Ramond vector and axial gauge fields. Upon fixing an axial field configuration that breaks Lorentz invariance, this duality maps free to interacting Weyl semimetals, thereby suggesting that the quantization of the non-linear circular photogalvanic effect can be robust to certain interactions. Our work emphasizes how axial and Lorentz-breaking dualities improve our understanding of topological matter. | condensed matter |
We show that the free Banach lattice $\mathrm{FBL}(A)$ may be constructed as the completion of $\mathrm{FVL}(A)$ with respect to the maximal lattice seminorm $\nu$ on $\mathrm{FVL}(A)$ with $\nu(a)\le 1$ for all $a\in A$. We present a similar construction for the free Banach lattice $\mathrm{FBL}[E]$ generated by a Banach space $E$. | mathematics |
We study fibrations in the category of cubespaces/nilspaces. We show that a fibration of finite degree $f \colon X\rightarrow Y$ between compact ergodic gluing cubespaces (in particular nilspaces) factors as a (possibly countable) tower of compact abelian Lie group principal fiber bundles over $Y$. If the structure groups of $f$ are connected then the fibers are (uniformly) isomorphic (in a strong sense) to an inverse limit of nilmanifolds. In addition we give conditions under which the fibers of $f$ are isomorphic as subcubespaces. We introduce regionally proximal equivalence relations relative to factor maps between minimal topological dynamical systems for an arbitrary acting group. We prove that any factor map between minimal distal systems is a fibration and conclude that if such a map is of finite degree then it factors as a (possibly countable) tower of principal compact abelian Lie group extensions, thus achieving a refinement of both Furstenberg's and the Bronstein-Ellis structure theorems in this setting. | mathematics |
Topologically-ordered phases of matter, although stable against local perturbations, are usually restricted to relatively small regions in phase diagrams. Their preparation thus requires a precise fine-tuning of the system's parameters, a very challenging task in most experimental setups. In this work, we investigate a model of spinless fermions interacting with dynamical $\mathbb{Z}_2$ gauge fields on a cross-linked ladder, and show evidence of topological order throughout the full parameter space. In particular, we show how a magnetic flux is spontaneously generated through the ladder due to an Aharonov-Bohm instability, giving rise to topological order even in the absence of a plaquette term. Moreover, the latter coexists here with a symmetry-protected topological phase in the matter sector, which displays fractionalised gauge-matter edge states, and intertwines with it by a flux-threading phenomenon. Finally, we unveil the robustness of these features through a gauge frustration mechanism, akin to geometric frustration in spin liquids, allowing topological order to survive to arbitrarily large quantum fluctuations. In particular, we show how, at finite chemical potential, topological solitons are created in the gauge field configuration, which bind to fermions, forming $\mathbb{Z}_2$ deconfined quasi-particles. The simplicity of the model makes it an ideal candidate where 2D gauge theory phenomena, as well as exotic topological effects, can be investigated using cold-atom quantum simulators. | condensed matter |
Correctly pricing products or services in an online marketplace presents a challenging problem and one of the critical factors for the success of the business. When users are looking to buy an item they typically search for it. Query relevance models are used at this stage to retrieve and rank the items on the search page from most relevant to least relevant. The presented items are naturally "competing" against each other for user purchases. We provide a practical two-stage model to price this set of retrieved items for which distributions of their values are learned. The initial output of the pricing strategy is a price vector for the top displayed items in one search event. We later aggregate these results over searches to provide the supplier with the optimal price for each item. We applied our solution to large-scale search data obtained from Airbnb Experiences marketplace. Offline evaluation results show that our strategy improves upon baseline pricing strategies on key metrics by at least +20% in terms of booking regret and +55% in terms of revenue potential. | computer science |
Quantum sensing exploits fundamental features of quantum systems to achieve highly efficient measurement of physical quantities. Here, we propose a strategy to realize a single-qubit pseudo-Hermitian sensor from a dilated two-qubit Hermitian system. The pseudo-Hermitian sensor exhibits divergent susceptibility in dynamical evolution that does not necessarily involve an exceptional point. We demonstrate its potential advantages in overcoming noise that cannot be averaged out by repetitive measurements. The proposal is feasible with the state-of-the-art experimental capability in a variety of qubit systems, and represents a step towards the application of non-Hermitian physics in quantum sensing. | quantum physics |
In this work, the CFT dual of the Schwarzschild black hole is investigated. A Weyl rescaling factor is presented, so that the Weyl rescaled Schwarzschild metric, after a coordinate transformation, has an $AdS_{2} \times S^{2}$ geometry in the vicinity of its origin. Since the near-origin spacetime admits an $AdS_{2}$ factor, a 2D effective gravity, dimensionally reduced from the near-origin solution, is considered. It is exhibited that the dual CFT has a central charge $c=96 M^3$ which is an asymptotic conserved charge of the effective solution. Finally, the microscopic entropy of the Schwarzschild black hole is obtained by using the Cardy formula. It is revealed that the microscopic entropy exactly reproduces the Bekenstein-Hawking entropy of the Schwarzschild black hole. | high energy physics theory |
This work regards IIB string theory embeddings of massive and bi-metric $AdS_{4}$ gravity where the spectrum of the theory includes multiple massive gravitons. The corresponding geometry consists of multiple $AdS_{4}\times_{w}\mathcal{M}_{6}$ spacetimes coupled by Janus throats of different radii. In the second part of this work, the AdS distance conjecture is studied in the above context and it is shown that the scale of the predicted infinite tower of light states is related to the breakdown scale of the effective field theory. | high energy physics theory |
We give an upper bound for the maximum number of edges in an $n$-vertex 2-connected $r$-uniform hypergraph with no Berge cycle of length $k$ or greater, where $n\geq k \geq 4r\geq 12$. For $n$ large with respect to $r$ and $k$, this bound is sharp and is significantly stronger than the bound without restrictions on connectivity. It turned out that it is simpler to prove the bound for the broader class of Sperner families where the size of each set is at most $r$. For such families, our bound is sharp for all $n\geq k\geq r\geq 3$. | mathematics |
We use the spectroscopy and homogeneous photometry of 97 Type Ia supernovae obtained by the Carnegie Supernova Project as well as a subset of 36 Type Ia supernovae presented by Zheng et al. (2018) to examine maximum-light correlations in a four-dimensional (4-D) parameter space: $B$-band absolute magnitude, $M_B$, Si II $\lambda6355$ velocity, $v_{\rm Si}$, and Si II pseudo-equivalent widths pEW(Si II $\lambda6355$) and pEW(Si II $\lambda5972$). It is shown using Gaussian mixture models (GMMs) that the original four groups in the Branch diagram are well-defined and robust in this parameterization. We find three continuous groups that describe the behavior of our sample in [$M_B$, $v_{\rm Si}$] space. Extending the GMM into the full 4-D space yields a grouping system that only slightly alters group definitions in the [$M_B$, $v_{\rm Si}$] projection, showing that most of the clustering information in [$M_B$, $v_{\rm Si}$] is already contained in the 2-D GMM groupings. However, the full 4-D space does divide group membership for faster objects between core-normal and broad-line objects in the Branch diagram. A significant correlation between $M_B$ and pEW(Si II $\lambda5972$) is found, which implies that Branch group membership can be well-constrained by spectroscopic quantities alone. In general, we find that higher-dimensional GMMs reduce the uncertainty of group membership for objects between the originally defined Branch groups. We also find that the broad-line Branch group becomes nearly distinct with the inclusion of $v_{\rm Si}$, indicating that this subclass of SNe Ia may be somehow different from the other groups. | astrophysics |
The "double descent" risk curve was proposed to qualitatively describe the out-of-sample prediction accuracy of variably-parameterized machine learning models. This article provides a precise mathematical analysis for the shape of this curve in two simple data models with the least squares/least norm predictor. Specifically, it is shown that the risk peaks when the number of features $p$ is close to the sample size $n$, but also that the risk decreases towards its minimum as $p$ increases beyond $n$. This behavior is contrasted with that of "prescient" models that select features in an a priori optimal order. | computer science |
The financial sector presents many opportunities to apply various machine learning techniques. Centralized machine learning creates a constraint that limits further applications in the finance sector. Data privacy is a fundamental challenge for a variety of finance and insurance applications that rely on learning a model across different sections. In this paper, we define a new practical scheme of collaborative machine learning in which one party owns the data, but another party owns the labels only, and term this \textbf{Asymmetrically Collaborative Machine Learning}. For this scheme, we propose a novel privacy-preserving architecture where two parties can collaboratively train a deep learning model efficiently while preserving the privacy of each party's data. More specifically, we decompose the forward propagation and backpropagation of the neural network into four different steps and propose a novel protocol to handle information leakage in these steps. Our extensive experiments on different datasets demonstrate not only stable training without accuracy loss, but also a more than 100 times speedup compared with the state-of-the-art system. | computer science |
The idea of dark matter particles coupled only gravitationally is minimalist yet viable. Assuming an additional $Z_2$-breaking linear coupling of scalar curvature to the dark matter scalar (gravity portal) Refs. arXiv:1603.03696 and arXiv:1611.00725 claimed a strong parametric growth of the dark matter particle decay rate with its mass, which implies pronounced phenomenological signatures for the model. This peculiarity was attributed by the authors to the enhancement due to the presence of longitudinal gauge bosons in the final state. Quite unfortunately there were overlooked cancellations in the tree-level amplitudes. There is no miracle: all perturbative decay rates are suppressed by the strong coupling scale. | high energy physics phenomenology |
We have developed a new calorimeter for measuring thermodynamic properties in pulsed magnetic fields. An instrumental design is described along with the construction details including the sensitivity of a RuO2 thermometer. The operation of the calorimeter is demonstrated by measuring heat capacity of three samples, pure Germanium, CeCu2Ge2, and $\kappa$-(BEDT-TTF)2Cu[N(CN)2]Br, in pulsed fields up to 43.5 T. We have found that realization of field-stability is a key for measuring high-resolution heat capacity under pulsed fields. We have also tested the performance of the calorimeter by employing two measurement techniques; the quasi-adiabatic and dual-slope methods. We demonstrate that the calorimeter developed in this study is capable of performing high-resolution calorimetry in pulsed magnetic fields, which opens new opportunities for high-field thermodynamic studies. | physics |
We examined the latest data release from the GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) survey covering $345^\circ < l < 60^\circ$ and $180^\circ < l < 240^\circ$, using these data and those of the Widefield Infrared Survey Explorer to follow up candidate supernova remnants proposed in other sources. Of the 101 candidates proposed in the region, we are able to definitively confirm ten as SNRs, tentatively confirm two as SNRs, and reclassify five as H II regions. A further two are detectable in our images but difficult to classify; the remaining 82 are undetectable in these data. We also investigated the 18 unclassified Multi-Array Galactic Plane Imaging Survey (MAGPIS) candidate SNRs, newly confirming three as SNRs, reclassifying two as H II regions, and exploring the unusual spectra and morphology of two others. | astrophysics |
The ESA Rosetta mission has acquired unprecedented measurements of the nucleus surface of comet 67P/Churyumov-Gerasimenko (hereafter 67P), whose composition, as determined by in situ and remote sensing instruments including VIRTIS (Visible, InfraRed and Thermal Imaging Spectrometer), appears to be made of an assemblage of ices, minerals, and organic material. We performed a refined analysis of infrared observations of the nucleus of comet 67P carried out by the VIRTIS-M hyperspectral imager. We found that the overall shape of the 67P infrared spectrum is similar to that of other carbon-rich outer solar system objects, suggesting a possible genetic link with them. More importantly, we are also able to confirm the complex spectral structure of the wide 2.8-3.6 micron absorption feature populated by fainter bands. Among these, we unambiguously identified the presence of aliphatic organics by their ubiquitous 3.38, 3.42 and 3.47 micron bands. This novel infrared detection of aliphatic species on a cometary surface has strong implications for the evolutionary history of the primordial solar system and gives evidence that comets provide an evolutionary link between interstellar material and solar system bodies. | astrophysics |
minimal-lagrangians is a Python program which allows one to specify the field content of an extension of the Standard Model of particle physics and, using this information, to generate the most general renormalizable Lagrangian that describes such a model. As the program was originally created for the study of minimal dark matter models with radiative neutrino masses, it can handle additional scalar or Weyl fermion fields which are $\mathrm{SU}(3)_{\mathrm{C}}$ singlets, $\mathrm{SU}(2)_{\mathrm{L}}$ singlets, doublets or triplets, and can have arbitrary $\mathrm{U}(1)_{\mathrm{Y}}$ hypercharge. It is also possible to enforce an arbitrary number of global $\mathrm{U}(1)$ symmetries (with $\mathbb{Z}_2$ as a special case) so that the new fields can additionally carry such global charges. In addition to human-readable and $\mathrm{\LaTeX}$ output, the program can generate SARAH model files containing the computed Lagrangian, as well as information about the fields after electroweak symmetry breaking (EWSB), such as vacuum expectation values (VEVs) and mixing matrices. This capability allows further detailed investigation of the model in question, with minimal-lagrangians as the first component in a tool chain for rapid phenomenological studies of "minimal" dark matter models requiring little effort and no unnecessary input from the user. | high energy physics phenomenology |
Rate splitting (RS) is a potentially powerful and flexible technique for multi-antenna downlink transmission. In this paper, we address several technical challenges towards its practical implementation for beyond 5G systems. To this end, we focus on a single-cell system with a multi-antenna base station (BS) and K single-antenna receivers. We consider RS in its most general form, and joint decoding to fully exploit the potential of RS. First, we investigate the achievable rates under joint decoding and formulate the precoder design problems to maximize a general utility function, or to minimize the transmit power under pre-defined rate targets. Building upon the concave-convex procedure (CCCP), we propose precoder design algorithms for an arbitrary number of users. Our proposed algorithms approximate the intractable non-convex problems with a number of successively refined convex problems, and provably converge to stationary points of the original problems. Then, to reduce the decoding complexity, we consider the optimization of the precoder and the decoding order under successive decoding. Further, we propose a stream selection algorithm to reduce the number of precoded signals. With a reduced number of streams and successive decoding at the receivers, our proposed algorithm can even be implemented when the number of users is relatively large, whereas the complexity was previously considered as prohibitively high in the same setting. Finally, we propose a simple adaptation of our algorithms to account for the imperfection of the channel state information at the transmitter. Numerical results demonstrate that the general RS scheme provides a substantial performance gain as compared to state-of-the-art linear precoding schemes, especially with a moderately large number of users. | computer science |
In a recent paper (arXiv:1905.07348v1), the Dyson scheme to deal with the density matrix of non-Hermitian Hamiltonians was used to investigate the entanglement of states of a PT-symmetric bosonic system. The authors found that the von Neumann entropy can show a different behavior in the broken and unbroken regimes. We show that their results can be recast in terms of an abstract model of pseudo-Hermitian random matrices. It is found, however, that although the formalism is practically the same, the entanglement is not of Fock states but of Bell states. | quantum physics |
We present new 2D high resolution Fabry-Perot spectroscopic observations of 152 star-forming galaxies which are part of the $Herschel$ Reference Survey (HRS), a complete $K$-band selected, volume-limited sample of nearby galaxies, spanning a wide range in stellar mass and morphological type. Using improved data reduction techniques that provide adaptive binning based on Voronoi tessellation, using large field-of-view observations, we derive high spectral resolution (R$>$10,000) H$\alpha$ datacubes from which we compute H$\alpha$ maps and radial 2D velocity fields that are based on several thousand independent measurements. A robust method based on such fields allows us to accurately compute rotation curves and kinematical parameters, for which uncertainties are calculated using a method based on the power spectrum of the residual velocity fields. We check the consistency of the rotation curves by comparing our maximum rotational velocities to those derived from HI data, and computing the $i$-band, NIR, stellar and baryonic Tully-Fisher relations. We use this set of kinematical data combined to those available at other frequencies to study for the first time the relation between the dynamical and the total baryonic mass (stars, atomic and molecular gas, metals and dust), and derive the baryonic and dynamical main sequence on a representative sample of the local universe. | astrophysics |
In this paper we compare two finite words $u$ and $v$ by the lexicographical order of the infinite words $u^\omega$ and $v^\omega$. Informally, we say that we compare $u$ and $v$ by the infinite order. We show several properties of Lyndon words expressed using this infinite order. The innovative aspect of this approach is that it allows one to also take into account non-trivial conditions on the prefixes of a word, instead of only on the suffixes. In particular, we derive a result of Ufnarovskij [V. Ufnarovskij, "Combinatorial and asymptotic methods in algebra", 1995] that characterizes a Lyndon word as a word which is greater, with respect to the infinite order, than all its prefixes. Motivated by this result, we introduce the prefix standard permutation of a Lyndon word and the corresponding (left) Cartesian tree. We prove that the left Cartesian tree is equal to the left Lyndon tree, defined by the left standard factorization of Viennot [G. Viennot, "Algèbres de Lie libres et monoïdes libres", 1978]. This result is dual with respect to a theorem of Hohlweg and Reutenauer [C. Hohlweg and C. Reutenauer, "Lyndon words, permutations and trees", 2003]. | computer science |
Constructing efficient low-rate error-correcting codes with low-complexity encoding and decoding has become increasingly important for applications involving ultra-low-power devices such as Internet-of-Things (IoT) networks. To this end, schemes based on concatenating the state-of-the-art codes at moderate rates with repetition codes have emerged as practical solutions deployed in various standards. In this paper, we propose a novel mechanism for concatenating outer polar codes with inner repetition codes which we refer to as polar coded repetition. More specifically, we propose to transmit a slightly modified polar codeword by deviating from Arikan's standard 2 x 2 kernel in a certain number of polarization recursions at each repetition block. We show how this modification can improve the asymptotic achievable rate of the polar-repetition scheme, while ensuring that the overall encoding and decoding complexity is kept almost the same. The achievable rate is analyzed for the binary erasure channel (BEC). | computer science |
We derive conditions in the form of inequalities to detect the genuine $N$-partite entanglement of $N$ systems. The inequalities are expressed in terms of variances of spin operators, and can be tested by local spin measurements performed on the individual systems. Violation of the inequalities is sufficient (but not necessary) to certify the multipartite entanglement, and occurs when a type of spin squeezing is created. The inequalities are similar to those derived for continuous-variable systems, but instead are based on the Heisenberg spin-uncertainty relation $\Delta J_{x}\Delta J_{y}\geq|\langle J_{z}\rangle|/2$. We also extend previous work to derive spin-variance inequalities that certify the full tripartite inseparability or genuine multi-partite entanglement among systems with fixed spin $J$, as in Greenberger-Horne-Zeilinger (GHZ) states and W states where $J=1/2$. These inequalities are derived from the planar spin-uncertainty relation $(\Delta J_{x})^{2}+(\Delta J_{y})^{2}\geq C_{J}$ where $C_{J}$ is a constant for each $J$. Finally, it is shown how the inequalities detect multipartite entanglement based on Stokes operators. We illustrate with experiments that create entanglement shared among separated atomic ensembles, polarization-entangled optical modes, and the clouds of atoms of an expanding spin-squeezed Bose-Einstein condensate. For each example, we give a criterion to certify the mutual entanglement. | quantum physics |
Road extraction from very high resolution (VHR) satellite images is one of the most important topics in the field of remote sensing. In this paper, we propose an efficient Non-Local LinkNet with non-local blocks that can grasp relations between global features. This enables each spatial feature point to refer to all other contextual information and results in more accurate road segmentation. In detail, our single model, without any post-processing such as CRF refinement, performed better than any other published state-of-the-art ensemble model in the official DeepGlobe Challenge. Moreover, our NL-LinkNet beat D-LinkNet, the winner of the DeepGlobe challenge, with 43% fewer parameters, fewer giga floating-point operations per second (GFLOPs), and a shorter training convergence time. We also present empirical analyses on the proper usage of non-local blocks for the baseline model. | computer science |
We study the Kondo alloy model on a square lattice using dynamical mean-field theory (DMFT) for Kondo substitution and disorder effects, together with static mean-field approximations. We computed and analyzed photoemission properties as a function of electronic filling $n_c$, Kondo impurity concentration $x$, and strength of Kondo temperature $T_K$. We provide a complete description of the Angle Resolved Photoemission Spectroscopy (ARPES) signals expected in the paramagnetic Kondo phases. By analyzing the Fermi surface, we observe the Lifshitz-like transition predicted previously for strong $T_K$ at $x=n_c$ and we discuss the evolution of the dispersion from the dense coherent to the dilute Kondo regimes. At smaller $T_K$, we find that this transition marking the breakdown of coherence at $x=n_c$ becomes a crossover. However, we identify another transition at a smaller concentration $x^\star$ where the effective mass continuously vanishes. $x^\star$ separates the one-branch and the two-branches ARPES dispersions characterizing respectively dilute and dense Kondo paramagnetic regimes. The $x-T_K$ phase diagrams are also described, suggesting that the transition at $x^\star$ might be experimentally observable since magnetically ordered phases are stabilized at much lower $T_K$. Fermi surface reconstructions in antiferromagnetic and ferromagnetic phases are also discussed. | condensed matter |
Treating neural network inputs and outputs as random variables, we characterize the structure of neural networks that can be used to model data that are invariant or equivariant under the action of a compact group. Much recent research has been devoted to encoding invariance under symmetry transformations into neural network architectures, in an effort to improve the performance of deep neural networks in data-scarce, non-i.i.d., or unsupervised settings. By considering group invariance from the perspective of probabilistic symmetry, we establish a link between functional and probabilistic symmetry, and obtain generative functional representations of probability distributions that are invariant or equivariant under the action of a compact group. Our representations completely characterize the structure of neural networks that can be used to model such distributions and yield a general program for constructing invariant stochastic or deterministic neural networks. We demonstrate that examples from the recent literature are special cases, and develop the details of the general program for exchangeable sequences and arrays. | statistics |
The theory of limits of discrete combinatorial objects has been thriving for the last decade or so. The syntactic, algebraic approach to the subject is popularly known as "flag algebras", while the semantic, geometric one is often associated with the name ``graph limits''. The language of graph limits is generally more intuitive and expressible, but a price that one has to pay for it is that it is better suited for the case of ordinary graphs than for more general combinatorial objects. Accordingly, there have been several attempts in the literature, of varying degree of generality, to define limit objects for more complicated combinatorial structures. This paper is another attempt at a workable general theory of dense limit objects. Unlike previous efforts in this direction (with notable exception of [Ashwini Aroskar and James Cummings. Limits, regularity and removal for finite structures. Technical Report arXiv:1412.2014 [math.LO], arXiv e-print, 2014.]), we base our account on the same concepts from the first-order logic and the model theory as in the theory of flag algebras. We show how our definition naturally encompasses a host of previously considered cases (graphons, hypergraphons, digraphons, permutons, posetons, colored graphs, etc.), and we extend the fundamental properties of existence and uniqueness to this more general case. We also give an intuitive general proof of the continuous version of the Induced Removal Lemma based on the completeness theorem for propositional calculus. We capitalize on the notion of an open interpretation that often allows to transfer methods and results from one situation to another. Again, we show that some previous arguments can be quite naturally framed using this language. | mathematics |
We propose an extension of the LSST survey to cover the northern sky to DEC < +30 (accessible at airmass < 1.8). This survey will increase the LSST sky coverage by ~9,600 square degrees from 18,900 to 28,500 square degrees (a 50% increase) but use only 0.6-2.5% of the time depending on the synergies with other surveys. This increased area addresses a wide range of science cases that enhance all of the primary LSST science goals by significant amounts. The science enabled includes: increasing the area of the sky accessible for follow-up of multi-messenger transients including gravitational waves, mapping the Milky Way halo and halo dwarfs including discovery of RR Lyrae stars in the outer galactic halo, discovery of z>7 quasars in combination with Euclid, enabling a second-generation DESI and other spectroscopic surveys, and enhancing all areas of science by improving synergies with Euclid, WFIRST, and unique northern survey facilities. This white paper is the result of the Tri-Agency Working Group (TAG) appointed to develop synergies between missions and presents a unified plan for northern coverage. The range of time estimates reflects synergies with other surveys. If the modified DESC WFD survey, the ecliptic plane mini survey, and the north galactic spur mini survey are executed, this plan would only need 0.6% of the LSST time; however, if none of these are included, the overall request is 2.5% of the 10-year survey life. In other words, the majority of these observations are already suggested as part of these other surveys, and the intent of this white paper is to propose a unified baseline plan to carry out a broad range of objectives to facilitate a combination of multiple science objectives. A companion white paper gives Euclid-specific science goals, and we support the white papers for southern extensions of the LSST survey. | astrophysics |
As a technology foundation of cryptocurrencies, blockchain enables decentralized peer-to-peer trading through consensus mechanisms without the involvement of a third party. Blockchain has been regarded as an auspicious technology for future cellular networks. It is able to provide solutions to problems related to mobile operators and user trust, embedded smart contracts, security concerns, pricing (e.g. for roaming), etc. When applying blockchain to cellular networks, there are significant challenges in terms of deployment and application, due to resource-constrained transactions. This article begins by introducing the basic concept of blockchain and then moves on to illustrate its benefits and limitations in the roaming system. Two models of roaming-based blockchain technologies are offered to show their suitability for cellular networks as opposed to traditional technology. Finally, potential issues and challenges of roaming-based blockchains are addressed and evaluated using the roaming use case in the EU. | computer science |
We introduce Targeted Smooth Bayesian Causal Forests (tsBCF), a nonparametric Bayesian approach for estimating heterogeneous treatment effects which vary smoothly over a single covariate in the observational data setting. The tsBCF method induces smoothness by parameterizing terminal tree nodes with smooth functions, and allows for separate regularization of treatment effects versus prognostic effect of control covariates. Smoothing parameters for prognostic and treatment effects can be chosen to reflect prior knowledge or tuned in a data-dependent way. We use tsBCF to analyze a new clinical protocol for early medical abortion. Our aim is to assess relative effectiveness of simultaneous versus interval administration of mifepristone and misoprostol over the first nine weeks of gestation. The model reflects our expectation that the relative effectiveness varies smoothly over gestation, but not necessarily over other covariates. We demonstrate the performance of the tsBCF method on benchmarking experiments. Software for tsBCF is available at https://github.com/jestarling/tsbcf/. | statistics |
A wide range of human-robot collaborative applications in diverse domains such as manufacturing, health care, the entertainment industry, and social interactions, require an autonomous robot to follow its human companion. Different working environments and applications pose diverse challenges by adding constraints on the choice of sensors, the degree of autonomy, and dynamics of a person-following robot. Researchers have addressed these challenges in many ways and contributed to the development of a large body of literature. This paper provides a comprehensive overview of the literature by categorizing different aspects of person-following by autonomous robots. Also, the corresponding operational challenges are identified based on various design choices for ground, underwater, and aerial scenarios. In addition, state-of-the-art methods for perception, planning, control, and interaction are elaborately discussed and their applicability in varied operational scenarios are presented. Then, some of the prominent methods are qualitatively compared, corresponding practicalities are illustrated, and their feasibility is analyzed for various use-cases. Furthermore, several prospective application areas are identified, and open problems are highlighted for future research. | computer science |
We present the results of a global, three-dimensional magnetohydrodynamics simulation of an accretion disk with a rotating, weakly magnetized central star. The disk is threaded by a weak, large-scale poloidal magnetic field, and the central star has no strong stellar magnetosphere initially. Our simulation investigates the structure of the accretion flows from a turbulent accretion disk onto the star. The simulation reveals that fast accretion onto the star at high latitudes occurs even without a stellar magnetosphere. We find that the failed disk wind becomes the fast, high-latitude accretion as a result of angular momentum exchange mediated by magnetic fields well above the disk, where the Lorentz force that decelerates the rotational motion of gas can be comparable to the centrifugal force. Unlike the classical magnetospheric accretion scenario, fast accretion streams are not guided by magnetic fields of the stellar magnetosphere. Nevertheless, the accretion velocity reaches the free-fall velocity at the stellar surface due to the efficient angular momentum loss at a distant place from the star. This study provides a possible explanation why Herbig Ae/Be stars whose magnetic fields are generally not strong enough to form magnetospheres also show indications of fast accretion. A magnetically driven jet is not formed from the disk in our model. The differential rotation cannot generate sufficiently strong magnetic fields for the jet acceleration because the Parker instability interrupts the field amplification. | astrophysics |
Differential abundance tests for compositional data are essential and fundamental tasks in various biomedical applications, such as single-cell, bulk RNA-seq, and microbiome data analysis. Despite recent developments in the field, differential abundance analysis of compositional data is still a complicated and unsolved statistical problem because of the compositional constraint and prevalent zero counts in the dataset. A new differential abundance test is introduced in this paper to address these challenges, referred to as the robust differential abundance (RDB) test. Compared with existing methods, the RDB test 1) is simple and computationally efficient, 2) is robust to prevalent zero counts in compositional datasets, 3) can take the data's compositional nature into account, and 4) has a theoretical guarantee to control false discoveries in a general setting. Furthermore, in the presence of observed covariates, the RDB test can work with covariate balancing techniques to remove potential confounding effects and draw reliable conclusions. To demonstrate its practical merits, we apply the new test to several numerical examples using both simulated and real datasets. | statistics |
We investigate the renormalization structure of the scalar Galileon model in flat spacetime by calculating the one-loop divergences in a closed geometric form. The geometric formulation is based on the definition of an effective Galileon metric and allows us to apply known heat-kernel techniques. The result for the one-loop divergences is compactly expressed in terms of curvature invariants of the effective Galileon metric and corresponds to a resummation of the divergent one-loop contributions of all n-point functions. The divergent part of the one-loop effective action therefore serves as a generating functional for arbitrary n-point counterterms. We discuss our result within the Galileon effective field theory and give a brief outlook on extensions to more general Galileon models in curved spacetime. | high energy physics theory |
A radar and communication integration (RCI) system has great flexibility in allocating antenna resources to guarantee both radar and communication performance. This paper considers the array allocation problems for multiple-target localization and multiple-platform communication in an RCI network. The objective of array allocation is to maximize the communication capacity for each channel and to minimize the localization error for each target. In this paper, we first build a localization and communication model for array allocation in an RCI network. Minorization-maximization (MM) is then applied to create surrogate functions for multiple objective optimization problems. The projected gradient descent (PGD) method is further employed to solve two array allocation problems, with and without a certain communication capacity constraint. Computer simulations are conducted to evaluate the performance of the proposed algorithms. The results show that the proposed algorithms achieve improved localization and communication performance by efficiently allocating the array resources in the RCI network. | electrical engineering and systems science |
The gap between the predictions of collapse models and those of standard quantum mechanics widens with the complexity of the involved systems. Addressing the way such a gap scales with the mass or size of the system being investigated paves the way to testing the validity of the collapse theory and identifying the values of the parameters that characterize it. Here, we review the recently proposed non-interferometric approach to the testing of collapse models, focusing on the opto-mechanical platform. | quantum physics |
The holographic duals of Entanglement of Purification through the Entanglement Wedge Cross Section have been a well-discussed topic in the literature recently. More general entanglement measures involving multipartite information and their holographic duals have also been proposed. On the other hand, the recent intriguing program reproducing the Page curve of black hole entropy using the notion of islands has also attracted considerable attention. A toy model involving multiboundary wormholes in AdS$_{3}$ was able to capture many interesting facts about such calculations. In such a toy model, the notion of islands was intuitively connected to quantum error correction. We try to bridge the ideas of the two programs, especially in AdS$_{3}$/CFT$_{2}$, and give a description of the islands in terms of multipartite entanglement of purification. This clarifies a few simplified assumptions made while describing the toy model and also enables us to understand the familiar information paradox within the framework of the same model. | high energy physics theory |
Complex oxides exhibit a variety of unusual physical properties, which can be used for designing novel electronic devices. Here we fabricate and study experimentally nano-scale Superconductor/Ferromagnet/Superconductor junctions with the high-Tc cuprate superconductor YBa2Cu3O7 and the colossal magnetoresistive (CMR) manganite ferromagnets LaXMnO3 (X: Ca or Sr). We demonstrate that in a broad temperature range the magnetization of a manganite nanoparticle, forming the junction interface, switches abruptly in a mono-domain manner. The CMR phenomenon translates the magnetization loop into a hysteretic magnetoresistance loop. The latter facilitates a memory functionality of such a junction with just a single CMR ferromagnetic layer. The orientation of the magnetization (stored information) can be read out by simply measuring the junction resistance in an applied magnetic field. The CMR facilitates a large read-out signal in a small applied field. We argue that such a simple single layer CMR junction can operate as a memory cell both in the superconducting state at cryogenic temperatures and in the normal state up to room temperature. | condensed matter
We calculate the two-loop QCD corrections to $gg \to ZZ$ involving a closed top-quark loop. We present a new method to systematically construct linear combinations of Feynman integrals with a convergent parametric representation, where we also allow for irreducible numerators, higher powers of propagators, dimensionally shifted integrals, and subsector integrals. The amplitude is expressed in terms of such finite integrals by employing syzygies derived with linear algebra and finite field techniques. Evaluating the amplitude using numerical integration, we find agreement with previous expansions in asymptotic limits and provide ab initio results also for intermediate partonic energies and non-central scattering at higher energies. | high energy physics phenomenology |
We describe the implementation of a three-dimensional Paul ion trap fabricated from a stack of precision-machined silica glass wafers, which incorporates a pair of junctions for 2-dimensional ion transport. The trap has 142 dedicated electrodes which can be used to define multiple potential wells in which strings of ions can be held. By supplying time-varying potentials, this also allows for transport and re-configuration of ion strings. We describe the design, simulation, fabrication and packaging of the trap, including explorations of different parameter regimes and possible optimizations and design choices. We give results of initial testing of the trap, including measurements of heating rates and junction transport. | quantum physics |
We explore the physics potential of using precision timing information at the LHC in searches for long-lived particles (LLPs). In comparison with the light Standard Model particles, the decay products of massive LLPs arrive at detectors with time delays around the nanosecond scale. We propose new strategies to take advantage of this time delay feature by using initial state radiation to timestamp the collision event and require at least one LLP to decay within the detector. This search strategy is effective for a broad range of models. In addition to outlining this general approach, we demonstrate its effectiveness with the projected reach for two benchmark scenarios: Higgs decaying into a pair of LLPs, and pair production of long-lived neutralinos in the gauge mediated supersymmetry breaking models. Our strategy increases the sensitivity to the lifetime of the LLP by two orders of magnitude or more and particularly exhibits a better behavior with a linear dependence on lifetime in the large lifetime region compared to traditional LLP searches. The timing information significantly reduces the Standard Model background and provides a powerful new dimension for LLP searches. | high energy physics phenomenology |
Fast data acquisition in Magnetic Resonance Imaging (MRI) is vastly in demand and scan time directly depends on the number of acquired k-space samples. Conventional MRI reconstruction methods for fast MRI acquisition mostly relied on different regularizers which represent analytical models of sparsity. However, recent data-driven methods based on deep learning have resulted in promising improvements in image reconstruction algorithms. In this paper, we propose a deep plug-and-play prior framework for parallel MRI reconstruction problems which utilizes a deep neural network (DNN) as an advanced denoiser within an iterative method. This, in turn, enables rapid acquisition of MR images with improved image quality. The proposed method was compared with the reconstructions using the clinical gold standard GRAPPA method. Our results with undersampled data demonstrate that our method can deliver considerably higher quality images at high acceleration factors in comparison to the clinical gold standard method for MRI reconstruction. Our proposed reconstruction enables an increase in acceleration factor, and a reduction in acquisition time, while maintaining high image quality. | electrical engineering and systems science
This work describes an outlier detection procedure (named "OutlierTree") loosely based on the GritBot software developed by RuleQuest research, which works by evaluating and following supervised decision tree splits on variables, in whose branches 1-d confidence intervals are constructed for the target variable and potential outliers flagged according to these confidence intervals. Under this logic, it's possible to produce human-readable explanations for why a given value of a variable in an observation can be considered as outlier, by considering the decision tree branch conditions along with general distribution statistics among the non-outlier observations that fell into the same branch, which can then be contrasted against the value which lies outside the CI. The supervised splits help to ensure that the generated conditions are not spurious, but rather related to the target variable and having logical breakpoints. | statistics |
The design of a complex instrument such as Einstein Telescope (ET) is based on a target sensitivity derived from an elaborate case for scientific exploration. At the same time it incorporates many trade-off decisions to maximise the scientific value by balancing the performance of the various subsystems against the cost of the installation and operation. In this paper we discuss the impact of a long signal recycling cavity (SRC) on the quantum noise performance. We show the reduction in sensitivity due to a long SRC for an ET high-frequency interferometer, provide details on possible compensations schemes and suggest a reduction of the SRC length. We also recall details of the trade-off between the length and optical losses for filter cavities, and show the strict requirements for an ET low-frequency interferometer. Finally, we present an alternative filter cavity design for an ET low-frequency interferometer making use of a coupled cavity, and discuss the advantages of the design in this context. | astrophysics |
In wall-modeled large-eddy simulations (WMLES), the near-wall model plays a significant role in predicting the skin friction, although the majority of the boundary layer is resolved by the outer large-eddy simulation (LES) solver. In this work, we aim at developing a new ordinary differential equation (ODE)-based wall model, which is as simple as the classical equilibrium model yet capable of capturing non-equilibrium effects and low Reynolds number effects. The proposed model reformulates the classical equilibrium model by introducing a new non-dimensional mixing-length function. The new mixing-length function is parameterized in terms of the boundary layer shape factor instead of the commonly used pressure-gradient parameters. As a result, the newly introduced mixing-length function exhibits great universality within the viscous sublayer, the buffer layer, and the log region (i.e., $0 < y < 0.1\delta$, where the wall model is typically deployed in a WMLES setup). The performance of the new model is validated by predicting a wide range of canonical flows with the friction Reynolds number between 200 and 5200, and the Clauser pressure-gradient parameter between -0.3 and 4. Compared to the classical equilibrium wall model, remarkable error reduction in terms of the skin friction prediction is obtained by the new model. Moreover, since the new model is ODE-based, it is straightforward to be deployed for predicting flows with complex geometries and therefore promising for a wide range of applications. | physics |
Conditional kernel mean embeddings form an attractive nonparametric framework for representing conditional means of functions, describing the observation processes for many complex models. However, the recovery of the original underlying function of interest whose conditional mean was observed is a challenging inference task. We formalize deconditional kernel mean embeddings as a solution to this inverse problem, and show that it can be naturally viewed as a nonparametric Bayes' rule. Critically, we introduce the notion of task transformed Gaussian processes and establish deconditional kernel means as their posterior predictive mean. This connection provides Bayesian interpretations and uncertainty estimates for deconditional kernel mean embeddings, explains their regularization hyperparameters, and reveals a marginal likelihood for kernel hyperparameter learning. These revelations further enable practical applications such as likelihood-free inference and learning sparse representations for big data. | statistics |
Battery-based energy storage has emerged as an enabling technology for a variety of grid energy optimizations, such as peak shaving and cost arbitrage. A key component of battery-driven peak shaving optimizations is peak forecasting, which predicts the hours of the day that see the greatest demand. While there has been significant prior work on load forecasting, we argue that the problem of predicting periods where the demand peaks for individual consumers or micro-grids is more challenging than forecasting load at a grid scale. We propose a new model for peak forecasting, based on deep learning, that predicts the k hours of each day with the highest and lowest demand. We evaluate our approach using a two year trace from a real micro-grid of 156 buildings and show that it outperforms the state of the art load forecasting techniques adapted for peak predictions by 11-32%. When used for battery-based peak shaving, our model yields annual savings of $496,320 for a 4 MWhr battery for this micro-grid. | electrical engineering and systems science |
S-nitrosylation, the covalent addition of NO to the thiol side chain of cysteine, is an important post-translational modification that can alter the function of various proteins. The structural dynamics and vibrational spectroscopy of S-nitrosylation in the condensed phase are investigated for the methyl-capped cysteine model system and for myoglobin. Using conventional point charge and physically more realistic multipolar force fields for the -SNO group, it is found that the SN- and NO-stretch and the SNO-bend vibrations can be located and distinguished from the other protein modes for simulations of MbSNO at 50 K. The finding of stable cis- and trans-MbSNO is consistent with experiments on other proteins, as is the observation of buried -SNO. For MbSNO the observed relocation of the EF loop in the simulations by $\sim 3$ \AA\/ is consistent with the available X-ray structure and the conformations adopted by the -SNO label are in good overall agreement with the X-ray structure. Despite the larger size of the -SNO group, MbSNO is found to recruit more water molecules within 10 \AA\/ of the modification site than WT Mb due to the stronger electrostatics. Similarly, the hydration of the A- and H-helices differs by up to 30 \% between WT Mb and MbSNO. This suggests that local hydration can also be significantly modulated through nitrosylation. | physics
The elastic properties of the Zr$_{50}$Cu$_{40}$Ag$_{10}$ metallic alloy, such as the bulk modulus $B$, the shear modulus $G$, the Young's modulus $E$ and the Poisson's ratio $\sigma$, are investigated by molecular dynamics simulation in the temperature range $T=250-2000$ K and at an external pressure of $p = 1.0$ bar. It is shown that the liquid-glass transition is accompanied by a considerable increase in the shear modulus $G$ and the Young's modulus $E$ (by more than $50\%$). The temperature dependence of the Poisson's ratio exhibits a sharp fall from typical values for metals of approximately $0.32-0.33$ to low values (close to zero), which are characteristic for brittle bulk metallic glasses. Non-monotonic temperature dependence of the longitudinal and transverse sound velocity near the liquid-glass transition is also observed. The glass forming ability of the alloy is evaluated in terms of the fragility index $m$. Its value is found to be $m\approx64$ for the Zr$_{50}$Cu$_{40}$Ag$_{10}$ metallic glass, which is in good agreement with the experimental data for the Zr-Cu-based metallic glasses. | condensed matter
In this work, we present a method to locate the dominant zero of the Energy Probability Distribution (EPD) Zeros method applied to the study of phase transitions. The technique strongly reduces computer processing time and also increases the accuracy of the results. As an example, we apply it to the 2D Ising model, comparing with both the exact Onsager's results and the previous EPD zeros method results. We show that, for large lattices, the processing time is drastically reduced when compared to other EPD zeros search procedures, whereas for small lattices the gain in accuracy allows very accurate predictions in the thermodynamic limit. | condensed matter |
This study experimentally determines the dielectric properties of a powdery titanium (II) oxide (TiO) material within the microwave range (0.1-13.5 GHz). The properties were determined with a coaxial airline method applied to a TiO/paraffin mixture at several loading fractions. Permittivities of 60 for volume fractions below 30% and of 100 for volume fractions above 30% were measured. | physics
We present a model-independent measurement of spatial curvature $\Omega_{k}$ in the Friedmann-Lema\^itre-Robertson-Walker (FLRW) universe, based on observations of the Hubble parameter $H(z)$ using cosmic chronometers, and a Gaussian Process (GP) reconstruction of the HII galaxy Hubble diagram. We show that the imposition of spatial flatness (i.e., $\Omega_k=0$) easily distinguishes between the Hubble constant measured with {\it Planck} and that based on the local distance ladder. We find an optimized curvature parameter $\Omega_{k} = -0.120^{+0.168}_{-0.147}$ when using the former (i.e., $H_0=67.66\pm0.42 \, \mathrm{km}\,\mathrm{s}^{-1} \,\mathrm{Mpc}^{-1}$), and $\Omega_{k} = -0.298^{+0.122}_{-0.088}$ for the latter ($H_0=73.24\pm 1.74 \,\mathrm{km}\,\mathrm{s}^{-1} \,\mathrm{Mpc}^{-1}$). The quoted uncertainties are extracted by Monte Carlo sampling, taking into consideration the covariances between the function and its derivative reconstructed by GP. These data therefore reveal that the condition of spatial flatness favours the {\it Planck} measurement, while ruling out the locally inferred Hubble constant as a true measure of the large-scale cosmic expansion rate at a confidence level of $\sim 3\sigma$. | astrophysics |
As the first diagnostic imaging modality of avascular necrosis of the femoral head (AVNFH), accurately staging AVNFH from a plain radiograph is critical yet challenging for orthopedists. Thus, we propose a deep learning-based AVNFH diagnosis system (AVN-net). The proposed AVN-net reads plain radiographs of the pelvis, conducts diagnosis, and visualizes results automatically. Deep convolutional neural networks are trained to provide an end-to-end diagnosis solution, covering tasks of femoral head detection, exam-view identification, side classification, AVNFH diagnosis, and key clinical notes generation. AVN-net is able to obtain state-of-the-art testing AUC of 0.97 (95% CI: 0.97-0.98) in AVNFH detection and significantly greater F1 scores than less-to-moderately experienced orthopedists in all diagnostic tests (p<0.01). Furthermore, two real-world pilot studies were conducted for diagnosis support and education assistance, respectively, to assess the utility of AVN-net. The experimental results are promising. With the AVN-net diagnosis as a reference, the diagnostic accuracy and consistency of all orthopedists considerably improved while requiring only 1/4 of the time. Students self-studying the AVNFH diagnosis using AVN-net can learn better and faster than the control group. To the best of our knowledge, this study is the first research on the prospective use of a deep learning-based diagnosis system for AVNFH by conducting two pilot studies representing real-world application scenarios. We have demonstrated that the proposed AVN-net achieves expert-level AVNFH diagnosis performance, provides efficient support in clinical decision-making, and effectively passes clinical experience to students. | electrical engineering and systems science |
We study several deformations of the Skyrme model in three dimensions with self-dual sectors of arbitrary baryonic charge. We show that, for a family of background metrics as well as for a family of field dependent couplings, the model has one BPS sector, which may have any topological charge. We also study the gravitating case, where there are infinite BPS sectors, provided that a cosmological constant is added to the model. | high energy physics theory |
We present a new approach combining top-down fabrication and bottom-up overgrowth to create diamond photonic nanostructures in the form of single-crystalline diamond nanopyramids. Our approach relies on diamond nanopillars that are overgrown with single-crystalline diamond to form pyramidal structures oriented along crystal facets. To characterize the photonic properties of the pyramids, color centers are created in a controlled way using ion implantation and annealing. We find very high collection efficiency from color centers close to the pyramid apex. We further show excellent smoothness and sharpness of our diamond pyramids with measured tip radii on the order of 10 nm. Our results offer interesting prospects for nanoscale quantum sensing using diamond color centers, where our diamond pyramids could be used as scanning probes for nanoscale imaging. There, our approach would offer significant advantages compared to the cone-shaped scanning probes which define the current state of the art. | condensed matter
Purpose: To enable rigid-body motion tolerant parallel volumetric magnetic resonance imaging by retrospective head motion correction on a variety of spatio-temporal scales and imaging sequences. Theory and methods: Tolerance against rigid-body motion is based on distributed and incoherent sampling orders for boosting a joint retrospective motion estimation and reconstruction framework. Motion resilience stems from the encoding redundancy in the data, as generally provided by the coil array. Hence, it does not require external sensors, navigators or training data, so the methodology is readily applicable to sequences using 3D encodings. Results: Simulations are performed showing full inter-shot corrections for usual levels of in-vivo motion, large numbers of shots, standard levels of noise and moderate acceleration factors. Feasibility of inter- and intra-shot corrections is shown under controlled motion in-vivo. Practical efficacy is illustrated by high quality results in the most corrupted of 208 volumes from a series of 26 clinical pediatric examinations collected using standard protocols. Conclusion: The proposed framework addresses the rigid motion problem in volumetric anatomical brain scans with sufficient encoding redundancy, which has enabled reliable pediatric examinations without sedation. | physics
We examine the role of trustworthiness and trust in statistical inference, arguing that it is the extent of trustworthiness in inferential statistical tools which enables trust in the conclusions. Certain tools, such as the p-value and significance test, have recently come under renewed criticism, with some arguing that they damage trust in statistics. We argue the contrary, beginning from the position that the central role of these methods is to form the basis for trusted conclusions in the face of uncertainty in the data, and noting that it is the misuse and misunderstanding of these tools which damages trustworthiness and hence trust. We go on to argue that recent calls to ban these tools would tackle the symptom, not the cause, and themselves risk damaging the capability of science to advance, and feeding into public suspicion of the discipline of statistics. The consequence could be aggravated mistrust of our discipline and of science more generally. In short, the very proposals could work in quite the contrary direction from that intended. We make some alternative proposals for tackling the misuse and misunderstanding of these methods, and for how trust in our discipline might be promoted. | statistics |
Quantification of metabolites from magnetic resonance spectra (MRS) has many applications in medicine and psychology, but remains a challenging task despite considerable research efforts. For example, the neurotransmitter $\gamma$-aminobutyric acid (GABA), present in very low concentration in vivo, regulates inhibitory neurotransmission in the brain and is involved in several processes outside the brain. Reliable quantification is required to determine its role in various physiological and pathological conditions. We present a novel approach to quantification of metabolites from MRS with convolutional neural networks --- MRSNet. MRSNet is trained to perform the multi-class regression problem of identifying relative metabolite concentrations from given input spectra, focusing specifically on the quantification of GABA, which is particularly difficult to resolve. Typically it can only be detected at all using special editing acquisition sequences such as MEGA-PRESS. A large range of network structures, data representations and automatic processing methods are investigated. Results are benchmarked using experimental datasets from test objects of known composition and compared to state-of-the-art quantification methods: LCModel, jMRUI (AQUES, QUEST), TARQUIN, VeSPA and Gannet. The results show that the overall accuracy and precision of metabolite quantification is improved using convolutional neural networks. | electrical engineering and systems science |
Should quantum computers become available, they will reduce the effective key length of basic secret-key primitives, such as blockciphers. To address this we will either need to use blockciphers which inherently have longer keys or use key-length extension techniques which employ a blockcipher to construct a more secure blockcipher that uses longer keys. We consider the latter approach by analyzing the security of the FX and double encryption constructions. Classically, FX is known to be secure, while double encryption is no more secure than single encryption due to a meet-in-the-middle attack. We provide positive results, with concrete and tight bounds, for both of these constructions against quantum attackers in ideal models. For FX, we consider security in the "Q1 model," a natural model in which the attacker has quantum access to the ideal primitive, but only classic access to FX. We provide two partial results in this model. The first establishes the security of FX against non-adaptive attackers. The second establishes fully adaptive security when considering a variant of FX using a random oracle in place of an ideal cipher. This result relies on the techniques of Zhandry (CRYPTO '19) for lazily sampling a quantum random oracle and are thus hard to extend to the true FX construction because it is unknown if a quantum random permutation can be lazily sampled. To the best of our knowledge, this result also is the first to introduce techniques to handle Q1 security in ideal models without analyzing the classical and quantum oracles separately, which may be of broader interest. For double encryption we apply a technique of Tessaro and Thiruvengadam (TCC '18) to establish that security reduces to the difficulty of solving the list disjointness problem, which we are able to reduce through a chain of results to the known quantum difficulty of the element distinctness problem. | quantum physics |
Doppler radar systems enable unobtrusive and privacy-preserving long-term monitoring of human motions indoors. In particular, a person's gait can provide important information about their state of health. Utilizing micro-Doppler signatures, we show that radar is capable of detecting small differences between the step motions of the two legs, which results in asymmetric gait. Image-based and physical features are extracted from the radar return signals of several individuals, including four persons with different diagnosed gait disorders. It is shown that gait asymmetry is correctly detected with high probability, irrespective of the underlying pathology, for at least one motion direction. | electrical engineering and systems science |
We revisit the basic problem of quantum state certification: given copies of unknown mixed state $\rho\in\mathbb{C}^{d\times d}$ and the description of a mixed state $\sigma$, decide whether $\sigma = \rho$ or $\|\sigma - \rho\|_{\mathsf{tr}} \ge \epsilon$. When $\sigma$ is maximally mixed, this is mixedness testing, and it is known that $\Omega(d^{\Theta(1)}/\epsilon^2)$ copies are necessary, where the exact exponent depends on the type of measurements the learner can make [OW15, BCL20], and in many of these settings there is a matching upper bound [OW15, BOW19, BCL20]. Can one avoid this $d^{\Theta(1)}$ dependence for certain kinds of mixed states $\sigma$, e.g. ones which are approximately low rank? More ambitiously, does there exist a simple functional $f:\mathbb{C}^{d\times d}\to\mathbb{R}_{\ge 0}$ for which one can show that $\Theta(f(\sigma)/\epsilon^2)$ copies are necessary and sufficient for state certification with respect to any $\sigma$? Such instance-optimal bounds are known in the context of classical distribution testing, e.g. [VV17]. Here we give the first bounds of this nature for the quantum setting, showing (up to log factors) that the copy complexity for state certification using nonadaptive incoherent measurements is essentially given by the copy complexity for mixedness testing times the fidelity between $\sigma$ and the maximally mixed state. Surprisingly, our bound differs substantially from instance optimal bounds for the classical problem, demonstrating a qualitative difference between the two settings. | quantum physics |
Let $\Gamma$ be an Abelian group and let $G$ be a simple graph. We say that $G$ is \emph{$\Gamma$-colorable} if for some fixed orientation of $G$ and every edge labeling $\ell:E(G)\rightarrow \Gamma$, there exists a vertex coloring $c$ by the elements of $\Gamma$ such that $c(y)-c(x)\neq \ell(e)$, for every edge $e=xy$ (oriented from $x$ to $y$). Langhede and Thomassen \cite{LanghedeThomassen2} recently proved that every planar graph on $n$ vertices has at least $2^{n/9}$ different $\mathbb{Z}_5$-colorings. By using a different approach based on graph polynomials, we extend this result to $K_5$-minor-free graphs in a more general setting of \emph{field coloring}. More specifically, we prove that every such graph on $n$ vertices is $\mathbb{F}$-$5$-choosable, whenever $\mathbb{F}$ is an arbitrary field with at least $5$ elements. Moreover, the number of colorings (for every list assignment) is at least $5^{n/4}$. | mathematics
In order to identify the mechanism responsible for the formation of charge-density waves (CDW) in cuprate superconductors, it is important to understand which aspects of the CDW's microscopic structure are generic and which are material-dependent. Here, we show that, at the local scale probed by NMR, long-range CDW order in YBa2Cu3Oy is unidirectional with a commensurate period of three unit cells (lambda = 3b), implying that the incommensurability found in X-ray scattering is ensured by phase slips (discommensurations). Furthermore, NMR spectra reveal a predominant oxygen character of the CDW with an out-of-phase relationship between certain lattice sites but no specific signature of a secondary CDW with lambda = 6b associated with a putative pair-density wave. These results shed light on universal aspects of the cuprate CDW. In particular, its spatial profile appears to generically result from the interplay between an incommensurate tendency at long length scales, possibly related to properties of the Fermi surface, and local commensuration effects, due to electron-electron interactions or lock-in to the lattice. | condensed matter |
In computer vision, superpixels have been widely used as an effective way to reduce the number of image primitives for subsequent processing. But only a few attempts have been made to incorporate them into deep neural networks. One main reason is that the standard convolution operation is defined on regular grids and becomes inefficient when applied to superpixels. Inspired by an initialization strategy commonly adopted by traditional superpixel algorithms, we present a novel method that employs a simple fully convolutional network to predict superpixels on a regular image grid. Experimental results on benchmark datasets show that our method achieves state-of-the-art superpixel segmentation performance while running at about 50fps. Based on the predicted superpixels, we further develop a downsampling/upsampling scheme for deep networks with the goal of generating high-resolution outputs for dense prediction tasks. Specifically, we modify a popular network architecture for stereo matching to simultaneously predict superpixels and disparities. We show that improved disparity estimation accuracy can be obtained on public datasets. | computer science |
We present a comprehensive study of the statistical features of a three-dimensional time-reversible Navier-Stokes (RNS) system, wherein the standard viscosity $\nu$ is replaced by a fluctuating thermostat that dynamically compensates for fluctuations in the total energy. We analyze the statistical features of the RNS steady states in terms of a non-negative dimensionless control parameter $\mathcal{R}_r$, which quantifies the balance between the fluctuations of kinetic energy at the forcing length scale $\ell_{\rm f}$ and the total energy $E_0$. We find that the system exhibits a transition from a high-enstrophy phase at small $\mathcal{R}_r$, where truncation effects tend to produce partially thermalized states, to a hydrodynamical phase with low enstrophy at large $\mathcal{R}_r$. Using insights from a diffusion model of turbulence (Leith model), we argue that the transition is in fact akin to a continuous phase transition, where $\mathcal{R}_r$ indeed behaves as a thermodynamic control parameter, e.g., a temperature, the enstrophy plays the role of an order parameter, while the symmetry breaking parameter $h$ is (one over) the truncation scale $k_{\rm max}$. We find that the signatures of the phase transition close to the critical point $\mathcal{R}_r^\star$ can essentially be deduced from a heuristic mean-field Landau free energy. This point of view allows us to reinterpret the relevant asymptotics in which the dynamical ensemble equivalence conjectured by Gallavotti, Phys.Lett.A, 223, 1996 could hold true. Our numerics indicate that the low-order statistics of the 3D RNS are indeed qualitatively similar to those observed in direct numerical simulations of the standard Navier-Stokes (NS) equations with viscosity chosen so as to match the average value of the reversible viscosity. | physics |
Adopting thin Si wafers for PV reduces capital expenditure (capex) and manufacturing cost, and accelerates the growth of PV manufacturing. There are two key questions about thin Si today: (a) how much can we still benefit economically from thinning wafers? (b) what are the technological challenges to transition to thin wafers? In this work, we re-evaluate the benefits and challenges of thin Si for current and future PV modules using a comprehensive techno-economic framework that couples device simulation, bottom-up cost modeling, and a cash-flow growth model. When adopting an advanced technology concept that features sufficiently good surface passivation, similarly high efficiencies are achievable for 50-um wafers as for 160-um ones. We then quantify the economic benefits for thin Si wafers in terms of poly-Si-to-module manufacturing capex, module cost, and levelized cost of electricity (LCOE) for utility PV systems. Particularly, LCOE favors thinner wafers for all investigated device architectures, and can potentially be reduced by more than 5% from the value of 160-um wafers. With further improvements in module efficiency, an advanced thin-wafer device concept with 50-um wafers could reduce manufacturing capex by 48%, module cost by 28%, and LCOE by 24%. Furthermore, we apply a sustainable growth model to investigate PV deployment scenarios in 2030. It is found that the state-of-the-art industry concept could not achieve the climate targets even with very aggressive financial scenarios, therefore the capex reduction benefit of thin wafers is needed to facilitate more rapid PV growth. Lastly, we discuss the remaining technological challenges and areas for innovation to enable high-yield manufacturing of high-efficiency PV modules with thin Si wafers. | physics |
In many particle physics models, domain walls can form during the phase transition process after discrete symmetry breaking. We study the scenario within a complex singlet extended Standard Model framework, where a strongly first order phase transition can occur depending on the hidden scalar mass and the mixing between the extra heavy Higgs and the SM Higgs mass. The gravitational wave spectrum is of a typical two-peak shape: the amplitude and the peak from the strongly first order phase transition can be probed by the future space-based interferometers, while the one located around the peak from the domain wall decay is far beyond the capability of the current PTA and the future SKA. | high energy physics phenomenology
We present Point-Voxel CNN (PVCNN) for efficient, fast 3D deep learning. Previous work processes 3D data using either voxel-based or point-based NN models. However, both approaches are computationally inefficient. The computation cost and memory footprints of the voxel-based models grow cubically with the input resolution, making it memory-prohibitive to scale up the resolution. As for point-based networks, up to 80% of the time is wasted on structuring the sparse data which have rather poor memory locality, not on the actual feature extraction. In this paper, we propose PVCNN that represents the 3D input data in points to reduce the memory consumption, while performing the convolutions in voxels to reduce the irregular, sparse data access and improve the locality. Our PVCNN model is both memory and computation efficient. Evaluated on semantic and part segmentation datasets, it achieves much higher accuracy than the voxel-based baseline with 10x GPU memory reduction; it also outperforms the state-of-the-art point-based models with 7x measured speedup on average. Remarkably, the narrower version of PVCNN achieves 2x speedup over PointNet (an extremely efficient model) on part and scene segmentation benchmarks with much higher accuracy. We validate the general effectiveness of PVCNN on 3D object detection: by replacing the primitives in Frustrum PointNet with PVConv, it outperforms Frustrum PointNet++ by 2.4% mAP on average with 1.5x measured speedup and GPU memory reduction. | computer science |
The Leidenfrost phenomenon entails the levitation of a liquid droplet over a superheated surface, cushioned by its vapor layer. For water, superhydrophobic surfaces are believed to suppress the Leidenfrost point ($\it{T}$$_{\rm L}$)-the temperature at which this phenomenon occurs. The vapor film obstructs boiling heat transfer in heat exchangers, thereby compromising energy efficiency and safety. Thus, it is desirable to realize superhydrophobicity without suppressing $\it{T}$$_{\rm L}$. Here we demonstrate that the $\it{T}$$_{\rm L}$ of water on microtextured superhydrophobic surfaces comprising doubly reentrant pillars (DRPs) can exceed those on hydrophilic and even superhydrophilic surfaces. We disentangle the contributions of microtexture, heat transfer, and surface chemistry on $\it{T}$$_{\rm L}$ and reveal how superhydrophobicity can be realized without suppressing $\it{T}$$_{\rm L}$. For instance, silica surfaces with DRPs facilitate ~300% greater heat transfer to water droplets at 200$^{\circ}$C in comparison with silica surfaces coated with perfluorinated-nanoparticles. Thus, superhydrophobic surfaces could be harnessed for energy efficient thermal machinery. | physics |
Compressed sensing (CS) is a signal processing technique that enables the efficient recovery of a sparse high-dimensional signal from low-dimensional measurements. In the multiple measurement vector (MMV) framework, a set of signals with the same support must be recovered from their corresponding measurements. Here, we present the first exploration of the MMV problem where signals are independently drawn from a sparse, multivariate Poisson distribution. We are primarily motivated by a suite of biosensing applications of microfluidics where analytes (such as whole cells or biomarkers) are captured in small volume partitions according to a Poisson distribution. We recover the sparse parameter vector of Poisson rates through maximum likelihood estimation with our novel Sparse Poisson Recovery (SPoRe) algorithm. SPoRe uses batch stochastic gradient ascent enabled by Monte Carlo approximations of otherwise intractable gradients. By uniquely leveraging the Poisson structure, SPoRe substantially outperforms a comprehensive set of existing and custom baseline CS algorithms. Notably, SPoRe can exhibit high performance even with one-dimensional measurements and high noise levels. This resource efficiency is not only unprecedented in the field of CS but is also particularly potent for applications in microfluidics in which the number of resolvable measurements per partition is often severely limited. We prove the identifiability property of the Poisson model under such lax conditions, analytically develop insights into system performance, and confirm these insights in simulated experiments. Our findings encourage a new approach to biosensing and are generalizable to other applications featuring spatial and temporal Poisson signals. | electrical engineering and systems science |
We present thermodynamic and neutron scattering measurements on the quantum spin ice candidate Nd$_2$Zr$_2$O$_7$. The parameterization of the anisotropic exchange Hamiltonian is refined based on high-energy-resolution inelastic neutron scattering data together with thermodynamic data using linear spin wave theory and numerical linked cluster expansion. Magnetic phase diagrams are calculated using classical Monte Carlo simulations with fields along \mbox{[100]}, \mbox{[110]} and \mbox{[111]} crystallographic directions which agree qualitatively with the experiment. Large hysteresis and irreversibility for \mbox{[111]} is reproduced and the microscopic mechanism is revealed by mean field calculations to be the existence of metastable states and domain inversion. Our results shed light on the explanations of the recently observed dynamical kagome ice in Nd$_2$Zr$_2$O$_7$ in \mbox{[111]} fields. | condensed matter |
Metric type II solar radio bursts and solar energetic particles (SEPs) are both associated with shock fronts driven by coronal mass ejections (CMEs) in the solar corona. Recent studies of ground level enhancements (GLEs), regular large solar energetic particle (SEP) events and filament eruption (FE) associated large SEP events have shown that SEP events are organized by spectral index of proton fluence spectra and by the average starting frequencies of the associated type II radio bursts. Both these results indicate a hierarchical relationship between CME kinematics and SEP event properties. In this study, we expand the investigations to fluence spectra and the longitudinal extent of metric type II associated SEP events including low-intensity SEP events. We utilize SEP measurements of particle instruments on the Solar and Heliospheric Observatory (SOHO) and Solar Terrestrial Relations Observatory (STEREO) spacecraft together with radio bursts observations by ground-based radio observatories during solar cycle 24. Our results show that low-intensity SEP events follow the hierarchy of spectral index or the hierarchy of the starting frequency of type II radio bursts. We also find indications of a trend between the onset frequency of metric type II bursts and the estimated longitudinal extent of the SEP events although the scatter of data points is quite large. These two results strongly support the idea of SEP acceleration by shocks. Stronger shocks develop closer to the Sun. | physics |
In many data scientific problems, we are interested not only in modeling the behaviour of a system that is passively observed, but also in inferring how the system reacts to changes in the data generating mechanism. Given knowledge of the underlying causal structure, such behaviour can be estimated from purely observational data. To do so, one typically assumes that the causal structure of the data generating mechanism can be fully specified. Furthermore, many methods assume that data are generated as independent replications from that mechanism. Both of these assumptions are usually hard to justify in practice: datasets often have complex dependence structures, as is the case for spatio-temporal data, and the full causal structure between all involved variables is hardly known. Here, we present causal models that are adapted to the characteristics of spatio-temporal data, and which allow us to define and quantify causal effects despite incomplete causal background knowledge. We further introduce a simple approach for estimating causal effects, and a non-parametric hypothesis test for these effects being zero. The proposed methods do not rely on any distributional assumptions on the data, and allow for arbitrarily many latent confounders, given that these confounders do not vary across time (or, alternatively, they do not vary across space). Our theoretical findings are supported by simulations and code is available online. This work has been motivated by the following real-world question: how has the Colombian conflict influenced tropical forest loss? There is evidence for both enhancing and reducing impacts, but most literature analyzing this problem is not using formal causal methodology. When applying our method to data from 2000 to 2018, we find a reducing but insignificant causal effect of conflict on forest loss. Regionally, both enhancing and reducing effects can be identified. | statistics |
We study the effects of the flux configurations on the emergent Majorana fermions in the $S=1/2$ Kitaev model on a honeycomb lattice, where quantum spins are fractionalized into itinerant Majorana fermions and localized fluxes. A quantum spin liquid appears as the ground state of the Kitaev model in the flux-free sector, which has intensively been investigated so far. In this flux sector, the Majorana fermion system has linear dispersions and shows power law behavior in the Majorana correlations. On the other hand, periodically-arranged flux configurations yield low-energy excitations in the Majorana fermion system, which are distinctly different from those in the flux-free state. We find that one of the periodically arranged flux states results in the gapped Majorana dispersion and the exponential decay in the Majorana correlations. The Kitaev system with another flux configuration exhibits a semi-Dirac like dispersion, leading to the power law decay with a smaller power than that in the flux-free sector along symmetry axes. We also examine the effect of the randomness in the flux configurations and clarify that the Majorana density of states is filled by increasing the flux density, and power-law decay in the Majorana correlations remains. The present results could be important to control the motion of Majorana fermions, which carries the spin excitations, in the Kitaev candidate materials. | condensed matter |
We illustrate how Bayesian reweighting can be used to incorporate the constraints provided by new measurements into a global Monte Carlo analysis of the Standard Model Effective Field Theory (SMEFT). This method, extensively applied to study the impact of new data on the parton distribution functions of the proton, is here validated by means of our recent SMEFiT analysis of the top quark sector. We show how, under well-defined conditions and for the SMEFT operators directly sensitive to the new data, the reweighting procedure is equivalent to a corresponding new fit. We quantify the amount of information added to the SMEFT parameter space by means of the Shannon entropy and of the Kolmogorov-Smirnov statistic. We investigate the dependence of our results upon the choice of either the NNPDF or the Giele-Keller expressions of the weights. | high energy physics phenomenology |
Distant metastases (DM) refer to the dissemination of tumors, usually beyond the organ where the tumor originated. They are the leading cause of death in patients with soft-tissue sarcomas (STSs). Positron emission tomography-computed tomography (PET-CT) is regarded as the imaging modality of choice for the management of STSs. It is difficult to determine from imaging studies which STS patients will develop metastases. 'Radiomics' refers to the extraction and analysis of quantitative features from medical images and it has been employed to help identify such tumors. The state-of-the-art in radiomics is based on convolutional neural networks (CNNs). Most CNNs are designed for single-modality imaging data (CT or PET alone) and do not exploit the information embedded in PET-CT, where there is a combination of an anatomical and functional imaging modality. Furthermore, most radiomic methods rely on manual input from imaging specialists for tumor delineation, definition and selection of radiomic features. This approach, however, may not be scalable to tumors with complex boundaries and where there are multiple other sites of disease. We outline a new 3D CNN to help predict DM in STS patients from PET-CT data. The 3D CNN uses a constrained feature learning module and a hierarchical multi-modality feature learning module that leverages the complementary information from the modalities to focus on semantically important regions. Our results on a public PET-CT dataset of STS patients show that multi-modal information improves the ability to identify those patients who develop DM. Further, our method outperformed all other related state-of-the-art methods. | electrical engineering and systems science
Rational Krylov subspaces have become a reference tool in dimension reduction procedures for several application problems. When data matrices are symmetric, a short-term recurrence can be used to generate an associated orthonormal basis. In the past this procedure was abandoned because it requires twice as many linear system solves per iteration as the classical long-term method. We propose an implementation that allows one to obtain key rational subspace matrices without explicitly storing the whole orthonormal basis, with a moderate computational overhead associated with sparse system solves. Several applications are discussed to illustrate the advantages of the proposed procedure. | mathematics
In online learning from non-stationary data streams, it is necessary both to learn robustly in the presence of outliers and to adapt quickly to changes of the underlying data generating mechanism. In this paper, we refer to the former property of online learning algorithms as robustness and the latter as adaptivity. There is an obvious tradeoff between them. It is a fundamental issue to quantify and evaluate the tradeoff because it provides important information on the data generating mechanism. However, no previous work has considered the tradeoff quantitatively. We propose a novel algorithm called the Stochastic approximation-based Robustness-Adaptivity algorithm (SRA) to evaluate the tradeoff. The key idea of SRA is to update parameters of the distribution or sufficient statistics with the biased stochastic approximation scheme, while dropping data points with large values of the stochastic update. We address the relation between two parameters: one is the step size of the stochastic approximation, and the other is the threshold parameter of the norm of the stochastic update. The former controls the adaptivity and the latter controls the robustness. We give a theoretical analysis of the non-asymptotic convergence of SRA in the presence of outliers, which depends on both the step size and the threshold parameter. Since SRA is formulated on the majorization-minimization principle, it is a general algorithm that includes many algorithms, such as the online EM algorithm and stochastic gradient descent. Empirical experiments on both synthetic and real datasets demonstrated that SRA was superior to previous methods. | statistics
Elliptic flow of hadrons observed in relativistic heavy-ion collision experiments at the Relativistic Heavy-Ion Collider (RHIC) and the Large Hadron Collider (LHC) provides an important signature of a possible de-confinement transition from the hadronic phase to the partonic phase. However, the hadronization processes of de-confined partons back into final hadrons are found to play a vital role in the observed hadronic flow. In the present work, we use the coalescence mechanism, also known as Recombination (ReCo), to combine quarks into hadrons. To get there, we have used the Boltzmann transport equation in the relaxation time approximation to transport the quarks into equilibration and finally to the freeze-out surface, before coalescence takes place. A Boltzmann-Gibbs Blast Wave (BGBW) function is taken as an equilibrium function to get the final distribution, and a power-like function is used to describe the initial distributions of partons produced in heavy-ion collisions. In the present work, we try to estimate the elliptic flow of identified hadrons such as $\pi$, $K$, $p$ etc., produced in Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV at the LHC for different centralities. The elliptic flow ($v_2$) of identified hadrons seems to be described quite well in the available $p_{\rm T}$ range. After the evolution of quarks up to the freeze-out time has been calculated using BTE-RTA, the approach used in this paper combines two or more quarks to explain the hadrons produced in the intermediate momentum region. The formalism is found to describe the elliptic flow of hadrons produced in Pb+Pb collisions to a large extent. | high energy physics phenomenology
Modern density functional approximations achieve moderate accuracy at low computational cost for many electronic structure calculations. Some background is given relating the gradient expansion of density functional theory to the WKB expansion in one dimension, and modern approaches to asymptotic expansions. A mathematical framework for analyzing asymptotic behavior for the sums of energies unites both corrections to the gradient expansion of DFT and hyperasymptotics of sums. Simple examples are given for the model problem of orbital-free DFT in one dimension. In some cases, errors can be made as small as 10$^{-32}$ Hartree suggesting that, if these new ingredients can be applied, they might produce approximate functionals that are much more accurate than those in current use. A variation of the Euler-Maclaurin formula generalizes previous results. | physics |
In this study, we experimentally investigated the time dependence of the statistical properties of two-dimensional drying crack patterns to determine the functional form of fragment size distribution. Experiments using a thin layer of a magnesium carbonate hydroxide paste revealed a "dynamical scaling" property in the time series of the fragment size distribution, which has been predicted by theoretical and numerical studies. Further analysis results based on Bayesian inference show the transition of the functional form of the fragment size distribution from a log-normal distribution to a generalized gamma distribution. The combination of a statistical model of the fragmentation process and the dynamics of stress concentration of a drying thin layer of viscoelastic material explains the origin of the transition. | condensed matter |
We present new transit observations of the hot Jupiter WASP-74 b ($T_\mathrm{eq} \sim$ 1860 K) using the high-resolution spectrograph HARPS-N and the multi-colour simultaneous imager MuSCAT2. We refine the orbital properties of the planet and its host star, and measure its obliquity for the first time. The measured sky-projected angle between the stellar spin-axis and the planet's orbital axis is compatible with an orbit well-aligned with the equator of the host star ($\lambda = 0.77\pm0.99 \mathrm{deg}$). We are not able to detect any absorption feature of H$\alpha$, or any other atomic spectral features, in its high-resolution transmission spectra due to low S/N at the line cores. Despite previous claims regarding the presence of strong optical absorbers such as TiO and VO gases in the atmosphere of WASP-74 b, the new ground-based photometry combined with a reanalysis of previously reported observations from the literature shows a slope in the low-resolution transmission spectrum steeper than expected from Rayleigh scattering alone. | astrophysics
Fusion energy is often regarded as a long-term solution to the world's energy needs. However, even after solving the critical research challenges, engineering and materials science will still impose significant constraints on the characteristics of a fusion power plant. Meanwhile, the global energy grid must transition to low-carbon sources by 2050 to prevent the worst effects of climate change. We review three factors affecting fusion's future trajectory: (1) the significant drop in the price of renewable energy, (2) the intermittency of renewable sources and implications for future energy grids, and (3) the recent proposition of intermediate-level nuclear waste as a product of fusion. Within the scenario assumed by our premises, we find that while there remains a clear motivation to develop fusion power plants, this motivation is likely weakened by the time they become available. We also conclude that most current fusion reactor designs do not take these factors into account and, to increase market penetration, fusion research should consider relaxed nuclear waste design criteria, raw material availability constraints and load-following designs with pulsed operation. | physics |
Source independent quantum networks are considered as a natural generalization of the Bell scenario, where we investigate the nonlocal properties of quantum states distributed and measured in a network. Considering the simplest network of entanglement swapping, Gisin et al. and Andreoli et al. recently and independently provided a systematic characterization of the set of quantum states leading to violation of the so-called 'bilocality' inequality. In this work, we consider the complexities in quantum networks with an arbitrary number of parties distributed in chain-shaped and star-shaped networks. We derive the maximal violation of the 'n-local' inequality that can be achieved by arbitrary two-qubit states for such chain and star-shaped networks. This further provides a deeper understanding of quantum correlations in complex structures. | quantum physics
In this work, we investigate the reaction of $\gamma\gamma \to D\bar{D}$, taking into account the s-wave $D\bar{D}$ final state interaction. By fitting to the $D\bar{D}$ invariant mass distributions measured by the Belle and BaBar Collaborations, we obtain a good reproduction of the data by means of a $D\bar{D}$ amplitude that produces a bound $D\bar{D}$ state with $I=0$ close to threshold. The error bands of the fits indicate, however, that more precise data on this reaction are needed to be more assertive about the position and width of such state. | high energy physics phenomenology |
We derive a general relation between the bosonic and fermionic entanglement in the ground states of supersymmetric quadratic Hamiltonians. For this, we construct canonical identifications between bosonic and fermionic subsystems. Our derivation relies on a unified framework to describe both, bosonic and fermionic Gaussian states in terms of so-called linear complex structures $J$. The resulting dualities apply to the full entanglement spectrum between the bosonic and the fermionic systems, such that the von Neumann entropy and arbitrary Renyi entropies can be related. We illustrate our findings in one and two-dimensional systems, including the paradigmatic Kitaev honeycomb model. While typically SUSY preserves features like area law scaling of the entanglement entropies on either side, we find a peculiar phenomenon, namely, an amplified scaling of the entanglement entropy ("super area law") in bosonic subsystems when the dual fermionic subsystems develop almost maximally entangled modes. | quantum physics |
In this paper, we consider the possibility that a new stage of matter, stemming from hidden/dark sectors beyond the Standard Model, to be formed in $pp$ collisions at the LHC, can significantly modify the correlations among final-state particles. In particular, two-particle azimuthal correlations are studied by means of a Fourier series sensitive to the near-side ridge effect while assuming that hidden/dark particles decay on top of the conventional parton shower. Then, new (fractional) harmonic terms should be included in the Fourier analysis of the azimuthal anisotropies, encoding the hypothetical new physics contribution enabling its detection in a complementary way to other signatures. | high energy physics phenomenology |