Intent classification and slot filling are two critical tasks for natural language understanding. Traditionally, the two tasks have been treated as proceeding independently. More recently, however, joint models for intent classification and slot filling have achieved state-of-the-art performance and have shown that a strong relationship exists between the two tasks. This article is a compilation of past work in natural language understanding, especially joint intent classification and slot filling. We observe three milestones in this research so far: intent detection to identify the speaker's intention, slot filling to label each word token in the speech/text, and finally, joint intent classification and slot filling. In this article, we describe trends, approaches, issues, datasets, and evaluation metrics in intent classification and slot filling. We also discuss representative performance values, describe shared tasks, and provide pointers to future work, as given in prior works. To interpret the state-of-the-art trends, we provide multiple tables that describe and summarise past research along different dimensions, including the types of features, base approaches, and dataset domains used.
This is the user manual for CosmoLattice, a modern package for lattice simulations of the dynamics of interacting scalar and gauge fields in an expanding universe. CosmoLattice incorporates a series of features that makes it very versatile and powerful: $i)$ it is written in C++ fully exploiting the object oriented programming paradigm, with a modular structure and a clear separation between the physics and the technical details, $ii)$ it is MPI-based and uses a discrete Fourier transform parallelized in multiple spatial dimensions, which makes it especially appropriate for probing scenarios with well-separated scales, running very high resolution simulations, or simply very long ones, $iii)$ it introduces its own symbolic language, defining field variables and operations over them, so that one can introduce differential equations and operators in a manner as close as possible to the continuum, $iv)$ it includes a library of numerical algorithms, ranging from $O(\delta t^2)$ to $O(\delta t^{10})$ methods, suitable for simulating global and gauge theories in an expanding grid, including the case of `self-consistent' expansion sourced by the fields themselves. Relevant observables are provided for each algorithm (e.g.~energy densities, field spectra, lattice snapshots); remarkably, all our algorithms for gauge theories respect the Gauss constraint to machine precision. In this manual we explain how to obtain and run CosmoLattice on a computer (be it your laptop, desktop, or a cluster). We introduce the general structure of the code and describe in detail the basic files that any user needs to handle. We explain how to implement any model characterized by a scalar potential and a set of scalar fields, either singlets or interacting with $U(1)$ and/or $SU(2)$ gauge fields. CosmoLattice is publicly available at www.cosmolattice.net.
A spin-1/2 Heisenberg model on the honeycomb lattice is investigated using triplon analysis and quantum Monte Carlo calculations. This model, inspired by Cu$_2$(pymca)$_3$(ClO$_4$), has three different antiferromagnetic exchange interactions ($J_A$, $J_B$, $J_C$) on three different sets of nearest-neighbour bonds which form a kagome superlattice. While the model is bipartite and unfrustrated, its quantum phase diagram is found to be dominated by a quantum paramagnetic phase that is best described as a spin-gapped hexagonal-singlet state. The N\'eel antiferromagnetic order survives only in a small region around $J_A=J_B=J_C$. The magnetization produced by an external magnetic field is found to exhibit plateaus at 1/3 and 2/3 of the saturation value, at 1/3 alone, or no plateaus at all. Notably, the plateaus exist only inside a bounded region within the hexagonal-singlet phase. This study provides a clear understanding of the spin-gapped behaviour and magnetization plateaus observed in Cu$_2$(pymca)$_3$(ClO$_4$), and also predicts the possible disappearance of the 2/3 plateau under pressure.
Atomic carbon (CI) has been proposed as a global tracer of the molecular gas and a substitute for CO; however, its utility remains unproven. To evaluate the suitability of CI as a tracer, we performed [CI]$(^3P_1-^3P_0)$ (hereinafter [CI](1-0)) mapping observations of the northern part of the nearby spiral galaxy M83 with the ASTE telescope and compared the distributions of [CI](1-0) with CO lines (CO(1-0), CO(3-2), and $^{13}$CO(1-0)), HI, and infrared (IR) emission (70, 160, and 250 $\mu$m). The [CI](1-0) distribution in the central region is similar to that of the CO lines, whereas [CI](1-0) in the arm region is distributed outside the CO. We examined the dust temperature, $T_{\rm dust}$, and dust mass surface density, $\Sigma_{\rm dust}$, by fitting the IR continuum-spectrum distribution with a single-temperature modified blackbody. The distribution of $\Sigma_{\rm dust}$ shows a much better consistency with the integrated intensity of CO(1-0) than with that of [CI](1-0), indicating that CO(1-0) is a good tracer of the cold molecular gas. The spatial distribution of the [CI] excitation temperature, $T_{\rm ex}$, was examined using the intensity ratio of the two [CI] transitions. An appropriate $T_{\rm ex}$ at the central, bar, arm, and inter-arm regions yields a constant [C]/[H$_2$] abundance ratio of $\sim7 \times 10^{-5}$ within a range of 0.1 dex in all regions. We successfully detected weak [CI](1-0) emission, even in the inter-arm region, in addition to the central, arm, and bar regions, using spectral stacking analysis. The stacked intensity of [CI](1-0) is found to be strongly correlated with $T_{\rm dust}$. Our results indicate that atomic carbon is a photodissociation product of CO, and consequently, compared to CO(1-0), [CI](1-0) is less reliable in tracing the bulk of "cold" molecular gas in the galactic disk.
Using the Galois theory of functional equations, we give a new proof of the main result of the paper "Transcendental transcendency of certain functions of Poincar\'e" by J.F. Ritt, on the differential transcendence of the solutions of the functional equation R(y(t))=y(qt), where R is a rational function with complex coefficients satisfying R(0)=0 and R'(0)=q, with q a complex number such that |q|>1. We also give a partial result in the case of an algebraic function R.
We construct an injection from the set of permutations of length $n$ that contain exactly one copy of the decreasing pattern of length $k$ to the set of permutations of length $n+2$ that avoid that pattern. We then prove that the generating function counting the former is not rational, and in the case when $k$ is even and $k\geq 4$, it is not even algebraic. We extend our injection and our nonrationality result to a larger class of patterns.
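As an illustrative sanity check (ours, not the paper's), the counting consequence of such an injection can be verified by exhaustive enumeration for small cases; here the decreasing pattern has length $k=3$, and the number of pattern-avoiding permutations is the well-known Catalan number:

```python
from itertools import combinations, permutations

def occurrences(perm, k):
    """Number of occurrences of the decreasing pattern of length k in perm."""
    return sum(
        1
        for idx in combinations(range(len(perm)), k)
        if all(perm[idx[i]] > perm[idx[i + 1]] for i in range(k - 1))
    )

def count_exactly_one(n, k):
    """Permutations of length n containing exactly one copy of the pattern."""
    return sum(1 for p in permutations(range(n)) if occurrences(p, k) == 1)

def count_avoiding(n, k):
    """Permutations of length n avoiding the pattern."""
    return sum(1 for p in permutations(range(n)) if occurrences(p, k) == 0)

exactly_one = count_exactly_one(4, 3)  # length-4 permutations with exactly one 321
avoiders = count_avoiding(6, 3)        # length-6 permutations avoiding 321
```

An injection from the first set into the second implies `exactly_one <= avoiders`, which the brute-force counts confirm for these small sizes.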
This paper proposes a non-autoregressive extension of our previously proposed sequence-to-sequence (S2S) model-based voice conversion (VC) methods. S2S model-based VC methods have attracted particular attention in recent years for their flexibility in converting not only the voice identity but also the pitch contour and local duration of input speech, thanks to the encoder-decoder architecture with an attention mechanism. However, one of the obstacles to making these methods work in real-time is the autoregressive (AR) structure. To overcome this obstacle, we develop a method to obtain a model that is free from an AR structure and behaves similarly to the original S2S models, based on a teacher-student learning framework. In our method, called "FastS2S-VC", the student model consists of an encoder, a decoder, and an attention predictor. The attention predictor learns to predict attention distributions solely from source speech along with a target class index, guided by the distributions predicted by the teacher model from both source and target speech. Owing to this structure, the model is freed from an AR structure and allows for parallelization. Furthermore, we show that FastS2S-VC is suitable for real-time implementation based on a sliding-window approach, and describe how to make it run in real-time. Through speaker-identity and emotional-expression conversion experiments, we confirmed that FastS2S-VC was able to speed up the conversion process by 70 to 100 times compared to the original AR-type S2S-VC methods, without significantly degrading the audio quality and similarity to target speech. We also confirmed that the real-time version of FastS2S-VC runs with a latency of 32 ms on a GPU.
A nested coordinate system is a reassigning of independent variables to take advantage of geometric or symmetry properties of a particular application. Polar, cylindrical and spherical coordinate systems are primary examples of such a regrouping that have proved their importance in the separation of variables method for solving partial differential equations. Geometric algebra offers powerful complementary algebraic tools that are unavailable in other treatments.
The study of the mapping class group of the plane minus a Cantor set uses a graph of loops, which is an analogue of the curve graph in the study of mapping class groups of compact surfaces. The Gromov boundary of this loop graph can be described in terms of "cliques of high-filling rays": high-filling rays are simple geodesics of the surface which are complicated enough to be infinitely far away from any loop in the graph. Moreover, these rays are arranged in cliques: any two high-filling rays which are both disjoint from a third one are necessarily mutually disjoint. Every such clique is a point of the Gromov boundary of the loop graph. Some examples of cliques with any finite number of high-filling rays are already known. In this paper, we construct an infinite clique of high-filling rays.
We consider the asymptotic expansion of the functional series \[S_{\mu,\gamma}(a;\lambda)=\sum_{n=1}^\infty \frac{n^\gamma e^{-\lambda n^2/a^2}}{(n^2+a^2)^\mu}\] for real values of the parameters $\gamma$, $\lambda>0$ and $\mu\geq0$ as $|a|\to \infty$ in the sector $|\arg\,a|<\pi/4$. For general values of $\gamma$ the expansion is of algebraic type with terms involving the Riemann zeta function and a terminating confluent hypergeometric function. Of principal interest in this study is the case corresponding to even integer values of $\gamma$, where the algebraic-type expansion consists of a finite number of terms together with a contribution comprising an infinite sequence of increasingly subdominant exponentially small expansions. This situation is analogous to the well-known Poisson-Jacobi formula corresponding to the case $\mu=\gamma=0$. Numerical examples are provided to illustrate the accuracy of these expansions.
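For the special case $\mu=\gamma=0$ mentioned above, the Poisson-Jacobi formula can be checked numerically. The following sketch (our own illustration, not drawn from the paper) compares the direct sum with its exponentially convergent theta-function resummation:

```python
import math

def direct_sum(a, lam, nmax=2000):
    """S_{0,0}(a; lam) = sum_{n>=1} exp(-lam n^2 / a^2), summed term by term."""
    return sum(math.exp(-lam * n * n / (a * a)) for n in range(1, nmax))

def poisson_jacobi(a, lam, kmax=10):
    """Resummation via the Poisson-Jacobi transformation:
    sum_{n in Z} exp(-lam n^2/a^2) = a sqrt(pi/lam) sum_{k in Z} exp(-pi^2 a^2 k^2/lam).
    Subtracting the n=0 term and halving gives the one-sided sum."""
    theta = sum(math.exp(-math.pi ** 2 * a * a * k * k / lam)
                for k in range(-kmax, kmax + 1))
    return 0.5 * (a * math.sqrt(math.pi / lam) * theta - 1.0)
```

For large $|a|$ the direct sum needs thousands of terms, while the resummed side is dominated by its $k=0$ term, illustrating why the exponentially small corrections matter only at high precision.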
We present a novel method for predicting accurate depths from monocular images with high efficiency. This optimal efficiency is achieved by exploiting wavelet decomposition, which is integrated in a fully differentiable encoder-decoder architecture. We demonstrate that we can reconstruct high-fidelity depth maps by predicting sparse wavelet coefficients. In contrast with previous works, we show that wavelet coefficients can be learned without direct supervision on coefficients. Instead we supervise only the final depth image that is reconstructed through the inverse wavelet transform. We additionally show that wavelet coefficients can be learned in fully self-supervised scenarios, without access to ground-truth depth. Finally, we apply our method to different state-of-the-art monocular depth estimation models, in each case giving similar or better results compared to the original model, while requiring less than half the multiply-adds in the decoder network. Code at https://github.com/nianticlabs/wavelet-monodepth
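The reconstruction step can be illustrated with a single-level 2D Haar transform in plain NumPy. This is a simplified stand-in for the multi-scale wavelet transform used by the actual models, and the function names are ours:

```python
import numpy as np

def haar2d(x):
    """One level of the orthonormal 2D Haar transform (even-sized input)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2  # low-pass approximation
    lh = (a - b + c - d) / 2  # horizontal detail
    hl = (a + b - c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d: reconstruct the image from its four subbands."""
    x = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

depth = np.random.default_rng(0).random((8, 8))  # stand-in for a depth map
subbands = haar2d(depth)
recon = ihaar2d(*subbands)  # perfect reconstruction from the coefficients
```

Because the transform is invertible, a network that predicts the subband coefficients can be supervised on `recon` alone, which is the key property the method exploits.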
This work presents a novel target-free extrinsic calibration algorithm for a 3D Lidar and an IMU pair using an Extended Kalman Filter (EKF) that exploits the \textit{motion based calibration constraint} for state update. The steps include data collection by motion excitation of the Lidar-Inertial sensor suite along all degrees of freedom, determination of the inter-sensor rotation by using the rotational component of the aforementioned \textit{motion based calibration constraint} in a least-squares optimization framework, and finally, determination of the inter-sensor translation using the \textit{motion based calibration constraint} for state update in an EKF framework. We experimentally validate our method using data collected in our lab and open-source our contribution (https://github.com/unmannedlab/imu_lidar_calibration) for the robotics research community.
The field of quantum simulations in ultra-cold atomic gases has been remarkably successful. In principle, it allows for an exact treatment of a variety of highly relevant lattice models and their emergent phases of matter. So far, however, the theoretical literature lacks a systematic study of the effects of the trap potential and the finite size of the systems, as numerical studies of such non-periodic, correlated fermionic lattice models are demanding beyond one dimension. We use the recently introduced real-space truncated unity functional renormalization group to study these boundary and trap effects, with a focus on their impact on the superconducting phase of the $2$D Hubbard model. We find that not only must experiments reach lower temperatures than currently possible, but system size and trap-potential shape also play a crucial role in simulating emergent phases of matter.
The vast majority of semantic segmentation approaches rely on pixel-level annotations that are tedious and time-consuming to obtain and suffer from significant inter- and intra-expert variability. To address these issues, recent approaches have leveraged categorical annotations at the slide level, which in general suffer from poor robustness and generalization. In this paper, we propose a novel weakly supervised multi-instance learning approach that deciphers quantitative slide-level annotations, which are fast to obtain and regularly present in clinical routine. The potential of the proposed approach is demonstrated for tumor segmentation of solid cancer subtypes. The proposed approach achieves superior performance on out-of-distribution, out-of-location, and out-of-domain testing sets.
The purpose of this report is to review measures of the importance of components in systems in terms of reliability. Since Birnbaum's (1968) first work on this subject, many interesting studies have appeared and important indicators have been constructed that allow the components of complex systems to be ranked. They are helpful in analyzing the reliability of designed systems and in establishing principles of operation and maintenance. The importance measures presented here are collected and discussed with regard to the motivation behind their creation. They concern an approach in which both components and systems are binary; the possibility of generalization to multistate systems is only mentioned. Among those discussed is one new proposal using the methods of game theory, combining sensitivity to the structure of the system with the operational effects on system performance. The presented importance measures use knowledge of the system structure as well as of reliability and wear and tear, and of whether the components can be repaired and maintained.
Driven by the increased complexity of dynamical systems, solving systems of differential equations through numerical simulation in optimization problems has become computationally expensive. This paper provides a smart data-driven mechanism to construct low-dimensional surrogate models. These surrogate models reduce the computational time for solving complex optimization problems by using training instances derived from evaluations of the true objective functions. The surrogate models are constructed using a combination of proper orthogonal decomposition and radial basis functions and provide system responses by simple matrix multiplication. Using the relative maximum absolute error as the measure of approximation accuracy, it is shown that surrogate models with Latin hypercube sampling and spline radial basis functions dominate variable-order methods in optimization time while preserving accuracy. These surrogate models also show robustness in the presence of model nonlinearities. Therefore, these computationally efficient predictive surrogate models are applicable in various fields, specifically to solve inverse problems and optimal control problems, some examples of which are demonstrated in this paper.
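A minimal sketch of this surrogate construction, using SciPy's `RBFInterpolator` as the radial-basis component; the toy snapshot family $u(x;p)=\sin(px)$, the parameter grid, and the retained rank are our own illustrative choices, not the paper's setup:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Toy snapshot data: "system responses" u(x; p) = sin(p x) at training parameters p
x = np.linspace(0.0, np.pi, 50)
params = np.linspace(1.0, 3.0, 9)[:, None]                        # 9 training parameters
snapshots = np.array([np.sin(p * x) for p in params.ravel()]).T   # shape (50, 9)

# Proper orthogonal decomposition via SVD of the snapshot matrix
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 5                              # retained POD modes
basis = U[:, :r]                   # (50, r) spatial basis
coeffs = basis.T @ snapshots       # (r, 9) modal coefficients per snapshot

# Radial-basis interpolation of the modal coefficients over parameter space
interp = RBFInterpolator(params, coeffs.T, kernel='thin_plate_spline')

def surrogate(p):
    """Predict the response at parameter p by a simple matrix multiplication."""
    return basis @ interp(np.atleast_2d(p)).ravel()

u_pred = surrogate(2.25)
u_true = np.sin(2.25 * x)
rel_err = np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true)
```

Once `basis` and `interp` are built offline, each surrogate evaluation is just an RBF lookup plus one matrix-vector product, which is the source of the claimed speedup.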
First-order nonadiabatic coupling matrix elements (fo-NACMEs) are the basic quantities in theoretical descriptions of electronically nonadiabatic processes that are ubiquitous in molecular physics and chemistry. Given the large size of systems of chemical interest, time-dependent density functional theory (TDDFT) is usually the first choice. However, the lack of wave functions in TDDFT renders the formulation of NAC-TDDFT for fo-NACMEs conceptually difficult. The present account aims to analyze the available variants of NAC-TDDFT in a critical but concise manner and to point out proper ways of implementation. It can be concluded, from both theoretical and numerical points of view, that the equation-of-motion-based variant of NAC-TDDFT is the right choice. Possible future developments of this variant are also highlighted.
Immense field enhancement and nanoscale confinement of light are possible within nanoparticle-on-mirror (NPoM) plasmonic resonators, which enable novel optically-activated physical and chemical phenomena, and render these nanocavities highly sensitive to minute structural changes, down to the atomic scale. Although a few of these structural parameters, primarily linked to the nanoparticle and the mirror morphology, have been identified, the impact of molecular assembly and organization of the spacer layer between them has often been left uncharacterized. Here, we experimentally investigate how the complex and reconfigurable nature of a thiol-based self-assembled monolayer (SAM) adsorbed on the mirror surface impacts the optical properties of the NPoMs. We fabricate NPoMs with distinct molecular organizations by controlling the incubation time of the mirror in the thiol solution. Afterwards, we investigate the structural changes that occur under laser irradiation by tracking the bonding dipole plasmon mode, while also monitoring Stokes and anti-Stokes Raman scattering from the molecules as a probe of their integrity. First, we find an effective decrease in the SAM height as the laser power increases, compatible with an irreversible change of molecule orientation caused by heating. Second, we observe that the nanocavities prepared with a densely packed and more ordered monolayer of molecules are more prone to changes in their resonance compared to samples with sparser and more disordered SAMs. Our measurements indicate that molecular orientation and packing on the mirror surface play a key role in determining the stability of NPoM structures and hence highlight the under-recognized significance of SAM characterization in the development of NPoM-based applications.
Recent literature has demonstrated that per-channel energy normalization (PCEN) yields significant performance improvements over traditional log-scaled mel-frequency spectrograms in acoustic sound event detection (SED) in a multi-class setting with overlapping events. However, the configuration of PCEN's parameters is sensitive to the recording environment, the characteristics of the event classes of interest, and the presence of multiple overlapping events. This leads to improvements on a class-by-class basis but poor cross-class performance. In this article, we experiment with PCEN spectrograms as an alternative method for SED in urban audio using the UrbanSED dataset, demonstrating per-class improvements based on parameter configuration. Furthermore, we address cross-class performance with a novel method, Multi-Rate PCEN (MRPCEN), which improves cross-class SED performance compared to traditional single-rate PCEN.
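For reference, the standard PCEN front end combines an AGC-like division by a temporally smoothed energy $M$ with root compression. A minimal NumPy sketch follows; the parameter values are illustrative defaults, not those used in the article, and MRPCEN's multi-rate extension would apply several smoothing rates `s` rather than the single rate shown here:

```python
import numpy as np

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization of a (mel) spectrogram E, shape (freq, time).

    M is a first-order IIR smoother of E along time; division by (eps + M)^alpha
    acts as per-channel automatic gain control, and (. + delta)^r - delta^r
    compresses the dynamic range.
    """
    M = np.empty_like(E)
    M[:, 0] = E[:, 0]  # initialize the smoother with the first frame
    for t in range(1, E.shape[1]):
        M[:, t] = (1.0 - s) * M[:, t - 1] + s * E[:, t]
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r

rng = np.random.default_rng(0)
E = rng.random((40, 100)) + 0.1  # a synthetic nonnegative spectrogram
P = pcen(E)                      # single-rate PCEN; a multi-rate variant would
                                 # combine outputs for several values of s
```

The smoothing rate `s` sets the AGC time constant, which is exactly the parameter whose sensitivity to the recording environment motivates the multi-rate approach.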
We study the Bose polaron problem in a nonequilibrium setting, by considering an impurity embedded in a quantum fluid of light realized by exciton-polaritons in a microcavity, subject to a coherent drive and dissipation on account of pump and cavity losses. We obtain the polaron effective mass, the drag force acting on the impurity, and determine polaron trajectories at a semiclassical level. We find different dynamical regimes, originating from the unique features of the excitation spectrum of driven-dissipative polariton fluids, in particular a non-trivial regime of acceleration against the flow. Our work promotes the study of impurity dynamics as an alternative testbed for probing superfluidity in quantum fluids of light.
In this paper, we focus on the performance of vehicle-to-vehicle (V2V) communication adopting the Dedicated Short Range Communication (DSRC) application in periodic broadcast mode. An analytical model is studied and a fixed point method is used to analyze the packet delivery ratio (PDR) and mean delay based on the IEEE 802.11p standard in a fully connected network under the assumption of perfect PHY performance. Exploiting the characteristics of V2V communication, we develop the Semi-persistent Contention Density Control (SpCDC) scheme to improve the DSRC performance. We use Monte Carlo simulation to verify the results obtained by the analytical model. The simulation results show that the packet delivery ratio in the SpCDC scheme increases by more than 10% compared with IEEE 802.11p in heavy vehicle load scenarios. Meanwhile, the mean reception delay decreases by more than 50%, providing more reliable road safety.
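The idea of validating an analytical broadcast model against Monte Carlo simulation can be illustrated with a much-simplified, Bianchi-style slotted sketch (our toy model with an assumed per-slot transmit probability, not the paper's full 802.11p analysis):

```python
import numpy as np

def analytical_pdr(n, cw=15):
    """Toy slotted broadcast model: each vehicle transmits in a slot with
    probability tau = 2/(cw+1); a packet is delivered iff no other vehicle
    transmits in the same slot (perfect PHY, fully connected network)."""
    tau = 2.0 / (cw + 1)
    return (1.0 - tau) ** (n - 1)

def monte_carlo_pdr(n, cw=15, slots=200_000, seed=0):
    """Estimate the same PDR by simulating independent transmission attempts."""
    rng = np.random.default_rng(seed)
    tau = 2.0 / (cw + 1)
    tx = rng.random((slots, n)) < tau   # per-slot transmission attempts
    sent = tx[:, 0]                     # slots where the tagged vehicle transmits
    collided = tx[:, 1:].any(axis=1)    # any concurrent transmission?
    return (sent & ~collided).sum() / sent.sum()
```

In the paper's setting, the transmit probability itself is obtained from a fixed-point equation coupling backoff behavior and channel state, but the validation loop, comparing the model's PDR against simulated slots, has the same shape as this sketch.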
Spectral factorization is a prominent tool with several important applications in various areas of applied science. Wiener and Masani proved the existence of matrix spectral factorization. Their theorem has been extended to the multivariable case by Helson and Lowdenslager. Solving the problem numerically is challenging in both situations, and also important due to its practical applications. Therefore, several authors have developed algorithms for factorization. The Janashia-Lagvilava algorithm is a relatively new method for matrix spectral factorization which has proved to be useful in several applications. In this paper, we extend this method to the multivariable case. Consequently, a new numerical algorithm for multivariable matrix spectral factorization is constructed.
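In the scalar case, spectral factorization can be carried out with the classical cepstral method, which conveys the flavor of the problem; the matrix and multivariable cases addressed by the Janashia-Lagvilava algorithm are substantially harder. A sketch with our own test spectrum $|1+0.5e^{i\theta}|^2$:

```python
import numpy as np

def scalar_spectral_factor(S):
    """Minimum-phase factor f with |f|^2 = S, via the cepstral method.
    S: samples of a positive spectral density on a uniform grid of the circle."""
    N = len(S)
    c = np.fft.ifft(np.log(S))   # cepstral coefficients of log S
    h = np.zeros(N, dtype=complex)
    h[0] = c[0] / 2              # split the mean between f and its conjugate
    h[1 : N // 2] = c[1 : N // 2]  # keep the causal half of the cepstrum
    return np.exp(np.fft.fft(h))   # samples of the spectral factor

N = 512
theta = 2 * np.pi * np.arange(N) / N
S = np.abs(1 + 0.5 * np.exp(1j * theta)) ** 2  # = |f0|^2 with f0(z) = 1 + 0.5 z
f = scalar_spectral_factor(S)
coeffs = np.fft.ifft(f)  # Fourier coefficients of the recovered factor
```

The recovered factor matches the known minimum-phase polynomial, with leading coefficients close to 1 and 0.5; in the matrix case no such exponential/logarithm shortcut exists, which is what makes dedicated algorithms necessary.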
We investigate the torque field and skyrmion movement at an interface between a ferromagnet hosting a skyrmion and a material with strong spin-orbit interaction. We analyze both semiconductor materials and topological insulators using a Hamiltonian model that includes a linear term. The spin-torque-inducing current is considered to flow in the single-band limit; therefore, a quantum model of the current is used. Skyrmion movement due to spin-transfer torque proves to be more difficult in the presence of spin-orbit interaction when only interface in-plane currents are present. However, edge effects in narrow nanowires can be used to drive the skyrmion movement and to exert a limited control on its direction of motion. We also show the differences and similarities between torque fields due to electric current in the many- and single-band limits.
For any second-order scalar PDE $\mathcal{E}$ in one unknown function, which we interpret as a hypersurface of a second-order jet space $J^2$, we construct, by means of the characteristics of $\mathcal{E}$, a sub-bundle of the contact distribution of the underlying contact manifold $J^1$, consisting of conic varieties. We call it the contact cone structure associated with $\mathcal{E}$. We then focus on symplectic Monge-Amp\`ere equations in 3 independent variables, which are naturally parametrized by a 13-dimensional real projective space. If we pass to the field of complex numbers $\mathbb{C}$, this projective space turns out to be the projectivization of the 14-dimensional irreducible representation of the simple Lie group $\mathsf{Sp}(6,\mathbb{C})$: the associated moment map allows us to define a rational map $\varpi$ from the space of symplectic 3D Monge-Amp\`ere equations to the projectivization of the space of quadratic forms on a $6$-dimensional symplectic vector space. We study in detail the relationship between the zero locus of the image of $\varpi$, herewith called the cocharacteristic variety, and the contact cone structure of a 3D Monge-Amp\`ere equation $\mathcal{E}$: under the hypothesis of non-degenerate symbol, we prove that these two constructions coincide. A key tool in achieving this result is a complete list of mutually non-equivalent quadratic forms on a $6$-dimensional symplectic space, which is of interest in its own right.
What do word vector representations reveal about the emotions associated with words? In this study, we consider the task of estimating word-level emotion intensity scores for specific emotions, exploring unsupervised, supervised, and finally a self-supervised method of extracting emotional associations from word vector representations. Overall, we find that word vectors carry substantial potential for inducing fine-grained emotion intensity scores, showing a far higher correlation with human ground truth ratings than achieved by state-of-the-art emotion lexicons.
Landau suggested that the low-temperature properties of metals can be understood in terms of long-lived quasiparticles with all complex interactions included in Fermi-liquid parameters, such as the effective mass $m^{\star}$. Despite its wide applicability, electronic transport in bad or strange metals and unconventional superconductors is controversially discussed towards a possible collapse of the quasiparticle concept. Here we explore the electrodynamic response of correlated metals at half filling for varying correlation strength upon approaching a Mott insulator. We reveal persistent Fermi-liquid behavior with pronounced quadratic dependences of the optical scattering rate on temperature and frequency, along with a puzzling elastic contribution to relaxation. The strong increase of the resistivity beyond the Ioffe-Regel-Mott limit is accompanied by a `displaced Drude peak' in the optical conductivity. Our results, supported by a theoretical model for the optical response, demonstrate the emergence of a bad metal from resilient quasiparticles that are subject to dynamical localization and dissolve near the Mott transition.
In his 1935 Gedankenexperiment, Erwin Schr\"{o}dinger imagined a poisonous substance which has a 50% probability of being released, based on the decay of a radioactive atom. As such, the life of the cat and the state of the poison become entangled, and the fate of the cat is determined upon opening the box. We present an experimental technique that keeps the cat alive on any account. This method relies on the time-resolved Hong-Ou-Mandel effect: two long, identical photons impinging on a beam splitter always bunch in either of the outputs. Interpreting the first photon detection as the state of the poison, the second photon is identified as the state of the cat. Even after the collapse of the first photon's state, we show their fates are intertwined through quantum interference. We demonstrate this by a sudden phase change between the inputs, administered conditionally on the outcome of the first detection, which steers the second photon to a pre-defined output and ensures that the cat is always observed alive.
The existence of the three-dimensional quantum Hall effect (3DQHE) due to spontaneous Fermi-surface instabilities in strong magnetic fields was proposed decades ago and has stimulated recent experimental progress. Recent experiments report that Hall plateaus and vanishing transverse magneto-resistivities (TMRs), the two main signatures of 3DQHE, are not easy to observe in natural materials, and two main explanations of the slowly varying, slope-like Hall plateaus and non-vanishing TMRs (referred to as the quasi-quantized Hall effect (QQHE)) have been proposed. By studying magneto-transport in a simple effective periodic 3D system, we first show how 3DQHE can be achieved in certain parameter regimes. We then find two new mechanisms that may give rise to QQHE: one is a "low" Fermi energy effect, and the other a "strong" impurity effect. Our study also shows that artificial superlattices with a high layer-barrier periodic potential are an ideal platform for realizing 3DQHE.
Presence is often considered the most important quale describing the subjective feeling of being in a computer-generated (virtual) or computer-mediated environment. The identification and separation of two orthogonal presence components, i.e., the place illusion and the plausibility illusion, has been an accepted theoretical model describing Virtual Reality (VR) experiences for some time. In this model, immersion is a proposed contributing factor to the place illusion. Lately, copresence and social presence illusions have extended this model, and coherence was proposed as a contributing factor to the plausibility illusion. Such factors strive to identify (objectively) measurable characteristics of an experience, e.g., system properties that allow controlled manipulations of VR experiences. This perspective article challenges this presence-oriented VR theory. First, we argue that a place illusion cannot be the major construct to describe the much wider scope of Virtual, Augmented, and Mixed Reality (VR, AR, MR: or XR for short). Second, we argue that there is no plausibility illusion but merely plausibility, and we derive the place illusion as a consequence of a plausible generation of spatial cues, and similarly for all of the current model's so-defined illusions. Finally, we propose coherence and plausibility to become the central essential conditions in a novel theoretical model describing XR experiences and effects.
Many teleoperation tasks require three or more tools working together, which requires the cooperation of multiple operators. The effectiveness of such schemes may be limited by communication. Trimanipulation by a single operator using an artificial third arm controlled together with their natural arms is a promising solution to this issue. Foot-controlled interfaces have previously shown the capability to be used for the continuous control of robot arms. However, the use of such interfaces for controlling a supernumerary robotic limb (SRL) in coordination with the natural limbs is not well understood. In this paper, a teleoperation task imitating physically coupled hands in a virtual reality scene was conducted with 14 subjects to evaluate human performance during tri-manipulation. The participants were required to move three limbs together in a coordinated way, mimicking three arms holding a shared physical object. It was found that, after a short practice session, three-hand tri-manipulation using a single subject's hands and foot was still slower than dyad operation; however, it displayed a similar success rate and higher motion efficiency than two-person cooperation.
The linear frequency modulated (LFM) frequency agile radar (FAR) can synthesize a wide signal bandwidth through coherent processing while keeping the bandwidth of each pulse narrow. In this way, high range resolution profiles (HRRP) can be obtained without increasing the hardware system cost. Furthermore, the agility improves both robustness to jamming and spectrum efficiency. Motivated by the Newtonized orthogonal matching pursuit (NOMP) for the line spectral estimation problem, NOMP for FAR, termed NOMP-FAR, is designed to process each coarse range bin to extract the HRRP and velocities of multiple targets, including a guide for determining the oversampling factor and the stopping criterion. In addition, it is shown that a target can cause false alarms in nearby coarse range bins; a postprocessing algorithm is therefore proposed to suppress these ghost targets. Numerical simulations are conducted to demonstrate the effectiveness of NOMP-FAR.
We present the task description and discussion on the results of the DCASE 2021 Challenge Task 2. In 2020, we organized an unsupervised anomalous sound detection (ASD) task, identifying whether a given sound was normal or anomalous without anomalous training data. In 2021, we organized an advanced unsupervised ASD task under domain-shift conditions, which focuses on an inevitable problem in the practical use of ASD systems. The main challenge of this task is to detect unknown anomalous sounds where the acoustic characteristics of the training and testing samples differ, i.e., are domain-shifted. This problem frequently occurs due to changes in seasons, manufactured products, and/or environmental noise. We received 75 submissions from 26 teams, and several novel approaches were developed in this challenge. On the basis of the analysis of the evaluation results, we found two notable types of approaches adopted by the top-5 winning teams: 1) ensembles of ``outlier exposure'' (OE)-based detectors and ``inlier modeling'' (IM)-based detectors, and 2) approaches based on IM-based detection applied to features learned in a machine-identification task.
Quantitative information on tumor heterogeneity and cell load could assist in designing effective and refined personalized treatment strategies. We recently showed that such information can be inferred from the diffusion parameter D derived from diffusion-weighted MRI (DWI), provided a relation between D and cell density can be established. However, such a relation cannot a priori be assumed to be constant for all patients and tumor types. Hence, to assist clinical decisions in palliative settings, the relation needs to be established without tumor resection. Here we demonstrate that biopsies may contain sufficient information for this purpose if the biopsy locations are chosen systematically, as elaborated in this paper. A superpixel-based method for automated optimal localization of biopsies from the DWI D-map is proposed. The performance of the DWI-guided procedure is evaluated by extensive simulations of biopsies. Needle biopsies yield sufficient histological information to establish a quantitative relationship between D-value and cell density, provided they are taken from regions with high, intermediate, and low D-values in DWI. The automated localization of the biopsy regions is demonstrated on a NSCLC patient tumor; in this case, even two or three biopsies give a reasonable estimate. Simulations of needle biopsies under different conditions indicate that DWI guidance greatly improves the estimation results. Tumor cellularity and heterogeneity in solid tumors may be reliably investigated from DWI and a few needle biopsies sampled in regions of well-separated D-values, excluding adipose tissue. This procedure could provide a way of embedding assistance in cancer diagnosis and treatment, based on personalized information, into the clinical workflow.
In this paper we give a classification of the asymptotic expansion of the $q$-expansion of reciprocals of Eisenstein series $E_k$ of weight $k$ for the modular group $\mathrm{SL}_2(\mathbb{Z})$. For $k \geq 12$ even, this extends results of Hardy and Ramanujan, and Berndt, Bialek and Yee, utilizing the Circle Method on the one hand, and results of Petersson, and Bringmann and Kane, developing a theory of meromorphic Poincar{\'e} series on the other. We follow a uniform approach, based on the zeros of the Eisenstein series with the largest imaginary part. These special zeros provide information on the singularities of the Fourier expansion of $1/E_k(z)$ with respect to $q = e^{2 \pi i z}$.
Interactive single-image segmentation is ubiquitous in scientific and commercial imaging software. In this work, we focus on the single-image segmentation problem given only some seeds, such as scribbles. Inspired by the dynamic receptive field of the human visual system, we propose the Gaussian dynamic convolution (GDC) to quickly and efficiently aggregate contextual information for neural networks. The core idea is to randomly select the spatial sampling area according to Gaussian-distributed offsets. Our GDC can easily be used as a module to build lightweight or complex segmentation networks. We adopt the proposed GDC to address typical single-image segmentation tasks. Furthermore, we also build a Gaussian dynamic pyramid pooling module to show its potential and generality in common semantic segmentation. Experiments demonstrate that the GDC outperforms other existing convolutions on three benchmark segmentation datasets: Pascal-Context, Pascal-VOC 2012, and Cityscapes. Additional experiments illustrate that the GDC can produce richer and more vivid features than other convolutions. In general, our GDC helps convolutional neural networks form an overall impression of the image.
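The Gaussian offset sampling at the heart of GDC can be illustrated with a minimal NumPy sketch. This is a simplification for illustration only: the actual module is a learnable convolution layer, and the function name, sample count, and standard deviation below are our own choices, not the paper's.

```python
import numpy as np

def gaussian_dynamic_sample(feat, n_samples=9, sigma=3.0, rng=None):
    """Aggregate context for each pixel by averaging values gathered at
    spatial offsets drawn from a Gaussian (illustrative sketch of the
    GDC sampling idea; no learned weights)."""
    rng = np.random.default_rng(rng)
    H, W = feat.shape
    # Round Gaussian-distributed offsets to integer pixel displacements.
    offsets = np.round(rng.normal(0.0, sigma, size=(n_samples, 2))).astype(int)
    ys, xs = np.mgrid[0:H, 0:W]
    out = np.zeros_like(feat, dtype=float)
    for dy, dx in offsets:
        # Clamp displaced coordinates to the image border.
        yy = np.clip(ys + dy, 0, H - 1)
        xx = np.clip(xs + dx, 0, W - 1)
        out += feat[yy, xx]
    return out / n_samples
```

Because the sampling area is random rather than a fixed grid, the effective receptive field varies from call to call, which is the property the paper exploits.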
For any positive integer $r$, we construct a smooth complex projective rational surface which has at least $r$ real forms not isomorphic over $\mathbb{R}$.
We present a very simple form of the supercharges and the Hamiltonian of the ${\cal N} {=}\,2$ supersymmetric extension of $n$-particle Ruijsenaars--Schneider models for three cases of the interaction: $1/(x_i-x_j)$, $1/\tan(x_i-x_j)$ and $1/\tanh(x_i-x_j)$. The long "fermionic tails" of the supercharges and the Hamiltonian roll up into simple rational functions of fermionic bilinears.
We present a study of the structure and differential capacitance of electric double layers of aqueous electrolytes. We consider Electric Double Layer Capacitors (EDLC) composed of spherical cations and anions in a dielectric continuum confined between a planar cathode and anode. The model system includes steric as well as Coulombic ion-ion and ion-electrode interactions. We compare results of computationally expensive, but "exact", Brownian Dynamics (BD) simulations with approximate, but cheap, calculations based on classical Density Functional Theory (DFT). Excellent overall agreement is found for a large set of system parameters $-$ including variations in concentrations, ionic size- and valency-asymmetries, applied voltages, and electrode separation $-$ provided the differences between the canonical ensemble of the BD simulations and the grand-canonical ensemble of DFT are properly taken into account. In particular, a careful distinction is made between the differential capacitance $C_N$ at fixed number of ions and $C_\mu$ at fixed ionic chemical potential. Furthermore, we derive and exploit the thermodynamic relations between them, which will also be useful for comparing and contrasting canonical and grand-canonical treatments of electric double layers in future work.
We analyse an extremal question on the degrees of the link graphs of a finite regular graph, that is, the subgraphs induced by non-trivial spheres. We show that if $G$ is $d$-regular and connected but not complete then some link graph of $G$ has minimum degree at most $\lfloor 2d/3\rfloor-1$, and if $G$ is sufficiently large in terms of $d$ then some link graph has minimum degree at most $\lfloor d/2\rfloor-1$; both bounds are best possible. We also give the best-possible result for the analogous problem in which the subgraphs are induced by balls rather than spheres. We motivate these questions by posing a conjecture concerning the expansion of link graphs in large bounded-degree graphs, together with a heuristic justification thereof.
The #P-hardness of computing matrix immanants is proved for each member of a broad class of shapes and restricted sets of matrices. The class is characterized in the following way: if a shape of size $n$ in it is of the form $(w,\mathbf{1}+\lambda)$, or its conjugate is of that form, where $\mathbf{1}$ is the all-$1$ vector, then $|\lambda|$ is $n^{\varepsilon}$ for some $\varepsilon > 0$, $\lambda$ can be tiled with $1\times 2$ dominos, and $(3w+3h(\lambda)+1)|\lambda| \le n$, where $h(\lambda)$ is the height of $\lambda$. The problem remains \#P-hard if the immanants are evaluated on $0$-$1$ matrices. We also give hardness proofs for some immanants whose shape $\lambda = (\mathbf{1}+\lambda_d)$ has size $n$ such that $|\lambda_d| = n^{\varepsilon}$ for some $0<\varepsilon<\frac{1}{2}$, and, for some $w$, the shape $\lambda_d/(w)$ is tilable with $1\times 2$ dominos. The \#P-hardness result holds when these immanants are evaluated on adjacency matrices of planar, directed graphs; however, in these cases the edges have small positive integer weights.
The chromaticity diagram associated with the CIE 1931 color matching functions is shown to be slightly non-convex. While having no impact on practical colorimetric computations, the non-convexity does have a significant impact on the shape of some optimal object color reflectance distributions associated with the outer surface of the object color solid. Instead of the usual two-transition Schrödinger form, many optimal colors exhibit higher transition counts. A linear programming formulation is developed and used to locate where these higher-transition optimal object colors reside on the object color solid surface. The regions of higher transition count appear to have a point-symmetric complementary structure. The final peer-reviewed version (to appear) contains additional material concerning convexification of the color-matching functions and additional analysis of modern "physiologically-relevant" CMFs transformed from cone fundamentals.
Raman spectroscopy is an advantageous method for studying the local structure of materials, but the interpretation of measured spectra is complicated by the presence of oblique phonons in polycrystals of polar materials. Whilst group theory considerations and standard ab initio calculations are helpful, they are often valid only for single crystals. In this paper, we introduce a method for computing Raman spectra of polycrystalline materials from first principles. We start from the standard approach based on the (Placzek) rotation invariants of the Raman tensors and extend it to include the effect of the coupling between the lattice vibrations and the induced electric field, and the electro-optic contribution, relevant for polar materials like ferroelectrics. As exemplified by applying the method to rhombohedral BaTiO3, AlN, and LiNbO3, such an extension brings the simulated Raman spectrum into much better correspondence with the experimental one. Additional advantages of the method are that it is general, permits automation, and thus can be used in a high-throughput fashion.
This paper is a continuation of our article (European J. Math., https://doi.org/10.1007/s40879-020-00419-8). The notion of a poor complex compact manifold was introduced there and the group $Aut(X)$ for a $P^1$-bundle over such a manifold was proven to be very Jordan. We call a group $G$ very Jordan if it contains a normal abelian subgroup $G_0$ such that the orders of finite subgroups of the quotient $G/G_0$ are bounded by a constant depending on $G$ only. In this paper we provide explicit examples of infinite families of poor manifolds of any complex dimension, namely simple tori of algebraic dimension zero. Then we consider a non-trivial holomorphic $P^1$-bundle $(X,p,Y)$ over a non-uniruled complex compact Kaehler manifold $Y$. We prove that $Aut(X)$ is very Jordan provided some additional conditions on the set of sections of $p$ are met. Applications to $P^1$-bundles over non-algebraic complex tori are given.
In this work we obtain a geometric characterization of the measures $\mu$ in $\mathbb{R}^{n+1}$ with polynomial upper growth of degree $n$ such that the $n$-dimensional Riesz transform $\mathcal{R}\mu (x) = \int \frac{x-y}{|x-y|^{n+1}}\,d\mu(y)$ belongs to $L^2(\mu)$, under the assumption that $\mu$ satisfies the following Wolff energy estimate, for any ball $B\subset\mathbb{R}^{n+1}$: $$\int_B \int_0^\infty \left(\frac{\mu(B(x,r))}{r^{n-\frac38}}\right)^2\,\frac{dr}r\,d\mu(x)\leq M\,\bigg(\frac{\mu(2B)}{r(B)^{n-\frac38}}\bigg)^2\,\mu(2B).$$ More precisely, we show that $\mu$ satisfies the following estimate: $$\|\mathcal{R}\mu\|_{L^2(\mu)}^2 + \|\mu\|\approx \int\!\!\int_0^\infty \beta_{\mu,2}(x,r)^2\,\frac{\mu(B(x,r))}{r^n}\,\frac{dr}r\,d\mu(x) + \|\mu\|,$$ where $\beta_{\mu,2}(x,r)^2 = \inf_L \frac1{r^n}\int_{B(x,r)} \left(\frac{\mathrm{dist}(y,L)}r\right)^2\,d\mu(y),$ with the infimum taken over all affine $n$-planes $L\subset\mathbb{R}^{n+1}$. In a companion paper which relies on the results obtained in this work it is shown that the same result holds without the above assumption regarding the Wolff energy of $\mu$. This result has important consequences for the Painlev\'e problem for Lipschitz harmonic functions.
We propose a new approach for trading VIX futures. We assume that the term structure of VIX futures follows a Markov model. Our trading strategy selects a position in VIX futures by maximizing the expected utility for a day-ahead horizon given the current shape and level of the term structure. Computationally, we model the functional dependence between the VIX futures curve, the VIX futures positions, and the expected utility as a deep neural network with five hidden layers. Out-of-sample backtests of the VIX futures trading strategy suggest that this approach gives rise to reasonable portfolio performance, and to positions in which the investor will be either long or short VIX futures contracts depending on the market environment.
In this paper we study the connectivity of Fatou components for maps in a large family of singular perturbations. We prove that, for some parameters inside the family, the dynamical planes for the corresponding maps present Fatou components of arbitrarily large connectivity and we determine precisely these connectivities. In particular, these results extend the ones obtained in [Can17, Can18].
Cross features play an important role in click-through rate (CTR) prediction. Most existing methods adopt a DNN-based model to capture cross features in an implicit manner. These implicit methods may lead to sub-optimal performance due to their limitations in explicit semantic modeling. Although traditional statistical explicit semantic cross features can address the problems of these implicit methods, they still suffer from challenges of their own, including a lack of generalization and expensive memory cost. Few works focus on tackling these challenges. In this paper, we take the first step in learning explicit semantic cross features and propose Pre-trained Cross Feature learning Graph Neural Networks (PCF-GNN), a GNN-based pre-trained model aiming at generating cross features in an explicit fashion. Extensive experiments are conducted on both public and industrial datasets, where PCF-GNN shows competence in both performance and memory efficiency across various tasks.
The number of photographs taken worldwide is growing rapidly and steadily. While a small subset of these images is annotated and shared by users through social media platforms, due to the sheer number of images in personal photo repositories (shared or not shared), finding specific images remains challenging. This survey explores existing image retrieval techniques as well as photo-organizer applications to highlight their relative strengths in addressing this challenge.
Partitioning graphs into blocks of roughly equal size is widely used when processing large graphs. Currently there is a gap in the space of available partitioning algorithms. On the one hand, there are streaming algorithms that have been adopted to partition massive graph data on small machines. In the streaming model, vertices arrive one at a time including their neighborhood and then have to be assigned directly to a block. These algorithms can partition huge graphs quickly with little memory, but they produce partitions with low solution quality. On the other hand, there are offline (shared-memory) multilevel algorithms that produce partitions with high quality but also need a machine with enough memory. We make a first step towards closing this gap by presenting an algorithm that computes significantly improved partitions of huge graphs using a single machine with little memory in the streaming setting. First, we adopt the buffered streaming model, which is a more reasonable approach in practice. In this model, a processing element can store a buffer, or batch, of nodes before making assignment decisions. When our algorithm receives a batch of nodes, we build a model graph that represents the nodes of the batch and the already present partition structure. This model enables us to apply multilevel algorithms and, in turn, to compute much higher quality solutions for huge graphs on cheap machines than previously possible. To partition the model, we develop a multilevel algorithm that optimizes an objective function that has previously been shown to be effective in the streaming setting. This also removes the dependency of the running time on the number of blocks k compared to the previous state-of-the-art. Overall, our algorithm computes, on average, 75.9% better solutions than Fennel using a very small buffer size. In addition, for large values of k our algorithm becomes faster than Fennel.
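For context, the baseline that such streaming partitioners are compared against can be sketched as a one-pass, Fennel-style assignment rule: each arriving vertex goes to the block that maximizes the number of already-assigned neighbors minus a load penalty. This is an illustrative sketch of the classic rule, not the paper's buffered multilevel algorithm; the parameter names follow the usual Fennel formulation with balance exponent gamma.

```python
def fennel_assign(adj, k, gamma=1.5, alpha=None):
    """One-pass Fennel-style streaming partitioner (illustrative baseline).

    adj: adjacency lists for vertices 0..n-1, streamed in index order.
    Score for block b = (# neighbors already in b) - alpha*gamma*|b|^(gamma-1).
    """
    n = len(adj)
    m = sum(len(nb) for nb in adj) / 2  # number of edges
    if alpha is None:
        alpha = m * k ** (gamma - 1) / n ** gamma if m else 1.0
    blocks = [set() for _ in range(k)]
    part = {}
    for v in range(n):
        def score(b):
            gain = sum(1 for u in adj[v] if part.get(u) == b)
            penalty = alpha * gamma * len(blocks[b]) ** (gamma - 1)
            return gain - penalty
        b = max(range(k), key=score)
        blocks[b].add(v)
        part[v] = b
    return part
```

The penalty term grows with block size, which is what keeps the blocks roughly balanced without a hard capacity constraint.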
Let $X$ be a nonempty set and let $T(X)$ be the full transformation semigroup on $X$. The main objective of this paper is to study the subsemigroup $\overline{\Omega}(X, Y)$ of $T(X)$ defined by \[\overline{\Omega}(X, Y) = \{f\in T(X)\colon Yf = Y\},\] where $Y$ is a fixed nonempty subset of $X$. We describe regular elements in $\overline{\Omega}(X, Y)$ and show that $\overline{\Omega}(X, Y)$ is regular if and only if $Y$ is finite. We characterize unit-regular elements in $\overline{\Omega}(X, Y)$ and prove that $\overline{\Omega}(X, Y)$ is unit-regular if and only if $X$ is finite. We characterize Green's relations on $\overline{\Omega}(X, Y)$ and prove that $\mathcal{D} =\mathcal{J}$ on $\overline{\Omega}(X, Y)$ if and only if $Y$ is finite. We also determine ideals of $\overline{\Omega}(X, Y)$ and investigate its kernel. This paper extends several results appeared in the literature.
We elaborate on the correspondence between the canonical partition function in asymptotically AdS universes and the no-boundary proposal for positive vacuum energy. For the case of a pure cosmological constant, the analytic continuation of the AdS partition function is seen to define the no-boundary wave function (in dS) uniquely in the simplest minisuperspace model. A consideration of the AdS gravitational path integral implies that on the dS side, saddle points with Hawking-Moss/Coleman-De Luccia-type tunnelling geometries are irrelevant. This implies that simple topology changing geometries do not contribute to the nucleation of the universe. The analytic AdS/dS equivalence holds up once tensor fluctuations are added. It also works, at the level of the saddle point approximation, when a scalar field with a mass term is included, though in the latter case, it is the mass that must be analytically continued. Our results illustrate the emergence of time from space by means of a Stokes phenomenon, in the case of positive vacuum energy. Furthermore, we arrive at a new characterisation of the no-boundary condition, namely that there should be no momentum flux at the nucleation of the universe.
The ultimate detection limit of optical biosensors is often limited by various noise sources, including those introduced by the optical measurement setup. While sophisticated modifications to instrumentation may reduce noise, a simpler approach that can benefit all sensor platforms is the application of signal processing to minimize the deleterious effects of noise. In this work, we show that applying complex Morlet wavelet convolution to Fabry-P\'erot interference fringes characteristic of thin film reflectometric biosensors effectively filters out white noise and low frequency reflectance variations. Subsequent calculation of an average difference in phase between the filtered analyte and reference signals enables a significant reduction in the limit of detection (LOD) enabling closer competition with current state-of-the-art techniques. This method is applied on experimental data sets of thin film porous silicon sensors (PSi) in buffered solution and complex media obtained from two different laboratories. The demonstrated improvement in LOD achieved using wavelet convolution and average phase difference paves the way for PSi optical biosensors to operate with clinically relevant detection limits for medical diagnostics, environmental monitoring, and food safety.
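The filtering step can be illustrated with a short NumPy sketch: convolve each reflectance signal with a complex Morlet wavelet centred on the fringe frequency, then average the phase difference between the analyte and reference channels. This is a simplified illustration; the function name, the number of wavelet cycles, and the band centre below are our own choices, not the paper's exact processing chain.

```python
import numpy as np

def morlet_filter(signal, freq, fs, n_cycles=6.0):
    """Convolve a real signal with a complex Morlet wavelet centred at
    `freq` (Hz); the phase of the complex output is what the
    average-phase-difference step operates on."""
    sigma_t = n_cycles / (2 * np.pi * freq)          # Gaussian envelope width
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sum(np.abs(wavelet))               # amplitude normalization
    return np.convolve(signal, wavelet, mode="same")

def mean_phase_difference(analyte, reference, trim=100):
    """Average per-sample phase difference between two filtered channels,
    discarding `trim` edge samples affected by the convolution boundary."""
    prod = analyte[trim:-trim] * np.conj(reference[trim:-trim])
    return np.angle(prod).mean()
```

Because the Gaussian envelope suppresses both white noise and slow baseline variations, the phase estimate is far more stable than one taken from the raw fringes.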
In addition to the well-known gas phase mass-metallicity relation (MZR), recent spatially-resolved observations have shown that local galaxies also obey a mass-metallicity gradient relation (MZGR) whereby metallicity gradients can vary systematically with galaxy mass. In this work, we use our recently-developed analytic model for metallicity distributions in galactic discs, which includes a wide range of physical processes -- radial advection, metal diffusion, cosmological accretion, and metal-enriched outflows -- to simultaneously analyse the MZR and MZGR. We show that the same physical principles govern the shape of both: centrally-peaked metal production favours steeper gradients, and this steepening is diluted by the addition of metal-poor gas, which is supplied by inward advection for low-mass galaxies and by cosmological accretion for massive galaxies. The MZR and the MZGR both bend at galaxy stellar mass $\sim 10^{10} - 10^{10.5}\,\rm{M_{\odot}}$, and we show that this feature corresponds to the transition of galaxies from the advection-dominated to the accretion-dominated regime. We also find that both the MZR and MZGR strongly suggest that low-mass galaxies preferentially lose metals entrained in their galactic winds. While this metal-enrichment of the galactic outflows is crucial for reproducing both the MZR and the MZGR at the low-mass end, we show that the flattening of gradients in massive galaxies is expected regardless of the nature of their winds.
Direct Volume Rendering (DVR) using Volumetric Path Tracing (VPT) is a scientific visualization technique that simulates light transport through an object's matter using physically-based lighting models. Monte Carlo (MC) path tracing is often used with surface models, yet its application to volumetric models is difficult due to the complexity of integrating MC light paths in volumetric media with no, or only smooth, material boundaries. Moreover, auxiliary geometry buffers (G-buffers) produced for volumes are typically very noisy, failing to guide image denoisers that rely on this information to preserve image details. This makes existing real-time denoisers, which take noise-free G-buffers as their input, less effective when denoising VPT images. We propose the necessary modifications to an image-based denoiser previously used for rendering surface models, and demonstrate effective denoising of VPT images. In particular, our denoising exploits temporal coherence between frames without relying on noise-free G-buffers, which has been a common assumption of existing denoisers for surface models. Our technique preserves high-frequency details through a weighted recursive least squares filter that handles heterogeneous noise for volumetric models. We show for various real data sets that our method improves the visual fidelity and temporal stability of VPT during classic DVR operations such as camera movements, modifications of the light sources, and edits to the volume transfer function.
In the last decade, substantial progress has been made towards standardizing the syntax of graph query languages, and towards understanding their semantics and complexity of evaluation. In this paper, we consider temporal property graphs (TPGs) and propose temporal regular path queries (TRPQs) that incorporate time into TPG navigation. Starting with design principles, we propose a natural syntactic extension of the MATCH clause of popular graph query languages. We then formally present the semantics of TRPQs, and study the complexity of their evaluation. We show that TRPQs can be evaluated in polynomial time if TPGs are time-stamped with time points, and identify fragments of the TRPQ language that admit efficient evaluation over a more succinct interval-annotated representation. Finally, we implement a fragment of the language in a state-of-the-art dataflow framework, and experimentally demonstrate that TRPQ can be evaluated efficiently.
Let $G$ be a simple undirected graph with vertex set $V(G)=\{v_1, v_2, \ldots, v_n\}$ and edge set $E(G)$. The Sombor matrix $\mathcal{S}(G)$ of a graph $G$ is defined so that its $(i,j)$-entry is equal to $\sqrt{d_i^2+d_j^2}$ if the vertices $v_i$ and $v_j$ are adjacent, and zero otherwise, where $d_i$ denotes the degree of vertex $v_i$ in $G$. In this paper, lower and upper bounds on the spectral radius, energy and Estrada index of the Sombor matrix of graphs are obtained, and the respective extremal graphs are characterized.
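The Sombor matrix and the three spectral quantities bounded in the paper are straightforward to compute numerically; the following NumPy sketch (with our own helper names) makes the definitions concrete.

```python
import numpy as np

def sombor_matrix(adj):
    """Sombor matrix: entry (i,j) = sqrt(d_i^2 + d_j^2) if v_i ~ v_j, else 0."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)  # vertex degrees
    return adj * np.sqrt(deg[:, None] ** 2 + deg[None, :] ** 2)

def spectral_radius(M):
    """Largest absolute eigenvalue of a symmetric matrix."""
    return max(abs(np.linalg.eigvalsh(M)))

def graph_energy(M):
    """Energy: sum of absolute values of the eigenvalues."""
    return sum(abs(np.linalg.eigvalsh(M)))

def estrada_index(M):
    """Estrada index: sum of exp(eigenvalue) over all eigenvalues."""
    return sum(np.exp(np.linalg.eigvalsh(M)))
```

For the complete graph $K_3$, every degree is 2, so each off-diagonal entry is $2\sqrt{2}$ and the spectrum is $2\sqrt{2}$ times that of the adjacency matrix.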
COVID-19 has impacted Indian engineering institutions (EIs) enormously, forcing institutions that were previously half open to shut down completely to prevent the risk of spreading the virus. In such a situation, fetching new enrollments on EI campuses is a difficult and challenging task, as students' behavior and family preferences have changed drastically due to mental stress and the emotions attached to them. Consequently, it becomes a prerequisite to examine the choice characteristics influencing the selection of an EI during the COVID-19 pandemic in order to normalize new enrollments. The purpose of this study is to critically examine the choice characteristics that affect students' choice of EI and, consequently, to explore relationships between institutional characteristics and the suitability of an EI during the COVID-19 pandemic across student characteristics. The findings of this study revealed dissimilarities across student characteristics regarding the suitability of EIs under pandemic conditions. Regression analysis revealed that EI characteristics such as proximity, image and reputation, quality of education, and curriculum delivery have significantly contributed to suitability under COVID-19. At the micro level, multiple relationships were noted between EI characteristics and the suitability of an EI under the pandemic across student characteristics. The study has successfully demonstrated how choice characteristics can be leveraged to regulate the suitability of an EI under the COVID-19 pandemic for the inclusion of diversity. It is useful for policy makers and academicians seeking to reposition EIs to attract diversity during the pandemic. This study is the first to provide insights into the performance of choice characteristics and their relationship with the suitability of EIs under a pandemic, and it can serve as a yardstick in administering new enrollments.
We introduce Reflective Hamiltonian Monte Carlo (ReHMC), an HMC-based algorithm to sample from a log-concave distribution restricted to a convex body. We prove that, starting from a warm start, the walk mixes to a log-concave target distribution $\pi(x) \propto e^{-f(x)}$, where $f$ is $L$-smooth and $m$-strongly-convex, within accuracy $\varepsilon$ after $\widetilde O(\kappa d^2 \ell^2 \log (1 / \varepsilon))$ steps for a well-rounded convex body, where $\kappa = L / m$ is the condition number of the negative log-density, $d$ is the dimension, $\ell$ is an upper bound on the number of reflections, and $\varepsilon$ is the accuracy parameter. We also develop an efficient open-source implementation of ReHMC and perform an experimental study on various high-dimensional datasets. The experiments suggest that ReHMC outperforms Hit-and-Run and Coordinate-Hit-and-Run in terms of the time needed to produce an independent sample, and that it makes truncated sampling in thousands of dimensions practical.
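The reflection mechanism can be sketched for the simplest convex body, an axis-aligned box, where the wall normals are coordinate vectors and a reflection just mirrors one position coordinate and flips the matching momentum component. This is a toy illustration of reflective leapfrog with a Metropolis correction, not the authors' implementation; all parameter defaults are our own choices.

```python
import numpy as np

def rehmc_box(f, grad_f, d, n_samples, step=0.1, n_leap=10,
              lo=-1.0, hi=1.0, rng=None):
    """Toy reflective HMC for pi(x) ∝ exp(-f(x)) restricted to [lo, hi]^d.
    For a general convex body, the reflection would use the normal of the
    supporting hyperplane at the point where the trajectory hits the wall."""
    rng = np.random.default_rng(rng)
    x = np.zeros(d)
    out = []
    for _ in range(n_samples):
        v = rng.standard_normal(d)               # momentum refresh
        h0 = f(x) + 0.5 * np.dot(v, v)
        xn, vn = x.copy(), v.copy()
        for _ in range(n_leap):                  # leapfrog with reflections
            vn -= 0.5 * step * grad_f(xn)
            xn += step * vn
            for i in range(d):
                while xn[i] < lo or xn[i] > hi:  # bounce off the walls
                    xn[i] = (2 * lo - xn[i]) if xn[i] < lo else (2 * hi - xn[i])
                    vn[i] = -vn[i]
            vn -= 0.5 * step * grad_f(xn)
        h1 = f(xn) + 0.5 * np.dot(vn, vn)
        if rng.random() < np.exp(min(0.0, h0 - h1)):  # Metropolis correction
            x = xn
        out.append(x.copy())
    return np.array(out)
```

Reflections keep every proposal inside the body, so no leapfrog step is ever wasted on an out-of-support rejection, which is where the speed-up over Hit-and-Run style walks comes from.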
When flipping a fair coin, let $W = L_1L_2\ldots L_N$ with $L_i\in\{H,T\}$ be a binary word of length $N=2$ or $N=3$. In this paper, we establish second- and third-order linear recurrence relations and their generating functions for the probabilities $p_{W}(n)$ that the binary word $W$ appears for the first time after $n$ coin tosses.
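The probabilities $p_W(n)$ can be computed exactly with a small prefix-automaton recursion, which also lets one check classical facts such as the expected waiting time of 6 tosses for $W = HH$. This is an illustrative sketch for cross-checking, not the paper's closed-form recurrences.

```python
from fractions import Fraction

def first_occurrence_probs(word, n_max):
    """Exact probabilities p_W(n), n = 1..n_max, that `word` first appears
    ending at toss n of a fair coin, via a KMP-style prefix automaton."""
    N = len(word)

    def delta(state, c):
        # Longest prefix of `word` that is a suffix of the seen text.
        s = word[:state] + c
        while s and not word.startswith(s):
            s = s[1:]
        return len(s)

    half = Fraction(1, 2)
    dist = {0: Fraction(1)}   # distribution over non-accepting states
    probs = []
    for _ in range(n_max):
        new, hit = {}, Fraction(0)
        for state, p in dist.items():
            for c in "HT":
                t = delta(state, c)
                if t == N:                       # word completed here
                    hit += p * half
                else:
                    new[t] = new.get(t, 0) + p * half
        probs.append(hit)
        dist = new
    return probs
```

For $W = HH$ this reproduces the Fibonacci-flavoured values $p(2)=1/4$, $p(3)=1/8$, and the partial sums of $n\,p(n)$ converge to the expected waiting time of 6.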
Climate change and global warming are significant challenges of the new century. A viable solution to mitigate greenhouse gas emissions is a globally incentivized market mechanism, as proposed in the Kyoto protocol. In this view, carbon dioxide (or other greenhouse gas) emission is considered a commodity, forming a carbon trading system. There have been attempts at developing this idea over the past decade, with limited success. The main challenges of current systems are fragmented implementations, a lack of transparency leading to over-crediting and double-spending, and substantial transaction costs that transfer wealth to brokers and agents. We aim to create a Carbon Credit Ecosystem using smart contracts that operate in conjunction with blockchain technology in order to bring more transparency, accessibility, liquidity, and standardization to carbon markets. This ecosystem includes a tokenization mechanism to securely digitize carbon credits with clear minting and burning protocols, a transparent mechanism for the distribution of tokens, a free automated market maker for trading the carbon tokens, and mechanisms to engage all stakeholders, including the energy industry, project verifiers, liquidity providers, NGOs, concerned citizens, and governments. This approach could be used in a variety of other credit/trading systems.
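A minimal in-memory ledger illustrates the mint/burn token protocol described above. This is purely illustrative Python, not an actual smart contract: the class name, the 1 token = 1 tCO2e convention, and the verification rule are our own assumptions.

```python
class CarbonLedger:
    """Toy carbon-credit ledger sketching mint/transfer/burn semantics."""

    def __init__(self):
        self.balances = {}
        self.total_supply = 0.0

    def mint(self, account, tonnes, verified=True):
        """Issue tokens for verified emission reductions (1 token = 1 tCO2e)."""
        if not verified:
            raise ValueError("only verifier-approved reductions can be minted")
        self.balances[account] = self.balances.get(account, 0.0) + tonnes
        self.total_supply += tonnes

    def transfer(self, src, dst, amount):
        """Move tokens between accounts, e.g. a sale on the market."""
        if self.balances.get(src, 0.0) < amount:
            raise ValueError("insufficient balance")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0.0) + amount

    def burn(self, account, amount):
        """Retire tokens to offset emissions; burning removes them from
        circulation, which is what rules out double-spending a credit."""
        if self.balances.get(account, 0.0) < amount:
            raise ValueError("insufficient balance")
        self.balances[account] -= amount
        self.total_supply -= amount
```

On a blockchain the same three operations would be contract methods, with the verifier check enforced by an on-chain role rather than a boolean argument.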
We demonstrate a successful navigation and docking control system for the John Deere Tango autonomous mower, using only a single camera as the input. This vision-only system is of interest because it is inexpensive, simple for production, and requires no external sensing. This is in contrast to existing systems that rely on integrated position sensors and global positioning system (GPS) technologies. To produce our system we combined a state-of-the-art object detection architecture, You Only Look Once (YOLO), with a reinforcement learning (RL) architecture, Double Deep Q-Networks (Double DQN). The object detection network identifies features on the mower and passes its output to the RL network, providing it with a low-dimensional representation that enables rapid and robust training. Finally, the RL network learns how to navigate the machine to the desired spot in a custom simulation environment. When tested on mower hardware, the system is able to dock with centimeter-level accuracy from arbitrary initial locations and orientations.
Few studies have specifically addressed real-time semantic segmentation in rainy environments. However, the demand in this area is huge, and the task is challenging for lightweight networks. Therefore, this paper proposes a lightweight network specially designed for foreground segmentation in rainy environments, named the De-raining Semantic Segmentation Network (DRSNet). By analyzing the characteristics of raindrops, the MultiScaleSE Block is specifically designed to encode the input image; it uses multi-scale dilated convolutions to increase the receptive field and an SE attention mechanism to learn the weight of each channel. In order to combine semantic information between different encoder and decoder layers, we propose the Asymmetric Skip connection: the higher semantic layer of the encoder is upsampled by bilinear interpolation, passed through a pointwise convolution, and then added element-wise to the lower semantic layer of the decoder. According to the control experiments, the MultiScaleSE Block and Asymmetric Skip improve the Foreground Accuracy index to a certain degree compared with SEResNet18 and Symmetric Skip, respectively. DRSNet has only 0.54M parameters and 0.20 GFLOPs. State-of-the-art results and real-time performance are achieved on both the UESTC all-day Scenery add rain (UAS-add-rain) and the Baidu People Segmentation add rain (BPS-add-rain) benchmarks with input sizes of 192x128, 384x256 and 768x512. The speed of DRSNet exceeds that of all networks within 1 GFLOPs, and its Foreground Accuracy index is also the best among networks of similar magnitude on both benchmarks.
Understanding electrical energy demand at the consumer level plays an important role in planning the distribution of electrical networks and the offering of off-peak tariffs, but observing individual consumption patterns is still expensive. On the other hand, aggregated load curves are normally available at the substation level. The proposed methodology separates substation aggregated loads into estimated mean consumption curves, called typical curves, incorporating information given by explanatory variables. In addition, a model-based clustering approach for substations is proposed based on the similarity of their consumers' typical curves and covariance structures. The methodology is applied to a real substation load monitoring dataset from the United Kingdom and tested in eight simulated scenarios.
The measurements of $V_{us}$ in leptonic $(K_{\mu 2})$ and semileptonic $(K_{l3})$ kaon decays exhibit a $3\sigma$ disagreement, which could originate either from physics beyond the Standard Model or some large unidentified Standard Model systematic effects. Clarifying this issue requires a careful examination of all existing Standard Model inputs. Making use of a newly-proposed computational framework and the most recent lattice QCD results, we perform a comprehensive re-analysis of the electroweak radiative corrections to the $K_{e3}$ decay rates that achieves an unprecedented level of precision of $10^{-4}$, which improves the current best results by almost an order of magnitude. No large systematic effects are found, which suggests that the electroweak radiative corrections should be removed from the ``list of culprits'' responsible for the $K_{\mu 2}$--$K_{l3}$ discrepancy.
We present a new method to capture detailed human motion, sampling more than 1000 unique points on the body. Our method outputs highly accurate 4D (spatio-temporal) point coordinates and, crucially, automatically assigns a unique label to each of the points. The locations and unique labels of the points are inferred from individual 2D input images only, without relying on temporal tracking or any human body shape or skeletal kinematics models. Therefore, our captured point trajectories contain all of the details from the input images, including motion due to breathing, muscle contractions and flesh deformation, and are well suited to be used as training data to fit advanced models of the human body and its motion. The key idea behind our system is a new type of motion capture suit which contains a special pattern with checkerboard-like corners and two-letter codes. The images from our multi-camera system are processed by a sequence of neural networks which are trained to localize the corners and recognize the codes, while being robust to suit stretching and self-occlusions of the body. Our system relies only on standard RGB or monochrome sensors and fully passive lighting and the passive suit, making our method easy to replicate, deploy and use. Our experiments demonstrate highly accurate captures of a wide variety of human poses, including challenging motions such as yoga, gymnastics, or rolling on the ground.
The application of remaining useful life (RUL) prediction has taken on great importance for energy optimization, cost-effectiveness, and risk mitigation. Existing RUL prediction algorithms mostly rely on deep learning frameworks. In this paper, we implement LSTM and GRU models and compare the obtained results with a proposed genetically trained neural network. The current models depend solely on Adam and SGD for optimization and learning. Although the models work well with these optimizers, even small uncertainties in prognostics prediction can result in huge losses. We aim to improve the consistency of the predictions by adding another layer of optimization using genetic algorithms. The hyper-parameters (learning rate and batch size) are optimized beyond manual capacity. These models and the proposed architecture are tested on the NASA Turbofan Jet Engine dataset. The optimized architecture can select the given hyper-parameters autonomously and provides superior results.
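As a rough sketch of the extra optimization layer described above, a minimal genetic algorithm can evolve the two hyper-parameters. Everything below is illustrative: the surrogate objective stands in for the validation loss of the LSTM/GRU models, and the population size, mutation scheme, and batch-size choices are hypothetical, not taken from the paper.

```python
import math
import random

# Minimal genetic-algorithm sketch for tuning (learning_rate, batch_size).
# The surrogate objective is a stand-in for validation loss; the real
# training loop and the NASA Turbofan dataset are not reproduced here.
random.seed(0)

BATCH_CHOICES = [16, 32, 64, 128, 256]

def surrogate_loss(lr, bs):
    # Hypothetical smooth objective with its optimum near lr=1e-3, bs=64.
    return (math.log10(lr) + 3.0) ** 2 + ((bs - 64) / 64.0) ** 2

def random_individual():
    return (10 ** random.uniform(-5, -1), random.choice(BATCH_CHOICES))

def mutate(ind):
    lr, _ = ind
    return (lr * 10 ** random.uniform(-0.3, 0.3), random.choice(BATCH_CHOICES))

def evolve(pop_size=20, generations=30):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: surrogate_loss(*ind))   # fitness ranking
        elite = pop[: pop_size // 4]                     # selection
        pop = elite + [mutate(random.choice(elite))      # mutation
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda ind: surrogate_loss(*ind))

best_lr, best_bs = evolve()
```

The same loop applies to any black-box fitness function, which is what makes the genetic layer attractive when gradients with respect to hyper-parameters are unavailable.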
In this paper, we study the optimal transmission of a multi-quality tiled 360 virtual reality (VR) video from a multi-antenna server (e.g., access point or base station) to multiple single-antenna users in a multiple-input multiple-output (MIMO)-orthogonal frequency division multiple access (OFDMA) system. We minimize the total transmission power subject to the subcarrier allocation, rate allocation, and successful transmission constraints, by optimizing the beamforming vectors and the subcarrier, transmission power, and rate allocation. The formulated resource allocation problem is a challenging mixed discrete-continuous optimization problem. We obtain an asymptotically optimal solution in the case of a large antenna array, and a suboptimal solution in the general case. As far as we know, this is the first work providing an optimization-based design for 360 VR video transmission in MIMO-OFDMA systems. Finally, numerical results show that the proposed solutions achieve significant performance improvements over existing solutions.
We discover that deep ReLU neural network classifiers can see a low-dimensional Riemannian manifold structure on data. Such structure comes via the local data matrix, a variation of the Fisher information matrix, where the role of the model parameters is taken by the data variables. We obtain a foliation of the data domain and we show that the dataset on which the model is trained lies on a leaf, the data leaf, whose dimension is bounded by the number of classification labels. We validate our results with experiments on the MNIST dataset: paths on the data leaf connect valid images, while other leaves cover noisy images.
Backhauling services through satellite systems have doubled between 2012 and 2018. There is an increasing demand for this service for which satellite systems typically allocate a fixed resource. This solution may not help in optimizing the usage of the scarce satellite resource. This study measures the relevance of using dynamic resource allocation mechanisms for backhaul services through satellite systems. The satellite system is emulated with OpenSAND, the LTE system with Amarisoft and the experiments are orchestrated by OpenBACH. We compare the relevance of applying TCP PEP mechanisms and dynamic resource allocations for different traffic services by measuring the QoE for web browsing, data transfer and VoIP applications. The main conclusions are the following. When the system is congested, PEP and layer-2 access mechanisms do not provide significant improvements. When the system is not congested, data transfer can be greatly improved through protocols and channel access mechanism optimization. Tuning the Constant Rate Assignment can help in reducing the cost of the resource and provide QoE improvements when the network is not loaded.
Grain boundaries (GBs) are planar lattice defects that govern the properties of many types of polycrystalline materials. Hence, their structures have been investigated in great detail. However, much less is known about their chemical features, owing to the experimental difficulties to probe these features at the atomic length scale inside bulk material specimens. Atom probe tomography (APT) is a tool capable of accomplishing this task, with an ability to quantify chemical characteristics at near-atomic scale. Using APT data sets, we present here a machine-learning-based approach for the automated quantification of chemical features of GBs. We trained a convolutional neural network (CNN) using twenty thousand synthesized images of grain interiors, GBs, or triple junctions. Such a trained CNN automatically detects the locations of GBs from APT data. Those GBs are then subjected to compositional mapping and analysis, including revealing their in-plane chemical decoration patterns. We applied this approach to experimentally obtained APT data sets pertaining to three case studies, namely, Ni-P, Pt-Au, and Al-Zn-Mg-Cu alloys. In the first case, we extracted GB-specific segregation features as a function of misorientation and coincidence site lattice character. Secondly, we revealed interfacial excesses and in-plane chemical features that could not have been found by standard compositional analyses. Lastly, we tracked the temporal evolution of chemical decoration from early-stage solute GB segregation in the dilute limit to interfacial phase separation, characterized by the evolution of complex composition patterns. This machine-learning-based approach provides quantitative, unbiased, and automated access to GB chemical analyses, serving as an enabling tool for new discoveries related to interface thermodynamics, kinetics, and the associated chemistry-structure-property relations.
The Fock space $\mathcal{F}(\mathbb{C}^n)$ is the space of holomorphic functions on $\mathbb{C}^n$ that are square-integrable with respect to the Gaussian measure on $\mathbb{C}^n$. This space plays an important role in several subfields of analysis and representation theory. In particular, it has for a long time been a model to study Toeplitz operators. Esmeral and Maximenko showed in 2016 that radial Toeplitz operators on $\mathcal{F}(\mathbb{C})$ generate a commutative $C^*$-algebra which is isometrically isomorphic to the $C^*$-algebra $C_{b,u}(\mathbb{N}_0,\rho_1)$. In this article, we extend the result to $k$-quasi-radial symbols acting on the Fock space $\mathcal{F}(\mathbb{C}^n)$. We calculate the spectra of the said Toeplitz operators and show that the set of all eigenvalue functions is dense in the $C^*$-algebra $C_{b,u}(\mathbb{N}_0^k,\rho_k)$ of bounded functions on $\mathbb{N}_0^k$ which are uniformly continuous with respect to the square-root metric. In fact, the $C^*$-algebra generated by Toeplitz operators with quasi-radial symbols is $C_{b,u}(\mathbb{N}_0^k,\rho_k)$.
We present optical follow-up imaging obtained with the Katzman Automatic Imaging Telescope, Las Cumbres Observatory Global Telescope Network, Nickel Telescope, Swope Telescope, and Thacher Telescope of the LIGO/Virgo gravitational wave (GW) signal from the neutron star-black hole (NSBH) merger GW190814. We searched the GW190814 localization region (19 deg$^{2}$ for the 90th percentile best localization), covering a total of 51 deg$^{2}$ and 94.6% of the two-dimensional localization region. Analyzing the properties of 189 transients that we consider as candidate counterparts to the NSBH merger, including their localizations, discovery times from merger, optical spectra, likely host-galaxy redshifts, and photometric evolution, we conclude that none of these objects are likely to be associated with GW190814. Based on this finding, we consider the likely optical properties of an electromagnetic counterpart to GW190814, including possible kilonovae and short gamma-ray burst afterglows. Using the joint limits from our follow-up imaging, we conclude that a counterpart with an $r$-band decline rate of 0.68 mag day$^{-1}$, similar to the kilonova AT 2017gfo, could peak at an absolute magnitude of at most $-17.8$ mag (50% confidence). Our data are not constraining for ``red'' kilonovae and rule out ``blue'' kilonovae with $M>0.5 M_{\odot}$ (30% confidence). We strongly rule out all known types of short gamma-ray burst afterglows with viewing angles $<$17$^{\circ}$ assuming an initial jet opening angle of $\sim$$5.2^{\circ}$ and explosion energies and circumburst densities similar to afterglows explored in the literature. Finally, we explore the possibility that GW190814 merged in the disk of an active galactic nucleus, of which we find four in the localization region, but we do not find any candidate counterparts among these sources.
Ultrafast lasers are ideal tools to process transparent materials because they spatially confine the deposition of laser energy within the material's bulk via nonlinear photoionization processes. Nonlinear propagation and filamentation were initially regarded as deleterious effects, but over the last decade they have turned out to be beneficial for controlling energy deposition over long distances. These effects create very high aspect ratio structures which have found a number of important applications, particularly for glass separation with non-ablative techniques. This chapter reviews the developments of in-volume ultrafast laser processing of transparent materials. We discuss the basic physics of the processes, characterization means, and filamentation of Gaussian and Bessel beams, and provide an overview of present applications.
Hyperspectral imaging at cryogenic temperatures is used to investigate exciton and trion propagation in MoSe$_2$ monolayers encapsulated with hexagonal boron nitride (hBN). Under tightly focused, continuous-wave laser excitation, the spatial distributions of neutral excitons and charged trions differ strongly at high excitation densities. Remarkably, in this regime the trion distribution develops a halo shape, similar to that previously observed in WS$_2$ monolayers at room temperature under pulsed excitation. In contrast, the exciton distribution only presents a moderate broadening without the appearance of a halo. Spatially and spectrally resolved luminescence spectra reveal the buildup of a significant temperature gradient at high excitation power, which is attributed to the energy relaxation of photoinduced hot carriers. We show, via a numerical solution of the transport equations for excitons and trions, that the halo can be interpreted as thermal drift of trions due to a Seebeck term in the particle current. The model shows that the difference between trion and exciton profiles is simply understood in terms of the very different lifetimes of these two quasiparticles.
The conformational states of a semiflexible polymer enclosed in a volume $V:=\ell^{3}$ are studied as stochastic realizations of paths using the stochastic curvature approach developed in [Phys. Rev. E 100, 012503 (2019)], in the regime where $3\ell/\ell_{p}> 1$, with $\ell_{p}$ the persistence length. The cases of a semiflexible polymer enclosed in a cube and in a sphere are considered. In these cases, we explore the Spakowitz-Wang-type polymer shape transition, where the critical persistence length distinguishes between an oscillating and a monotonic phase at the level of the mean-square end-to-end distance. This shape transition provides evidence of a universal signature of the behavior of a semiflexible polymer confined in a compact domain.
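For reference, the free (unconfined) worm-like chain provides the standard benchmark for the mean-square end-to-end distance against which confined results are compared. The sketch below implements only this textbook Kratky-Porod formula; the confined-polymer curves require solving the paper's stochastic-curvature model and are not reproduced here.

```python
import math

# Kratky-Porod mean-square end-to-end distance of a FREE worm-like chain:
#   <R^2> = 2*lp*L - 2*lp**2 * (1 - exp(-L/lp))
# L : contour length, lp : persistence length (same units).
def mean_square_end_to_end(L, lp):
    return 2.0 * lp * L - 2.0 * lp**2 * (1.0 - math.exp(-L / lp))

# Limiting behaviours recover the rigid-rod and flexible-chain results:
rodlike = mean_square_end_to_end(0.01, 1.0)     # ~ L**2    for L << lp
flexible = mean_square_end_to_end(1000.0, 1.0)  # ~ 2*lp*L  for L >> lp
```

The two limits make the role of the persistence length explicit: below $\ell_p$ the chain behaves like a rigid rod, far above it like a random coil.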
We construct a cosmological model by introducing the Friedmann-Lema\^itre-Robertson-Walker metric into the field equations of the $f(R,L_m)$ gravity theory, with $R$ being the Ricci scalar and $L_m$ the matter Lagrangian density. The formalism is developed for a particular $f(R,L_m)$ function, namely $R/16\pi +(1+\sigma R)L_{m}$, with $\sigma$ a constant that carries the geometry-matter coupling. Our solutions are remarkably capable of evading the Big-Bang singularity and predict the cosmic acceleration with no need for a cosmological constant, simply as a consequence of the geometry-matter coupling terms in the Friedmann-like equations.
Molecular science is governed by the dynamics of electrons, atomic nuclei, and their interaction with electromagnetic fields. A reliable physicochemical understanding of these processes is crucial for the design and synthesis of chemicals and materials of economic value. Although some problems in this field are adequately addressed by classical mechanics, many require an explicit quantum mechanical description. Such quantum problems, represented by exponentially large wave functions, should naturally benefit from quantum computation on a number of logical qubits that scales only linearly with system size. In this perspective, we focus on the potential of quantum computing for solving relevant problems in the molecular sciences -- molecular physics, chemistry, biochemistry, and materials science.
This research discusses multi-criteria decision making (MCDM) in tourism using fuzzy-AHP methods. The fuzzy-AHP process ranks tourism trends based on data from social media, one of the channels with the largest volume of input data for determining tourism development. The development uses social media interactions based on the facilities visited, including reviews, stories, likes, forums, blogs, and feedback. This experiment aims to prioritize the facilities that are trending in tourism. The priority ranking uses criteria weights and a ranking process. The highest rank is the Park/Picnic Area attraction, with a final weight of 0.6361. Fuzzy-AHP can rank optimally, with an MSE value of approximately 0.0002.
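As a rough illustration of the weighting step, a classical (crisp) AHP priority computation can be sketched as follows. The pairwise-comparison matrix and the three criteria are hypothetical, and the fuzzy extension (triangular fuzzy numbers and defuzzification) used in the paper is not reproduced here.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalized principal eigenvector of the
    pairwise-comparison matrix (Saaty's eigenvector method)."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()   # normalization fixes the sign too

# Hypothetical judgments over three facility criteria
# (reviews, likes, feedback) on Saaty's 1-9 scale; a[i][j] = 1/a[j][i].
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
w = ahp_weights(A)   # weights sum to 1; "reviews" dominates in this example
```

In a full fuzzy-AHP pipeline, each crisp judgment would be replaced by a fuzzy number before the weights are extracted, but the ranking step downstream is the same.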
Despite their recent success in image denoising, the need for deep and complex architectures still hinders the practical usage of CNNs. Older but computationally more efficient methods such as BM3D remain a popular choice, especially in resource-constrained scenarios. In this study, we aim to find out whether compact neural networks can learn to produce results competitive with BM3D for AWGN image denoising. To this end, we configure networks with only two hidden layers and employ different neuron models and layer widths, comparing their performance with BM3D across different AWGN noise levels. Our results conclusively show that the recently proposed self-organized variant of operational neural networks based on a generative neuron model (Self-ONNs) is not only a better choice than CNNs, but also provides results competitive with BM3D, and even significantly surpasses it at high noise levels.
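For context, the AWGN evaluation protocol behind such comparisons is easy to sketch: corrupt a clean image at a chosen noise level sigma and score a denoiser's output with PSNR, the usual comparison metric. The snippet below shows only this protocol on synthetic data; neither BM3D nor the Self-ONN models are implemented here.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_awgn(img, sigma):
    """Corrupt an image with additive white Gaussian noise of std sigma."""
    return img + rng.normal(0.0, sigma, img.shape)

def psnr(clean, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a better estimate."""
    mse = np.mean((clean - estimate) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

# Synthetic stand-in for a test image; a denoiser would be scored by
# psnr(clean, denoiser(noisy)) at each noise level sigma.
clean = rng.uniform(0.0, 255.0, (64, 64))
noisy = add_awgn(clean, sigma=25.0)   # sigma=25 is a common benchmark level
```

Sweeping sigma over the benchmark levels and tabulating PSNR per method reproduces the comparison format used in this line of work.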
This work investigates the feasibility of using input-output data-driven control techniques for building control and their susceptibility to data-poisoning techniques. The analysis is performed on a digital replica of the KTH Live-in Lab, a validated non-linear model representing one of the KTH Live-in Lab building testbeds. This work is motivated by recent trends showing a surge of interest in using data-based techniques to control cyber-physical systems. We also analyze the susceptibility of these controllers to data-poisoning methods, a particular type of machine-learning threat geared towards finding imperceptible attacks that can undermine the performance of the system under consideration. We consider Virtual Reference Feedback Tuning (VRFT), a popular data-driven control technique, and show its performance on the KTH Live-in Lab digital replica. We then demonstrate how poisoning attacks can be crafted and illustrate their impact. Numerical experiments reveal the feasibility of using data-driven control methods for finding efficient control laws; however, a subtle change in the datasets can significantly deteriorate the performance of VRFT.
Scanning real-life scenes with modern registration devices typically gives incomplete point cloud representations, mostly due to the limitations of the scanning process and 3D occlusions. Therefore, completing such partial representations remains a fundamental challenge for many computer vision applications. Most existing approaches aim to solve this problem by learning to reconstruct individual 3D objects in a synthetic setup of an uncluttered environment, which is far from a real-life scenario. In this work, we reformulate the problem of point cloud completion as an object hallucination task. We introduce a novel autoencoder-based architecture called HyperPocket that disentangles latent representations and, as a result, enables the generation of multiple variants of the completed 3D point clouds. We split point cloud processing into two disjoint data streams and leverage a hypernetwork paradigm to fill the spaces, dubbed pockets, that are left by the missing object parts. As a result, the generated point clouds are not only smooth but also plausible and geometrically consistent with the scene. Our method offers competitive performance with other state-of-the-art models and enables a plethora of novel applications.
Using five years of monitoring observations, we performed a blind search for pulses from the rotating radio transient (RRAT) J0139+33 and PSR B0320+39. Within \pm 1.5 minutes of the time when the source passes through the meridian, we detected 39377 individual pulses from the pulsar B0320+39 and 1013 pulses from RRAT J0139+33. The share of registered pulses out of the total number of observed periods is 74% for the pulsar B0320+39 and 0.42% for the transient J0139+33. The signal-to-noise ratios (S/N) of the strongest registered pulses are approximately S/N = 262 (for B0320+39) and S/N = 154 (for J0139+33). Distributions of the number of detected pulses in S/N units are obtained for both the pulsar and the rotating transient. The distributions can be approximated by lognormal and power-law dependencies: for the pulsar B0320+39 the dependence is lognormal, turning into a power law at high S/N, while for RRAT J0139+33 the pulse-energy distribution is described by a broken (bimodal) power law with exponents of about 0.4 and 1.8 (for S/N < 19 and S/N > 19, respectively). We have not detected regular (pulsar) emission from J0139+33. Analysis of the obtained data suggests that RRAT J0139+33 is a pulsar with giant pulses.
A Banach space X has the SHAI (surjective homomorphisms are injective) property provided that for every Banach space Y, every continuous surjective algebra homomorphism from the bounded linear operators on X onto the bounded linear operators on Y is injective. The main result gives a sufficient condition for X to have the SHAI property. The condition is satisfied for L^p (0, 1) for 1 < p < \infty, spaces with symmetric bases that have finite cotype, and the Schatten p-spaces for 1 < p < \infty.
This paper introduces our systems for all three subtasks of SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning. To help our model better represent and understand abstract concepts in natural language, we carefully design several simple and effective approaches adapted to the backbone model (RoBERTa). Specifically, we formalize the subtasks in a multiple-choice question answering format and add special tokens to abstract concepts; the final prediction of the question answering model is then taken as the result for each subtask. Additionally, we employ several fine-tuning tricks to improve performance. Experimental results show that our approaches achieve significant improvements over the baseline systems, ranking eighth on subtask-1 and tenth on subtask-2.
How should we understand the social and political effects of the datafication of human life? This paper argues that the effects of data should be understood as a constitutive shift in social and political relations. We explore how datafication, or the quantification of human and non-human factors into binary code, affects the identity of individuals and groups. This fundamental shift goes beyond the economic and ethical concerns that have been the focus of other efforts to explore the effects of datafication and AI. We highlight that technologies such as datafication and AI (and, previously, the printing press) both disrupted extant power arrangements, leading to decentralization, and triggered a recentralization of power by new actors better adapted to leveraging the new technology. We use the analogy of the printing press to provide a framework for understanding constitutive change. The printing press example gives us more clarity on 1) what can happen when the medium of communication drastically alters how information is communicated and stored; 2) the shift in power from state to private actors; and 3) the tension of simultaneously connecting individuals while driving them towards narrower communities through algorithmic analyses of data.
The concept of an angle is one that often causes difficulties in metrology. These are partly caused by a confusing mixture of several mathematical terms, partly by real mathematical difficulties and finally by imprecise terminology. The purpose of this publication is to clarify misunderstandings and to explain why strict terminology is important. It will also be shown that most misunderstandings regarding the `radian' can be avoided if some simple rules are obeyed.
We explore the parameter space of a U(1) extension of the standard model -- also called the super-weak model -- from the point of view of explaining the observed dark matter energy density in the Universe. The new particle spectrum contains a complex scalar singlet and three right-handed neutrinos, among which the lightest one is the dark matter candidate. We explore both freeze-in and freeze-out mechanisms of dark matter production. In both cases, we find regions in the plane of the super-weak coupling vs. the mass of the new gauge boson that are not excluded by current experimental constraints. These regions are distinct, and the one for freeze-out will be explored in searches for the new neutral gauge boson in the near future.
This paper focuses on a core task in computational sustainability and statistical ecology: species distribution modeling (SDM). In SDM, the occurrence pattern of a species on a landscape is predicted by environmental features based on observations at a set of locations. At first, SDM may appear to be a binary classification problem, and one might be inclined to employ classic tools (e.g., logistic regression, support vector machines, neural networks) to tackle it. However, wildlife surveys introduce structured noise (especially under-counting) in the species observations. If unaccounted for, these observation errors systematically bias SDMs. To address the unique challenges of SDM, this paper proposes a framework called StatEcoNet. Specifically, this work employs a graphical generative model in statistical ecology to serve as the skeleton of the proposed computational framework and carefully integrates neural networks under the framework. The advantages of StatEcoNet over related approaches are demonstrated on simulated datasets as well as bird species data. Since SDMs are critical tools for ecological science and natural resource management, StatEcoNet may offer boosted computational and analytical powers to a wide range of applications that have significant social impacts, e.g., the study and conservation of threatened species.
We study phase contributions of wave functions that occur in the evolution of Gaussian surface gravity water wave packets with nonzero initial momenta propagating in the presence and absence of an effective external linear potential. Our approach takes advantage of the fact that in contrast to matter waves, water waves allow us to measure both their amplitudes and phases.
Non-Hermitian systems show a non-Hermitian skin effect, where the bulk states are localized at a boundary of the system under open boundary conditions. In this paper, we study the dependence of the localization length of the eigenstates on the system size in a specific non-Hermitian model with a critical non-Hermitian skin effect, where the energy spectrum undergoes a discontinuous transition in the thermodynamic limit. We analytically show that the eigenstates exhibit remarkable localization, known as scale-free localization, where the localization length is proportional to the system size. Our result gives theoretical support for scale-free localization, which had previously been proposed only numerically.
Recently, [8] proposed that heterogeneity of infectiousness (and susceptibility) across individuals plays a major role in determining the herd immunity threshold (HIT) of infectious diseases. Such heterogeneity has been observed in COVID-19 and is recognized as overdispersion (or "super-spreading"). The model of [8] suggests that super-spreaders contribute significantly to the effective reproduction factor R and are likely to become infected and immune early in the process. Consequently, under R_0 = 3 (the value attributed to COVID-19), the HIT is as low as 5%, in contrast to 67% according to the traditional models [1, 2, 4, 10]. This work follows up on [8] and proposes that heterogeneity of infectiousness (susceptibility) has two "faces" whose mix dramatically affects the HIT: (1) personal-trait-based and (2) event-based infectiousness (susceptibility). The former is a personal trait of specific individuals (super-spreaders) and is nullified once those individuals become immune (as in [8]). The latter is event-based (e.g., cultural super-spreading events) and remains effective throughout the process, even after the super-spreaders become immune. We extend the model of [8] to account for these two factors, analyze it, and conclude that the HIT is very sensitive to the mix between (1) and (2): under R_0 = 3 it can vary between 5% and 67%. Preliminary data from COVID-19 suggest that herd immunity is not reached at 5%. We also address operational aspects and analyze the effects of lockdown strategies on the spread of a disease. We find that herd immunity (and the HIT) is very sensitive to the lockdown type: while some lockdowns positively affect disease blocking and increase herd immunity, others have adverse effects and reduce the herd immunity.
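The sensitivity of the HIT to heterogeneity can be illustrated with a common one-parameter toy formula: for gamma-distributed susceptibility with coefficient of variation cv, selective depletion of the most susceptible individuals lowers the threshold from the classical 1 - 1/R_0 to 1 - (1/R_0)^(1/(1+cv^2)). This is only an illustration of the 5%-67% range discussed above; the two-component personal-trait/event-based model of this work is richer and is not reproduced here.

```python
# Herd-immunity thresholds under R0 = 3: the homogeneous-mixing value versus
# a toy heterogeneous value for gamma-distributed susceptibility with
# coefficient of variation cv (an illustrative textbook-style formula, not
# the two-component model of this work).
def hit_homogeneous(r0):
    return 1.0 - 1.0 / r0

def hit_heterogeneous(r0, cv):
    return 1.0 - (1.0 / r0) ** (1.0 / (1.0 + cv**2))

classic = hit_homogeneous(3.0)           # ~0.667, the traditional 67%
dispersed = hit_heterogeneous(3.0, 3.0)  # ~0.10 for strong overdispersion
```

Varying cv between 0 and large values sweeps the threshold continuously between the two extremes, which is precisely the kind of sensitivity to the heterogeneity mix that the abstract emphasizes.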
The quantum operator $\hat{T}_3$, corresponding to the projection of the toroidal moment on the $z$ axis, admits several self-adjoint extensions when defined on the whole $\mathbb{R}^3$ space. $\hat{T}_3$ commutes with $\hat{L}_3$ (the projection of the angular momentum operator on the $z$ axis) and they have a \textit{natural set of coordinates} $(k,u,\phi)$, where $\phi$ is the azimuthal angle. The second set of \textit{natural coordinates} is $(k_1,k_2,u)$, where $k_1 = k\cos\phi$, $k_2 = k\sin\phi$. In both sets, $\hat{T}_3 = -i\hbar\partial/\partial u$, so any operator that is a function of $k$ and the partial derivatives with respect to the \textit{natural variables} $(k, u, \phi)$ commutes with $\hat{T}_3$ and $\hat{L}_3$. Similarly, operators that are functions of $k_1$, $k_2$, and the partial derivatives with respect to $k_1$, $k_2$, and $u$ commute with $\hat{T}_3$. Therefore, we introduce here the operators $\hat{p}^{(k)} \equiv -i \hbar \partial/\partial k$, $\hat{p}^{(k_1)} \equiv -i \hbar \partial/\partial k_1$, and $\hat{p}^{(k_2)} \equiv -i \hbar \partial/\partial k_2$ and express them in the $(x,y,z)$ coordinates. One may also invert the relations and write the typical operators, like the momentum $\hat{\bf p} \equiv -i\hbar {\bf \nabla}$ or the kinetic energy $\hat{H}_0 \equiv -\hbar^2\Delta/(2m)$, in terms of the ``toroidal'' operators $\hat{T}_3$, $\hat{p}^{(k)}$, $\hat{p}^{(k_1)}$, $\hat{p}^{(k_2)}$, and, possibly, $\hat{L}_3$. The formalism may be applied to specific physical systems, like nuclei, condensed matter systems, or metamaterials. We exemplify it by calculating the momentum operator and the free particle Hamiltonian in terms of \textit{natural coordinates} in a thin torus, where the general relations get considerably simplified.
Electronic states in the gap of a superconductor inherit intriguing many-body properties from the superconductor. Here, we create such in-gap states by manipulating Cr atomic chains on the $\beta$-Bi$_2$Pd superconductor. We find that the topological properties of the in-gap states can vary greatly depending on the crafted spin chain. These systems are an ideal platform for non-trivial topological phases because of the large atom-superconductor interactions and the existence of a large Rashba coupling at the Bi-terminated surface. We study two spin chains, one with atoms spaced two lattice parameters apart and one with a spacing of $\sqrt{2}$ lattice parameters. Of these, only the second is in a topologically non-trivial phase, in correspondence with the spin interactions for this geometry.
Using density functional theory combined with the nonequilibrium Green's function method, we calculate the transport properties of borophene-based nano gas sensors with gold electrodes and develop a comprehensive understanding of how gas molecules, the MoS$_2$ substrate, and the gold electrodes affect the transport properties of borophene. Results show that borophene-based sensors can be used to detect and distinguish CO, NO, NO$_2$ and NH$_3$ gas molecules, that the MoS$_2$ substrate leads to non-linear behavior in the current-voltage characteristic, and that the gold electrodes provide charges to borophene and form a potential barrier, which reduces the current values compared to those of systems without gold electrodes. Our studies not only provide useful information for the computational design of borophene-based gas sensors, but also help in understanding the transport behavior and underlying physics of 2D metallic materials with metal electrodes.
RX J0123.4-7321 is a well-established Be star X-ray binary system (BeXRB) in the Small Magellanic Cloud (SMC). Like many such systems, its variable X-ray emission is driven by the underlying behaviour of the mass donor Be star. Previous work has shown that the optical and X-ray emission were characterised by regular outbursts at the proposed binary period of 119 d. However, around February 2008 the optical behaviour changed substantially, with the previously regular optical outbursts ending. Reported here are new optical (OGLE) and X-ray (Swift) observations covering the period after 2008, which suggest an almost total circumstellar disc loss followed by a gradual recovery. This indicates the probable transition of a Be star to a B star, and back again. However, at the time of the most recent OGLE data (March 2020) the characteristic periodic outbursts had yet to return to their earlier state, indicating that the disc still had some re-building to complete.
Convection has been discussed in the field of accretion discs for several decades, both as a means of angular momentum transport and because of its role in controlling discs' vertical structure via heat transport. If the gas is sufficiently ionized and threaded by a weak magnetic field, convection might interact in non-trivial ways with the magnetorotational instability (MRI). Recently, vertically stratified local simulations of the MRI have reported considerable variation in the angular momentum transport, as measured by the stress to thermal pressure ratio $\alpha$, when convection is thought to be present. Although MRI turbulence can act as a heat source for convection, it is not clear how the two instabilities will interact dynamically. Here we aim to investigate the interplay between them in controlled numerical experiments, and thus isolate the generic features of their interaction. We perform vertically stratified, 3D MHD shearing box simulations with a perfect gas equation of state using the conservative, finite-volume code PLUTO. We find two characteristic outcomes of the interaction between the two instabilities: straight MRI and MRI/convective cycles, with the latter exhibiting alternating phases of convection-dominated flow (during which the turbulent transport is weak) and MRI-dominated flow. During the MRI-dominated phase we find that $\alpha$ is enhanced by nearly an order of magnitude, reaching peak values of $\sim 0.08$. In addition, we find that convection in the non-linear phase takes the form of large-scale and oscillatory convective cells. Convection can also help the MRI persist to lower Rm than it otherwise would. Finally, we discuss how our results help interpret simulations of dwarf novae.
In this paper we study Chow motives whose identity map is killed by a natural number. Examples of such objects were constructed by Gorchinskiy-Orlov. We introduce various invariants of torsion motives, in particular, the $p$-level. We show that this invariant bounds from below the dimension of the variety a torsion motive $M$ is a direct summand of, and imposes restrictions on the motivic and singular cohomology of $M$. We study in more detail the $p$-torsion motives of surfaces, in particular, the Godeaux torsion motive. We show that such motives are in 1-to-1 correspondence with certain Rost cycle submodules of free modules over $H^*_{et}$. This description is parallel to that of mod-$p$ reduced motives of curves.
A very useful identity for Parseval frames for Hilbert spaces was obtained by Balan, Casazza, Edidin, and Kutyniok. In this paper, we obtain a similar identity for Parseval p-approximate Schauder frames for Banach spaces which admit a homogeneous semi-inner product in the sense of Lumer-Giles.
We construct the hydrodynamic theory of coherent collective motion ("flocking") at a solid-liquid interface. The polar order parameter and concentration of a collection of "active" (self-propelled) particles at a planar interface between a passive, isotropic bulk fluid and a solid surface are dynamically coupled to the bulk fluid. We find that such systems are stable, and have long-range orientational order, over a wide range of parameters. When stable, these systems exhibit "giant number fluctuations", i.e., large fluctuations of the number of active particles in a fixed large area. Specifically, these number fluctuations grow as the $3/4$th power of the mean number within the area. Stable systems also exhibit anomalously rapid diffusion of tagged particles suspended in the passive fluid along any direction in a plane parallel to the solid-liquid interface, whereas the diffusivity along the direction perpendicular to that plane is non-anomalous. In other parameter regimes, the system becomes unstable.
As a fundamental physical process with many astrophysical implications, the diffusion of cosmic rays (CRs) is determined by their interaction with magnetohydrodynamic (MHD) turbulence. We consider the magnetic mirroring effect arising from MHD turbulence on the diffusion of CRs. Due to the intrinsic superdiffusion of turbulent magnetic fields, CRs with large pitch angles that undergo mirror reflection, i.e., bouncing CRs, are not trapped between magnetic mirrors, but move diffusively along the turbulent magnetic field, leading to a new type of parallel diffusion, i.e., mirror diffusion. This mirror diffusion is in general slower than the diffusion of non-bouncing CRs with small pitch angles that undergo gyroresonant scattering. The critical pitch angle at the balance between magnetic mirroring and pitch-angle scattering is important for determining the diffusion coefficients of both bouncing and non-bouncing CRs, and their scalings with the CR energy. We find non-universal energy scalings of the diffusion coefficients, depending on the properties of MHD turbulence.