Mel-filterbanks are fixed, engineered audio features which emulate human perception and have been used throughout the history of audio understanding up to today. However, their undeniable qualities are counterbalanced by the fundamental limitations of handmade representations. In this work we show that we can train a single learnable frontend that outperforms mel-filterbanks on a wide range of audio signals, including speech, music, audio events and animal sounds, providing a general-purpose learned frontend for audio classification. To do so, we introduce a new principled, lightweight, fully learnable architecture that can be used as a drop-in replacement for mel-filterbanks. Our system learns all operations of audio feature extraction, from filtering to pooling, compression and normalization, and can be integrated into any neural network at a negligible parameter cost. We perform multi-task training on eight diverse audio classification tasks, and show consistent improvements of our model over mel-filterbanks and previous learnable alternatives. Moreover, our system outperforms the current state-of-the-art learnable frontend on AudioSet, with orders of magnitude fewer parameters.
With the dramatic rise in high-quality galaxy data expected from Euclid and the Vera C. Rubin Observatory, there will be increasing demand for fast high-precision methods for measuring galaxy fluxes. These will be essential for inferring the redshifts of the galaxies. In this paper, we introduce Lumos, a deep learning method to measure photometry from galaxy images. Lumos builds on BKGnet, an algorithm to predict the background and its associated error, and predicts the background-subtracted flux probability density function. We have developed Lumos for data from the Physics of the Accelerating Universe Survey (PAUS), an imaging survey using a 40 narrow-band filter camera (PAUCam). PAUCam images are affected by scattered light, displaying a background noise pattern that can be predicted and corrected for. On average, Lumos increases the SNR of the observations by a factor of 2 compared to an aperture photometry algorithm. It also offers other advantages, such as robustness towards distorting artifacts (e.g. cosmic rays or scattered light), the ability to deblend, and lower sensitivity to uncertainties in the galaxy profile parameters used to infer the photometry. Indeed, the number of flagged photometry outlier observations is reduced from 10% to 2% compared to aperture photometry. Furthermore, with Lumos photometry, the photo-z scatter is reduced by ~10% with the Deepz machine learning photo-z code and the photo-z outlier rate by 20%. The photo-z improvement is lower than expected from the SNR increment; currently, the photometric calibration and outliers in the photometry seem to be the limiting factors.
It is well known that quantum effects may remove the intrinsic singularity point of black holes. Also, the quintessence scalar field is a candidate model for describing the late-time accelerating expansion. Accordingly, Kazakov and Solodukhin considered the back-reaction of the spacetime due to the quantum fluctuations of the background metric to deform the Schwarzschild black hole, which changed the intrinsic singularity of the black hole to a 2-sphere with a radius of the order of the Planck length. Also, Kiselev rewrote the Schwarzschild metric by taking into account the quintessence field in the background. In this study, we consider the quantum-corrected Schwarzschild black hole inspired by Kazakov-Solodukhin's work, and the Schwarzschild black hole surrounded by quintessence deduced by Kiselev, to study the mutual effects of quantum fluctuations and quintessence on accretion onto the black hole. Consequently, the radial component of the 4-velocity and the proper energy density of the accreting fluid have a finite value on the surface of its central 2-sphere due to the presence of quantum corrections. Also, by comparing the accretion parameters in different kinds of black holes, we infer that the presence of a point-like electric charge in the spacetime is somewhat similar to some quantum fluctuations in the background metric.
Batched network coding is a variation of random linear network coding which has low computational and storage costs. In order to adapt to random fluctuations in the number of erasures in individual batches, it is not optimal to recode and transmit the same number of packets for all batches. Different distributed optimization models, called adaptive recoding schemes, were formulated for this purpose. The key component of these optimization problems is the expected value of the rank distribution of a batch at the next network node, also known as the expected rank. In this paper, we put forth a unified adaptive recoding framework with an arbitrary recoding field size. We show that the expected rank functions are concave when the packet loss pattern is a stationary stochastic process, which covers, but is not limited to, independent packet loss and the Gilbert-Elliott packet loss model. Under this concavity assumption, we show that there always exists a solution which not only minimizes the randomness of the number of recoded packets but also tolerates rank distribution errors due to inaccurate measurements or the limited precision of the machine. We provide an algorithm to obtain such an optimal solution, and propose tuning schemes that can turn any feasible solution into a desired optimal solution.
Based on direct numerical simulations with point-like inertial particles transported by homogeneous and isotropic turbulent flows, we present evidence for the existence of the Markov property in Lagrangian turbulence. We show that the Markov property is valid for a finite step size larger than a Stokes number-dependent Einstein-Markov memory length. This enables the description of multi-scale statistics of Lagrangian particles by Fokker-Planck equations, which can be embedded in an interdisciplinary approach linking the statistical description of turbulence with fluctuation theorems of non-equilibrium stochastic thermodynamics and local flow structures.
A dedicated in situ heating setup in a scanning electron microscope (SEM), followed by ex situ atomic force microscopy (AFM) and electron backscatter diffraction (EBSD), is used to characterize the nucleation and early growth stages of Fe-Al intermetallics (IMs) at 596 {\deg}C. Location tracking is used to interpret the further characterization. Ex situ AFM observations reveal a slight shrinkage and out-of-plane protrusion of the IM at the onset of IM nucleation, followed by directional growth. The formed interfacial IM compounds were identified by ex situ EBSD. It is now clearly demonstrated that the {\theta}-phase nucleates first, prior to the diffusion-controlled growth of the {\eta}-phase. The {\theta}-phase prevails in the intermetallic layer.
We present a conceptual study of a large format imaging spectrograph for the Large Submillimeter Telescope (LST) and the Atacama Large Aperture Submillimeter Telescope (AtLAST). Recent observations of high-redshift galaxies indicate the onset of the earliest star formation just a few hundred million years after the Big Bang (i.e., z = 12--15), and LST/AtLAST will provide a unique pathway to uncover spectroscopically-identified first forming galaxies in the pre-reionization era, once it is equipped with a large format imaging spectrograph. We propose a 3-band (200, 255, and 350 GHz), medium resolution (R = 2,000) imaging spectrograph with 1.5 M detectors in total based on the KATANA concept (Karatsu et al. 2019), which exploits technologies of the integrated superconducting spectrometer (ISS) and a large-format imaging array. A 1-deg2 drilling survey (3,500 hr) will capture a large number of [O III] 88 um (and [C II] 158 um) emitters at z = 8--9, and constrain [O III] luminosity functions at z > 12.
Score-based diffusion models synthesize samples by reversing a stochastic process that diffuses data to noise, and are trained by minimizing a weighted combination of score matching losses. The log-likelihood of score-based diffusion models can be tractably computed through a connection to continuous normalizing flows, but log-likelihood is not directly optimized by the weighted combination of score matching losses. We show that for a specific weighting scheme, the objective upper bounds the negative log-likelihood, thus enabling approximate maximum likelihood training of score-based diffusion models. We empirically observe that maximum likelihood training consistently improves the likelihood of score-based diffusion models across multiple datasets, stochastic processes, and model architectures. Our best models achieve negative log-likelihoods of 2.83 and 3.76 bits/dim on CIFAR-10 and ImageNet 32x32 without any data augmentation, on a par with state-of-the-art autoregressive models on these tasks.
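The weighting result summarized above can be made concrete. In the standard notation for score-based SDE models (the symbols below are the usual ones for this setting, not taken verbatim from the paper), training minimizes a weighted score-matching loss, and choosing the so-called likelihood weighting turns that loss into an upper bound on the negative log-likelihood:

```latex
% Weighted score-matching objective over the forward diffusion on [0, T]:
\mathcal{J}(\theta; \lambda) = \frac{1}{2} \int_0^T \lambda(t)\,
  \mathbb{E}_{p_t(\mathbf{x})}\!\left[ \big\| \mathbf{s}_\theta(\mathbf{x}, t)
  - \nabla_{\mathbf{x}} \log p_t(\mathbf{x}) \big\|_2^2 \right] \mathrm{d}t .

% With the likelihood weighting \lambda(t) = g(t)^2, where g is the diffusion
% coefficient of the forward SDE, the objective bounds the negative
% log-likelihood up to a constant C independent of \theta:
-\,\mathbb{E}_{p_0(\mathbf{x})}\!\left[ \log p_\theta(\mathbf{x}) \right]
  \le \mathcal{J}(\theta; g^2) + C .
```

Other weightings may produce better sample quality but do not, in general, bound the likelihood, which is why the specific choice $\lambda(t) = g(t)^2$ matters for maximum likelihood training.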
We study a model system with nematic and magnetic orders, within a channel geometry modelled by an interval, $[-D, D]$. The system is characterised by a tensor-valued nematic order parameter $\mathbf{Q}$ and a vector-valued magnetisation $\mathbf{M}$, and the observable states are modelled as stable critical points of an appropriately defined free energy. In particular, the full energy includes a nemato-magnetic coupling term characterised by a parameter $c$. We (i) derive $L^\infty$ bounds for $\mathbf{Q}$ and $\mathbf{M}$; (ii) prove a uniqueness result in parameter regimes defined by $c$, $D$ and material- and temperature-dependent correlation lengths; (iii) analyse order reconstruction solutions, possessing domain walls, and their stabilities as a function of $D$ and $c$ and (iv) perform numerical studies that elucidate the interplay of $c$ and $D$ for multistability.
We propose a mechanism to substantially rectify radiative heat flow by matching thin films of metal-to-insulator transition materials and polar dielectrics in the electromagnetic near field. By leveraging the distinct scaling behaviors of the local density of states with film thickness for metals and insulators, we theoretically achieve rectification ratios over 140 (a 10-fold improvement over the state of the art) with nanofilms of vanadium dioxide and cubic boron nitride in the parallel-plane geometry at experimentally feasible gap sizes (~100 nm). Our rational design offers relative ease of fabrication, flexible choice of materials, and robustness against deviations from optimal film thicknesses. We expect this work to facilitate the application of thermal diodes in solid-state thermal circuits and energy conversion devices.
We propose a predictive model of the turbulent burning velocity over a wide range of conditions. The model consists of sub-models of the stretch factor and the turbulent flame area. The stretch factor characterizes the flame response to turbulence stretch and incorporates effects of detailed chemistry and transport with a lookup table of laminar counterflow flames. The flame area model captures the area growth based on Lagrangian statistics of propagating surfaces, and considers effects of turbulence length scales and fuel characteristics. The present model predicts the turbulent burning velocity via an algebraic expression without free parameters. It is validated against 285 direct numerical simulation and experimental cases reported by various research groups on planar and Bunsen flames over a wide range of conditions, covering fuels from hydrogen to n-dodecane, pressures from 1 to 20 atm, lean and rich mixtures, turbulence intensity ratios from 0.35 to 110, and turbulence length ratios from 0.5 to 80. The comprehensive comparison shows that the proposed turbulent burning velocity model has overall good agreement over the wide range of conditions, with an average modeling error of 25.3%. Furthermore, the model prediction involves uncertainty quantification for model parameters and chemical kinetics to extend the model applicability.
We propose a distributed approach to training deep convolutional generative adversarial neural network (DC-CGAN) models. Our method reduces the imbalance between generator and discriminator by partitioning the training data according to data labels, and enhances scalability by performing parallel training in which multiple generators are trained concurrently, each focusing on a single data label. Performance is assessed in terms of inception score and image quality on the MNIST, CIFAR10, CIFAR100, and ImageNet1k datasets, showing a significant improvement over state-of-the-art techniques for training DC-CGANs. Weak scaling is attained on all four datasets using up to 1,000 processes and 2,000 NVIDIA V100 GPUs on the OLCF supercomputer Summit.
In this paper we show how inter-cellular molecular communication may change the overall levels of photosynthesis in plants. Individual plant cells respond to external stimuli, such as illumination levels, to regulate their photosynthetic output. Here, we present a mathematical model which shows that by sharing information internally using molecular communication, plants may increase overall photosynthate production. Numerical results show that higher mutual information between cells corresponds to an increase in overall photosynthesis by as much as 25 per cent. This suggests that molecular communication plays a vital role in maximising photosynthesis in plants, and therefore suggests new routes to influence plant development in agriculture and elsewhere.
In this paper, the split common null point problem in two Banach spaces is considered. Then, using the generalized resolvents of maximal monotone operators, the generalized projections, and an infinite family of nonexpansive mappings, a strong convergence theorem for finding a solution of the split common null point problem in two Banach spaces in the presence of a sequence of errors is proved.
Spectrally-efficient secure non-orthogonal multiple access (NOMA) has recently attracted substantial research interest for fifth-generation development. This work explores a crucial security issue in NOMA that stems from the decoding concept of successive interference cancellation. Considering untrusted users, we design a novel secure NOMA transmission protocol to maximize secrecy fairness among users. A new decoding order for two-user NOMA is proposed that provides a positive secrecy rate to both users. With the objective of maximizing secrecy fairness between users under a given power budget constraint, the problem is formulated as minimizing the maximum secrecy outage probability (SOP) between users. In particular, closed-form expressions of the SOP for both users are derived to analyze secrecy performance. The SOP minimization problems are solved using the pseudoconvexity concept, and the optimized power allocation (PA) for each user is obtained. Asymptotic expressions of the SOPs, and the optimal PAs minimizing these approximations, are obtained to gain deeper insights. Further, a globally-optimized power control solution from the secrecy fairness perspective is obtained at low computational complexity, and an asymptotic approximation is obtained to gain analytical insights. Numerical results validate the correctness of the analysis and present insights on the optimal solutions. Finally, we present insights on the global-optimal PA, by which fairness is ensured and gains of about 55.12%, 69.30%, and 19.11% are achieved compared to fixed PA and the individual users' optimal PAs.
Variational inference enables approximate posterior inference of the highly over-parameterized neural networks that are popular in modern machine learning. Unfortunately, such posteriors are known to exhibit various pathological behaviors. We prove that as the number of hidden units in a single-layer Bayesian neural network tends to infinity, the function-space posterior mean under mean-field variational inference actually converges to zero, completely ignoring the data. This is in contrast to the true posterior, which converges to a Gaussian process. Our work provides insight into the over-regularization of the KL divergence in variational inference.
We present TransitFit, an open-source Python~3 package designed to fit exoplanetary transit light-curves for transmission spectroscopy studies (Available at https://github.com/joshjchayes/TransitFit and https://github.com/spearnet/TransitFit, with documentation at https://transitfit.readthedocs.io/). TransitFit employs nested sampling to offer efficient and robust multi-epoch, multi-wavelength fitting of transit data obtained from one or more telescopes. TransitFit allows per-telescope detrending to be performed simultaneously with parameter fitting, including the use of user-supplied detrending algorithms. Host limb darkening can be fitted either independently ("uncoupled") for each filter or combined ("coupled") using prior conditioning from the PHOENIX stellar atmosphere models. For this, TransitFit uses the Limb Darkening Toolkit (LDTk) together with filter profiles, including user-supplied filter profiles. We demonstrate the application of TransitFit in three different contexts. First, we model SPEARNET broadband optical data of the low-density hot-Neptune WASP-127~b. The data were obtained from a globally-distributed network of 0.5m--2.4m telescopes. We find clear improvement in our broadband results using the coupled mode over the uncoupled mode, when compared against the higher spectral resolution GTC/OSIRIS transmission spectrum obtained by Chen et al. (2018). Using TransitFit, we fit 26 transit observations by TESS to recover improved ephemerides of the hot-Jupiter WASP-91~b and a transit depth determined to a precision of 170~ppm. Finally, we use TransitFit to conduct an investigation into the contested presence of TTV signatures in WASP-126~b using 126 transits observed by TESS, concluding that there is no statistically significant evidence for such signatures from observations spanning 31 TESS sectors.
The ongoing Coronavirus disease 2019 (COVID-19) is a major crisis that has significantly affected the healthcare sector and global economies, which has made it the main subject of various fields of scientific and technical research. To properly understand and control this new epidemic, mathematical modelling is a very effective tool that can illustrate the mechanisms of its propagation. In this regard, the use of compartmental models is the most prominent approach adopted in the literature to describe the dynamics of COVID-19. Along the same line, we aim in this study to generalize and improve many existing works devoted to analysing the behaviour of this epidemic. Precisely, we propose an SQEAIHR epidemic system for Coronavirus. Our constructed model is enriched by taking into account media intervention and vital dynamics. Using the next-generation matrix method, the theoretical basic reproduction number $R_0$ is obtained for COVID-19. Based on some nonstandard and generalized analytical techniques, the local and global stability of the disease-free equilibrium are proven when $R_0 < 1$. Moreover, in the case of $R_0 > 1$, the uniform persistence of the COVID-19 model is also shown. In order to better adapt our epidemic model to reality, the randomness factor is taken into account by considering proportional white noise, which leads to a well-posed stochastic model. Under appropriate conditions, interesting asymptotic properties are proved, namely extinction and persistence in the mean. The theoretical results show that the dynamics of the perturbed COVID-19 model are determined by parameters that are closely related to the magnitude of the stochastic noise. Finally, we present some numerical illustrations to confirm our theoretical results and to show the impact of media intervention and quarantine strategies.
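As a concrete illustration of the compartmental approach described above, here is a minimal deterministic sketch. It uses a plain SEIR model rather than the full SQEAIHR system (no quarantine, asymptomatic, hospitalized, media-intervention, or stochastic terms), and all rates, initial conditions, and the function name `simulate_seir` are illustrative assumptions, not values from the paper:

```python
# Minimal deterministic SEIR sketch (a simplified stand-in for the paper's
# SQEAIHR model); all parameter values below are illustrative, not fitted.

def simulate_seir(beta=0.3, sigma=0.2, gamma=0.1, days=300, dt=0.1, i0=1e-4):
    """Forward-Euler integration of the SEIR equations on a unit population.

    beta:  transmission rate, sigma: incubation rate (E -> I),
    gamma: recovery rate (I -> R). Returns final (S, E, I, R) fractions.
    """
    s, e, i, r = 1.0 - i0, 0.0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i          # force of infection on unit population
        ds = -new_inf
        de = new_inf - sigma * e        # exposed become infectious at rate sigma
        di = sigma * e - gamma * i      # infectious recover at rate gamma
        dr = gamma * i
        s, e, i, r = s + ds * dt, e + de * dt, i + di * dt, r + dr * dt
    return s, e, i, r

# For SEIR, the next-generation matrix method gives R0 = beta / gamma.
r0 = 0.3 / 0.1  # R0 = 3 for the illustrative rates above
```

Because the four derivatives sum to zero, the total population is conserved at every Euler step, which is a quick sanity check for any hand-rolled compartmental integrator.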
In this work, we generalize the reaction-diffusion equation in statistical physics, the Schr\"odinger equation in quantum mechanics, and the Helmholtz equation in paraxial optics into neural partial differential equations (NPDE), which can be considered fundamental equations in the field of artificial intelligence research. We use the finite difference method to discretize the NPDE and find numerical solutions, from which the basic building blocks of deep neural network architecture, including the multi-layer perceptron, convolutional neural networks and recurrent neural networks, are generated. Learning strategies, such as adaptive moment estimation, L-BFGS, pseudoinverse learning algorithms and partial differential equation constrained optimization, are also presented. We believe it is significant that this work presents a clear physical picture of interpretable deep neural networks, which makes it possible to apply them to analog computing device design and paves the road to physical artificial intelligence.
Generative modeling has recently shown great promise in computer vision, but it has mostly focused on synthesizing visually realistic images. In this paper, motivated by multi-task learning of shareable feature representations, we consider a novel problem of learning a shared generative model that is useful across various visual perception tasks. Correspondingly, we propose a general multi-task oriented generative modeling (MGM) framework, by coupling a discriminative multi-task network with a generative network. While it is challenging to synthesize both RGB images and pixel-level annotations in multi-task scenarios, our framework enables us to use synthesized images paired with only weak annotations (i.e., image-level scene labels) to facilitate multiple visual tasks. Experimental evaluation on challenging multi-task benchmarks, including NYUv2 and Taskonomy, demonstrates that our MGM framework improves the performance of all the tasks by large margins, consistently outperforming state-of-the-art multi-task approaches.
Understanding the physics of strongly correlated electronic systems has been a central issue in condensed matter physics for decades. In transition metal oxides, strong correlations characteristic of narrow $d$ bands are at the origin of such remarkable properties as the Mott gap opening, enhanced effective mass, and anomalous vibronic coupling, to mention a few. SrVO$_3$, with V$^{4+}$ in a $3d^1$ electronic configuration, is the simplest example of a 3D correlated metallic electronic system. Here, we focus on the observation of a (roughly) quadratic temperature dependence of the inverse electron mobility of this seemingly simple system, which is an intriguing property shared by other metallic oxides. The systematic analysis of electronic transport in SrVO$_3$ thin films discloses the limitations of the simplest picture of e-e correlations in a Fermi liquid; instead, we show that the quasi-2D topology of the Fermi surface and a strong electron-phonon coupling, contributing to dress carriers with a phonon cloud, play a pivotal role in the reported electron spectroscopic, optical, thermodynamic and transport data. The picture that emerges is not restricted to SrVO$_3$ but can be shared with other $3d$ and $4d$ metallic oxides.
Let $ E \subset \mathbb{R}^2 $ be a finite set, and let $ f : E \to [0,\infty) $. In this paper, we address the algorithmic aspects of nonnegative $C^2$ interpolation in the plane. Specifically, we provide an efficient algorithm to compute a nonnegative $C^2(\mathbb{R}^2)$ extension of $ f $ with norm within a universal constant factor of the least possible. We also provide an efficient algorithm to approximate the trace norm.
In the past decades, the revolutionary advances of Machine Learning (ML) have shown a rapid adoption of ML models into software systems of diverse types. Such Machine Learning Software Applications (MLSAs) are gaining importance in our daily lives. As such, the Quality Assurance (QA) of MLSAs is of paramount importance. Several research efforts are dedicated to determining the specific challenges we can face while adopting ML models into software systems. However, we are aware of no research that offers a holistic view of the distribution of those ML quality assurance challenges across the various phases of the software development life cycle (SDLC). This paper conducts an in-depth literature review of a large volume of research papers focused on the quality assurance of ML models. We developed a taxonomy of MLSA quality assurance issues by mapping the various ML adoption challenges across different phases of the SDLC. We provide recommendations and research opportunities to improve SDLC practices based on the taxonomy. This mapping can help prioritize quality assurance efforts of MLSAs where the adoption of ML models can be considered crucial.
Giant Radio Galaxies (GRGs) are the largest single structures in the Universe. Exhibiting extended radio morphology, their projected sizes range from 0.7 Mpc up to 4.9 Mpc. LOFAR has opened a new window on the discovery and investigation of GRGs and, despite the hundreds that are today known, their main growth catalyst is still debated. One natural explanation for the exceptional size of GRGs is their old age. In this context, hard X-ray selected GRGs show evidence of restarting activity, with the giant radio lobes being mostly disconnected from the nuclear source, if any. In this paper, we present the serendipitous discovery of a distant ($z=0.629$), medium X-ray selected GRG in the Bo\"otes field. High-quality, deep Chandra and LOFAR data allow a robust study of the connection between the nucleus and the lobes, at a larger redshift so far inaccessible to coded-mask hard X-ray instruments. The radio morphology of the GRG presented in this work does not show evidence for restarted activity, and the nuclear radio core spectrum does not appear to be GPS-like. On the other hand, the X-ray properties of the new GRG are perfectly consistent with the ones previously studied with Swift/BAT and INTEGRAL at lower redshift. In particular, the bolometric luminosity measured from the X-ray spectrum is a factor of six larger than the one derived from the radio lobes, although the large uncertainties make them formally consistent at $1\sigma$. Finally, the moderately dense environment around the GRG, traced by the spatial distribution of galaxies, supports recent findings that the growth of GRGs is not primarily driven by underdense environments.
By employing a pseudo-orthonormal coordinate-free approach, the Dirac equation for particles in the Kerr--Newman spacetime is separated into its radial and angular parts. In the massless case, to which special attention is given, the general Heun-type equations turn into their confluent form. We show how one recovers some results previously obtained in the literature by other means.
Although many techniques have been applied to matrix factorization (MF), they may not fully exploit the feature structure. In this paper, we incorporate the grouping effect into MF and propose a novel method called Robust Matrix Factorization with Grouping effect (GRMF). The grouping effect is a generalization of the sparsity effect, which conducts denoising by clustering similar values around multiple centers instead of just around 0. Compared with existing algorithms, the proposed GRMF can automatically learn the grouping structure and sparsity in MF without prior knowledge, by introducing a naturally adjustable non-convex regularization to achieve simultaneous sparsity and grouping effect. Specifically, GRMF uses an efficient alternating minimization framework to perform MF, in which the original non-convex problem is first converted into a convex problem through Difference-of-Convex (DC) programming, and then solved by the Alternating Direction Method of Multipliers (ADMM). In addition, GRMF can be easily extended to Non-negative Matrix Factorization (NMF) settings. Extensive experiments have been conducted using real-world data sets with outliers and contaminated noise, and the experimental results show that GRMF achieves improved performance and robustness compared to five benchmark algorithms.
The increasing market penetration of electric vehicles (EVs) may pose significant electricity demand on power systems. This electricity demand is affected by the inherent uncertainties of EVs' travel behavior, which makes forecasting the daily charging demand (CD) very challenging. In this project, we use the National Household Travel Survey (NHTS) data to form sequences of trips, and develop machine learning models to predict the parameters of a driver's next trip, including trip start time, end time, and distance. These parameters are later used to model the temporal charging behavior of EVs. The simulation results show that the proposed modeling can effectively estimate the daily CD pattern based on the travel behavior of EVs, and that simple machine learning techniques can forecast the travel parameters with acceptable accuracy.
We argue that neutrino oscillations at JUNO offer a unique opportunity to study Sorkin's triple-path interference, which is predicted to be zero in canonical quantum mechanics by virtue of the Born rule. In particular, we compute the expected bounds on triple-path interference at JUNO and demonstrate that they are comparable to those already available from electromagnetic probes. Furthermore, the neutrino probe of the Born rule is much more direct due to an intrinsic independence from any boundary conditions, whereas such dependence on boundary conditions is always present in the case of electromagnetic probes. Thus, neutrino oscillations present an ideal probe of this aspect of the foundations of quantum mechanics.
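Sorkin's third-order interference term is built from detection probabilities with different combinations of paths open: I3 = P123 - P12 - P13 - P23 + P1 + P2 + P3, and the Born rule P(S) = |sum of amplitudes over the open paths in S|^2 forces it to vanish identically. A small numerical check of this cancellation (the amplitude values and function names are illustrative, not from the paper):

```python
# Numerical check that the Sorkin triple-path interference term vanishes
# under the Born rule P(S) = |sum of amplitudes over open paths S|^2.
# The amplitude values below are arbitrary illustrative complex numbers.

def born_prob(amps):
    """Born-rule detection probability for a set of open-path amplitudes."""
    return abs(sum(amps)) ** 2

def sorkin_i3(a1, a2, a3):
    """Third-order (triple-path) interference; identically zero under the Born rule."""
    p = born_prob
    return (p([a1, a2, a3])
            - p([a1, a2]) - p([a1, a3]) - p([a2, a3])
            + p([a1]) + p([a2]) + p([a3]))

a1, a2, a3 = 0.6 + 0.2j, -0.3 + 0.5j, 0.1 - 0.4j
assert abs(sorkin_i3(a1, a2, a3)) < 1e-12
```

Expanding the squared moduli shows why: the pairwise cross terms 2 Re(a_i conj(a_j)) generated by the triple-slit probability are exactly removed by the subtracted two-slit probabilities, so any nonzero measured I3 would signal a departure from the Born rule.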
This paper is concerned with the asymptotic analysis of sojourn times of random fields with continuous sample paths. Under a very general framework we show that there is an interesting relationship between the tail asymptotics of sojourn times and that of the supremum. Moreover, we establish the uniform double-sum method to derive the tail asymptotics of sojourn times. In the literature, based on the pioneering research of S. Berman, sojourn times have been utilised to derive the tail asymptotics of the supremum of Gaussian processes. In this paper we show that the opposite direction is even more fruitful: knowing the asymptotics of the supremum of random processes and fields (in particular Gaussian), it is possible to establish the asymptotics of their sojourn times. We illustrate our findings considering i) two-dimensional Gaussian random fields, ii) chi-processes generated by stationary Gaussian processes and iii) stationary Gaussian queueing processes.
We consider the setting where the nodes of an undirected, connected network collaborate to solve a shared objective modeled as the sum of smooth functions. We assume that each summand is privately known by a unique node. NEAR-DGD is a distributed first order method which permits adjusting the amount of communication between nodes relative to the amount of computation performed locally in order to balance convergence accuracy and total application cost. In this work, we generalize the convergence properties of a variant of NEAR-DGD from the strongly convex to the nonconvex case. Under mild assumptions, we show convergence to minimizers of a custom Lyapunov function. Moreover, we demonstrate that the gap between those minimizers and the second order stationary solutions of the original problem can become arbitrarily small depending on the choice of algorithm parameters. Finally, we accompany our theoretical analysis with a numerical experiment to evaluate the empirical performance of NEAR-DGD in the nonconvex setting.
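The compute/communicate trade-off that NEAR-DGD exposes can be sketched on a toy problem. The snippet below is a scalar, three-node illustration under assumptions of my own (quadratic local costs f_i(x) = 0.5*(x - a_i)^2, a fixed doubly stochastic mixing matrix, and the names `near_dgd` and `mix`), not the paper's implementation; the parameter `comm_rounds` plays the role of the adjustable amount of communication per gradient step:

```python
# Toy NEAR-DGD-style iteration on a 3-node network with scalar decision
# variables and quadratic local costs f_i(x) = 0.5 * (x - a_i)^2, whose
# global minimizer is mean(a). Mixing matrix and step size are illustrative.

A = [1.0, 3.0, 8.0]                      # private data: f_i is minimized at a_i
W = [[0.50, 0.25, 0.25],                 # doubly stochastic mixing matrix
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]

def mix(x, rounds):
    """Apply `rounds` consensus (communication) steps: x <- W x."""
    for _ in range(rounds):
        x = [sum(W[i][j] * x[j] for j in range(3)) for i in range(3)]
    return x

def near_dgd(alpha=0.1, iters=300, comm_rounds=5):
    """Alternate local gradient steps with `comm_rounds` mixing steps."""
    x = [0.0, 0.0, 0.0]
    for _ in range(iters):
        y = [x[i] - alpha * (x[i] - A[i]) for i in range(3)]  # local computation
        x = mix(y, comm_rounds)                               # communication
    return x
```

Increasing `comm_rounds` drives the iterates closer to exact consensus (and hence to the minimizer of the sum) at the price of more communication per iteration, which is the balance the abstract refers to.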
We calculate models of stellar evolution for very massive stars and include the effects of modified gravity to investigate the influence on the physical properties of blue supergiant stars and their use as extragalactic distance indicators. With shielding and fifth force parameters in a similar range as in previous studies of Cepheid and tip of the red giant branch (TRGB) stars, we find clear effects on stellar luminosity and flux-weighted gravity. The relationship between flux-weighted gravity, g_F = g/Teff^4, and bolometric magnitude M_bol (FGLR), which has been used successfully for accurate distance determinations, is systematically affected. While the stellar evolution FGLRs show a systematic offset from the observed relation, we can use the differential shifts between models with Newtonian and modified gravity to estimate the influence on FGLR distance determinations. Modified gravity leads to a distance increase of 0.05 to 0.15 magnitudes in distance modulus. These changes are comparable to the ones found for Cepheid stars. We compare observed FGLR and TRGB distances of nine galaxies to constrain the free parameters of modified gravity. Not accounting for systematic differences between TRGB and FGLR distances, shielding parameters of 5*10^-7 and 10^-6 and fifth force parameters of 1/3 and 1 can be ruled out with about 90% confidence. Allowing for potential systematic offsets between TRGB and FGLR distances, no determination is possible for a shielding parameter of 10^-6. For 5*10^-7, a fifth force parameter of 1 can be ruled out at 92% confidence, but 1/3 is unlikely only at the 60% level.
The superior performance of CNNs on medical image analysis depends heavily on annotation quality, such as the number of labeled images, the source of the images, and the experts' experience. Annotation requires great expertise and labour. To deal with high inter-rater variability, the study of imperfect labels has great significance in medical image segmentation tasks. In this paper, we present a novel cascaded robust learning framework for chest X-ray segmentation with imperfect annotations. Our model consists of three independent networks, each of which can effectively learn useful information from its peer networks. The framework includes two stages. In the first stage, we select clean annotated samples via a model committee setting; the networks are trained by minimizing a segmentation loss on the selected clean samples. In the second stage, we design a joint optimization framework with label correction to gradually correct wrong annotations and improve network performance. We conduct experiments on the public chest X-ray image dataset collected by Shenzhen Hospital. The results show that our method achieves a significant improvement in segmentation accuracy compared to previous methods.
The huge amount of data produced in the fifth-generation (5G) networks not only brings new challenges to the reliability and efficiency of mobile devices but also drives rapid development of new storage techniques. With the benefits of fast access speed and high reliability, NAND flash memory has become a promising storage solution for 5G networks. In this paper, we investigate a protograph-coded bit-interleaved coded modulation with iterative detection and decoding (BICM-ID) utilizing irregular mapping (IM) in multi-level-cell (MLC) NAND flash-memory systems. First, we propose an enhanced protograph-based extrinsic information transfer (EPEXIT) algorithm to facilitate the analysis of protograph codes in IM-BICM-ID systems. With the use of the EPEXIT algorithm, a simple design method is conceived for the construction of a family of high-rate protograph codes, called irregular-mapped accumulate-repeat-accumulate (IMARA) codes, which possess both excellent decoding thresholds and the linear-minimum-distance-growth property. Furthermore, motivated by the voltage-region iterative gain characteristics of IM-BICM-ID systems, a novel read-voltage optimization scheme is developed to acquire accurate read-voltage levels, thus minimizing the decoding thresholds of protograph codes. Theoretical analyses and error-rate simulations indicate that the proposed IMARA-aided IM-BICM-ID scheme and the proposed read-voltage optimization scheme remarkably improve the convergence and decoding performance of flash-memory systems. Thus, the proposed protograph-coded IM-BICM-ID flash-memory systems can be viewed as a reliable and efficient storage solution for new-generation mobile networks with massive data-storage requirements.
We propose and experimentally demonstrate a novel interference fading suppression method for phase-sensitive optical time domain reflectometry (Phi-OTDR) using space-division multiplexed (SDM) pulse probes in few-mode fiber. The SDM probes consist of multiple different modes; three spatial modes (LP01, LP11a and LP11b) are used in this work as a proof of concept. First, the Rayleigh backscattering light of the different modes is experimentally characterized, and it turns out that the waveforms of the Phi-OTDR traces of distinct modes all differ from each other. Thanks to the spatial difference of the fading positions of distinct modes, multiple probes from spatially multiplexed modes can be used to suppress the interference fading in Phi-OTDR. Then, the performances of the Phi-OTDR systems using a single probe and multiple probes are evaluated and compared. Specifically, statistical analysis shows that both the fading probabilities over fiber length and over time are reduced significantly by using multiple SDM probes, which verifies the significant performance improvement in fading suppression. The proposed method does not require complicated frequency or phase modulation, and thus has the advantages of simplicity, effectiveness, and high reliability.
Nonuniform fast Fourier transforms dominate the computational cost in many applications including image reconstruction and signal processing. We thus present a general-purpose GPU-based CUDA library for type 1 (nonuniform to uniform) and type 2 (uniform to nonuniform) transforms in dimensions 2 and 3, in single or double precision. It achieves high performance for a given user-requested accuracy, regardless of the distribution of nonuniform points, via cache-aware point reordering, and load-balanced blocked spreading in shared memory. At low accuracies, this gives on-GPU throughputs around $10^9$ nonuniform points per second, and (even including host-device transfer) is typically 4-10$\times$ faster than the latest parallel CPU code FINUFFT (at 28 threads). It is competitive with two established GPU codes, being up to 90$\times$ faster at high accuracy and/or type 1 clustered point distributions. Finally we demonstrate a 5-12$\times$ speedup versus CPU in an X-ray diffraction 3D iterative reconstruction task at $10^{-12}$ accuracy, observing excellent multi-GPU weak scaling up to one rank per GPU.
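For readers unfamiliar with the transform itself, a naive O(NM) direct evaluation of a type-2 transform makes a handy correctness reference for any fast library. The sign and mode-range conventions below are assumptions; FINUFFT and cuFINUFFT document their own:

```python
import numpy as np

# Naive reference implementation of a type-2 NUFFT (uniform -> nonuniform):
# f_j = sum_k c_k * exp(i * k * x_j), evaluated by direct summation.
# Useful only for validating a fast implementation on small problems.

def nufft2_direct(c, x):
    k = np.arange(len(c))                    # modes k = 0..N-1 (a convention choice)
    return np.exp(1j * np.outer(x, k)) @ c   # (M x N) matrix times coefficients

rng = np.random.default_rng(1)
N = 32
c = rng.normal(size=N) + 1j * rng.normal(size=N)

# Sanity check: on the uniform grid x_j = 2*pi*j/N the transform reduces
# to an unnormalized inverse DFT.
x_uniform = 2 * np.pi * np.arange(N) / N
f = nufft2_direct(c, x_uniform)
print(np.allclose(f, N * np.fft.ifft(c)))    # True
```

A fast library is then validated by comparing its output against this direct sum at the requested tolerance, for genuinely nonuniform points.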
In this paper, we discuss the properties of the generating functions of spin Hurwitz numbers. In particular, for spin Hurwitz numbers with arbitrary ramification profiles, we construct weighted sums which are given by Orlov's hypergeometric solutions of the 2-component BKP hierarchy. We derive closed algebraic formulas for the correlation functions associated with these tau-functions, and under reasonable analytical assumptions we prove the loop equations (the blobbed topological recursion). Finally, we prove a version of topological recursion for the spin Hurwitz numbers with the spin completed cycles (a generalized version of the Giacchetto--Kramer--Lewa\'nski conjecture).
We study inverse problems for the nonlinear wave equation $\square_g u + w(x,u, \nabla_g u) = 0$ in a Lorentzian manifold $(M,g)$ with boundary, where $\nabla_g u$ denotes the gradient and $w(x,u, \xi)$ is smooth and quadratic in $\xi$. Under appropriate assumptions, we show that the conformal class of the Lorentzian metric $g$ can be recovered up to diffeomorphisms, from the knowledge of the Neumann-to-Dirichlet map. With some additional conditions, we can recover the metric itself up to diffeomorphisms. Moreover, we can recover the second and third quadratic forms in the Taylor expansion of $w(x,u, \xi)$ with respect to $u$ up to null forms.
This article discusses the physical and kinematical characteristics of planetary nebulae accompanying PG1159 central stars. The study is based on the parallax and proper motion measurements recently provided by the Gaia space mission. Two approaches were used to investigate the kinematical properties of the sample. The results reveal that most of the studied nebulae arise from progenitor stars in the mass range $0.9-1.75$\,M$_{\odot}$. Furthermore, they tend to reside within the Galactic thick disk, moving with an average peculiar velocity of $61.7\pm19.2$\,km\,s$^{-1}$ at a mean vertical height of $469\pm79$ pc. The locations of the PG1159 stars on the H-R diagram indicate that they have an average final stellar mass and evolutionary age of $0.58\pm0.08$\,M$_{\odot}$ and $(25.5\pm5.3)\times10^3$ yr, respectively. We find good agreement between the mean evolutionary age of the PG1159 stars and the mean dynamical age of their companion planetary nebulae ($(28.0\pm6.4)\times10^3$ yr).
The pentakis dodecahedron, the dual of the truncated icosahedron, consists of 60 edge-sharing triangles. It has 20 six- and 12 five-fold coordinated vertices, with the former forming a dodecahedron, and each of the latter connected to the vertices of one of the 12 pentagons of the dodecahedron. When spins mounted on the vertices of the pentakis dodecahedron interact according to the nearest-neighbor antiferromagnetic Heisenberg model, the two different vertex types necessitate the introduction of two exchange constants. As the relative strength of the two constants is varied, the molecule interpolates between the dodecahedron and a molecule consisting only of quadrangles. The competition between the two exchange constants, frustration, and an external magnetic field results in a multitude of ground-state magnetization and susceptibility discontinuities. At the classical level, the maximum of ten magnetization and one susceptibility discontinuities occurs when the 12 five-fold vertices interact with the dodecahedron spins at approximately one-half the strength of the interaction among the dodecahedron spins. When the two interactions are approximately equal in strength, the number of discontinuities is also maximized, with three magnetization and eight susceptibility discontinuities. At the full quantum limit, where the magnitude of the spins equals 1/2, there can be up to three ground-state magnetization jumps in which the total z spin component changes by \Delta S^z=2, even though quantum fluctuations rarely allow discontinuities of the magnetization. The full quantum case also supports a \Delta S^z=3 discontinuity. Frustration also results in nonmagnetic states inside the singlet-triplet gap. These results make the pentakis dodecahedron the molecule with the most discontinuous magnetic response from the quantum to the classical level.
Dynamical scaling is an asymptotic property typical of the dynamics of first-order phase transitions in physical systems and related to self-similarity. Based on the integral representation for the marginal probabilities of the fractional non-homogeneous Poisson process introduced by Leonenko et al. (2017), which generalises the standard fractional Poisson process, we prove dynamical scaling under fairly mild conditions. Our result also covers the special case of the standard fractional Poisson process.
The performance of superconducting radio-frequency (SRF) cavities depends on the niobium surface condition. Recently, various heat-treatment methods have been investigated to achieve unprecedentedly high quality factors (Q) and high accelerating fields (E). We report the influence of a new baking process, called furnace baking, on the Q-E behavior of 1.3 GHz SRF cavities. Furnace baking is performed as the final step of the cavity surface treatment; the cavities are heated in a vacuum furnace for 3 h, followed by high-pressure rinsing and radio-frequency measurement. This method is simpler and potentially more reliable than previously reported heat-treatment methods, and is therefore easier to apply to SRF cavities. We find that the quality factor is increased after furnace baking at temperatures ranging from 300°C to 400°C, while a strong decrease of the quality factor at high accelerating field is observed after furnace baking at temperatures ranging from 600°C to 800°C. We find significant differences in the surface resistance for the various processing temperatures.
Sound Event Detection and Audio Classification tasks are traditionally addressed through time-frequency representations of audio signals such as spectrograms. However, the emergence of deep neural networks as efficient feature extractors has enabled the direct use of audio signals for classification purposes. In this paper, we attempt to recognize musical instruments in polyphonic audio by only feeding their raw waveforms into deep learning models. Various recurrent and convolutional architectures incorporating residual connections are examined and parameterized in order to build end-to-end classifiers with low computational cost and only minimal preprocessing. We obtain competitive classification scores and useful instrument-wise insight through the IRMAS test set, utilizing a parallel CNN-BiGRU model with multiple residual connections, while maintaining a significantly reduced number of trainable parameters.
We study the topological properties of a spin-orbit coupled Hofstadter model on the Kagome lattice. The model is time-reversal invariant and realizes a $\mathbb{Z}_2$ topological insulator as a result of artificial gauge fields. We develop topological arguments to describe this system showing three inequivalent sites in a unit cell and a flat band in its energy spectrum in addition to the topological dispersive energy bands. We show the stability of the topological phase towards spin-flip processes and different types of on-site potentials. In particular, we also address the situation where on-site energies may differ inside a unit cell. Moreover, a staggered potential on the lattice may realize topological phases for the half-filled situation. Another interesting result is the occurrence of a topological phase for large on-site energies. To describe topological properties of the system we use a numerical approach based on the twisted boundary conditions and we develop a mathematical approach, related to smooth fields.
Cross-flow turbines convert kinetic power in wind or water currents to mechanical power. Unlike axial-flow turbines, the influence of geometric parameters on turbine performance is not well-understood, in part because there are neither generalized analytical formulations nor inexpensive, accurate numerical models that describe their fluid dynamics. Here, we experimentally investigate the effect of aspect ratio - the ratio of the blade span to rotor diameter - on the performance of a straight-bladed cross-flow turbine in a water channel. To isolate the effect of aspect ratio, all other non-dimensional parameters are held constant, including the relative confinement, Froude number, and Reynolds number. The coefficient of performance is found to be invariant for the range of aspect ratios tested (0.95 - 1.63), which we ascribe to minimal blade-support interactions for this turbine design. Finally, a subset of experiments is repeated without controlling for the Froude number and the coefficient of performance is found to increase, a consequence of Froude number variation that could mistakenly be ascribed to aspect ratio. This highlights the importance of rigorous experimental design when exploring the effect of geometric parameters on cross-flow turbine performance.
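The non-dimensional quantities held constant in the experiment follow standard definitions, which can be sketched as follows. All numerical values here are hypothetical, not measurements from the study:

```python
import numpy as np

# Standard non-dimensional parameters for a cross-flow turbine in a water
# channel.  Every number below is a made-up illustration.
rho = 998.0          # water density [kg/m^3]
nu = 1.0e-6          # kinematic viscosity of water [m^2/s]
g = 9.81             # gravitational acceleration [m/s^2]
U, h = 1.0, 0.6      # free-stream speed [m/s], channel depth [m]
D, H = 0.2, 0.26     # rotor diameter, blade span [m]
P_mech = 8.0         # measured shaft power [W] (hypothetical)

aspect_ratio = H / D                    # blade span / rotor diameter
Fr = U / np.sqrt(g * h)                 # depth-based Froude number
Re = U * D / nu                         # diameter-based Reynolds number
A = H * D                               # projected rotor area [m^2]
C_P = P_mech / (0.5 * rho * A * U**3)   # coefficient of performance

print(round(aspect_ratio, 2), round(Fr, 3), round(Re), round(C_P, 3))
```

The point of the abstract's experimental design is that changing H (and hence the aspect ratio) while holding Fr, Re, and the confinement fixed isolates the geometric effect; the final sentence warns what happens when Fr is allowed to drift instead.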
After more than a decade of intense focus on automated vehicles, we are still facing huge challenges for the vision of fully autonomous driving to become a reality. The same "disillusionment" is true in many other domains, in which autonomous Cyber-Physical Systems (CPS) could considerably help to overcome societal challenges and be highly beneficial to society and individuals. Taking the automotive domain, i.e. highly automated vehicles (HAV), as an example, this paper sets out to summarize the major challenges that are still to be overcome for achieving safe, secure, reliable and trustworthy highly automated or autonomous CPS. We restrict ourselves to technical challenges, acknowledging the importance of (legal) regulations, certification, standardization, ethics, and societal acceptance, to name but a few, without delving deeper into them, as this is beyond the scope of this paper. Four challenges have been identified as the main obstacles to realizing HAV: realization of continuous, post-deployment system improvement; handling of uncertainties and incomplete information; verification of HAV with machine-learning components; and prediction. Each of these challenges is described in detail, including sub-challenges and, where appropriate, possible approaches to overcome them. By working together in a common effort between industry and academia and focusing on these challenges, the authors hope to contribute to overcoming the "disillusionment" around realizing HAV.
We present an integrated design to precisely measure optical frequency using weak value amplification with a multi-mode interferometer. The technique involves introducing a weak perturbation to the system and then post-selecting the data in such a way that the signal is amplified without amplifying the technical noise, as has previously been demonstrated in a free-space setup. We demonstrate the advantages of a Bragg grating with two band gaps for obtaining simultaneous, stable high transmission and high dispersion. We numerically model the interferometer in order to demonstrate the amplification effect. The device is shown to have advantages over both the free-space implementation and other methods of measuring optical frequency on a chip, such as an integrated Mach-Zehnder interferometer.
In this paper, we introduce a single acceptance sampling inspection plan (SASIP) for the transmuted Rayleigh (TR) distribution when the lifetime experiment is truncated at a prefixed time. We establish the proposed plan for different choices of confidence level, acceptance number, and ratio of true mean lifetime to specified mean lifetime. The minimum sample size necessary to ensure a certain specified lifetime is obtained. Operating characteristic (OC) values and the producer's risk of the proposed plan are presented. Two real-life examples are presented to show the applicability of the proposed SASIP.
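The minimum-sample-size logic of a single acceptance sampling plan can be sketched generically. Here the per-item failure probability p is a stand-in: in the paper's setting it would come from the transmuted Rayleigh CDF evaluated at the truncation time, which is not reproduced here:

```python
from math import comb

# Generic minimum-sample-size computation for a truncated life test
# (standard single-sampling logic; the value of p below is only a
# stand-in, not derived from the TR distribution).

def min_sample_size(p, c, conf):
    """Smallest n with P(<= c failures among n items) <= 1 - conf."""
    n = c + 1
    while sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(c + 1)) > 1 - conf:
        n += 1
    return n

# Example: 5% chance an item fails before the truncation time,
# acceptance number c = 2, 95% confidence of rejecting a bad lot.
print(min_sample_size(0.05, 2, 0.95))
```

Tabulating this over confidence levels, acceptance numbers, and mean-lifetime ratios is exactly the kind of table such plans report.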
In this paper, we study the periodicity structure of finite field linear recurring sequences whose period is not necessarily maximal and determine necessary and sufficient conditions for the characteristic polynomial~\(f\) to have exactly two periods in the sense that the period of any sequence generated by~\(f\) is either one or a unique integer greater than one.
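A small sketch makes the "exactly two periods" notion concrete: for a primitive characteristic polynomial over GF(2), every generated sequence has period one (the zero sequence) or a single maximal period:

```python
from itertools import product

# Concrete instance of the "exactly two periods" property over GF(2):
# for the primitive polynomial f(x) = x^3 + x + 1 the recurrence is
# s_{n+3} = s_{n+1} + s_n (mod 2), and every sequence it generates has
# period 1 (the zero sequence) or 7 (= 2^3 - 1).

def sequence_period(init):
    """Period of the sequence with s_{n+3} = s_{n+1} + s_n over GF(2)."""
    state, seen, n = tuple(init), {}, 0
    while state not in seen:
        seen[state] = n
        state = state[1:] + ((state[0] + state[1]) % 2,)
        n += 1
    return n - seen[state]

periods = {sequence_period(s) for s in product((0, 1), repeat=3)}
print(sorted(periods))    # [1, 7]
```

A non-primitive characteristic polynomial would instead produce several distinct nontrivial periods, which is the situation the paper's conditions rule in or out.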
We present a comprehensive analysis of all XMM-Newton spectra of OJ 287 spanning 15 years of X-ray spectroscopy of this bright blazar. We also report the latest results from our dedicated Swift UVOT and XRT monitoring of OJ 287 which started in 2015, along with all earlier public Swift data since 2005. During this time interval, OJ 287 was caught in extreme minima and outburst states. Its X-ray spectrum is highly variable and encompasses all states seen in blazars from very flat to exceptionally steep. The spectrum can be decomposed into three spectral components: Inverse Compton (IC) emission dominant at low-states, super-soft synchrotron emission which becomes increasingly dominant as OJ 287 brightens, and an intermediately-soft (Gamma_x=2.2) additional component seen at outburst. This last component extends beyond 10 keV and plausibly represents either a second synchrotron/IC component and/or a temporary disk corona of the primary supermassive black hole (SMBH). Our 2018 XMM-Newton observation, quasi-simultaneous with the Event Horizon Telescope observation of OJ 287, is well described by a two-component model with a hard IC component of Gamma_x=1.5 and a soft synchrotron component. Low-state spectra limit any long-lived accretion disk/corona contribution in X-rays to a very low value of L_x/L_Edd < 5.6 times 10^(-4) (for M_(BH, primary) = 1.8 times 10^10 M_sun). Some implications for the binary SMBH model of OJ 287 are discussed.
Several recent applications of optimal transport (OT) theory to machine learning have relied on regularization, notably entropy and the Sinkhorn algorithm. Because matrix-vector products are pervasive in the Sinkhorn algorithm, several works have proposed to \textit{approximate} kernel matrices appearing in its iterations using low-rank factors. Another route lies instead in imposing low-rank constraints on the feasible set of couplings considered in OT problems, with no approximations of cost or kernel matrices. This route was first explored by Forrow et al., 2018, who proposed an algorithm tailored for the squared Euclidean ground cost, using a proxy objective that can be solved through the machinery of regularized 2-Wasserstein barycenters. Building on this, we introduce in this work a generic approach that aims at solving, in full generality, the OT problem under low-rank constraints with arbitrary costs. Our algorithm relies on an explicit factorization of low-rank couplings as a product of \textit{sub-coupling} factors linked by a common marginal; similar to an NMF approach, we alternately update these factors. We prove the non-asymptotic stationary convergence of this algorithm and illustrate its efficiency on benchmark experiments.
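For context, the entropic baseline mentioned above fits in a few lines; this is the standard textbook Sinkhorn iteration, not the paper's low-rank algorithm, which replaces the full coupling with a product of sub-coupling factors:

```python
import numpy as np

# Minimal Sinkhorn iteration for entropy-regularized OT between two
# discrete measures (a standard textbook sketch).

def sinkhorn(a, b, C, eps=0.1, iters=2000):
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                 # scale to match column marginals
        u = a / (K @ v)                   # scale to match row marginals
    return u[:, None] * K * v[None, :]    # coupling P = diag(u) K diag(v)

rng = np.random.default_rng(0)
x, y = rng.normal(size=(6, 2)), rng.normal(size=(7, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared Euclidean cost
C /= C.max()                                         # normalize for stability
a, b = np.full(6, 1 / 6), np.full(7, 1 / 7)
P = sinkhorn(a, b, C)
print(P.sum(axis=1), P.sum(axis=0))                  # recovers a and b
```

The matrix-vector products `K @ v` and `K.T @ u` are the cost the low-rank literature targets: either K is approximated by low-rank factors, or (as in this paper) the coupling P itself is constrained to be low-rank.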
The hierarchy of nonlocality and entanglement in multipartite systems is one of the fundamental problems in quantum physics. Existing studies on this topic have been limited to entanglement classification according to the number of particles involved. Equivalence under stochastic local operations and classical communication provides a more detailed classification, e.g., the genuine three-qubit entanglement being divided into W and GHZ classes. We construct two families of local models for the three-qubit Greenberger-Horne-Zeilinger (GHZ)-symmetric states, whose entanglement classes have a complete description. The key technique in constructing the local models is GHZ symmetrization of tripartite extensions of the optimal local-hidden-state models for Bell diagonal states. Our models show that entanglement and nonlocality are inequivalent for all the entanglement classes (biseparable, W, and GHZ) in three-qubit systems.
We show that both the classical as well as the quantum definitions of the Fisher information faithfully identify resourceful quantum states in general quantum resource theories, in the sense that they can always distinguish between states with and without a given resource. This shows that all quantum resources confer an advantage in metrology, and establishes the Fisher information as a universal tool to probe the resourcefulness of quantum states. We provide bounds on the extent of this advantage, as well as a simple criterion to test whether different resources are useful for the estimation of unitarily encoded parameters. Finally, we extend the results to show that the Fisher information is also able to identify the dynamical resourcefulness of quantum operations.
The relation of period spacing ($\Delta P$) versus period ($P$) of dipole prograde g modes is known to be useful to measure rotation rates in the g-mode cavity of rapidly rotating $\gamma$ Dor and slowly pulsating B (SPB) stars. In a rapidly rotating star, an inertial mode in the convective core can resonantly couple with g modes propagative in the surrounding radiative region. The resonant coupling causes a dip in the $P$-$\Delta P$ relation, distinct from the modulations due to the chemical composition gradient. Such a resonance dip in $\Delta P$ of prograde dipole g modes appears around a frequency corresponding to a spin parameter $2f_{\rm rot}{\rm(cc)}/\nu_{\rm co-rot} \sim 8-11$ with $f_{\rm rot}$(cc) being the rotation frequency of the convective core and $\nu_{\rm co-rot}$ the pulsation frequency in the co-rotating frame. The spin parameter at the resonance depends somewhat on the extent of core overshooting, central hydrogen abundance, and other stellar parameters. We can fit the period at the observed dip with the prediction from prograde dipole g modes of a main-sequence model, allowing the convective core to rotate differentially from the surrounding g-mode cavity. We have performed such fittings for 16 selected $\gamma$ Dor stars having well defined dips, and found that the majority of $\gamma$ Dor stars we studied rotate nearly uniformly, while convective cores tend to rotate slightly faster than the g-mode cavity in less evolved stars.
Magnetic reconnection is explored on the Terrestrial Reconnection Experiment (TREX) for asymmetric inflow conditions and in a configuration where the absolute rate of reconnection is set by an external drive. Magnetic pileup enhances the upstream magnetic field of the high density inflow, leading to an increased upstream Alfven speed and helping to lower the normalized reconnection rate to values expected from theoretical consideration. In addition, a shock interface between the far upstream supersonic plasma inflow and the region of magnetic flux pileup is observed, important to the overall force balance of the system, hereby demonstrating the role of shock formation for configurations including a supersonically driven inflow. Despite the specialised geometry where a strong reconnection drive is applied from only one side of the reconnection layer, previous numerical and theoretical results remain robust and are shown to accurately predict the normalized rate of reconnection for the range of system sizes considered. This experimental rate of reconnection is dependent on system size, reaching values as high as 0.8 at the smallest normalized system size applied.
In the de Sitter gauge theory (DGT), the fundamental variables are the de Sitter (dS) connection and the gravitational Higgs/Goldstone field $\xi^A$. Previously, a model for DGT was analyzed, which generalizes the MacDowell--Mansouri gravity to have a variable cosmological constant $\Lambda=3/l^2$, where $l$ is related to $\xi^A$ by $\xi^A\xi_A=l^2$. It was shown that the model sourced by a perfect fluid does not support a radiation epoch or the accelerated expansion of the parity-invariant universe. In this work, I consider a similar model, namely, the Stelle--West gravity, and couple it to a modified perfect fluid, such that the total Lagrangian 4-form is polynomial in the gravitational variables. The Lagrangian of the modified fluid has a nontrivial variational derivative with respect to $l$, and as a result, the problems encountered in the previous work no longer appear. Moreover, to explore the elegance of the general theory, as well as to write down the basic framework, I perform the Lagrange--Noether analysis for DGT sourced by a matter field, yielding the field equations and the identities associated with the symmetries of the system. The resulting formulas are dS covariant and do not rely on the existence of the metric field.
Privacy and energy are primary concerns for sensor devices that offload compute to a potentially untrusted edge server or cloud. Homomorphic Encryption (HE) enables offload processing of encrypted data. HE offload processing retains data privacy, but is limited by the need for frequent communication between the client device and the offload server. Existing client-aided encrypted computing systems are optimized for performance on the offload server, failing to sufficiently address client costs, and precluding HE offload for low-resource (e.g., IoT) devices. We introduce Client-aided HE for Opaque Compute Offloading (CHOCO), a client-optimized system for encrypted offload processing. CHOCO introduces rotational redundancy, an algorithmic optimization to minimize computing and communication costs. We design Client-Aided HE for Opaque Compute Offloading Through Accelerated Cryptographic Operations (CHOCO-TACO), a comprehensive architectural accelerator for client-side cryptographic operations that eliminates most of their time and energy costs. Our evaluation shows that CHOCO makes client-aided HE offloading feasible for resource-constrained clients. Compared to existing encrypted computing solutions, CHOCO reduces communication cost by up to 2948x. With hardware support, client-side encryption/decryption is faster by 1094x and uses 648x less energy. In our end-to-end implementation of a large-scale DNN (VGG16), CHOCO uses 37% less energy than local (unencrypted) computation.
Primordial perturbations in our universe are believed to have a quantum origin, and can be described by the wavefunction of the universe (or equivalently, cosmological correlators). It follows that these observables must carry the imprint of the founding principle of quantum mechanics: unitary time evolution. Indeed, it was recently discovered that unitarity implies an infinite set of relations among tree-level wavefunction coefficients, dubbed the Cosmological Optical Theorem. Here, we show that unitarity leads to a systematic set of "Cosmological Cutting Rules" which constrain wavefunction coefficients for any number of fields and to any loop order. These rules fix the discontinuity of an n-loop diagram in terms of lower-loop diagrams and the discontinuity of tree-level diagrams in terms of tree-level diagrams with fewer external fields. Our results apply with remarkable generality, namely for arbitrary interactions of fields of any mass and any spin with a Bunch-Davies vacuum around a very general class of FLRW spacetimes. As an application, we show how one-loop corrections in the Effective Field Theory of inflation are fixed by tree-level calculations and discuss related perturbative unitarity bounds. These findings greatly extend the potential of using unitarity to bootstrap cosmological observables and to restrict the space of consistent effective field theories on curved spacetimes.
In this paper, we show how the absorption and re-radiation of energy by molecules in the air can influence Multiple Input Multiple Output (MIMO) performance in high-frequency bands, e.g., millimeter wave (mmWave) and terahertz. In more detail, some common atmospheric molecules, such as oxygen and water, can absorb and re-radiate energy at their natural resonance frequencies, such as 60 GHz, 180 GHz and 320 GHz. Hence, when hit by electromagnetic waves, molecules get excited and absorb energy, which leads to an extra path loss known as molecular attenuation. Meanwhile, the absorbed energy is re-radiated in a random direction with a random phase. These re-radiated waves also interfere with the signal transmission. Although molecular re-radiation was mostly considered as noise in the literature, recent works show that it is correlated with the main signal and can be viewed as a composition of multiple delayed or scattered signals. Such a phenomenon can provide non-line-of-sight (NLoS) paths in an environment that lacks scatterers, which increases spatial multiplexing and thus greatly enhances the performance of MIMO systems. Therefore, in this paper, we explore the scattering and noise models of molecular re-radiation to characterize the channel transfer function of the NLoS channels created by atmospheric molecules. Our simulation results show that the re-radiation can increase MIMO capacity up to 3-fold in mmWave and 6-fold in terahertz bands for a set of realistic transmit powers, distances, and antenna numbers. We also show that in the high-SNR regime, the re-radiation makes open-loop precoding viable, which is an alternative to beamforming for avoiding beam-alignment sensitivity in high-mobility applications.
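The capacity gain from extra NLoS paths can be illustrated with the standard log-det MIMO capacity formula. This is a textbook sketch with synthetic channel matrices; the molecular re-radiation channel model itself is not reproduced here:

```python
import numpy as np

# Standard log-det MIMO capacity with equal power allocation, comparing a
# rank-1 line-of-sight channel against a full-rank scattering channel
# (e.g., one enriched by re-radiated NLoS paths).

def mimo_capacity(H, snr):
    nt = H.shape[1]
    M = np.eye(H.shape[0]) + (snr / nt) * (H @ H.conj().T)
    return np.linalg.slogdet(M)[1] / np.log(2)   # bits/s/Hz

rng = np.random.default_rng(0)
nr = nt = 4
snr = 100.0                                      # 20 dB

# Rank-1 LoS channel: only one usable spatial stream.
a = np.exp(1j * rng.uniform(0, 2 * np.pi, nr))
b = np.exp(1j * rng.uniform(0, 2 * np.pi, nt))
H_los = np.outer(a, b)

# Rich scattering: i.i.d. Rayleigh entries, full rank with probability 1.
H_nlos = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)

print(mimo_capacity(H_los, snr), mimo_capacity(H_nlos, snr))
```

The rank-1 channel caps out at a single stream, log2(1 + (snr/nt)·nr·nt) here, while the full-rank channel supports up to four parallel streams, which is the spatial-multiplexing gain the abstract attributes to re-radiated NLoS paths.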
We present a Hubble Space Telescope/Wide-Field Camera 3 near-infrared spectrum of the archetype Y dwarf WISEP 182831.08+265037.8. The spectrum covers the 0.9-1.7 um wavelength range at a resolving power of lambda/Delta lambda ~180 and is a significant improvement over the previously published spectrum because it covers a broader wavelength range and is uncontaminated by light from a background star. The spectrum is unique for a cool brown dwarf in that the flux peaks in the Y, J, and H bands are of near-equal intensity in units of f_lambda. We fail to detect any absorption bands of NH_3 in the spectrum, in contrast to the predictions of chemical equilibrium models, but tentatively identify CH_4 as the carrier of an unknown absorption feature centered at 1.015 um. Using previously published ground- and space-based photometry, and using a Rayleigh-Jeans tail to account for flux emerging longward of 4.5 um, we compute a bolometric luminosity of log (L_bol/L_sun)=-6.50+-0.02, which is significantly lower than previously published results. Finally, we compare the spectrum and photometry to two sets of atmospheric models and find that the best overall match to the observed properties of WISEP 182831.08+265037.8 is a ~1 Gyr old binary composed of two T_eff~325 K, ~5 M_Jup brown dwarfs with subsolar [C/O] ratios.
Consider traffic data (i.e., triplets in the form of source-destination-timestamp) that grow over time. Tensors (i.e., multi-dimensional arrays) with a time mode are widely used for modeling and analyzing such multi-aspect data streams. In such tensors, however, new entries are added only once per period, which is often an hour, a day, or even a year. This discreteness of tensors has limited their usage for real-time applications, where new data should be analyzed instantly as it arrives. How can we analyze time-evolving multi-aspect sparse data 'continuously' using tensors where time is 'discrete'? We propose SLICENSTITCH for continuous CANDECOMP/PARAFAC (CP) decomposition, which has numerous time-critical applications, including anomaly detection, recommender systems, and stock market prediction. SLICENSTITCH changes the starting point of each period adaptively, based on the current time, and updates factor matrices (i.e., outputs of CP decomposition) instantly as new data arrives. We show, theoretically and experimentally, that SLICENSTITCH is (1) 'Any time': updating factor matrices immediately without having to wait until the current time period ends, (2) Fast: with constant-time updates up to 464x faster than online methods, and (3) Accurate: with fitness comparable (specifically, 72 ~ 100%) to offline methods.
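The "fitness" figure quoted in the abstract can be made concrete. Below is a minimal pure-Python sketch (illustrative only, not the SLICENSTITCH code; function names are my own) of reconstructing a 3-way tensor from CP factor matrices and computing fitness = 1 - ||X - X̂||_F / ||X||_F, the standard accuracy measure for CP decomposition:

```python
def cp_reconstruct(A, B, C):
    """Reconstruct a 3-way tensor from CP factor matrices.

    A, B, C are lists of rows; entry (i, j, k) of the tensor is
    sum_r A[i][r] * B[j][r] * C[k][r].
    """
    I, J, K, R = len(A), len(B), len(C), len(A[0])
    return [[[sum(A[i][r] * B[j][r] * C[k][r] for r in range(R))
              for k in range(K)] for j in range(J)] for i in range(I)]

def fitness(X, Xhat):
    """Fitness = 1 - ||X - Xhat||_F / ||X||_F (1.0 means exact recovery)."""
    num = den = 0.0
    for i in range(len(X)):
        for j in range(len(X[0])):
            for k in range(len(X[0][0])):
                num += (X[i][j][k] - Xhat[i][j][k]) ** 2
                den += X[i][j][k] ** 2
    return 1.0 - (num ** 0.5) / (den ** 0.5)
```

A rank-1 example: with factors A = [[2],[1]], B = [[3],[1]], C = [[1],[4]], the reconstruction reproduces itself with fitness exactly 1.0; the 72~100% range in the abstract corresponds to streaming estimates whose reconstruction is close to, but not identical to, the offline one.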
We develop a first-principles-based generalized mode-coupling theory (GMCT) for the tagged-particle motion of glassy systems. This theory establishes a hierarchy of coupled integro-differential equations for self-multi-point density correlation functions, which can formally be extended up to infinite order. We use our GMCT framework to calculate the self-nonergodicity parameters and the self-intermediate scattering function for the Percus-Yevick hard sphere system, based on the first few levels of the GMCT hierarchy. We also test the scaling laws in the $\alpha$- and $\beta$-relaxation regimes near the glass-transition singularity. Furthermore, we study the mean-square displacement and the Stokes-Einstein relation in the supercooled regime. We find that qualitatively our GMCT results share many similarities with the well-established predictions from standard mode-coupling theory, but the quantitative results change, and typically improve, by increasing the GMCT closure level. However, we also demonstrate on general theoretical grounds that the current GMCT framework is unable to account for violation of the Stokes-Einstein relation, underlining the need for further improvements in the first-principles description of glassy dynamics.
In this article, we define amorphic complexity for actions of locally compact $\sigma$-compact amenable groups on compact metric spaces. Amorphic complexity, originally introduced for $\mathbb Z$-actions, is a topological invariant which measures the complexity of dynamical systems in the regime of zero entropy. We show that it is tailor-made to study strictly ergodic group actions with discrete spectrum and continuous eigenfunctions. This class of actions includes, in particular, Delone dynamical systems related to regular model sets obtained via Meyer's cut and project method. We provide sharp upper bounds on amorphic complexity of such systems. In doing so, we observe an intimate relationship between amorphic complexity and fractal geometry.
In this paper we prove regularity results for a class of nonlinear degenerate elliptic equations of the form $\displaystyle -\operatorname{div}(A(|\nabla u|)\nabla u)+B\left( |\nabla u|\right) =f(u)$; in particular, we investigate the second order regularity of the solutions. As a consequence of these results, we obtain symmetry and monotonicity properties of positive solutions for this class of degenerate problems in convex symmetric domains via a suitable adaption of the celebrated moving plane method of Alexandrov-Serrin.
In the centre of mass frame, we have studied theoretically the $Z$-boson resonant production in the presence of an intense laser field via the weak process $e^+e^- \to \mu^+\mu^-$. As a first step, dressing the incident particles by a Circularly Polarized laser field (CP-laser field) shows that, for given laser field parameters, the $Z$-boson cross section decreases by several orders of magnitude. We have compared the Total Cross Section (TCS) obtained by using the scattering matrix method with that given by the Breit-Wigner approach in the presence of a CP-laser field, and the results are found to be very consistent. This result indicates that the Breit-Wigner formula is valid not only for the laser-free process but also in the presence of a CP-laser field. The dependence of the laser-assisted differential cross section on the Centre of Mass Energy (CME) for different scattering angles proves that it reaches its maximum for small and high scattering angles. At the next step, by dressing both incident and scattered particles, we have shown that the CP-laser field largely affects the TCS, especially when its strength reaches $10^{9}\,V.cm^{-1}$. This result confirms that obtained for elastic electron-proton scattering in the presence of a CP-laser field [I. Dahiri et al., arXiv:2102.00722v1]. It is interpreted by the fact that heavy interacting particles require a high laser-field intensity to affect the collision's cross section.
In this work, we give a new technique for constructing self-dual codes over commutative Frobenius rings using $\lambda$-circulant matrices. The new construction was derived as a modification of the well-known four circulant construction of self-dual codes. Applying this technique together with the building-up construction, we construct singly-even binary self-dual codes of lengths 56, 58, 64, 80 and 92 that were not known in the literature before. Singly-even self-dual codes of length 80 with $\beta\in\{2,4,5,6,8\}$ in their weight enumerators are constructed for the first time in the literature.
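The $\lambda$-circulant matrices underlying the construction above are easy to sketch. The following illustrative Python helper (my own, not from the paper) builds a $\lambda$-circulant matrix from its first row: each subsequent row is the previous one shifted cyclically to the right, with the wrapped-around entry multiplied by $\lambda$. Over GF(2) with $\lambda = 1$ this reduces to an ordinary circulant, as used in the classical four-circulant construction.

```python
def lambda_circulant(first_row, lam):
    """Build an n x n lambda-circulant matrix from its first row.

    Row i+1 is row i shifted right by one position, with the entry that
    wraps around multiplied by lam.
    """
    n = len(first_row)
    rows = [list(first_row)]
    for _ in range(n - 1):
        prev = rows[-1]
        rows.append([lam * prev[-1]] + prev[:-1])
    return rows
```

For example, `lambda_circulant([1, 2, 3], 5)` gives rows [1,2,3], [15,1,2], [10,15,1]; in an actual self-dual code search the entries and $\lambda$ would be taken from the Frobenius ring in question and candidate generator matrices filtered by the self-duality condition.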
The need for a comprehensive study to explore various aspects of online social media has been instigated by many researchers. This paper gives an insight into the social platform Twitter. In the present work, we illustrate a stepwise procedure for crawling the data and discuss the key issues related to extracting the associated features that can be useful in Twitter-related research while crawling these data from Application Programming Interfaces (APIs). Further, a data set comprising over 86 million tweets has been analysed from various perspectives, including the most used languages, most frequent words, most frequent users, countries with the most and fewest tweets and re-tweets, etc. The analysis reveals that the user data associated with Twitter has a high affinity for research in various domains, including politics, social science, economics, and linguistics. In addition, the relation between the Twitter users of a country and its human development index has been identified. It is observed that countries with very high human development indices have a relatively higher number of tweets compared to countries with low human development indices. It is envisaged that the present study shall open many doors for research in information processing and data science.
We study the structural evolution of isolated star-forming galaxies in the Illustris TNG100-1 hydrodynamical simulation, with a focus on investigating the growth of the central core density within 2 kpc ($\Sigma_{*,2kpc}$) in relation to total stellar mass ($M_*$) at z < 0.5. First, we show that several observational trends in the $\Sigma_{*,2kpc}$-$M_*$ plane are qualitatively reproduced in IllustrisTNG, including the distributions of AGN, star forming galaxies, quiescent galaxies, and radial profiles of stellar age, sSFR, and metallicity. We find that galaxies with dense cores evolve parallel to the $\Sigma_{*,2kpc}$-$M_*$ relation, while galaxies with diffuse cores evolve along shallower trajectories. We investigate possible drivers of rapid growth in $\Sigma_{*,2kpc}$ compared to $M_*$. Both the current sSFR gradient and the BH accretion rate are indicators of past core growth, but are not predictors of future core growth. Major mergers (although rare in our sample; $\sim$10%) cause steeper core growth, except for high mass ($M_*$ >$\sim$ $10^{10} M_{\odot}$) mergers, which are mostly dry. Disc instabilities, as measured by the fraction of mass with Toomre Q < 2, are not predictive of rapid core growth. Instead, rapid core growth results in more stable discs. The cumulative black hole feedback history sets the maximum rate of core growth, preventing rapid growth in high-mass galaxies ($M_*$ >$\sim$ $10^{9.5} M_{\odot}$). For massive galaxies the total specific angular momentum of accreting gas is the most important predictor of future core growth. Our results suggest that the angular momentum of accreting gas controls the slope, width and zero-point evolution of the $\Sigma_{*,2kpc}$-$M_*$ relation.
This article introduces a neural network-based signal processing framework for intelligent reflecting surface (IRS) aided wireless communications systems. By modeling radio-frequency (RF) impairments inside the "meta-atoms" of IRS (including nonlinearity and memory effects), we present an approach that generalizes the entire IRS-aided system as a reservoir computing (RC) system, an efficient recurrent neural network (RNN) operating in a state near the "edge of chaos". This framework enables us to take advantage of the nonlinearity of this "fabricated" wireless environment to overcome link degradation due to model mismatch. Accordingly, the randomness of the wireless channel and RF imperfections are naturally embedded into the RC framework, enabling the internal RC dynamics lying on the edge of chaos. Furthermore, several practical issues, such as channel state information acquisition, passive beamforming design, and physical layer reference signal design, are discussed.
Is critical input information encoded in specific sparse pathways within the neural network? In this work, we discuss the problem of identifying these critical pathways and subsequently leverage them for interpreting the network's response to an input. The pruning objective -- selecting the smallest group of neurons for which the response remains equivalent to the original network -- has been previously proposed for identifying critical pathways. We demonstrate that sparse pathways derived from pruning do not necessarily encode critical input information. To ensure sparse pathways include critical fragments of the encoded input information, we propose pathway selection via neurons' contribution to the response. We proceed to explain how critical pathways can reveal critical input features. We prove that pathways selected via neuron contribution are locally linear (in an L2-ball), a property that we use for proposing a feature attribution method: "pathway gradient". We validate our interpretation method using mainstream evaluation experiments. The validation of the pathway gradient interpretation method further confirms that pathways selected using neuron contributions correspond to critical input features. The code is publicly available.
The paper investigates the problem of finding communities in complex network systems, the detection of which allows a better understanding of the laws of their functioning. To solve this problem, two approaches are proposed based on the use of the flow characteristics of a complex network. The first of these approaches consists in calculating the parameters of influence of separate subsystems of the network system, distinguished by the principles of ordering or subordination, and the second in using the concept of its flow core. Based on the proposed approaches, reliable criteria for finding communities have been formulated and efficient algorithms for their detection in complex network systems have been developed. It is shown that the proposed approaches make it possible to single out communities in cases in which the existing numerical and visual methods fail.
The novel concept of simultaneously transmitting and reflecting (STAR) reconfigurable intelligent surfaces (RISs) is investigated, where the incident wireless signal is divided into transmitted and reflected signals passing into both sides of the space surrounding the surface, thus facilitating a full-space manipulation of signal propagation. Based on the introduced basic signal model of `STAR', three practical operating protocols for STAR-RISs are proposed, namely energy splitting (ES), mode switching (MS), and time switching (TS). Moreover, a STAR-RIS aided downlink communication system is considered for both unicast and multicast transmission, where a multi-antenna base station (BS) sends information to two users, i.e., one on each side of the STAR-RIS. A power consumption minimization problem for the joint optimization of the active beamforming at the BS and the passive transmission and reflection beamforming at the STAR-RIS is formulated for each of the proposed operating protocols, subject to communication rate constraints of the users. For ES, the resulting highly-coupled non-convex optimization problem is solved by an iterative algorithm, which exploits the penalty method and successive convex approximation. Then, the proposed penalty-based iterative algorithm is extended to solve the mixed-integer non-convex optimization problem for MS. For TS, the optimization problem is decomposed into two subproblems, which can be consecutively solved using state-of-the-art algorithms and convex optimization techniques. Finally, our numerical results reveal that: 1) the TS and ES operating protocols are generally preferable for unicast and multicast transmission, respectively; and 2) the required power consumption for both scenarios is significantly reduced by employing the proposed STAR-RIS instead of conventional reflecting/transmitting-only RISs.
In this work we compute the first integral cohomology of the pure mapping class group of a non-orientable surface of infinite topological type and genus at least 3. To this purpose, we also prove several other results already known for orientable surfaces such as the existence of an Alexander method, the fact that the mapping class group is isomorphic to the automorphism group of the curve graph along with the topological rigidity of the curve graph, and the structure of the pure mapping class group as both a Polish group and a semi-direct product.
Programmable data planes allow users to define their own data plane algorithms for network devices including appropriate data plane application programming interfaces (APIs) which may be leveraged by user-defined software-defined networking (SDN) control. This offers great flexibility for network customization, be it for specialized, commercial appliances, e.g., in 5G or data center networks, or for rapid prototyping in industrial and academic research. Programming protocol-independent packet processors (P4) has emerged as the currently most widespread abstraction, programming language, and concept for data plane programming. It is developed and standardized by an open community, and it is supported by various software and hardware platforms. In the first part of this paper we give a tutorial of data plane programming models, the P4 programming language, architectures, compilers, targets, and data plane APIs. We also consider research efforts to advance P4 technology. In the second part, we categorize a large body of literature of P4-based applied research into different research domains, summarize the contributions of these papers, and extract prototypes, target platforms, and source code availability. For each research domain, we analyze how the reviewed works benefit from P4's core features. Finally, we discuss potential next steps based on our findings.
Metasurfaces enable manipulation of light propagation at an unprecedented level, benefitting from a number of merits unavailable to conventional optical elements, such as ultracompactness, precise phase and polarization control at deep subwavelength scale, and multifunctionalities. Recent progress in this field has witnessed a plethora of functional metasurfaces, ranging from lenses and vortex beam generation to holography. However, research endeavors have been mainly devoted to static devices, exploiting only a glimpse of opportunities that metasurfaces can offer. We demonstrate a dynamic metasurface platform, which allows independent manipulation of addressable subwavelength pixels at visible frequencies through controlled chemical reactions. In particular, we create dynamic metasurface holograms for advanced optical information processing and encryption. Plasmonic nanorods tailored to exhibit hierarchical reaction kinetics upon hydrogenation/dehydrogenation constitute addressable pixels in multiplexed metasurfaces. The helicity of light, hydrogen, oxygen, and reaction duration serve as multiple keys to encrypt the metasurfaces. One single metasurface can be deciphered into manifold messages with customized keys, featuring a compact data storage scheme as well as a high level of information security. Our work suggests a novel route to protect and transmit classified data, where highly restricted access of information is imposed.
High-energy heavy-ion collisions generate an extremely strong magnetic field which plays a key role in a number of novel quantum phenomena in the quark-gluon plasma (QGP), such as the chiral magnetic effect (CME). However, due to the complexity of theoretically modelling the coupled electromagnetic fields and the QGP system, especially in the pre-equilibrium stages, the lifetime of the magnetic field in the QGP medium remains undetermined. We establish, for the first time, a kinetic framework to study the dynamical decay of the magnetic field in the early stages of a weakly coupled QGP by solving the coupled Boltzmann and Maxwell equations. We find that at late times a magnetohydrodynamical description of the coupled system emerges. With respect to realistic collisions at RHIC and the LHC, we estimate the residual strength of the magnetic field in the QGP when the system starts to evolve hydrodynamically.
Nowadays, the confidentiality of data and information is of great importance for many companies and organizations. For this reason, they may prefer not to release exact data, but instead to grant researchers access to approximate data. For example, rather than providing the exact measurements of their clients, they may only provide researchers with grouped data, that is, the number of clients falling in each of a set of non-overlapping measurement intervals. The challenge is to estimate the mean and variance structure of the hidden ungrouped data based on the observed grouped data. To tackle this problem, this work considers the exact observed data likelihood and applies the Expectation-Maximization (EM) and Monte-Carlo EM (MCEM) algorithms for cases where the hidden data follow a univariate, bivariate, or multivariate normal distribution. Simulation studies are conducted to evaluate the performance of the proposed EM and MCEM algorithms. The well-known Galton data set is considered as an application example.
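For the univariate-normal case described above, the EM iteration has a compact closed form. The sketch below (illustrative only, not the paper's implementation; names and the interval bookkeeping are my own) uses only the standard library: the E-step replaces each grouped observation by the conditional mean and variance of a normal truncated to its interval, and the M-step updates $(\mu, \sigma)$ from those moments.

```python
from statistics import NormalDist

def em_grouped_normal(intervals, counts, mu=0.0, sigma=1.0, iters=500):
    """EM estimate of (mu, sigma) of a normal from grouped (binned) counts.

    intervals: list of (a, b) bin edges, possibly with +/- inf endpoints.
    counts:    number of hidden observations falling in each bin.
    """
    std_norm = NormalDist()          # standard normal for pdf of z-scores
    n = sum(counts)
    for _ in range(iters):
        cur = NormalDist(mu, sigma)
        s1 = s2 = 0.0                # running sums of E[X] and E[X^2]
        for (a, b), c in zip(intervals, counts):
            if c == 0:
                continue
            za, zb = (a - mu) / sigma, (b - mu) / sigma
            Z = cur.cdf(b) - cur.cdf(a)              # bin probability
            pa, pb = std_norm.pdf(za), std_norm.pdf(zb)
            # mean and variance of the normal truncated to (a, b)
            m = mu + sigma * (pa - pb) / Z
            apa = 0.0 if za == float('-inf') else za * pa
            bpb = 0.0 if zb == float('inf') else zb * pb
            v = sigma ** 2 * (1.0 + (apa - bpb) / Z - ((pa - pb) / Z) ** 2)
            s1 += c * m
            s2 += c * (v + m * m)
        mu_new = s1 / n              # M-step: plain normal MLE on the
        sigma = (s2 / n - mu_new ** 2) ** 0.5        # completed moments
        mu = mu_new
    return mu, sigma
```

Run on bins whose counts match standard-normal probabilities (e.g., edges at -1, 0, 1 with counts 1587, 3413, 3413, 1587), the iteration recovers $\mu \approx 0$, $\sigma \approx 1$; the MCEM variant in the paper replaces the closed-form E-step by Monte Carlo sampling, which is what makes the bivariate and multivariate cases tractable.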
For the first time, the dielectric response of a BaTiO3 thin film under an AC electric field is investigated using time-resolved X-ray absorption spectroscopy at the Ti K-edge to clarify correlated contributions of each constituent atom on the electronic states. Intensities of the pre-edge eg peak and shoulder structure just below the main edge increase with an increase in the amplitude of the applied electric field, whereas that of the main peak decreases in an opposite manner. Based on the multiple scattering theory, the increase and decrease of the eg and main peaks are simulated for different Ti off-center displacements. Our results indicate that these spectral features reflect the inter- and intra-atomic hybridization of Ti 3d with O 2p and Ti 4p, respectively. In contrast, the shoulder structure is not affected by changes in the Ti off-center displacement but is susceptible to the effect of the corner site Ba ions. This is the first experimental verification of the dynamic electronic contribution of Ba to polarization reversal.
Magnetic multilayers are promising tuneable systems for hosting magnetic skyrmions at/above room temperature. Revealing their intriguing switching mechanisms and associated inherent electrical responses are prerequisites for developing skyrmionic devices. In this work, we theoretically demonstrate the annihilation of single skyrmions occurring through a multilayer structure, which is mediated by hopping dynamics of topological hedgehog singularities known as Bloch points. The emerging intralayer dynamics of Bloch points are dominated by the Dzyaloshinskii-Moriya interaction, and their propagation can give rise to solenoidal emergent electric fields in the vicinity. Moreover, as the topology of spin textures can dominate their emergent magnetic properties, we show that the Bloch-point hopping through the multilayer will modulate the associated topological Hall response, with the magnitude proportional to the effective topological charge. We also investigate the thermodynamic stability of these states regarding the layer-dependent magnetic properties. This study casts light on the emergent electromagnetic signatures of skyrmion-based spintronics, rooted in magnetic-multilayer systems.
In this paper a comparative structural, dielectric and magnetic study of two langasite compounds, Ba$_3$TeCo$_3$P$_2$O$_{14}$ (no lone pair) and Pb$_3$TeCo$_3$P$_2$O$_{14}$ (Pb$^{2+}$ 6$s^2$ lone pair), has been carried out to precisely explore the development of room-temperature spontaneous polarization in the presence of a stereochemically active lone pair. In the case of Pb$_3$TeCo$_3$P$_2$O$_{14}$, mixing of Pb 6$s$ with both Pb 6$p$ and O 2$p$ helps the lone pair to be stereochemically active. This stereochemically active lone pair brings a large structural distortion within the unit cell and creates a polar geometry, while the Ba$_3$TeCo$_3$P$_2$O$_{14}$ compound remains in a nonpolar structure due to the absence of any such effect. Consequently, polarization measurement under a varying electric field confirms room-temperature ferroelectricity for Pb$_3$TeCo$_3$P$_2$O$_{14}$, which is not the case for Ba$_3$TeCo$_3$P$_2$O$_{14}$. A detailed study was carried out to understand the microscopic mechanism of the ferroelectricity, which revealed the exciting underlying activity of the polar TeO$_6$ octahedral unit as well as the Pb hexagon.
We study cosmological inflation and its dynamics in the framework of the Randall-Sundrum II brane model. In particular, we analyze in detail four representative small-field inflationary potentials, namely Natural inflation, Hilltop inflation, Higgs-like inflation, and Exponential SUSY inflation, each characterized by two mass scales. We constrain the parameters for which a viable inflationary Universe emerges using the latest PLANCK results. Furthermore, we investigate whether or not those models in brane cosmology are consistent with the recently proposed Swampland Criteria, and give predictions for the duration of reheating as well as for the reheating temperature after inflation. Our results show that (i) the distance conjecture is satisfied, (ii) the de Sitter conjecture and its refined version may be avoided, and (iii) the allowed range for the five-dimensional Planck mass, $M_5$, is found to be between $10^5~\textrm{TeV}$ and $10^{12}~\textrm{TeV}$. Our main findings indicate that non-thermal leptogenesis cannot work within the framework of RS-II brane cosmology, at least for the inflationary potentials considered here.
We first propose a general method to construct the complete set of on-shell operator bases involving massive particles with any spins. To incorporate the non-abelian little groups of massive particles, the on-shell scattering amplitude basis should be factorized into two parts: one is charged, and the other is neutral under the little groups of massive particles. The complete sets for these two parts can be systematically constructed by choosing specific Young diagrams of the Lorentz subgroup and the global symmetry $U(N)$, respectively ($N$ is the number of external particles), without equation-of-motion or integration-by-parts redundancy. Thus the complete massive amplitude bases without any redundancies can be obtained by combining these two complete sets. Some examples are presented to explicitly demonstrate this method. This method is applicable for constructing amplitude bases involving identical particles, and all the bases can be constructed automatically by computer programs based on it.
We study the variety ZG of monoids where the elements that belong to a group are central, i.e., commute with all other elements. We show that ZG is local, that is, the semidirect product ZG * D of ZG by definite semigroups is equal to LZG, the variety of semigroups where all local monoids are in ZG. Our main result is thus: ZG * D = LZG. We prove this result using Straubing's delay theorem, by considering paths in the category of idempotents. In the process, we obtain the characterization ZG = MNil \vee Com, and also characterize the ZG languages, i.e., the languages whose syntactic monoid is in ZG: they are precisely the languages that are finite unions of disjoint shuffles of singleton languages and regular commutative languages.
The noise-enhanced trapping is a surprising phenomenon that has already been studied in chaotic scattering problems where the noise affects the physical variables but not the parameters of the system. Following this research, in this work we provide strong numerical evidence to show that an additional mechanism that enhances the trapping arises when the noise influences the energy of the system. For this purpose, we have included a source of Gaussian white noise in the H\'enon-Heiles system, which is a paradigmatic example of open Hamiltonian system. For a particular value of the noise intensity, some trajectories decrease their energy due to the stochastic fluctuations. This drop in energy allows the particles to spend very long transients in the scattering region, increasing their average escape times. This result, together with the previously studied mechanisms, points out the generality of the noise-enhanced trapping in chaotic scattering problems.
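The setup described above can be sketched numerically. The following illustrative Python integrator (my own sketch, not the paper's code) evolves the Hénon-Heiles system with a semi-implicit (symplectic) Euler step and adds Gaussian white noise to the momenta via an Euler-Maruyama correction; with the noise switched off, the energy of a bounded trajectory should be approximately conserved, and switching the noise on produces exactly the stochastic energy fluctuations the paper exploits.

```python
import math
import random

def henon_heiles_traj(x, y, px, py, dt=1e-3, steps=20000, noise=0.0, seed=1):
    """Semi-implicit Euler integration of the Henon-Heiles equations,
    with an additive Euler-Maruyama noise term of intensity `noise` on
    the momenta (noise=0.0 gives the deterministic system)."""
    rng = random.Random(seed)
    s = math.sqrt(dt)
    for _ in range(steps):
        # drift positions with current momenta, then kick momenta
        # with the forces evaluated at the new positions (symplectic)
        x += px * dt
        y += py * dt
        px += (-x - 2.0 * x * y) * dt + noise * s * rng.gauss(0.0, 1.0)
        py += (-y - x * x + y * y) * dt + noise * s * rng.gauss(0.0, 1.0)
    return x, y, px, py

def energy(x, y, px, py):
    """Total energy: kinetic + V(x, y) = (x^2 + y^2)/2 + x^2 y - y^3/3."""
    return 0.5 * (px * px + py * py) + 0.5 * (x * x + y * y) \
        + x * x * y - y ** 3 / 3.0
```

In a trapping study one would start trajectories in the scattering region with energy above the escape threshold $E = 1/6$, record escape times as a function of `noise`, and look for the intensities at which downward energy fluctuations prolong the transients.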
This study investigates the correlation of self-report accuracy with academic performance. The sample was composed of 289 undergraduate students (96 senior and 193 junior) enrolled in two engineering classes. Age ranged between 22 and 24 years, with a slight over-representation of male students (53%). Academic performance was calculated based on students' final grades in each class. The tendency to report inaccurate information was measured at the end of the Raven Progressive Matrices Test, by asking students to report their exact finishing times. We controlled for gender, age, personality traits, intelligence, and past academic performance. We also included measures of centrality in their friendship, advice and trust networks. Results of correlation and multiple regression analyses indicate that lower achieving students were significantly less accurate in self-reporting data. We also found that being more central in the advice network was correlated with higher performance (r = .20, p < .001). The results are aligned with existing literature emphasizing the individual and relational factors associated with academic performance and, pending future studies, may be utilized to include a new metric of self-report accuracy that is not dependent on academic records.
We investigate the 3D spin alignment of galaxies with respect to the large-scale filaments using the MaNGA survey. The cosmic web is reconstructed from the Sloan Digital Sky Survey using Disperse and the 3D spins of MaNGA galaxies are estimated using the thin disk approximation with integral field spectroscopy kinematics. Late-type spiral galaxies are found to have their spins parallel to the closest filament's axis. The alignment signal is found to be dominated by low-mass spirals. Spins of S0-type galaxies tend to be oriented preferentially in perpendicular direction with respect to the filament's axis. This orthogonal orientation is found to be dominated by S0s that show a notable misalignment between their kinematic components of stellar and ionised gas velocity fields and/or by low mass S0s with lower rotation support compared to their high mass counterparts. Qualitatively similar results are obtained when splitting galaxies based on the degree of ordered stellar rotation, such that galaxies with high spin magnitude have their spin aligned, and those with low spin magnitude in perpendicular direction to the filaments. In the context of conditional tidal torque theory, these findings suggest that galaxies' spins retain memory of their larger-scale environment. In agreement with measurements from hydrodynamical cosmological simulations, the measured signal at low redshift is weak, yet statistically significant. The dependence of the spin-filament orientation of galaxies on their stellar mass, morphology and kinematics highlights the importance of sample selection to detect the signal.
Pretrained Masked Language Models (MLMs) have revolutionised NLP in recent years. However, previous work has indicated that off-the-shelf MLMs are not effective as universal lexical or sentence encoders without further task-specific fine-tuning on NLI, sentence similarity, or paraphrasing tasks using annotated task data. In this work, we demonstrate that it is possible to turn MLMs into effective universal lexical and sentence encoders even without any additional data and without any supervision. We propose an extremely simple, fast and effective contrastive learning technique, termed Mirror-BERT, which converts MLMs (e.g., BERT and RoBERTa) into such encoders in 20-30 seconds without any additional external knowledge. Mirror-BERT relies on fully identical or slightly modified string pairs as positive (i.e., synonymous) fine-tuning examples, and aims to maximise their similarity during identity fine-tuning. We report huge gains over off-the-shelf MLMs with Mirror-BERT in both lexical-level and sentence-level tasks, across different domains and different languages. Notably, in the standard sentence semantic similarity (STS) tasks, our self-supervised Mirror-BERT model even matches the performance of the task-tuned Sentence-BERT models from prior work. Finally, we delve deeper into the inner workings of MLMs, and suggest some evidence on why this simple approach can yield effective universal lexical and sentence encoders.
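The "fully identical or slightly modified string pairs" idea is simple to sketch. The helper below is an illustrative toy (my own, not the Mirror-BERT code): it pairs each sentence with a copy that has a short random span dropped, yielding positive examples for identity fine-tuning; the actual method applies this kind of augmentation in the input and feature space of the MLM before the contrastive objective.

```python
import random

def mirror_pairs(sentences, drop_len=2, seed=0):
    """Build positive (i.e., synonymous) pairs for identity fine-tuning.

    Each sentence is paired with a view of itself from which a random
    contiguous span of `drop_len` tokens has been removed; sentences too
    short to modify are paired with themselves unchanged.
    """
    rng = random.Random(seed)
    pairs = []
    for s in sentences:
        toks = s.split()
        if len(toks) > drop_len:
            i = rng.randrange(len(toks) - drop_len + 1)
            view = " ".join(toks[:i] + toks[i + drop_len:])
        else:
            view = s
        pairs.append((s, view))
    return pairs
```

A contrastive loss (e.g., InfoNCE) would then pull the encoder representations of each pair together while pushing apart representations of different sentences in the batch.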
Soft or weakly-consolidated sand refers to porous materials composed of particles (or grains) weakly held together to form a solid but that can be easily broken when subjected to stress. These materials do not behave as conventional brittle, linear elastic materials and the transition between these two regimes cannot usually be described using poro-elastic models. Furthermore, conventional geotechnical sampling techniques often result in the destruction of the cementation and recovery of sufficient intact core is, therefore, difficult. This paper studies a numerical model that allows us to introduce weak consolidation in granular packs. The model, based on the LIGGGHTS open source project, simply adds an attractive contribution to particles in contact. This simple model allow us to reproduce key elements of the behaviour of the stress observed in compacted sands and clay, as well as in poorly consolidated sandstones. The paper finishes by inspecting the effect of different consolidation levels in fluid-driven fracture behaviour. Numerical results are compared against experimental results on bio-cemented sandstones.
In this paper, a new implicit-explicit local method of arbitrary order is developed for stiff initial value problems. A general framework for one-step time integration is constructed using a direction-free approach, leading to a numerical method with parameter-based stability preservation. Adaptive procedures, depending on the problem type, are explained with the help of local error estimates to minimize the computational cost. An a priori error analysis of the method is carried out, and order conditions are presented in terms of the direction parameters. Stability analysis of the method is performed for both scalar equations and systems of differential equations. The parameter-based method is proven to be A-stable for $0.5 < \theta < 1$ at various orders. Numerical experiments show the present method to be a very good option for addressing a wide range of initial value problems. As a notable application, the Susceptible-Exposed-Infected-Recovered (SEIR) equation system parameterized for the COVID-19 pandemic is integrated with the present method, and the stability properties of the method are tested on this stiff model with significant results. Some challenging stiff behaviours represented by the nonlinear Duffing equation, the Robertson chemical system, and the van der Pol equation have also been integrated, and the results reveal that the current algorithm produces much more reliable results than existing numerical techniques in the literature.
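The A-stability claim for $0.5 < \theta < 1$ can be checked on the scalar test equation $y' = \lambda y$ via the stability function of the classic $\theta$-method, used here only as a simple stand-in for the paper's more general parameter-based scheme:

```python
def amplification(z, theta):
    """Stability function R(z) of the theta-method
    y_{n+1} = y_n + h*((1 - theta)*f(y_n) + theta*f(y_{n+1}))
    applied to y' = lam*y with z = h*lam, so y_{n+1} = R(z)*y_n.
    A-stability requires |R(z)| <= 1 for all Re(z) < 0."""
    return (1.0 + (1.0 - theta) * z) / (1.0 - theta * z)

# a very stiff step: explicit Euler (theta = 0) amplifies the mode,
# while theta = 0.7 damps it, as A-stability requires
z = -1000.0
r_explicit = abs(amplification(z, 0.0))
r_implicit = abs(amplification(z, 0.7))
```

For $\theta \ge 0.5$ the magnitude of $R(z)$ stays below one on the whole left half-plane, which is why such parameterized schemes can take large steps on stiff systems like the SEIR or Robertson models.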
In this paper, a new training pipeline is presented to achieve significant performance on the task of anti-spoofing with RGB images. We explore and highlight the impact of using pseudo-depth to pre-train a network that will be used as the backbone of the final classifier. While the usage of pseudo-depth for the anti-spoofing task is not a new idea on its own, previous endeavours utilize pseudo-depth simply as another medium for extracting features for prediction, or as one of many auxiliary losses aiding the training of the main classifier, reducing pseudo-depth to just another piece of semantic information. Through this work, we argue that a significant advantage in training the final classifier can be gained by pre-training a generator, within a Generative Adversarial Network framework, to predict the corresponding pseudo-depth of a given facial image. Our experimental results indicate that our method yields a much more adaptable system that generalizes not only to intra-dataset samples but also to inter-dataset samples it has never seen during training. Quantitatively, our method approaches the baseline performance of current state-of-the-art anti-spoofing models with 15.8x fewer parameters. Moreover, experiments show that the introduced methodology performs well using only basic binary labels without additional semantic information, which indicates potential benefits of this work in industrial and application-based environments where the trade-off between additional labelling and resources is a consideration.
A transient two-dimensional acoustic boundary element solver is coupled to a potential flow boundary element solver via Powell's acoustic analogy to determine the acoustic emission of isolated hydrofoils performing biologically-inspired motions. The flow-acoustic boundary element framework is validated against experimental and asymptotic solutions for the noise produced by canonical vortex-body interactions. The numerical framework then characterizes the noise production of an oscillating foil, which is a simple representation of a fish caudal fin. A rigid NACA 0012 hydrofoil is subjected to combined heaving and pitching motions for Strouhal numbers ($0.03 < St < 1$) based on peak-to-peak amplitudes and chord-based reduced frequencies ($0.125 < f^* < 1$) that span the parameter space of many swimming fish species. A dipolar acoustic directivity is found for all motions, frequencies, and amplitudes considered, and the peak noise level increases with both the reduced frequency and the Strouhal number. A combined heaving and pitching motion produces less noise than either a purely pitching or purely heaving foil at a fixed reduced frequency and amplitude of motion. Correlations of the lift and power coefficients with the peak root-mean-square acoustic pressure levels are determined, which could be utilized to develop long-range, quiet swimmers.
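The two nondimensional groups that span the kinematic parameter space can be computed directly; the flapping frequency, amplitude, chord and swimming speed below are illustrative values, using the common definitions $St = fA/U$ (with $A$ the peak-to-peak amplitude) and $f^* = fc/U$ for the chord-based reduced frequency.

```python
def strouhal(f, A, U):
    """Strouhal number based on the peak-to-peak amplitude A [m],
    oscillation frequency f [Hz] and swimming speed U [m/s]."""
    return f * A / U

def reduced_frequency(f, c, U):
    """Chord-based reduced frequency f* = f c / U for chord c [m]."""
    return f * c / U

# illustrative kinematics: 2 Hz flapping, 0.05 m peak-to-peak
# amplitude, 0.1 m chord, 0.5 m/s swimming speed
St = strouhal(2.0, 0.05, 0.5)              # falls in the studied range
f_star = reduced_frequency(2.0, 0.1, 0.5)  # likewise
```

Sweeping these two numbers over the quoted ranges ($0.03 < St < 1$, $0.125 < f^* < 1$) reproduces the parameter space the study explores.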
In power system dynamic simulation, up to 90% of the computational time is devoted to solving the network equations, i.e., a set of linear equations. Traditional approaches are based on sparse LU factorization, which is inherently sequential. In this paper, an inverse-based network solution is proposed, using a hierarchical method to compute and store the approximate inverse of the conductance matrix in electromagnetic transient (EMT) simulations. The proposed method can also efficiently update the inverse by modifying only local sub-matrices to reflect changes in the network, e.g., loss of a line. Experiments on a series of simplified 179-bus Western Interconnection models demonstrate the advantages of the proposed method.
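The hierarchical scheme itself is not reproduced here, but the underlying idea — refreshing a stored inverse after a local network change instead of refactorizing — can be sketched with the standard Sherman-Morrison rank-1 update, since a single line outage perturbs the conductance matrix by exactly such a term. The small test matrix below is an illustrative stand-in, not a real network.

```python
import numpy as np

def sherman_morrison(Ainv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1}, without refactorizing A.
    A single line outage changes the conductance matrix by a rank-1
    term, so only this O(n^2) update is needed."""
    Au = Ainv @ u
    vA = v @ Ainv
    return Ainv - np.outer(Au, vA) / (1.0 + v @ Au)

# toy 4-bus SPD conductance-like matrix and its stored inverse
rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4))
A = B @ B.T + 4 * np.eye(4)
Ainv = np.linalg.inv(A)

# change on the branch between buses 0 and 2: g * (e0 - e2)(e0 - e2)^T
u = np.zeros(4); u[0], u[2] = 1.0, -1.0
updated = sherman_morrison(Ainv, 0.5 * u, u)
direct = np.linalg.inv(A + 0.5 * np.outer(u, u))
```

The updated inverse matches a full re-inversion to machine precision, which is what makes inverse-based solvers attractive for topology changes during simulation.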
We present the results of long-term photometric monitoring in medium-band filters of two active galactic nuclei, 2MASX J08535955+7700543 (z $\sim$ 0.106) and VII Zw 244 (z $\sim$ 0.131), investigated by the reverberation mapping method. To estimate the size of the broad line region, we have analyzed the light curves with the JAVELIN code. The emission line widths have been measured using spectroscopic data obtained at the 6-m BTA telescope of SAO RAS. We give our estimates of the supermassive black hole masses $\lg (M/M_{\odot})$: $7.398_{-0.171}^{+0.153}$ and $7.049_{-0.075}^{+0.068}$, respectively.
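Such mass estimates follow from the standard virial relation $M = f\,\Delta V^2 R_\mathrm{BLR}/G$, combining a reverberation lag (giving $R_\mathrm{BLR} = c\,\tau$) with the measured line width. The sketch below uses illustrative numbers and an assumed virial factor, not the measurements of this paper.

```python
import math

G = 6.674e-11                     # m^3 kg^-1 s^-2
M_SUN = 1.989e30                  # kg
LIGHT_DAY = 2.998e8 * 86400.0     # m

def log_virial_mass(lag_days, sigma_kms, f_virial=5.5):
    """lg(M/M_sun) from the virial relation M = f * sigma^2 * R / G,
    with R = c * lag. The virial factor f is an assumed calibration
    constant, here set to a commonly quoted illustrative value."""
    R = lag_days * LIGHT_DAY          # BLR radius in metres
    sigma = sigma_kms * 1.0e3         # line width in m/s
    M = f_virial * sigma**2 * R / G   # black hole mass in kg
    return math.log10(M / M_SUN)

# illustrative: a 30 light-day lag and a 2000 km/s line width
lgM = log_virial_mass(30.0, 2000.0)
```

With these placeholder inputs the estimator lands near $\lg(M/M_\odot) \approx 8$, i.e. the same order of magnitude as the masses quoted above.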
Perpendicularly magnetized films showing small saturation magnetization, $M_\mathrm{s}$, are essential for spin-transfer-torque writing type magnetoresistive random access memories, STT-MRAMs. An intermetallic compound, (Mn-Cr)AlGe of the Cu$_2$Sb-type crystal structure, was investigated in this study as a material showing low $M_\mathrm{s}$ ($\sim 300$ kA/m) and high perpendicular magnetic anisotropy, $K_\mathrm{u}$. The layer thickness dependence of $K_\mathrm{u}$ and the effects of Mg-insertion layers at the top and bottom (Mn-Cr)AlGe$|$MgO interfaces were studied in film samples fabricated onto thermally oxidized silicon substrates to realize high $K_\mathrm{u}$ in the thickness range of a few nanometers. The optimum Mg-insertion thicknesses were 1.4 and 3.0 nm for the bottom and the top interfaces, respectively, which are relatively thick compared to insertion thicknesses reported in similar investigations on magnetic tunnel junctions in previous studies. Cross-sectional transmission electron microscope images revealed that the Mg-insertion layers acted as barriers against interdiffusion of Al atoms as well as oxidization from the MgO layers. The values of $K_\mathrm{u}$ were about $7 \times 10^5$ and $2 \times 10^5$ J/m$^3$ at room temperature for 5 and 3 nm-thick (Mn-Cr)AlGe films, respectively, with the optimum Mg-insertion thicknesses. The $K_\mathrm{u}$ at a few-nanometer thickness is comparable to or higher than the values reported for perpendicularly magnetized CoFeB films conventionally used in MRAMs, while the $M_\mathrm{s}$ value is one third or less of that of the CoFeB films. The developed (Mn-Cr)AlGe films are promising from the viewpoint of not only the magnetic properties but also the compatibility with the silicon process in film fabrication.
Blockchain (BC) technology can revolutionize future networks by providing a distributed, secure, and unalterable way to boost collaboration among operators, users, and other stakeholders. Its implementations have traditionally been supported by wired communications, with the high latency introduced by the BC being one of the key technology drawbacks. However, when applied to wireless communications, the performance of BC remains unknown, especially when running over contention-based networks. In this paper, we evaluate the latency performance of BC technology when the supporting communication platform is wireless, focusing specifically on IEEE 802.11ax, for the use case of users' radio resource provisioning. For that purpose, we propose a discrete-time Markov model to capture the expected delay incurred by the BC. Unlike other models in the literature, we consider the effect that timers and forks have on the end-to-end latency.
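The expected-delay computation behind a discrete-time Markov model can be illustrated with the standard absorbing-chain formula $t = (I - Q)^{-1}\mathbf{1}$, where $Q$ is the transition matrix among transient states and $t$ gives the expected number of time slots until absorption. The toy three-state chain below (pending, mined, awaiting confirmation, with a fork sending transactions back to pending) is an assumed illustration, not the paper's model.

```python
import numpy as np

def expected_steps(Q):
    """Expected steps to absorption from each transient state of a
    discrete-time Markov chain: solve (I - Q) t = 1."""
    n = Q.shape[0]
    return np.linalg.solve(np.eye(n) - Q, np.ones(n))

# transient states: 0 = pending, 1 = mined, 2 = awaiting confirmation;
# missing row probability goes to the absorbing 'confirmed' state.
# A fork (prob 0.1) sends a mined transaction back to pending.
Q = np.array([[0.2, 0.8, 0.0],
              [0.1, 0.0, 0.7],
              [0.0, 0.0, 0.3]])
t = expected_steps(Q)   # expected slots from each transient state
```

States further from confirmation accumulate longer expected delays, and raising the fork probability inflates them all, which is the kind of timer/fork effect the model is built to capture.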
Recently, Martinez-Penas and Kschischang (IEEE Trans. Inf. Theory, 2019) showed that lifted linearized Reed-Solomon codes are suitable codes for error control in multishot network coding. We show how to construct and decode lifted interleaved linearized Reed-Solomon codes. Compared to the construction by Martinez-Penas and Kschischang, interleaving allows the decoding region to be increased significantly (especially w.r.t. the number of insertions) and decreases the overhead due to the lifting (i.e., increases the code rate), at the cost of an increased packet size. The proposed decoder is a list decoder that can also be interpreted as a probabilistic unique decoder. Although our best upper bound on the list size is exponential, we present a heuristic argument and simulation results indicating that the list size is in fact one for most channel realizations up to the maximal decoding radius.
This paper presents a detailed investigation of FeCr-based quaternary Heusler alloys. Using ultrasoft pseudopotentials, the electronic and magnetic properties of the compounds are studied within the framework of Density Functional Theory (DFT) with the Quantum Espresso package. The thermodynamic, mechanical, and dynamical stability of the compounds is established through a comprehensive study of different mechanical parameters and phonon dispersion curves. Elastic parameters such as the bulk, Young's, and shear moduli are studied meticulously to understand the different mechanical properties. The FeCr-based compounds also containing yttrium are studied to redress the contradictory electronic and magnetic properties reported in the literature. Interesting properties such as half-metallicity and spin-gapless semiconducting (SGS) behavior are realized in the compounds under study.
We study several variants of the problem of moving a convex polytope $K$, with $n$ edges, in three dimensions through a flat rectangular (and sometimes more general) window. Specifically: $\bullet$ We study variants where the motion is restricted to translations only, discuss situations where such a motion can be reduced to sliding (translation in a fixed direction), and present efficient algorithms for those variants, which run in time close to $O(n^{8/3})$. $\bullet$ We consider the case of a `gate' (an unbounded window with two parallel infinite edges), and show that $K$ can pass through such a window, by any collision-free rigid motion, if and only if it can slide through it. $\bullet$ We consider arbitrary compact convex windows, and show that if $K$ can pass through such a window $W$ (by any motion) then $K$ can slide through a gate of width equal to the diameter of $W$. $\bullet$ We study the case of a circular window $W$, and show that, for the regular tetrahedron $K$ of edge length $1$, there are two thresholds $1 > \delta_1\approx 0.901388 > \delta_2\approx 0.895611$, such that (a) $K$ can slide through $W$ if the diameter $d$ of $W$ is $\ge 1$, (b) $K$ cannot slide through $W$ but can pass through it by a purely translational motion when $\delta_1\le d < 1$, (c) $K$ cannot pass through $W$ by a purely translational motion but can do it when rotations are allowed when $\delta_2 \le d < \delta_1$, and (d) $K$ cannot pass through $W$ at all when $d < \delta_2$. $\bullet$ Finally, we explore the general setup, where we want to plan a general motion (with all six degrees of freedom) for $K$ through a rectangular window $W$, and present an efficient algorithm for this problem, with running time close to $O(n^4)$.
The paper is devoted to the study of Gromov-Hausdorff convergence and stability of irreversible metric-measure spaces, both in the compact and noncompact cases. While the compact setting is mostly similar to the reversible case developed by J. Lott, K.-T. Sturm and C. Villani, the noncompact case presents various surprising phenomena. Since the reversibility of noncompact irreversible spaces might be infinite, this motivates introducing a suitable nondecreasing function that bounds the reversibility of larger and larger balls. By this approach, we are able to prove satisfactory convergence/stability results in a suitable -- reversibility-depending -- Gromov-Hausdorff topology. A wide class of irreversible spaces is provided by Finsler manifolds, which serve to construct various model examples pointing out genuine differences between the reversible and irreversible settings. We conclude the paper by proving various geometric and functional inequalities (such as the Brunn-Minkowski, Bishop-Gromov, log-Sobolev and Lichnerowicz inequalities) on irreversible structures.
We introduce an evolutionary game on hypergraphs in which decisions between a risky alternative and a safe one are taken in social groups of different sizes. The model naturally reproduces choice shifts, namely the differences between the preference of individual decision makers and the consensual choice of a group, that have been empirically observed in choice dilemmas. In particular, a deviation from the Nash equilibrium towards the risky strategy occurs when the dynamics takes place on heterogeneous hypergraphs. These results can explain the emergence of irrational herding and radical behaviours in social groups.
Photos of faces captured in unconstrained environments, such as large crowds, still constitute challenges for current face recognition approaches, as faces are often occluded by objects or people in the foreground. However, few studies have addressed the task of recognizing partial faces. In this paper, we propose a novel approach to partial face recognition capable of recognizing faces with different occluded areas. We achieve this by combining attentional pooling of a ResNet's intermediate feature maps with a separate aggregation module. We further adapt common losses to partial faces in order to ensure that the attention maps are diverse and handle occluded parts. Our thorough analysis demonstrates that we outperform all baselines under multiple benchmark protocols, including naturally and synthetically occluded partial faces. This suggests that our method successfully focuses on the relevant parts of the occluded face.
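Attentional pooling of an intermediate feature map can be sketched as a softmax-weighted spatial average, where a learned score per location can downweight occluded regions. The shapes and the single scoring vector below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def attention_pool(feat, w):
    """Pool an (H, W, C) feature map into a single C-vector.
    w is a learned (C,) scoring vector; locations with higher scores
    (e.g. visible face parts) dominate the pooled descriptor, while
    low-scoring (occluded) locations are suppressed."""
    H, W, C = feat.shape
    flat = feat.reshape(H * W, C)
    scores = flat @ w                        # one attention logit per location
    scores = scores - scores.max()           # numerically stable softmax
    attn = np.exp(scores) / np.exp(scores).sum()
    return attn @ flat                       # attention-weighted spatial average

rng = np.random.default_rng(0)
feat = rng.normal(size=(7, 7, 32))           # toy ResNet feature map
pooled = attention_pool(feat, rng.normal(size=32))
```

Several such pooled descriptors, produced by differently trained scoring vectors, could then be fused by a separate aggregation module, mirroring the two-stage design described above.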