We study the strong magnetic field limit for a nonlinear Iwatsuka-type model, i.e. a nonlinear Schr\"odinger equation in two spatial dimensions with a magnetic vector potential that only depends on the $x$-coordinate. Using a high-frequency averaging technique, we show that this equation can be effectively described by a nonlocal nonlinear model, which is no longer dispersive. We also prove that, in this asymptotic regime, inhomogeneous nonlinearities are confined along the $y$-axis.
Answering in the negative a question of M. Hru\v{s}\'ak, we construct a Borel ideal that is not extendable to any $F_\sigma$ ideal and is not Kat\v{e}tov above the ideal $\mathrm{conv}$.
Mechanical disorder in solids, which is generated by a broad range of physical processes and controls various material properties, appears in a wide variety of forms. Defining unified and measurable dimensionless quantifiers, allowing quantitative comparison of mechanical disorder across widely different physical systems, is therefore an important goal. Two such coarse-grained dimensionless quantifiers (among others) appear in the literature: one is related to the spectral broadening of discrete phononic bands in finite-size systems (accessible through computer simulations) and the other is related to the spatial fluctuations of the shear modulus in macroscopically large systems. The latter has recently been shown to determine the amplitude of wave attenuation rates in the low-frequency limit (accessible through laboratory experiments). Here, using two alternative and complementary theoretical approaches linked to the vibrational spectra of solids, we derive a basic scaling relation between the two dimensionless quantifiers. This scaling relation, which is supported by simulational data, shows that the two apparently distinct quantifiers are in fact intrinsically related, giving rise to a unified quantifier of mechanical disorder in solids. We further discuss the obtained results in the context of the unjamming transition taking place in soft sphere packings at low confining pressures, in addition to their implications for our understanding of the low-frequency vibrational spectra of disordered solids in general, and in particular those of glassy systems.
The problem of Bayesian reduced rank regression is considered in this paper. We propose, for the first time, to use the Langevin Monte Carlo method for this problem. A spectral scaled Student prior distribution is used to exploit the underlying low-rank structure of the coefficient matrix. We show that our algorithms are significantly faster than the Gibbs sampler in the high-dimensional setting. Simulation results show that our proposed algorithms for Bayesian reduced rank regression are comparable to the state-of-the-art method, where the rank is chosen by cross-validation.
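As a concrete illustration of the sampling machinery, the following is a minimal sketch of the unadjusted Langevin algorithm for the coefficient matrix of a reduced rank regression model. It substitutes a simple Gaussian (ridge) prior for the spectral scaled Student prior of the paper, so the gradient below is an assumption made for brevity, not the paper's exact posterior.

```python
import numpy as np

def ula_reduced_rank(X, Y, sigma2=1.0, lam=1.0, step=1e-3, n_iter=3000, rng=None):
    """Unadjusted Langevin Algorithm for B in Y = X B + noise.

    A Gaussian prior (penalty lam * ||B||_F^2 / 2) stands in for the
    spectral scaled Student prior of the paper (simplifying assumption).
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    q = Y.shape[1]
    B = np.zeros((p, q))
    for _ in range(n_iter):
        # gradient of the negative log-posterior
        grad = -X.T @ (Y - X @ B) / sigma2 + lam * B
        # Langevin step: gradient descent plus injected Gaussian noise
        B = B - step * grad + np.sqrt(2.0 * step) * rng.standard_normal((p, q))
    return B
```

Each iteration costs one matrix-vector-style gradient evaluation, which is why such samplers scale better than a Gibbs sweep in high dimensions.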
Multi-regional interaction among neuronal populations underlies the brain's processing of rich sensory information in our daily lives. Recent neuroscience and neuroimaging studies have increasingly used naturalistic stimuli and experimental design to identify such realistic sensory computation in the brain. However, existing methods for cross-areal interaction analysis with dimensionality reduction, such as reduced-rank regression and canonical correlation analysis, have limited applicability and interpretability in naturalistic settings because they usually do not appropriately 'demix' neural interactions into those associated with different types of task parameters or stimulus features (e.g., visual or audio). In this paper, we develop a new method for cross-areal interaction analysis that uses the rich task or stimulus parameters to reveal how and what types of information are shared by different neural populations. The proposed neural demixed shared component analysis combines existing dimensionality reduction methods with a practical neural network implementation of functional analysis of variance with latent variables, thereby efficiently demixing nonlinear effects of continuous and multimodal stimuli. We also propose a simplifying alternative under the assumptions of linear effects and unimodal stimuli. To demonstrate our methods, we analyzed two human neuroimaging datasets of participants watching naturalistic videos of movies and dance movements. The results demonstrate that our methods provide new insights into multi-regional interaction in the brain during naturalistic sensory inputs, which cannot be captured by conventional techniques.
Speaker embeddings extracted with deep 2D convolutional neural networks are typically modeled as projections of first and second order statistics of channel-frequency pairs onto a linear layer, using either average or attentive pooling along the time axis. In this paper we examine an alternative pooling method, where pairwise correlations between channels for given frequencies are used as statistics. The method is inspired by style-transfer methods in computer vision, where the style of an image, modeled by the matrix of channel-wise correlations, is transferred to another image, in order to produce a new image having the style of the first and the content of the second. By drawing analogies between image style and speaker characteristics, and between image content and phonetic sequence, we explore the use of such channel-wise correlation features to train a ResNet architecture in an end-to-end fashion. Our experiments on VoxCeleb demonstrate the effectiveness of the proposed pooling method in speaker recognition.
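A minimal NumPy sketch of channel-wise correlation pooling in the spirit described above: for each frequency bin, time is pooled into a Gram (correlation) matrix over channels, and the upper triangles are concatenated into a fixed-size statistic. The tensor layout and normalization details here are assumptions; a real implementation would operate on batched framework tensors inside the network.

```python
import numpy as np

def correlation_pooling(features):
    """Pool a (channels C, freqs F, time T) feature map into
    per-frequency channel correlation statistics."""
    C, F, T = features.shape
    pooled = []
    for f in range(F):
        X = features[:, f, :]                       # (C, T) slice
        X = X - X.mean(axis=1, keepdims=True)       # zero-mean per channel
        X = X / (X.std(axis=1, keepdims=True) + 1e-8)
        G = X @ X.T / T                             # (C, C) correlation matrix
        iu = np.triu_indices(C)                     # keep upper triangle
        pooled.append(G[iu])
    return np.concatenate(pooled)                   # length F * C*(C+1)/2
```

The output is independent of the utterance length T, which is exactly what a pooling layer feeding a linear embedding layer requires.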
For a team of robots to work collaboratively, it is crucial that each robot have the ability to determine the positions of its neighbors, relative to itself, in order to execute tasks autonomously. This letter presents an algorithm for determining the three-dimensional relative position between two mobile robots, each using nothing more than a single ultra-wideband transceiver, an accelerometer, a rate gyro, and a magnetometer. A sliding window filter estimates the relative position at selected keypoints by combining the distance measurements with acceleration estimates, which each agent computes using an on-board attitude estimator. The algorithm is appropriate for real-time implementation, and has been tested in simulation and experiment, where it comfortably outperforms standard estimators. A positioning accuracy of less than 1 meter is achieved with inexpensive sensors.
Room Temperature Ionic Liquids (RTILs) are molten salts which exhibit unique physical and chemical properties, commonly harnessed for lubrication and energy applications. The purely ionic nature of RTILs leads to strong electrostatic interactions within the liquid, further enhanced in the presence of interfaces and confinement. In this work, we use a tuning-fork-based dynamic Surface Force Tribometer (TF-SFT), which allows probing both the rheological and the tribological properties of RTIL films confined between a millimetric sphere and a surface, over a wide range of confinements. When the RTIL is confined between metallic surfaces, we observe an abrupt change of its rheological properties below a threshold confinement. This is reminiscent of a recently reported confinement-induced capillary freezing, here observed with a wide contact area. In parallel, we probe the tribological response of the film under imposed nanometric shear deformation and unveil a yielding behaviour of the interfacial solid phase below this threshold confinement. This is characterized by a transition from an elastic to a plastic regime, exhibiting striking similarities with the response of glassy materials. This transition to yielding of the RTIL in metallic confinement leads overall to a reduction in friction and offers self-healing protection of the surfaces by avoiding direct contact, with obvious applications in tribology.
Recently, it has been shown theoretically that fluorescence microscopy using random illuminations (RIM) yields a doubled lateral resolution and improved optical sectioning. Moreover, an algorithm called algoRIM, based on variance matching, has been successfully validated on numerous biological applications. Here, we propose a proof of uniqueness of the RIM variance equation, which constitutes a first theoretical validation of algoRIM.
Reliable evaluation of adversarial defenses is a challenging task, currently limited to an expert who manually crafts attacks that exploit the defense's inner workings or approaches based on an ensemble of fixed attacks, none of which may be effective for the specific defense at hand. Our key observation is that adaptive attacks are composed of reusable building blocks that can be formalized in a search space and used to automatically discover attacks for unknown defenses. We evaluated our approach on 24 adversarial defenses and show that it outperforms AutoAttack, the current state-of-the-art tool for reliable evaluation of adversarial defenses: our tool discovered significantly stronger attacks by producing 3.0\%-50.8\% additional adversarial examples for 10 models, while obtaining attacks with slightly stronger or similar strength for the remaining models.
In this paper we compare the experimental HERA data with the next-to-leading order (NLO) approach of Ref.~[C.~Contreras, E.~Levin, R.~Meneses and M.~Sanhueza, Eur. Phys. J. C 80 (2020) no.11, 1029]. This approach includes the re-summed NLO corrections to the kernel of the evolution equation; the correct NLO asymptotic behaviour at $\tau = r^2 Q^2_s \,\gg\,1$; the impact-parameter dependence of the saturation scale in accord with the Froissart theorem; as well as the non-linear corrections. We successfully describe the experimental data with a quality no worse than that of leading-order fits with a larger number of phenomenological parameters. It is demonstrated that the data can be described taking into account both the diffusion in $\ln(k_T)$, which stems from perturbative QCD, and Gribov's diffusion in impact parameter. We also show that the data can be described at rather large values of $\alpha_S$.
Soft glassy materials such as mayonnaise, wet clays, or dense microgels display under external shear a solid-to-liquid transition. Such a shear-induced transition is often associated with a non-monotonic stress response, in the form of a stress maximum referred to as "stress overshoot". This ubiquitous phenomenon is characterized by the coordinates of the maximum in terms of stress $\sigma_\text{M}$ and strain $\gamma_\text{M}$ that both increase as weak power laws of the applied shear rate. Here we rationalize such power-law scalings using a continuum model that predicts two different regimes in the limit of low and high applied shear rates. The corresponding exponents are directly linked to the steady-state rheology and are both associated with the nucleation and growth dynamics of a fluidized region. Our work offers a consistent framework for predicting the transient response of soft glassy materials upon start-up of shear from the local flow behavior to the global rheological observables.
Temporal correspondence - linking pixels or objects across frames - is a fundamental supervisory signal for video models. For the panoptic understanding of dynamic scenes, we further extend this concept to every segment. Specifically, we aim to learn coarse segment-level matching and fine pixel-level matching together. We implement this idea by designing two novel learning objectives. To validate our proposals, we adopt a deep siamese model and train it to learn temporal correspondence on two different levels (i.e., segment and pixel) along with the target task. At inference time, the model processes each frame independently without any extra computation or post-processing. We show that our per-frame inference model achieves new state-of-the-art results on the Cityscapes-VPS and VIPER datasets. Moreover, the model is highly efficient, running about three times faster than the previous state-of-the-art approach.
Lines and circles pose significant scalability challenges in synthetic geometry. A line with $n$ points implies ${n \choose 3}$ collinearity atoms, or alternatively, when lines are represented as functions, equality among ${n \choose 2}$ different lines. Similarly, a circle with $n$ points implies ${n \choose 4}$ cocyclicity atoms or equality among ${n \choose 3}$ circumcircles. We introduce a new mathematical concept of $k$-equivalence relations, which generalizes equality ($k=1$) and includes both lines ($k=2$) and circles ($k=3$), and present an efficient proof-producing procedure to compute the closure of a $k$-equivalence relation.
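The semantics of closing a $k$-equivalence relation can be illustrated with a naive fixpoint procedure: repeatedly merge any two classes that share at least $k$ points (for $k=1$ this is ordinary equality closure; for $k=2$, two lines sharing two points coincide; for $k=3$, two circles sharing three points coincide). This quadratic sketch conveys the concept only; it is an assumption-laden stand-in, not the paper's efficient proof-producing procedure.

```python
def k_equivalence_closure(classes, k):
    """Close a k-equivalence relation by merging classes that
    share at least k points, until no merge applies."""
    sets = [set(s) for s in classes]
    changed = True
    while changed:
        changed = False
        for i in range(len(sets)):
            for j in range(i + 1, len(sets)):
                if len(sets[i] & sets[j]) >= k:
                    sets[i] |= sets.pop(j)   # merge j into i
                    changed = True
                    break
            if changed:
                break                        # restart scan after a merge
    return [frozenset(s) for s in sets]
```

For example, with $k=2$ the collinear triples $\{A,B,C\}$ and $\{B,C,D\}$ share two points and therefore merge into one line $\{A,B,C,D\}$.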
We present a study of the incidence of active galactic nuclei (AGNs) in a sample of major merging systems at 0.3<z<2.5. Galaxies in this merger sample have projected separations between 3 and 15 kpc and are selected from the CANDELS/3D-HST catalogs using a peak-finding algorithm. AGNs in mergers and non-mergers are identified on the basis of their X-ray emission, optical lines, mid-infrared colors, and radio emission. Among galaxies with measurements adequate to identify potential AGNs, we find a similar fraction of AGNs in mergers (16.4%) and in non-merging galaxies (15.4%). In mergers, this fraction is obtained by assuming that, in unresolved observations, only one of the merging galaxies is the AGN source. The similarity between the fractions is possibly due to the higher availability of cold gas at high redshifts, where the excess of nuclear activity as a result of merging is less important than at lower redshifts. Star-forming galaxies have a higher incidence of AGNs than quiescent galaxies. In particular, starbursts in mergers are the most common sites of AGN activity, since they present higher AGN fractions and black hole accretion rates. We find no clear correlation between the black hole accretion rate and the galaxy properties (i.e., star-formation rate, stellar mass) in mergers and non-mergers. However, mergers seem to have a higher correlation with star formation than non-mergers, which possibly indicates that the merging process is starting to influence the star formation and AGN activity even at this pre-coalescence stage.
We consider the weighted Sobolev spaces associated with non-isotropic dilations of Calder\'on-Torchinsky and characterize the spaces by the square functions of Marcinkiewicz type including those defined with repeated uses of averaging operation.
We forecast the reionization history constraints, inferred from Lyman-alpha damping wing absorption features, for a future sample of $\sim 20$ $z \geq 6$ gamma-ray burst (GRB) afterglows. We describe each afterglow spectrum by a three-parameter model. First, $L$ characterizes the size of the ionized region (the "bubble size") around a GRB host halo. Second, $\langle{x_{\rm HI}\rangle}$ is the volume-averaged neutral fraction outside of the ionized bubble around the GRB, which is approximated as spatially uniform. Finally, $N_{\mathrm{HI}}$ denotes the column density of a local damped Lyman-alpha absorber (DLA) associated with the GRB host galaxy. The size distribution of ionized regions is extracted from a numerical simulation of reionization, and evolves strongly across the Epoch of Reionization (EoR). The model DLA column densities follow the empirical distribution determined from current GRB afterglow spectra. We use a Fisher matrix formalism to forecast the $\langle{x_{\rm HI}(z)\rangle}$ constraints that can be obtained from follow-up spectroscopy of afterglows with SNR = 20 per $R=3{,}000$ resolution element at the continuum. We find that the neutral fraction may be determined to better than 10-15\% (1-$\sigma$) accuracy from this data across multiple independent redshift bins at $z \sim 6-10$, spanning much of the EoR, although the precision degrades somewhat near the end of reionization. A more futuristic survey with $80$ GRB afterglows at $z \geq 6$ can improve the precision here by a factor of $2$ and extend measurements out to $z \sim 14$. We further discuss how these constraints may be combined with estimates of the escape fraction of ionizing photons, derived from the DLA column density distribution towards GRBs extracted at slightly lower redshift. This combination will help in testing whether we have an accurate census of the sources that reionized the universe.
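The Fisher matrix step can be sketched for a toy Gaussian-noise model: with per-pixel noise $\sigma_i$ and model derivatives $\partial\mu_i/\partial\theta_j$, the Fisher matrix is $F_{jk} = \sum_i \sigma_i^{-2}\,(\partial\mu_i/\partial\theta_j)(\partial\mu_i/\partial\theta_k)$, and the forecast 1-$\sigma$ errors are $\sqrt{(F^{-1})_{jj}}$. The linear model and noise level below are placeholders, not the damping wing model of the paper.

```python
import numpy as np

def fisher_matrix(grad_model, sigma):
    """grad_model: (n_data, n_params) derivatives of the model spectrum
    w.r.t. the parameters; sigma: (n_data,) Gaussian noise per pixel."""
    W = grad_model / sigma[:, None]
    return W.T @ W

# toy example: line model mu = a + b*x observed on 100 pixels
x = np.linspace(0.0, 1.0, 100)
grads = np.stack([np.ones_like(x), x], axis=1)   # d mu / d(a, b)
F = fisher_matrix(grads, sigma=np.full_like(x, 0.05))
errors = np.sqrt(np.diag(np.linalg.inv(F)))      # forecast 1-sigma errors
```

Marginalized errors come from the inverse of the full matrix, so degeneracies between parameters (here $a$ and $b$, in the paper $L$, $\langle x_{\rm HI}\rangle$, and $N_{\rm HI}$) automatically inflate the forecasts.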
The first chapter is a critical review and a case study in eBusiness, with special attention to digital currencies and their possibilities. The second chapter attempts to incorporate the UTAUT model with perceived risk theory to explore its impact on the intention to use m-government services. The third chapter aims to assess the level of gender inclusivity in the municipal e-procurement processes in the City of Johannesburg as a case study, using a GAD approach. The fourth chapter examines the impediments that derail the intensive uptake of eLearning programmes in a particular higher education institution; the study adopted an inductive research paradigm that followed a qualitative research strategy, with data collected by means of one-on-one in-depth interviews with selected faculty members at a nominated institution of higher learning. The fifth chapter investigates the role of KMS in enhancing the export performance of firms operating within the manufacturing sector in Zimbabwe; the study used a quantitative approach in which a survey questionnaire was distributed to 555 managers drawn from 185 manufacturing firms based in Harare, and data analyses involved the use of descriptive statistics, Spearman correlations and regression analysis. In the sixth chapter, a survey was undertaken of 131 SMEs from the Pelagonija region in order to determine the current level of SME digitalization within the region, to compare it with the EU average, and to draw conclusions on the impact of SME digitalization on regional GDP growth as well as revenue collection. The last chapter's purpose was to develop a measuring and modelling framework, an instrument of IBSQ for the South African banking sector; snowball and convenience sampling, both non-probability techniques, were used to recruit participants for the study, and a total of 310 Internet banking customer responses were utilised in the analysis.
An almost Fuchsian manifold is a hyperbolic 3-manifold of the type $S\times \mathbb{R}$ which admits a closed minimal surface (homeomorphic to $S$) with maximum principal curvature $\lambda_0 <1$, while a weakly almost Fuchsian manifold is of the same type but admits a closed minimal surface with $\lambda_0 \leq 1$. We first prove that any weakly almost Fuchsian manifold is in fact quasi-Fuchsian, and we construct a Canary-Storm type compactification for the weakly almost Fuchsian space. We apply these results to prove uniform upper bounds on the volume of the convex core and on the Hausdorff dimension of the limit set of weakly almost Fuchsian manifolds, and to give examples of quasi-Fuchsian manifolds which admit unique minimal surfaces (resp. stable minimal surfaces) without being almost Fuchsian (resp. weakly almost Fuchsian). We also show that for every $g$ there is an $\epsilon$ depending only on $g$ such that if a closed hyperbolic 3-manifold fibers over the circle with fiber a surface of genus $g$, then any embedded minimal surface isotopic to the fiber has maximum principal curvature $\lambda_0$ larger than $1+\epsilon$.
We solve time-harmonic Maxwell's equations in anisotropic, spatially homogeneous media in intersections of $L^p$-spaces. The material laws are time-independent. The analysis requires Fourier restriction-extension estimates for perturbations of Fresnel's wave surface. This surface can be decomposed into finitely many components of the following three types: smooth surfaces with non-vanishing Gaussian curvature, smooth surfaces with Gaussian curvature vanishing along one-dimensional submanifolds, but without flat points, and surfaces with conical singularities. Our estimates are based on new Bochner-Riesz estimates with negative index for non-elliptic surfaces.
Previously we constructed Calabi-Yau threefolds by a differential-geometric gluing method using Fano threefolds with their smooth anticanonical $K3$ divisors in arXiv:1305.0074. In this article we further consider the diffeomorphic types of the resulting Calabi-Yau threefolds starting from different pairs of Fano threefolds of Picard number one.
Electric field measurements of the Time Domain Sampler (TDS) receiver, part of the Radio and Plasma Waves (RPW) instrument on board Solar Orbiter, often exhibit very intense broadband wave emissions at frequencies below 20~kHz in the spacecraft frame. In this paper, we present a year-long study of electrostatic fluctuations observed in the solar wind over a range of heliocentric distances from 0.5 to 1~AU. The RPW/TDS observations provide a nearly continuous data set for a statistical study of intense waves below the local plasma frequency. The properties of waveform snapshots, continuously collected and processed on board, allow for the mapping of plasma waves at frequencies between 200~Hz and 20~kHz. We used the triggered waveform snapshots and a Doppler-shifted solution of the dispersion relation for wave mode identification in order to carry out a detailed spectral and polarization analysis. Electrostatic ion-acoustic waves are the most common wave emissions observed between the local electron and proton plasma frequencies in the solar wind. The occurrence rate of ion-acoustic waves peaks around perihelion at distances of 0.5~AU and decreases with increasing distance, with only a few waves detected per day at 0.9~AU. Waves are more likely to be observed when the local proton moments and magnetic field are highly variable. A more detailed analysis of more than 10000 triggered waveform snapshots shows a mean wave frequency of about 3~kHz and a mean wave amplitude of about 2.5~mV/m. The wave amplitude varies as $1/R^{1.38}$ with the heliocentric distance $R$. The relative phase distribution between two components of the E-field shows a mostly linear wave polarization. Electric field fluctuations are closely aligned with the directions of the ambient field lines. Only a small fraction (3%) of ion-acoustic waves are observed at larger magnetic discontinuities.
Human mobility is crucial to understanding the transmission pattern of COVID-19 on spatially embedded geographic networks. This pattern seems unpredictable, and the propagation appears unstoppable, resulting in a death toll of over 350,000 in the U.S. by the end of 2020. Here, we create the spatiotemporal inter-county mobility network using 10 TB (terabytes) of trajectory data from 30 million smart devices in the U.S. in the first six months of 2020. We investigate its bond percolation by removing the weakly connected edges. The mobility network becomes vulnerable and prone to reach its criticality, and thus experiences surprisingly abrupt phase transitions. Despite the complex behaviors of the mobility network, we devised a novel approach to identify a small, manageable set of recurrent critical bridges connecting the giant component and the second-largest component. These adaptive links, located across the United States, played a key role as valves connecting components in divisions and regions during the pandemic. Moreover, our numerical results unveil that network characteristics determine the critical thresholds and the bridge locations. The findings provide new insights into managing and controlling the connectivity of mobility networks during unprecedented disruptions, and can potentially offer practical guidance for responding to future infectious diseases both globally and locally.
In the perovskite oxide SrCrO$_{3}$ the interplay between crystal structure, strain and orbital ordering enables a transition from a metallic to an insulating electronic structure under certain conditions. We identified a narrow window of oxygen partial pressure in which highly strained SrCrO$_{3}$ thin films can be grown using radio-frequency (RF) off-axis magnetron sputtering on three different substrates, (LaAlO$_{3}$)$_{0.3}$-(Sr$_{2}$TaAlO$_{6}$)$_{0.7}$ (LSAT), SrTiO$_{3}$ (STO) and DyScO$_{3}$ (DSO). X-ray diffraction and atomic force microscopy confirmed the quality of the films, and a metal-insulator transition driven by the substrate-induced strain was demonstrated.
In this survey we provide an overview of our recent results concerning Ricci de Turck flow on spaces with isolated conical singularities. The crucial characteristic of the flow is that it preserves the conical singularity. Under certain conditions, Ricci flat metrics with isolated conical singularities are stable, and positive scalar curvature is preserved under the flow. We also discuss the relation to Perelman's entropies in the singular setting, and outline open questions and future research directions.
For given quantum (non-commutative) spaces $\mathbb{P}$ and $\mathbb{O}$ we study the quantum space of maps $\mathbb{M}_{\mathbb{P},\mathbb{O}}$ from $\mathbb{P}$ to $\mathbb{O}$. In case of finite quantum spaces these objects turn out to be behind a large class of maps which generalize the classical $\mathrm{qc}$-correlations known from quantum information theory to the setting of quantum input and output sets. We prove a number of important functorial properties of the mapping $(\mathbb{P},\mathbb{O})\mapsto\mathbb{M}_{\mathbb{P},\mathbb{O}}$ and use them to study various operator algebraic properties of the $\mathrm{C}^*$-algebras $\operatorname{C}(\mathbb{M}_{\mathbb{P},\mathbb{O}})$ such as the lifting property and residual finite dimensionality. Inside $\operatorname{C}(\mathbb{M}_{\mathbb{P},\mathbb{O}})$ we construct a universal operator system $\mathbb{S}_{\mathbb{P},\mathbb{O}}$ related to $\mathbb{P}$ and $\mathbb{O}$ and show, among other things, that the embedding $\mathbb{S}_{\mathbb{P},\mathbb{O}}\subset\operatorname{C}(\mathbb{M}_{\mathbb{P},\mathbb{O}})$ is hyperrigid, $\operatorname{C}(\mathbb{M}_{\mathbb{P},\mathbb{O}})$ is the $\mathrm{C}^*$-envelope of $\mathbb{S}_{\mathbb{P},\mathbb{O}}$ and that a large class of non-signalling correlations on the quantum sets $\mathbb{P}$ and $\mathbb{O}$ arise from states on $\operatorname{C}(\mathbb{M}_{\mathbb{P},\mathbb{O}})\otimes_{\rm{max}}\operatorname{C}(\mathbb{M}_{\mathbb{P},\mathbb{O}})$ as well as states on the commuting tensor product $\mathbb{S}_{\mathbb{P},\mathbb{O}}\otimes_{\rm{c}}\mathbb{S}_{\mathbb{P},\mathbb{O}}$. Finally we introduce and study the notion of a synchronous correlation with quantum input and output sets, prove several characterizations of such correlations and their relation to traces on $\operatorname{C}(\mathbb{M}_{\mathbb{P},\mathbb{O}})$.
Along with the increasing popularity of agile software development, software work has become much more social than ever. Contemporary software teams rely on a variety of collaborative practices, such as pair programming, the topic of our study. Many agilists have advocated the importance of collocation, face-to-face interaction, and physical artefacts incorporated in the shared workspace, all of which the COVID-19 pandemic made unavailable; most software companies around the world were forced to send their engineers to work from home. As software projects and teams overnight turned into distributed collaborations, we ask what happened to the pair programming practice in the work-from-home mode. This paper reports on a longitudinal study of remote pair programming in two companies. We conducted 38 interviews with 30 engineers from Norway, Sweden, and the USA, and used the results of a survey in one of the case companies. Our study is unique in that we collected the data longitudinally in April/May 2020, Sep/Oct 2020, and Jan/Feb 2021. We found that pair programming has decreased, and some interviewees report not pairing at all for almost a full year. The experiences of those who paired vary from actively co-editing the code using special tools to more passively co-reading and discussing the code and solutions by sharing the screen. Finally, we found that the interest in and the use of pair programming has increased over time, from the first months of forced work from home to early 2021, also as a social practice.
We study Naruse-Newton coefficients, which are obtained from expanding descent polynomials in a Newton basis introduced by Jiradilok and McConville. These coefficients $C_0, C_1, \ldots$ form an integer sequence associated to each finite set of positive integers. For fixed nonnegative integers $a<b$, we examine the set $R_{a, b}$ of all ratios $\frac{C_a}{C_b}$ over finite sets of positive integers. We characterize finite sets for which $\frac{C_a}{C_b}$ is minimized and provide a construction to prove $R_{a, b}$ is unbounded above. We use this construction to obtain results on the closure of $R_{a, b}$. We also examine properties of Naruse-Newton coefficients associated with doubleton sets, such as unimodality and log-concavity. Finally, we find an explicit formula for all ratios $\frac{C_a}{C_b}$ of Naruse-Newton coefficients associated with ribbons of staircase shape.
The tilted balance among competing interactions can yield a rich variety of ground states of quantum matter. In most Ce-based heavy fermion systems, this can often be qualitatively described by the famous Doniach phase diagram, owing to the competition between the Kondo screening and the Ruderman-Kittel-Kasuya-Yosida exchange interaction. Here, we report an unusual pressure-temperature phase diagram beyond the Doniach one in CeCuP2. At ambient pressure, CeCuP2 displays typical heavy-fermion behavior, albeit with a very low carrier density. With lowering temperature, it shows a crossover from a non-Fermi liquid to a Fermi liquid at around 2.4 K. But surprisingly, the Kondo coherence temperature decreases with increasing pressure, opposite to the behavior of most Ce-based heavy fermion compounds. Upon further compression, two superconducting phases are revealed. At 48.0 GPa, the transition temperature reaches 6.1 K, the highest among all Ce-based heavy fermion superconductors. We argue for possible roles of valence tuning and fluctuations associated with its special crystal structure, in addition to the hybridization effect. These unusual phase diagrams suggest that CeCuP2 is a novel platform for studying rich heavy-fermion physics beyond the conventional Doniach paradigm.
We present a compression algorithm for parton densities using synthetic replicas generated from the training of a Generative Adversarial Network (GAN). The generated replicas are used to further enhance the statistics of a given Monte Carlo PDF set prior to compression. This results in a compression methodology that is able to provide a compressed set with a smaller number of replicas and a more adequate representation of the original probability distribution. We also address the question of whether the GAN could be used as an alternative mechanism to avoid the fitting of a large number of replicas.
Travellers in autonomous vehicles (AVs) no longer need to walk from a parking spot to their destination, as travellers in conventional human-driven vehicles (HVs) do. Instead, they can be dropped off directly at the destination while the AV cruises for parking autonomously. This is a revolutionary change: the parking autonomy of AVs may increase the potential parking span substantially and affect the spatial parking equilibrium. Given this, from urban planners' perspective, it is of great necessity to reconsider the planning of parking supply across the city. To this end, this paper is the first to examine the spatial parking equilibrium considering the mix of AVs and HVs with the parking cruising effect. It is found that the equilibrium solution of travellers' parking location choices can be biased if cruising effects are ignored. On top of that, the optimal parking span of AVs at a given parking supply should be no less than that at equilibrium. Besides, the optimal parking planning to minimize the total parking cost is also explored in a bi-level parking planning design problem (PPDP). While optimal differentiated pricing allows the system to achieve the optimal parking distribution, this study suggests that it is beneficial to encourage AVs to cruise further to park by reserving less than enough parking areas for AVs.
We propose a nanoscale device consisting of a double quantum dot with strong intra- and inter-dot Coulomb repulsions. In this design, the current can only flow through the lower dot, but is triggered by the gate-controlled occupancy of the upper dot. At low temperatures, our calculations predict the double dot to pass through a narrow Kondo regime, resulting in highly sensitive switching characteristics between three well-defined states: insulating, normal conduction and resonant conduction.
In this paper we explore a special class of metric spaces called smocked metric spaces and study their tangent cones at infinity. We prove that under the right hypotheses, the rescaled limits of balls converge in both the Gromov-Hausdorff and Intrinsic Flat sense to normed spaces. This paper will be applied in upcoming work by Kazaras and Sormani concerning Gromov's conjectures on the properties of GH and SWIF limits of Riemannian manifolds with positive scalar curvature.
We consider training models on private data that are distributed across user devices. To ensure privacy, we add on-device noise and use secure aggregation so that only the noisy sum is revealed to the server. We present a comprehensive end-to-end system, which appropriately discretizes the data and adds discrete Gaussian noise before performing secure aggregation. We provide a novel privacy analysis for sums of discrete Gaussians and carefully analyze the effects of data quantization and modular summation arithmetic. Our theoretical guarantees highlight the complex tension between communication, privacy, and accuracy. Our extensive experimental results demonstrate that our solution is essentially able to match the accuracy of central differential privacy with less than 16 bits of precision per value.
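The pipeline described above — quantize on each device, add integer noise, reveal only the modular sum — can be sketched as follows. The rounded continuous Gaussian here is an illustrative stand-in for the paper's actual discrete Gaussian sampler, and all names and parameters are hypothetical:

```python
import random

def quantize(x, scale):
    """Map a real value onto the integer grid used for secure aggregation."""
    return round(x * scale)

def noisy_share(x, scale, sigma, modulus, rng):
    """Quantize one client's value and add integer noise, reduced mod `modulus`.
    NOTE: a rounded continuous Gaussian stands in for the true discrete
    Gaussian sampler analyzed in the paper (an illustrative simplification)."""
    noise = round(rng.gauss(0.0, sigma))
    return (quantize(x, scale) + noise) % modulus

def aggregate(shares, scale, modulus):
    """The server sees only the modular sum of the noisy client shares."""
    total = sum(shares) % modulus
    # Map back from Z_modulus to a signed integer, then undo the scaling.
    if total >= modulus // 2:
        total -= modulus
    return total / scale
```

The interplay the abstract mentions is visible here: a finer `scale` reduces quantization error but consumes more of the `modulus` budget (communication bits), while larger `sigma` strengthens privacy at the cost of accuracy in the decoded sum.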
Among spin-crossover complexes, Fe-porphyrin (FeP) stands out for molecular spintronic applications: An intricate, yet favourable balance between ligand fields, charge transfer, and the Coulomb interaction makes FeP highly manipulable, while its planar structure facilitates device integration. Here, we theoretically design a mechanical spin-switch device in which external strain triggers the intrinsic magneto-structural coupling of FeP through a purely organic embedding. Exploiting the chemical compatibility and stretchability of graphene nanoribbon electrodes, we overcome common reliability and reproducibility issues of conventional inorganic setups. The competition between the Coulomb interaction and distortion-induced changes in ligand fields requires methodologies beyond the state-of-the-art: Combining density functional theory with many-body techniques, we demonstrate experimentally feasible tensile strain to trigger a low-spin ($S=1$) to high-spin ($S=2$) crossover. Concomitantly, the current through the device toggles by over an order of magnitude, adding a fully planar mechanical current-switch unit to the panoply of molecular spintronics.
In this paper, we study the topological properties of complex polynomial Hamiltonian differential systems of degree $n$ having an isochronous center of Morse type. Firstly, we prove that if the critical level curve possessing an isochronous center contains only a single singular point, then the vanishing cycle associated to this center represents a zero homology cycle on the compact Riemann surface of a generic level curve. Our result provides a positive answer to a question asked by L. Gavrilov under a quite simple condition and can be applied to achieve an equivalent description of the Jacobian conjecture on $\mathbb{C}^2$. Secondly, we obtain a very simple but useful necessary condition for isochronicity of Hamiltonian systems, namely that the $(n+1)$-degree part of the Hamiltonian function must have a factor with multiplicity no less than $(n+1)/2$. Thirdly, we show a relation between Gavrilov's question and the conjecture proposed by X. Jarque and J. Villadelprat on the non-isochronicity of real Hamiltonian systems of even degree $n$.
This paper analyzes the performance of the maximum-ratio transmission (MRT)/maximum-ratio combining (MRC) scheme in dual-hop non-orthogonal multiple access (NOMA) full-duplex (FD) relay networks in the presence of residual hardware impairments (RHIs). The effects of channel estimation errors (CEEs) and imperfect successive interference cancellation are also considered for a realistic performance analysis. In the network, the base station and multiple users utilize MRT and MRC, respectively, while a dedicated relay consisting of two antennas, one for receiving and the other for broadcasting, operates in amplify-and-forward mode. As the performance criterion, an exact outage probability (OP) expression is derived for Nakagami-m fading channels. Furthermore, a tight lower bound and asymptotic expressions are also derived to provide more insights into the obtained OP in terms of diversity order and array gain. The obtained numerical results demonstrate the importance of the loop-interference cancellation process at the FD relay in order for the investigated system to perform better than its half-duplex NOMA counterpart. Also, a performance trade-off between the MRT and MRC schemes is observed in the presence of CEEs among users. Furthermore, it is shown that RHIs have a significant effect on the performance of users with lower power coefficients; however, they do not change the diversity order. RHIs and CEEs have the most and the least deteriorating effects on the system performance, respectively.
This paper reviews the first NTIRE challenge on quality enhancement of compressed video, with a focus on the proposed methods and results. In this challenge, the new Large-scale Diverse Video (LDV) dataset is employed. The challenge has three tracks. Tracks 1 and 2 aim at enhancing the videos compressed by HEVC at a fixed QP, while Track 3 is designed for enhancing the videos compressed by x265 at a fixed bit-rate. Besides, Tracks 1 and 3 target improving fidelity (PSNR), while Track 2 targets enhancing perceptual quality. The three tracks attracted 482 registrations in total. In the test phase, 12, 8 and 11 teams submitted final results for Tracks 1, 2 and 3, respectively. The proposed methods and solutions gauge the state-of-the-art of video quality enhancement. The homepage of the challenge is: https://github.com/RenYang-home/NTIRE21_VEnh
Deep neural networks have achieved promising performance in supervised point cloud applications, but manual annotation is extremely expensive and time-consuming in supervised learning schemes. Unsupervised domain adaptation (UDA) addresses this problem by training a model with only labeled data in the source domain but making the model generalize well in the target domain. Existing studies show that self-supervised learning using both source and target domain data can help improve the adaptability of trained models, but they all rely on hand-crafted designs of the self-supervised tasks. In this paper, we propose a learnable self-supervised task and integrate it into a self-supervision-based point cloud UDA architecture. Specifically, we propose a learnable nonlinear transformation that transforms a part of a point cloud to generate abundant and complicated point clouds while retaining the original semantic information, and the proposed self-supervised task is to reconstruct the original point cloud from the transformed ones. In the UDA architecture, an encoder is shared between the networks for the self-supervised task and the main task of point cloud classification or segmentation, so that the encoder can be trained to extract features suitable for both the source and the target domain data. Experiments on PointDA-10 and PointSegDA datasets show that the proposed method achieves new state-of-the-art performance on both classification and segmentation tasks of point cloud UDA. Code will be made publicly available.
Distant supervision (DS) is a well-established technique for creating large-scale datasets for relation extraction (RE) without using human annotations. However, research in DS-RE has been mostly limited to the English language. Constraining RE to a single language inhibits utilization of large amounts of data in other languages that could allow extraction of more diverse facts. Very recently, a dataset for multilingual DS-RE has been released. However, our analysis reveals that the proposed dataset exhibits unrealistic characteristics, such as 1) a lack of sentences that do not express any relation, and 2) all sentences for a given entity pair expressing exactly one relation. We show that these characteristics lead to a gross overestimation of model performance. In response, we propose a new dataset, DiS-ReX, which alleviates these issues. Our dataset has more than 1.5 million sentences, spanning 4 languages, with 36 relation classes + 1 no-relation (NA) class. We also modify the widely used bag attention models by encoding sentences using mBERT, and provide the first benchmark results on multilingual DS-RE. Unlike the competing dataset, we show that our dataset is challenging and leaves enough room for future research in this field.
Coronal upflows at the edges of active regions (AR), which are a possible source of the slow solar wind, have been found to connect with dynamics in the transition region. To infer at what scale transition region dynamics connect to AR upflows, we investigate the statistical properties of the small-scale dynamics in the transition region underneath the upflows at the edge of AR NOAA 11934. With observations from the Interface Region Imaging Spectrograph (IRIS), we found that the Si IV 1403\,\AA\ Doppler map consists of numerous blue-shifted and red-shifted patches, mostly with sizes less than 1\,$Mm^2$. The blue-shifted structures in the transition region tend to be brighter than the red-shifted ones, but their nonthermal velocities show no significant difference. With the SWAMIS feature tracking procedure, in IRIS slit-jaw 1400\,\AA\ images we found that dynamic bright dots with an average size of about 0.3\,$Mm^2$ and lifetimes mostly less than 200\,s spread all over the region. Most of the bright dots appear to be localised, without a clear signature of plasma propagating over a long distance in the projection plane. Surge-like motions with speeds of about 15 km/s could be seen in some events at the boundaries of the upflow region, where the magnetic field appears to be inclined. We conclude that the transition region dynamics connected to coronal upflows should occur on very fine scales, suggesting that the corresponding coronal upflows should also be highly structured. It is also plausible that the transition region dynamics merely act as a stimulation at the coronal base that then drives the upflows in the corona.
The probabilistic learning on manifolds (PLoM) introduced in 2016 has solved difficult supervised problems for the ``small data'' limit where the number N of points in the training set is small. Many extensions have since been proposed, making it possible to deal with increasingly complex cases. However, the performance limit has been observed and explained for applications for which $N$ is very small (50 for example) and for which the dimension of the diffusion-map basis is close to $N$. For these cases, we propose a novel extension based on the introduction of a partition in independent random vectors. We take advantage of this novel development to present improvements of the PLoM such as a simplified algorithm for constructing the diffusion-map basis and a new mathematical result for quantifying the concentration of the probability measure in terms of a probability upper bound. The analysis of the efficiency of this novel extension is presented through two applications.
Confidentiality hinders the publication of authentic, labeled datasets of personal and enterprise data, although such datasets could be useful for evaluating knowledge graph construction approaches in industrial scenarios. Therefore, our plan is to synthetically generate such data in a way that it appears as authentic as possible. Based on our assumption that knowledge workers have certain habits when they produce or manage data, generation patterns can be discovered which data generators can utilize to imitate real datasets. In this paper, we initially derive 11 distinct patterns found in real spreadsheets from industry and demonstrate a suitable generator, called Data Sprout, that is able to reproduce them. We describe how the generator produces spreadsheets in general and what effects the implemented patterns have on the output.
The use of classical computers to simulate quantum computing has been successful in aiding the study of quantum algorithms and circuits that are too complex to examine analytically. Current implementations of quantum computing simulators are limited to two-level quantum systems. Recent advances in high-dimensional quantum computing systems have demonstrated the viability of working with multi-level superposition and entanglement. These advances allow an agile increase in the number of dimensions of the system while maintaining quantum entanglement, achieving higher encoding of information and making quantum algorithms less vulnerable to decoherence and computational errors. In this paper, we introduce QuantumSkynet, a novel high-dimensional cloud-based quantum computing simulator. This platform allows simulations of qudit-based quantum algorithms. We also propose a unified generalization of high-dimensional quantum gates, which are available for simulations in QuantumSkynet. Finally, we report simulations and their results for qudit-based versions of the Deutsch--Jozsa and quantum phase estimation algorithms using QuantumSkynet.
The two-field equations governing the fully nonlinear dynamics of the drift wave (DW) and geodesic acoustic mode (GAM) in toroidal geometry are derived in the nonlinear gyrokinetic framework. Two stages with distinctive features are identified and analyzed. In the linear growth stage, the set of nonlinear equations can be reduced to the intensively studied parametric decay instability (PDI), accounting for the spontaneous resonant excitation of GAM by DW. The main results of previous works on spontaneous GAM excitation, e.g., the much enhanced GAM group velocity and the nonlinear growth rate of GAM, are reproduced from the numerical solution of the two-field equations. In the fully nonlinear stage, soliton structures are observed to form due to the balancing of the self-trapping effect by the spontaneously excited GAM and the kinetic dispersiveness of DW. The soliton structures enhance turbulence spreading from the DW linearly unstable to the stable region, exhibiting convective propagation instead of the typical linear dispersive process, and are thus expected to induce core-edge interaction and nonlocal transport.
Due to mobility and frequent disconnections, the correctness of mobile interaction systems, such as mobile robot systems and mobile payment systems, is often difficult to analyze. This paper introduces three critical properties of such systems, called system connectivity, interaction soundness and data validity, and presents a related modeling and analysis method based on a kind of Petri nets called VPNs. For a given system, a model including component nets and interaction structure nets is constructed using VPNs. The component net describes the internal process of each component, while the interaction structure net reflects the dynamic interaction between components. Based on this model, the three properties are defined and analyzed. A case study of a practical mobile payment system shows the effectiveness of the proposed method.
Point defects in insulators are considered promising candidates for quantum technologies. In keeping with this, we present an extensive optically-detected magnetic resonance (ODMR) study at room temperature on individual TR12 centers (ZPL at 471 nm), which have been known in the literature since 1956. In this work we found TR12 centers to show a strong ODMR signal under optical saturation. The observed defects were created in high-purity epitaxial layers of diamond by standard irradiation and annealing processes. From the analysis of the ODMR spectra, along with antibunching measurements and coherent population trapping, we propose an energy level structure of the TR12 center consisting of ground-state and excited-state singlets complemented by a metastable triplet in between. Mapping the dependence of the center's fluorescence on an external magnetic field and on the polarization of the laser excitation allows us to identify twelve inequivalent orientations for TR12 centers, including the exact orientations of the dipole transition and the triplet axes in the diamond lattice, in full agreement with the results of modeling based on the proposed level structure. Furthermore, a static Jahn-Teller effect was detected through fluorescence switching between two levels at low optical excitation power, directly observable in the real-time fluorescence signal for various polarizations of the laser excitation. Based on these results we discuss the prospects of the TR12 center in diamond for quantum sensing and quantum information processing.
The aim of this article is to generalize some useful Besicovitch-Morse type covering lemmas to complete Riemannian manifolds and to find more spaces in which the so-called BCP and WBCP are equivalent, these two properties being weaker yet still useful. We are also interested in the best constants of Besicovitch-type covering properties in Euclidean spaces; we survey the best known results on related problems before giving a new proof of the Besicovitch covering theorem in the one-dimensional case.
Since a rigorous microscopic treatment of a nematic fluid system based on a pairwise interaction potential is immensely complex, in a previous paper we introduced a simple mean-field potential that was a modification of the Maier-Saupe potential. Building on that work, here we modify that potential to take into account various aspects of the smectic-A-nematic phase transition. In particular, we study the dependence of the phase transition on the coupling coefficient between the nematic and smectic order parameters (which in turn depends on the length of the alkyl chain), the existence of a tricritical point, the variation of entropy and specific heat, and the dependence of the phase transition on pressure.
Learned networks in the domain of visual recognition and cognition impress in part because, even though they are trained with datasets many orders of magnitude smaller than the full population of possible images, they exhibit sufficient generalization to be applicable to new and previously unseen data. Although many have examined generalization from several perspectives, we wondered: if a network is trained with a biased dataset that misses particular samples corresponding to some defining domain attribute, can it generalize to the full domain from which that training dataset was extracted? It is certainly true that in vision no current training set fully captures all visual information, and this may lead to selection bias. Here, we try a novel approach in the tradition of the thought experiment. We run this thought experiment on a real domain of visual objects that we can fully characterize, and look at specific gaps in training data and their impact on performance requirements. Our thought experiment points to three conclusions: first, that generalization behavior is dependent on how sufficiently the particular dimensions of the domain are represented during training; second, that the utility of any generalization is completely dependent on the acceptable system error; and third, that specific visual features of objects, such as pose orientations out of the imaging plane or colours, may not be recoverable if not represented sufficiently in a training set. Any currently observed generalization in modern deep learning networks may be more the result of coincidental alignments, and its utility needs to be confirmed with respect to a system's performance specification. Our Thought Experiment Probe approach, coupled with the resulting Bias Breakdown, can be very informative towards understanding the impact of biases.
The expanding application in Micro-Air Vehicles has encouraged many researchers to understand the unsteady flow around a flapping foil at a low Reynolds number. We numerically investigate an incompressible unsteady flow around a two-dimensional pitching airfoil (SD7003) at high reduced frequency in the laminar regime. This study interrogates the effect of different unsteady parameters, namely amplitude (A), reduced frequency (k), Reynolds number (Re), and asymmetry parameter (S) of the pitching motion on the force coefficients. An inviscid theoretical model is utilized to calculate the lift coefficient for sinusoidal motion in the viscous regime, and a comparison is made with the numerical results. The theoretical analysis identifies the dominance of the non-circulatory lift over the circulatory lift at high reduced frequency. Further, the results indicate that the reduced frequency (k) and asymmetry parameter (S) have a significant impact on the instantaneous and time-averaged force coefficients as well as on the vortex structure in the wake. Finally, a Fast Fourier Transform (FFT) analysis is carried out over a simulated case with fixed amplitude and Reynolds number for distinct k and S values. The findings confirm that the dominant frequency in the flow (k*) correlates directly with the airfoil pitching frequency (k).
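The FFT step — extracting the dominant flow frequency k* from a simulated force-coefficient time series — can be illustrated with a direct DFT sketch (a hypothetical stand-in for a library FFT; the function name is our own):

```python
import math

def dominant_frequency(signal, dt):
    """Return the dominant frequency (in Hz) of a real, uniformly sampled
    signal via a direct DFT over the positive-frequency bins."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]  # drop the DC component
    best_k, best_power = 0, -1.0
    for k in range(1, n // 2):  # positive frequencies only
        re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k / (n * dt)  # bin index -> physical frequency
```

Applied to a sampled lift-coefficient history, the peak bin returned here corresponds to k*, which can then be compared against the imposed pitching frequency k.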
Numerous efforts have been invested in improving the effectiveness of bug localization techniques, whereas little attention has been paid to making these tools run more efficiently in continuously evolving software repositories. This paper first analyzes the information retrieval model behind a classic bug localization tool, BugLocator, and builds a mathematical foundation showing that the model can be updated incrementally as the codebase or bug reports evolve. Then, we present IncBL, a tool for Incremental Bug Localization in evolving software repositories. IncBL is evaluated on the Bugzbook dataset, and the results show that IncBL can significantly reduce the running time, by 77.79% on average compared with re-computing the model, while maintaining the same level of accuracy. We also implement IncBL as a GitHub App that can be easily integrated into open-source projects on GitHub, and users can deploy and use IncBL locally as well. The demo video for IncBL can be viewed at https://youtu.be/G4gMuvlJSb0, and the source code can be found at https://github.com/soarsmu/IncBL.
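The idea of updating a retrieval model incrementally rather than recomputing it can be sketched with a toy TF-IDF index that patches its global document frequencies when a single file changes. This is an illustration of the general principle only, not IncBL's actual rVSM implementation; all names are hypothetical:

```python
import math
from collections import Counter, defaultdict

class IncrementalIndex:
    """Toy incremental TF-IDF index: when one file changes, only its own
    term counts are replaced and the global document frequencies are
    patched, instead of rebuilding the whole model."""

    def __init__(self):
        self.doc_terms = {}          # file name -> Counter of its terms
        self.df = defaultdict(int)   # term -> number of files containing it

    def update(self, name, text):
        """Replace (or add) one document, patching df incrementally."""
        new = Counter(text.split())
        old = self.doc_terms.get(name, Counter())
        for term in old:
            if term not in new:
                self.df[term] -= 1   # term disappeared from this file
        for term in new:
            if term not in old:
                self.df[term] += 1   # term newly appeared in this file
        self.doc_terms[name] = new

    def score(self, query):
        """Rank files by a simple TF-IDF similarity to the bug report text."""
        n_docs = len(self.doc_terms)
        q = Counter(query.split())
        scores = {}
        for name, terms in self.doc_terms.items():
            s = 0.0
            for term, qtf in q.items():
                if term in terms:
                    idf = math.log(1 + n_docs / self.df[term])
                    s += qtf * terms[term] * idf
            scores[name] = s
        return sorted(scores, key=scores.get, reverse=True)
```

When a commit touches one file out of thousands, `update` does work proportional to that file alone, which is the source of the speedup over recomputing every statistic from scratch.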
Recently, DETR and Deformable DETR have been proposed to eliminate the need for many hand-designed components in object detection, while demonstrating performance as good as previous complex hand-crafted detectors. However, their performance on Video Object Detection (VOD) has not been well explored. In this paper, we present TransVOD, an end-to-end video object detection model based on a spatial-temporal Transformer architecture. The goal of this paper is to streamline the pipeline of VOD, effectively removing the need for many hand-crafted components for feature aggregation, e.g., optical flow, recurrent neural networks, relation networks. Besides, benefiting from the object query design in DETR, our method does not need complicated post-processing methods such as Seq-NMS or Tubelet rescoring, which keeps the pipeline simple and clean. In particular, we present a temporal Transformer to aggregate both the spatial object queries and the feature memories of each frame. Our temporal Transformer consists of three components: a Temporal Deformable Transformer Encoder (TDTE) to encode the multi-frame spatial details, a Temporal Query Encoder (TQE) to fuse object queries, and a Temporal Deformable Transformer Decoder to obtain current-frame detection results. These designs boost the strong deformable DETR baseline by a significant margin (3%-4% mAP) on the ImageNet VID dataset, yielding performance comparable to the state of the art on this benchmark. We hope our TransVOD can provide a new perspective for video object detection. Code will be made publicly available at https://github.com/SJTU-LuHe/TransVOD.
Video question answering (Video QA) presents a powerful testbed for human-like intelligent behaviors. The task demands new capabilities to integrate video processing, language understanding, binding abstract linguistic concepts to concrete visual artifacts, and deliberative reasoning over spacetime. Neural networks offer a promising approach to reach this potential through learning from examples rather than handcrafting features and rules. However, neural networks are predominantly feature-based - they map data to unstructured vectorial representations and thus can fall into the trap of exploiting shortcuts through surface statistics instead of the true systematic reasoning seen in symbolic systems. To tackle this issue, we advocate for object-centric representation as a basis for constructing spatio-temporal structures from videos, essentially bridging the semantic gap between low-level pattern recognition and high-level symbolic algebra. To this end, we propose a new query-guided representation framework to turn a video into an evolving relational graph of objects, whose features and interactions are dynamically and conditionally inferred. The object lives are then summarized into resumes, lending themselves naturally to deliberative relational reasoning that produces an answer to the query. The framework is evaluated on major Video QA datasets, demonstrating clear benefits of the object-centric approach to video reasoning.
With the recent developments in neural networks, there has been a resurgence in algorithms for the automatic generation of simulation-ready electronic circuits from hand-drawn circuits. However, most of the approaches in the literature are confined to classifying different types of electrical components, and only a few of those methods show a way to rebuild the circuit schematic from the scanned image, which is extremely important for further automation of netlist generation. This paper proposes a real-time algorithm for the automatic recognition of hand-drawn electrical circuits based on object detection and circuit node recognition. The proposed approach employs You Only Look Once version 5 (YOLOv5) for detection of circuit components and a novel Hough transform based approach for node recognition. Using the YOLOv5 object detection algorithm, a mean average precision (mAP@0.5) of 98.2% is achieved in detecting the components. The proposed method is also able to rebuild the circuit schematic with 80% accuracy, with near-real-time performance of 0.33 s per schematic generation.
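The Hough-transform voting that underlies line (and hence node) recognition can be illustrated with a minimal accumulator over (theta, rho) space. This is a generic textbook sketch, not the paper's novel variant, and all names and thresholds are hypothetical:

```python
import math

def hough_lines(points, width, height, n_theta=180, threshold=20):
    """Minimal Hough transform over a set of edge pixels: each pixel votes
    for every (theta, rho) line passing through it, and parameter pairs
    whose vote count reaches `threshold` are returned as detected lines."""
    acc = {}
    for (x, y) in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            # Normal form of a line: rho = x*cos(theta) + y*sin(theta).
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            key = (t, rho)
            acc[key] = acc.get(key, 0) + 1
    return [(math.pi * t / n_theta, rho)
            for (t, rho), votes in acc.items() if votes >= threshold]
```

Intersections of the detected wire lines would then serve as candidate circuit nodes, to be matched against the component bounding boxes produced by the detector.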
We present a novel deep neural network (DNN) architecture for compressing an image when a correlated image is available as side information only at the decoder. This problem is known as distributed source coding (DSC) in information theory. In particular, we consider a pair of stereo images, which generally have high correlation with each other due to overlapping fields of view, and assume that one image of the pair is to be compressed and transmitted, while the other image is available only at the decoder. In the proposed architecture, the encoder maps the input image to a latent space, quantizes the latent representation, and compresses it using entropy coding. The decoder is trained to extract the common information between the input image and the correlated image, using only the latter. The received latent representation and the locally generated common information are passed through a decoder network to obtain an enhanced reconstruction of the input image. The common information provides a succinct representation of the relevant information at the receiver. We train and demonstrate the effectiveness of the proposed approach on the KITTI and Cityscape datasets of stereo image pairs. Our results show that the proposed architecture is capable of exploiting the decoder-only side information, and outperforms previous work on stereo image compression with decoder side information.
Ly-alpha emitting galaxies and giant Ly-alpha blobs (LABs) have been extensively observed to study the formation history of galaxies. However, the origin of their extended Ly-alpha emission, especially that of LABs, remains controversial. Polarization signals from some LABs have been discovered, and this is commonly interpreted as strong evidence that the extended Ly-alpha emission originates from resonance scattering. The Monte Carlo Ly-alpha radiative transfer code LaRT is updated to investigate the polarization of Ly-alpha using the Stokes vector formalism. We apply LaRT to a few models to explore the fundamental polarization properties of Ly-alpha. Interestingly, individual Ly-alpha photon packets are found to be almost completely polarized after a sufficient number of scatterings (N_scatt > 10^4-10^5 in a static medium) or Doppler shifts induced by gas motion, even when starting from unpolarized light. It is also found that the polarization can exhibit a non-monotonic radial pattern in some cases, besides the commonly known trend of polarization monotonically increasing with radius. The polarization properties are primarily determined by the degree of polarization of individual photon packets and the anisotropy of the Ly-alpha radiation field, which are eventually controlled by the medium's optical depth and velocity field. Once Ly-alpha photon packets achieve ~100% polarization, the radial profile of polarization appears to correlate with the surface brightness profile. A steep surface brightness profile tends to yield a rapid increase of the linear polarization near the Ly-alpha source location. In contrast, a shallow surface brightness profile gives rise to a slowly increasing polarization pattern.
In this work we elaborate on holographic description of the path-integral optimization in conformal field theories (CFT) using Hartle-Hawking wave functions in Anti-de Sitter spacetimes. We argue that the maximization of the Hartle-Hawking wave function is equivalent to the path-integral optimization procedure in CFT. In particular, we show that metrics that maximize gravity wave functions computed in particular holographic geometries, precisely match those derived in the path-integral optimization procedure for their dual CFT states. The present work is a detailed version of \cite{Boruch:2020wax} and contains many new results such as analysis of excited states in various dimensions including JT gravity, and a new way of estimating holographic path-integral complexity from Hartle-Hawking wave functions. Finally, we generalize the analysis to Lorentzian Anti-de Sitter and de Sitter geometries and use it to shed light on path-integral optimization in Lorentzian CFTs.
The possibility of long-baseline quantum experiments in space makes it necessary to better understand the time evolution of relativistic quantum particles in a weakly varying gravitational field. We explain why conventional treatments by traditional quantum optics and atomic physics based on quantum mechanics may become inadequate when faced with issues related to locality, simultaneity, signaling, causality, etc. Quantum field theory is needed. Adding the effects of gravitation, we are led to Quantum Field Theory in Curved Spacetime (QFTCST). This well-established theory should serve as the canonical reference theory to a large class of proposed space experiments testing the foundations of gravitation and quantum theory, and the basic notions of quantum information theory in relativistic settings. This is the first in a series of papers treating near-term quantum optics and matter waves experiments in space from the perspective of QFTCST. We analyze the quantum motion of photons and of scalar massive particles using QFTCST with application to interferometer experiments. Our main result is that, for photons, the weak gravitational field is to leading order completely equivalent to an inhomogeneous dielectric, thus allowing for a description of quantum optics experiments in curved space using familiar notions from the theory of optical media. We also discuss interference experiments that probe first-order quantum coherence, the importance of a covariant particle detection theory, and the relevance of time of arrival measurements. For massive particles with internal structure, we identify a novel gravity-induced phase shift that originates from the different gravitational masses attributed to the excited internal states. This phase shift can in principle be measured in space experiments.
Different AdS-Rindler wedges can be mapped to each other using bulk isometries. In this paper we address how the boundary representations corresponding to the AdS-Rindler wedges transform under such isometries. We show that when a bulk wedge is mapped to another using a bulk isometry, their boundary representations are mapped by the conformal transformation corresponding to the isometry. We comment on the import of this result on the relation between AdS/CFT and quantum error correction.
Markov Population Models are a widespread formalism used to model the dynamics of complex systems, with applications in Systems Biology and many other fields. The associated continuous-time Markov stochastic process is often analyzed by simulation, which can be costly for large or stiff systems, particularly when a massive number of simulations has to be performed (e.g. in a multi-scale model). A strategy to reduce the computational load is to abstract the population model, replacing it with a simpler stochastic model that is faster to simulate. Here we pursue this idea, building on previous works and constructing a generator capable of producing stochastic trajectories in continuous space and discrete time. This generator is learned automatically from simulations of the original model in a Generative Adversarial setting. Compared to previous works, which rely on deep neural networks and Dirichlet processes, we explore the use of state-of-the-art generative models, which are flexible enough to learn a full trajectory rather than a single transition kernel.
In this paper, we exploit the capability of the multi-agent deep reinforcement learning (MA-DRL) technique to generate a transmit power pool (PP) for Internet of things (IoT) networks with semi-grant-free non-orthogonal multiple access (SGF-NOMA). The PP is mapped with each resource block (RB) to achieve distributed transmit power control (DPC). We first formulate the resource (sub-channel and transmit power) selection problem as a stochastic Markov game, and then solve it using two competitive MA-DRL algorithms, namely double deep Q network (DDQN) and Dueling DDQN. Each GF user, as an agent, tries to find the optimal transmit power level and RB to form the desired PP. With the aid of the dueling process, learning can be enhanced by evaluating the value of a state without considering the effect of each action at that state. Therefore, DDQN is designed for communication scenarios with a small action-state space, while Dueling DDQN is for large ones. Our results show that the proposed MA-Dueling DDQN based SGF-NOMA with DPC outperforms the SGF-NOMA system with a fixed-power-control mechanism and networks with pure GF protocols, with 17.5% and 22.2% gains in system throughput, respectively. Moreover, to decrease the training time, we eliminate invalid actions (high transmit power levels) to reduce the action space. We show that our proposed algorithm is computationally scalable to massive IoT networks. Finally, to control the interference and guarantee the quality-of-service requirements of grant-based users, we find the optimal number of GF users for each sub-channel.
We propose in this paper a new nonlinear mathematical model of an oscillating water column. The one-dimensional shallow water equations in the presence of this device are essentially reformulated as two transmission problems: the first one is associated with a step in front of the device and the second one is related to the interaction between waves and a fixed partially-immersed structure. By taking advantage of the free-surface Bernoulli equation, we close the system by deriving a transmission condition that involves a time-dependent air pressure inside the chamber of the device, instead of a constant atmospheric pressure as in the previous work \cite{bocchihevergara2021}. We then show that the second transmission problem can be reduced to a quasilinear hyperbolic initial boundary value problem with a semilinear boundary condition determined by an ODE depending on the trace of the solution to the PDE at the boundary. Local well-posedness for general problems of this type is established via an iterative scheme by using linear estimates for the PDE and nonlinear estimates for the ODE. Finally, the well-posedness of the transmission problem related to the wave-structure interaction in the oscillating water column is obtained as an application of the general theory.
Scene text image super-resolution (STISR) aims to improve the resolution and visual quality of low-resolution (LR) scene text images, and consequently boost the performance of text recognition. However, most existing STISR methods regard text images as natural scene images, ignoring the categorical information of text. In this paper, we make an inspiring attempt to embed a categorical text prior into STISR model training. Specifically, we adopt the character probability sequence as the text prior, which can be obtained conveniently from a text recognition model. The text prior provides categorical guidance to recover high-resolution (HR) text images. On the other hand, the reconstructed HR image can refine the text prior in return. Finally, we present a multi-stage text prior guided super-resolution (TPGSR) framework for STISR. Our experiments on the benchmark TextZoom dataset show that TPGSR can not only effectively improve the visual quality of scene text images, but also significantly improve the text recognition accuracy over existing STISR methods. Our model trained on TextZoom also demonstrates a certain generalization capability to the LR images in other datasets.
The main goal of this paper is to discuss how to integrate the possibilities of crowdsourcing platforms with workflow-supporting systems in order to enable the engagement and interaction of a wider group of people with business tasks. Thus, this work is an attempt to expand the functional capabilities of typical business systems by allowing selected process tasks to be performed by unlimited human resources. Opening business tasks to crowdsourcing within established Business Process Management Systems (BPMS) will improve the flexibility of company processes and allow for a lower workload and greater specialization among the staff employed on-site. The presented conceptual work is based on the current international standards in this field, promoted by the Workflow Management Coalition. To this end, the functioning of business platforms was analysed and their functionality was presented visually, followed by a proposal and a discussion of how to implement crowdsourcing in workflow systems.
The beautiful structures of single and multi-domain proteins are clearly ordered in some fashion but cannot be readily classified using the group theory methods that are successfully used to describe periodic crystals. For this reason, protein structures are considered to be aperiodic, and may have evolved this way for functional purposes, especially in instances that require a combination of softness and rigidity within the same molecule. By analyzing the solved protein structures, we show that orientational symmetry is broken in the aperiodic arrangement of the secondary structural elements (SSEs), which we deduce by calculating the nematic order parameter, $P_{2}$. We find that the folded structures are nematic droplets with a broad distribution of $P_{2}$. We argue that non-zero values of $P_{2}$ lead to an arrangement of the SSEs that can resist mechanical forces, which is a requirement for allosteric proteins. Such proteins, which resist mechanical forces in some regions while being flexible in others, transmit signals from one region of the protein to another (action at a distance) in response to the binding of ligands (oxygen, ATP or other small molecules).
Both single-laser and two-laser experiments were conducted to investigate, via ion imaging, the Br*(2P1/2) and Br(2P3/2) fragments photo-dissociated from 1-bromo-2-methylbutane in the range 232-240 nm, using a (2+1) resonance-enhanced multiphoton ionization detection scheme. The angular analysis of these photofragment distributions yields the anisotropy parameter beta = 1.88 +/- 0.06 for the excited Br* state, which arises from a parallel transition, while beta = 0.63 +/- 0.09 for the Br ground state indicates contributions from both a perpendicular transition and a non-adiabatic transition. When a hexapole coupled with an orienting field was implemented, the parent molecules were spatially oriented with an orientation efficiency |<cos theta>| of 0.15. Besides the angle chi between the recoil velocity v and the transition dipole moment mu, orienting the molecules allows for the evaluation of the angle alpha between v and the permanent molecular dipole moment d. The angular analysis of the Br* photofragment distribution yields chi = 11.5 degrees and alpha in the range from 160 degrees to 180 degrees with weak dependence. In the two-laser experiments, the angular anisotropy of the Br photofragment distribution was found to be smaller (0.38 +/- 0.10) when the photolysis wavelength was red-shifted to 240 nm, suggesting increasing contributions from perpendicular transitions.
Particularly important to hurricane risk assessment for coastal regions is finding accurate approximations of the return probabilities of maximum windspeeds. Since extremes in maximum windspeed are directly related to minima in the central pressure, accurate windspeed return estimates rely heavily on proper modeling of the central pressure minima. Using the HURDAT2 database, we show that the central pressure minima of hurricane events can be appropriately modeled by a nonstationary extreme value distribution. We also provide and validate a Poisson model with a nonstationary rate parameter for the returns of hurricane events. Using our nonstationary models and numerical simulation techniques from the established literature, we perform a simulation study to model the returns of maximum windspeeds of hurricane events along the North Atlantic Coast. We show that our revised model agrees with current data and predicts higher maximum windspeeds for all regions along the coast, with the highest maximum windspeeds occurring in the northern part of the coast.
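As an illustration of the kind of nonstationary Poisson model described in this abstract, the sketch below draws yearly hurricane counts from a Poisson law whose rate drifts linearly in time. The parameter values `a` and `b` are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_counts(years, a=0.5, b=0.02, n_rep=20000):
    """Yearly event counts from a Poisson law with a linearly
    drifting rate lambda(t) = a + b*t (illustrative parameters)."""
    lam = a + b * np.asarray(years, dtype=float)
    return lam, rng.poisson(lam, size=(n_rep, len(lam)))

years = np.arange(30)            # e.g. 30 hurricane seasons
lam, counts = simulate_counts(years)
```

Averaging the replicated counts per season recovers the drifting rate, which is the property a nonstationary-rate fit exploits in reverse.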
In this paper, I generalize the Naszodi-Mendonca method in order to identify changes in marital preferences over multiple dimensions, such as the partners' race and education level. Similar to the original Naszodi-Mendonca method, preferences are identified by the generalized method through estimating their effects on marriage patterns, in particular, on the share of inter-racial couples, and the share of educationally homogamous couples. This is not a simple task because marriage patterns are shaped not only by marital preferences, but also by the distribution of marriageable males and females by traits. The generalized Naszodi-Mendonca method is designed for constructing counterfactuals to perform the decomposition. I illustrate the application of the generalized Naszodi-Mendonca method by decomposing changes in the prevalence of racial and educational homogamy in the 1980s using US data from IPUMS.
To enable multiple missiles to attack a maneuvering target simultaneously, fixed-time distributed cooperative guidance laws are proposed in this paper. We first present a novel fixed-time fast nonsingular terminal sliding mode surface (FNTSMS). In particular, the sliding mode surface not only avoids singularities but also has a bounded settling time regardless of the initial conditions. Based on the FNTSMS, we develop a distributed guidance law with fixed-time convergence. The guidance law achieves consensus of the range-to-go and of the relative velocities along and perpendicular to the line-of-sight (LOS) direction, so as to realize a simultaneous attack. In addition, a saturation function is introduced to avoid the chattering problem caused by the commonly used sign function. Furthermore, the distributed cooperative guidance law under communication failure is considered and analyzed theoretically, showing that the proposed guidance law retains its performance. Finally, simulation results verify the performance of the distributed guidance law and its robustness against communication topology mutations, and the observed phenomena are explained in detail.
For better clustering performance, appropriate representations are critical. Although many neural network-based metric learning methods have been proposed, they do not directly train neural networks to improve clustering performance. We propose a meta-learning method that trains neural networks to obtain representations such that clustering performance improves when the representations are clustered by variational Bayesian (VB) inference with an infinite Gaussian mixture model. The proposed method can cluster unseen unlabeled data using knowledge meta-learned from labeled data that are different from the unlabeled data. For the objective function, we propose a continuous approximation of the adjusted Rand index (ARI), with which we can evaluate clustering performance from soft clustering assignments. Since the approximated ARI and the VB inference procedure are differentiable, we can backpropagate the objective function through the VB inference procedure to train the neural networks. In experiments with text and image data sets, we demonstrate that our proposed method achieves a higher adjusted Rand index than existing methods do.
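For concreteness, the standard hard-assignment adjusted Rand index that the paper's continuous surrogate approximates can be computed from the contingency table of two labelings. This is a generic numpy sketch of that textbook formula, not the authors' differentiable version.

```python
import numpy as np

def comb2(x):
    """Number of unordered pairs, C(x, 2), elementwise."""
    return x * (x - 1) / 2.0

def adjusted_rand_index(labels_a, labels_b):
    """Hard-assignment ARI computed from the contingency table."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    ca, cb = np.unique(a), np.unique(b)
    cont = np.array([[np.sum((a == i) & (b == j)) for j in cb] for i in ca])
    sum_ij = comb2(cont).sum()                 # agreeing pairs term
    sum_a = comb2(cont.sum(axis=1)).sum()      # row marginals
    sum_b = comb2(cont.sum(axis=0)).sum()      # column marginals
    expected = sum_a * sum_b / comb2(len(a))   # chance-level agreement
    max_index = 0.5 * (sum_a + sum_b)
    denom = max_index - expected
    return 1.0 if denom == 0 else (sum_ij - expected) / denom
```

The ARI equals 1 for identical partitions (up to label permutation) and is near 0 for random partitions, which is why the paper uses it as a clustering objective.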
Deployed machine learning models are confronted with the problem of changing data over time, a phenomenon also called concept drift. While existing approaches to concept drift detection already show convincing results, they require true labels as a prerequisite for successful drift detection. Especially in many real-world application scenarios, like the ones covered in this work, true labels are scarce, and their acquisition is expensive. Therefore, we introduce a new algorithm for drift detection, Uncertainty Drift Detection (UDD), which is able to detect drifts without access to true labels. Our approach is based on the uncertainty estimates provided by a deep neural network in combination with Monte Carlo Dropout. Structural changes over time are detected by applying the ADWIN technique to the uncertainty estimates, and detected drifts trigger a retraining of the prediction model. In contrast to input data-based drift detection, our approach considers the effects of the current input data on the properties of the prediction model rather than detecting change in the input data only (which can lead to unnecessary retrainings). We show that UDD outperforms other state-of-the-art strategies on two synthetic as well as ten real-world data sets for both regression and classification tasks.
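A minimal numpy sketch of the Monte Carlo Dropout uncertainty signal that UDD monitors: T stochastic forward passes through a network with dropout left active yield a predictive distribution whose entropy serves as the uncertainty estimate. The tiny randomly initialized two-layer network and all sizes here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_entropy(x, W1, W2, p=0.5, T=200):
    """Predictive entropy of the mean softmax over T dropout passes."""
    probs = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
        mask = rng.random(h.shape) > p       # Bernoulli dropout mask
        h = h * mask / (1.0 - p)             # inverted dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max())
        probs.append(e / e.sum())            # softmax probabilities
    p_mean = np.mean(probs, axis=0)
    return float(-np.sum(p_mean * np.log(p_mean + 1e-12)))

# Toy network: 4 inputs, 16 hidden units, 3 classes.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))
u = mc_dropout_entropy(rng.normal(size=4), W1, W2)
```

A stream of such entropy values (one per incoming example) is what a change detector like ADWIN would then monitor for structural shifts.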
The present study examined 4698 Indian Coronary Artery Disease research publications, as indexed in the Web of Science database during 1990-2019, with a view to understanding their growth rate, global share, citation impact, international collaborative papers, distribution of publications by broad subjects, productivity and citation profile of top organizations and authors, and preferred media of communication. The Indian publications registered an annual average growth rate of 11.47%, a global share of 1.14%, an international collaborative publications share of 38.89%, and a citation impact averaging 25.58 citations per paper. Among broad subjects, Cardiovascular System & Cardiology contributed the largest publications share of 19.14% in Indian coronary artery disease output during 1990-2019, followed by Neurosciences & Neurology (14.94%), Pharmacology & Pharmacy (8.51%), etc. Among the various organizations and authors contributing to Indian coronary artery disease research, the top 20 organizations and top 30 authors together contributed 40.70% and 37.29%, respectively, of the Indian publication output and 38.36% and 33.13%, respectively, of the Indian citation output during 1990-2019. Among the 1222 journals contributing to Indian coronary artery disease research, the top 30 registered a 30.80% share during 1990-2019. There is an urgent need to increase the publication output, improve research quality and strengthen international collaboration. The Indian government also needs to formulate a policy for the identification, screening, diagnosis and treatment of coronary artery disease patients; in addition, curriculum reform in teaching, capacity building, patient education and political support are badly needed.
Fix $R>1$ and let $A_R=\{1/R\le |z|\le R\}$ be an annulus. Also, let $K(R)$ denote the smallest constant such that $A_R$ is a $K(R)$-spectral set for the bounded linear operator $T\in \mathcal{B}(H)$ whenever $\|T\|\le R$ and $\|T^{-1}\|\le R$. We show that $K(R)\ge 2$ for all $R>1$. This improves on previous results by Badea, Beckermann and Crouzeix.
We describe an improvement on the magnetic scalar potential approach to the design of an electromagnet, which incorporates the need to wind the coil as a helix. Any magnetic field that can be described by a magnetic scalar potential is produced with high fidelity within a Target region; all fields are confined within a larger Return. The helical winding only affects the field in the Return.
We introduce the use of conditional generative adversarial networks for generalised gravitational wave burst generation in the time domain. Generative adversarial networks are generative machine learning models that produce new data based on the features of the training data set. We condition the network on five classes of time-series signals that are often used to characterise gravitational wave burst searches: sine-Gaussian, ringdown, white noise burst, Gaussian pulse and binary black hole merger. We show that the model can replicate the features of these standard signal classes and, in addition, produce generalised burst signals through interpolation and class mixing. We also present an example application where a convolutional neural network classifier is trained on burst signals generated by our conditional generative adversarial network. We show that a convolutional neural network classifier trained only on the standard five signal classes has a poorer detection efficiency than a convolutional neural network classifier trained on a population of generalised burst signals drawn from the combined signal class space.
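For reference, the first of the five conditioning classes named above, the sine-Gaussian burst, is simply a sinusoid under a Gaussian envelope. The sketch below (with assumed, arbitrary frequency and quality-factor values) shows the kind of waveform the generator is trained to reproduce.

```python
import numpy as np

def sine_gaussian(t, f0=100.0, q=10.0, amp=1.0):
    """Sine-Gaussian burst: a sinusoid at frequency f0 under a Gaussian
    envelope whose width is set by the quality factor q (illustrative)."""
    tau = q / (2.0 * np.pi * f0)          # envelope decay time
    return amp * np.sin(2.0 * np.pi * f0 * t) * np.exp(-(t / tau) ** 2)

t = np.linspace(-0.05, 0.05, 4096)        # 0.1 s window around the burst centre
h = sine_gaussian(t)
```

The other conditioning classes (ringdown, white noise burst, Gaussian pulse) are analogous short parametric templates, which is what makes class interpolation and mixing a natural test of the generator.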
The analogy between self-similar time series with given Hurst exponent H and Markovian, Gaussian stochastic processes with multiplicative noise and entropic index q (Borland, PRE 57, 6, 6634-6642, 1998) allows us to explain the empirical results reported in (Pavithran et al., EPL 129 (2020) 24004) and (Pavithran et al., Sci. Reports 10.1 (2020) 1-8) with the help of the properties of the nonextensive entropy Sq of index q: a dominant oscillating mode arises as H goes to zero in many different systems, and its amplitude is proportional to 1/H^2. Thus, a decrease of H acts as a precursor of large oscillations of the state variable, which correspond to catastrophic events in many problems of practical interest. In contrast, if H goes to 1 then the time series is strongly intermittent, fluctuations of the state variable follow a power law whose exponent depends on H, and exceedingly large events are basically unpredictable. These predictions agree with observations in problems of aeroacoustics, aeroelasticity, electric engineering, hydrology, laser physics, meteorology, plasma physics, plasticity, polemology, seismology and thermoacoustics.
We discuss relations between the initial boundary value problem (IBVP) and quasi-local Hamiltonians in GR. The latter have traditionally been based on Dirichlet boundary conditions, which however are shown here to be ill-posed for the IBVP. We present and analyse several other choices of boundary conditions which are better behaved with respect to the IBVP and carry out a corresponding Hamiltonian analysis, using the framework of the covariant phase space method.
Providing personalized explanations for recommendations can help users to understand the underlying insight of the recommendation results, which is helpful to the effectiveness, transparency, persuasiveness and trustworthiness of recommender systems. Current explainable recommendation models mostly generate textual explanations based on pre-defined sentence templates. However, the expressive power of template-based explanation sentences is limited to the pre-defined expressions, and manually defining the expressions requires significant human effort. Motivated by this problem, we propose to generate free-text natural language explanations for personalized recommendation. In particular, we propose a hierarchical sequence-to-sequence model (HSS) for personalized explanation generation. Different from conventional sentence generation in NLP research, a great challenge of explanation generation in e-commerce recommendation is that not all sentences in user reviews serve an explanatory purpose. To solve the problem, we further propose an auto-denoising mechanism based on topical item feature words for sentence generation. Experiments on various e-commerce product domains show that our approach can improve not only the recommendation accuracy, but also the explanation quality in terms of the offline measures and feature word coverage. This research is one of the initial steps to grant intelligent agents the ability to explain themselves based on natural language sentences.
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated based on a low-resolution (LR) input. Recently, generative adversarial networks (GANs) have become popular for hallucinating details. Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task. Also, GAN-generated fake details may often undermine the realism of the whole image. We address these issues by proposing best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision during training, which is beneficial to producing more reasonable details. Besides, we propose a region-aware adversarial learning strategy that directs our model to focus on generating details for textured areas adaptively. Extensive experiments justify the effectiveness of our method. An ultra-high-resolution 4K dataset is also constructed to facilitate future super-resolution research.
A long-standing challenge in artificial intelligence is lifelong learning. In lifelong learning, many tasks are presented in sequence and learners must efficiently transfer knowledge between tasks while avoiding catastrophic forgetting over long lifetimes. On these problems, policy reuse and other multi-policy reinforcement learning techniques can learn many tasks. However, they can generate many temporary or permanent policies, resulting in memory issues. Consequently, there is a need for lifetime-scalable methods that continually refine a policy library of a pre-defined size. This paper presents a first approach to lifetime-scalable policy reuse. To pre-select the number of policies, a notion of task capacity, the maximal number of tasks that a policy can accurately solve, is proposed. To evaluate lifetime policy reuse using this method, two state-of-the-art single-actor base-learners are compared: 1) a value-based reinforcement learner, Deep Q-Network (DQN) or Deep Recurrent Q-Network (DRQN); and 2) an actor-critic reinforcement learner, Proximal Policy Optimisation (PPO) with or without a Long Short-Term Memory layer. By selecting the number of policies based on task capacity, D(R)QN achieves near-optimal performance with 6 policies in a 27-task MDP domain and 9 policies in an 18-task POMDP domain; with fewer policies, catastrophic forgetting and negative transfer are observed. Due to slow, monotonic improvement, PPO requires fewer policies, 1 policy for the 27-task domain and 4 policies for the 18-task domain, but it learns the tasks with lower accuracy than D(R)QN. These findings validate lifetime-scalable policy reuse and suggest using D(R)QN for larger and PPO for smaller library sizes.
We prove that the general linear groups of the integers, Gaussian integers, and Eisenstein integers satisfy homological stability of slope 1 when using $\mathbb{Z}[1/2]$-coefficients and of slope $2/3$ when using $\mathbb{Z}$-coefficients.
The sources of reliable, code-level information about vulnerabilities that affect open-source software (OSS) are scarce, which hinders a broad adoption of advanced tools that provide code-level detection and assessment of vulnerable OSS dependencies. In this paper, we study the extent to which the output of off-the-shelf static code analyzers can be used as a source of features to represent commits in Machine Learning (ML) applications. In particular, we investigate how such features can be used to construct embeddings and train ML models to automatically identify source code commits that contain vulnerability fixes. We analyze such embeddings for security-relevant and non-security-relevant commits, and we show that, although in isolation they are not different in a statistically significant manner, it is possible to use them to construct a ML pipeline that achieves results comparable with the state of the art. We also find that the combination of our method with commit2vec represents a tangible improvement over the state of the art in the automatic identification of commits that fix vulnerabilities: the ML models we construct and commit2vec are complementary, the former being more generally applicable, albeit not as accurate.
We review a combinatoric approach to the Hodge Conjecture for Fermat Varieties and announce new cases where the conjecture is true.
This article studies the impact of online news on social and economic consumer perceptions through the application of semantic network analysis. Using almost 1.3 million online articles from Italian media covering a period of four years, we assessed the incremental predictive power of economic-related keywords on the Consumer Confidence Index. We transformed news into networks of co-occurring words and calculated the semantic importance of specific keywords, to see if words appearing in the articles could anticipate consumers' judgements about the economic situation. Results show that economic-related keywords have a stronger predictive power when we consider the current household and national situation, while their predictive power is less significant with regard to expectations about the future. Our indicator of semantic importance offers a complementary approach to estimating consumer confidence, lessening the limitations of traditional survey-based methods.
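The word co-occurrence networks described above can be built in a few lines. This generic sketch (toy documents and simple whitespace tokenization, both assumptions) counts how often two words appear in the same article, giving the edge weights of the semantic network.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(docs):
    """Edge weights: number of documents in which two words co-occur."""
    edges = Counter()
    for doc in docs:
        words = sorted(set(doc.lower().split()))  # unique words per article
        for u, v in combinations(words, 2):       # each unordered pair once
            edges[(u, v)] += 1
    return edges

edges = cooccurrence_edges([
    "economy jobs inflation",
    "economy jobs",
    "inflation prices",
])
```

Keyword importance measures (e.g. weighted degree or centrality) can then be computed on the resulting weighted graph.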
We establish a direct connection between general tensor networks and deep feed-forward artificial neural networks. The core of our results is the construction of neural-network layers that efficiently perform tensor contractions, and that use commonly adopted non-linear activation functions. The resulting deep networks feature a number of edges that closely matches the contraction complexity of the tensor networks to be approximated. In the context of many-body quantum states, this result establishes that neural-network states have strictly the same or higher expressive power than practically usable variational tensor networks. As an example, we show that all matrix product states can be efficiently written as neural-network states with a number of edges polynomial in the bond dimension and depth logarithmic in the system size. The opposite instead does not hold true, and our results imply that there exist quantum states that are not efficiently expressible in terms of matrix product states or practically usable PEPS, but that are instead efficiently expressible with neural network states.
Moir\'e super-potentials in two-dimensional materials allow unprecedented control of the ratio between kinetic and interaction energy. By this, they pave the way to study a wide variety of strongly correlated physics under a new light. In particular, the transition metal dichalcogenides (TMDs) are promising candidate "quantum simulators" of the Hubbard model on a triangular lattice. Indeed, Mott and generalized Wigner crystals have been observed in such devices. Here we theoretically propose to extend this model into the multi-orbital regime by focusing on electron doped systems at filling higher than 2. As opposed to hole bands, the electronic bands in TMD materials include two nearly degenerate species, which can be viewed as two orbitals with different effective mass and binding energy. Using realistic band-structure parameters and a slave-rotor mean-field theory, we find that an orbitally selective Mott (OSM) phase can be stabilized over a wide range of fillings, where one band is locked in a commensurate Mott state, while the other remains itinerant with variable density. This scenario thus realizes the basic ingredients of the Kondo lattice model: a periodic lattice of localized magnetic moments interacting with metallic states. We also discuss possible experimental signatures of the OSM state.
Mitchell Feigenbaum discovered an intriguing property of viewing images through cylindrical mirrors or looking into water. Because the eye is a lens with an opening of about 5 mm, many different rays of reflected images reach the eye and need to be interpreted by the visual system. This has the surprising effect that what one perceives depends on the orientation of the head, whether it is tilted or not. I explain and illustrate this phenomenon with the example of a human eye looking at a ruler immersed in water.
The restoration lemma by Afek, Bremler-Barr, Kaplan, Cohen, and Merritt [Dist. Comp. '02] proves that, in an undirected unweighted graph, any replacement shortest path avoiding a failing edge can be expressed as the concatenation of two original shortest paths. However, the lemma is tiebreaking-sensitive: if one selects a particular canonical shortest path for each node pair, it is no longer guaranteed that one can build replacement paths by concatenating two selected shortest paths. They left as an open problem whether a method of shortest path tiebreaking with this desirable property is generally possible. We settle this question affirmatively with the first general construction of restorable tiebreaking schemes. We then show applications to various problems in fault-tolerant network design. These include a faster algorithm for subset replacement paths, more efficient fault-tolerant (exact) distance labeling schemes, fault-tolerant subset distance preservers and $+4$ additive spanners with improved sparsity, and fast distributed algorithms that construct these objects. For example, an almost immediate corollary of our restorable tiebreaking scheme is the first nontrivial distributed construction of sparse fault-tolerant distance preservers resilient to three faults.
We give a new proof for the central limit theorem in probability for the directed polymer model in a bounded environment with bond disorder in the interior of the weak disorder phase. In the same setup, we also show that the large deviation rate function agrees with that of the underlying random walk. In addition, for the Brownian polymer model, we show that the central limit theorem holds almost surely in the whole weak disorder phase. The results are proved using the moment bound from [20] and a new tool introduced in this paper, which allows a quantitative comparison between the associated martingales at different inverse temperatures. This comparison is made precise using the noise operator $T_\rho$ acting on the environment by independent resampling.
A splinter is a notion of singularity that has seen numerous recent applications, especially in connection with the direct summand theorem, the mixed characteristic minimal model program, Cohen-Macaulayness of absolute integral closures and cohomology vanishing theorems. Nevertheless, many basic questions about these singularities remain elusive. One outstanding problem is whether the splinter property spreads from a point to an open neighborhood of a noetherian scheme. Our paper addresses this problem in prime characteristic, where we show that a locally noetherian scheme that has finite Frobenius or that is locally essentially of finite type over a quasi-excellent local ring has an open splinter locus. In particular, all varieties over fields of positive characteristic have open splinter loci. Intimate connections are established between the openness of splinter loci and $F$-compatible ideals, which are prime characteristic analogues of log canonical centers. We prove the surprising fact that for a large class of noetherian rings with pure (aka universally injective) Frobenius, the splinter condition is detected by the splitting of a single generically \'etale finite extension. We also show that for a noetherian $\textbf{N}$-graded ring over a field, the homogeneous maximal ideal detects the splinter property.
We consider accelerated black hole horizons with and without defects. These horizons appear in the $C$-metric solution to the Einstein equations and in its generalization to the case where external fields are present. These solutions realize a variety of physical processes, from the decay of a cosmic string via black hole pair nucleation to the creation of a black hole pair by an external electromagnetic field. Here, we show that such geometries exhibit an infinite set of symmetries in their near horizon region, generalizing in this way previous results for smooth isolated horizons. By considering the limit close to both the black hole and the acceleration horizons, we show that a sensible set of asymptotic boundary conditions is preserved by supertranslation and superrotation transformations. By acting on the geometry with such transformations, we derive the superrotated, supertranslated version of the $C$-metric and compute the associated conserved charges.
Multi-access coded caching schemes from cross resolvable designs (CRDs) have been reported recently \cite{KNRarXiv}. To compare coded caching schemes with different numbers of users, and possibly different numbers of caches, a new metric called rate-per-user was introduced, and it was shown that under this new metric the schemes from CRDs perform better than the Maddah-Ali-Niesen scheme in the large memory regime. In this paper, a new class of CRDs is presented, and it is shown that the multi-access coded caching schemes derived from these CRDs perform better than the Maddah-Ali-Niesen scheme in the entire memory regime. A comparison with other known multi-access coded caching schemes is also presented.
The security of the Internet rests on a small number of open-source cryptographic libraries: a vulnerability in any one of them threatens to compromise a significant percentage of web traffic. Despite this potential for security impact, the characteristics and causes of vulnerabilities in cryptographic software are not well understood. In this work, we conduct the first comprehensive analysis of cryptographic libraries and the vulnerabilities affecting them. We collect data from the National Vulnerability Database, individual project repositories and mailing lists, and other relevant sources for eight widely used cryptographic libraries. Among our most interesting findings is that only 27.2% of vulnerabilities in cryptographic libraries are cryptographic issues, while 37.2% are memory safety issues, indicating that systems-level bugs are a greater security concern than the actual cryptographic procedures. In our investigation of the causes of these vulnerabilities, we find evidence of a strong correlation between the complexity of these libraries and their (in)security, empirically demonstrating the potential risks of bloated cryptographic codebases. We further compare our findings with non-cryptographic systems, observing that cryptographic libraries are, indeed, more complex than comparable non-cryptographic systems, and that this excess complexity appears to produce significantly more vulnerabilities in cryptographic libraries than in non-cryptographic software.
Recent advances in quantum engineering have given us the ability to design hybrid systems with novel properties normally not present in the regime they operate in. The coupling of spin ensembles and magnons to microwave resonators has, for instance, led to a much richer understanding of collective effects in these systems and their potential quantum applications. We can also hybridize electron and nuclear spin ensembles together in the solid-state regime to investigate collective effects normally observed only in the atomic, molecular and optical world. Here we explore, in the solid-state regime, the dynamics of a double-domain nuclear spin ensemble coupled to the Nambu-Goldstone boson in GaAs semiconductors and show that it exhibits both collective and individual relaxation (thermalization) on very different time scales. Further, the collective relaxation of the nuclear spin ensemble is what one would expect from superradiant decay. This opens up the possibility of exploring novel collective behaviour in solid-state systems where the natural energies associated with the spins are much less than the thermal energy.
In this paper, we develop a computational multiscale method to solve the parabolic wave approximation with heterogeneous and variable media. The parabolic wave approximation is a technique for approximating the full wave equation. One benefit of the method is that one wave-propagation direction can be taken as an evolution direction, which we can then discretize using a classical scheme such as backward Euler. Consequently, we obtain a set of quasi-gas-dynamic (QGD) models with different heterogeneous permeability fields. We then employ the constraint energy minimizing generalized multiscale finite element method (CEM-GMsFEM) to perform the spatial discretization of the problem. The resulting system can be solved by combining this spatial discretization with a central difference scheme for the time evolution. Due to the variable media, we apply proper orthogonal decomposition (POD) to further reduce the dimension of the problem and solve the corresponding model problem in the POD space instead of the whole multiscale space spanned by all possible multiscale basis functions. We prove the stability of the full discretization scheme and give a convergence analysis of the proposed approximation scheme. Numerical results verify the effectiveness of the proposed method.
Nickel-based complex oxides have served as a playground for decades in the quest for a copper-oxide analog of high-temperature (high-Tc) superconductivity. They may provide key insights into the mechanism of high-Tc superconductivity and an alternative route to a room-temperature superconductor. The recent discovery of superconductivity in infinite-layer nickelate thin films has brought this pursuit to an end. Achieving complete control over material preparation and a full understanding of the properties and electronic structures has become the center of gravity of current research on nickelates. Thus far, material synthesis remains challenging, the demonstration of perfect diamagnetism is still missing, and an understanding of the roles of the interface and the bulk in the superconducting properties is still lacking. Here, we synthesized high-quality Nd0.8Sr0.2NiO2 thin films of different thicknesses and investigated the interface and strain effects on the electrical, magnetic and optical properties. Perfect diamagnetism is demonstrated, confirming the occurrence of superconductivity in the thin films. Unlike the thick films, in which the normal-state Hall coefficient (RH) changes sign from negative to positive as the temperature decreases, the RH of films thinner than 6.1 nm remains negative over the whole temperature range below 300 K, suggesting a thickness-driven band structure modification. X-ray spectroscopy reveals the Ni-O hybridization nature of doped infinite-layer nickelates, and the hybridization is enhanced as the thickness decreases. Consistent with band structure calculations on nickelate/SrTiO3 interfaces, the interface and strain effects induce a dominant electron-like band in the ultrathin films, thus causing the sign change of the RH.
Recommending medications for patients using electronic health records (EHRs) is a crucial data mining task for an intelligent healthcare system. It can assist doctors in making clinical decisions more efficiently. However, the inherent complexity of EHR data renders it a challenging task: (1) Multilevel structures: EHR data typically contains multilevel structures that are closely related to the decision-making pathways, e.g., laboratory results lead to disease diagnoses, which in turn contribute to the prescribed medications; (2) Multiple sequence interactions: multiple sequences in EHR data are usually closely correlated with each other; (3) Abundant noise: many task-unrelated features or noisy information within EHR data generally result in suboptimal performance. To tackle the above challenges, we propose a multilevel selective and interactive network (MeSIN) for medication recommendation. Specifically, MeSIN is designed with three components. First, an attentional selective module (ASM) is applied to assign flexible attention scores to different medical code embeddings according to their relevance to the recommended medications in each admission. Second, we incorporate a novel interactive long-short term memory network (InLSTM) to reinforce the interactions of multilevel medical sequences in EHR data with the help of a calibrated memory-augmented cell and an enhanced input gate. Finally, we employ a global selective fusion module (GSFM) to fuse the multi-sourced information embeddings into final patient representations for medication recommendation. To validate our method, extensive experiments have been conducted on a real-world clinical dataset. The results demonstrate a consistent superiority of our framework over several baselines and attest to the effectiveness of our proposed approach.
Quantum information is typically encoded in the state of a qubit that is decoupled from the environment. In contrast, waveguide quantum electrodynamics studies qubits coupled to a mode continuum, exposing them to a loss channel and causing quantum information to be lost before coherent operations can be performed. Here we restore coherence by realizing a dark state that exploits symmetry properties and interactions between four qubits. Dark states decouple from the waveguide and are thus a valuable resource for quantum information, but they also come with a challenge: they cannot be controlled by the waveguide drive. We overcome this problem by designing a drive that utilizes the symmetry properties of the collective state manifold, allowing us to selectively drive both bright and dark states. The decay time of the dark state exceeds that of the waveguide-limited single qubit by more than two orders of magnitude. Spectroscopy on the second excitation manifold provides further insight into the level structure of the hybridized system. Our experiment paves the way for implementations of quantum many-body physics in waveguides and the realization of quantum information protocols using decoherence-free subspaces.
Sparse neural networks have been widely applied to reduce the computational demands of training and deploying over-parameterized deep neural networks. For inference acceleration, methods that discover a sparse network from a pre-trained dense network (dense-to-sparse training) work effectively. Recently, dynamic sparse training (DST) has been proposed to train sparse neural networks without pre-training a dense model (sparse-to-sparse training), so that the training process can also be accelerated. However, previous sparse-to-sparse methods mainly focus on Multilayer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs), failing to match the performance of dense-to-sparse methods in the Recurrent Neural Network (RNN) setting. In this paper, we propose an approach to train intrinsically sparse RNNs with a fixed parameter count in a single run, without compromising performance. During training, we allow RNN layers to have a non-uniform redistribution across cell gates for better regularization. Further, we propose SNT-ASGD, a novel variant of the averaged stochastic gradient optimizer, which significantly improves the performance of all sparse training methods for RNNs. Using these strategies, we achieve state-of-the-art sparse training results, better than those of dense-to-sparse methods, with various types of RNNs on the Penn TreeBank and WikiText-2 datasets. Our code is available at https://github.com/Shiweiliuiiiiiii/Selfish-RNN.