The Subaru Strategic Program with the Hyper Suprime-Cam (HSC-SSP) has proven successful thanks to its extremely wide area coverage in past years. Taking advantage of this feature, we report initial results from a search for expansive over- and under-dense structures at $z=$ 0.3-1 based on the second Public Data Release, where optical 5-band photometric data for $\sim$ eight million sources with $i<23$ mag are available over $\sim360$ square degrees. We not only confirm known superclusters but also find candidate titanic over- and under-dense regions out to $z=1$. A mock data analysis suggests that the density peaks would involve one or more massive dark matter haloes ($>10^{14}$ M$_\odot$) at that redshift, and that the density troughs tend to be empty of massive haloes over $>10$ comoving Mpc. In addition, the density peaks and troughs at $z<0.6$ are in part identified as positive and negative weak lensing signals, respectively, in mean tangential shear profiles, showing good agreement with those inferred from the full-sky weak lensing simulation. The coming extensive spectroscopic surveys will be able to resolve these colossal structures in three-dimensional space. The number density information over the entire survey field will be available as grid-point data on the website of the HSC-SSP data release (https://hsc.mtk.nao.ac.jp/ssp/data-release/).
Pre-training and fine-tuning have achieved remarkable success in many downstream natural language processing (NLP) tasks. Recently, pre-training methods tailored for information retrieval (IR) have also been explored, and the latest success is the PROP method which has reached new SOTA on a variety of ad-hoc retrieval benchmarks. The basic idea of PROP is to construct the \textit{representative words prediction} (ROP) task for pre-training inspired by the query likelihood model. Despite its exciting performance, the effectiveness of PROP might be bounded by the classical unigram language model adopted in the ROP task construction process. To tackle this problem, we propose a bootstrapped pre-training method (namely B-PROP) based on BERT for ad-hoc retrieval. The key idea is to use the powerful contextual language model BERT to replace the classical unigram language model for the ROP task construction, and re-train BERT itself towards the tailored objective for IR. Specifically, we introduce a novel contrastive method, inspired by the divergence-from-randomness idea, to leverage BERT's self-attention mechanism to sample representative words from the document. By further fine-tuning on downstream ad-hoc retrieval tasks, our method achieves significant improvements over baselines without pre-training or with other pre-training methods, and further pushes forward the SOTA on a variety of ad-hoc retrieval tasks.
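As an illustration only (the exact B-PROP sampling procedure, which is contrastive and inspired by divergence-from-randomness, is defined in the paper and not reproduced here), a minimal sketch of scoring document terms by aggregated BERT self-attention might look as follows; the checkpoint and the use of [CLS]-to-token attention are our own assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Hedged sketch: rank document words by the self-attention they receive from
# the [CLS] token, averaged over layers and heads; high-scoring words could
# then be sampled as "representative words" for the ROP task.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

def attention_term_scores(document: str):
    inputs = tok(document, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    # out.attentions: tuple of (batch, heads, seq, seq); average layers and heads.
    att = torch.stack(out.attentions).mean(dim=(0, 2))[0, 0]   # attention from [CLS]
    tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return sorted(zip(tokens, att.tolist()), key=lambda x: -x[1])
```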
By analyzing photometric and spectroscopic time series in this paper, we show that the pulsator V764 Mon, previously assumed to be the brightest RR Lyrae star in the sky, is in fact a rapidly rotating delta Scuti star with an unusually long dominant period (P1=0.29 d). Our spectroscopy confirms the Gaia satellite's discovery of the binarity of V764 Mon. In the case of HY Com, a `bona fide' RRc star, we present its first complete radial velocity curve. Additionally, we find that the star continues the strong phase variation reported earlier.
Quantum computing can efficiently simulate Hamiltonian dynamics of many-body quantum physics, a task that is generally intractable with classical computers. The hardness stems from the ubiquitous anti-commutation relations of quantum operators, corresponding to the notorious negative sign problem in classical simulation. Intuitively, Hamiltonians with more commutative terms are easier to simulate on a quantum computer, and anti-commutation relations generally cause more errors, such as in the product formula method. Here, we theoretically explore the role of anti-commutation relations in Hamiltonian simulation. We find that, contrary to this intuition, anti-commutation relations can also reduce the hardness of Hamiltonian simulation. Specifically, Hamiltonians with mutually anti-commutative terms are easy to simulate, just as Hamiltonians consisting of mutually commutative terms are. This property is further utilized to reduce the algorithmic error or the gate complexity of the truncated Taylor series quantum algorithm for general problems. Moreover, we propose two modified linear-combination-of-unitaries methods tailored for Hamiltonians with different degrees of anti-commutation. We numerically verify that the proposed methods exploiting anti-commutation relations can significantly improve the simulation accuracy of electronic Hamiltonians. Our work sheds light on the roles of commutation and anti-commutation relations in simulating quantum systems.
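As a standard illustration of why mutually anti-commuting terms are easy (our own worked example, not quoted from the abstract): if $H=\sum_j h_j A_j$ with involutory terms $A_j^2=I$ and $\{A_j,A_k\}=0$ for $j\neq k$, then $H^2=\big(\sum_j h_j^2\big)I$, so with $\lambda=\big(\sum_j h_j^2\big)^{1/2}$ the evolution operator admits the exact closed form $e^{-iHt}=\cos(\lambda t)\,I-i\,\frac{\sin(\lambda t)}{\lambda}\,H$, i.e., a short linear combination of unitaries with no product-formula error.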
The deep extension of the restricted Boltzmann machine (RBM), known as the deep Boltzmann machine (DBM), is an expressive family of machine learning models which can serve as compact representations of complex probability distributions. However, jointly training DBMs in the unsupervised setting has proven to be a formidable task. A recent technique we have proposed, called mode-assisted training, has shown great success in improving the unsupervised training of RBMs. Here, we show that the performance gains of mode-assisted training are even more dramatic for DBMs. In fact, DBMs jointly trained with the mode-assisted algorithm can represent the same data set with orders of magnitude fewer total parameters compared to state-of-the-art training procedures, and even with respect to RBMs, provided a fan-in network topology is also introduced. This substantial saving in the number of parameters also makes this training method very appealing for hardware implementations.
Probability forecasts for binary events play a central role in many applications. Their quality is commonly assessed with proper scoring rules, which assign forecasts a numerical score such that a correct forecast achieves a minimal expected score. In this paper, we construct e-values for testing the statistical significance of score differences of competing forecasts in sequential settings. E-values have been proposed as an alternative to p-values for hypothesis testing, and they can easily be transformed into conservative p-values by taking the multiplicative inverse. The e-values proposed in this article are valid in finite samples without any assumptions on the data generating processes. They also allow optional stopping, so a forecast user may decide to interrupt evaluation at any time, taking into account the available data, and still draw statistically valid inference, which is generally not true for classical p-value based tests. In a case study on postprocessing of precipitation forecasts, state-of-the-art forecast dominance tests and e-values lead to the same conclusions.
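As a hedged, minimal sketch (a generic betting-style e-process, not necessarily the exact construction of the paper): under the null that forecast p is, conditionally, at least as good as forecast q in expected Brier score, each factor below has conditional expectation at most one, so the running product is a nonnegative supermartingale and hence an e-process; its reciprocal, capped at one, is a conservative p-value that remains valid under optional stopping.

```python
import numpy as np

def brier(p, y):
    return (p - y) ** 2

def e_process(p_fcst, q_fcst, y, lam=0.5):
    # Betting-style e-process for H0: "p is at least as good as q".
    # Brier score differences lie in [-1, 1], so lam in [0, 1] keeps every
    # factor nonnegative.
    d = brier(p_fcst, y) - brier(q_fcst, y)   # positive when q outperforms p
    return np.cumprod(1.0 + lam * d)

# Toy example: e-value after 200 observations and its conservative p-value.
rng = np.random.default_rng(0)
truth = rng.uniform(size=200)
y = rng.binomial(1, truth)
e = e_process(p_fcst=np.full(200, 0.5), q_fcst=truth, y=y)
print(e[-1], min(1.0, 1.0 / e[-1]))
```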
We prove an upper bound on the rank of the abelianised revised fundamental group (called "revised first Betti number") of a compact $RCD^{*}(K,N)$ space, in the same spirit as the celebrated Gromov-Gallot upper bound on the first Betti number for a smooth compact Riemannian manifold with Ricci curvature bounded below. When the synthetic lower Ricci bound is close enough to (negative) zero and the aforementioned upper bound on the revised first Betti number is saturated (i.e. equal to the integer part of $N$, denoted by $\lfloor N \rfloor$), then we establish a torus stability result stating that the space is $\lfloor N \rfloor$-rectifiable as a metric measure space, and a finite cover must be mGH-close to an $\lfloor N \rfloor$-dimensional flat torus; moreover, in case $N$ is an integer, we prove that the space itself is bi-H\"older homeomorphic to a flat torus. This second result extends to the class of non-smooth $RCD^{*}(-\delta, N)$ spaces a celebrated torus stability theorem by Colding (later refined by Cheeger-Colding).
Customers of machine learning systems demand accountability from the companies employing these algorithms for various prediction tasks. Accountability requires understanding of the system's limits and of the conditions under which erroneous predictions occur, as customers are often interested in understanding incorrect predictions, and model developers are focused on finding methods that yield incremental improvements to an existing system. Therefore, we propose an accountable error characterization method, AEC, to understand when and where errors occur within existing black-box models. AEC, being constructed with human-understandable linguistic features, allows model developers to automatically identify the main sources of errors for a given classification system. It can also be used to sample the set of most informative input points for the next round of training. We perform error detection for a sentiment analysis task using AEC as a case study. Our results on the sample sentiment task show that AEC is able to characterize erroneous predictions into human-understandable categories and also achieves promising results on selecting erroneous samples when compared with uncertainty-based sampling.
The paper deals with the H2-norm and associated energy or power measurements for a class of processes known as CSVIU (Control and State Variation Increase Uncertainty). These are system models in which a stochastic process conveys the underlying uncertainties, and they are able to give rise to cautious controls. The paper delves into the non-controlled version and the fundamental system and norm notions associated with stochastic stability and mean-square convergence. One pillar of the study is the connection between the finiteness of one of these norms, or limited growth of an energy measurement, and the corresponding stochastic stability notions. A detectability concept ties these notions together, and the analysis of linear positive operators plays a fundamental role. The introduction of various H2-norms and energy measurement performance criteria allows one to span the focus from transient to long-run behavior. As the discount parameter turns into a counter-discount, the criteria enforce stricter requirements on the second-moment steady-state errors and on the exponential convergence rate. A tidy connection among these H2-performance measures is obtained via a unifying vanishing-discount argument.
A nilmanifold is a (left) quotient of a nilpotent Lie group by a cocompact lattice. A hypercomplex structure on a manifold is a triple of complex structure operators satisfying the quaternionic relations. A hypercomplex nilmanifold is a compact quotient of a nilpotent Lie group equipped with a left-invariant hypercomplex structure. Such a manifold admits a whole 2-dimensional sphere $S^2$ of complex structures induced by quaternions. We prove that for any hypercomplex nilmanifold $M$ and a generic complex structure $L\in S^2$, the complex manifold $(M,L)$ has algebraic dimension 0. A stronger result is proven when the hypercomplex nilmanifold is abelian. Consider the Lie algebra of left-invariant vector fields of Hodge type (1,0) on the corresponding nilpotent Lie group with respect to some complex structure $I\in S^2$. A hypercomplex nilmanifold is called abelian when this Lie algebra is abelian. We prove that all complex subvarieties of $(M,L)$ for generic $L\in S^2$ on a hypercomplex abelian nilmanifold are also hypercomplex nilmanifolds.
Given the increasing data collection capabilities and limited computing resources of future collider experiments, interest in using generative neural networks for the fast simulation of collider events is growing. In our previous study, the Bounded Information Bottleneck Autoencoder (BIB-AE) architecture for generating photon showers in a high-granularity calorimeter showed highly accurate modeling of various global differential shower distributions. In this work, we investigate how the BIB-AE encodes this physics information in its latent space. Our understanding of this encoding allows us to propose methods to further optimize the generation performance, for example by altering latent space sampling or by suggesting specific changes to hyperparameters. In particular, we improve the modeling of the shower shape along the particle incident axis.
Recent progress in nanofabrication has led to the emergence of three-dimensional magnetic nanostructures as a vibrant field of research. This includes the study of three-dimensional arrays of interconnected magnetic nanowires with tunable artificial spin-ice properties. Prominent examples of such structures are magnetic buckyball nanoarchitectures, which consist of ferromagnetic nanowires connected at vertex positions corresponding to those of a C60 molecule. These structures can be regarded as prototypes for the study of the transition from two- to three-dimensional spin-ice lattices. In spite of their significance for three-dimensional nanomagnetism, little is known about the micromagnetic properties of buckyball nanostructures. By means of finite-element micromagnetic simulations, we investigate the magnetization structures and the hysteretic properties of several sub-micron-sized magnetic buckyballs. Similar to ordinary artificial spin-ice lattices, the array can be magnetized in a variety of zero-field states with vertices exhibiting different degrees of magnetic frustration. Remarkably, and unlike in planar geometries, magnetically frustrated states can be reversibly created and dissolved by applying an external magnetic field. This ease of inserting and removing defect-like magnetic charges, made possible by the angle-selectivity of the field-induced switching of individual nanowires, demonstrates a potentially significant advantage of three-dimensional nanomagnetism over planar geometries. The control provided by the ability to switch between ice-rule-obeying and magnetically frustrated structures could be an important feature of future applications, including magnonic devices exploiting differences in the fundamental frequencies of these configurations.
This perspective article elucidates both the importance and the implications of relativistic spacetime crystals as well as the renormalized blended coordinates transformation. It alludes to possible applications in materials science, condensed matter physics and quantum gravity.
The task of assigning semantic classes and track identities to every pixel in a video is called video panoptic segmentation. Our work is the first that targets this task in a real-world setting requiring dense interpretation in both spatial and temporal domains. As the ground truth for this task is difficult and expensive to obtain, existing datasets are either constructed synthetically or only sparsely annotated within short video clips. To overcome this, we introduce a new benchmark encompassing two datasets, KITTI-STEP and MOTChallenge-STEP. The datasets contain long video sequences, providing challenging examples and a test-bed for studying long-term pixel-precise segmentation and tracking under real-world conditions. We further propose a novel evaluation metric, Segmentation and Tracking Quality (STQ), that fairly balances the semantic and tracking aspects of this task and is more appropriate for evaluating sequences of arbitrary length. Finally, we provide several baselines to evaluate the status of existing methods on this new challenging dataset. We have made our datasets, metric, benchmark servers, and baselines publicly available, and hope this will inspire future research.
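For orientation (our recollection of the metric's definition, to be checked against the paper itself): STQ combines an association quality term AQ, measuring pixel-level tracking, with a segmentation quality term SQ, measuring class-level IoU, as their geometric mean, $\mathrm{STQ}=\sqrt{\mathrm{AQ}\times\mathrm{SQ}}$, so that neither aspect can dominate the other.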
Third-party software, or skills, are essential components of Smart Personal Assistants (SPA). The number of skills has grown rapidly, in a changing environment that has no clear business model. Skills can access personal information, and this may pose a risk to users. However, there is little information about how this ecosystem works, let alone the tools that can facilitate its study. In this paper, we present the largest systematic measurement of the Amazon Alexa skill ecosystem to date. We study developers' practices in this ecosystem, including how they collect and justify the need for sensitive information, by designing a methodology to identify over-privileged skills with broken privacy policies. We collect 199,295 Alexa skills and uncover that around 43% of the skills (and 50% of the developers) that request these permissions follow bad privacy practices, including (partially) broken data permissions traceability. In order to perform this kind of analysis at scale, we present SkillVet, which leverages machine learning and natural language processing techniques and generates high-accuracy prediction sets. We report a number of concerning practices, including how developers can bypass Alexa's permission system through account linking and conversational skills, and offer recommendations on how to improve transparency, privacy and security. Resulting from the responsible disclosure we have conducted, 13% of the reported issues no longer pose a threat at submission time.
We introduce a new network marker for climate network analysis. It is based on a previously proposed definition of the local clustering coefficient for weighted correlation networks, introduced in the neuroscience context and aimed at compensating for uninformative correlations caused by indirect interactions. We modify this definition further by replacing Pearson's pairwise correlation coefficients and Pearson's three-way partial correlation coefficients with the respective Kendall's rank correlations. This reduces the statistical sample size required to compute the correlations, which translates into the possibility of using shorter time windows and hence a shorter response time for real-time climate network analysis. We compare the proposed network marker to the conventional local clustering coefficient based on unweighted networks obtained by thresholding the correlation matrix. We show several examples where the new marker is found to be better associated with tropical cyclones than the unweighted local clustering coefficient.
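A minimal sketch of the main ingredients, under our own simplifying assumptions (the precise weighted clustering coefficient follows the cited neuroscience work and is not reproduced in the abstract): Kendall pairwise correlations and the usual first-order partial correlation formula, from which the weighted local clustering coefficient can then be assembled.

```python
import numpy as np
from scipy.stats import kendalltau

def kendall_matrix(X):
    # X: (time, nodes) array of climate time series at grid points.
    n = X.shape[1]
    R = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            tau, _ = kendalltau(X[:, i], X[:, j])
            R[i, j] = R[j, i] = tau
    return R

def partial_corr(R, i, j, k):
    # Three-way partial correlation of nodes i and j given k, computed here
    # from Kendall rank correlations instead of Pearson coefficients.
    num = R[i, j] - R[i, k] * R[j, k]
    den = np.sqrt((1.0 - R[i, k] ** 2) * (1.0 - R[j, k] ** 2))
    return num / den
```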
A previously published covariant decomposition of the Levi-Civita tensor has demonstrated the strong mathematical parallel between gravity and $N=2$ Yang-Mills theory. We use this to argue that the Lorentz gauge theory has a condensate vacuum of lower energy than the perturbative vacuum. Adapting our previously published Clairaut-based treatment of QCD, we go on to study the implications for second quantisation.
Decentralized Finance (DeFi), a blockchain-powered peer-to-peer financial system, is mushrooming. One and a half years ago the total value locked in DeFi systems was approximately $700$m USD; now, as of September 2021, it stands at around $100$bn USD. The frenetic evolution of the ecosystem has created challenges in understanding the basic principles of these systems and their security risks. In this Systematization of Knowledge (SoK) we delineate the DeFi ecosystem along the following axes: its primitives, its operational protocol types and its security. We provide a distinction between technical security, which has a healthy literature, and economic security, which is largely unexplored, connecting the latter with new models and thereby synthesizing insights from computer science, economics and finance. Finally, we outline the open research challenges in the ecosystem across these security types.
10 {\mu}m lasing is studied in a compact CO2-He cell pressurized up to 15 atm when optically pumped by a ~50 mJ Fe:ZnSe laser tunable around 4.3 {\mu}m. The optimal pump wavelength and partial pressure of CO2 for generating 10 {\mu}m pulses are found to be ~4.4 {\mu}m and 0.75 atm, respectively. Without cavity optimization, the optical-to-optical conversion efficiency reached ~10% at a total pressure of 7 atm. The gain lifetime is measured to be ~1 {\mu}s at pressures above 10 atm, indicating the feasibility of using high-pressure optically pumped CO2 for the efficient amplification of picosecond 10 {\mu}m pulses.
We present a unified framework for minimizing average completion time for many seemingly disparate online scheduling problems, such as the traveling repairperson problem (TRP), the dial-a-ride problem (DARP), and scheduling on unrelated machines. We construct a simple algorithm that handles all these scheduling problems by computing and later executing auxiliary schedules, each optimizing a certain function on the already-seen prefix of the input. The optimized function resembles a prize-collecting variant of the original scheduling problem. By a careful analysis of the interplay between these auxiliary schedules, and by later employing the resulting inequalities in a factor-revealing linear program, we obtain improved bounds on the competitive ratio for all these scheduling problems. In particular, our techniques yield a $4$-competitive deterministic algorithm for all previously studied variants of online TRP and DARP, and a $3$-competitive one for scheduling on unrelated machines (also with precedence constraints). This improves over the currently best ratios for these problems, which are $5.14$ and $4$, respectively. We also show how to use randomization to further reduce the competitive ratios to $1+2/\ln 3 < 2.821$ and $1+1/\ln 2 < 2.443$, respectively. The randomized bounds also substantially improve the current state of the art. Our upper bound for DARP contradicts the lower bound of 3 given by Fink et al. (Inf. Process. Lett. 2009); we pinpoint a flaw in their proof.
This paper introduces the Simulated Jet Engine Bracket Dataset (SimJEB): a new, public collection of crowdsourced mechanical brackets and accompanying structural simulations. SimJEB is applicable to a wide range of geometry processing tasks; the complexity of the shapes in SimJEB offers a challenge to automated geometry cleaning and meshing, while categorical labels and structural simulations facilitate classification and regression (i.e., engineering surrogate modeling). In contrast to existing shape collections, SimJEB's models are all designed for the same engineering function and thus have consistent structural loads and support conditions. On the other hand, SimJEB models are more complex, diverse, and realistic than the synthetically generated datasets commonly used in parametric surrogate model evaluation. The designs in SimJEB were derived from submissions to the GrabCAD Jet Engine Bracket Challenge: an open engineering design competition with over 700 hand-designed CAD entries from 320 designers representing 56 countries. Each model has been cleaned, categorized, meshed, and simulated with finite element analysis according to the original competition specifications. The result is a collection of 381 diverse, high-quality and application-focused designs for advancing geometric deep learning, engineering surrogate modeling, automated cleaning and related geometry processing tasks.
Several governments introduced or promoted the use of contact-tracing apps during the ongoing COVID-19 pandemic. In Germany, the related app is called Corona-Warn-App, and by the end of 2020 it had 22.8 million downloads. Contact tracing is a promising approach for containing the spread of the novel coronavirus, but it is only effective if there is a large user base, which brings new challenges such as app users unfamiliar with smartphones or apps. As Corona-Warn-App is voluntary to use, reaching many users and gaining a positive public perception is crucial for its effectiveness. Based on app reviews and tweets, we analyze the public perception of Corona-Warn-App. We collected and analyzed all 78,963 app reviews for the Android and iOS versions from release (June 2020) to the beginning of February 2021, as well as all original tweets until February 2021 containing #CoronaWarnApp (43,082). For the reviews, the most common words and n-grams point towards technical issues, but it remains unclear to what extent this is due to the app itself, the underlying Exposure Notification Framework, system settings on the user's phone, or the user's misinterpretation of app content. For the Twitter data, based on tweet content, frequent hashtags, and interactions with tweets, we conclude that the German Twitter-sphere widely reports adopting the app and promotes its use.
The proximity effect from a spin-triplet $p_x$-wave superconductor to a dirty normal-metal has been shown to result in various unusual electromagnetic properties, reflecting a cooperative relation between topologically protected zero-energy quasiparticles and odd-frequency Cooper pairs. However, because of a lack of candidate materials for spin-triplet $p_x$-wave superconductors, observing this effect has been difficult. In this paper, we demonstrate that the anomalous proximity effect, which is essentially equivalent to that of a spin-triplet $p_x$-wave superconductor, can occur in a semiconductor/high-$T_c$ cuprate superconductor hybrid device in which two potentials coexist: a spin-singlet $d$-wave pair potential and a spin--orbit coupling potential sustaining the persistent spin-helix state. As a result, we propose an alternative and promising route to observe the anomalous proximity effect related to the profound nature of topologically protected quasiparticles and odd-frequency Cooper pairs.
The dipole anisotropy in the Cosmic Microwave Background Radiation (CMBR) has yielded a peculiar velocity of 370 km s$^{-1}$ along $l=264^\circ, b=48^\circ$. However, some other dipoles, for instance from the number counts, sky brightness or redshift distributions in large samples of distant Active Galactic Nuclei (AGNs), have yielded values of the peculiar velocity many times larger than that from the CMBR, though surprisingly, in all cases the directions agreed with the CMBR dipole. Here we determine our peculiar motion from a sample of ~0.28 million AGNs, selected from the Mid-Infrared Active Galactic Nuclei (MIRAGN) sample comprising more than a million sources. From this, we find a peculiar velocity more than four times the CMBR value, although the direction is within $\sim 2\sigma$ of the CMBR dipole. A genuine value of the solar peculiar velocity should be the same irrespective of the data or the technique employed to estimate it. Therefore, such discordant dipole amplitudes might mean that the explanation for these dipoles, including that of the CMBR, is in fact something else. However, the observed fact that the direction is the same in all cases, though obtained from completely independent surveys using different instruments and techniques, by different sets of people employing different computing routines, might nonetheless indicate that these dipoles are not merely due to some systematics; otherwise, why would they all point along the same direction? It might instead suggest a preferred direction in the Universe, implying a genuine anisotropy, which would violate the Cosmological Principle, the core of modern cosmology.
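For context (a standard result not restated in the abstract): for a flux-limited sample with integral counts $N(>S)\propto S^{-x}$ and spectral index $\alpha$, the expected kinematic dipole amplitude is $\mathcal{D}=[2+x(1+\alpha)]\,\beta$ with $\beta=v/c$ (Ellis & Baldwin 1984); an amplitude several times this expectation therefore cannot be attributed to the 370 km s$^{-1}$ motion inferred from the CMBR alone.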
We present 3D general relativistic magnetohydrodynamic (GRMHD) simulations of zero-angular-momentum accretion around a rapidly rotating black hole, modified by the presence of initially uniform magnetic fields. We consider several angles between the magnetic field direction and the black hole spin. In the resulting flows, the midplane dynamics are governed by magnetic-reconnection-driven turbulence in a magnetically arrested (or nearly arrested) state. Electromagnetic jets with outflow efficiencies of ~10-200% occupy the polar regions, reaching several hundred gravitational radii before they dissipate due to the kink instability. The jet directions fluctuate in time and can be tilted by as much as ~30 degrees with respect to the black hole spin, but this tilt does not depend strongly on the tilt of the initial magnetic field. A jet forms even when there is no initial net vertical magnetic flux, since turbulent, horizon-scale fluctuations can generate a net vertical field locally. Peak jet power is obtained for an initial magnetic field tilted by 40-80 degrees with respect to the black hole spin, because this maximizes the amount of magnetic flux that can reach the black hole. These simulations may be a reasonable model for low-luminosity black hole accretion flows such as Sgr A* or M87.
Large Pre-trained Language Models (PLMs) have become ubiquitous in the development of language understanding technology and lie at the heart of many artificial intelligence advances. While advances reported for English using PLMs are unprecedented, reported advances using PLMs in Hebrew are few and far between. The problem is twofold. First, Hebrew resources available for training NLP models are not of the same order of magnitude as their English counterparts. Second, there are no accepted tasks and benchmarks on which to evaluate the progress of Hebrew PLMs. In this work we aim to remedy both aspects. First, we present AlephBERT, a large pre-trained language model for Modern Hebrew, which is trained with a larger vocabulary and on a larger dataset than any Hebrew PLM before it. Second, using AlephBERT we present new state-of-the-art results on multiple Hebrew tasks and benchmarks, including Segmentation, Part-of-Speech Tagging, full Morphological Tagging, Named-Entity Recognition and Sentiment Analysis. We make our AlephBERT model publicly available, providing a single point of entry for the development of Hebrew NLP applications.
The goal of this minireview is to describe how the Kolmogorov-Johnson-Mehl-Avrami (KJMA) model has evolved from its birth up to the present day. The model, which dates back to the late 1930s, has the purpose of describing the kinetics of a phase transformation. Given the nature of this article, although there are hundreds (if not thousands) of experimental studies on the most disparate topics that are interpreted on the basis of the KJMA model, no arguments relating to these will be touched upon. Starting from the ingenious concept of phantom nuclei, first introduced by Avrami to obtain the exact kinetics, we review theoretical approaches that go beyond this concept. We show how spatial correlation among nuclei plays a fundamental role in these models.
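For reference, the kinetics in question is the familiar KJMA (Avrami) law for the transformed fraction, $X(t)=1-\exp(-K t^{n})$, where the rate constant $K$ and the exponent $n$ encode the nucleation and growth mechanism and the dimensionality of growth.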
The layered perovskite YBaCuFeO5 (YBCFO) is considered one of the best candidates for high-temperature chiral multiferroics with strong magnetoelectric coupling. In RBaCuFeO5 perovskites (R: rare-earth or Y) the A-site cations are fully ordered, whereas their magnetic properties strongly depend on the preparation process. They exhibit partial cationic disorder at the B-site that generates a magnetic spiral stabilized through directionally assisted long-range coupling between canted, locally frustrated spins. Moreover, the orientation of the magnetic spiral can be critical for the magnetoelectric response of this chiral magnetic oxide. We have synthesized and studied YBaCuFe1-xMnxO5 samples doped with Mn, with the aim of increasing spin-orbit coupling effects, and found that the overall Fe/Cu cation disorder at the B-sites can be increased by doping without changing the sample preparation process. For YBaCuFe1-xMnxO5 samples prepared under the same conditions, the T-x magnetic phase diagram has been constructed in the range 10 K-500 K by combining magnetometry, X-ray and neutron powder diffraction measurements. The tilting angles of the spins in the collinear phase, $\theta_{\rm col}$, and in the spiral phase, $\theta_{\rm spiral}$, barely vary with temperature. In the collinear phase $\theta_{\rm col}$ is also independent of the Mn content. In contrast, the presence of Mn produces a progressive reorientation of the plane of the magnetic helix in the incommensurate phase, capable of transforming the helicoidal spin ordering into a cycloidal one, which may critically determine the ferroelectric and magnetoelectric behavior of these compounds. Some of these observations are of interest for engineering and developing this family of potential high-temperature multiferroics.
Despite their consequential applications, metastable states of antibranes in warped throats are not yet fully understood. In this thesis, we provide new information on various aspects of these metastable antibranes through applications of the blackfold effective theory for higher-dimensional black holes. As concrete examples, we study the conjectured metastable state of polarised anti-D3 branes at the tip of the Klebanov-Strassler (KS) throat in type IIB supergravity and the analogous state of polarised anti-M2 branes at the tip of the Cvetic-Gibbons-Lu-Pope (CGLP) throat in eleven-dimensional supergravity. For anti-D3 branes in the KS throat, we provide novel evidence for the existence of the metastable state exactly where no-go theorems are lifted. In the extremal limit, we recover directly in supergravity the metastable states originally discovered by Kachru, Pearson, and Verlinde (KPV). Away from extremality, we uncover a metastable wrapped black NS5 state. We observe that such metastability is lost when the wrapped NS5 is heated sufficiently that its horizon geometry resembles that of a black anti-D3. We study the classical stability of the KPV state under generic long-wavelength deformations and observe that, with regard to the considered perturbations and regime of parameters, the state is classically stable. A study of anti-M2 branes in the CGLP throat reveals many similarities to that of the anti-D3 branes. We recover directly in supergravity the Klebanov-Pufu (KP) state at extremality, and our finite temperature results fit suggestively well with known, complementary no-go theorems. However, we discover an unexpected, exotic pattern of thermal transitions of the KP state, different from that of the KPV state. This thesis also contains a pedagogical introduction to the blackfold formalism, focusing on aspects immediately relevant to applications to metastable antibranes.
In this paper, a relay-aided two-phase transmission protocol for the smart factory scenario is proposed. The protocol aims to enable ultra-reliable transmission of a target number of uplink critical data bits from all robots within a latency constraint, by jointly optimizing the relay selection, resource block (RB) assignment, and transmit power allocation. This protocol design is formulated as a mixed-integer and strictly non-convex problem in which the optimization variables are mutually coupled, which is challenging. Instead of conventional methods designed for solving the problem, we leverage the properties of the relative entropy function to equivalently transform the problem without introducing extra constraints. As the packet error probability requirements of each robot under the two possible transmission modes are coupled in one overall reliability constraint, the big-M technique is applied to decouple it into two corresponding reliability constraints: one for the direct transmission mode and the other for the cooperative transmission mode. Moreover, both non-convex penalty (NCP) and quadratic penalty (QP) approaches are utilized to deal with the binary indicator constraints. Based on these penalty methods, a sequence of penalized approximated convex problems can be iteratively solved to obtain sub-optimal solutions. Numerical results demonstrate the efficiency of the two penalty methods in terms of the sub-optimal values of total transmit power and the convergence rate. Further, the impacts of the reliability requirement, the number and location of relays, the number of robots, and the target number of data bits on the total power consumption are analyzed.
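As a generic illustration of the big-M step (a schematic of our own, with hypothetical symbols, not the paper's exact constraints): if $z\in\{0,1\}$ indicates direct transmission and $\varepsilon^{\mathrm{dir}},\varepsilon^{\mathrm{coop}}$ denote the packet error probabilities of the two modes, a coupled requirement of the form $z\,\varepsilon^{\mathrm{dir}}+(1-z)\,\varepsilon^{\mathrm{coop}}\le\varepsilon^{\max}$ can be decoupled into $\varepsilon^{\mathrm{dir}}\le\varepsilon^{\max}+M(1-z)$ and $\varepsilon^{\mathrm{coop}}\le\varepsilon^{\max}+Mz$ with a sufficiently large constant $M$, so that each reliability constraint is active only in its own transmission mode.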
We analytically study shock waves in the Josephson transmission line (JTL) in the presence of ohmic dissipation. When ohmic resistors shunt the Josephson junctions (JJ) or are introduced in series with the ground capacitors, the shock is broadened. When ohmic resistors are in series with the JJ, the shock remains sharp, as it was in the absence of dissipation. In all the cases considered, ohmic resistors do not influence the shock propagation velocity. We also study an alternative to the shock wave - an expansion fan - in the framework of the simple wave approximation for the dissipationless JTL, and formulate the generalization of the approximation for the JTL with ohmic dissipation.
The world is facing a tough situation due to the catastrophic pandemic caused by the novel coronavirus (COVID-19). The number of people affected by this virus is increasing exponentially day by day and has already crossed 6.4 million. As no vaccine has been discovered yet, early detection and isolation of patients is the most effective way to reduce the spread of the virus. Detecting infected persons from chest X-rays using deep neural networks can be applied as a time- and labor-saving solution. In this study, we try to detect COVID-19 by classifying COVID-19, pneumonia and normal chest X-rays. We use five different pre-trained convolutional neural network models (VGG16, VGG19, Xception, InceptionV3 and ResNet50) and compare their performance. VGG16 and VGG19 show precise performance in classification: both models can distinguish between the three kinds of X-rays with an accuracy of over 92%. Another part of our study is to find the impact of weather factors (temperature, humidity, sun hours and wind speed) on this pandemic using a decision tree regressor. We find that temperature, humidity and sun hours jointly hold an 85.88% impact on the escalation of COVID-19 and a 91.89% impact on deaths due to COVID-19, where humidity alone has an 8.09% impact on deaths. We also try to predict the death of an individual due to COVID-19 based on age, gender, country, and location using logistic regression, which predicts death with a model accuracy of 94.40%.
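A minimal transfer-learning sketch of the kind of classifier described (our own illustration with assumed input size, head architecture and hyperparameters, not the authors' exact configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Frozen ImageNet-pretrained VGG16 backbone with a small classification head
# for the three classes: COVID-19, pneumonia, normal.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```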
Single-particle x-ray diffractive imaging (SPI) of small (bio-)nanoparticles (NPs) requires optimized injectors to collect sufficient diffraction patterns to reconstruct the NP structure with high resolution. Typically, aerodynamic-lens-stack injectors are used for single-NP injection. However, current injectors were developed for larger NPs ($\gg\!100$ nm) and their ability to generate high-density NP beams suffers with decreasing NP size. Here, an aerodynamic-lens-stack injector with variable geometry and the corresponding geometry-optimization procedure are presented. The optimization for 50 nm gold NP (AuNP) injection using a numerical simulation infrastructure capable of calculating the carrier gas flow and the particle trajectories through the injector is introduced. The simulations are experimentally validated using spherical AuNPs and sucrose NPs. In addition, the optimized injector is compared to the standard-installation "Uppsala-injector" for AuNPs; results for these heavy particles show a shift in the particle-beam focus position rather than a change in beam size, which results in a lower gas background for the optimized injector. Optimized aerodynamic-lens-stack injectors will make it possible to increase NP beam density, reduce the gas background, discover the limits of current injectors, and contribute to structure determination of small NPs using SPI.
Despite significant improvements over the last few years, cloud-based healthcare applications continue to suffer from poor adoption due to their limitations in meeting stringent security, privacy, and quality-of-service requirements (such as low latency). The edge computing trend, along with techniques for distributed machine learning such as federated learning, has gained popularity as a viable solution in such settings. In this paper, we leverage the capabilities of edge computing in medicine by analyzing and evaluating the potential of intelligent processing of clinical visual data at the edge, allowing remote healthcare centers that lack advanced diagnostic facilities to benefit securely from multi-modal data. To this aim, we utilize the emerging concept of clustered federated learning (CFL) for the automatic diagnosis of COVID-19. Such an automated system can help reduce the burden on healthcare systems across the world, which have been under great stress since the COVID-19 pandemic emerged in late 2019. We evaluate the performance of the proposed framework under different experimental setups on two benchmark datasets. Promising results are obtained on both datasets, comparable to the central baseline in which the specialized models (i.e., each on a specific type of COVID-19 imagery) are trained with central data, and improvements of 16\% and 11\% in overall F1-Scores are achieved over the multi-modal model trained in the conventional federated learning setup on the X-ray and Ultrasound datasets, respectively. We also discuss in detail the associated challenges, technologies, tools, and techniques available for deploying ML at the edge in such privacy- and delay-sensitive applications.
The goal of this paper is to propose a framework for representing and reasoning about the rules governing a combinatorial exchange. Such a framework is of primary interest for building digital marketplaces based on auctions, a widely used mechanism for automated transactions. The combinatorial exchange is the most general auction format, mixing the double and combinatorial variants: agents bid to trade bundles of goods. Hence the framework should fulfill two requirements: (i) it should enable bidders to express their bids on combinations of goods, and (ii) it should allow describing the rules governing a market, namely the legal bids, the allocation and the payment rules. To do so, we define a logical language in the spirit of the Game Description Language: the Combinatorial Exchange Description Language is the first language for describing combinatorial exchanges in a logical framework. The contribution is two-fold: first, we illustrate the generality of the language by representing different kinds of protocols, and second, we show how to reason about auction properties in this machine-processable language.
Active learning remains significant in industry since it is data-efficient. Not only is it cost-effective on a constrained budget, but continuous refinement of the model also allows for early detection and resolution of failure scenarios during the model development stage. Identifying and fixing failures of the model is crucial, as industrial applications demand that the underlying model performs accurately in all foreseeable use cases. One popular state-of-the-art technique that specializes in continuously refining the model via failure identification is Learning Loss. Although simple and elegant, this approach is empirically motivated. Our paper develops a foundation for Learning Loss which enables us to propose a novel modification we call LearningLoss++. We show that gradients are crucial in interpreting how Learning Loss works, and present a rigorous analysis and comparison of the gradients of Learning Loss and LearningLoss++. We also propose a convolutional architecture that combines features at different scales to predict the loss. We validate LearningLoss++ for regression on the task of human pose estimation (using the MPII and LSP datasets), as done in Learning Loss. We show that LearningLoss++ outperforms Learning Loss in identifying scenarios where the model is likely to perform poorly, which on model refinement translates into reliable performance in the open world.
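For readers unfamiliar with Learning Loss, a minimal sketch of its pairwise ranking objective for the loss-prediction module (our own paraphrase of the original formulation; LearningLoss++ modifies this objective and the prediction architecture):

```python
import torch

def learning_loss(pred_loss, target_loss, margin=1.0):
    # Pairwise ranking loss: the loss-prediction module is penalised whenever
    # the predicted losses of a pair are ordered differently from the true
    # (detached) task losses. Assumes an even batch size.
    p_i, p_j = pred_loss[0::2], pred_loss[1::2]
    t_i, t_j = target_loss.detach()[0::2], target_loss.detach()[1::2]
    sign = torch.sign(t_i - t_j)
    return torch.clamp(margin - sign * (p_i - p_j), min=0.0).mean()

# Toy usage with random predicted and true per-sample losses.
print(learning_loss(torch.rand(8), torch.rand(8)))
```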
We study a voltage-controllable superconducting state in a multi-terminal bridge composed of a dirty superconductor/pure normal metal (SN) bilayer and a pure normal metal. In the proposed system a small control current $I_{ctrl}$ flows via the normal bridge, creates a voltage drop $V$ and modifies the distribution function of electrons in the connected SN bilayer. In the case of a long normal bridge, the voltage-induced nonequilibrium effects can be interpreted in terms of an increased local electron temperature. In this limit we experimentally find a large sensitivity of the critical current $I_c$ of a Cu/MoN/Pt-Cu bridge to $I_{ctrl}$ and a relatively large current gain, which originate from the steep dependence of $I_c$ on temperature and the large $I_c$ (comparable with the theoretical depairing current of the superconducting bridge). In the short normal bridge, the deviation from equilibrium cannot be described by a simple increase of the local temperature, but we also theoretically find a large sensitivity of $I_c$ to the control current/voltage. In this limit we predict the existence, at finite $V$, of a so-called in-plane Fulde-Ferrell state with spontaneous currents in the SN bilayer. We argue that its appearance is connected with a voltage-induced paramagnetic response in the N layer.
A unified explicit form for difference formulas to approximate fractional and classical derivatives is presented. The formula gives finite difference approximations for any classical derivative with a desired order of accuracy at any nodal point in the computational domain. It also gives Gr\"unwald-type approximations for fractional derivatives with arbitrary order of approximation at any point. Thus, this explicit form unifies approximations of both types of derivatives. Moreover, for classical derivatives, it provides various finite difference formulas such as forward, backward, central, staggered, compact, and non-compact formulas. Efficient computation of the coefficients of the difference formulas is also presented, leading to automation of the solution process of differential equations with a given higher-order accuracy. Some basic applications are presented to demonstrate the usefulness of this unified formulation.
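A small, self-contained sketch of the two coefficient families the formula unifies (our own implementation via the standard Vandermonde system and the Grünwald recurrence, not the paper's unified formula itself):

```python
import math
import numpy as np

def fd_coefficients(order, offsets):
    # Classical finite-difference weights on the stencil x0 + offsets*h,
    # obtained from the Taylor/Vandermonde system; divide by h**order to use.
    offsets = np.asarray(offsets, dtype=float)
    A = np.vander(offsets, increasing=True).T   # row p holds offsets**p
    b = np.zeros(len(offsets))
    b[order] = math.factorial(order)
    return np.linalg.solve(A, b)

def grunwald_weights(alpha, n):
    # Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), via the recurrence
    # w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1)/k), valid for fractional alpha.
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

print(fd_coefficients(2, [-1, 0, 1]))   # central second derivative: [1, -2, 1]
print(grunwald_weights(0.5, 4))         # weights for a half-derivative
```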
Recent strategies achieved ensembling "for free" by fitting concurrently diverse subnetworks inside a single base network. The main idea during training is that each subnetwork learns to classify only one of the multiple inputs simultaneously provided. However, the question of how to best mix these multiple inputs has not been studied so far. In this paper, we introduce MixMo, a new generalized framework for learning multi-input multi-output deep subnetworks. Our key motivation is to replace the suboptimal summing operation hidden in previous approaches by a more appropriate mixing mechanism. For that purpose, we draw inspiration from successful mixed sample data augmentations. We show that binary mixing in features - particularly with rectangular patches from CutMix - enhances results by making subnetworks stronger and more diverse. We improve the state of the art for image classification on the CIFAR-100 and Tiny ImageNet datasets. Our easy-to-implement models notably outperform data-augmented deep ensembles, without the inference and memory overheads. As we operate in feature space and simply better leverage the expressiveness of large networks, we open a new line of research complementary to previous works.
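A hedged sketch of the binary feature-mixing idea (a CutMix-style rectangular mask applied to intermediate feature maps; the actual MixMo block additionally rescales the mixed features and handles the multiple outputs):

```python
import torch

def binary_feature_mix(f1, f2, lam):
    # Mix two feature maps of shape (N, C, H, W) with a rectangular binary mask
    # covering a fraction `lam` of the spatial area, instead of the plain
    # convex sum lam*f1 + (1-lam)*f2 used by summing-based approaches.
    n, c, h, w = f1.shape
    rh, rw = int(h * lam ** 0.5), int(w * lam ** 0.5)
    top = torch.randint(0, h - rh + 1, (1,)).item()
    left = torch.randint(0, w - rw + 1, (1,)).item()
    mask = torch.zeros(1, 1, h, w, device=f1.device)
    mask[..., top:top + rh, left:left + rw] = 1.0
    return mask * f1 + (1.0 - mask) * f2

# Toy usage on random feature maps.
out = binary_feature_mix(torch.randn(2, 8, 16, 16), torch.randn(2, 8, 16, 16), lam=0.5)
print(out.shape)
```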
Effects of nonstandard corrections to Newtonian gravity at large scales can be investigated using cosmological structure formation. In particular, it is possible to show whether and how a logarithmic correction (such as that induced by nonlocal gravity) modifies the clustering properties of galaxies and of clusters of galaxies. The thermodynamics of such systems can be used to obtain important information about the effects of this modification on clustering. We compare its effects with observational data and demonstrate that the observations seem to point to a characteristic scale at which such a logarithmic correction might be at play at galactic scales. However, at larger scales the statistical inferences are much weaker, so that fully reliable statistical evidence for this kind of correction cannot be stated without further investigation and the use of more varied and precise cosmological and astrophysical probes.
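One illustrative parametrisation (an assumption on our part, not quoted from the abstract) is a modified point-mass potential of the form $\Phi(r)=-\frac{GM}{r}\left[1+\alpha\ln\left(\frac{r}{r_0}\right)\right]$, whose logarithmic term becomes relevant beyond a characteristic scale $r_0$ and would thereby alter the clustering thermodynamics at galactic scales.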
Recent research has shown that non-additive image steganographic frameworks effectively improve security performance by adjusting the distortion distribution. However, as far as we know, all existing non-additive proposals are based on handcrafted policies and can only be applied to a specific image domain, which heavily prevents non-additive steganography from realizing its full potential. In this paper, we propose an automatic non-additive steganographic distortion learning framework called MCTSteg to remove the above restrictions. Guided by the reinforcement learning paradigm, we combine Monte Carlo Tree Search (MCTS) and a steganalyzer-based environmental model to build MCTSteg. MCTS makes sequential decisions to adjust the distortion distribution without human intervention. Our proposed environmental model is used to obtain feedback from each decision. Due to its self-learning characteristic and domain-independent reward function, MCTSteg is the first reported universal non-additive steganographic framework that can work in both the spatial and JPEG domains. Extensive experimental results show that MCTSteg can effectively withstand detection by both handcrafted feature-based and deep-learning-based steganalyzers. In both the spatial and JPEG domains, the security performance of MCTSteg steadily outperforms the state of the art by a clear margin under different scenarios.
State-of-the-art approaches to calculating the electron-phonon and phonon-electron self-energies are based on a mean-field approximation for the interacting electronic system. This approach introduces an overscreening error which results in an underestimation of the electron-phonon coupling strength. We introduce a theoretical and numerical approach for the calculation of the phonon-electron self-energy without the overscreening error. Starting from the out-of-equilibrium Kadanoff-Baym equations for the phonon propagator, we discuss and compare the overscreened (i.e., symmetrically screened) and overscreening-free (i.e., asymmetrically screened) cases. We point out that the difficulty in treating the latter stems from the static approximation to the dielectric function and from the need to obtain a self-energy that preserves the elementary scattering processes. We solve both problems in the equilibrium case by considering a manifestly symmetric form of the correct self-energy which can be easily calculated numerically and yields an overscreening-free coupling strength. Finally, we describe the numerical implementation of this treatment in the first-principles Yambo code for the calculation of phonon linewidths.
Ab initio molecular dynamics (AIMD) with hybrid density functionals and a plane wave basis is computationally expensive due to the high computational cost of the exact exchange energy evaluation. Recently, we proposed a strategy combining the adaptively compressed exchange (ACE) operator formulation and a multiple time step (MTS) integration scheme to reduce the computational cost significantly [J. Chem. Phys. 151, 151102 (2019)]. However, it was found that the construction of the ACE operator, which has to be done at least once in every MD time step, is computationally expensive. In the present work, systematic improvements are introduced to achieve a further speed-up by employing localized orbitals for the construction of the ACE operator. In this way, we achieve a computational speed-up of an order of magnitude for a periodic system containing 32 water molecules. Benchmark calculations were carried out to show the accuracy and efficiency of the method in predicting the structural and dynamical properties of bulk water. To demonstrate the applicability, computationally intensive free energy computations at the level of hybrid density functional theory were performed to investigate (a) the methyl formate hydrolysis reaction in neutral aqueous medium and (b) the proton transfer reaction within the active site residues of the class-C $\beta$-lactamase enzyme.
We theoretically investigate the apparent contact angle of droplets on liquid infused surfaces as a function of the relative size of the wetting ridge and the deposited droplet. We provide an intuitive geometrical interpretation whereby the variation in the apparent contact angle is due to the rotation of the Neumann triangle. We also derive linear and quadratic corrections to the apparent contact angle as power series expansion in terms of pressure differences between the lubricant, droplet and gas phases. These expressions are much simpler and more compact compared to those previously derived by Semprebon et al. [Soft Matter, 2017, 13, 101-110].
Self-attention has the promise of improving computer vision systems due to parameter-independent scaling of receptive fields and content-dependent interactions, in contrast to parameter-dependent scaling and content-independent interactions of convolutions. Self-attention models have recently been shown to have encouraging improvements on accuracy-parameter trade-offs compared to baseline convolutional models such as ResNet-50. In this work, we aim to develop self-attention models that can outperform not just the canonical baseline models, but even the high-performing convolutional models. We propose two extensions to self-attention that, in conjunction with a more efficient implementation of self-attention, improve the speed, memory usage, and accuracy of these models. We leverage these improvements to develop a new self-attention model family, HaloNets, which reach state-of-the-art accuracies on the parameter-limited setting of the ImageNet classification benchmark. In preliminary transfer learning experiments, we find that HaloNet models outperform much larger models and have better inference performance. On harder tasks such as object detection and instance segmentation, our simple local self-attention and convolutional hybrids show improvements over very strong baselines. These results mark another step in demonstrating the efficacy of self-attention models on settings traditionally dominated by convolutional models.
We introduce an additive basis of the integral cohomology ring of the Peterson variety which reflects the geometry of certain subvarieties of the Peterson variety. We explain the positivity of the structure constants from a geometric viewpoint, and provide a manifestly positive combinatorial formula for them. We also prove that our basis coincides with the additive basis introduced by Harada-Tymoczko.
We study non-linear optical effects in electron systems with and without inversion symmetry in a Fabry-Perot cavity. General photon up- and down-conversion processes are modeled by the coupling of a noninteracting lattice model to two modes of the quantized light field. Effective descriptions retaining the most relevant states are devised via downfolding and a generalized Householder transformation. These models are used to relate the transition amplitudes for even order photon-conversion processes to the shift vector, a topological quantity describing the difference in polarization between the valence and conduction band in non-centrosymmetric systems. We also demonstrate that the truncated models, despite their small Hilbert space, capture correlation effects induced by the photons in the electronic subsystem.
In recent years, blockchain has grown in popularity due to its singular attributes, enabling the development of new, innovative decentralized applications. But when companies consider leveraging blockchain for their applications, the plethora of possible choices and the difficulty of integrating blockchain into architectures can hinder its adoption. Our research project aims to ease the adoption of blockchain by companies, notably through the construction of an automated decision process in which requirements are first-class citizens, a knowledge base containing architectural patterns and blockchains refined over time, and an architecture generator able to turn outputs into architectural stubs. This paper also presents our current progress on this decision process, introducing a preliminary version that is able to choose the most suitable blockchain among multiple candidates, together with our process-driven benchmarking tool.
A leader-follower framework is proposed for multi-robot navigation of large-scale teams in which leader agents corral follower agents. The group of leaders is modeled as a 2D deformable object in which discrete masses (i.e., leader robots) are interconnected by springs and dampers. A time-varying domain is defined by the positions of the leaders, while external forces induce deformations of the domain from its nominal configuration. The team of followers performs coverage over the time-varying domain by employing a perspective transformation that maps between the nominal and deformed configurations. A decentralized control strategy is proposed in which a leader uses only local sensing information and information about its neighbors (connected by virtual springs and dampers), and a follower needs only partial information about the leaders and information about its Delaunay neighbors.
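A minimal sketch of the virtual spring-damper coupling between neighbouring leaders (our own 2D discretisation with assumed gains and rest length; the full framework additionally includes the coverage controller for the followers):

```python
import numpy as np

def leader_force(x_i, v_i, neighbors, k=1.0, c=0.5, rest_len=1.0):
    # Net force on leader i from its connected neighbors.
    # neighbors: list of (position, velocity) pairs of adjacent leaders (2D arrays).
    f = np.zeros(2)
    for x_j, v_j in neighbors:
        d = x_j - x_i
        dist = np.linalg.norm(d)
        unit = d / dist
        f += k * (dist - rest_len) * unit          # spring toward rest length
        f += c * np.dot(v_j - v_i, unit) * unit    # damping along the link
    return f

# Toy usage: one leader with two neighbors.
print(leader_force(np.zeros(2), np.zeros(2),
                   [(np.array([2.0, 0.0]), np.zeros(2)),
                    (np.array([0.0, 1.5]), np.zeros(2))]))
```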
In the era of digitization, different actors in agriculture produce numerous data. Such data already contains latent historical knowledge in the domain. This knowledge enables us to precisely study natural hazards within global or local aspects, and then improve risk prevention tasks and augment the yield, which helps to tackle the challenge of a growing population and changing alimentary habits. In particular, French Plants Health Bulletins (BSV, for its name in French Bulletin de Sant{\'e} du V{\'e}g{\'e}tal) give information about the development stages of phytosanitary risks in agricultural production. However, they are written in natural language, and thus machines and humans cannot exploit them as efficiently as they could. Natural language processing (NLP) technologies aim to automatically process and analyze large amounts of natural language data. Since the 2010s, with the increases in computational power and parallelization, representation learning and deep learning methods have become widespread in NLP. Recent advancements such as Bidirectional Encoder Representations from Transformers (BERT) inspire us to rethink knowledge representation and natural language understanding in the plant health management domain. The goal of this work is to propose a BERT-based approach to automatically classify the BSV to make their data easily indexable. We sampled 200 BSV to fine-tune the pretrained BERT language models and classify them as pest and/or disease, and we show preliminary results.
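To make the classification setup concrete, a minimal fine-tuning sketch with the Hugging Face transformers library is given below; the checkpoint name, the two non-exclusive labels (pest, disease), and the example bulletin excerpts are placeholders rather than the authors' actual configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical setup: a French BERT-style checkpoint and two non-exclusive labels (pest, disease).
model_name = "camembert-base"                                 # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2, problem_type="multi_label_classification")

texts = ["Présence de pucerons observée sur colza...",        # hypothetical BSV excerpts
         "Symptômes de mildiou signalés sur vigne..."]
labels = torch.tensor([[1.0, 0.0], [0.0, 1.0]])               # [pest, disease]

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
out = model(**batch, labels=labels)                           # BCE-with-logits loss computed internally
out.loss.backward()                                           # one fine-tuning step (optimizer omitted)
print(out.loss.item(), torch.sigmoid(out.logits))
```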
The chiral Hamiltonian for twisted graphene bilayers is written as a $2\times2$ matrix operator by a renormalization of the Hamiltonian that takes into account the particle-hole symmetry. This results in an effective Hamiltonian with an average field plus an effective non-Abelian gauge potential. The action of the proposed renormalization maps the zero-mode region into the ground state. Modes near zero energy have an antibonding nature in a triangular lattice. This leads to a phase-frustration effect associated with massive degeneration, and makes flat-band modes similar to confined modes observed in other bipartite lattices. Surprisingly, the proposed Hamiltonian renormalization suggests that flat bands at magic angles are akin to floppy-mode bands in flexible crystals or glasses, making an unexpected connection between topological rigidity theory and the physics of magic-angle twisted two-dimensional heterostructures.
While the original goal of developing robots was to replace humans in dangerous and tedious tasks, the ultimate target is to fully mimic human cognitive and motor behaviour. Hence, building detailed computational models of the human brain is one of the reasonable ways to attain this. The cerebellum is one of the key players in our neural system in guaranteeing dexterous manipulation and coordinated movements, as concluded from lesions in that region. Studies suggest that it acts as a forward model providing anticipatory corrections for the sensory signals based on observed discrepancies from the reference values. While most studies consider providing the teaching signal as error in joint space, few studies consider the error in task space, and even fewer consider the spiking nature of the cerebellum at the cellular level. In this study, a detailed cellular-level forward cerebellar model is developed, including modeling of Golgi and basket cells, which are usually neglected in previous studies. To preserve the biological features of the cerebellum in the developed model, a hyperparameter optimization method tunes the network accordingly. The efficiency and biological plausibility of the proposed cerebellar-based controller are then demonstrated under different robotic manipulation tasks reproducing motor behaviour observed in human reaching experiments.
We compute exactly the statistics of the number of records in a discrete-time random walk model on a line where the walker stays at a given position with a nonzero probability $0\leq p \leq 1$, while with the complementary probability $1-p$, it jumps to a new position with a jump length drawn from a continuous and symmetric distribution $f_0(\eta)$. We show that, for arbitrary $p$, the statistics of records up to step $N$ is completely universal, i.e., independent of $f_0(\eta)$ for any $N$. We also compute the connected two-time correlation function $C_p(m_1, m_2)$ of the record-breaking events at times $m_1$ and $m_2$ and show it is also universal for all $p$. Moreover, we demonstrate that $C_p(m_1, m_2)< C_0(m_1, m_2)$ for all $p>0$, indicating that a nonzero $p$ induces additional anti-correlations between record events. We further show that these anti-correlations lead to a drastic reduction in the fluctuations of the record numbers with increasing $p$. This is manifest in the Fano factor, i.e. the ratio of the variance and the mean of the record number, which we compute explicitly. We also show that an interesting scaling limit emerges when $p \to 1$, $N \to \infty$ with the product $t = (1-p)\, N$ fixed. We compute exactly the associated universal scaling functions for the mean, variance and the Fano factor of the number of records in this scaling limit.
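A quick Monte Carlo sanity check of the model described above (a sketch, not the exact analytical computation): the jump distribution f_0 is taken to be a standard Gaussian, which should be immaterial given the claimed universality, and the convention that the initial position counts as the first record is an assumption of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def record_count(N, p):
    """Number of records of a lazy random walk: stay with probability p, else take a Gaussian step."""
    steps = np.where(rng.random(N) < p, 0.0, rng.standard_normal(N))
    x = np.concatenate(([0.0], np.cumsum(steps)))   # positions x_0, ..., x_N
    records, current_max = 1, 0.0                   # x_0 counts as the first record
    for pos in x[1:]:
        if pos > current_max:                       # strict record-breaking event
            records += 1
            current_max = pos
    return records

for p in (0.0, 0.5, 0.9):
    counts = np.array([record_count(200, p) for _ in range(5000)])
    mean, var = counts.mean(), counts.var()
    print(f"p={p:.1f}  mean={mean:.2f}  var={var:.2f}  Fano={var / mean:.3f}")
```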
Large organizations such as social media companies continually release data, for example user images. At the same time, these organizations leverage their massive corpora of released data to train proprietary models that give them an edge over their competitors. These two behaviors can be in conflict as an organization wants to prevent competitors from using their own data to replicate the performance of their proprietary models. We solve this problem by developing a data poisoning method by which publicly released data can be minimally modified to prevent others from training models on it. Moreover, our method can be used in an online fashion so that companies can protect their data in real time as they release it. We demonstrate the success of our approach on ImageNet classification and on facial recognition.
The Born cross sections are measured for the first time for the processes $e^+e^-\to D_s^{*+}D_{s0}^*(2317)^- +c.c.$ and $e^+e^-\to D_s^{*+}D_{s1}(2460)^- +c.c.$ at the center-of-mass energies $\sqrt{s}=$ 4.600~GeV, 4.612~GeV, 4.626~GeV, 4.640~GeV, 4.660~GeV, 4.680~GeV, and 4.700~GeV, and for $e^+e^-\to D_s^{*+}D_{s1}(2536)^- +c.c.$ at $\sqrt{s}=$ 4.660~GeV, 4.680~GeV, and 4.700~GeV, using data samples collected with the BESIII detector at the BEPCII collider. No structures are observed in the cross-section distributions for any of the processes.
The autoencoder model uses an encoder to map data samples to a lower-dimensional latent space and then a decoder to map the latent space representations back to the data space. Implicitly, it relies on the encoder to approximate the inverse of the decoder network, so that samples can be mapped to and back from the latent space faithfully. This approximation may lead to sub-optimal latent space representations. In this work, we investigate a decoder-only method, gradient flow encoding (GFE), which uses gradient flow to encode data samples in the latent space. The gradient flow is defined based on a given decoder and aims to find the optimal latent space representation for any given sample through optimisation, eliminating the need for an approximate inversion through an encoder. Implementing gradient flow through ordinary differential equations (ODE), we leverage the adjoint method to train a given decoder. We further show empirically that the costly integrals in the adjoint method may not be entirely necessary. Additionally, we propose a $2^{nd}$ order ODE variant to the method, which approximates Nesterov's accelerated gradient descent, with faster convergence per iteration. Commonly used ODE solvers can be quite sensitive to the integration step-size depending on the stiffness of the ODE. To overcome this sensitivity for gradient flow encoding, we use an adaptive solver that prioritises minimising loss at each integration step. We assess the proposed method in comparison to the autoencoding model. In our experiments, GFE showed a much higher data-efficiency than the autoencoding model, which can be crucial for data-scarce applications.
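A minimal, discrete-time sketch of the decoder-only encoding idea: given a decoder (here an untrained placeholder MLP), the latent code of each sample is obtained by following the gradient flow of the reconstruction loss. The forward-Euler loop below stands in for the ODE/adjoint machinery of the paper, and the step size and step count are arbitrary choices.

```python
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 784))   # placeholder decoder

def gradient_flow_encode(x, steps=500, dt=0.5):
    """Integrate dz/dt = -grad_z MSE(decoder(z), x) with forward Euler steps."""
    z = torch.zeros(x.shape[0], 2, requires_grad=True)
    for _ in range(steps):
        loss = ((decoder(z) - x) ** 2).mean()
        (grad,) = torch.autograd.grad(loss, z)
        z = (z - dt * grad).detach().requires_grad_(True)   # one Euler step of the gradient flow
    return z.detach()

x = torch.randn(8, 784)              # stand-in data batch
z = gradient_flow_encode(x)
print(z.shape)                       # torch.Size([8, 2])
```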
New computing technologies inspired by the brain promise fundamentally different ways to process information with extreme energy efficiency and the ability to handle the avalanche of unstructured and noisy data that we are generating at an ever-increasing rate. To realise this promise requires a brave and coordinated plan to bring together disparate research communities and to provide them with the funding, focus and support needed. We have done this in the past with digital technologies; we are in the process of doing it with quantum technologies; can we now do it for brain-inspired computing?
In this paper we characterize the value set $\Delta$ of the $R$-modules of the form $R+zR$ for the local ring $R$ associated to a germ $\xi$ of an irreducible plane curve singularity with one Puiseux pair. In the particular case of the module of K\"ahler differentials attached to $\xi$, we recover some results of Delorme. From our characterization of $\Delta$ we introduce a proper subset of semimodules over the value semigroup of the ring $R$. Moreover, we provide a geometric algorithm to construct all possible semimodules in this subset for a given value semigroup.
Chemotaxis, the directional locomotion of cells towards a source of a chemical gradient, is an integral part of many biological processes - for example, bacterial motion, the development of single-cell or multicellular organisms, immune response, etc. Chemotaxis directs bacteria's movement to find food (e.g., glucose) by swimming toward the highest concentration of food molecules. In multicellular organisms, chemotaxis is critical to early development (e.g., movement of sperm towards the egg during fertilization). Chemotaxis also helps mobilize phagocytic and immune cells at sites of infection and tissue injury, and thus facilitates immune reactions. In this paper, we study a PDE system that describes such biological processes in one dimension, which may correspond to a thin channel, a setting relevant in many applications: for example, spermatozoa progression to the ovum inside a Fallopian tube or immune response in a blood vessel.
This study evaluates a suspension design of a passenger car to obtain maximum rider comfort when the vehicle is subjected to different road profiles or road surface conditions. The challenge lies in finding a balance between rider comfort and vehicle handling to optimise the design parameters. The study uses a simple passive suspension system and an active suspension model integrated with a pneumatic actuator controlled by a proportional integral derivative (PID) controller in both quarter-car and full-car models having different degrees of freedom (DOF) and increasing degrees of complexity. The quarter car is considered as a 2-DOF model, while the full car is a 7-DOF model. The design process is set to optimise the spring stiffnesses, damping coefficients and actuator PID controller gains. For optimisation, the research features a genetic algorithm optimisation technique to obtain a balanced response of the vehicle as evaluated from the displacement, velocity and acceleration of the sprung and unsprung masses along with different human comfort and vehicle performance criteria. The results revealed that the active suspension system with optimised spring stiffnesses, damping coefficients and PID gains demonstrated superior riding comfort and road holding compared to a passive suspension system.
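For concreteness, a compact sketch of the optimisation loop for the passive 2-DOF quarter-car case: a bump input is simulated with scipy and an evolutionary optimiser (differential evolution, used here as a stand-in for the genetic algorithm) searches the spring stiffness and damping coefficient that minimise the RMS sprung-mass acceleration. All masses, bounds, and the road profile are illustrative values, and the PID-actuated full-car model is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

ms, mu, kt = 300.0, 40.0, 180e3                      # sprung mass, unsprung mass, tyre stiffness (illustrative)
road = lambda t: 0.05 if 1.0 <= t <= 1.1 else 0.0    # 5 cm bump as the road input

def quarter_car(t, y, ks, cs):
    xs, vs, xu, vu = y                               # sprung/unsprung displacements and velocities
    fs = ks * (xu - xs) + cs * (vu - vs)             # suspension force
    ft = kt * (road(t) - xu)                         # tyre force
    return [vs, fs / ms, vu, (ft - fs) / mu]

def comfort_cost(params):
    ks, cs = params
    sol = solve_ivp(quarter_car, (0, 5), [0, 0, 0, 0], args=(ks, cs),
                    t_eval=np.linspace(0, 5, 1000), rtol=1e-6)
    xs, vs, xu, vu = sol.y
    acc = (ks * (xu - xs) + cs * (vu - vs)) / ms     # sprung-mass acceleration
    return np.sqrt(np.mean(acc ** 2))                # RMS acceleration as the comfort metric

result = differential_evolution(comfort_cost, bounds=[(10e3, 60e3), (500, 4000)],
                                seed=1, maxiter=10, polish=False)
print(result.x, result.fun)                          # optimised (ks, cs) and achieved RMS acceleration
```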
Prompt isolated leptons are essential in many analyses in high-energy particle physics but are subject to fake-lepton background, i.e. objects that mimic the lepton signature. The fake-lepton background is difficult to estimate from simulation and is often directly determined from data. A popular method is the matrix method, which however suffers from several limitations. This paper recapitulates an alternative approach based on a likelihood with Poisson constraints and reformulates the problem from a different starting point in the framework of Bayesian statistics. The equality of both approaches is shown and several cases are studied in which the matrix method is limited. In addition, the fake lepton background is recalculated and compared to the estimate with the matrix method in an example top-quark measurement.
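For orientation, here is the textbook single-lepton version of the matrix method mentioned above, with illustrative numbers; the likelihood-with-Poisson-constraints and Bayesian reformulations discussed in the paper are not reproduced here.

```python
def fake_estimate_matrix_method(n_loose, n_tight, eff_real, eff_fake):
    """Single-lepton matrix method:
         n_loose = N_real + N_fake
         n_tight = eff_real * N_real + eff_fake * N_fake
       The fake contribution to the tight sample is eff_fake * N_fake.
       Negative yields can occur for downward fluctuations, one of the known limitations."""
    n_fake = (eff_real * n_loose - n_tight) / (eff_real - eff_fake)
    return eff_fake * n_fake

# illustrative numbers only
print(fake_estimate_matrix_method(n_loose=1000, n_tight=700, eff_real=0.9, eff_fake=0.2))
# 0.2 * (900 - 700) / 0.7, i.e. about 57 fake events expected in the tight selection
```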
Training vision-based urban autonomous driving models is a challenging problem, which has been highly researched in recent times. Training such models is a data-intensive task requiring the storage and processing of vast volumes of (possibly redundant) driving video data. In this paper, we study the problem of developing data-efficient autonomous driving systems. In this context, we study the problem of multi-criteria online video frame subset selection. We study convex optimization-based solutions and show that they are unable to provide solutions with high weightage to the loss of selected video frames. We design a novel convex optimization-based multi-criteria online subset selection algorithm that uses a thresholded concave function of the selection variables. We also propose and study a submodular optimization-based algorithm. Extensive experiments using the driving simulator CARLA show that we are able to drop 80% of the frames while still completing 100% of the episodes w.r.t. the model trained on 100% of the data, in the most difficult task of taking turns. This results in a training time of less than 30% of that needed for training on the whole dataset. We also perform detailed experiments on the prediction performance of various affordances used by the Conditional Affordance Learning (CAL) model and show that our subset selection improves performance on the crucial affordance "Relative Angle" during turns.
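As a sketch of the submodular flavour of frame selection mentioned above, the snippet below greedily maximises a facility-location objective over frame features; the random features and the budget are placeholders, and the thresholded-concave convex formulation of the paper is not reproduced.

```python
import numpy as np

def greedy_facility_location(features, budget):
    """Greedily pick `budget` frames maximising sum_i max_{j in S} sim(i, j)."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T                                 # cosine similarities between frames
    selected, best_cover = [], np.zeros(len(feats))
    for _ in range(budget):
        # marginal gain of adding each candidate frame to the current selection
        gains = np.maximum(sim, best_cover).sum(axis=1) - best_cover.sum()
        gains[selected] = -np.inf
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[j])
    return selected

frames = np.random.default_rng(0).normal(size=(500, 128))  # stand-in frame features
print(greedy_facility_location(frames, budget=100)[:10])
```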
Thinking aloud is an effective meta-cognitive strategy human reasoners apply to solve difficult problems. We suggest improving the reasoning ability of pre-trained neural language models in a similar way, namely by expanding a task's context with problem elaborations that are dynamically generated by the language model itself. Our main result is that dynamic problem elaboration significantly improves the zero-shot performance of GPT-2 in a deductive reasoning and natural language inference task: While the model uses a syntactic heuristic for predicting an answer, it is capable (to some degree) of generating reasoned additional context which facilitates the successful application of its heuristic. We explore different ways of generating elaborations, including few-shot learning, and find that their relative performance varies with the specific problem characteristics (such as problem difficulty). Moreover, the effectiveness of an elaboration can be explained in terms of the degree to which the elaboration semantically coheres with the corresponding problem. In particular, elaborations that are most faithful to the original problem description may boost accuracy by up to 24%.
We establish a novel approach to computing $G$-equivariant cohomology for a finite group $G$, and demonstrate it in the case that $G = C_{p^n}$. For any commutative ring spectrum $R$, we prove a symmetric monoidal reconstruction theorem for genuine $G$-$R$-modules, which records them in terms of their geometric fixed points as well as gluing maps involving their Tate cohomologies. This reconstruction theorem follows from a symmetric monoidal stratification (in the sense of \cite{AMR-strat}); here we identify the gluing functors of this stratification in terms of Tate cohomology. Passing from genuine $G$-spectra to genuine $G$-$\mathbb{Z}$-modules (a.k.a. derived Mackey functors) provides a convenient intermediate category for calculating equivariant cohomology. Indeed, as $\mathbb{Z}$-linear Tate cohomology is far simpler than $\mathbb{S}$-linear Tate cohomology, the above reconstruction theorem gives a particularly simple algebraic description of genuine $G$-$\mathbb{Z}$-modules. We apply this in the case that $G = C_{p^n}$ for an odd prime $p$, computing the Picard group of genuine $G$-$\mathbb{Z}$-modules (and therefore that of genuine $G$-spectra) as well as the $RO(G)$-graded and Picard-graded $G$-equivariant cohomology of a point.
Spin defects in silicon carbide (SiC) have attracted increasing interest due to their excellent optical and spin properties, which are useful in quantum information processing. In this paper, we systematically investigate the temperature dependence of the spin properties of divacancy defects in implanted 4\emph{H}-SiC. The zero-field splitting parameter $D$, the inhomogeneous dephasing time $T_2^{*}$, the coherence time $T_2$, and the depolarization time $T_1$ are extensively explored in a temperature range from 5 to 300 K. Two samples implanted with different nitrogen molecule ion fluences ($\rm {N_2}^{+}$, $1\times 10^{14}/\rm cm^{2}$ and $1\times 10^{13}/\rm cm^{2}$) are investigated, whose spin properties are shown to have similar temperature-dependent behaviors. Still, the sample implanted with a lower ion fluence has longer $T_{2}$ and $T_{1}$. We provide possible theoretical explanations for the observed temperature-dependent dynamics. Our work promotes the understanding of the temperature dependence of spin properties in solid-state systems, which can be helpful for constructing wide temperature-range thermometers based on the mature semiconductor material.
In addition to regular Schwabe cycles (~11 years), solar activity also shows longer periods of enhanced or reduced activity. Of these, reconstructions of the Dalton Minimum provide controversial sunspot group numbers and limited sunspot positions, partially due to limited source record accessibility. We analysed Stephan Prantner's sunspot observations from 1804--1844, the values of which had only been known through estimates despite their notable chronological coverage during the Dalton Minimum. We identified his original manuscript in Stiftsarchiv Wilten, near Innsbruck, Austria. We reviewed his biography (1782--1873) and located his observational sites at Wilten and Waidring, which housed the principal telescopes for his early and late observations: a 3.5-inch astronomical telescope and a Reichenbach 4-foot achromatic erecting telescope, respectively. We identified 215 days of datable sunspot observations, which is almost twice as much data as the estimates in the existing database (115 days). Prantner counted up to 7--9 sunspot groups per day and measured sunspot positions, which show their distributions in both solar hemispheres. These results strikingly emphasise the difference between the Dalton Minimum and the Maunder Minimum as well as the similarity between the Dalton Minimum and the modern solar cycles.
We present algebraic and geometric classifications of the $4$-dimensional complex nilpotent right alternative algebras. Specifically, we find that, up to isomorphism, there are only $9$ non-isomorphic nontrivial nilpotent right alternative algebras. The corresponding geometric variety has dimension $13$ and it is determined by the Zariski closure of $4$ rigid algebras and one one-parametric family of algebras.
This paper investigates algebraic objects equipped with an operator, such as operated monoids and operated algebras. Various free object functors in these operated contexts are explicitly constructed. For operated algebras whose operator satisfies a set $\Phi$ of relations (usually called operated polynomial identities, or OPIs), Guo defined free objects, called free $\Phi$-algebras, via universal algebra. Free $\Phi$-algebras over algebras are studied in detail. A mild sufficient condition is found such that $\Phi$ together with a Gr\"obner-Shirshov basis of an algebra $A$ form a Gr\"obner-Shirshov basis of the free $\Phi$-algebra over the algebra $A$ in the sense of Guo et al. Ample examples for which this condition holds are provided, such as all Rota-Baxter type OPIs, a class of differential type OPIs, averaging OPIs and the Reynolds OPI.
Neural networks trained with SGD were recently shown to rely preferentially on linearly-predictive features and can ignore complex, equally-predictive ones. This simplicity bias can explain their lack of robustness out of distribution (OOD). The more complex the task to learn, the more likely it is that statistical artifacts (i.e. selection biases, spurious correlations) are simpler than the mechanisms to learn. We demonstrate that the simplicity bias can be mitigated and OOD generalization improved. We train a set of similar models to fit the data in different ways using a penalty on the alignment of their input gradients. We show theoretically and empirically that this induces the learning of more complex predictive patterns. OOD generalization fundamentally requires information beyond i.i.d. examples, such as multiple training environments, counterfactual examples, or other side information. Our approach shows that we can defer this requirement to an independent model selection stage. We obtain SOTA results in visual recognition on biased data and generalization across visual domains. The method - the first to evade the simplicity bias - highlights the need for a better understanding and control of inductive biases in deep learning.
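A minimal PyTorch sketch of the core ingredient described above: two models trained jointly on the same data with a penalty on the alignment of their input gradients, encouraging them to fit the data in different ways. The architectures, penalty weight, and the cosine-similarity form of the alignment term are placeholder choices, not the authors' exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

models = [nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2)) for _ in range(2)]
opt = torch.optim.Adam([p for m in models for p in m.parameters()], lr=1e-3)

def step(x, y, lam=1.0):
    x = x.clone().requires_grad_(True)
    losses, input_grads = [], []
    for m in models:
        loss = F.cross_entropy(m(x), y)
        g, = torch.autograd.grad(loss, x, create_graph=True)   # input gradient, kept in the graph
        losses.append(loss)
        input_grads.append(g.flatten(1))
    # penalise aligned input gradients so the two models rely on different input features
    align = F.cosine_similarity(input_grads[0], input_grads[1], dim=1).pow(2).mean()
    total = sum(losses) + lam * align
    opt.zero_grad()
    total.backward()
    opt.step()
    return total.item()

x, y = torch.randn(64, 20), torch.randint(0, 2, (64,))
print(step(x, y))
```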
Multi-Source Domain Adaptation (MSDA) focuses on transferring the knowledge from multiple source domains to the target domain, which is a more practical and challenging problem compared to the conventional single-source domain adaptation. In this problem, it is essential to utilize the labeled source data and the unlabeled target data to approach the conditional distribution of semantic labels on the target domain, which requires joint modeling across different domains as well as an effective domain combination scheme. The graphical structure among different domains is useful for tackling these challenges, in which the interdependency among various instances/categories can be effectively modeled. In this work, we propose two types of graphical models, i.e., Conditional Random Field for MSDA (CRF-MSDA) and Markov Random Field for MSDA (MRF-MSDA), for cross-domain joint modeling and learnable domain combination. In a nutshell, given an observation set composed of a query sample and the semantic prototypes (i.e., representative category embeddings) on various domains, the CRF-MSDA model seeks to learn the joint distribution of labels conditioned on the observations. We attain this goal by constructing a relational graph over all observations and conducting local message passing on it. By comparison, MRF-MSDA aims to model the joint distribution of observations over different Markov networks via an energy-based formulation, and it can naturally perform label prediction by summing the joint likelihoods over several specific networks. Compared to the CRF-MSDA counterpart, the MRF-MSDA model is more expressive and possesses lower computational cost. We evaluate these two models on four standard benchmark data sets of MSDA with distinct domain shift and data complexity, and both models achieve superior performance over existing methods on all benchmarks.
We prove that a simple knot in the lens space $L(p,q)$ fibers if and only if its order in homology does not divide any remainder occurring in the Euclidean algorithm applied to the pair $(p,q)$. One corollary is that if $p=m^2$ is a perfect square, then any simple knot of order $m$ fibers, answering a question of Cebanu. More generally, we compute the leading coefficient of the Alexander polynomial of a simple knot, and we describe how to construct a minimum complexity Seifert surface for one. The methods are direct, combinatorial, and geometric.
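The divisibility criterion is easy to state computationally; the small sketch below lists the remainders of the Euclidean algorithm for (p, q) and tests whether a given homology order divides any of them. Including only the nonzero remainders is our reading of the statement.

```python
def euclidean_remainders(p, q):
    """Remainders produced by the Euclidean algorithm applied to (p, q)."""
    remainders = []
    while q != 0:
        p, q = q, p % q
        remainders.append(q)
    return [r for r in remainders if r != 0]

def simple_knot_fibers(order, p, q):
    """Criterion from the abstract: the knot fibers iff its homology order
       divides none of the Euclidean-algorithm remainders for (p, q)."""
    return all(r % order != 0 for r in euclidean_remainders(p, q))

# Example: p = 9 is a perfect square, so a simple knot of order m = 3 should fiber per the corollary.
print(euclidean_remainders(9, 4), simple_knot_fibers(3, 9, 4))   # [1] True
```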
In Veltman's original view, the Standard Model with a large Higgs particle mass of about 1 TeV was the natural completion of the non-renormalizable Glashow model. This mass was thus a second threshold for weak interactions, as the W mass was for the non-renormalizable 4-fermion V-A theory. Today, after the observation of the narrow scalar resonance at 125 GeV, Veltman's large-mass idea seems to be ruled out. Yet, this is not necessarily true. Depending on the description of SSB in $\Phi^4$ theory, and by combining analytic calculations and lattice simulations, besides the known particle at 125 GeV, a new resonance of the Higgs field may also show up around 700 GeV. The peculiarity, though, is that this heavier state would couple to longitudinal vector bosons with the same typical strength as the low-mass state and thus represent a relatively narrow resonance. In this way, such a hypothetical new resonance would naturally fit with some excess of 4-lepton events observed by ATLAS around 680 GeV. Analogous data from CMS are needed to confirm or disprove this interpretation. Implications of a two-mass structure for radiative corrections are also discussed.
The understanding of an offense is subjective and people may have different opinions about the offensiveness of a comment. Moreover, offenses and hate speech may occur through sarcasm, which hides the real intention of the comment and makes the decision of the annotators more confusing. Therefore, providing a well-structured annotation process is crucial to a better understanding of hate speech and offensive language phenomena, as well as supplying better performance for machine learning classifiers. In this paper, we describe a corpus annotation process proposed by a linguist, a hate speech specialist, and machine learning engineers in order to support the identification of hate speech and offensive language on social media. In addition, we provide the first robust dataset of this kind for the Brazilian Portuguese language. The corpus was collected from Instagram posts of political personalities and manually annotated, being composed of 7,000 annotated documents according to three different layers: a binary classification (offensive versus non-offensive language), the level of offense (highly offensive, moderately offensive, and slightly offensive messages), and the identification regarding the target of the discriminatory content (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology to the dictatorship, antisemitism, and fatphobia). Each comment was annotated by three different annotators and achieved high inter-annotator agreement. The proposed annotation approach is also language- and domain-independent; nevertheless, it is currently customized for Brazilian Portuguese.
We describe the addition of Lyman-alpha resonant line transfer to our dust continuum radiation transfer code SKIRT, verifying our implementation with published results for spherical problems and using some self-designed three-dimensional setups. We specifically test spatial discretization through various grid types, including hierarchical octree grids and unstructured Voronoi tessellations. We then use a radiation transfer post-processing model for one of the spiral galaxies produced by the Auriga cosmological zoom simulations to investigate the effect of spatial discretization on the synthetic observations. We find that the calculated Lyman-alpha line profiles exhibit an extraordinarily strong dependence on the type and resolution of the spatial grid, rendering the results untrustworthy at best. We attribute this effect to the large gradients in the hydrogen density distribution over small distances, which remain significantly under-resolved in the input model. We therefore argue that further research is needed to determine the required spatial resolution of a hydrodynamical simulation snapshot to enable meaningful Lyman-alpha line transfer post-processing.
In modern data science, it is often not enough to obtain only a data-driven model with a good prediction quality. On the contrary, it is more interesting to understand the properties of the model, and which parts could be replaced to obtain better results. Such questions fall under machine learning interpretability, which can be considered one of the area's rising topics. In the paper, we use multi-objective evolutionary optimization for composite data-driven model learning to obtain the desired properties of the algorithm. This means that whereas one of the obvious objectives is precision, the others could be chosen as the complexity of the model, its robustness, and many more. The application of the method is shown on examples of multi-objective learning of composite models, differential equations, and closed-form algebraic expressions, which are unified to form an approach for model-agnostic learning of interpretable models.
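A small sketch of the multi-objective selection step implied above: among candidate models scored by prediction error and a complexity measure, only the Pareto-optimal (non-dominated) ones are kept. The candidate scores below are hypothetical, and the evolutionary search itself is not reproduced.

```python
def pareto_front(candidates):
    """Keep candidates not dominated in (error, complexity); both objectives are minimised."""
    front = []
    for i, (err_i, cpx_i) in enumerate(candidates):
        dominated = any(err_j <= err_i and cpx_j <= cpx_i and (err_j, cpx_j) != (err_i, cpx_i)
                        for j, (err_j, cpx_j) in enumerate(candidates) if j != i)
        if not dominated:
            front.append((err_i, cpx_i))
    return front

# hypothetical (error, number-of-terms) scores for candidate composite models
models = [(0.10, 12), (0.12, 5), (0.30, 3), (0.09, 20), (0.12, 7)]
print(pareto_front(models))   # the (0.12, 7) candidate is dominated by (0.12, 5) and dropped
```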
Multi-messenger astrophysics is becoming a major avenue to explore the Universe, with the potential to span a vast range of redshifts. The growing synergies between different probes are opening new frontiers, which promise profound insights into several aspects of fundamental physics and cosmology. In this context, THESEUS will play a central role during the 2030s in detecting and localizing the electromagnetic counterparts of gravitational wave and neutrino sources that the unprecedented sensitivity of next-generation detectors will discover at much higher rates than at present. Here, we review the most important target signals from multi-messenger sources that THESEUS will be able to detect and characterize, discussing detection rate expectations and scientific impact.
We adopt a geometric perspective on Fock space to provide two complementary insights into the eigenstates in many-body-localized fermionic systems. On the one hand, individual many-body-localized eigenstates are well approximated by a Slater determinant of single-particle orbitals. On the other hand, the orbitals of different eigenstates in a given system display a varying, and generally imperfect, degree of compatibility, as we quantify by a measure based on the projectors onto the corresponding single-particle subspaces. We study this incompatibility between states of fixed and differing particle number, as well as inside and outside the many-body-localized regime. This gives detailed insights into the emergence and strongly correlated nature of quasiparticle-like excitations in many-body-localized systems, revealing intricate correlations between states of different particle number down to the level of individual realizations.
In applications such as participatory sensing and crowd sensing, self-interested agents exert costly effort towards achieving an objective for the system operator. We study such a setup where a principal incentivizes multiple agents of different types who can collude with each other to derive rent. The principal cannot observe the efforts exerted directly, but only the outcome of the task, which is a noisy function of the effort. The type of each agent influences the effort cost and task output. For a duopoly in which agents are coupled in their payments, we show that if the principal and the agents interact finitely many times, the agents can derive rent by colluding even if the principal knows the types of the agents. However, if the principal and the agents interact infinitely often, the principal can disincentivize agent collusion through a suitable data-driven contract.
We establish an exact formulation for wave scattering of a massless field with spin and charge by a Kerr-Newman-de Sitter black hole. Our formulation is based on the exact solution of the Teukolsky equation in terms of the local Heun function, and does not require any approximation. It provides simple exact formulae with arbitrarily high precision, which enable fast calculation without restrictions on the model parameters. We highlight several applications including quasinormal modes, cross sections, reflection/absorption rates, and Green functions.
The Sloan Digital Sky Survey provides colors for more than 100 000 moving objects, among which around 10 000 have albedos determined. Here we combine colors and albedos in order to perform a cluster analysis on the small-body population, and identify a C-cluster, a group of asteroids related to the C-type as defined in earlier work. Members of this C-cluster are in fair agreement with the color boundaries of the B- and C-types defined in DeMeo and Carry (2013). We then compare colors of C-cluster asteroids to those of carbonaceous chondrite powders, while taking into account the effect of phase angle. We show that only CM chondrites have colors in the range of C-cluster asteroids, CO, CR and CV chondrites being significantly redder. Also, CM chondrite powders are on average slightly redder than the average C-cluster member. The colors of C-cluster members are further investigated by looking at color variations as a function of asteroid diameter. We observe that the visible slope becomes bluer with decreasing asteroid diameter, and a transition seems to be present around 20 km. We discuss the origin of this variation and, if it is not related to a bias in the dataset or analysis, we conclude that it is related to the surface texture of the objects, smaller objects being covered by rocks, while larger objects are covered by a particulate surface. The blueing is interpreted as an increased contribution of the first reflection in the case of rock-dominated surfaces, which can scatter light in a Rayleigh-like manner. We do not have unambiguous evidence of space weathering within the C-cluster based on this analysis; however, the generally bluer nature of C-cluster objects compared to CM chondrites could be to some extent related to space weathering.
An intersecting $r$-uniform straight line system is an intersecting linear system whose lines each consist of $r$ points on a straight line segment of $\mathbb{R}^2$ and any two lines share a point. Recently, the author [A. V\'azquez-\'Avila, \emph{On intersecting straight line systems}, J. Discret. Math. Sci. Cryptogr. Accepted] proved that any intersecting $r$-uniform straight line system $(P,\mathcal{L})$ has transversal number at most $\nu_2-1$, with $r\geq\nu_2$, where $\nu_2$ is the maximum cardinality of a subset of lines $R\subseteq\mathcal{L}$ such that every triplet of different elements of $R$ does not have a common point. In this paper, we improve this upper bound when the intersecting $r$-uniform straight line system satisfies $r=\nu_2$. This result has immediate consequences for some questions posed by Oliveros et al. [D. Oliveros, C. O'Neill and S. Zerbib, \emph{The geometry and combinatorics of discrete line segment hypergraphs}, Discrete Math. {\bf 343} (2020), no. 6, 111825].
The effect of polyvalent molecular cations, such as spermine, on the condensation of DNA into very well-defined toroidal shapes has been well studied and understood. However, great effort has been made to obtain similar condensed structures from either ssRNA or dsRNA, the latter of which carries a similar negative charge density to dsDNA, although it adopts a different helical form. The analogous condensation of RNA molecules into well-defined structures has so far been elusive. In this work, we show that ssRNA molecules can easily be condensed into nanoring structures on a mica surface, where each nanoring structure is formed mostly by a single RNA molecule. The condensation occurs in a concentration range of different atomic cations, from monovalent to trivalent. The structures of the RNA nanorings on mica surfaces were observed by atomic force microscopy (AFM). The samples were observed in tapping mode and were prepared by drop evaporation of a solution of RNA in the presence of one type of the different cations used. As far as we know, this is the first time that nanorings or any other well-defined condensed RNA structures have been reported. The formation of the RNA nanorings can be understood as an energy competition between the hydrogen bonds forming hairpin stems, weakened by the salts, and the hairpin loops. These results may have important biological relevance, since it has been proposed that RNA is the oldest genome-coding molecule and the formation of these structures could have given it stability against degradation in primeval times. Moreover, the nanoring structures could have the potential to be used as biosensors and functionalized nanodevices.
We explore the feasibility of combining Graph Neural Network-based policy architectures with Deep Reinforcement Learning as an approach to problems in systems. This fits particularly well with operations on networks, which naturally take the form of graphs. As a case study, we take the idea of data-driven routing in intradomain traffic engineering, whereby the routing of data in a network can be managed taking into account the data itself. The particular subproblem which we examine is minimising link congestion in networks using knowledge of historic traffic flows. We show through experiments that an approach using Graph Neural Networks (GNNs) performs at least as well as previous work using Multilayer Perceptron architectures. GNNs have the added benefit that they allow for the generalisation of trained agents to different network topologies with no extra work. Furthermore, we believe that this technique is applicable to a far wider selection of problems in systems research.
We seek to design an experimentally feasible broadband, temporally multiplexed optical quantum memory with near-term applications to telecom bands. Specifically, we devise dispersion compensation for an impedance-matched narrow-band quantum memory by exploiting Raman processes over two three-level atomic subensembles, one for memory and the other for dispersion compensation. Dispersion compensation provides impedance matching over more than a full cavity linewidth. Combined with a one-second spin-coherence lifetime, the memory could be capable of a power efficiency exceeding 90%, leading to 106 modes for temporal multiplexing. Our design could lead to significant multiplexing enhancement for quantum repeaters to be used for telecom quantum networks.
The flux of cosmic rays (CRs) in the heliosphere is subjected to remarkable time variations caused by the 11-year cycle of solar activity. To help the study of this effect, we have developed a web application (Heliophysics Virtual Observatory) that collects real-time data on solar activity, interplanetary plasma, and charged radiation from several space missions and observatories. As we will show, our application can be used to visualize, manipulate, and download updated data on sunspots, heliospheric magnetic fields, solar wind, and neutron monitor counting rates. Data and calculations are automatically updated on a daily basis. A nowcast of the energy spectrum of CR protons near Earth is also provided, using calculations and real-time neutron monitor data as input.
Flickering is a universal phenomenon in accreting astronomical systems which still defies detailed physical understanding. It is particularly evident in cataclysmic variables (CVs). Attempting to define boundary conditions for models, the strength of the flickering is measured in several thousand light curves of more than 100 CVs. The flickering amplitude is parameterized by the FWHM of a Gaussian fit to the magnitude distribution of data points in a light curve. This quantity requires several corrections before a comparison between different sources can be made. While no correlations of the flickering strength with simple parameters such as component masses, orbital inclination or period were detected, a dependence on the absolute magnitude of the primary component and on the CV subtype is found. In particular, flickering in VY Scl type novalike variables is systematically stronger than in UX UMa type novalikes. The broadband spectrum of the flickering light source can be fit by simple models but shows excess in the $U$ band. When the data permitted investigating the flickering strength as a function of orbital phase in eclipsing CVs, such a dependence was found, but it is different for different systems. Surprisingly, there are also indications for variations of the flickering strength with the superhump phase in novalike variables with permanent superhumps. In dwarf novae, the flickering amplitude is high during quiescence, drops quickly at an intermediate magnitude when the system enters into (or returns from) an outburst and, on average, remains constant above a given brightness threshold.
Training a multi-speaker Text-to-Speech (TTS) model from scratch is computationally expensive and adding new speakers to the dataset requires the model to be re-trained. The naive solution of sequential fine-tuning of a model for new speakers can cause the model to have poor performance on older speakers. This phenomenon is known as catastrophic forgetting. In this paper, we look at TTS modeling from a continual learning perspective where the goal is to add new speakers without forgetting previous speakers. Therefore, we first propose an experimental setup and show that serial fine-tuning for new speakers can result in the forgetting of the previous speakers. Then we exploit two well-known techniques for continual learning namely experience replay and weight regularization and we reveal how one can mitigate the effect of degradation in speech synthesis diversity in sequential training of new speakers using these methods. Finally, we present a simple extension to improve the results in extreme setups.
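A generic experience-replay sketch for the sequential-speaker setting described above: while fine-tuning on a new speaker, each batch mixes in stored utterances from earlier speakers. The buffer size, mixing ratio, and the placeholder train_step are assumptions; the TTS model itself is not shown.

```python
import random

class ReplayBuffer:
    """Reservoir-style store of (text, audio, speaker_id) examples from past speakers."""
    def __init__(self, capacity=1000):
        self.capacity, self.items, self.seen = capacity, [], 0

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:                                    # reservoir sampling keeps a uniform sample of past data
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

def finetune_on_new_speaker(new_speaker_data, buffer, train_step, replay_per_batch=8):
    for batch in new_speaker_data:               # batch: list of (text, audio, speaker_id) tuples
        mixed = batch + buffer.sample(replay_per_batch)
        train_step(mixed)                        # placeholder: one optimisation step of the TTS model
        for example in batch:
            buffer.add(example)

buffer = ReplayBuffer()
dummy_data = [[("hello", None, "spk_new")] * 4] * 3   # stand-in batches
finetune_on_new_speaker(dummy_data, buffer, train_step=lambda b: None)
print(len(buffer.items))                               # 12 stored examples available for future replay
```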
The difficulty of optimal control problems has classically been characterized in terms of system properties such as minimum eigenvalues of controllability/observability gramians. We revisit these characterizations in the context of the increasing popularity of data-driven techniques like reinforcement learning (RL), and in control settings where input observations are high-dimensional images and transition dynamics are unknown. Specifically, we ask: to what extent are quantifiable control and perceptual difficulty metrics of a task predictive of the performance and sample complexity of data-driven controllers? We modulate two different types of partial observability in a cartpole "stick-balancing" problem -- (i) the height of one visible fixation point on the cartpole, which can be used to tune the fundamental limits of performance achievable by any controller, and (ii) the level of perception noise in the fixation point position inferred from depth or RGB images of the cartpole. In these settings, we empirically study two popular families of controllers: RL and system identification-based $H_\infty$ control, using visually estimated system state. Our results show that the fundamental limits of robust control have corresponding implications for the sample-efficiency and performance of learned perception-based controllers. Visit our project website https://jxu.ai/rl-vs-control-web for more information.
We introduce and apply a new approach to probe the response of galactic stellar haloes to the interplay between cosmological merger histories and galaxy formation physics. We perform dark-matter-only, zoomed simulations of two Milky Way-mass hosts and make targeted, controlled changes to their cosmological histories using the genetic modification technique. Populating each history's stellar halo with a semi-empirical, particle-tagging approach then enables a controlled study, with all instances converging to the same large-scale structure, dynamical and stellar mass at $z=0$ as their reference. These related merger scenarios alone generate an extended spread in stellar halo mass fractions (1.5 dex) comparable to the observed population. Largest scatter is achieved by growing late ($z\leq1$) major mergers that spread out existing stars to create massive, in-situ dominated stellar haloes. Increasing a last major merger at $z\sim2$ brings more accreted stars into the inner regions, resulting in smaller scatter in the outskirts which are predominantly built by subsequent minor events. Exploiting the flexibility of our semi-empirical approach, we show that the diversity of stellar halo masses across scenarios is reduced by allowing shallower slopes in the stellar mass--halo mass relation for dwarf galaxies, while it remains conserved when central stars are born with hotter kinematics across cosmic time. The merger-dependent diversity of stellar haloes thus responds distinctly to assumptions in modelling the central and dwarf galaxies respectively, opening exciting prospects to constrain star formation and feedback at different galactic mass-scales with the coming generation of deep, photometric observatories.
This paper presents a method for riggable 3D face reconstruction from monocular images, which jointly estimates a personalized face rig and per-image parameters including expressions, poses, and illuminations. To achieve this goal, we design an end-to-end trainable network embedded with a differentiable in-network optimization. The network first parameterizes the face rig as a compact latent code with a neural decoder, and then estimates the latent code as well as per-image parameters via a learnable optimization. By estimating a personalized face rig, our method goes beyond static reconstructions and enables downstream applications such as video retargeting. The in-network optimization explicitly enforces constraints derived from first principles, and thus introduces additional priors compared to regression-based methods. Finally, data-driven priors from deep learning are utilized to constrain the ill-posed monocular setting and ease the optimization difficulty. Experiments demonstrate that our method achieves SOTA reconstruction accuracy, reasonable robustness and generalization ability, and supports standard face rig applications.
Object recognition has made great advances in the last decade, but predominately still relies on many high-quality training examples per object category. In contrast, learning new objects from only a few examples could enable many impactful applications from robotics to user personalization. Most few-shot learning research, however, has been driven by benchmark datasets that lack the high variation that these applications will face when deployed in the real-world. To close this gap, we present the ORBIT dataset and benchmark, grounded in the real-world application of teachable object recognizers for people who are blind/low-vision. The dataset contains 3,822 videos of 486 objects recorded by people who are blind/low-vision on their mobile phones. The benchmark reflects a realistic, highly challenging recognition problem, providing a rich playground to drive research in robustness to few-shot, high-variation conditions. We set the benchmark's first state-of-the-art and show there is massive scope for further innovation, holding the potential to impact a broad range of real-world vision applications including tools for the blind/low-vision community. We release the dataset at https://doi.org/10.25383/city.14294597 and benchmark code at https://github.com/microsoft/ORBIT-Dataset.
The recent inflow of empirical data about the collective behaviour of strongly correlated biological systems has brought field theory and the renormalization group into the biophysical arena. Experiments on bird flocks and insect swarms show that social forces act on the particles' velocity through the generator of its rotations, namely the spin, indicating that mode-coupling field theories are necessary to reproduce the correct dynamical behaviour. Unfortunately, a theory for three coupled fields - density, velocity and spin - has a prohibitive degree of intricacy. A simplifying path consists in getting rid of density fluctuations by studying incompressible systems. This requires imposing a solenoidal constraint on the primary field, an unsolved problem even for equilibrium mode-coupling theories. Here, we perform an equilibrium dynamic renormalization group analysis of a mode-coupling field theory subject to a solenoidal constraint; using the classification of Halperin and Hohenberg, we can dub this case as a solenoidal Model G. We demonstrate that the constraint produces a new vertex that mixes static and dynamical coupling constants, and that this vertex is essential to grant the closure of the renormalization group structure and the consistency of dynamics with statics. Interestingly, although the solenoidal constraint leads to a modification of the static universality class, we find that it does not change the dynamical universality class, a result that seems to represent an exception to the general rule that dynamical universality classes are narrower than static ones. Our results constitute a solid stepping stone in the admittedly large chasm towards developing an off-equilibrium mode-coupling theory of biological groups.
Besides magnetic and charge order, regular arrangements of orbital occupation constitute a fundamental order parameter of condensed matter physics. Even though orbital order is difficult to identify directly in experiments, its presence was firmly established in a number of strongly correlated, three-dimensional Mott insulators. Here, reporting resonant X-ray scattering experiments on the layered Van der Waals compound $1T$-TiSe$_2$, we establish the emergence of orbital order in a weakly correlated, quasi-two-dimensional material. Our experimental scattering results are consistent with first-principles calculations that bring to the fore a generic mechanism of close interplay between charge redistribution, lattice displacements, and orbital order. It demonstrates the essential role that orbital degrees of freedom play in TiSe$_2$, and their importance throughout the family of correlated Van der Waals materials.
Existing radar sensors can be classified into automotive and scanning radars. While most radar odometry (RO) methods are only designed for a specific type of radar, our RO method adapts to both scanning and automotive radars. Our RO is simple yet effective, where the pipeline consists of thresholding, probabilistic submap building, and an NDT-based radar scan matching. The proposed RO has been tested on two public radar datasets: the Oxford Radar RobotCar dataset and the nuScenes dataset, which provide scanning and automotive radar data respectively. The results show that our approach surpasses state-of-the-art RO using either automotive or scanning radar by reducing translational error by 51% and 30%, respectively, and rotational error by 17% and 29%, respectively. Besides, we show that our RO achieves centimeter-level accuracy as lidar odometry, and automotive and scanning RO have similar accuracy.
Periodically-driven systems are ubiquitous in science and technology. In quantum dynamics, even a small number of periodically-driven spins leads to complicated dynamics. Hence, it is of interest to understand what constraints such dynamics must satisfy. We derive a set of constraints for each number of cycles. For pure initial states, the observable being constrained is the recurrence probability. We use our constraints for detecting undesired coupling to unaccounted environments and drifts in the driving parameters. To illustrate the relevance of these results for modern quantum systems we demonstrate our findings experimentally on a trapped-ion quantum computer, and on various IBM quantum computers. Specifically, we provide two experimental examples where these constraints surpass fundamental bounds associated with known one-cycle constraints. This scheme can potentially be used to detect the effect of the environment in quantum circuits that cannot be classically simulated. Finally, we show that, in practice, testing an $n$-cycle constraint requires executing only $O(\sqrt{n})$ cycles, which makes the evaluation of constraints associated with hundreds of cycles realistic.
We study one-loop bulk entanglement entropy in even spacetime dimensions using the heat kernel method, which captures the universal piece of entanglement entropy, a logarithmically divergent term in even dimensions. In four dimensions, we perform explicit calculations for various shapes of boundary subregions. In particular, for a cusp subregion with an arbitrary opening angle, we find that the bulk entanglement entropy always encodes the same universal information about the boundary theories as the leading entanglement entropy in the large N limit, up to a fixed proportionality constant. By smoothly deforming a circle in the boundary, we find that to leading order in the deformations, the bulk entanglement entropy shares the same shape dependence as the leading entanglement entropy and hence the same physical information can be extracted from both cases. This establishes an interesting local/nonlocal duality for holographic $\mathrm{CFT}_3$. However, the result does not hold for higher-dimensional holographic theories.
Using the nonabelianization procedure, we find an integrable matrix version of the Euler top on $\mathfrak{so}_3$.
Linear-response time-dependent density functional theory (LR-TDDFT) for core level spectroscopy using standard local functionals suffers from self-interaction error and a lack of orbital relaxation upon creation of the core hole. As a result, LR-TDDFT calculated X-ray absorption near edge structure (XANES) spectra need to be shifted along the energy axis to match experimental data. We propose a correction scheme based on many body perturbation theory to calculate the shift from first principles. The ionization potential of the core donor state is first computed and then substituted for the corresponding Kohn--Sham orbital energy, thus emulating Koopmans' condition. Both self-interaction error and orbital relaxation are taken into account. The method exploits the localized nature of core states for efficiency and integrates seamlessly in our previous implementation of core level LR-TDDFT, yielding corrected spectra in a single calculation. We benchmark the correction scheme on molecules at the K- and L-edges as well as for core binding energies and report accuracies comparable to higher order methods. We also demonstrate applicability in large and extended systems and discuss efficient approximations.
A giant impact origin for the Moon is generally accepted, but many aspects of lunar formation remain poorly understood and debated. \'Cuk et al. (2016) proposed that an impact that left the Earth-Moon system with high obliquity and angular momentum could explain the Moon's orbital inclination and isotopic similarity to Earth. In this scenario, instability during the Laplace Plane transition, when the Moon's orbit transitions from the gravitational influence of Earth's figure to that of the Sun, would both lower the system's angular momentum to its present-day value and generate the Moon's orbital inclination. Recently, Tian and Wisdom (2020) discovered new dynamical constraints on the Laplace Plane transition and concluded that the Earth-Moon system could not have evolved from an initial state with high obliquity. Here we demonstrate that the Earth-Moon system with an initially high obliquity can evolve into the present state, and we identify a spin-orbit secular resonance as a key dynamical mechanism in the later stages of the Laplace Plane transition. Some of the simulations by Tian and Wisdom (2020) did not encounter this late secular resonance, as their model suppressed obliquity tides and the resulting inclination damping. Our results demonstrate that a giant impact that left Earth with high angular momentum and high obliquity ($\theta > 61^{\circ}$) is a promising scenario for explaining many properties of the Earth-Moon system, including its angular momentum and obliquity, the geochemistry of Earth and the Moon, and the lunar inclination.
Transmission electron microscopes use electrons with wavelengths of a few picometers, potentially capable of imaging individual atoms in solids at a resolution ultimately set by the intrinsic size of an atom. Unfortunately, due to imperfections in the imaging lenses and multiple scattering of electrons in the sample, the image resolution reached is 3 to 10 times worse. Here, by inversely solving the multiple scattering problem and overcoming the aberrations of the electron probe using electron ptychography to recover a linear phase response in thick samples, we demonstrate an instrumental blurring of under 20 picometers. The widths of atomic columns in the measured electrostatic potential are now no longer limited by the imaging system, but instead by the thermal fluctuations of the atoms. We also demonstrate that electron ptychography can potentially reach a sub-nanometer depth resolution and locate embedded atomic dopants in all three dimensions with only a single projection measurement.