We propose a simple and efficient modular single-source surface integral equation (SS-SIE) formulation for electromagnetic analysis of arbitrarily connected penetrable and perfectly electrical conductor (PEC) objects in two-dimensional space. In this formulation, a modular equivalent model for each penetrable object in the composite structure is first constructed independently, regardless of whether the object is surrounded by the background medium, other media, or partially connected objects, by replacing it with the background medium and enforcing an equivalent electric current density on its boundary so that the fields in the exterior region remain unchanged. Then, by combining all the modular models and any PEC objects, an equivalent model for the composite structure is derived. Troublesome junction-handling techniques are not needed, and non-conformal meshes are intrinsically supported. The proposed SS-SIE formulation is simple to implement, efficient, and flexible, and it shows a significant performance improvement in terms of CPU time compared with the original SS-SIE formulation and the Poggio-Miller-Chang-Harrington-Wu-Tsai (PMCHWT) formulation. Several numerical examples, including a coated dielectric cuboid, large lossy objects, a planar layered dielectric structure, and a partially connected dielectric and PEC structure, are carried out to validate its accuracy, efficiency, and robustness.
This research quantitatively and qualitatively analyzes the factors responsible for the water level variations in Lake Toba, North Sumatra Province, Indonesia. According to several studies carried out from 1993 to 2020, changes in the water level were associated with climate variability, climate change, and human activities. Furthermore, these studies stated that reduced rainfall during the rainy season due to the El Nino Southern Oscillation (ENSO) and the continuous increase in the maximum and average temperatures were some of the effects of climate change in the Lake Toba catchment area. Additionally, human interventions such as industrial activities, population growth, and damage to the surrounding environment of the Lake Toba watershed had significant impacts in terms of decreasing the water level. However, these studies were unable to determine the factor that had the most significant effect, although studies on other lakes worldwide have shown these factors are the main causes of fluctuations or decreases in water levels. A simulation study of Lake Toba's water balance showed the possibility of having a water surplus until the mid-twenty-first century. The input discharge was predicted to be greater than the output; therefore, Lake Toba could be optimized without affecting the future water level. However, the climate projections depicted a different situation, with scenarios predicting the possibility of extreme climate anomalies, demonstrating drier climatic conditions in the future. This review concludes that it is necessary to conduct an in-depth, comprehensive, and systematic study to identify the most dominant factor among the three that is causing the decrease in the Lake Toba water level and to describe the future projected water level.
This paper presents the CUHK-EE voice cloning system for the ICASSP 2021 M2VoC challenge. The challenge provides two Mandarin speech corpora: the AIShell-3 corpus of 218 speakers with noise and reverberation, and the MST corpus, which includes high-quality speech of one male and one female speaker. 100 and 5 utterances of 3 target speakers in different voices and styles are provided in tracks 1 and 2, respectively, and the participants are required to synthesize speech in the target speaker's voice and style. We take part in track 1 and carry out voice cloning based on 100 utterances of the target speakers. An end-to-end voice cloning system is developed to accomplish the task, which includes: 1. a text and speech front-end module with the help of forced alignment, 2. an acoustic model combining Tacotron2 and DurIAN to predict the mel-spectrogram, 3. a HiFi-GAN vocoder for waveform generation. Our system comprises three stages: a multi-speaker training stage, a target speaker adaptation stage, and a target speaker synthesis stage. Our team is identified as T17. The subjective evaluation results provided by the challenge organizer demonstrate the effectiveness of our system. Audio samples are available at our demo page: https://daxintan-cuhk.github.io/CUHK-EE-system-M2VoC-challenge/ .
The use of cumulative incidence functions for characterizing the risk of one type of event in the presence of others has become increasingly popular over the past decade. The problems of modeling, estimation and inference have been treated using parametric, nonparametric and semi-parametric methods. Efforts to develop suitable extensions of machine learning methods, such as regression trees and related ensemble methods, have begun comparatively recently. In this paper, we propose a novel approach to estimating cumulative incidence curves in a competing risks setting using regression trees and associated ensemble estimators. The proposed methods employ augmented estimators of the Brier score risk as the primary basis for building and pruning trees, and lead to methods that are easily implemented using existing R packages. Data from the Radiation Therapy Oncology Group (trial 9410) is used to illustrate these new methods.
Two years of experience with Active Metal Casting flat bonds shows that this technology is suitable for the heat fluxes expected in Tore Supra (10 MW/m${}^2$). Tests were pursued up to 3330 cycles, with elements still functional. At higher heat fluxes, fatigue damage is observed, but the bond resists remarkably well with no tile detachment. Examination of such deliberately damaged bonds showed distributed cracking, proving the absence of any weak link. The limitations at those higher heat fluxes are related more to the design and the base materials than to the bond itself.
The class of gross substitutes (GS) set functions plays a central role in Economics and Computer Science. GS belongs to the hierarchy of {\em complement free} valuations introduced by Lehmann, Lehmann and Nisan, along with other prominent classes: $GS \subsetneq Submodular \subsetneq XOS \subsetneq Subadditive$. The GS class has always been more enigmatic than its counterpart classes, both in its definition and in its relation to the other classes. For example, while it is well understood how closely the Submodular, XOS and Subadditive classes (point-wise) approximate one another, approximability of these classes by GS remained wide open. Our main result is the existence of a submodular valuation (one that is also budget additive) that cannot be approximated by GS within a ratio better than $\Omega(\frac{\log m}{\log\log m})$, where $m$ is the number of items. En route, we uncover a new symmetrization operation that preserves GS, which may be of independent interest. We show that our main result is tight with respect to budget additive valuations. Additionally, for a class of submodular functions that we refer to as {\em concave of Rado} valuations (this class greatly extends budget additive valuations), we show approximability by GS within an $O(\log^2m)$ factor.
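For readers unfamiliar with the classes involved, a budget-additive valuation (the submodular subclass with respect to which the lower bound above is shown to be tight) is defined, in standard notation not taken from this abstract, by \[ v(S) \;=\; \min\Big(b,\ \sum_{j \in S} v_j\Big), \qquad S \subseteq [m], \] for a budget $b \ge 0$ and non-negative item values $v_1,\dots,v_m$; every such valuation is submodular, yet by the main result above it can be far from every gross-substitutes valuation.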
Two-dimensional infrared spectroscopy experiments have presented new results regarding the dynamics of the hydrated excess proton (aka the "hydronium" cation solvated in water). It has been suggested by these experiments that the hydrated excess proton has an anisotropy reorientation timescale of 2.5 ps, which can be viewed as being somewhat long lived. Through the use of both the reactive molecular dynamics Multistate Empirical Valence Bond method and Experiment Directed Simulation Ab Initio Molecular Dynamics, we show that timescales of the same magnitude are obtained that correspond to proton transport, while also involving structural reorientations of the hydrated proton structure that correspond to the so-called "special pair dance". The latter is a process predicted by prior computational studies in which the central hydrated hydronium in a distorted Eigen cation (H$_9$O$_4^+$) structure continually switches special pair partners with its strongly hydrogen-bonded neighboring water molecules. These dynamics are further characterized through the time evolution of instantaneous normal modes. It is concluded that the hydrated excess proton has a spectral signature distinct from that of the other protons in the hydrated proton complex. However, the results call into question the use of a static picture based on a simple effective one-dimensional potential well to describe the hydrated excess proton in water. Instead, they more conclusively point to a distorted and dynamic Eigen cation as the most prevalent hydrated proton species in acid solutions of dilute to moderate concentrations.
We study solutions to Nahm's equations with continuous symmetries and, under certain (mild) hypotheses, we classify the corresponding Ans\"atze. Using our classification, we construct novel Nahm data, and prescribe methods for generating further solutions. Finally, we use these results to construct new BPS monopoles with spherical symmetry.
This paper proposes a computational technique based on "deep unfolding" for solving the finite-time maximum hands-off control problem for discrete-time nonlinear stochastic systems. In particular, we seek a sparse control input sequence that stabilizes the system, such that the expected value of the square of the final state is small, by training a deep neural network. The proposed technique is demonstrated by a numerical experiment.
The data paper is an emerging academic genre that focuses on the description of research data objects. However, there is a lack of empirical knowledge about this rising genre in quantitative science studies, particularly from the perspective of its linguistic features. To fill this gap, this research aims to offer a first quantitative examination of which rhetorical moves (rhetorical units performing a coherent narrative function) are used in data paper abstracts, as well as how these moves are used. To this end, we developed a new classification scheme for rhetorical moves in data paper abstracts by expanding a well-received system that focuses on English-language research article abstracts. We used this expanded scheme to classify and analyze rhetorical moves used in two flagship data journals, Scientific Data and Data in Brief. We found that data papers exhibit a combination of IMRaD- and data-oriented moves and that the usage differences between the journals can be largely explained by journal policies concerning abstract and paper structure. This research offers a novel examination of how the data paper, a novel data-oriented knowledge representation, is composed, which greatly contributes to a deeper understanding of data and data publication in the scholarly communication system.
We discuss tensor categories motivated by CFT, their unitarizability, and applications to various models, including affine VOAs. We discuss the classification of type A Verlinde fusion categories. We propose an approach to the Kazhdan-Lusztig-Finkelberg theorem. This theorem gives a ribbon equivalence between the fusion category associated to a quantum group at a certain root of unity and that associated to a corresponding affine vertex operator algebra at a suitable positive integer level. We develop ideas of Wenzl. Our results rely on the notion of weak quasi-Hopf (wqh) algebra of Drinfeld, Mack, and Schomerus. We were also guided by Drinfeld's original proof, and by the work of Bakalov and Kirillov and of Neshveyev and Tuset for a generic parameter. Wenzl described a fusion tensor product in quantum group fusion categories and related it to the unitary structure. Given two irreducible objects, the inner product of the fusion tensor product is induced by the braiding of U_q(g), with q a suitable root of 1. Moreover, the paper suggests a suitable untwisting procedure to make the unitary structure trivial. It also describes a continuous path that intuitively connects objects of the quantum group fusion category to representations of the simple Lie group defining the affine Lie algebra. We study this procedure. One of our main results is the construction of a Hopf algebra in a weak sense associated to the quantum group fusion category, and of a twist of it giving a wqh structure on the Zhu algebra and a unitary modular fusion category structure on the representation category of the affine Lie algebra, confirming an early view by Frenkel and Zhu. We show that this modular fusion category structure is equivalent to that obtained via the tensor product theory of VOAs by Huang and Lepowsky. This gives a direct proof of the FKL theorem.
We present spatially resolved Hubble Space Telescope grism spectroscopy of 15 galaxies at $z\sim0.8$ drawn from the DEEP2 survey. We analyze H$\alpha$+[N II], [S II] and [S III] emission on kpc scales to explore which mechanisms are powering emission lines at high redshifts, testing which processes may be responsible for the well-known offset of high redshift galaxies from the $z\sim0$ locus in the [O III]/H$\beta$ versus [N II]/H$\alpha$ BPT (Baldwin-Phillips-Terlevich) excitation diagram. We study spatially resolved emission line maps to examine evidence for active galactic nuclei (AGN), shocks, diffuse ionized gas (DIG), or escaping ionizing radiation, all of which may contribute to the BPT offsets observed in our sample. We do not find significant evidence of AGN in our sample and quantify that, on average, AGN would need to contribute $\sim$25% of the H$\alpha$ flux in the central resolution element in order to cause the observed BPT offsets. We find weak ($2\sigma$) evidence of DIG emission at low surface brightnesses, yielding an implied total DIG emission fraction of $\sim$20%, which is not significant enough to be the dominant emission line driver in our sample. In general we find that the observed emission is dominated by star forming H II regions. We discuss trends with demographic properties and the possible role of $\alpha$-enhanced abundance patterns in the emission spectra of high redshift galaxies. Our results indicate that photo-ionization modeling with stellar population synthesis inputs is a valid tool to explore the specific star formation properties which may cause BPT offsets, to be explored in future work.
We present properties and invariants of Hamiltonian circuits in rectangular grids. It is proved that all circuits on a $2n \times 2n$ chessboard have at least $4n$ turns, and at least $2n$ straights if $n$ is even and $2n+2$ straights if $n$ is odd. The minimum numbers of turns and straights are presented and proved for circuits on an $n \times (n+1)$ chessboard. For the general case of an $n \times m$ chessboard, similar results are stated, but not all proofs are given.
In this paper, we propose a new approach to train Generative Adversarial Networks (GANs), where we deploy a double-oracle framework using generator and discriminator oracles. A GAN is essentially a two-player zero-sum game between the generator and the discriminator. Training GANs is challenging because a pure Nash equilibrium may not exist, and even finding the mixed Nash equilibrium is difficult as GANs have a large-scale strategy space. In our method, DO-GAN, we extend the double-oracle framework to GANs. We first generalize the players' strategies as the trained generator and discriminator models obtained from the best-response oracles. We then compute the meta-strategies using a linear program. To keep the framework scalable when multiple generator and discriminator best responses are stored in memory, we propose two solutions: 1) pruning the weakly-dominated players' strategies to keep the oracles from becoming intractable; 2) applying continual learning to retain the previous knowledge of the networks. We apply our framework to established GAN architectures such as vanilla GAN, Deep Convolutional GAN, Spectral Normalization GAN, and Stacked GAN. Finally, we conduct experiments on the MNIST, CIFAR-10, and CelebA datasets and show that DO-GAN variants achieve significant improvements in both subjective qualitative evaluation and quantitative metrics, compared with their respective GAN architectures.
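For intuition, the meta-step of a double-oracle loop on a two-player zero-sum game reduces to a small linear program over the stored strategies. The Python sketch below (names, shapes, and the toy payoff matrix are illustrative assumptions, not the DO-GAN code) computes the row player's mixed meta-strategy from an empirical payoff matrix between stored generator and discriminator strategies.

```python
# Hedged sketch of the zero-sum meta-game solve inside a double-oracle loop.
import numpy as np
from scipy.optimize import linprog

def zero_sum_mixed_nash(payoff: np.ndarray) -> np.ndarray:
    """Row player's maximin mixed strategy for payoff[i, j] = row player's utility."""
    n, m = payoff.shape
    # Decision variables: x_1..x_n (mixture weights) and v (game value); minimize -v.
    c = np.concatenate([np.zeros(n), [-1.0]])
    # For every column j: sum_i x_i * payoff[i, j] >= v  <=>  -payoff[:, j] . x + v <= 0.
    A_ub = np.hstack([-payoff.T, np.ones((m, 1))])
    b_ub = np.zeros(m)
    A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)   # mixture sums to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n]

# Example: rock-paper-scissors recovers the uniform meta-strategy (~[1/3, 1/3, 1/3]).
rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
print(zero_sum_mixed_nash(rps))
```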
In-band full-duplex systems promise to further increase the throughput of wireless systems, by simultaneously transmitting and receiving on the same frequency band. However, concurrent transmission generates a strong self-interference signal at the receiver, which requires the use of cancellation techniques. A wide range of techniques for analog and digital self-interference cancellation have already been presented in the literature. However, their evaluation focuses on cases where the underlying physical parameters of the full-duplex system do not vary significantly. In this paper, we focus on adaptive digital cancellation, motivated by the fact that physical systems change over time. We examine some of the different cancellation methods in terms of their performance and implementation complexity, considering the cost of both cancellation and training. We then present a comparative analysis of all these methods to determine which perform better under different system performance requirements. We demonstrate that with a neural network approach, the reduction in arithmetic complexity for the same cancellation performance relative to a state-of-the-art polynomial model is several orders of magnitude.
The subject of this paper is to study the decay of solutions for two systems of laminated Timoshenko beams with interfacial slip in the whole space $\mathbb{R}$, subject to a thermal effect of type III acting only on one component. When the thermal effect acts via the second or third component of the laminated Timoshenko beam (rotation angle displacement or dynamics of the slip), we prove that both systems are polynomially stable and obtain stability estimates in the $L^2$-norm of the solutions and their higher-order derivatives with respect to the space variable. The decay rates, as well as the absence or presence of the regularity-loss type property, depend on the regularity of the initial data and the speeds of wave propagation. However, when the thermal effect acts via the first component (transversal displacement), we introduce a new stability number $\chi$ and prove that the stability of the systems is equivalent to $\chi \neq 0$. An application to a case of lower-order coupling terms is also given. To prove our results, we use the energy method in Fourier space combined with well-chosen weight functions to build appropriate Lyapunov functionals.
The evolution of 3D graphics and graphical worlds has brought issues like content optimization, real-time processing, rendering, and shared storage limitations under consideration. Generally, different simplification approaches are used to make 3D meshes viable for rendering. However, many of these approaches ignore vertex attributes for instanced 3D meshes. In this paper, we implement and evaluate a simple and improved approach to simplifying instanced 3D textured models. The approach uses different vertex attributes in addition to geometry to simplify mesh instances. The resulting simplified models demonstrate efficient time-space requirements and better visual quality.
Composite fermions (CFs) are the particles underlying the novel phenomena observed in partially filled Landau levels. Both microscopic wave functions and semi-classical dynamics suggest that a CF is a dipole consisting of an electron and a double $2h/e$ quantum vortex, and that its motion is subject to a Berry curvature uniformly distributed in momentum space. Based on this picture, we study the electromagnetic response of composite fermions. We find that the response in the long-wavelength limit has a form identical to that of the Dirac CF theory. To obtain the result, we show that the Berry curvature contributes a half-quantized Hall conductance which, notably, is independent of the filling factor of a Landau level and not altered by the presence of impurities. The latter is because CFs undergo no side-jumps when scattered by quenched impurities in a Landau level with particle-hole symmetry. The remainder of the response is from an effective system that has the same Fermi wavevector, effective density, Berry phase, and therefore long-wavelength response to electromagnetic fields as a Dirac CF system. By interpreting the half-quantized Hall conductance as a contribution from a redefined vacuum, we can explicitly show the emergence of a Dirac CF effective description from the dipole picture. We further determine corrections due to electric quadrupoles and magnetic moments of CFs and show deviations from the Dirac CF theory when moving away from the long-wavelength limit.
Temporal action localization is an important yet challenging task in video understanding. Typically, such a task aims at inferring both the action category and the localization of the start and end frame for each action instance in a long, untrimmed video. While most current models achieve good results by using pre-defined anchors and numerous actionness scores, such methods suffer from a large number of outputs and heavy tuning of the locations and sizes corresponding to different anchors. Anchor-free methods, in contrast, are lighter, getting rid of redundant hyper-parameters, but have received little attention. In this paper, we propose the first purely anchor-free temporal localization method, which is both efficient and effective. Our model includes (i) an end-to-end trainable basic predictor, (ii) a saliency-based refinement module to gather more valuable boundary features for each proposal with a novel boundary pooling, and (iii) several consistency constraints to make sure our model can find the accurate boundary given arbitrary proposals. Extensive experiments show that our method beats all anchor-based and actionness-guided methods by a remarkable margin on THUMOS14, achieving state-of-the-art results, and achieves comparable ones on ActivityNet v1.3. Code is available at https://github.com/TencentYoutuResearch/ActionDetection-AFSD.
We continue the study of rateless codes for transmission of information across channels whose rate of erasure is unknown. In such a code, an infinite stream of encoding symbols can be generated from the message and sent across the erasure channel, and the decoder can decode the message after it has successfully collected a certain number of encoding symbols. A rateless erasure code is real-time oblivious if rather than collecting encoding symbols as they are received, the receiver either immediately decodes or discards each symbol it receives. Efficient real-time oblivious erasure correction uses a feedback channel in order to maximize the probability that a received encoding symbol is decoded rather than discarded. We construct codes which are real-time oblivious, but require fewer feedback messages and have faster decoding compared to previous work. Specifically, for a message of length $k'$, we improve the expected complexity of the feedback channel from $O(\sqrt{k'})$ to $O(1)$, and the expected decoding complexity from $O(k'\log(k'))$ to $O(k')$. Our method involves using an appropriate block erasure code to first encode the $k'$ message symbols, and then using a truncated version of the real-time oblivious erasure correction of Beimel et al (2007) to transmit the encoded message to the receiver, which then uses the decoding algorithm for the outer code to recover the message.
Multimodal Sentiment Analysis (MuSe) 2021 is a challenge focusing on the tasks of sentiment and emotion, as well as physiological-emotion and emotion-based stress recognition, through more comprehensively integrating the audio-visual, language, and biological signal modalities. The purpose of MuSe 2021 is to bring together communities from different disciplines; mainly, the audio-visual emotion recognition community (signal-based), the sentiment analysis community (symbol-based), and the health informatics community. We present four distinct sub-challenges: MuSe-Wilder and MuSe-Stress, which focus on continuous emotion (valence and arousal) prediction; MuSe-Sent, in which participants recognise five classes each for valence and arousal; and MuSe-Physio, in which the novel aspect of `physiological-emotion' is to be predicted. For this year's challenge, we utilise the MuSe-CaR dataset focusing on user-generated reviews and introduce the Ulm-TSST dataset, which displays people in stressful dispositions. This paper also provides detail on the state-of-the-art feature sets extracted from these datasets for utilisation by our baseline model, a Long Short-Term Memory Recurrent Neural Network. For each sub-challenge, a competitive baseline for participants is set; namely, on test, we report a Concordance Correlation Coefficient (CCC) of .4616 for MuSe-Wilder, .4717 for MuSe-Stress, and .4606 for MuSe-Physio. For MuSe-Sent, an F1 score of 32.82% is obtained.
GRB 190114C was a bright burst that occurred in the local Universe (z=0.425). It was the first gamma-ray burst (GRB) ever detected at TeV energies, thanks to MAGIC. We characterize the ambient medium properties of the host galaxy through the study of the absorbing X-ray column density. Combining Swift, XMM-Newton, and NuSTAR observations, we find that the GRB X-ray spectrum is characterized by a high column density that is well in excess of the expected Milky Way value and decreases, by a factor of ~2, around ~$10^5$ s. Such variability is not common in GRBs. The most straightforward interpretation of the variability in terms of photoionization of the ambient medium cannot account for the decrease at such late times, when the source flux is less intense. Instead, we interpret the decrease as due to a clumped absorber, denser along the line of sight and surrounded by lower-density gas. After the detection at TeV energies of GRB 190114C, two other GRBs were promptly detected. They share a high value of the intrinsic column density, and there are hints of a decrease in the column density as well. We speculate that a high local column density might be a common ingredient of TeV-detected GRBs.
Many commonsense reasoning NLP tasks involve choosing between one or more possible answers to a question or prompt based on knowledge that is often implicit. Large pretrained language models (PLMs) can achieve near-human performance on such tasks, while providing little human-interpretable evidence of the underlying reasoning they use. In this work, we show how to use these same models to generate such evidence: inspired by the contrastive nature of human explanations, we use PLMs to complete explanation prompts which contrast alternatives according to the key attribute(s) required to justify the correct answer (for example, peanuts are usually salty while raisins are sweet). Conditioning model decisions on these explanations improves performance on two commonsense reasoning benchmarks, as compared to previous non-contrastive alternatives. These explanations are also judged by humans to be more relevant for solving the task, and facilitate a novel method to evaluate explanation faithfulness.
Motivated by dark coronal lanes in SOHO / EIT 284 {\AA} EUV observations we construct and optimize an atmosphere model of the AR 8535 sunspot by adding a cool and dense component in the volume of plasma along open field lines determined using the Potential Field Source Surface (PFSS) extrapolation. Our model qualitatively reproduces the observed reduced microwave brightness temperature in the northern part of the sunspot in the VLA observations from 13 May 1999 and provides a physical explanation for the coronal dark lanes. We propose application of this method to other sunspots with such observed dark regions in EUV or soft X-rays and with concurrent microwave observations to determine the significance of open field regions. The connection between open fields and the resulting plasma temperature and density change is of relevance for slow solar wind source investigations.
Auxiliary information is attracting more and more attention in the area of machine learning. Attempts so far to include such auxiliary information in state-of-the-art learning processes have often been based on simply appending these auxiliary features at the data level or feature level. In this paper, we propose a novel training method with new options and architectures based on Siamese labels, which are used in the training phase as auxiliary modules; in the testing phase, the auxiliary module is removed. The Siamese label module makes training easier and improves performance in the testing process. In general, the main contributions can be summarized as follows: 1) Siamese labels are proposed for the first time as auxiliary information to improve learning efficiency; 2) we establish a new architecture, the Siamese Labels Auxiliary Network (SilaNet), which assists the training of the model; 3) the Siamese Labels Auxiliary Network is applied to compress the model parameters by 50% while ensuring high accuracy at the same time. For the purpose of comparison, we tested the network on CIFAR-10 and CIFAR-100 using some common models. The proposed SilaNet shows excellent efficiency in terms of both accuracy and robustness.
In the classical newsvendor model, piece-wise linear shortage and excess costs are balanced out to determine the optimal order quantity. However, for critical perishable commodities, the severity of the costs may be much more than linear. In this paper we discuss a generalisation of the newsvendor model with piece-wise polynomial cost functions to accommodate their severity. In addition, the stochastic demand has been assumed to follow a completely unknown probability distribution. Subsequently, a non-parametric estimator of the optimal order quantity has been developed from a random polynomial-type estimating equation using a random sample on demand. Strong consistency of the estimator has been proven when the true optimal order quantity is unique. The result has been extended to the case where multiple solutions for the optimal order quantity are available. The probability of existence of the estimated optimal order quantity has been studied through extensive simulation experiments. Simulation results indicate that the non-parametric method provides a robust yet efficient estimator of the optimal order quantity in terms of mean square error.
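For context (a standard fact, not taken from this abstract): in the classical piece-wise linear model with per-unit shortage cost $c_u$ and excess cost $c_o$, the optimal order quantity is the critical fractile of the demand distribution $F$, \[ Q^* = F^{-1}\!\left(\frac{c_u}{c_u + c_o}\right). \] With piece-wise polynomial penalties, for instance $c_u\,(D-Q)_+^{r}$ for shortage and $c_o\,(Q-D)_+^{s}$ for excess (an illustrative choice, not necessarily the exact form used in the paper), the first-order condition becomes the estimating equation \[ s\,c_o\,\mathbb{E}\big[(Q-D)_+^{\,s-1}\big] \;=\; r\,c_u\,\mathbb{E}\big[(D-Q)_+^{\,r-1}\big], \] whose empirical analogue over a random sample of demands is the kind of random estimating equation from which a non-parametric estimator of $Q^*$ can be built.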
Backdoor attacks have been considered a severe security threat to deep learning. Such attacks can make models perform abnormally on inputs with predefined triggers and still retain state-of-the-art performance on clean data. While backdoor attacks have been thoroughly investigated in the image domain from both attackers' and defenders' sides, an analysis in the frequency domain has been missing thus far. This paper first revisits existing backdoor triggers from a frequency perspective and performs a comprehensive analysis. Our results show that many current backdoor attacks exhibit severe high-frequency artifacts, which persist across different datasets and resolutions. We further demonstrate these high-frequency artifacts enable a simple way to detect existing backdoor triggers at a detection rate of 98.50% without prior knowledge of the attack details and the target model. Acknowledging previous attacks' weaknesses, we propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability. We show that existing defense works can benefit by incorporating these smooth triggers into their design consideration. Moreover, we show that the detector tuned over stronger smooth triggers can generalize well to unseen weak smooth triggers. In short, our work emphasizes the importance of considering frequency analysis when designing both backdoor attacks and defenses in deep learning.
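As a toy illustration of the kind of frequency-domain inspection described above (a hedged sketch with a synthetic image and a hypothetical checkerboard patch trigger, not the paper's detector), one can compare the high-frequency energy of a smooth image before and after stamping a small trigger patch:

```python
# Hedged sketch: high-frequency DCT energy of a smooth image, with and without a patch trigger.
import numpy as np
from scipy.fft import dctn

side = 32
ramp = np.linspace(0.0, 1.0, side)
clean = np.outer(ramp, ramp)                               # smooth, low-frequency image
triggered = clean.copy()
triggered[-4:, -4:] = np.indices((4, 4)).sum(axis=0) % 2   # checkerboard patch trigger

def high_freq_fraction(img, cutoff=None):
    """Fraction of DCT energy at frequency indices with u + v >= cutoff."""
    cutoff = cutoff if cutoff is not None else img.shape[0] // 2
    spec = np.abs(dctn(img, norm="ortho"))
    u, v = np.indices(img.shape)
    return spec[(u + v) >= cutoff].sum() / spec.sum()

print("clean    :", round(high_freq_fraction(clean), 4))
print("triggered:", round(high_freq_fraction(triggered), 4))  # noticeably larger
```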
Machine learning models for the ad-hoc retrieval of documents and passages have recently shown impressive improvements due to better language understanding using large pre-trained language models. However, these over-parameterized models are inherently non-interpretable and do not provide any information on the parts of the documents that were used to arrive at a certain prediction. In this paper we introduce the select and rank paradigm for document ranking, where interpretability is explicitly ensured when scoring longer documents. Specifically, we first select sentences in a document based on the input query and then predict the query-document score based only on the selected sentences, acting as an explanation. We treat sentence selection as a latent variable trained jointly with the ranker from the final output. We conduct extensive experiments to demonstrate that our inherently interpretable select-and-rank approach is competitive in comparison to other state-of-the-art methods and sometimes even outperforms them. This is due to our novel end-to-end training approach based on weighted reservoir sampling that manages to train the selector despite the stochastic sentence selection. We also show that our sentence selection approach can be used to provide explanations for models that operate on only parts of the document, such as BERT.
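The selector above is trained with weighted reservoir sampling; for reference, the generic algorithm of that name (the Efraimidis-Spirakis A-Res scheme, shown here as a plain Python sketch with illustrative sentence scores, not the paper's trained, differentiable selector) draws k items without replacement with weight-dependent probabilities:

```python
# Hedged sketch of weighted reservoir sampling (Efraimidis-Spirakis A-Res).
import heapq
import random

def weighted_reservoir_sample(items, weights, k):
    """Weighted sampling of k items without replacement via random keys u**(1/w)."""
    heap = []  # min-heap of (key, item); keeps the k largest keys seen so far
    for item, w in zip(items, weights):
        key = random.random() ** (1.0 / w)   # larger weights tend to get larger keys
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

sentences = ["s1", "s2", "s3", "s4", "s5"]   # illustrative sentence ids
scores = [0.1, 0.4, 0.2, 0.9, 0.3]           # illustrative relevance scores
print(weighted_reservoir_sample(sentences, scores, k=2))
```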
In this paper, we present a thorough study of maximizing a regularized non-monotone submodular function subject to various constraints, i.e., $\max \{ g(A) - \ell(A) : A \in \mathcal{F} \}$, where $g \colon 2^\Omega \to \mathbb{R}_+$ is a non-monotone submodular function, $\ell \colon 2^\Omega \to \mathbb{R}_+$ is a normalized modular function and $\mathcal{F}$ is the constraint set. Though the objective function $f := g - \ell$ is still submodular, the fact that $f$ could potentially take on negative values prevents the existing methods for submodular maximization from providing a constant approximation ratio for the regularized submodular maximization problem. To overcome the obstacle, we propose several algorithms which can provide a relatively weak approximation guarantee for maximizing regularized non-monotone submodular functions. More specifically, we propose a continuous greedy algorithm for the relaxation of maximizing $g - \ell$ subject to a matroid constraint. Then, the pipage rounding procedure can produce an integral solution $S$ such that $\mathbb{E} [g(S) - \ell(S)] \geq e^{-1}g(OPT) - \ell(OPT) - O(\epsilon)$. Moreover, we present a much faster algorithm for maximizing $g - \ell$ subject to a cardinality constraint, which can output a solution $S$ with $\mathbb{E} [g(S) - \ell(S)] \geq (e^{-1} - \epsilon) g(OPT) - \ell(OPT)$ using $O(\frac{n}{\epsilon^2} \ln \frac 1\epsilon)$ value oracle queries. We also consider the unconstrained maximization problem and give an algorithm which can return a solution $S$ with $\mathbb{E} [g(S) - \ell(S)] \geq e^{-1} g(OPT) - \ell(OPT)$ using $O(n)$ value oracle queries.
In this paper, we present an uncertainty-aware INVASE to quantify predictive confidence in healthcare problems. By introducing learnable Gaussian distributions, we leverage their variances to measure the degree of uncertainty. Based on the vanilla INVASE, two additional modules are proposed, i.e., an uncertainty quantification module in the predictor and a reward shaping module in the selector. We conduct extensive experiments on the UCI-WDBC dataset. Notably, our method eliminates almost all predictive bias with only about 20% of the queries, while the uncertainty-agnostic counterpart requires nearly 100% of the queries. The open-source implementation with a detailed tutorial is available at https://github.com/jx-zhong-for-academic-purpose/Uncertainty-aware-INVASE/blob/main/tutorialinvase%2B.ipynb.
The estimation of the intrinsic dimension of a dataset is a fundamental step in most dimensionality reduction techniques. This article illustrates intRinsic, an R package that implements novel state-of-the-art likelihood-based estimators of the intrinsic dimension of a dataset. In detail, the methods included in this package are the TWO-NN, Gride, and Hidalgo models. To allow these novel estimators to be easily accessible, the package contains a few high-level, intuitive functions that rely on a broader set of efficient, low-level routines. intRinsic encompasses models that fall into two categories: homogeneous and heterogeneous intrinsic dimension estimators. The first category contains the TWO-NN and Gride models. The functions dedicated to these two methods carry out inference under both the frequentist and Bayesian frameworks. In the second category we find Hidalgo, a Bayesian mixture model, for which an efficient Gibbs sampler is implemented. After discussing the theoretical background, we demonstrate the performance of the models on simulated datasets. This way, we can assess the results by comparing them with the ground truth. Then, we employ the package to study the intrinsic dimension of the Alon dataset, obtained from a famous microarray experiment. We show how the estimation of homogeneous and heterogeneous intrinsic dimensions allows us to gain valuable insights about the topological structure of a dataset.
The rate and pathways of relaxation of a magnetic medium to its equilibrium following excitation with intense and short laser pulses are the key ingredients of ultrafast optical control of spins. Here we study experimentally the evolution of the magnetization and magnetic anisotropy of thin films of a ferromagnetic metal galfenol (Fe$_{0.81}$Ga$_{0.19}$) resulting from excitation with a femtosecond laser pulse. From the temporal evolution of the hysteresis loops we deduce that the magnetization $M_S$ and magnetic anisotropy parameters $K$ recover within a nanosecond, and the ratio between $K$ and $M_S$ satisfies the thermal equilibrium's power law in the whole time range spanning from a few picoseconds to 3 nanoseconds. We further use the experimentally obtained relaxation times of $M_S$ and $K$ to analyze the laser-induced precession and demonstrate how they contribute to its frequency evolution at the nanosecond timescale.
Magnons and phonons are two fundamental neutral excitations of magnetically ordered materials which can dominate the low-energy thermal properties. In this work we study the interplay of magnons and phonons in honeycomb and kagome lattices. When the mirror reflection with respect to the magnetic ordering direction is broken, the symmetry-allowed in-plane Dzyaloshinskii-Moriya (DM) interaction couples the magnons to the phonons and magnon-polaron states are formed. In addition, both lattice structures allow for an out-of-plane DM interaction rendering the uncoupled magnons topological. Our aim is to study the interplay of such topological magnons with phonons. We show that the hybridization between magnons and phonons can significantly redistribute the Berry curvature among the bands. In particular, we find that the topological magnon band becomes trivial, while the hybridized states at lower energy acquire Berry curvature strongly peaked near the avoided crossings. As such, the thermal Hall conductivity of topological magnons shows significant changes due to the coupling to the phonons.
Strapdown inertial navigation research involves the parameterization and computation of the attitude, velocity, and position of a rigid body in a chosen reference frame. The community has long been devoted to finding the most concise and efficient representation for the strapdown inertial navigation system (INS). The current work is motivated by simplifying the existing dual-quaternion representation of the kinematic model. This paper proposes a compact and elegant representation of the body's attitude, velocity, and position with the aid of a devised trident quaternion tool, in which the position is accounted for by adding a second imaginary part to the dual quaternion. Eventually, the kinematics of the strapdown INS are cohesively unified in one concise differential equation, which bears the same form as the classical attitude quaternion equation. In addition, the computation of this trident quaternion-based kinematic equation is implemented with the recently proposed functional iterative integration approach. Numerical results verify the analysis and show that incorporating the new representation into the functional iterative integration scheme also achieves high inertial navigation computation accuracy.
Comoving pairs, even at separations of $\mathcal{O}(10^6)\,$AU, are a predicted reservoir of conatal stars. We present detailed chemical abundances of 62 stars in 31 comoving pairs with separations of $10^2 - 10^7\,$AU and 3D velocity differences $< 2 \mathrm{\ km \ s^{-1}}$. This sample includes both bound comoving pairs/wide binaries and unbound comoving pairs. Observations were taken using the MIKE spectrograph on the Magellan/Clay Telescope at high resolution ($\mathrm{R} \sim 45,000$) with a typical signal-to-noise ratio of 150 per pixel. With these spectra, we measure surface abundances for 24 elements: Li, C, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Sr, Y, Zr, Ba, La, Nd, and Eu. Taking iron as the representative element, our sample of wide binaries is chemically homogeneous at the level of $0.05$ dex, which agrees with prior studies on wide binaries. Importantly, even systems at separations of $2\times10^5-10^7\,$AU are homogeneous to $0.09$ dex, as opposed to random pairs, which have a dispersion of $0.23\,$dex. Assuming a mixture model of the wide binaries and random pairs, we find that $73 \pm 22\%$ of the comoving pairs at separations of $2\times10^5-10^7\,$AU are conatal. Our results imply that a much larger region of phase space may be used to find conatal stars, with which to study M dwarfs, star cluster evolution, exoplanets, chemical tagging, and beyond.
Flux-integrated semi-exclusive differential and integral cross sections for quasi-elastic neutrino charged-current scattering on argon are analyzed. We calculate these cross sections using the relativistic distorted-wave impulse approximation and compare them with recent MicroBooNE data. We find that the measured cross sections can be described well within the experimental uncertainties with a value of the nucleon axial mass $1 < M_A < 1.2$ GeV. The contribution of the exclusive channel $(\nu_{\mu}, \mu p)$ to the flux-integrated inclusive cross sections is about 50\%.
Chromospheric umbral oscillations produce periodic brightenings in the core of some spectral lines, known as umbral flashes. They are also accompanied by fluctuations in velocity, temperature, and, according to several recent works, magnetic field. In this study, we aim to ascertain the accuracy of the magnetic field determined from inversions of the Ca II 8542 \AA\ line. We have developed numerical simulations of wave propagation in a sunspot umbra. Synthetic Stokes profiles emerging from the simulated atmosphere were computed and then inverted using the NICOLE code. The atmospheres inferred from the inversions have been compared with the original parameters from the simulations. Our results show that the inferred chromospheric fluctuations in velocity and temperature match the known oscillations from the numerical simulation. In contrast, the vertical magnetic field obtained from the inversions exhibits an oscillatory pattern with a $\sim$300 G peak-to-peak amplitude which is absent in the simulation. We have assessed the error in the inferred parameters by performing numerous inversions with slightly different configurations of the same Stokes profiles. We find that when the atmosphere is approximately at rest, the inversion tends to favor solutions that underestimate the vertical magnetic field strength. On the contrary, during umbral flashes, the values inferred from most of the inversions are concentrated at stronger fields than those from the simulation. Our analysis provides a quantification of the errors associated with the inversions of the Ca II 8542 \AA\ line and suggests caution with the interpretation of the inferred magnetic field fluctuations.
Unrestricted mutation of shared state is a source of many well-known problems. The predominant safe solutions are pure functional programming, which bans mutation outright, and flow-sensitive type systems, which depend on sophisticated typing rules. Mutable value semantics is a third approach that bans sharing instead of mutation, thereby supporting part-wise in-place mutation and local reasoning, while maintaining a simple type system. In the purest form of mutable value semantics, references are second-class: they are only created implicitly, at function boundaries, and cannot be stored in variables or object fields. Hence, variables can never share mutable state. Because references are often regarded as an indispensable tool for writing efficient programs, it is legitimate to wonder whether such a discipline can compete with other approaches. As a basis for answering that question, we demonstrate how a language featuring mutable value semantics can be compiled to efficient native code. This approach relies on stack allocation for static garbage collection and leverages runtime knowledge to sidestep unnecessary copies.
We prove the existence of multiple noise-induced transitions in the Lasota-Mackey map, which is a class of one-dimensional random dynamical systems with additive noise. The result is achieved with the help of rigorous computer-assisted estimates. We first approximate the stationary distribution of the random dynamical system and then compute certified error intervals for the Lyapunov exponent. We find that the sign of the Lyapunov exponent changes at least three times when increasing the noise amplitude. We also show numerical evidence that the standard non-rigorous numerical approximation by the finite-time Lyapunov exponent is valid for our model for a sufficiently large number of iterations. Our method is expected to work for a broad class of nonlinear stochastic phenomena.
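To illustrate the non-rigorous finite-time estimate referred to above, here is a minimal Python sketch of the standard procedure for a one-dimensional random map with additive noise. The map used is the logistic map, a generic stand-in chosen only for illustration (it is not the Lasota-Mackey map), and all parameter values are arbitrary.

```python
# Hedged sketch: finite-time Lyapunov exponent of a 1D map with additive noise.
import numpy as np

def finite_time_lyapunov(f, df, x0, noise_amp, n_iter, rng):
    """Average of log|f'(x_n)| along one noisy orbit x_{n+1} = (f(x_n) + xi_n) mod 1."""
    x, acc = x0, 0.0
    for _ in range(n_iter):
        acc += np.log(abs(df(x)) + 1e-300)       # guard against df(x) == 0
        xi = rng.uniform(-noise_amp, noise_amp)  # additive noise
        x = (f(x) + xi) % 1.0                    # keep the orbit in [0, 1]
    return acc / n_iter

f  = lambda x: 4.0 * x * (1.0 - x)   # logistic map, illustrative stand-in only
df = lambda x: 4.0 - 8.0 * x
rng = np.random.default_rng(0)
for eps in (0.0, 0.05, 0.2):
    lam = finite_time_lyapunov(f, df, x0=0.3, noise_amp=eps, n_iter=100_000, rng=rng)
    print(f"noise={eps:<4}  lambda ~ {lam:.3f}")
```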
The results of a long-term multiwavelength study of the powerful flat spectrum radio quasar 3C 454.3 using {\it Fermi}-LAT and Swift XRT/UVOT data are reported. In the $\gamma$-ray band, {\it Fermi}-LAT observations show several major flares when the source flux was $>10^{-5}\:{\rm photon\:cm^{-2}\:s^{-1}}$; the peak $\gamma$-ray flux above $141.6$ MeV, $(9.22\pm1.96)\times10^{-5}\:{\rm photon\:cm^{-2}\:s^{-1}}$, observed on MJD 55519.33, corresponds to an isotropic $\gamma$-ray luminosity of $2.15\times10^{50}\:{\rm erg\:s^{-1}}$. The analysis of Swift XRT and UVOT data revealed a flux increase, although with smaller amplitudes, also in the X-ray and optical/UV bands. The X-ray emission of 3C 454.3 is characterized by a hard spectral index of $\Gamma_{\rm X}=1.16-1.75$, and the flux in the flaring states increased up to $(1.80\pm0.18)\times10^{-10}{\rm erg\:cm^{-2}\: s^{-1}}$. By combining the analyzed data, it was possible to assemble 362 high-quality and quasi-simultaneous spectral energy distributions of 3C 454.3 in 2008-2018, all of which were modeled within a one-zone leptonic scenario assuming the emission region is within the broad-line region, involving synchrotron, synchrotron self-Compton, and external Compton mechanisms. Such extensive modeling is key to constraining the underlying emission mechanisms in the 3C 454.3 jet and allows us to derive the physical parameters of the jet and investigate their evolution in time. The modeling suggests that during the flares, along with the variation of the emitting electron parameters, the Doppler boosting factor increased substantially, implying that the emission in these periods most likely originated in a faster moving region.
This article argues that low latency, high bandwidth, device proliferation, sustainable digital infrastructure, and data privacy and sovereignty continue to motivate the need for edge computing research even though its initial concepts were formulated more than a decade ago.
With the promulgation of data protection laws (e.g., GDPR in 2018), privacy preservation has become a broad consensus in applications where cross-domain sensitive data are utilized. Out of many privacy-preserving techniques, federated learning (FL) has received much attention as a bridge for secure data connection and cooperation. Although research on FL has surged, some classical data modeling methods are not well accommodated in FL. In this paper, we propose the first masking-based federated singular vector decomposition method, called FedSVD. FedSVD protects the raw data through a singular value invariance mask, which can subsequently be removed from the SVD results. Compared with prior privacy-preserving SVD solutions, FedSVD has lossless results, high confidentiality, and excellent scalability. We provide a privacy proof showing that FedSVD guarantees data confidentiality. Empirical experiments on real-life datasets and synthetic data have verified the effectiveness of our method. The reconstruction error of FedSVD is around 0.000001% of the raw data, validating the lossless property of FedSVD. The scalability of FedSVD is nearly the same as that of the standalone SVD algorithm. Hence, FedSVD can bring privacy protection almost without sacrificing any computation time or communication overhead.
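As a toy illustration of why orthogonal masking is lossless for SVD (a single-machine sketch of the general principle, not the FedSVD protocol, which distributes the data and the masks across parties), consider the following Python snippet:

```python
# Hedged sketch: random orthogonal masks leave singular values invariant and can be removed.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))              # raw data (never revealed under masking)

# Random orthogonal masks from QR decompositions of Gaussian matrices.
P, _ = np.linalg.qr(rng.standard_normal((6, 6)))
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

X_masked = P @ X @ Q                         # what an aggregator would see
U_m, s_m, Vt_m = np.linalg.svd(X_masked, full_matrices=False)

# Singular values are invariant under orthogonal masking ...
assert np.allclose(s_m, np.linalg.svd(X, compute_uv=False))
# ... and the singular vectors of X are recovered by removing the masks.
U_rec, Vt_rec = P.T @ U_m, Vt_m @ Q.T
assert np.allclose(U_rec.T @ X @ Vt_rec.T, np.diag(s_m), atol=1e-8)
```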
Unmanned Aerial Vehicles (UAVs), as a recently emerging technology, have enabled a new breed of unprecedented applications in different domains. The ongoing trend in this technology is a departure from large remotely controlled drones toward networks of small autonomous drones that collectively complete intricate tasks in a time- and cost-effective manner. An important challenge is developing efficient sensing, communication, and control algorithms that can accommodate the requirements of highly dynamic UAV networks with heterogeneous mobility levels. Recently, the use of Artificial Intelligence (AI) in learning-based networking has gained momentum, harnessing the learning power of cognizant nodes to make more intelligent networking decisions by integrating computational intelligence into UAV networks. An important example of this trend is the development of learning-powered routing protocols, where machine learning methods are used to model and predict topology evolution, channel status, traffic mobility, and environmental factors for enhanced routing. This paper reviews AI-enabled routing protocols designed primarily for aerial networks, including topology-predictive and self-adaptive learning-based routing algorithms, with an emphasis on accommodating highly dynamic network topologies. To this end, we justify the importance and adaptation of AI in UAV network communications. We also address, with an AI emphasis, the closely related topics of mobility and networking models for UAV networks, simulation tools and public datasets, and relations to UAV swarming, all of which help in choosing the right algorithm for each scenario. We conclude by presenting future trends and the remaining challenges in AI-based UAV networking, for different aspects of routing, connectivity, topology control, security and privacy, energy efficiency, and spectrum sharing.
Superconducting radio-frequency (SRF) niobium cavities are the modern means of particle acceleration and an enabling technology for record coherence superconducting quantum systems and ultra-sensitive searches for new physics. Here, we report a systematic effect in Nb cavities indicative of improved superconducting properties - an anomalous decrease (dip) in the resonant frequency at temperatures just below the critical temperature $T_\mathrm{c}$. The frequency dip magnitude correlates with cavity quality factor, near-surface impurity distribution, and $T_\mathrm{c}$. It is also a precursor of the peculiar decrease in the BCS surface impedance with increasing RF current. A first demonstration of the coherence peak in the AC conductivity in Nb SRF cavities is also presented and found to correlate with a large frequency dip.
The recent discovery of spin-orbit torques (SOTs) within magnetic single layers has attracted attention in the field of spintronics. However, it has remained elusive how to understand and how to tune these SOTs. Here, utilizing single layers of chemically disordered Fe$_x$Pt$_{1-x}$, we unveil the mechanism of the "unexpected" bulk SOTs by studying their dependence on the introduction of a controlled vertical composition gradient and on temperature. We find that the bulk dampinglike SOT arises from an imbalanced internal spin current that is transversely polarized and independent of the magnetization orientation. The torque can be strong only in the presence of a vertical composition gradient, and the SOT efficiency per electric field is insensitive to temperature but changes sign upon reversal of the orientation of the composition gradient, in analogy to the behavior of the strain. From these characteristics we conclude that the imbalanced internal spin current originates from a bulk spin Hall effect and that the associated inversion asymmetry that allows for a non-zero net torque is most likely a strain non-uniformity induced by the composition gradient. The fieldlike SOT is a relatively small bulk effect compared to the dampinglike SOT. This work points to the possibility of developing low-power single-layer SOT devices by strain engineering.
In this paper we consider distributed adaptive stabilization for uncertain multivariable linear systems with a time-varying diagonal matrix gain. We show that uncertain multivariable linear systems are stabilizable by diagonal matrix high gains if the system matrix is an H-matrix with positive diagonal entries. Based on matrix measure and stability theory for diagonally dominant systems, we consider two classes of uncertain linear systems, and derive a threshold condition to ensure their exponential stability by a monotonically increasing diagonal gain matrix. When each individual gain function in the matrix gain is updated by state-dependent functions using only local state information, the boundedness and convergence of both system states and adaptive matrix gains are guaranteed. We apply the adaptive distributed stabilization approach to adaptive synchronization control for large-scale complex networks consisting of nonlinear node dynamics and time-varying coupling weights. A unified framework for adaptive synchronization is proposed that includes several general design approaches for adaptive coupling weights to guarantee network synchronization.
Buildings rarely perform as designed/simulated, and there are numerous tangible benefits if this gap is reconciled. A new scientific yet pragmatic methodology - called Enhanced Parameter Estimation (EPE) - is proposed that allows physically relevant parameter estimation rather than a blind force-fit to energy use data. It calibrates a rapidly and inexpensively created simulation model of the building in two stages: (a) building shell calibration, with the HVAC system replaced by an ideal system that meets the loads. EPE identifies a small number of high-level heat flows in the energy balance, calculates them with specifically tailored individual driving functions, introduces physically significant parameters to best accomplish energy balance, and estimates the parameters and their uncertainty bounds. Calibration is thus done with corrective heat flows, without any arbitrary tuning of input parameters; (b) HVAC system calibration, with the building shell replaced by a box containing only process loads; as many parameters as the data allow are estimated. Calibration accuracy is enhanced by machine learning of the residual errors. The EPE methodology is demonstrated on two cases: a synthetic building and an actual 75,000 sq. ft. building in Pennsylvania. A subsequent paper will provide further details and applications.
In this article, we improve our main results from \emph{Chow groups and $L$-derivatives of automorphic motives for unitary groups} in two directions: First, we allow ramified places in the CM extension $E/F$ at which we consider representations that are spherical with respect to a certain special maximal compact subgroup, by formulating and proving an analogue of the Kudla--Rapoport conjecture for exotic smooth Rapoport--Zink spaces. Second, we lift the restriction on the components at split places of the automorphic representation, by proving a more general vanishing result on certain cohomology of integral models of unitary Shimura varieties with Drinfeld level structures.
Cache side-channel attacks lead to severe security threats in settings where a CPU is shared across users, e.g., in the cloud. Existing attacks rely on sensing the micro-architectural state changes made by victims, and this assumption can be invalidated by combining spatial (e.g., Intel CAT) and temporal isolation (e.g., time protection). In this work, we advance the state of cache side-channel attacks by showing stateless cache side-channel attacks that cannot be defeated by both spatial and temporal isolation. This side channel exploits the timing difference resulting from interconnect congestion. Specifically, to complete cache transactions on Intel CPUs, cache lines travel across cores via the CPU mesh interconnect. Nonetheless, the mesh links are shared by all cores, and cache isolation does not segregate the traffic. An attacker can generate interconnect traffic to contend with the victim's on a mesh link, hoping that an extra delay can be measured. From the varying delays, the attacker can deduce the memory access pattern of a victim program and infer its sensitive data. Based on this idea, we implement Volcano and test it against the existing RSA implementations of the JDK. We found that the RSA private key used by a victim process can be partially recovered. In the end, we propose a few directions for defense and call for the attention of the security community.
Transport is called nonreciprocal when not only the sign, but also the absolute value of the current, depends on the polarity of the applied voltage. It requires simultaneously broken inversion and time-reversal symmetries, e.g., by the interplay of spin-orbit coupling and magnetic field. So far, observation of nonreciprocity was always tied to resistivity, and dissipationless nonreciprocal circuit elements were elusive. Here, we engineer fully superconducting nonreciprocal devices based on highly-transparent Josephson junctions fabricated on InAs quantum wells. We demonstrate supercurrent rectification far below the transition temperature. By measuring Josephson inductance, we can link nonreciprocal supercurrent to the asymmetry of the current-phase relation, and directly derive the supercurrent magnetochiral anisotropy coefficient for the first time. A semi-quantitative model well explains the main features of our experimental data. Nonreciprocal Josephson junctions have the potential to become for superconducting circuits what $pn$-junctions are for traditional electronics, opening the way to novel nondissipative circuit elements.
We propose a framework for reinforcement learning (RL) in fine time discretization and a learning algorithm in this framework. One of the main goals of RL is to provide a way for physical machines to learn optimal behavior instead of being programmed. However, the machines are usually controlled in fine time discretization. The most common RL methods apply independent random elements to each action, which is not suitable in that setting: it causes the controlled system to jerk and does not ensure sufficient exploration, since a single action is not long enough to create a significant experience that could be translated into policy improvement. In the RL framework introduced in this paper, we consider policies that produce actions based on states and on random elements that are autocorrelated over subsequent time instants. The RL algorithm introduced here approximately optimizes such a policy. The efficiency of this algorithm is verified against three other RL methods (PPO, SAC, ACER) in four simulated learning control problems (Ant, HalfCheetah, Hopper, and Walker2D) under diverse time discretizations. The algorithm introduced here outperforms the competitors in most cases considered.
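A minimal sketch of the contrast between independent per-step noise and noise autocorrelated across subsequent time instants is given below. The Ornstein-Uhlenbeck process used here is just one common way to produce temporally correlated exploration noise; the function names and parameters are ours and not those of the algorithm proposed in the paper.

```python
import numpy as np

def independent_noise(n_steps, dim, sigma=0.3, seed=0):
    """i.i.d. Gaussian noise: a new, uncorrelated perturbation at every step."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, sigma, size=(n_steps, dim))

def ou_noise(n_steps, dim, theta=0.15, sigma=0.3, dt=0.01, seed=0):
    """Ornstein-Uhlenbeck process: noise autocorrelated across subsequent steps."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    out = np.empty((n_steps, dim))
    for t in range(n_steps):
        x = x + theta * (0.0 - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=dim)
        out[t] = x
    return out

# With fine time discretization, i.i.d. noise makes consecutive actions jump
# around independently (the "jerk"), while OU noise drifts smoothly, so an
# exploratory deviation persists long enough to produce useful experience.
indep = independent_noise(1000, 1)
corr = ou_noise(1000, 1)
print("mean |a_t - a_{t-1}| (i.i.d.):", np.abs(np.diff(indep[:, 0])).mean())
print("mean |a_t - a_{t-1}| (OU):   ", np.abs(np.diff(corr[:, 0])).mean())
```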
Kiukas, Lahti and Ylinen asked the following general question: when is a positive operator measure projection valued? A version of this question formulated in terms of operator moments was posed in a recent paper of the present authors. Let $T$ be a selfadjoint operator and $F$ be a Borel semispectral measure on the real line with compact support. For which positive integers $p < q$ do the equalities $T^k =\int_{\mathbb{R}} x^k F(dx)$, $k=p, q$, imply that $F$ is a spectral measure? In the present paper, we completely solve the second problem. The answer is affirmative if $p$ is odd and $q$ is even, and negative otherwise. The case $(p,q)=(1,2)$, which is closely related to the intrinsic noise operator, was solved by several authors including Kruszy\'{n}ski and de Muynck as well as Kiukas, Lahti and Ylinen. The counterpart of the second problem concerning the multiplicativity of unital positive linear maps on $C^*$-algebras is also solved.
Thompson sampling is a popular algorithm for solving multi-armed bandit problems, and has been applied in a wide range of applications, from website design to portfolio optimization. In such applications, however, the number of choices (or arms) $N$ can be large, and the data needed to make adaptive decisions require expensive experimentation. One is then faced with the constraint of experimenting on only a small subset of $K \ll N$ arms within each time period, which poses a problem for traditional Thompson sampling. We propose a new Thompson Sampling under Experimental Constraints (TSEC) method, which addresses this so-called "arm budget constraint". TSEC makes use of a Bayesian interaction model with effect hierarchy priors, to model correlations between rewards on different arms. This fitted model is then integrated within Thompson sampling, to jointly identify a good subset of arms for experimentation and to allocate resources over these arms. We demonstrate the effectiveness of TSEC in two problems with arm budget constraints. The first is a simulated website optimization study, where TSEC shows noticeable improvements over industry benchmarks. The second is a portfolio optimization application on industry-based exchange-traded funds, where TSEC provides more consistent and greater wealth accumulation over standard investment strategies.
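As a rough, hedged illustration of the arm budget constraint, the sketch below runs a plain Beta-Bernoulli Thompson sampler that is only allowed to experiment on K of N arms per period; it does not include TSEC's Bayesian interaction model or effect-hierarchy priors, and all names and parameters are ours.

```python
import numpy as np

def budgeted_thompson(true_probs, K, n_periods=500, seed=0):
    """Beta-Bernoulli Thompson sampling restricted to K arms per period."""
    rng = np.random.default_rng(seed)
    N = len(true_probs)
    alpha = np.ones(N)   # Beta posterior: successes + 1
    beta = np.ones(N)    # Beta posterior: failures + 1
    total_reward = 0.0
    for _ in range(n_periods):
        samples = rng.beta(alpha, beta)          # one posterior draw per arm
        chosen = np.argsort(samples)[-K:]        # experiment on the K best draws
        rewards = rng.random(K) < np.asarray(true_probs)[chosen]
        alpha[chosen] += rewards
        beta[chosen] += 1 - rewards
        total_reward += rewards.sum()
    return total_reward, alpha / (alpha + beta)

reward, post_means = budgeted_thompson([0.05, 0.1, 0.3, 0.25, 0.02], K=2)
print(reward, post_means.round(2))
```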
We report on a theoretical model for image formation in full-field optical coherence tomography (FFOCT). Because the spatial incoherence of the illumination acts as a virtual confocal pinhole in FFOCT, its imaging performance is equivalent to a scanning time-gated coherent confocal microscope. In agreement with optical experiments enabling a precise control of aberrations, FFOCT is shown to have nearly twice the resolution of standard imaging at moderate aberration level. Beyond a rigorous study on the sensitivity of FFOCT with respect to aberrations, this theoretical model paves the way towards an optimized design of adaptive optics and computational tools for high-resolution and deep imaging of biological tissues.
It is a long-lasting task to understand heat conduction phenomena beyond Fourier. Besides the low-temperature experiments on extremely pure crystals, it has turned out recently that heterogeneous materials with macro-scale size can also show thermal effects that cannot be modelled by the Fourier equation. This is called over-diffusive propagation, different from low-temperature observations, and is found in numerous samples made from metal foam, rocks, and composites. The measured temperature history is indeed similar to what Fourier's law predicts but the usual evaluation cannot provide reliable thermal parameters. This paper is a report on our experiments on several rock types, each type having multiple samples with different thicknesses. We show that size-dependent thermal behaviour can occur for both Fourier and non-Fourier situations. Moreover, based on the present experimental data, we find an empirical relation between the Fourier and non-Fourier parameters, which may be helpful in later experiments to develop a more robust and reliable evaluation procedure.
We consider a two-player zero-sum deterministic differential game where each player uses both continuous and impulse controls over an infinite-time horizon. We assume that the impulses are of general form and that the costs depend on the state of the system. We use the dynamic programming principle and the viscosity solutions approach to show existence and uniqueness of a solution for the Hamilton-Jacobi-Bellman-Isaacs (HJBI) partial differential equations (PDEs) of the game. We prove under the Isaacs condition that the upper and lower value functions coincide.
This paper addresses the problem of combining Byzantine resilience with privacy in machine learning (ML). Specifically, we study if a distributed implementation of the renowned Stochastic Gradient Descent (SGD) learning algorithm is feasible with both differential privacy (DP) and $(\alpha,f)$-Byzantine resilience. To the best of our knowledge, this is the first work to tackle this problem from a theoretical point of view. A key finding of our analyses is that the classical approaches to these two (seemingly) orthogonal issues are incompatible. More precisely, we show that a direct composition of these techniques makes the guarantees of the resulting SGD algorithm depend unfavourably upon the number of parameters of the ML model, making the training of large models practically infeasible. We validate our theoretical results through numerical experiments on publicly-available datasets, showing that it is impractical to ensure DP and Byzantine resilience simultaneously.
Image captioning has conventionally relied on reference-based automatic evaluations, where machine captions are compared against captions written by humans. This is in contrast to the reference-free manner in which humans assess caption quality. In this paper, we report the surprising empirical finding that CLIP (Radford et al., 2021), a cross-modal model pretrained on 400M image+caption pairs from the web, can be used for robust automatic evaluation of image captioning without the need for references. Experiments spanning several corpora demonstrate that our new reference-free metric, CLIPScore, achieves the highest correlation with human judgements, outperforming existing reference-based metrics like CIDEr and SPICE. Information gain experiments demonstrate that CLIPScore, with its tight focus on image-text compatibility, is complementary to existing reference-based metrics that emphasize text-text similarities. Thus, we also present a reference-augmented version, RefCLIPScore, which achieves even higher correlation. Beyond literal description tasks, several case studies reveal domains where CLIPScore performs well (clip-art images, alt-text rating), but also where it is relatively weaker in comparison to reference-based metrics, e.g., news captions that require richer contextual knowledge.
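At inference time, a reference-free metric of this kind amounts to a rescaled, clipped cosine similarity between the image and candidate-caption embeddings. The sketch below is an illustration rather than the released implementation: the embeddings are random placeholders standing in for CLIP's encoders, and the rescaling constant w is set to 2.5 here.

```python
import numpy as np

def clipscore_like(image_embedding, caption_embedding, w=2.5):
    """Reference-free score: rescaled, clipped cosine similarity between the
    image and caption embeddings (assumed to come from a CLIP-style encoder)."""
    v = image_embedding / np.linalg.norm(image_embedding)
    c = caption_embedding / np.linalg.norm(caption_embedding)
    return w * max(float(v @ c), 0.0)

# Hypothetical embeddings standing in for CLIP's image and text encoders.
rng = np.random.default_rng(0)
img = rng.normal(size=512)
good_caption = img + 0.3 * rng.normal(size=512)   # close to the image
bad_caption = rng.normal(size=512)                # unrelated
print(clipscore_like(img, good_caption), clipscore_like(img, bad_caption))
```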
We establish a sharp bilinear estimate for the Klein--Gordon propagator in the spirit of recent work of Beltran--Vega. Our approach is inspired by work in the setting of the wave equation due to Bez, Jeavons and Ozawa. As a consequence of our main bilinear estimate, we deduce several sharp estimates of null-form type and recover some sharp Strichartz estimates found by Quilodr\'an and Jeavons.
Modelling the dynamics of urban venues is a challenging task as it is multifaceted in nature. Demand is a function of many complex and nonlinear features such as neighborhood composition, real-time events, and seasonality. Recent advances in Graph Convolutional Networks (GCNs) have had promising results as they build a graphical representation of a system and harness the potential of deep learning architectures. However, there has been limited work using GCNs in a temporal setting to model dynamic dependencies of the network. Further, within the context of urban environments, there has been no prior work using dynamic GCNs to support venue demand analysis and prediction. In this paper, we propose a novel deep learning framework which aims to better model the popularity and growth of urban venues. Using a longitudinal dataset from location technology platform Foursquare, we model individual venues and venue types across London and Paris. First, representing cities as connected networks of venues, we quantify their structure and observe a strong community structure in these retail networks, an observation that highlights the interplay of cooperative and competitive forces that emerge in local ecosystems of retail businesses. Next, we present our deep learning architecture which integrates both spatial and topological features into a temporal model which predicts the demand of a venue at the subsequent time-step. Our experiments demonstrate that our model can learn spatio-temporal trends of venue demand and consistently outperform baseline models. Relative to state-of-the-art deep learning models, our model reduces the RMSE by ~28% in London and ~13% in Paris. Our approach highlights the power of complex network measures and GCNs in building prediction models for urban environments. The model could have numerous applications within the retail sector to better model venue demand and growth.
Non-uniform message quantization techniques such as reconstruction-computation-quantization (RCQ) improve error-correction performance and decrease hardware complexity of low-density parity-check (LDPC) decoders that use a flooding schedule. Layered MinSum RCQ (L-msRCQ) enables message quantization to be utilized for layered decoders and irregular LDPC codes. We investigate field-programmable gate array (FPGA) implementations of L-msRCQ decoders. Three design methods for message quantization are presented, which we name the Lookup, Broadcast, and Dribble methods. The decoding performance and hardware complexity of these schemes are compared to a layered offset MinSum (OMS) decoder. Simulation results on a (16384, 8192) protograph-based raptor-like (PBRL) LDPC code show that a 4-bit L-msRCQ decoder using the Broadcast method can achieve a 0.03 dB improvement in error-correction performance while using 12% fewer registers than the OMS decoder. A Broadcast-based 3-bit L-msRCQ decoder uses 15% fewer lookup tables, 18% fewer registers, and 13% fewer routed nets than the OMS decoder, but results in a 0.09 dB loss in performance.
We examine a description of available cross section data for symmetric space star (SST) configurations in the neutron-deuteron (nd) and proton-deuteron (pd) breakup reaction using numerically exact solutions of the three-nucleon (3N) Faddeev equation based on two- and three-nucleon (semi)phenomenological and chiral forces. The predicted SST cross sections are very stable with respect to the underlying dynamics for incoming nucleon laboratory energies below $\approx 25$ MeV. We discuss possible origins of the surprising discrepancies between theory and data found in low-energy nd and pd SST breakup measurements.
SrTiO$_3$ exhibits superconductivity for carrier densities $10^{19}-10^{21}$ cm$^{-3}$. Across this range, the Fermi level traverses a number of vibrational modes in the system, making it ideal for studying dilute superconductivity. We use high-resolution planar-tunneling spectroscopy to probe chemically-doped SrTiO$_3$ across the superconducting dome. The over-doped superconducting boundary aligns, with surprising precision, to the Fermi energy crossing the Debye energy. Superconductivity emerges with decreasing density, maintaining throughout the Bardeen-Cooper-Schrieffer (BCS) gap to transition-temperature ratio, despite being in the anti-adiabatic regime. At lowest superconducting densities, the lone remaining adiabatic phonon van Hove singularity is the soft transverse-optic mode, associated with the ferroelectric instability. We suggest a scenario for pairing mediated by this mode in the presence of spin-orbit coupling, which naturally accounts for the superconducting dome and BCS ratio.
In today's context, deploying data-driven services like recommendation on edge devices instead of cloud servers becomes increasingly attractive due to privacy and network latency concerns. A common practice in building compact on-device recommender systems is to compress their embeddings which are normally the cause of excessive parameterization. However, despite the vast variety of devices and their associated memory constraints, existing memory-efficient recommender systems are only specialized for a fixed memory budget in every design and training life cycle, where a new model has to be retrained to obtain the optimal performance while adapting to a smaller/larger memory budget. In this paper, we present a novel lightweight recommendation paradigm that allows a well-trained recommender to be customized for arbitrary device-specific memory constraints without retraining. The core idea is to compose elastic embeddings for each item, where an elastic embedding is the concatenation of a set of embedding blocks that are carefully chosen by an automated search function. Correspondingly, we propose an innovative approach, namely recommendation with universally learned elastic embeddings (RULE). To ensure the expressiveness of all candidate embedding blocks, RULE enforces a diversity-driven regularization when learning different embedding blocks. Then, a performance estimator-based evolutionary search function is designed, allowing for efficient specialization of elastic embeddings under any memory constraint for on-device recommendation. Extensive experiments on real-world datasets reveal the superior performance of RULE under tight memory budgets.
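A toy sketch of the elastic-embedding idea is given below: an item's embedding is composed by concatenating a chosen subset of smaller embedding blocks, so one trained table can be shrunk to a device's memory budget. The block sizes, the naive selection rule, and all names are illustrative assumptions; RULE's diversity-driven regularization and evolutionary search are not shown.

```python
import numpy as np

class ElasticEmbedding:
    """Per-item embedding composed by concatenating a chosen subset of blocks."""
    def __init__(self, n_items, n_blocks=8, block_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        # One trained table of shape (n_items, n_blocks, block_dim).
        self.blocks = rng.normal(size=(n_items, n_blocks, block_dim))
        self.block_dim = block_dim

    def compose(self, item, selected_blocks):
        """Concatenate only the selected blocks for this item."""
        return self.blocks[item, selected_blocks].ravel()

    def select_for_budget(self, memory_budget_floats, n_items):
        """Toy selection rule: keep as many leading blocks as the budget allows,
        the same for every item (a real search would pick per-item subsets)."""
        per_item = memory_budget_floats // n_items
        k = max(1, int(per_item // self.block_dim))
        return list(range(min(k, self.blocks.shape[1])))

emb = ElasticEmbedding(n_items=1000)
sel = emb.select_for_budget(memory_budget_floats=16_000, n_items=1000)
print(len(sel), emb.compose(item=42, selected_blocks=sel).shape)
```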
There is emerging interest in recognizing previously unseen objects given very few training examples, a task known as few-shot object detection (FSOD). Recent research demonstrates that good feature embedding is the key to reaching favorable few-shot learning performance. We observe that object proposals with different Intersection-over-Union (IoU) scores are analogous to the intra-image augmentation used in contrastive approaches. We exploit this analogy and incorporate supervised contrastive learning to achieve more robust object representations in FSOD. We present Few-Shot object detection via Contrastive proposals Encoding (FSCE), a simple yet effective approach to learning contrastive-aware object proposal encodings that facilitate the classification of detected objects. We notice that the degradation of average precision (AP) for rare objects mainly comes from misclassifying novel instances as confusable classes. We ease the misclassification issue by promoting instance-level intra-class compactness and inter-class variance via our contrastive proposal encoding loss (CPE loss). Our design outperforms current state-of-the-art works in all shot settings and data splits, with up to +8.8% on the standard benchmark PASCAL VOC and +2.7% on the challenging COCO benchmark. Code is available at: https://github.com/MegviiDetection/FSCE
We propose a simple approach to implement a tunable, high power and narrow linewidth laser source based on a series of highly coherent tones from an electro-optic frequency comb and a set of 3 DFB slave lasers. We experimentally demonstrate approximately 1.25 THz (10 nm) of tuning within the C-Band centered at 192.9 THz (1555 nm). The output power is approximately 100 mW (20 dBm), with a side band suppression ratio greater than 55 dB, and a linewidth below 400 Hz across the full range of tunability. This approach is scalable and may be extended to cover a significantly broader optical spectral range.
Time-reparametrization invariance in general relativistic space-time does not allow us to single out a time in quantum mechanics in a mechanical way of measurement. Motivated by this problem, we examine this gauge invariance in the ground state of the quasi-stationary coarse-grained state of a long-range interacting closed system of identical or identified, macroscopic, and spatiotemporally inhomogeneous Bose-Einstein condensates in the thermodynamic and Newtonian limits. As a result, we find that it is a theoretical counterexample of this gauge invariance, except for proper-time translational invariance, at a coarse-grained level.
The $H^2$-regularity of variational solutions to a two-dimensional transmission problem with geometric constraint is investigated, in particular when part of the interface becomes part of the outer boundary of the domain due to the saturation of the geometric constraint. In such a situation, the domain includes some non-Lipschitz subdomains with cusp points, but it is shown that this feature does not lead to a regularity breakdown. Moreover, continuous dependence of the solutions with respect to the domain is established.
We study particle production and unitarity violation caused by a curved target space right after inflation. We use the inflaton field value instead of cosmic time as the time variable, and derive a semiclassical formula for the spectrum of produced particles. We then derive a simple condition for unitarity violation during preheating, which we confirm by our semiclassical method and numerical solution. This condition depends not only on the target space curvature but also on the height of the inflaton potential at the end of inflation. This condition tells us, for instance, that unitarity is violated in running kinetic inflation and Higgs inflation, while unitarity is conserved in $\alpha$-attractor inflation and Higgs-Palatini inflation.
A minimal absent word of a sequence x is a sequence y that is not a factor of x, but all of whose proper factors are factors of x. The set of minimal absent words uniquely defines the sequence itself. In recent times minimal absent words have been used to compare sequences; to do this, one can compare the sets of their minimal absent words. Chairungsee and Crochemore in [2] define a distance between pairs of sequences x and y that involves the symmetric difference of the sets of minimal absent words of x and y. Here, we consider a different distance, introduced in [1], based on a specific subset of this symmetric difference that, in our opinion, better captures the distinctive features of the considered sequences. We show the results of experiments in which the distance is tested on a dataset of genetic sequences from 11 living species, in order to compare the new distance with the ones existing in the literature.
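For short sequences, minimal absent words can be enumerated by brute force directly from the definition, which also makes it easy to experiment with distances built on the symmetric difference of MAW sets. The sketch below is a naive illustration on toy strings and is not the algorithm of [1] or [2]; the normalised symmetric-difference distance shown is a generic placeholder, not the specific subset-based distance studied in the paper.

```python
from itertools import product

def factors(x):
    """All substrings (factors) of x, including the empty string."""
    f = {""}
    for i in range(len(x)):
        for j in range(i + 1, len(x) + 1):
            f.add(x[i:j])
    return f

def minimal_absent_words(x, alphabet=None):
    """Words absent from x whose every proper factor occurs in x.
    It suffices to check the two maximal proper factors (prefix and suffix),
    and candidates never need to be longer than len(x) + 1."""
    alphabet = alphabet or sorted(set(x))
    fac = factors(x)
    maws = set()
    for length in range(1, len(x) + 2):
        for cand in map("".join, product(alphabet, repeat=length)):
            if cand not in fac and cand[1:] in fac and cand[:-1] in fac:
                maws.add(cand)
    return maws

def symmetric_difference_distance(x, y):
    """A crude MAW-based dissimilarity: size of the symmetric difference
    of the two MAW sets, normalised by the size of their union."""
    mx, my = minimal_absent_words(x), minimal_absent_words(y)
    return len(mx ^ my) / max(len(mx | my), 1)

print(minimal_absent_words("abaab"))
print(symmetric_difference_distance("abaab", "aabba"))
```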
The Semi-Digital Hadronic Calorimeter (SDHCAL) is proposed to equip the future ILC detector. A technological prototype of the SDHCAL developed within the CALICE collaboration has been extensively tested in test beams. We review here the prototype's performance in terms of hadronic shower reconstruction from the most recent analyses of test beam data.
We propose a worldsheet formula for tree-level correlation functions describing a scalar field with arbitrary mass and quartic self-interaction in de Sitter space, which is a simple model for inflationary cosmology. The correlation functions are located on the future boundary of the spacetime and are Fourier-transformed to momentum space. Our formula is supported on mass-deformed scattering equations involving conformal generators in momentum space and reduces to the CHY formula for $\phi^4$ amplitudes in the flat space limit. Using the global residue theorem, we verify that it reproduces the Witten diagram expansion at four and six points, and sketch the extension to $n$ points.
Generative adversarial networks (GANs) are one of the greatest advances in AI in recent years, with their ability to directly learn the probability distribution of data and then sample synthetic realistic data. Many applications have emerged, using GANs to solve classical problems in machine learning, such as data augmentation, class imbalance problems, and fair representation learning. In this paper, we analyze and highlight fairness concerns of GAN models. In this regard, we show empirically that GAN models may inherently prefer certain groups during the training process and are therefore not able to homogeneously generate data from different groups during the testing phase. Furthermore, we propose solutions to this issue by conditioning the GAN model on samples' group labels or by using an ensemble method (boosting) to allow the GAN model to leverage the distributed structure of data during the training phase and generate groups at equal rates during the testing phase.
The classification of time-series data is pivotal for streaming data and comes with many challenges. Although the amount of publicly available datasets increases rapidly, deep neural models are only exploited in a few areas; traditional methods are still used far more often. These methods are preferred in safety-critical, financial, or medical fields because of their interpretable results. However, their performance and scalability are limited, and finding suitable explanations for time-series classification tasks is challenging due to the concepts hidden in the numerical time-series data. Visualizing a complete time series results in cognitive overload and leads to confusion. Therefore, we believe that patch-wise processing of the data results in a more interpretable representation. We propose a novel hybrid approach that utilizes deep neural networks and traditional machine learning algorithms to introduce an interpretable and scalable time-series classification approach. Our method first performs a fine-grained classification for the patches, followed by a sample-level classification.
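The two-stage, patch-wise idea (fine-grained classification of fixed-length patches followed by a sample-level decision) can be sketched without any deep learning at all. The snippet below uses logistic regression on raw patches purely to illustrate the structure; the patch length, aggregation rule, and toy data are our assumptions, not the proposed hybrid method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def to_patches(series, patch_len):
    """Split a 1-D series into non-overlapping patches (drop the remainder)."""
    n = (len(series) // patch_len) * patch_len
    return series[:n].reshape(-1, patch_len)

def fit_patch_classifier(series_list, labels, patch_len=20):
    """Stage 1: a fine-grained classifier trained on patches, each patch
    inheriting the label of the series it came from."""
    X = np.vstack([to_patches(s, patch_len) for s in series_list])
    y = np.concatenate([[lab] * len(to_patches(s, patch_len))
                        for s, lab in zip(series_list, labels)])
    return LogisticRegression(max_iter=1000).fit(X, y)

def predict_sample(clf, series, patch_len=20):
    """Stage 2: sample-level label by averaging patch-level probabilities."""
    proba = clf.predict_proba(to_patches(series, patch_len))
    return clf.classes_[np.argmax(proba.mean(axis=0))]

# Toy data: class 1 series are shifted upwards relative to class 0.
rng = np.random.default_rng(0)
make = lambda mu: mu + 0.5 * rng.normal(size=200)
series = [make(0.0) for _ in range(20)] + [make(1.0) for _ in range(20)]
labels = [0] * 20 + [1] * 20
clf = fit_patch_classifier(series, labels)
print(predict_sample(clf, make(1.0)))  # expected: 1
```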
In growing active matter systems, a large collection of engineered or living autonomous units metabolize free energy and create order at different length scales as they proliferate and migrate collectively. One such example is bacterial biofilms, which are surface-attached aggregates of bacterial cells embedded in an extracellular matrix. However, how bacterial growth coordinates with cell-surface interactions to create distinctive, long-range order in biofilms remains elusive. Here we report a collective cell reorientation cascade in growing Vibrio cholerae biofilms, leading to a differentially ordered, spatiotemporally coupled core-rim structure reminiscent of a blooming aster. Cell verticalization in the core generates differential growth that drives radial alignment of the cells in the rim, while the radially aligned rim in turn generates compressive stresses that expand the verticalized core. Such self-patterning disappears in adhesion-less mutants but can be restored through opto-manipulation of growth. Agent-based simulations and two-phase active nematic modeling reveal the strong interdependence of the driving forces for the differential ordering. Our findings provide insight into the collective cell patterning in bacterial communities and engineering of phenotypes and functions of living active matter.
In vehicle-to-everything (V2X) communications, reliability is one of the most important performance metrics in safety-critical applications such as advanced driving, remote driving, and vehicle platooning. In this paper, the link reliability of unicast concurrent transmission in mode 1 (centralized mode) of 5G New Radio based V2X (NR-V2X) is analyzed. The closed-form expression of link reliability for concurrent unicast transmission is first derived for a highway scenario under a given interference distance distribution. On this basis, according to the macroscopic configuration of the system (including the communication range, message packet size, and number of lanes), a method to control the number of concurrent transmission nodes is proposed. The results indicate that the proposed method can maximize the system load while satisfying the link reliability requirements.
We present optical VLT/MUSE integral field spectroscopy data of the merging galaxy NGC 1487. We use fitting techniques to study the ionized gas emission of this merger and its main morphological and kinematical properties. We measured flat and sometimes inverted oxygen abundance gradients in the subsystems composing NGC 1487, explained by metal mixing processes common in merging galaxies. We also measured widespread star-forming bursts, indicating that photoionisation by stars is the primary ionization source of the galaxy. The kinematic map revealed a rotating pattern in the gas in the northern tail of the system, suggesting that the galaxy may be in the process of rebuilding a disc. The gas located in the central region has larger velocity dispersion ($\sigma\approx 50$ km s$^{-1}$) than the remaining regions, indicating kinematic heating, possibly owing to the ongoing interaction. Similar trends were, however, not observed in the stellar velocity-dispersion map, indicating that the galaxy has not yet achieved equilibrium, and the nebular and stellar components are still kinematically decoupled. Based on all our measurements and findings, and especially on the mass estimates, metallicity gradients and velocity fields of the system, we propose that NGC 1487 is the result of an ongoing merger event involving smallish dwarf galaxies within a group, in a pre-merger phase, resulting in a relic with mass and physical parameters similar to a dwarf galaxy. Thus, we may be witnessing the formation of a dwarf galaxy by merging of smaller clumps at z=0.
Current dense symmetric eigenvalue (EIG) and singular value decomposition (SVD) implementations may suffer from the lack of concurrency during the tridiagonal and bidiagonal reductions, respectively. This performance bottleneck is typical for the two-sided transformations due to the Level-2 BLAS memory-bound calls. Therefore, the current state-of-the-art EIG and SVD implementations may achieve only a small fraction of the system's sustained peak performance. The QR-based Dynamically Weighted Halley (QDWH) algorithm may be used as a pre-processing step toward the EIG and SVD solvers, while mitigating the aforementioned bottleneck. QDWH-EIG and QDWH-SVD expose more parallelism, while relying on compute-bound matrix operations. Both run closer to the sustained peak performance of the system, but at the expense of performing more FLOPS than the standard EIG and SVD algorithms. In this paper, we introduce a new QDWH-based solver for computing the partial spectrum for EIG (QDWHpartial-EIG) and SVD (QDWHpartial-SVD) problems. By optimizing the rational function underlying the algorithms only in the desired part of the spectrum, QDWHpartial-EIG and QDWHpartial-SVD algorithms efficiently compute a fraction (say 1-20%) of the corresponding spectrum. We develop high-performance implementations of QDWHpartial-EIG and QDWHpartial-SVD on distributed-memory systems and demonstrate their numerical robustness. Experimental results using up to 36K MPI processes show performance speedups for QDWHpartial-SVD up to 6X and 2X against PDGESVD from ScaLAPACK and KSVD, respectively. QDWHpartial-EIG outperforms PDSYEVD from ScaLAPACK up to 3.5X but remains slower compared to ELPA. QDWHpartial-EIG achieves, however, a better occupancy of the underlying hardware by extracting higher sustained peak performance than ELPA, which is critical moving forward with accelerator-based supercomputers.
We investigate attosecond time delays in the emission of photoelectrons using a hierarchy of models of the $CO_2$ molecule including the strong field approximation, Coulomb-scattering, short-range parts of the molecular potential, Hartree and Hartree-Fock descriptions. In addition, we present an {\it ab initio} calculation based on quantum-chemical structure in combination with strong-field techniques, which fully includes multi-electron exchange and correlation. Each of these model constituents is found to modify delays on the scale of 10 as or more, with exchange and correlation having the most pronounced effect.
We study the notion of indistinguishability obfuscation for null quantum circuits (quantum null-iO). We present a construction assuming: - The quantum hardness of learning with errors (LWE). - Post-quantum indistinguishability obfuscation for classical circuits. - A notion of ''dual-mode'' classical verification of quantum computation (CVQC). We give evidence that our notion of dual-mode CVQC exists by proposing a scheme that is secure assuming LWE in the quantum random oracle model (QROM). Then we show how quantum null-iO enables a series of new cryptographic primitives that, prior to our work, were unknown to exist even making heuristic assumptions. Among others, we obtain the first witness encryption scheme for QMA, the first publicly verifiable non-interactive zero-knowledge (NIZK) scheme for QMA, and the first attribute-based encryption (ABE) scheme for BQP.
One salient feature of cooperative formation tracking is its distributed nature that relies on localized control and information sharing over a sparse communication network. However, such a distributed control architecture can be prone to malicious attacks and unreliable communication that deteriorate the formation tracking performance or even destabilize the whole multi-agent system. This paper studies a safe and reliable time-varying output formation tracking problem of linear multi-agent systems, where an attacker adversely injects arbitrary unbounded time-varying signals (false data injection (FDI) attacks), while an interruption of communication channels between the agents is caused by an unreliable network. Both characteristics improve the practical relevance of the problem to be addressed and pose technical challenges to the distributed algorithm design and stability analysis. To mitigate the adverse effects, a novel resilient distributed control architecture is established to guarantee time-varying output formation tracking exponentially. The key features of the proposed framework are threefold: 1) an observer-based identifier is integrated to compensate for adverse effects; 2) a reliable distributed algorithm is proposed to deal with time-varying topologies caused by unreliable communication; and 3) in contrast to existing remedies that treat attacks as bounded disturbances/faults with known knowledge, we propose resilience strategies to handle unknown and unbounded attacks with exponential convergence of the dynamic formation tracking errors, whereas most existing results only achieve uniform ultimate boundedness (UUB). Numerical simulations are given to show the effectiveness of the proposed design.
Linear embedding transformation has been shown to be effective for zero-shot cross-lingual transfer tasks and achieves surprisingly promising results. However, cross-lingual embedding space mapping is usually studied in static word-level embeddings, where a space transformation is derived by aligning representations of translation pairs referred from dictionaries. We move beyond this line of work and investigate a contextual embedding alignment approach which is sense-level and dictionary-free. To enhance the quality of the mapping, we also provide an in-depth analysis of the properties of contextual embeddings, i.e., the anisotropy problem and its solution. Experiments on zero-shot dependency parsing through the concept-shared space built by our embedding transformation substantially outperform state-of-the-art methods using multilingual embeddings.
This paper provides an $H_2$ optimal scheme for reducing diffusively coupled second-order systems evolving over undirected networks. The aim is to find a reduced-order model that not only approximates the input-output mapping of the original system but also preserves crucial structures, such as the second-order form, asymptotic stability, and diffusive couplings. To this end, an $H_2$ optimal approach based on a convex relaxation is implemented to reduce the dimension, yielding a lower-order asymptotically stable approximation of the original second-order network system. Then, a novel graph reconstruction approach is employed to convert the obtained model to a reduced system that is interpretable as an undirected diffusively coupled network. Finally, the effectiveness of the proposed method is illustrated via a large-scale networked mass-spring-damper system.
The radio, optical, and $\gamma$-ray light curves of the blazar S5 1803+784, from the beginning of the {\it Fermi} Large Area Telescope (LAT) mission in August 2008 until December 2018, are presented. The aim of this work is to look for correlations among different wavelengths useful for further theoretical studies. We analyzed all the data collected by {\it Fermi} LAT for this source, taking into account the presence of nearby sources, and we collected optical data from our own observations and public archive data to build the most complete optical and $\gamma$-ray light curve possible. Several $\gamma$-ray flares ($\mathrm{F>2.3~10^{-7} ph(E>0.1 GeV)~cm^{-2}~s^{-1}}$) with optical coverage were detected, all but one with corresponding optical enhancement; we also found two optical flares without a $\gamma$-ray counterpart. We obtained two {\it Swift} Target of Opportunity observations during the strong flare of 2015. Radio observations performed with VLBA and EVN through our proposals in the years 2016-2020 were analyzed to search for morphological changes after the major flares. The optical/$\gamma$-ray flux ratio at the flare peak varied for each flare. Very minor optical V-I color changes were detected during the flares. The X-ray spectrum was well fitted by a power law with photon spectral index $\alpha$=1.5, nearly independent of the flux level: no clear correlation with the optical or the $\gamma$-ray emission was found. The $\gamma$-ray spectral shape was well fitted by a power law with average photon index $\alpha$= 2.2. These findings support an Inverse Compton origin for the high-energy emission of the source, nearly co-spatial with the optically emitting region. The radio maps showed two new components originating from the core and moving outwards, with ejection epochs compatible with the dates of the two largest $\gamma$-ray flares.
Deep convolutional networks have attracted great attention in image restoration and enhancement. Generally, restoration quality has been improved by building more and more convolutional blocks. However, these methods mostly learn a specific model to handle all images and ignore difficulty diversity. In other words, an area in the image with high frequency tends to lose more information during compression, while an area with low frequency tends to lose less. In this article, we address the efficiency issue in image SR by incorporating a patch-wise rolling network (PRN) to content-adaptively recover images according to difficulty levels. In contrast to existing studies that ignore difficulty diversity, we adopt different stages of a neural network to perform image restoration. In addition, we propose a rolling strategy that utilizes the parameters of each stage more flexibly. Extensive experiments demonstrate that our model not only shows a significant acceleration but also maintains state-of-the-art performance.
We give an efficient algorithm for learning a binary function in a given class C of bounded VC dimension, with training data distributed according to P and test data according to Q, where P and Q may be arbitrary distributions over X. This is the generic form of what is called covariate shift, which is impossible in general as arbitrary P and Q may not even overlap. However, recently guarantees were given in a model called PQ-learning (Goldwasser et al., 2020) where the learner has: (a) access to unlabeled test examples from Q (in addition to labeled samples from P, i.e., semi-supervised learning); and (b) the option to reject any example and abstain from classifying it (i.e., selective classification). The algorithm of Goldwasser et al. (2020) requires an (agnostic) noise tolerant learner for C. The present work gives a polynomial-time PQ-learning algorithm that uses an oracle to a "reliable" learner for C, where reliable learning (Kalai et al., 2012) is a model of learning with one-sided noise. Furthermore, our reduction is optimal in the sense that we show the equivalence of reliable and PQ learning.
Drilling and Extraction Automated System (DREAMS) is a fully automated prototype drilling rig that can drill, extract water, and assess subsurface density profiles from simulated lunar and Martian subsurface ice. The DREAMS system is developed by the Texas A&M drilling automation team and composed of four main components: 1- tensegrity rig structure, 2- drilling system, 3- water extracting and heating system, and 4- electronic hardware, controls, and machine algorithm. The vertical and rotational movements are controlled by using an Acme rod, stepper, and rotary motor. DREAMS is a unique system and different from other systems presented before in the NASA RASC-AL competition because: 1- It uses the tensegrity structure concept to decrease the system weight, improve mobility, and ease installation in space. 2- It cuts rock layers by using a short bit length connected to drill pipes. This drilling methodology is expected to drill hundreds and thousands of meters below the lunar and Martian surfaces without any anticipated problems (not only 1 m). 3- Drilling, heating, and extraction systems are integrated into one system that can work simultaneously or individually to save time and cost.
We consider the problem of forecasting multiple values of the future of a vector time series, using some past values. This problem, and related ones such as one-step-ahead prediction, have a very long history, and there are a number of well-known methods for it, including vector auto-regressive models, state-space methods, multi-task regression, and others. Our focus is on low rank forecasters, which break forecasting up into two steps: estimating a vector that can be interpreted as a latent state, given the past, and then estimating the future values of the time series, given the latent state estimate. We introduce the concept of forecast consistency, which means that the estimates of the same value made at different times are consistent. We formulate the forecasting problem in general form, and focus on linear forecasters, for which we propose a formulation that can be solved via convex optimization. We describe a number of extensions and variations, including nonlinear forecasters, data weighting, the inclusion of auxiliary data, and additional objective terms. We illustrate our methods with several examples.
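A minimal instance of a low rank linear forecaster can be obtained by regressing a flattened window of future values on a flattened window of past values and truncating the coefficient matrix to rank r with an SVD, which factors the forecaster into a 'past to latent state' map followed by a 'latent state to future' map. The sketch below is our own illustration under these assumptions and omits the convex-optimization formulation, weighting, auxiliary data, and consistency terms discussed in the paper.

```python
import numpy as np

def fit_low_rank_forecaster(x, past, future, r):
    """x: (T, n) vector time series. Learns a rank-r linear map from a
    flattened window of `past` steps to the next `future` steps."""
    T, n = x.shape
    P = np.array([x[t - past:t].ravel() for t in range(past, T - future + 1)])
    F = np.array([x[t:t + future].ravel() for t in range(past, T - future + 1)])
    A, *_ = np.linalg.lstsq(P, F, rcond=None)   # full least-squares map
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    encoder = U[:, :r]                  # past window -> r-dimensional latent state
    decoder = s[:r, None] * Vt[:r]      # latent state -> future window
    return encoder, decoder

def forecast(encoder, decoder, past_window, future, n):
    z = past_window.ravel() @ encoder   # latent state estimate from the past
    return (z @ decoder).reshape(future, n)

# Toy example: a 2-dimensional noisy oscillation.
rng = np.random.default_rng(0)
t = np.arange(400)
x = np.column_stack([np.sin(0.1 * t), np.cos(0.1 * t)]) + 0.05 * rng.normal(size=(400, 2))
enc, dec = fit_low_rank_forecaster(x, past=20, future=5, r=4)
print(forecast(enc, dec, x[-20:], future=5, n=2).round(2))
```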
Deep image inpainting aims to restore damaged or missing regions in an image with realistic contents. While having a wide range of applications such as object removal and image recovery, deep inpainting techniques also carry the risk of being manipulated for image forgery. A promising countermeasure against such forgeries is deep inpainting detection, which aims to locate the inpainted regions in an image. In this paper, we make the first attempt towards universal detection of deep inpainting, where the detection network can generalize well when detecting different deep inpainting methods. To this end, we first propose a novel data generation approach to generate a universal training dataset, which imitates the noise discrepancies that exist between real and inpainted image contents to train universal detectors. We then design a Noise-Image Cross-fusion Network (NIX-Net) to effectively exploit the discriminative information contained in both the images and their noise patterns. We empirically show, on multiple benchmark datasets, that our approach outperforms existing detection methods by a large margin and generalizes well to unseen deep inpainting techniques. Our universal training dataset can also significantly boost the generalizability of existing detection methods.
The recently established formalism of a worldline quantum field theory, which describes the classical scattering of massive bodies in Einstein gravity, is generalized up to quadratic order in spin -- for a pair of Kerr black holes revealing a hidden ${\mathcal N}=2$ supersymmetry. The far-field time-domain waveform of the gravitational waves produced in such a spinning encounter is computed at leading order in the post-Minkowskian (weak field, but generic velocity) expansion, and exhibits this supersymmetry. From the waveform we extract the leading-order total radiated angular momentum in a generic reference frame, and the total radiated energy in the center-of-mass frame to leading order in a low-velocity approximation.
We develop a uniform coalgebraic approach to Thomason and J\'{o}nsson-Tarski type dualities for various classes of neighborhood frames and neighborhood algebras. In the first part of the paper we construct an endofunctor on the category of complete and atomic Boolean algebras that is dual to the double powerset functor on $\mathsf{Set}$. This allows us to show that Thomason duality for neighborhood frames can be viewed as an algebra-coalgebra duality. We generalize this approach to any class of algebras for an endofunctor presented by one-step axioms in the language of infinitary modal logic. As a consequence, we obtain a uniform approach to dualities for various classes of neighborhood frames, including monotone neighborhood frames, pretopological spaces, and topological spaces. In the second part of the paper we develop a coalgebraic approach to J\'{o}nsson-Tarski duality for neighborhood algebras and descriptive neighborhood frames. We introduce an analogue of the Vietoris endofunctor on the category of Stone spaces and show that descriptive neighborhood frames are isomorphic to coalgebras for this endofunctor. This allows us to obtain a coalgebraic proof of the duality between descriptive neighborhood frames and neighborhood algebras. Using one-step axioms in the language of finitary modal logic, we restrict this duality to other classes of neighborhood algebras studied in the literature, including monotone modal algebras and contingency algebras. We conclude the paper by connecting the two types of dualities via canonical extensions, and discuss when these extensions are functorial.
The main topic of this paper is a brief overview of the field of Artificial Intelligence. The core of this paper is a practical implementation of an algorithm for object detection and tracking. The ability to detect and track fast-moving objects is crucial for various applications of Artificial Intelligence like autonomous driving, ball tracking in sports, robotics or object counting. As part of this paper the Fully Convolutional Neural Network "CueNet" was developed. It detects and tracks the cueball on a labyrinth game robustly and reliably. While CueNet V1 has a single input image, the approach with CueNet V2 was to take three consecutive 240 x 180-pixel images as an input and transform them into a probability heatmap for the cueball's location. The network was tested with a separate video that contained all sorts of distractions to test its robustness. When confronted with our testing data, CueNet V1 predicted the correct cueball location in 99.6% of all frames, while CueNet V2 had 99.8% accuracy.
Novelty detection is the task of recognizing samples that do not belong to the distribution of the target class. During training, the novelty class is absent, preventing the use of traditional classification approaches. Deep autoencoders have been widely used as a base of many unsupervised novelty detection methods. In particular, context autoencoders have been successful in the novelty detection task because of the more effective representations they learn by reconstructing original images from randomly masked images. However, a significant drawback of context autoencoders is that random masking fails to consistently cover important structures of the input image, leading to suboptimal representations - especially for the novelty detection task. In this paper, to optimize input masking, we have designed a framework consisting of two competing networks, a Mask Module and a Reconstructor. The Mask Module is a convolutional autoencoder that learns to generate optimal masks that cover the most important parts of images. Alternatively, the Reconstructor is a convolutional encoder-decoder that aims to reconstruct unperturbed images from masked images. The networks are trained in an adversarial manner in which the Mask Module generates masks that are applied to images given to the Reconstructor. In this way, the Mask Module seeks to maximize the reconstruction error that the Reconstructor is minimizing. When applied to novelty detection, the proposed approach learns semantically richer representations compared to context autoencoders and enhances novelty detection at test time through more optimal masking. Novelty detection experiments on the MNIST and CIFAR-10 image datasets demonstrate the proposed approach's superiority over cutting-edge methods. In a further experiment on the UCSD video dataset for novelty detection, the proposed approach achieves state-of-the-art results.
We consider a meal delivery service fulfilling dynamic customer requests given a set of couriers over the course of a day. A courier's duty is to pick up an order from a restaurant and deliver it to a customer. We model this service as a Markov decision process and use deep reinforcement learning as the solution approach. We experiment with the resulting policies on synthetic and real-world datasets and compare those with the baseline policies. We also examine the courier utilization for different numbers of couriers. In our analysis, we specifically focus on the impact of the limited available resources in the meal delivery problem. Furthermore, we investigate the effect of intelligent order rejection and re-positioning of the couriers. Our numerical experiments show that, by incorporating the geographical locations of the restaurants, customers, and the depot, our model significantly improves the overall service quality as characterized by the expected total reward and the delivery times. Our results present valuable insights on both the courier assignment process and the optimal number of couriers for different order frequencies on a given day. The proposed model also shows a robust performance under a variety of scenarios for real-world implementation.
Let $(\pi_{\mathbf{z}},V_{\mathbf{z}})$ be an unramified principal series representation of a reductive group over a nonarchimedean local field, parametrized by an element $\mathbf{z}$ of the maximal torus in the Langlands dual group. If $v$ is an element of the Weyl group $W$, then the standard intertwining integral $\mathcal{A}_v$ maps $V_{\mathbf{z}}$ to $V_{v\mathbf{z}}$. Letting $\psi^{\mathbf{z}}_w$ with $w\in W$ be a suitable basis of the Iwahori fixed vectors in $V_{\mathbf{z}}$, and $\widehat\psi^{\mathbf{z}}_w$ a basis of the contragredient representation, we define $\sigma(u,v,w)$ (for $u,v,w\in W$) to be $\langle \mathcal{A}_v\psi_u^{\mathbf{z}},\widehat\psi^{v\mathbf{z}}_w\rangle$. This is an interesting function and we initiate its study. We show that given $u$ and $w$, there is a minimal $v$ such that $\sigma(u,v,w)\neq 0$. Denoting this $v$ as $v_\hbox{min}=v_\hbox{min}(u,w)$, we will prove that $\sigma(u,v_\hbox{min},w)$ is a polynomial in the cardinality $q$ of the residue field. Indeed if $v>v_\hbox{min}$, then $\sigma(u,v,w)$ is a rational function of $\mathbf{z}$ and $q$, whose denominator we describe. But if $v=v_\hbox{min}$, the dependence on $\mathbf{z}$ disappears. We will express $\sigma(u,v_\hbox{min},w)$ as the Poincar\'e polynomial of a Bruhat interval. The proof leads to fairly intricate considerations of the Bruhat order. Thus our results require us to prove some facts that may be of independent interest, relating the Bruhat order $\leqslant$ and the weak Bruhat order $\leqslant_R$. For example we will prove (for finite Coxeter groups) the following "mixed meet" property. If $u, w$ are elements of $W$, then there exists a unique element $m \in W$ that is maximal with respect to the condition that $m \leqslant_R u$ and $m \leqslant w$. Thus if $z \leqslant_R u$ and $z \leqslant w$, then $z \leqslant m$. The value $v_\hbox{min}$ is $m^{-1}u$.
Performative distribution shift captures the setting where the choice of which ML model is deployed changes the data distribution. For example, a bank which uses the number of open credit lines to determine a customer's risk of default on a loan may induce customers to open more credit lines in order to improve their chances of being approved. Because of the interactions between the model and data distribution, finding the optimal model parameters is challenging. Works in this area have focused on finding stable points, which can be far from optimal. Here we introduce performative gradient descent (PerfGD), the first algorithm that provably converges to the performatively optimal point. PerfGD explicitly captures how changes in the model affect the data distribution and is simple to use. We support our findings with theory and experiments.
We investigate robust linear consensus over networks under capacity-constrained communication. The capacity of each edge is encoded as an upper bound on the number of state variables that can be communicated instantaneously. When the edge capacities are small compared to the dimensionality of the state vectors, it is not possible to instantaneously communicate full state information over every edge. We investigate how robust consensus (small steady state variance of the states) can be achieved within a linear time-invariant setting by optimally assigning edges to state-dimensions. We show that if a finite steady state variance of the states can be achieved, then both the minimum cut capacity and the total capacity of the network should be sufficiently large. Optimal and approximate solutions are provided for some special classes of graphs. We also consider the related problem of optimally allocating additional capacity on a feasible initial solution. We show that this problem corresponds to the maximization of a submodular function subject to a matroid constraint, which can be approximated via a greedy algorithm.
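For the capacity-augmentation step, the textbook greedy routine for maximizing a monotone submodular function subject to a matroid constraint looks like the sketch below; the coverage objective and the rank-2 uniform matroid used here are placeholders standing in for the robustness gain and the capacity matroid of the actual problem.

```python
def greedy_matroid(ground_set, gain, is_independent):
    """Greedy maximization of a monotone submodular `gain` subject to a
    matroid constraint given by the `is_independent` oracle.
    Returns an independent set; achieves a 1/2 approximation in this setting."""
    selected = set()
    candidates = set(ground_set)
    while candidates:
        # Pick the feasible element with the largest positive marginal gain.
        best, best_gain = None, 0.0
        for e in candidates:
            if not is_independent(selected | {e}):
                continue
            marginal = gain(selected | {e}) - gain(selected)
            if marginal > best_gain:
                best, best_gain = e, marginal
        if best is None:
            break
        selected.add(best)
        candidates.remove(best)
    return selected

# Toy instance: coverage objective (submodular) with a cardinality matroid.
sets = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1, 4, 5}}
gain = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
is_independent = lambda S: len(S) <= 2          # uniform matroid of rank 2
print(greedy_matroid(sets.keys(), gain, is_independent))
```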
We propose an exact model of anyon ground states including higher Landau levels, and use it to obtain fractionally quantized Hall states at filling fractions $\nu=p/(p(m-1)+1)$ with $m$ odd, from integer Hall states at $\nu=p$ through adiabatic localization of magnetic flux. For appropriately chosen two-body potential interactions, the energy gap remains intact during the process. The construction hence establishes the existence of incompressible states at these fillings.
The velocity of dislocations is derived analytically to incorporate and predict the intriguing effects induced by the preferential solute segregation and Cottrell atmospheres in both two-dimensional and three-dimensional binary systems of various crystalline symmetries. The corresponding mesoscopic description of defect dynamics is constructed through the amplitude formulation of the phase-field crystal model which has been shown to accurately capture elasticity and plasticity in a wide variety of systems. Modifications of the Peach-Koehler force as a result of solute concentration variations and compositional stresses are presented, leading to interesting new predictions of defect motion due to effects of Cottrell atmospheres. These include the deflection of dislocation glide paths, the variation of climb speed and direction, and the change or prevention of defect annihilation, all of which play an important role in determining the fundamental behaviors of complex defect network and dynamics. The analytic results are verified by numerical simulations.
We investigate the tail asymptotics of the response time distribution for the cancel-on-start (c.o.s.) and cancel-on-completion (c.o.c.) variants of redundancy-$d$ scheduling and the fork-join model with heavy-tailed job sizes. We present bounds, which only differ in the pre-factor, for the tail probability of the response time in the case of the first-come first-served (FCFS) discipline. For the c.o.s. variant we restrict ourselves to redundancy-$d$ scheduling, which is a special case of the fork-join model. In particular, for regularly varying job sizes with tail index $-\nu$ the tail index of the response time for the c.o.s. variant of redundancy-$d$ equals $-\min\{d_{\mathrm{cap}}(\nu-1),\nu\}$, where $d_{\mathrm{cap}} = \min\{d,N-k\}$, $N$ is the number of servers and $k$ is the integer part of the load. This result indicates that for $d_{\mathrm{cap}} < \frac{\nu}{\nu-1}$ the waiting time component is dominant, whereas for $d_{\mathrm{cap}} > \frac{\nu}{\nu-1}$ the job size component is dominant. Thus, having $d = \lceil \min\{\frac{\nu}{\nu-1},N-k\} \rceil$ replicas is sufficient to achieve the optimal asymptotic tail behavior of the response time. For the c.o.c. variant of the fork-join($n_{\mathrm{F}},n_{\mathrm{J}}$) model the tail index of the response time, under some assumptions on the load, equals $1-\nu$ and $1-(n_{\mathrm{F}}+1-n_{\mathrm{J}})\nu$, for identical and i.i.d. replicas, respectively; here the waiting time component is always dominant.