Dataset schema (one record per paper: a title, an abstract, and six binary subject labels):
title: string (length 7 to 239 characters)
abstract: string (length 7 to 2.76k characters)
cs: int64 (0 or 1)
phy: int64 (0 or 1)
math: int64 (0 or 1)
stat: int64 (0 or 1)
quantitative biology: int64 (0 or 1)
quantitative finance: int64 (0 or 1)
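The schema above pairs each title and abstract with six binary subject labels. Below is a minimal, hypothetical sketch of loading and sanity-checking records with this schema using pandas; the filename arxiv_multilabel.csv and the exact column spellings are assumptions, not part of the original dump.

```python
# Minimal loading/sanity-check sketch (assumed CSV export of this dataset).
import pandas as pd

LABELS = ["cs", "phy", "math", "stat", "quantitative biology", "quantitative finance"]

df = pd.read_csv("arxiv_multilabel.csv")        # columns: title, abstract, plus 6 binary labels
df[LABELS] = df[LABELS].astype("int64")         # match the declared int64 dtype

print(df[LABELS].sum())                         # per-class frequency
print(df["title"].str.len().agg(["min", "max"]))     # compare with declared length range
print(df["abstract"].str.len().agg(["min", "max"]))
```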
Labeled Memory Networks for Online Model Adaptation
Augmenting a neural network with memory that can grow without growing the number of trained parameters is a recent powerful concept with many exciting applications. We propose a design of memory augmented neural networks (MANNs) called Labeled Memory Networks (LMNs) suited for tasks requiring online adaptation in classification models. LMNs organize the memory with classes as the primary key. The memory acts as a second boosted stage following a regular neural network, thereby allowing the memory and the primary network to play complementary roles. Unlike existing MANNs that write to memory for every instance and use LRU based memory replacement, LMNs write only for instances with non-zero loss and use label-based memory replacement. We demonstrate significant accuracy gains on various tasks including word-modelling and few-shot learning. In this paper, we establish their potential in adapting a batch-trained neural network online to domain-relevant labeled data at deployment time. We show that LMNs are better than other MANNs designed for meta-learning. We also find them to be more accurate and faster than state-of-the-art methods of retuning model parameters for adapting to domain-specific labeled data.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
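The record above describes a memory keyed by class labels, written only on non-zero loss and with label-based replacement. The sketch below is a toy illustration of that write/replacement policy (per-class FIFO eviction and nearest-neighbour readout); it is not the authors' implementation, and the capacity, similarity measure and eviction rule are assumptions.

```python
# Toy label-keyed memory: write only on non-zero loss, evict within the same label.
from collections import defaultdict, deque
import numpy as np

class LabeledMemory:
    def __init__(self, per_class_capacity=50):
        # one bounded slot list per class label; deque drops the oldest entry of that label
        self.slots = defaultdict(lambda: deque(maxlen=per_class_capacity))

    def maybe_write(self, embedding, label, loss):
        if loss > 0.0:                                   # write only for instances with non-zero loss
            self.slots[label].append(np.asarray(embedding, dtype=float))

    def predict(self, embedding):
        # return the label of the most similar stored embedding (cosine similarity)
        q = np.asarray(embedding, dtype=float)
        best_label, best_sim = None, -np.inf
        for label, entries in self.slots.items():
            for m in entries:
                sim = q @ m / (np.linalg.norm(q) * np.linalg.norm(m) + 1e-12)
                if sim > best_sim:
                    best_label, best_sim = label, sim
        return best_label

mem = LabeledMemory(per_class_capacity=2)
mem.maybe_write([1.0, 0.0], label="cat", loss=0.7)
mem.maybe_write([0.0, 1.0], label="dog", loss=0.0)       # not written: zero loss
print(mem.predict([0.9, 0.1]))                           # -> "cat"
```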
Perturbative approach to weakly driven many-particle systems in the presence of approximate conservation laws
We develop a Liouville perturbation theory for weakly driven and weakly open quantum systems in situations when the unperturbed system has a number of conservation laws. If the perturbation violates the conservation laws, it drives the system to a new steady state which can be approximately but efficiently described by a (generalized) Gibbs ensemble characterized by one Lagrange parameter for each conservation law. The values of these have to be determined from rate equations for conserved quantities. Remarkably, even weak perturbations can lead to large responses of conserved quantities. We present a perturbative expansion of the steady state density matrix; first we give the condition that fixes the zeroth order expression (Lagrange parameters) and then determine the higher order corrections via projections of the Liouvillian. The formalism can be applied to a wide range of problems including two-temperature models for electron-phonon systems, Bose condensates of excitons or photons or weakly perturbed integrable models. We test our formalism by studying interacting fermions coupled to non-thermal reservoirs, approximately described by a Boltzmann equation.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Smart Mining for Deep Metric Learning
To solve deep metric learning problems and produce feature embeddings, current methodologies will commonly use a triplet model to minimise the relative distance between samples from the same class and maximise the relative distance between samples from different classes. Though successful, the training convergence of this triplet model can be compromised by the fact that the vast majority of the training samples will produce gradients with magnitudes that are close to zero. This issue has motivated the development of methods that explore the global structure of the embedding and other methods that explore hard negative/positive mining. The effectiveness of such mining methods is often associated with intractable computational requirements. In this paper, we propose a novel deep metric learning method that combines the triplet model and the global structure of the embedding space. We rely on a smart mining procedure that produces effective training samples for a low computational cost. In addition, we propose an adaptive controller that automatically adjusts the smart mining hyper-parameters and speeds up the convergence of the training process. We show empirically that our proposed method allows for fast and more accurate training of triplet ConvNets than other competing mining methods. Additionally, we show that our method achieves new state-of-the-art embedding results for the CUB-200-2011 and Cars196 datasets.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
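The record above concerns mining informative triplets. As a point of reference, here is a plain in-batch hard-mining baseline for the standard triplet loss in numpy; it is not the paper's smart mining procedure or its adaptive controller.

```python
# In-batch hard mining for the triplet loss max(0, d(a,p) - d(a,n) + margin).
import numpy as np

def triplet_loss_hard_mining(embeddings, labels, margin=0.2):
    embeddings, labels = np.asarray(embeddings), np.asarray(labels)
    d = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    losses = []
    for i, y in enumerate(labels):
        pos = (labels == y) & (np.arange(len(labels)) != i)
        neg = labels != y
        if not pos.any() or not neg.any():
            continue
        hardest_pos = d[i][pos].max()     # farthest sample of the same class
        hardest_neg = d[i][neg].min()     # closest sample of a different class
        losses.append(max(0.0, hardest_pos - hardest_neg + margin))
    return float(np.mean(losses)) if losses else 0.0

# toy batch: 8 embeddings in R^16 with two classes
emb = np.random.randn(8, 16)
lab = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(triplet_loss_hard_mining(emb, lab))
```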
Quantum phase transitions of a generalized compass chain with staggered Dzyaloshinskii-Moriya interaction
We consider a class of one-dimensional compass models with staggered Dzyaloshinskii-Moriya exchange interactions in an external transverse magnetic field. Based on the exact solution derived from Jordan-Wigner approach, we study the excitation gap, energy spectra, spin correlations and critical properties at phase transitions. We explore mutual effects of the staggered Dzyaloshinskii-Moriya interaction and the magnetic field on the energy spectra and the ground-state phase diagram. Thermodynamic quantities including the entropy and the specific heat are discussed, and their universal scalings at low temperature are demonstrated.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A blowup algebra of hyperplane arrangements
It is shown that the Orlik-Terao algebra is graded isomorphic to the special fiber of the ideal $I$ generated by the $(n-1)$-fold products of the members of a central arrangement of size $n$. This momentum is carried over to the Rees algebra (blowup) of $I$ and it is shown that this algebra is of fiber-type and Cohen-Macaulay. It follows by a result of Simis-Vasconcelos that the special fiber of $I$ is Cohen-Macaulay, thus giving another proof of a result of Proudfoot-Speyer about the Cohen-Macaulayness of the Orlik-Terao algebra.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Object Region Mining with Adversarial Erasing: A Simple Classification to Semantic Segmentation Approach
We investigate a principled way to progressively mine discriminative object regions using classification networks to address the weakly-supervised semantic segmentation problems. Classification networks are only responsive to small and sparse discriminative regions from the object of interest, which deviates from the requirement of the segmentation task that needs to localize dense, interior and integral regions for pixel-wise inference. To mitigate this gap, we propose a new adversarial erasing approach for localizing and expanding object regions progressively. Starting with a single small object region, our proposed approach drives the classification network to sequentially discover new and complementary object regions by erasing the currently mined regions in an adversarial manner. These localized regions eventually constitute a dense and complete object region for learning semantic segmentation. To further enhance the quality of the discovered regions by adversarial erasing, an online prohibitive segmentation learning approach is developed to collaborate with adversarial erasing by providing auxiliary segmentation supervision modulated by the more reliable classification scores. Despite its apparent simplicity, the proposed approach achieves 55.0% and 55.7% mean Intersection-over-Union (mIoU) scores on the PASCAL VOC 2012 val and test sets, which are the new state of the art.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Streaming Binary Sketching based on Subspace Tracking and Diagonal Uniformization
In this paper, we address the problem of learning compact similarity-preserving embeddings for massive high-dimensional streams of data in order to perform efficient similarity search. We present a new online method for computing binary compressed representations, or sketches, of high-dimensional real feature vectors. Given an expected code length $c$ and high-dimensional input data points, our algorithm provides a $c$-bit binary code for preserving the distance between the points from the original high-dimensional space. Our algorithm requires neither the storage of the whole dataset nor a chunk of it, and is thus fully adaptable to the streaming setting. It also provides low time complexity and convergence guarantees. We demonstrate the quality of our binary sketches through experiments on real data for the nearest neighbors search task in the online setting.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
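For context on c-bit binary codes that preserve distances, the sketch below shows a static sign-of-random-projection baseline whose Hamming distance tracks angular distance; the paper's streaming, subspace-tracking method is not reproduced here, and the dimensions are illustrative.

```python
# Static baseline: c-bit sign-of-random-projection sketches of real vectors.
import numpy as np

def make_sketcher(dim, c, seed=0):
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((c, dim))            # c random hyperplanes
    return lambda x: (planes @ x > 0).astype(np.uint8)  # c-bit binary code

sketch = make_sketcher(dim=1024, c=64)
x, y = np.random.randn(1024), np.random.randn(1024)
hamming = np.count_nonzero(sketch(x) != sketch(y))    # proxy for the angle between x and y
print(hamming)
```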
Improving hot-spot pressure for ignition in high-adiabat Inertial Confinement Fusion implosion
A novel capsule target design to improve the hot-spot pressure in the high-adiabat implosion for inertial confinement fusion is proposed, where a layer of comparatively high-density material is used as a pusher between the fuel and the ablator. This design is based on our theoretical finding of the stagnation scaling laws, which indicates that the hot spot pressure can be improved by increasing the kinetic energy density $\rho_d V_{imp}^2/2$ of the shell ($\rho_d$ is the shell density when the maximum shell velocity is reached, $V_{imp}$ is the implosion velocity). The proposed design uses the high density pusher to enhance the shell density $\rho_d$ so that the hot spot pressure is improved. Radiation-hydrodynamic simulations show that the hot spot pressure of the design reaches the requirement for ignition even when driven by a very high-adiabat short-duration two-shock pulse. The design is promising for simultaneously overcoming the two major obstacles to achieving ignition: ablative instability and laser-plasma instability.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Cell-to-cell variation sets a tissue-rheology-dependent bound on collective gradient sensing
When a single cell senses a chemical gradient and chemotaxes, stochastic receptor-ligand binding can be a fundamental limit to the cell's accuracy. For clusters of cells responding to gradients, however, there is a critical difference: even genetically identical cells have differing responses to chemical signals. With theory and simulation, we show collective chemotaxis is limited by cell-to-cell variation in signaling. We find that when different cells cooperate the resulting bias can be much larger than the effects of ligand-receptor binding. Specifically, when a strongly-responding cell is at one end of a cell cluster, cluster motion is biased toward that cell. These errors are mitigated if clusters average measurements over times long enough for cells to rearrange. In consequence, fluid clusters are better able to sense gradients: we derive a link between cluster accuracy, cell-to-cell variation, and the cluster rheology. Because of this connection, increasing the noisiness of individual cell motion can actually increase the collective accuracy of a cluster by improving fluidity.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Languages of Play: Towards semantic foundations for game interfaces
Formal models of games help us account for and predict behavior, leading to more robust and innovative designs. While the games research community has proposed many formalisms for both the "game half" (game models, game description languages) and the "human half" (player modeling) of a game experience, little attention has been paid to the interface between the two, particularly where it concerns the player expressing her intent toward the game. We describe an analytical and computational toolbox based on programming language theory to examine the phenomenon sitting between control schemes and game rules, which we identify as a distinct player intent language for each game.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Hybrid Normed Ideal Perturbations of n-tuples of Operators I
In hybrid normed ideal perturbations of $n$-tuples of operators, the normed ideal is allowed to vary with the component operators. We begin extending to this setting the machinery we developed for normed ideal perturbations based on the modulus of quasicentral approximation and an adaptation of our non-commutative generalization of the Weyl-von Neumann theorem. For commuting $n$-tuples of hermitian operators, the modulus of quasicentral approximation remains essentially the same when $\mathcal{C}_n^-$ is replaced by a hybrid $n$-tuple $\mathcal{C}^-_{p_1},\dots,\mathcal{C}^-_{p_n}$, $p_1^{-1} + \dots + p_n^{-1} = 1$. The proof involves singular integrals of mixed homogeneity.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Global behaviour of radially symmetric solutions stable at infinity for gradient systems
This paper is concerned with radially symmetric solutions of systems of the form \[ u_t = -\nabla V(u) + \Delta_x u \] where the space variable $x$ and the state parameter $u$ are multidimensional, and the potential $V$ is coercive at infinity. For such systems, under generic assumptions on the potential, the asymptotic behaviour of solutions "stable at infinity", that is, approaching a spatially homogeneous equilibrium when $|x|$ approaches $+\infty$, is investigated. It is proved that every such solution approaches a stacked family of radially symmetric bistable fronts travelling to infinity. This behaviour is similar to that of bistable solutions for gradient systems in one unbounded spatial dimension, described in a companion paper. It is expected (but unfortunately not proved at this stage) that behind these travelling fronts the solution again behaves as in the one-dimensional case (that is, the time derivative approaches zero and the solution approaches a pattern of stationary solutions).
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On the Total Forcing Number of a Graph
Let $G$ be a simple and finite graph without isolated vertices. In this paper we study forcing sets (zero forcing sets) which induce a subgraph of $G$ without isolated vertices. Such a set is called a total forcing set, introduced and first studied by Davila \cite{Davila}. The minimum cardinality of a total forcing set in $G$ is the total forcing number of $G$, denoted $F_t(G)$. We study basic properties of $F_t(G)$, relate $F_t(G)$ to various domination parameters, and establish $NP$-completeness of the associated decision problem for $F_t(G)$. We also prove that if $G$ is a connected graph of order $n \ge 3$ and maximum degree $\Delta$, then $F_t(G) \le ( \frac{\Delta}{\Delta +1} ) n$, with equality if and only if $G$ is a complete graph $K_{\Delta + 1}$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
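To make the definitions in the record above concrete, the following brute-force check computes the total forcing number of a small graph under the usual zero forcing rule (a coloured vertex with exactly one uncoloured neighbour forces that neighbour), restricted to sets that induce no isolated vertex. It is exponential in the number of vertices and is for illustration only.

```python
# Brute-force total forcing number of a small graph given as an adjacency dict.
from itertools import combinations

def forces_all(adj, start):
    coloured, changed = set(start), True
    while changed:
        changed = False
        for v in list(coloured):
            uncoloured = [u for u in adj[v] if u not in coloured]
            if len(uncoloured) == 1:          # forcing rule
                coloured.add(uncoloured[0])
                changed = True
    return len(coloured) == len(adj)

def total_forcing_number(adj):
    n = len(adj)
    for k in range(1, n + 1):
        for S in combinations(adj, k):
            S = set(S)
            # "total": every chosen vertex needs a neighbour inside the set
            if all(any(u in S for u in adj[v]) for v in S) and forces_all(adj, S):
                return k
    return n

# Example: the path P4 (0-1-2-3); its total forcing number is 2, below the bound (2/3)*4.
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(total_forcing_number(P4))   # -> 2
```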
Brief Notes on Hard Takeoff, Value Alignment, and Coherent Extrapolated Volition
I make some basic observations about hard takeoff, value alignment, and coherent extrapolated volition, concepts which have been central in analyses of superintelligent AI systems.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Convolutional Neural Networks In Classifying Cancer Through DNA Methylation
DNA Methylation has been the most extensively studied epigenetic mark. Usually a change in the genotype, the DNA sequence, leads to a change in the phenotype, the observable characteristics of the individual. But DNA methylation, which happens in the context of CpG (cytosine and guanine bases linked by a phosphate backbone) dinucleotides, does not lead to a change in the original DNA sequence but has the potential to change the phenotype. DNA methylation is implicated in various biological processes and diseases including cancer. Hence there is a strong interest in understanding the DNA methylation patterns across various epigenetic related ailments in order to distinguish and diagnose the type of disease in its early stages. In this work, the relationship between methylated versus unmethylated CpG regions and cancer types is explored using Convolutional Neural Networks (CNNs). A CNN based Deep Learning model that can classify the cancer of a new DNA methylation profile based on the learning from publicly available DNA methylation datasets is then proposed.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
Persistent Monitoring of Dynamically Changing Environments Using an Unmanned Vehicle
We consider the problem of planning a closed walk $\mathcal W$ for a UAV to persistently monitor a finite number of stationary targets with equal priorities and dynamically changing properties. A UAV must physically visit the targets in order to monitor them and collect information therein. The frequency of monitoring any given target is specified by a target revisit time, $i.e.$, the maximum allowable time between any two successive visits to the target. The problem considered in this paper is the following: Given $n$ targets and $k \geq n$ allowed visits to them, find an optimal closed walk $\mathcal W^*(k)$ so that every target is visited at least once and the maximum revisit time over all the targets, $\mathcal R(\mathcal W(k))$, is minimized. We prove the following: If $k \geq n^2-n$, $\mathcal R(\mathcal W^*(k))$ (or simply, $\mathcal R^*(k)$) takes only two values: $\mathcal R^*(n)$ when $k$ is an integral multiple of $n$, and $\mathcal R^*(n+1)$ otherwise. This result suggests significant computational savings - one only needs to determine $\mathcal W^*(n)$ and $\mathcal W^*(n+1)$ to construct an optimal solution $\mathcal W^*(k)$. We provide MILP formulations for computing $\mathcal W^*(n)$ and $\mathcal W^*(n+1)$. Furthermore, for {\it any} given $k$, we prove that $\mathcal R^*(k) \geq \mathcal R^*(k+n)$.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Almost sharp nonlinear scattering in one-dimensional Born-Infeld equations arising in nonlinear Electrodynamics
We study decay of small solutions of the Born-Infeld equation in 1+1 dimensions, a quasilinear scalar field equation modeling nonlinear electromagnetism, as well as branes in String theory and minimal surfaces in Minkowski space-times. From the work of Whitham, it is well known that there is no decay because arbitrary solutions can travel at the speed of light, just as for the linear wave equation. However, even if there is no global decay in 1+1 dimensions, we are able to show that all globally small $H^{s+1}\times H^s$, $s>\frac12$ solutions do decay to the zero background state in space, inside a strictly proper subset of the light cone. We prove this result by constructing a Virial identity related to a momentum law, in the spirit of works \cite{KMM,KMM1}, as well as a Lyapunov functional that controls the $\dot H^1 \times L^2$ energy.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Life efficiency does not always increase with the dissipation rate
There does not exist a general positive correlation between important life-supporting properties and the entropy production rate. The simple reason is that nondissipative and time-symmetric kinetic aspects are also relevant for establishing optimal functioning. In fact those aspects are even crucial in the nonlinear regimes around equilibrium where we find biological processing on mesoscopic scales. We make these claims specific via examples of molecular motors, of circadian cycles and of sensory adaptation, whose performance in some regimes is indeed spoiled by increasing the dissipated power. We use the relation between dissipation and the amount of time-reversal breaking to keep the discussion quantitative also in effective models where the physical entropy production is not clearly identifiable.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Molecular simulations of entangled defect structures around nanoparticles in nematic liquid crystals
We investigate the defect structures forming around two nanoparticles in a Gay-Berne nematic liquid crystal using molecular simulations. For small separations, disclinations entangle both particles forming the figure of eight, the figure of omega and the figure of theta. These defect structures are similar in shape and occur with a comparable frequency to micron-sized particles studied in experiments. The simulations reveal fast transitions from one defect structure to another suggesting that particles of nanometre size cannot be bound together effectively. We identify the 'three-ring' structure observed in previous molecular simulations as a superposition of the different entangled and non-entangled states over time and conclude that it is not itself a stable defect structure.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Quasi Maximum-Likelihood Estimation of Dynamic Panel Data Models
This paper establishes the almost sure convergence and asymptotic normality of levels and differenced quasi maximum-likelihood (QML) estimators of dynamic panel data models. The QML estimators are robust with respect to initial conditions, conditional and time-series heteroskedasticity, and misspecification of the log-likelihood. The paper also provides an ECME algorithm for calculating levels QML estimates. Finally, it uses Monte Carlo experiments to compare the finite sample performance of levels and differenced QML estimators, the differenced GMM estimator, and the system GMM estimator. In these experiments the QML estimators usually have smaller --- typically substantially smaller --- bias and root mean squared errors than the panel data GMM estimators.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Disunited Nations? A Multiplex Network Approach to Detecting Preference Affinity Blocs using Texts and Votes
This paper contributes to an emerging literature that models votes and text in tandem to better understand polarization of expressed preferences. It introduces a new approach to estimate preference polarization in multidimensional settings, such as international relations, based on developments in the natural language processing and network science literatures -- namely word embeddings, which retain valuable syntactical qualities of human language, and community detection in multilayer networks, which locates densely connected actors across multiple, complex networks. We find that the employment of these tools in tandem helps to better estimate states' foreign policy preferences expressed in UN votes and speeches beyond that permitted by votes alone. The utility of these located affinity blocs is demonstrated through an application to conflict onset in International Relations, though these tools will be of interest to all scholars faced with the measurement of preferences and polarization in multidimensional settings.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Magnetic phase diagram of the iron pnictides in the presence of spin-orbit coupling: Frustration between $C_2$ and $C_4$ magnetic phases
We investigate the impact of spin anisotropic interactions, promoted by spin-orbit coupling, on the magnetic phase diagram of the iron-based superconductors. Three distinct magnetic phases with Bragg peaks at $(\pi,0)$ and $(0,\pi)$ are possible in these systems: one $C_2$ (i.e. orthorhombic) symmetric stripe magnetic phase and two $C_4$ (i.e. tetragonal) symmetric magnetic phases. While the spin anisotropic interactions allow the magnetic moments to point in any direction in the $C_2$ phase, they restrict the possible moment orientations in the $C_4$ phases. As a result, an interesting scenario arises in which the spin anisotropic interactions favor a $C_2$ phase, but the other spin isotropic interactions favor a $C_4$ phase. We study this frustration via both mean-field and renormalization-group approaches. We find that, to lift this frustration, a rich magnetic landscape emerges well below the magnetic transition temperature, with novel $C_2$, $C_4$, and mixed $C_2$-$C_4$ phases. Near the putative magnetic quantum critical point, spin anisotropies promote a stable Gaussian fixed point in the renormalization-group flow, which is absent in the spin isotropic case, and is associated with a near-degeneracy between $C_2$ and $C_4$ phases. We argue that this frustration is the reason why most $C_4$ phases in the iron pnictides only appear inside the $C_2$ phase, and discuss additional manifestations of this frustration in the phase diagrams of these materials.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Algorithms and Bounds for Very Strong Rainbow Coloring
A well-studied coloring problem is to assign colors to the edges of a graph $G$ so that, for every pair of vertices, all edges of at least one shortest path between them receive different colors. The minimum number of colors necessary in such a coloring is the strong rainbow connection number ($\src(G)$) of the graph. When proving upper bounds on $\src(G)$, it is natural to prove that a coloring exists where, for \emph{every} shortest path between every pair of vertices in the graph, all edges of the path receive different colors. Therefore, we introduce and formally define this more restricted edge coloring number, which we call \emph{very strong rainbow connection number} ($\vsrc(G)$). In this paper, we give upper bounds on $\vsrc(G)$ for several graph classes, some of which are tight. These immediately imply new upper bounds on $\src(G)$ for these classes, showing that the study of $\vsrc(G)$ enables meaningful progress on bounding $\src(G)$. Then we study the complexity of the problem to compute $\vsrc(G)$, particularly for graphs of bounded treewidth, and show this is an interesting problem in its own right. We prove that $\vsrc(G)$ can be computed in polynomial time on cactus graphs; in contrast, this question is still open for $\src(G)$. We also observe that deciding whether $\vsrc(G) = k$ is fixed-parameter tractable in $k$ and the treewidth of $G$. Finally, on general graphs, we prove that there is no polynomial-time algorithm to decide whether $\vsrc(G) \leq 3$ nor to approximate $\vsrc(G)$ within a factor $n^{1-\varepsilon}$, unless P$=$NP.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Quantile function expansion using regularly varying functions
We present a simple result that allows us to evaluate the asymptotic order of the remainder of a partial asymptotic expansion of the quantile function $h(u)$ as $u\to 0^+$ or $1^-$. This is focussed on important univariate distributions when $h(\cdot)$ has no simple closed form, with a view to assessing asymptotic rate of decay to zero of tail dependence in the context of bivariate copulas. The Introduction motivates the study in terms of the standard Normal. The Normal, Skew-Normal and Gamma are used as initial examples. Finally, we discuss approximation to the lower quantile of the Variance-Gamma and Skew-Slash distributions.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Identification of Key Proteins Involved in Axon Guidance Related Disorders: A Systems Biology Approach
Axon guidance is a crucial process for growth of the central and peripheral nervous systems. In this study, three axon guidance related disorders, namely Duane Retraction Syndrome (DRS), Horizontal Gaze Palsy with Progressive Scoliosis (HGPPS) and Congenital fibrosis of the extraocular muscles type 3 (CFEOM3), were studied using various Systems Biology tools to identify the genes and proteins involved with them, to get a better idea about the underlying molecular mechanisms including the regulatory mechanisms. Based on the analyses carried out, 7 significant modules have been identified from the PPI network. Five pathways/processes have been found to be significantly associated with DRS, HGPPS and CFEOM3 associated genes. From the PPI network, 3 proteins have been identified as hub proteins: DRD2, UBC and CUL3.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization
Motivated by applications in Game Theory, Optimization, and Generative Adversarial Networks, recent work of Daskalakis et al~\cite{DISZ17} and follow-up work of Liang and Stokes~\cite{LiangS18} have established that a variant of the widely used Gradient Descent/Ascent procedure, called "Optimistic Gradient Descent/Ascent (OGDA)", exhibits last-iterate convergence to saddle points in {\em unconstrained} convex-concave min-max optimization problems. We show that the same holds true in the more general problem of {\em constrained} min-max optimization under a variant of the no-regret Multiplicative-Weights-Update method called "Optimistic Multiplicative-Weights Update (OMWU)". This answers an open question of Syrgkanis et al~\cite{SALS15}. The proof of our result requires fundamentally different techniques from those that exist in no-regret learning literature and the aforementioned papers. We show that OMWU monotonically improves the Kullback-Leibler divergence of the current iterate to the (appropriately normalized) min-max solution until it enters a neighborhood of the solution. Inside that neighborhood we show that OMWU becomes a contracting map converging to the exact solution. We believe that our techniques will be useful in the analysis of the last iterate of other learning algorithms.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
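The record above concerns last-iterate convergence of Optimistic Multiplicative-Weights Update. Below is a minimal sketch of one common form of OMWU on a zero-sum matrix game over probability simplices; the step size, iteration count and the matching-pennies example are illustrative assumptions, not taken from the paper.

```python
# Optimistic MWU on min_x max_y x^T A y; the update reuses last round's gradient.
import numpy as np

def omwu(A, x0, y0, steps=4000, eta=0.05):
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    gx_prev, gy_prev = A @ y, A.T @ x
    for _ in range(steps):
        gx, gy = A @ y, A.T @ x
        x = x * np.exp(-eta * (2 * gx - gx_prev))   # minimizer, with the optimistic correction
        y = y * np.exp( eta * (2 * gy - gy_prev))   # maximizer
        x, y = x / x.sum(), y / y.sum()             # renormalize onto the simplices
        gx_prev, gy_prev = gx, gy
    return x, y

# Matching pennies: the unique equilibrium is uniform play, and the last iterate
# should approach it from a non-uniform start (plain MWU would cycle instead).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = omwu(A, x0=[0.9, 0.1], y0=[0.2, 0.8])
print(np.round(x, 3), np.round(y, 3))
```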
The Lifetimes of Phases in High-Mass Star-Forming Regions
High-mass stars form within star clusters from dense, molecular regions, but is the process of cluster formation slow and hydrostatic or quick and dynamic? We link the physical properties of high-mass star-forming regions with their evolutionary stage in a systematic way, using Herschel and Spitzer data. In order to produce a robust estimate of the relative lifetimes of these regions, we compare the fraction of dense, molecular regions above a column density associated with high-mass star formation, N(H2) > 0.4-2.5 x 10^22 cm^-2, in the 'starless' (no signature of stars > 10 Msun forming) and star-forming phases in a 2x2 degree region of the Galactic Plane centered at l=30deg. Of regions capable of forming high-mass stars on ~1 pc scales, the starless (or embedded beyond detection) phase occupies about 60-70% of the dense, molecular region lifetime and the star-forming phase occupies about 30-40%. These relative lifetimes are robust over a wide range of thresholds. We outline a method by which relative lifetimes can be anchored to absolute lifetimes from large-scale surveys of methanol masers and UCHII regions. A simplistic application of this method estimates the absolute lifetimes of the starless phase to be 0.2-1.7 Myr (about 0.6-4.1 fiducial cloud free-fall times) and the star-forming phase to be 0.1-0.7 Myr (about 0.4-2.4 free-fall times), but these are highly uncertain. This work uniquely investigates the star-forming nature of high-column density gas pixel-by-pixel and our results demonstrate that the majority of high-column density gas is in a starless or embedded phase.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Null controllability of a population dynamics with interior degeneracy
In this paper, we deal with the null controllability of a population dynamics model with an interior degenerate diffusion. To this end, we first prove a new Carleman estimate for the full adjoint system and afterwards deduce a suitable observability inequality, which is needed to establish the existence of a control, acting on a subset of the space, that leads the population to extinction in finite time.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
ICLR Reproducibility Challenge Report (Padam : Closing The Generalization Gap Of Adaptive Gradient Methods in Training Deep Neural Networks)
This work is a part of the ICLR Reproducibility Challenge 2019, in which we try to reproduce the results in the conference submission PADAM: Closing The Generalization Gap of Adaptive Gradient Methods In Training Deep Neural Networks. Adaptive gradient methods proposed in the past demonstrate degraded generalization performance compared to stochastic gradient descent (SGD) with momentum. The authors try to address this problem by designing a new optimization algorithm that bridges the gap between the space of Adaptive Gradient algorithms and SGD with momentum. With this method, a new tunable hyperparameter called the partially adaptive parameter p is introduced, which varies in [0, 0.5]. We build the proposed optimizer and use it to mirror the experiments performed by the authors. We review and comment on the empirical analysis performed by the authors. Finally, we also propose a future direction for further study of Padam. Our code is available at: this https URL
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
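The submission being reproduced above introduces a partially adaptive power p in [0, 0.5]. The numpy sketch below shows the general shape of such an update (Adam-style moments with the second-moment denominator raised to p); it omits details such as the AMSGrad-style maximum and is not the authors' code, and the learning rate and p value are illustrative.

```python
# Partially adaptive update: p = 0.5 behaves like Adam, p -> 0 approaches SGD with momentum.
import numpy as np

def padam_step(theta, grad, state, lr=0.1, p=0.125, b1=0.9, b2=0.999, eps=1e-8):
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad            # first-moment estimate
    v = b2 * v + (1 - b2) * grad**2         # second-moment estimate
    m_hat = m / (1 - b1**t)                 # bias corrections
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (v_hat**p + eps)   # partially adaptive denominator
    return theta, (m, v, t)

# usage on the toy quadratic f(theta) = 0.5 * ||theta||^2, whose gradient is theta
theta, state = np.ones(3), (np.zeros(3), np.zeros(3), 0)
for _ in range(200):
    theta, state = padam_step(theta, theta, state)
print(theta)    # should have shrunk toward zero
```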
Robust Bayes-Like Estimation: Rho-Bayes estimation
We consider the problem of estimating the joint distribution $P$ of $n$ independent random variables within the Bayes paradigm from a non-asymptotic point of view. Assuming that $P$ admits some density $s$ with respect to a given reference measure, we consider a density model $\overline S$ for $s$ that we endow with a prior distribution $\pi$ (with support $\overline S$) and we build a robust alternative to the classical Bayes posterior distribution which possesses similar concentration properties around $s$ whenever it belongs to the model $\overline S$. Furthermore, in density estimation, the Hellinger distance between the classical and the robust posterior distributions tends to 0, as the number of observations tends to infinity, under suitable assumptions on the model and the prior, provided that the model $\overline S$ contains the true density $s$. However, unlike what happens with the classical Bayes posterior distribution, we show that the concentration properties of this new posterior distribution are still preserved in the case of a misspecification of the model, that is when $s$ does not belong to $\overline S$ but is close enough to it with respect to the Hellinger distance.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Nonlinear demixed component analysis for neural population data as a low-rank kernel regression problem
Here I introduce an extension to demixed principal component analysis (dPCA), a linear dimensionality reduction technique for analyzing the activity of neural populations, to the case of nonlinear dimensions. This is accomplished using kernel methods, resulting in kernel demixed principal component analysis (kdPCA). This extension resembles kernel-based extensions to standard principal component analysis and canonical correlation analysis. kdPCA includes dPCA as a special case when the kernel is linear. I present examples of simulated neural activity that follows different low dimensional configurations and compare the results of kdPCA to dPCA. These simulations demonstrate that nonlinear interactions can impede the ability of dPCA to demix neural activity corresponding to experimental parameters, but kdPCA can still recover interpretable components. Additionally, I compare kdPCA and dPCA to a neural population from rat orbitofrontal cortex during an odor classification task in recovering decision-related activity.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Stability of semi-wavefronts for delayed reaction-diffusion equations
This paper deals with the asymptotic behavior of solutions to the delayed monostable equation: $(*)$ $u_{t}(t,x) = u_{xx}(t,x) - u(t,x) + g(u(t-h,x)),$ $x \in \mathbb{R},\ t >0,$ where $h>0$ and the reaction term $g: \mathbb{R}_+ \to \mathbb{R}_+$ has exactly two fixed points (zero and $\kappa >0$). Under a certain condition on the derivative of $g$ at $\kappa$, the global stability of fast wavefronts is proved. Also, the stability of the leading edge of semi-wavefronts for $(*)$ with $g$ satisfying $g(u)\leq g'(0)u$, $u\in\mathbb{R}_+$, is established.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
An End-to-End Approach to Natural Language Object Retrieval via Context-Aware Deep Reinforcement Learning
We propose an end-to-end approach to the natural language object retrieval task, which localizes an object within an image according to a natural language description, i.e., referring expression. Previous works divide this problem into two independent stages: first, compute region proposals from the image without the exploration of the language description; second, score the object proposals with regard to the referring expression and choose the top-ranked proposals. The object proposals are generated independently from the referring expression, which makes the proposal generation redundant and even irrelevant to the referred object. In this work, we train an agent with deep reinforcement learning, which learns to move and reshape a bounding box to localize the object according to the referring expression. We incorporate both the spatial and temporal context information into the training procedure. By simultaneously exploiting local visual information, the spatial and temporal context and the referring language prior, the agent selects an appropriate action to take at each time step. A special action is defined to indicate when the agent finds the referred object and to terminate the procedure. We evaluate our model on various datasets, and our algorithm significantly outperforms the compared algorithms. Notably, the accuracy improvements of our method over the recent methods GroundeR and SCRC on the ReferItGame dataset are 7.67% and 18.25%, respectively.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Using Rule-Based Labels for Weak Supervised Learning: A ChemNet for Transferable Chemical Property Prediction
With access to large datasets, deep neural networks (DNN) have achieved human-level accuracy in image and speech recognition tasks. However, in chemistry, data is inherently small and fragmented. In this work, we develop an approach of using rule-based knowledge for training ChemNet, a transferable and generalizable deep neural network for chemical property prediction that learns in a weak-supervised manner from large unlabeled chemical databases. When coupled with transfer learning approaches to predict other smaller datasets for chemical properties that it was not originally trained on, we show that ChemNet's accuracy outperforms contemporary DNN models that were trained using conventional supervised learning. Furthermore, we demonstrate that the ChemNet pre-training approach is equally effective on both CNN (Chemception) and RNN (SMILES2vec) models, indicating that this approach is network architecture agnostic and is effective across multiple data modalities. Our results indicate a pre-trained ChemNet that incorporates chemistry domain knowledge, enables the development of generalizable neural networks for more accurate prediction of novel chemical properties.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Intelligent Sensor Based Bayesian Neural Network for Combined Parameters and States Estimation of a Brushed DC Motor
The objective of this paper is to develop an Artificial Neural Network (ANN) model to estimate simultaneously the parameters and state of a brushed DC machine. The proposed ANN estimator is novel in the sense that it simultaneously estimates temperature, speed and rotor resistance based only on the measurement of the voltage and current inputs. Many types of ANN estimators have been designed by a lot of researchers during the last two decades. Each type is designed for a specific application. The thermal behavior of the motor is very slow, which leads to large data sets. Standard ANNs often use a Multi-Layer Perceptron (MLP) with Levenberg-Marquardt Backpropagation (LMBP), but LMBP is limited when the amount of data is large, so the use of an MLP based on LMBP is no longer suitable in our case. As a solution, we propose the use of a Cascade-Forward Neural Network (CFNN) based on Bayesian Regulation backpropagation (BRBP). To test the robustness of our estimator, random white Gaussian noise has been added to the data sets. The proposed estimator is, in our view, accurate and robust.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The infinite Fibonacci groups and relative asphericity
We prove that the generalised Fibonacci group F(r,n) is infinite for (r,n) in {(7 + 5k,5), (8 + 5k,5)} where k is greater than or equal to 0. This together with previously known results yields a complete classification of the finite F(r,n), a problem that has its origins in a question by J H Conway in 1965. The method is to show that a related relative presentation is aspherical from which it can be deduced that the groups are infinite.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Representations of Polynomial Rota-Baxter Algebras
A Rota--Baxter operator is an algebraic abstraction of integration, which is the typical example of a weight zero Rota-Baxter operator. We show that studying the modules over the polynomial Rota--Baxter algebra $(k[x],P)$ is equivalent to studying the modules over the Jordan plane, and we generalize the direct decomposability results for the $(k[x],P)$-modules in [Iy] from algebraically closed fields of characteristic zero to fields of characteristic zero. Furthermore, we provide a classification of Rota--Baxter modules up to isomorphism based on indecomposable $k[x]$-modules.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The extended ROSAT-ESO Flux-Limited X-ray Galaxy Cluster Survey (REFLEX II) VII The Mass Function of Galaxy Clusters
The mass function of galaxy clusters is a sensitive tracer of the gravitational evolution of the cosmic large-scale structure and serves as an important census of the fraction of matter bound in large structures. We obtain the mass function by fitting the observed cluster X-ray luminosity distribution from the REFLEX galaxy cluster survey to models of cosmological structure formation. We marginalise over uncertainties in the cosmological parameters as well as those of the relevant galaxy cluster scaling relations. The mass function is determined with an uncertainty less than 10% in the mass range 3 x 10^12 to 5 x 10^14 M$_\odot$. For the cumulative mass function we find a slope at the low mass end consistent with a value of -1, while the cut-off at the high-mass end is milder than a Schechter function with an exponential term exp($- M^\delta$) with $\delta$ smaller than 1. Changing the Hubble parameter in the range $H_0 = 67 - 73$ km s$^{-1}$ Mpc$^{-1}$ or allowing the total neutrino mass to have a value between 0 - 0.4 eV causes variations smaller than the uncertainties. We estimate the fraction of mass locked up in galaxy clusters: about 4.4% of the matter in the Universe is bound in clusters (inside $r_{200}$) with a mass larger than 10^14 M$_\odot$, and 14% in clusters and groups with a mass larger than 10^13 M$_\odot$ in the present Universe. We also discuss the evolution of the galaxy cluster population with redshift. Our results imply that there are hardly any clusters with a mass > 10^15 M$_\odot$ above a redshift of z = 1.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
An Optimized Pattern Recognition Algorithm for Anomaly Detection in IoT Environment
With the advent of large-scale heterogeneous search engines comes the problem of unified search control, resulting in mismatches that could otherwise have been avoided. A mechanism is needed to determine exact patterns in web mining and ubiquitous device searching. In this paper we demonstrate the use of an optimized string searching algorithm to recognize exact patterns from a large database. The underlying principle in designing the algorithm is that each letter maps to a fixed real value, and some arithmetic operations are applied to compute corresponding pattern and substring values. We have implemented this algorithm in C. We have tested the algorithm using a large dataset. We created our own dataset using DNA sequences. The experimental results show the number of mismatches that occurred in string search over a large database. Furthermore, some of the inherent weaknesses in the use of this algorithm are highlighted.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
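In the spirit of the abstract above (letters mapped to fixed numeric values, with arithmetic on pattern and substring values), here is a small rolling-sum matcher with explicit verification of candidate windows; it is a generic sketch, not the authors' exact scheme, and the value function (Unicode code point) is an assumption.

```python
# Rolling-sum filter plus verification for exact pattern search in a text.
def find_pattern(text, pattern, value=ord):
    m = len(pattern)
    if m == 0 or m > len(text):
        return []
    target = sum(value(c) for c in pattern)        # pattern value
    window = sum(value(c) for c in text[:m])       # value of the first window
    hits = []
    for i in range(len(text) - m + 1):
        if window == target and text[i:i + m] == pattern:   # verify to rule out collisions
            hits.append(i)
        if i + m < len(text):
            window += value(text[i + m]) - value(text[i])   # slide the window in O(1)
    return hits

print(find_pattern("ACGTACGTGACG", "ACG"))   # -> [0, 4, 9]
```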
Uniform asymptotics as a stationary point approaches an endpoint
We obtain the rigorous uniform asymptotics of a particular integral where a stationary point is close to an endpoint. There exists a general method introduced by Bleistein for obtaining uniform asymptotics in this situation. However, this method does not provide rigorous estimates for the error. Indeed, the method of Bleistein starts with a change of variables, which implies that the parameter governing how close the stationary point is to the endpoint appears in several parts of the integrand, and this means that one cannot obtain general error bounds. By adapting the above method to our particular integral, we obtain rigorous uniform leading-order asymptotics. We also give a rigorous derivation of the asymptotics to all orders of the same integral; the novelty of this second approach is that it does not involve a global change of variables.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
An SDP-Based Algorithm for Linear-Sized Spectral Sparsification
For any undirected and weighted graph $G=(V,E,w)$ with $n$ vertices and $m$ edges, we call a sparse subgraph $H$ of $G$, with proper reweighting of the edges, a $(1+\varepsilon)$-spectral sparsifier if \[ (1-\varepsilon)x^{\intercal}L_Gx\leq x^{\intercal} L_{H} x\leq (1+\varepsilon) x^{\intercal} L_Gx \] holds for any $x\in\mathbb{R}^n$, where $L_G$ and $L_{H}$ are the respective Laplacian matrices of $G$ and $H$. Noticing that $\Omega(m)$ time is needed for any algorithm to construct a spectral sparsifier and a spectral sparsifier of $G$ requires $\Omega(n)$ edges, a natural question is to investigate, for any constant $\varepsilon$, if a $(1+\varepsilon)$-spectral sparsifier of $G$ with $O(n)$ edges can be constructed in $\tilde{O}(m)$ time, where the $\tilde{O}$ notation suppresses polylogarithmic factors. All previous constructions on spectral sparsification require either super-linear number of edges or $m^{1+\Omega(1)}$ time. In this work we answer this question affirmatively by presenting an algorithm that, for any undirected graph $G$ and $\varepsilon>0$, outputs a $(1+\varepsilon)$-spectral sparsifier of $G$ with $O(n/\varepsilon^2)$ edges in $\tilde{O}(m/\varepsilon^{O(1)})$ time. Our algorithm is based on three novel techniques: (1) a new potential function which is much easier to compute yet has similar guarantees as the potential functions used in previous references; (2) an efficient reduction from a two-sided spectral sparsifier to a one-sided spectral sparsifier; (3) constructing a one-sided spectral sparsifier by a semi-definite program.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Self-Adjusting Threshold Mechanism for Pixel Detectors
Readout chips of hybrid pixel detectors use a low power amplifier and threshold discrimination to process charge deposited in semiconductor sensors. Due to transistor mismatch each pixel circuit needs to be calibrated individually to achieve response uniformity. Traditionally this is addressed by programmable threshold trimming in each pixel, but requires robustness against radiation effects, temperature, and time. In this paper a self-adjusting threshold mechanism is presented, which corrects the threshold for both spatial inequality and time variation and maintains a constant response. It exploits the electrical noise as a relative measure for the threshold and automatically adjusts the threshold of each pixel to always achieve a uniform frequency of noise hits. A digital implementation of the method in the form of an up/down counter and combinatorial logic filter is presented. The behavior of this circuit has been simulated to evaluate its performance and compare it to traditional calibration results. The simulation results show that this mechanism can perform equally well, but eliminates instability over time and is immune to single event upsets.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
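The mechanism above adjusts each pixel's threshold so that the noise-hit rate is uniform. The following behavioural simulation (not the chip's digital up/down-counter implementation) illustrates the feedback idea: asymmetric up/down steps make the threshold settle where the hit probability equals the target rate. The noise model, step size and target rate are assumptions.

```python
# Behavioural sketch: the threshold drifts until P(noise hit) equals target_rate.
import random

def simulate_pixel(true_baseline, noise_sigma=1.0, target_rate=0.01,
                   steps=200_000, step_size=0.001):
    threshold = true_baseline                 # arbitrary starting point
    for _ in range(steps):
        sample = random.gauss(true_baseline, noise_sigma)
        if sample > threshold:                # noise hit: push the threshold up
            threshold += step_size * (1 - target_rate)
        else:                                 # no hit: relax the threshold down
            threshold -= step_size * target_rate
    return threshold

# Two mismatched pixels end up with thresholds the same distance above their
# baselines, i.e. the same noise-hit frequency, without per-pixel calibration.
print(simulate_pixel(true_baseline=0.0))
print(simulate_pixel(true_baseline=0.7))
```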
Rotational inertia interface in a dynamic lattice of flexural beams
The paper presents a novel analysis of a transmission problem for a network of flexural beams incorporating conventional Euler-Bernoulli beams as well as Rayleigh beams with the enhanced rotational inertia. Although, in the low-frequency regime, these beams have a similar dynamic response, we have demonstrated novel features which occur in the transmission at higher frequencies across the layer of the Rayleigh beams.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Low-frequency wide band-gap elastic/acoustic meta-materials using the K-damping concept
The terms "acoustic/elastic meta-materials" describe a class of periodic structures with unit cells exhibiting local resonance. This localized resonant structure has been shown to result in negative effective stiffness and/or mass at frequency ranges close to these local resonances. As a result, these structures present unusual wave propagation properties at wavelengths well below the regime corresponding to band-gap generation based on spatial periodicity, (i.e. "Bragg scattering"). Therefore, acoustic/elastic meta-materials can lead to applications, especially suitable in the low-frequency range. However, low frequency range applications of such meta-materials require very heavy internal moving masses, as well as additional constraints at the amplitudes of the internally oscillating locally resonating structures, which may prohibit their practical implementation. In order to resolve this disadvantage, the K-Damping concept will be analyzed. According to this concept, the acoustic/elastic meta-materials are designed to include negative stiffness elements instead or in addition to the internally resonating added masses. This concept removes the need for the heavy locally added heavy masses, while it simultaneously exploits the negative stiffness damping phenomenon. Application of both Bloch's theory and the classical modal analysis at the one-dimensional mass-in-mass lattice is analyzed and corresponding dispersion relations are derived. The results indicate significant advantages over the conventional mass-in-a mass lattice, such as broader band-gaps and increased damping ratio and reveal significant potential in the proposed solution. Preliminary feasibility analysis for seismic meta-structures and low frequency acoustic isolation-damping confirm the strong potential and applicability of this concept.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Inertia-Constrained Pixel-by-Pixel Nonnegative Matrix Factorisation: a Hyperspectral Unmixing Method Dealing with Intra-class Variability
Blind source separation is a common processing tool to analyse the constitution of pixels of hyperspectral images. Such methods usually suppose that pure pixel spectra (endmembers) are the same in all the image for each class of materials. In the framework of remote sensing, such an assumption is no longer valid in the presence of intra-class variability due to illumination conditions, weathering, slight variations of the pure materials, etc. In this paper, we first describe the results of investigations highlighting intra-class variability measured in real images. Considering these results, a new formulation of the linear mixing model is presented, leading to two new methods. Unconstrained Pixel-by-pixel NMF (UP-NMF) is a new blind source separation method based on the assumption of a linear mixing model, which can deal with intra-class variability. To overcome the limitations of UP-NMF, an extended method is proposed, named Inertia-constrained Pixel-by-pixel NMF (IP-NMF). For each sensed spectrum, these extended versions of NMF extract a corresponding set of source spectra. A constraint is set to limit the spreading of each source's estimates in IP-NMF. The methods are tested on a semi-synthetic data set built with spectra extracted from a real hyperspectral image and then numerically mixed. We thus demonstrate the interest of our methods for realistic source variabilities. Finally, IP-NMF is tested on a real data set and it is shown to yield better performance than state of the art methods.
Labels: cs=1, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
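For reference, the sketch below implements plain NMF with Lee-Seung multiplicative updates under a Frobenius loss, the baseline that UP-NMF and IP-NMF extend with per-pixel endmembers and an inertia constraint; those extensions are not reproduced here and the toy data are random.

```python
# Plain NMF baseline: X (pixels x bands) is approximated by abundances A times spectra S.
import numpy as np

def nmf(X, rank, iters=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    A = rng.random((n, rank))                # abundances
    S = rng.random((rank, m))                # source spectra (endmembers)
    for _ in range(iters):
        A *= (X @ S.T) / (A @ S @ S.T + eps)     # multiplicative update for A
        S *= (A.T @ X) / (A.T @ A @ S + eps)     # multiplicative update for S
    return A, S

X = np.abs(np.random.randn(100, 50))         # stand-in for 100 pixels with 50 spectral bands
A, S = nmf(X, rank=4)
print(np.linalg.norm(X - A @ S) / np.linalg.norm(X))   # relative reconstruction error
```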
Warped Product Pointwise Semi-slant Submanifolds of Sasakian Manifolds
Recently, B.-Y. Chen and O. J. Garay studied pointwise slant submanifolds of almost Hermitian manifolds. By using this notion, we investigate pointwise semi-slant submanifolds and their warped products in Sasakian manifolds. We give non-trivial examples of such submanifolds and obtain several fundamental results, including a characterization for warped product pointwise semi-slant submanifolds of Sasakian manifolds.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Bayesian Semi-supervised Learning with Graph Gaussian Processes
We propose a data-efficient Gaussian process-based Bayesian approach to the semi-supervised learning problem on graphs. The proposed model shows extremely competitive performance when compared to the state-of-the-art graph neural networks on semi-supervised learning benchmark experiments, and outperforms the neural networks in active learning experiments where labels are scarce. Furthermore, the model does not require a validation data set for early stopping to control over-fitting. Our model can be viewed as an instance of empirical distribution regression weighted locally by network connectivity. We further motivate the intuitive construction of the model with a Bayesian linear model interpretation where the node features are filtered by an operator related to the graph Laplacian. The method can be easily implemented by adapting off-the-shelf scalable variational inference algorithms for Gaussian processes.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Knowledge Engineering for Hybrid Deductive Databases
Modern knowledge base systems frequently need to combine a collection of databases in different formats: e.g., relational databases, XML databases, rule bases, ontologies, etc. In the deductive database system DDBASE, we can manage these different formats of knowledge and reason about them. Even the file systems on different computers can be part of the knowledge base. Often, it is necessary to handle different versions of a knowledge base. E.g., we might want to find out common parts or differences of two versions of a relational database. We will examine the use of abstractions of rule bases by predicate dependency and rule predicate graphs. Also the proof trees of derived atoms can help to compare different versions of a rule base. Moreover, it might be possible to have derivations joining rules with other formalisms of knowledge representation. Ontologies have shown their benefits in many applications of intelligent systems, and there have been many proposals for rule languages compatible with the semantic web stack, e.g., SWRL, the semantic web rule language. Recently, ontologies are used in hybrid systems for specifying the provenance of the different components.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Satisfiability Bounds for ω-regular Properties in Interval-valued Markov Chains
We derive an algorithm to compute satisfiability bounds for arbitrary $\omega$-regular properties in an Interval-valued Markov Chain (IMC) interpreted in the adversarial sense. IMCs generalize regular Markov Chains by assigning a range of possible values to the transition probabilities between states. In particular, we expand the automata-based theory of $\omega$-regular property verification in Markov Chains to apply it to IMCs. Any $\omega$-regular property can be represented by a Deterministic Rabin Automaton (DRA) with acceptance conditions expressed by Rabin pairs. Previous works on Markov Chains have shown that computing the probability of satisfying a given $\omega$-regular property reduces to a reachability problem in the product between the Markov Chain and the corresponding DRA. We similarly define the notion of a product between an IMC and a DRA. Then, we show that in a product IMC, there exists a particular assignment of the transition values that generates a largest set of non-accepting states. Subsequently, we prove that a lower bound is found by solving a reachability problem in that refined version of the original product IMC. We derive a similar approach for computing a satisfiability upper bound in a product IMC with one Rabin pair. For product IMCs with more than one Rabin pair, we establish that computing a satisfiability upper bound is equivalent to lower-bounding the satisfiability of the complement of the original property. A search algorithm for finding the largest accepting and non-accepting sets of states in a product IMC is proposed. Finally, we demonstrate our findings in a case study.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On the relaxed mean-field stochastic control problem
This paper is concerned with optimal control problems for systems governed by a mean-field stochastic differential equation, in which the control enters both the drift and the diffusion coefficient. We prove that the relaxed state process, associated with measure valued controls, is governed by an orthogonal martingale measure rather than a Brownian motion. In particular, we show by a counterexample that replacing the drift and diffusion coefficient by their relaxed counterparts does not define a true relaxed control problem. We establish the existence of an optimal relaxed control, which can be approximated by a sequence of strict controls. Moreover, under some convexity conditions, we show that the optimal control is realized by a strict control.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A New Point-set Registration Algorithm for Fingerprint Matching
A novel minutia-based fingerprint matching algorithm is proposed that employs iterative global alignment on two minutia sets. The matcher considers all possible minutia pairings and iteratively aligns the two sets until the number of minutia pairs does not exceed the maximum number of allowable one-to-one pairings. The optimal alignment parameters are derived analytically via linear least squares. The first alignment establishes a region of overlap between the two minutia sets, which is then (iteratively) refined by each successive alignment. After each alignment, minutia pairs that exhibit weak correspondence are discarded. The process is repeated until the number of remaining pairs no longer exceeds the maximum number of allowable one-to-one pairings. The proposed algorithm is tested on both the FVC2000 and FVC2002 databases, and the results indicate that the proposed matcher is both effective and efficient for fingerprint authentication; it is fast and does not utilize any computationally expensive mathematical functions (e.g. trigonometric, exponential). In addition to the proposed matcher, another contribution of the paper is the analytical derivation of the least squares solution for the optimal alignment parameters for two point-sets lacking exact correspondence.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
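The matcher above derives the optimal alignment analytically via linear least squares. For paired 2-D point sets, that closed form is the Procrustes/Kabsch solution sketched below; the iterative pairing and pruning loop described in the abstract is not reproduced, and the toy data are synthetic.

```python
# Closed-form rigid alignment of paired 2-D point sets (least squares).
import numpy as np

def best_rigid_transform(P, Q):
    """Return R, t minimising sum ||R @ p + t - q||^2 over paired rows of P and Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                        # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # keep a proper rotation
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# usage: recover a known rotation and translation from noisy pairs
rng = np.random.default_rng(0)
P = rng.random((20, 2)) * 100
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
Q = P @ R_true.T + np.array([5.0, -3.0]) + rng.normal(scale=0.1, size=P.shape)
R, t = best_rigid_transform(P, Q)
print(np.round(R, 3), np.round(t, 2))
```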
A Faster Solution to Smale's 17th Problem I: Real Binomial Systems
Suppose $F:=(f_1,\ldots,f_n)$ is a system of random $n$-variate polynomials with $f_i$ having degree $\leq\!d_i$ and the coefficient of $x^{a_1}_1\cdots x^{a_n}_n$ in $f_i$ being an independent complex Gaussian of mean $0$ and variance $\frac{d_i!}{a_1!\cdots a_n!\left(d_i-\sum^n_{j=1}a_j \right)!}$. Recent progress on Smale's 17th Problem by Lairez --- building upon seminal work of Shub, Beltran, Pardo, Bürgisser, and Cucker --- has resulted in a deterministic algorithm that finds a single (complex) approximate root of $F$ using just $N^{O(1)}$ arithmetic operations on average, where $N\!:=\!\sum^n_{i=1}\frac{(n+d_i)!}{n!d_i!}$ ($=n(n+\max_i d_i)^{O(\min\{n,\max_i d_i)\}}$) is the maximum possible total number of monomial terms for such an $F$. However, can one go faster when the number of terms is smaller, and we restrict to real coefficient and real roots? And can one still maintain average-case polynomial-time with more general probability measures? We show the answer is yes when $F$ is instead a binomial system --- a case whose numerical solution is a key step in polyhedral homotopy algorithms for solving arbitrary polynomial systems. We give a deterministic algorithm that finds a real approximate root (or correctly decides there are none) using just $O(n^2(\log(n)+\log\max_i d_i))$ arithmetic operations on average. Furthermore, our approach allows Gaussians with arbitrary variance. We also discuss briefly the obstructions to maintaining average-case time polynomial in $n\log \max_i d_i$ when $F$ has more terms.
1
0
0
0
0
0
Making compression algorithms for Unicode text
The majority of online content is written in languages other than English, and is most commonly encoded in UTF-8, the world's dominant Unicode character encoding. Traditional compression algorithms typically operate on individual bytes. While this approach works well for the single-byte ASCII encoding, it works poorly for UTF-8, where characters often span multiple bytes. Previous research has focused on developing Unicode compressors from scratch, which often failed to outperform established algorithms such as bzip2. We develop a technique to modify byte-based compressors to operate directly on Unicode characters, and implement variants of LZW and PPM that apply this technique. We find that our method substantially improves compression effectiveness on a UTF-8 corpus, with our PPM variant outperforming the state-of-the-art PPMII compressor. On ASCII and binary files, our variants perform similarly to the original unmodified compressors.
1
0
1
0
0
0
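To illustrate the character-level idea summarized above, here is a hedged toy sketch of an LZW variant whose symbols are Unicode characters rather than bytes. Seeding the dictionary with the distinct characters of the input, instead of a fixed alphabet, is a simplification made only for this sketch and is not taken from the paper.

    def lzw_encode(text):
        alphabet = sorted(set(text))
        table = {ch: i for i, ch in enumerate(alphabet)}
        codes, current = [], ""
        for ch in text:
            candidate = current + ch
            if candidate in table:
                current = candidate
            else:
                codes.append(table[current])
                table[candidate] = len(table)  # grow the dictionary with the new phrase
                current = ch
        if current:
            codes.append(table[current])
        return alphabet, codes

    def lzw_decode(alphabet, codes):
        table = {i: ch for i, ch in enumerate(alphabet)}
        previous = table[codes[0]]
        out = [previous]
        for code in codes[1:]:
            entry = table[code] if code in table else previous + previous[0]
            out.append(entry)
            table[len(table)] = previous + entry[0]
            previous = entry
        return "".join(out)

    sample = "добрый день, добрый вечер"  # multi-byte in UTF-8, single symbols here
    alphabet, codes = lzw_encode(sample)
    print(lzw_decode(alphabet, codes) == sample)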
Adaptive Exact Learning of Decision Trees from Membership Queries
In this paper we study the adaptive learnability of decision trees of depth at most $d$ from membership queries. This has many applications in automated scientific discovery such as drug development and the software update problem. Feldman solves the problem with a randomized polynomial time algorithm that asks $\tilde O(2^{2d})\log n$ queries and Kushilevitz-Mansour with a deterministic polynomial time algorithm that asks $ 2^{18d+o(d)}\log n$ queries. We improve the query complexity of both algorithms. We give a randomized polynomial time algorithm that asks $\tilde O(2^{2d}) + 2^{d}\log n$ queries and a deterministic polynomial time algorithm that asks $2^{5.83d}+2^{2d+o(d)}\log n$ queries.
1
0
0
1
0
0
Image denoising by median filter in wavelet domain
The details of an image with noise may be restored by removing the noise through a suitable image de-noising method. In this research, a new method of image de-noising based on using a median filter (MF) in the wavelet domain is proposed and tested. Various types of wavelet transform filters are used in conjunction with the median filter in experiments with the proposed approach, in order to obtain better results for the image de-noising process and, consequently, to select the best-suited filter. The wavelet transform, which operates on the frequency sub-bands of an image, is a powerful method for image analysis. According to this experimental work, the proposed method presents better results than using the wavelet transform or the median filter alone. The MSE and PSNR values are used for measuring the improvement in the de-noised images.
1
0
0
0
0
0
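A minimal sketch of the approach summarized above, assuming PyWavelets and SciPy are available: decompose the noisy image, median-filter the detail sub-bands, and reconstruct. The wavelet ('db2'), decomposition level and filter window are arbitrary choices for illustration, not the settings evaluated in the paper.

    import numpy as np
    import pywt
    from scipy.ndimage import median_filter

    def denoise(image, wavelet="db2", level=2, size=3):
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        filtered = [coeffs[0]]  # keep the approximation sub-band untouched
        for details in coeffs[1:]:
            filtered.append(tuple(median_filter(band, size=size) for band in details))
        return pywt.waverec2(filtered, wavelet)

    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0, 1, 64), (64, 1))  # simple synthetic test image
    noisy = clean + rng.normal(scale=0.1, size=clean.shape)
    restored = denoise(noisy)[:64, :64]
    print(np.mean((noisy - clean) ** 2), np.mean((restored - clean) ** 2))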
Restoration of Images with Wavefront Aberrations
This contribution deals with image restoration in optical systems with coherent illumination, which is an important topic in astronomy, coherent microscopy and radar imaging. Such optical systems suffer from wavefront distortions, which are caused by imperfect imaging components and conditions. While known image restoration algorithms work well for incoherent imaging, they fail in the case of coherent images. In this paper a novel wavefront correction algorithm is presented, which allows image restoration under coherent conditions. In most coherent imaging systems, especially in astronomy, the wavefront deformation is known. Using this information, the proposed algorithm allows a high-quality restoration even in the case of severe wavefront distortions. We present two versions of this algorithm, which are an evolution of the Gerchberg-Saxton and the Hybrid Input-Output algorithms. The algorithm is verified on simulated and real microscopic images.
1
1
0
0
0
0
Controlling the shape of membrane protein polyhedra
Membrane proteins and lipids can self-assemble into membrane protein polyhedral nanoparticles (MPPNs). MPPNs have a closed spherical surface and a polyhedral protein arrangement, and may offer a new route for structure determination of membrane proteins and targeted drug delivery. We develop here a general analytic model of how MPPN self-assembly depends on bilayer-protein interactions and lipid bilayer mechanical properties. We find that the bilayer-protein hydrophobic thickness mismatch is a key molecular control parameter for MPPN shape that can be used to bias MPPN self-assembly towards highly symmetric and uniform MPPN shapes. Our results suggest strategies for optimizing MPPN shape for structural studies of membrane proteins and targeted drug delivery.
0
1
0
0
0
0
Realizing uniformly recurrent subgroups
We show that every uniformly recurrent subgroup of a locally compact group is the family of stabilizers of a minimal action on a compact space. More generally, every closed invariant subset of the Chabauty space is the family of stabilizers of an action on a compact space on which the stabilizer map is continuous everywhere. This answers a question of Glasner and Weiss. We also introduce the notion of a universal minimal flow relative to a uniformly recurrent subgroup and prove its existence and uniqueness.
0
0
1
0
0
0
A New Approximation Guarantee for Monotone Submodular Function Maximization via Discrete Convexity
In monotone submodular function maximization, approximation guarantees based on the curvature of the objective function have been extensively studied in the literature. However, the notion of curvature is often pessimistic, and we rarely obtain improved approximation guarantees, even for very simple objective functions. In this paper, we provide a novel approximation guarantee by extracting an M$^\natural$-concave function $h:2^E \to \mathbb R_+$, a notion in discrete convex analysis, from the objective function $f:2^E \to \mathbb R_+$. We introduce the notion of $h$-curvature, which measures how much $f$ deviates from $h$, and show that we can obtain a $(1-\gamma/e-\epsilon)$-approximation to the problem of maximizing $f$ under a cardinality constraint in polynomial time for any constant $\epsilon > 0$. Then, we show that we can obtain nontrivial approximation guarantees for various problems by applying the proposed algorithm.
1
0
0
0
0
0
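For background on the guarantees discussed above, the sketch below shows the classical greedy routine for maximizing a monotone submodular function under a cardinality constraint, applied to a toy coverage function. The paper's M-natural-concave extraction and h-curvature analysis are not reproduced here; this is only the standard baseline that such guarantees refer to.

    def greedy_max(elements, f, k):
        """Pick k elements, each time adding the one with the largest marginal gain of f."""
        selected = set()
        for _ in range(k):
            best, best_gain = None, float("-inf")
            for e in elements:
                if e in selected:
                    continue
                gain = f(selected | {e}) - f(selected)
                if gain > best_gain:
                    best, best_gain = e, gain
            selected.add(best)
        return selected

    # Toy coverage function: f(S) = number of distinct items covered by the chosen sets.
    universe = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
    coverage = lambda S: len(set().union(*(universe[s] for s in S))) if S else 0
    print(greedy_max(universe.keys(), coverage, k=2))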
Electrode Reactions in Slowly Relaxing Media
Standard models of reaction kinetics in condensed materials rely on the Boltzmann-Gibbs distribution for the population of reactants at the top of the free energy barrier separating them from the products. While energy dissipation and quantum effects at the barrier top can potentially affect the transmission coefficient entering the rate preexponential factor, much stronger dynamical effects on the reaction barrier are caused by the breakdown of ergodicity for populating the reaction barrier (violation of the Boltzmann-Gibbs statistics). When the spectrum of medium modes coupled to the reaction coordinate includes fluctuations slower than the reaction rate, such nuclear motions dynamically freeze on the reaction time-scale and do not contribute to the activation barrier. Here we consider the consequences of this scenario for electrode reactions in slowly relaxing media. Changing the electrode overpotential speeds up electrode electron transfer, potentially cutting through the spectrum of nuclear modes coupled to the reaction coordinate. The reorganization energy of electrochemical electron transfer becomes a function of the electrode overpotential, switching from the thermodynamic value at low rates to the nonergodic limit at higher rates. The sharpness of this transition depends on the relaxation spectrum of the medium. The reorganization energy experiences a sudden drop with increasing overpotential for a medium with a Debye relaxation, but becomes a much shallower function of the overpotential for media with stretched exponential dynamics. The latter scenario characterizes electron transfer in ionic liquids. The analysis of electrode reactions in room-temperature ionic liquids shows that the magnitude of the free energy of nuclear solvation is significantly below its thermodynamic limit.
0
1
0
0
0
0
End-to-end 3D face reconstruction with deep neural networks
Monocular 3D facial shape reconstruction from a single 2D facial image has been an active research area due to its wide applications. Inspired by the success of deep neural networks (DNN), we propose a DNN-based approach for End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Different from recent works that reconstruct and refine the 3D face in an iterative manner using both an RGB image and an initial 3D facial shape rendering, our DNN model is end-to-end, and thus the complicated 3D rendering process can be avoided. Moreover, we integrate in the DNN architecture two components, namely a multi-task loss function and a fusion convolutional neural network (CNN) to improve facial expression reconstruction. With the multi-task loss function, 3D face reconstruction is divided into neutral 3D facial shape reconstruction and expressive 3D facial shape reconstruction. The neutral 3D facial shape is class-specific. Therefore, higher layer features are useful. In comparison, the expressive 3D facial shape favors lower or intermediate layer features. With the fusion-CNN, features from different intermediate layers are fused and transformed for predicting the 3D expressive facial shape. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction.
1
0
0
0
0
0
Entanglement of photons in their dual wave-particle nature
Wave-particle duality is the most fundamental description of the nature of a quantum object which behaves like a classical particle or wave depending on the measurement apparatus. On the other hand, entanglement represents nonclassical correlations of composite quantum systems, being also a key resource in quantum information. Despite the very recent observations of wave-particle superposition and entanglement, whether these two fundamental traits of quantum mechanics can emerge simultaneously remains an open issue. Here we introduce and experimentally realize a scheme that deterministically generates wave-particle entanglement of two photons. The elementary tool allowing this achievement is a scalable single-photon setup which can be in principle extended to generate multiphoton wave-particle entanglement. Our study reveals that photons can be entangled in their dual wave-particle nature and opens the way to potential applications in quantum information protocols exploiting the wave-particle degrees of freedom to encode qubits.
0
1
0
0
0
0
On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm
Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantage, we face two crucial limitations when we apply the TT decomposition to machine learning problems: the lack of statistical theory and of scalable algorithms. In this paper, we address these limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop an alternating optimization method with a randomization technique, whose time complexity is as efficient as its space complexity. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method on a real higher-order tensor.
0
0
0
1
0
0
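As background for the abstract above, the following numpy sketch implements the standard TT-SVD decomposition, which makes the tensor train format concrete; the paper's convex relaxation and randomized alternating optimization are not reproduced. The maximum rank and the random test tensor are arbitrary choices.

    import numpy as np

    def tt_svd(tensor, max_rank):
        """Decompose `tensor` into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
        dims = tensor.shape
        cores, rank = [], 1
        mat = tensor.reshape(dims[0], -1)
        for k in range(len(dims) - 1):
            mat = mat.reshape(rank * dims[k], -1)
            u, s, vt = np.linalg.svd(mat, full_matrices=False)
            new_rank = min(max_rank, len(s))
            cores.append(u[:, :new_rank].reshape(rank, dims[k], new_rank))
            mat = s[:new_rank, None] * vt[:new_rank]  # carry the remainder forward
            rank = new_rank
        cores.append(mat.reshape(rank, dims[-1], 1))
        return cores

    def tt_reconstruct(cores):
        out = cores[0]
        for core in cores[1:]:
            out = np.tensordot(out, core, axes=([-1], [0]))
        return out.reshape(out.shape[1:-1])

    A = np.random.rand(4, 5, 6, 3)
    cores = tt_svd(A, max_rank=30)  # rank large enough for an exact decomposition
    print(np.allclose(tt_reconstruct(cores), A))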
Threshold fluctuations in a superconducting current-carrying bridge
We calculate the energy of the threshold fluctuation $\delta F_{thr}$ which triggers the transition of a superconducting current-carrying bridge to the resistive state. We show that the dependence $\delta F_{thr}(I)\propto I_{dep}\hbar(1-I/I_{dep})^{5/4}/e$, found by Langer and Ambegaokar for a long bridge with length $L \gg \xi$, holds far below the critical temperature both in the dirty and clean limits (here $I_{dep}$ is the depairing current of the bridge and $\xi$ is the coherence length). We also find that even a 'weak' local defect (leading to a small suppression of the critical current of the bridge, $I_c \lesssim I_{dep}$) yields $\delta F_{thr}\propto I_c\hbar(1-I/I_c)^{3/2}/e$, typical for a short bridge with $L \ll \xi$ or a Josephson junction.
0
1
0
0
0
0
Transient phenomena in a three-layer waveguide and the analytical structure of the dispersion diagram
Excitation of waves in a three-layer acoustic waveguide is studied. The wave field is presented as a sum of integrals. The summation runs over all waveguide modes, and the integration is performed over the temporal frequency axis. The dispersion diagram of the waveguide is analytically continued, and the integral is transformed, by deformation of the integration contour, into the domain of complex frequencies. As a result, the expression for the fast components of the signal (i.e. for the transient fields) is simplified. The structure of the Riemann surface of the dispersion diagram of the waveguide is studied. For this, a family of auxiliary problems indexed by parameters describing the links between the layers is introduced. The family depends on the linking parameters analytically, and the limiting case of weak links can be solved analytically.
0
0
1
0
0
0
Introducing symplectic billiards
In this article we introduce a simple dynamical system called symplectic billiards. As opposed to usual/Birkhoff billiards, where length is the generating function, for symplectic billiards symplectic area is the generating function. We explore basic properties and exhibit several similarities, but also differences of symplectic billiards to Birkhoff billiards.
0
0
1
0
0
0
Banchoff's sphere and branched covers over the trefoil
A filling Dehn surface in a $3$-manifold $M$ is a generically immersed surface in $M$ that induces a cellular decomposition of $M$. Given a tame link $L$ in $M$ there is a filling Dehn sphere of $M$ that "trivializes" (\emph{diametrically splits}) it. This allows one to construct filling Dehn surfaces in the coverings of $M$ branched over $L$. It is shown that one of the simplest filling Dehn spheres of $S^3$ (Banchoff's sphere) diametrically splits the trefoil knot. Filling Dehn spheres, and their Johansson diagrams, are constructed for the coverings of $S^3$ branched over the trefoil. The construction is explained in detail. Johansson diagrams for generic cyclic coverings and for the simplest locally cyclic and irregular ones are constructed explicitly, providing new proofs of known results about cyclic coverings and the $3$-fold irregular covering over the trefoil.
0
0
1
0
0
0
The Impact of Social Curiosity on Information Spreading on Networks
Most information spreading models consider that all individuals are identical psychologically. They ignore, for instance, the curiosity level of people, which may indicate that they can be influenced to seek information given their interest. For example, the game Pokémon GO spread rapidly because of the aroused curiosity among users. This paper proposes an information propagation model considering the curiosity level of each individual, which is a dynamical parameter that evolves over time. We evaluate the efficiency of our model in contrast to traditional information propagation models, like SIR or IC, and perform analysis on different types of artificial and real-world networks, like Google+, Facebook, and the United States road map. We present a mean-field approach that reproduces with good accuracy the evolution of macroscopic quantities, such as the density of stiflers, for the system's behavior with curiosity. We also obtain an analytical solution of the mean-field equations that allows us to predict a transition from a phase where the information remains confined to a small number of users to a phase where it spreads over a large fraction of the population. The results indicate that curiosity increases the information spreading in all networks as compared with the spreading without curiosity, and that this increase is larger in spatial networks than in social networks. When curiosity is taken into account, the maximum number of informed individuals is reached close to the transition point. Since curious people are more open to new products, concepts, and ideas, this is an important factor to be considered in propagation modeling. Our results contribute to the understanding of the interplay between the diffusion process and dynamical heterogeneous transmission in social networks.
1
1
0
0
0
0
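The toy simulation below is only a loose illustration of the kind of model described above, not the paper's exact dynamics: an SIR-style rumor process on a network in which an ignorant node's chance of becoming a spreader is scaled by a per-node curiosity value. The parameter values, the static (rather than evolving) curiosity, and the update rule are all assumptions made for this sketch.

    import random
    import networkx as nx

    def spread(graph, beta=0.3, recover=0.1, steps=50, seed=0):
        rng = random.Random(seed)
        curiosity = {n: rng.random() for n in graph}  # kept static here for simplicity
        state = {n: "ignorant" for n in graph}
        state[next(iter(graph))] = "spreader"
        for _ in range(steps):
            new_state = dict(state)
            for n in graph:
                if state[n] == "ignorant":
                    exposed = any(state[m] == "spreader" for m in graph.neighbors(n))
                    if exposed and rng.random() < beta * curiosity[n]:
                        new_state[n] = "spreader"
                elif state[n] == "spreader" and rng.random() < recover:
                    new_state[n] = "stifler"
            state = new_state
        return sum(s != "ignorant" for s in state.values()) / graph.number_of_nodes()

    g = nx.barabasi_albert_graph(500, 3, seed=1)
    print("informed fraction:", spread(g))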
Decomposition of mean-field Gibbs distributions into product measures
We show that under a low complexity condition on the gradient of a Hamiltonian, Gibbs distributions on the Boolean hypercube are approximate mixtures of product measures whose probability vectors are critical points of an associated mean-field functional. This extends a previous work by the first author. As an application, we demonstrate how this framework helps characterize both Ising models satisfying a mean-field condition and the conditional distributions which arise in the emerging theory of nonlinear large deviations, both in the dense case and in the polynomially-sparse case.
0
0
1
1
0
0
Analytic Discs and Uniform Algebras on Real-Analytic Varieties
Under very general conditions it is shown that if $A$ is a uniform algebra generated by real-analytic functions, then either $A$ consists of all continuous functions or else there exists a disc on which every function in $A$ is holomorphic. This strengthens several earlier results concerning uniform algebras generated by real-analytic functions.
0
0
1
0
0
0
Novel solid state vacuum quartz encapsulated growth of p-Terphenyl: the parent High Tc Organic Superconductor (HTOS)
We report an easy and versatile route for the synthesis of the parent phase of the newest superconducting wonder material, i.e., p-Terphenyl. Doped p-terphenyl has recently shown superconductivity with a transition temperature as high as 120 K. For crystal growth, commercially available p-Terphenyl powder is pelletized, encapsulated in an evacuated (10^-4 Torr) quartz tube and subjected to a high-temperature (260 °C) melt followed by slow cooling at 5 °C/hour. A simple temperature-controlled heating furnace is used during the process. The obtained crystal is one piece, shiny and plate-like. Single-crystal surface XRD (X-ray Diffraction) showed unidirectional (00l) lines, indicating that the crystal is grown along the c-direction. Powder XRD of the specimen showed that the as-grown p-Terphenyl is crystallized in a monoclinic structure with space group P21/a, having lattice parameters a = 8.08(2) Å, b = 5.62(5) Å and c = 13.58(3) Å. Scanning electron microscopy (SEM) pictures of the crystal showed clear layered slab-like growth without any visible contamination from oxygen. Characteristic reported Raman-active modes related to C-C-C bending, C-H bending, C-C stretching and C-H stretching vibrations are seen clearly for the studied p-Terphenyl crystal. Measurements of the physical properties of the crystal are under way. This short letter reports an easy and versatile crystal growth method for obtaining quality p-Terphenyl. The same growth method may probably be applied to doped p-terphenyl, to subsequently achieve superconductivity as high as 120 K for the newest superconductivity wonder, i.e., the High Tc Organic Superconductor (HTOS).
0
1
0
0
0
0
Twistors from Killing Spinors alias Radiation from Pair Annihilation I: Theoretical Considerations
This paper is intended as a further step in our Killing spinor programme started with Class. Quantum Grav. \textbf{32}, 175007 (2015), and we advance that programme in accordance with the road map recently given in arXiv:1611.04424v2. In the latter reference many open problems were stated, one of which concerned the uncovered relations between specific spinors in spacetime, represented by an arrow diagram built upon them. This work treats one of these arrows in almost all of its details and ends with an important physical interpretation of this setup in terms of the quantum electrodynamical pair annihilation process. This method will shed light on the classification of pseudo-Riemannian manifolds admitting twistors in connection with the classification problem related to Killing spinors. Many physical interpretations are given throughout the text, some of which involve the dynamics of brane immersions, quantum field theoretical considerations and black hole evaporation.
0
0
1
0
0
0
Table Space Designs For Implicit and Explicit Concurrent Tabled Evaluation
One of the main advantages of Prolog is its potential for the implicit exploitation of parallelism and, as a high-level language, Prolog is also often used as a means to explicitly control concurrent tasks. Tabling is a powerful implementation technique that overcomes some limitations of traditional Prolog systems in dealing with recursion and redundant sub-computations. Given these advantages, the question that arises is whether tabling also has the potential for the exploitation of concurrency/parallelism. On one hand, tabling still exploits a search space as traditional Prolog does but, on the other hand, the concurrent model of tabling is necessarily far more complex since it also introduces concurrency on the access to the tables. In this paper, we summarize Yap's main contributions to concurrent tabled evaluation and we describe the design and implementation challenges of several alternative table space designs for implicit and explicit concurrent tabled evaluation, which represent different trade-offs between concurrency and memory usage. We also argue for the advantages of using fixed-size and lock-free data structures, elaborate on the key role that the engine's memory allocator plays in such environments, and discuss how Yap's mode-directed tabling support can be extended to concurrent evaluation. Finally, we present our future perspectives towards an efficient and novel concurrent framework which integrates both implicit and explicit concurrent tabled evaluation in a single Prolog engine. Under consideration in Theory and Practice of Logic Programming (TPLP).
1
0
0
0
0
0
Evidence for triplet superconductivity near an antiferromagnetic instability in CrAs
Superconductivity was recently observed in CrAs as the helimagnetic order is suppressed by applying pressure, suggesting possible unconventional superconductivity. To reveal the nature of the superconducting order parameter of CrAs, here we report the angular dependence of the upper critical field under pressure. Upon rotating the field by 360 degrees in the $bc$-plane, six maxima are observed in the upper critical field, where the oscillations have both six-fold and two-fold symmetric components. Our analysis suggests the presence of an unconventional odd-parity spin triplet state.
0
1
0
0
0
0
Singlet ground state in the spin-$1/2$ weakly coupled dimer compound NH$_4$[(V$_2$O$_3$)$_2$(4,4$^\prime$-$bpy$)$_2$(H$_2$PO$_4$)(PO$_4$)$_2$]$\cdot$0.5H$_2$O
We present the synthesis and a detailed investigation of structural and magnetic properties of polycrystalline NH$_4$[(V$_2$O$_3$)$_2$(4,4$^\prime$-$bpy$)$_2$(H$_2$PO$_4$)(PO$_4$)$_2$]$\cdot$0.5H$_2$O by means of x-ray diffraction, magnetic susceptibility, electron spin resonance, and $^{31}$P nuclear magnetic resonance measurements. Temperature dependent magnetic susceptibility could be described well using a weakly coupled spin-$1/2$ dimer model with an excitation gap $\Delta/k_{\rm B}\simeq 26.1$ K between the singlet ground state and triplet excited states and a weak inter-dimer exchange coupling $J^\prime/k_{\rm B} \simeq 4.6$ K. A gapped chain model also describes the data well with a gap of about 20 K. The ESR intensity as a function of temperature traces the bulk susceptibility nicely. The isotropic Land$\acute{\rm e}$ $g$-factor is estimated to be about $g \simeq 1.97$, at room temperature. We are able to resolve the $^{31}$P NMR signal as coming from two inequivalent P-sites in the crystal structure. The hyperfine coupling constant between $^{31}$P nucleus and V$^{4+}$ spins is calculated to be $A_{\rm hf}(1) \simeq 2963$ Oe/$\mu_{\rm B}$ and $A_{\rm hf}(2) \simeq 1466$ Oe/$\mu_{\rm B}$ for the P(1) and P(2) sites, respectively. Our NMR shift and spin-lattice relaxation rate for both the $^{31}$P sites show an activated behaviour at low temperatures, further confirming the singlet ground state. The estimated value of the spin gap from the NMR data measured in an applied field of $H = 9.394$ T is consistent with the gap obtained from the magnetic susceptibility analysis using the dimer model. Because of a relatively small spin gap, NH$_4$[(V$_2$O$_3$)$_2$(4,4$^\prime$-$bpy$)$_2$(H$_2$PO$_4$)(PO$_4$)$_2$]$\cdot$0.5H$_2$O is a promising compound for further experimental studies under high magnetic fields.
0
1
0
0
0
0
Deep learning bank distress from news and numerical financial data
In this paper we focus our attention on exploiting the information contained in financial news to enhance the performance of a classifier of bank distress. Such information should be analyzed and inserted into the predictive model in the most efficient way, and this task involves all the issues related to text analysis, and specifically the analysis of news media. Among the different models proposed for this purpose, we investigate one of the possible deep learning approaches, based on a doc2vec representation of the textual data: a kind of neural network able to map the sequential and symbolic text input onto a reduced latent semantic space. Afterwards, a second supervised neural network is trained, combining news data with standard financial figures, to classify banks as being in a distressed or tranquil state, based on a small set of known distress events. The final aim is thus not only to improve the predictive performance of the classifier but also to assess the importance of news data in the classification process. Do news data really bring useful information not contained in standard financial variables? Our results seem to confirm this hypothesis.
1
0
0
1
0
0
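A hedged sketch of the two-stage pipeline described above, using gensim's Doc2Vec for the text representation and a simple scikit-learn classifier standing in for the second supervised neural network. The tiny in-line news snippets, financial figures and distress labels are fabricated placeholders, not data from the paper.

    import numpy as np
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.linear_model import LogisticRegression

    news = ["bank reports record losses and liquidity stress",
            "regulator approves dividend after strong quarter",
            "emergency funding requested amid deposit outflows",
            "profits rise on higher interest margins"]
    financials = np.array([[0.02, 1.1], [0.11, 0.3], [0.01, 1.4], [0.13, 0.2]])  # made-up figures
    distress = np.array([1, 0, 1, 0])

    docs = [TaggedDocument(text.split(), [i]) for i, text in enumerate(news)]
    d2v = Doc2Vec(docs, vector_size=16, min_count=1, epochs=60, seed=1)
    text_vecs = np.vstack([d2v.infer_vector(text.split()) for text in news])

    features = np.hstack([text_vecs, financials])  # combine news embedding with figures
    clf = LogisticRegression(max_iter=1000).fit(features, distress)
    print(clf.predict(features))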
Theoretical studies of superconductivity in doped BaCoSO
We investigate superconductivity that may exist in doped BaCoSO, a multi-orbital Mott insulator with a strong antiferromagnetic ground state. The superconductivity is studied in both t-J type and Hubbard type multi-orbital models by a mean field approach and random phase approximation (RPA) analysis. Even though there is no C4 rotational symmetry, it is found that the system still carries a d-wave like pairing symmetry state with gapless nodes and sign-changed superconducting order parameters on the Fermi surfaces. The results are largely doping insensitive. In this superconducting state, the three t2g orbitals have very different superconducting form factors in momentum space. In particular, the intra-orbital pairing of the dx2-y2 orbital has an s-wave like pairing form factor. The two methods also predict very different pairing strengths on different parts of the Fermi surfaces. These results suggest that BaCoSO and related materials can be a new ground to test and establish fundamental principles for unconventional high temperature superconductivity.
0
1
0
0
0
0
On low for speed oracles
Relativizing computations of Turing machines to an oracle is a central concept in the theory of computation, both in complexity theory and in computability theory. Inspired by lowness notions from computability theory, Allender introduced the concept of "low for speed" oracles. An oracle A is low for speed if relativizing to A has essentially no effect on computational complexity, meaning that if a decidable language can be decided in time $f(n)$ with access to oracle A, then it can be decided in time poly(f(n)) without any oracle. The existence of non-computable such oracles was later proven by Bayer and Slaman, who even constructed a computably enumerable one, and exhibited a number of properties of these oracles as well as interesting connections with computability theory. In this paper, we pursue this line of research, answering the questions left open by Bayer and Slaman and giving further evidence that the structure of the class of low for speed oracles is a very rich one.
1
0
1
0
0
0
$\mbox{Rb}_{2}\mbox{Ti}_2\mbox{O}_{5-δ}$: A superionic conductor with colossal dielectric constant
Electrical conductivity and a high dielectric constant are in principle mutually exclusive, which makes the terms insulator and dielectric usually synonymous. This is certainly true when the electrical carriers are electrons, but not necessarily in a material where ions are extremely mobile, electronic conduction is negligible and the charge transfer at the interface is immaterial. Here we demonstrate, in a perovskite-derived structure containing five-coordinated Ti atoms, a colossal dielectric constant (up to $\mbox{10}^9$) together with very high ionic conduction of $\mbox{10}^{-3}\mbox{S.cm}^{-1}$ at room temperature. Coupled investigations of the I-V and dielectric constant behavior allow us to demonstrate that, due to ion migration and accumulation, this material behaves like a giant dipole, exhibiting colossal electrical polarization (of the order of $\mbox{0.1\,C.cm}^{-2}$). Therefore, it may be considered a "ferro-ionet" and is extremely promising in terms of applications.
0
1
0
0
0
0
LinNet: Probabilistic Lineup Evaluation Through Network Embedding
Which of your team's possible lineups has the best chance against each of your opponent's possible lineups? In order to answer this question we develop LinNet. LinNet exploits the dynamics of a directed network that captures the performance of lineups in their matchups. The nodes of this network represent the different lineups, while an edge from node j to node i exists if lineup i has outperformed lineup j. We further annotate each edge with the corresponding performance margin (point margin per minute). We then utilize this structure to learn a set of latent features for each node (i.e., lineup) using the node2vec framework. Consequently, LinNet builds a model on this latent space for the probability of lineup A beating lineup B. We evaluate LinNet using NBA lineup data from the five seasons between 2007-08 and 2011-12. Our results indicate that our method has an out-of-sample accuracy of 69%. In comparison, utilizing the adjusted plus-minus of the players within a lineup for the same prediction problem provides an accuracy of 56%. More importantly, the probabilities are well-calibrated, as shown by the probability validation curves. One of the benefits of LinNet - apart from its accuracy - is that it is generic and can be applied in different sports, since the only input required is the lineups' matchup performances, i.e., no sport-specific features are needed.
0
0
0
1
0
0
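A rough sketch of the pipeline described above. The directed outperformance network is built with networkx; a truncated SVD of its weighted adjacency matrix is used here as a stand-in for the node2vec embedding, and a logistic model maps pairs of lineup embeddings to a win probability. Lineup names and margins are invented for illustration.

    import numpy as np
    import networkx as nx
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import LogisticRegression

    matchups = [("L1", "L2", 0.20), ("L3", "L1", 0.05), ("L2", "L3", 0.10),
                ("L4", "L2", 0.15), ("L1", "L4", 0.08), ("L3", "L4", 0.12)]

    g = nx.DiGraph()
    for winner, loser, margin in matchups:  # edge loser -> winner, weighted by the margin
        g.add_edge(loser, winner, weight=margin)

    nodes = sorted(g.nodes())
    adj = nx.to_numpy_array(g, nodelist=nodes, weight="weight")
    embed = TruncatedSVD(n_components=2, random_state=0).fit_transform(adj)
    vec = {n: embed[i] for i, n in enumerate(nodes)}

    X = np.array([np.concatenate([vec[w], vec[l]]) for w, l, _ in matchups] +
                 [np.concatenate([vec[l], vec[w]]) for w, l, _ in matchups])
    y = np.array([1] * len(matchups) + [0] * len(matchups))
    model = LogisticRegression().fit(X, y)
    print("P(L1 beats L3):", model.predict_proba([np.concatenate([vec["L1"], vec["L3"]])])[0, 1])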
Observational Equivalence in System Estimation: Contractions in Complex Networks
This paper focuses on the observability of complex systems/networks, which is shown to be closely related to the concept of contraction. Indeed, for observable network tracking it is necessary/sufficient to measure one node in each contraction. Therefore, nodes in a contraction are equivalent for recovering loss of observability, implying that contraction size is a key factor for observability recovery. Here, using a polynomial-order contraction detection algorithm, we analyze the distribution of contractions, studying its relation with key network properties. Our results show that contraction size is related to the network clustering coefficient and degree heterogeneity. Particularly, in networks with a power-law degree distribution, if the clustering coefficient is high there are fewer contractions, with smaller size on average. The implication is that estimation/tracking of such systems requires fewer measurements, while their observational recovery is more restrictive in case of sensor failure. Further, in Small-World networks higher degree heterogeneity implies that there are more contractions, with smaller size on average. Therefore, the estimation of the underlying system requires more measurements, and the recovery from measurement failure is also more limited. These results imply that one can tune the properties of synthetic networks to alleviate their estimation/observability recovery.
1
0
0
0
0
0
Incremental Adversarial Domain Adaptation for Continually Changing Environments
Continuous appearance shifts such as changes in weather and lighting conditions can impact the performance of deployed machine learning models. While unsupervised domain adaptation aims to address this challenge, current approaches do not utilise the continuity of the occurring shifts. In particular, many robotics applications exhibit these conditions and thus facilitate the potential to incrementally adapt a learnt model over minor shifts which integrate to massive differences over time. Our work presents an adversarial approach for lifelong, incremental domain adaptation which benefits from unsupervised alignment to a series of intermediate domains which successively diverge from the labelled source domain. We empirically demonstrate that our incremental approach improves handling of large appearance changes, e.g. day to night, on a traversable-path segmentation task compared with a direct, single alignment step approach. Furthermore, by approximating the feature distribution for the source domain with a generative adversarial network, the deployment module can be rendered fully independent of retaining potentially large amounts of the related source training data for only a minor reduction in performance.
1
0
0
1
0
0
SAND: An automated VLBI imaging and analysing pipeline - I. Stripping component trajectories
We present our implementation of an automated VLBI data reduction pipeline dedicated to interferometric data imaging and analysis. The pipeline can handle massive VLBI data efficiently which makes it an appropriate tool to investigate multi-epoch multiband VLBI data. Compared to traditional manual data reduction, our pipeline provides more objective results since less human interference is involved. Source extraction is done in the image plane, while deconvolution and model fitting are done in both the image plane and the uv plane for parallel comparison. The output from the pipeline includes catalogues of CLEANed images and reconstructed models, polarisation maps, proper motion estimates, core light curves and multi-band spectra. We have developed a regression strip algorithm to automatically detect linear or non-linear patterns in the jet component trajectories. This algorithm offers an objective method to match jet components at different epochs and determine their proper motions.
0
1
0
0
0
0
Quench-induced entanglement and relaxation dynamics in Luttinger liquids
We investigate the time evolution towards the asymptotic steady state of a one dimensional interacting system after a quantum quench. We show that at finite time the latter induces entanglement between right- and left-moving density excitations, encoded in their cross-correlators, which vanishes in the long-time limit. This behavior results in a universal time-decay in system spectral properties $ \propto t^{-2} $, in addition to non-universal power-law contributions typical of Luttinger liquids. Importantly, we argue that the presence of quench-induced entanglement clearly emerges in transport properties, such as charge and energy currents injected in the system from a biased probe, and determines their long-time dynamics. In particular, the energy fractionalization phenomenon turns out to be a promising platform to observe the universal power-law decay $ \propto t^{-2} $ induced by entanglement and represents a novel way to study the corresponding relaxation mechanism.
0
1
0
0
0
0
Coded Caching Schemes with Low Rate and Subpacketizations
Coded caching, which is an effective technique to increase the transmission efficiency during peak traffic times, has recently become quite popular in the coding community. The rate measures the transmission load during peak traffic times, i.e., the efficiency increases as the rate decreases. In order to implement a coded caching scheme, each file in the library must be split into a certain number of packets, and this number directly reflects the complexity of a coded caching scheme, i.e., the complexity increases with the packet number. However, there exists a tradeoff between the rate and the packet number, so it is meaningful to characterize this tradeoff and design the related Pareto-optimal coded caching schemes with respect to both parameters. Recently, a new concept called placement delivery array (PDA) was proposed to characterize coded caching schemes. However, as far as we know, none of the previously known PDAs has been proved Pareto-optimal. In this paper, we first derive two lower bounds on the rate under the framework of PDAs. Consequently, the PDA proposed by Maddah-Ali and Niesen is Pareto-optimal, and a tradeoff between rate and packet number is obtained for some parameters. Then, from the above observations and the viewpoint of combinatorial design, two new classes of Pareto-optimal PDAs are obtained. Based on these PDAs, schemes with low rate and packet number are obtained. Finally, the performance of some previously known PDAs is assessed by comparing them with these two classes of schemes.
1
0
0
0
0
0
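The Maddah-Ali-Niesen scheme referenced above can be written down as a placement delivery array in a few lines. The sketch below constructs the classical PDA for K users and integer t = KM/N: rows are indexed by the t-subsets (packet indices), columns by users, '*' marks a cached packet, and each (t+1)-subset labels one multicast transmission. This is only the standard baseline construction, shown for background.

    from itertools import combinations

    def mn_pda(K, t):
        users = range(K)
        rows = list(combinations(users, t))  # one row per packet index
        pda = []
        for T in rows:
            pda.append(["*" if k in T else frozenset(T) | {k} for k in users])
        return rows, pda

    rows, pda = mn_pda(K=4, t=2)
    for T, row in zip(rows, pda):
        print(T, [cell if cell == "*" else tuple(sorted(cell)) for cell in row])
    # Each non-'*' label (a 3-subset here) appears t+1 = 3 times, so one XOR serves
    # three users at once, giving rate C(K, t+1)/C(K, t) = 2/3 for K = 4, t = 2.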
Multitask diffusion adaptation over networks with common latent representations
Online learning with streaming data in a distributed and collaborative manner can be useful in a wide range of applications. This topic has been receiving considerable attention in recent years with emphasis on both single-task and multitask scenarios. In single-task adaptation, agents cooperate to track an objective of common interest, while in multitask adaptation agents track multiple objectives simultaneously. Regularization is one useful technique to promote and exploit similarity among tasks in the latter scenario. This work examines an alternative way to model relations among tasks by assuming that they all share a common latent feature representation. As a result, a new multitask learning formulation is presented and algorithms are developed for its solution in a distributed online manner. We present a unified framework to analyze the mean-square-error performance of the adaptive strategies, and conduct simulations to illustrate the theoretical findings and potential applications.
1
0
0
1
0
0
Strong perpendicular magnetic anisotropy energy density at Fe alloy/HfO2 interfaces
We report on the perpendicular magnetic anisotropy (PMA) behavior of heavy metal (HM)/Fe alloy/MgO thin film heterostructures after an ultrathin HfO2 passivation layer is inserted between the Fe alloy and the MgO. This is accomplished by depositing one to two atomic layers of Hf onto the Fe alloy before the subsequent rf sputter deposition of the MgO layer. This Hf layer is fully oxidized during the subsequent deposition of the MgO layer, as confirmed by X-ray photoelectron spectroscopy measurements. As a result, a strong interfacial perpendicular anisotropy energy density can be achieved without any post-fabrication annealing treatment, for example 1.7 erg/cm^2 for the Ta/Fe60Co20B20/HfO2/MgO heterostructure. Depending on the HM, further enhancements of the PMA can be realized by thermal annealing to at least 400 °C. We show that ultra-thin HfO2 layers offer a range of options for enhancing the magnetic properties of magnetic heterostructures for spintronics applications.
0
1
0
0
0
0
Covering Groups of Nonconnected Topological Groups and 2-Groups
We investigate the universal cover of a topological group that is not necessarily connected. Its existence as a topological group is governed by a Taylor cocycle, an obstruction in 3-cohomology. Alternatively, it always exists as a topological 2-group. The splitness of this 2-group is also governed by an obstruction in 3-cohomology, a Sinh cocycle. We give explicit formulas for both obstructions and show that they are inverses of each other.
0
0
1
0
0
0
Polarization properties of turbulent synchrotron bubbles: an approach based on Chandrasekhar-Kendall functions
Synchrotron emitting bubbles arise when the outflow from a compact relativistic engine, either a black hole or a neutron star, impacts on the environment. The emission properties of synchrotron radiation are widely used to infer the dynamical properties of these bubbles, and from them the injection conditions of the engine. Radio polarization offers an important tool to investigate the level and spectrum of turbulence, the magnetic field configuration, and possibly the degree of mixing. Here we introduce a formalism based on Chandrasekhar-Kendall functions that allows us to properly take into account the geometry of the bubble, going beyond standard analyses based on periodic cartesian domains. We investigate how different turbulent spectra, magnetic helicity and particle distribution functions impact global properties that are easily accessible to observations, even at low resolution, and we provide fitting formulae to relate observed quantities to the underlying magnetic field structure.
0
1
0
0
0
0
Efimov Effect in the Dirac Semi-metals
The Efimov effect refers to quantum states with discrete scaling symmetry and a universal scaling factor, and has attracted considerable interest from the nuclear and atomic physics communities. In a Dirac semi-metal, when an electron interacts with a static impurity through a Coulomb interaction, the same scaling of the kinetic and interaction energies also gives rise to such an Efimov effect. However, even when the Fermi energy lies exactly at the Dirac point, the vacuum polarization of electron-hole pair fluctuations can still screen the Coulomb interaction, which leads to a deviation from this scaling symmetry and eventually to the breakdown of the Efimov effect. This distortion of the Efimov bound state energy due to vacuum polarization is a relativistic electron analogue of the Lamb shift for the hydrogen atom. Motivated by recent experimental observations in two- and three-dimensional Dirac semi-metals, in this paper we investigate this many-body correction to the Efimov effect, and answer the question of under what conditions a good number of Efimov-like bound states can still be observed in these condensed matter experiments.
0
1
0
0
0
0
Logic Lectures: Gödel's Basic Logic Course at Notre Dame
An edited version is given of the text of Gödel's unpublished manuscript of the notes for a course in basic logic he delivered at the University of Notre Dame in 1939. Gödel's notes deal with what are today considered important logical problems par excellence, completeness, decidability, independence of axioms, and with natural deduction too, which was all still a novelty at the time the course was delivered. Full of regard for beginners, the notes are not excessively formalistic. Gödel presumably intended them just for himself, and they are full of abbreviations. This, together with some other matters (like two versions of the same topic, and guessing the right order of the pages), required additional effort to obtain a readable edited version. Because of the quality of the material provided by Gödel, including also important philosophical points, this effort should however be worthwhile. The edited version of the text is accompanied by another version, called the source version, which is quite close to Gödel's manuscript. It is meant to be a record of the editorial interventions involved in producing the edited version (in particular, how the abbreviations were disabridged), and a justification of that latter version.
0
0
1
0
0
0
Field dependent neutron diffraction study in Ni50Mn38Sb12 Heusler alloy
In this paper, we present a temperature and field dependent neutron diffraction (ND) study to unravel the structural and magnetic properties of the Ni50Mn38Sb12 Heusler system. This alloy shows a martensitic transition from a high temperature austenite cubic phase to a low temperature martensite orthorhombic phase on cooling. At 3 K, the lattice parameters and magnetic moments are found to be almost insensitive to the field. Just below the martensitic transition temperature, the martensite phase fraction is found to be 85%. Upon applying the field, the austenite phase becomes dominant, and the field induced reverse martensitic transition is clearly observed in the ND data. Therefore, the present study gives an estimate of the strength of the martensite phase, or the sharpness of the martensitic transition. The variation of individual moments and the change in the phase fraction obtained from the analysis of the ND data vividly show the change in the magneto-structural state of the material across the transition.
0
1
0
0
0
0
Optimistic lower bounds for convex regularized least-squares
Minimax lower bounds are pessimistic in nature: for any given estimator, minimax lower bounds yield the existence of a worst-case target vector $\beta^*_{worst}$ for which the prediction error of the given estimator is bounded from below. However, minimax lower bounds shed no light on the prediction error of the given estimator for target vectors different from $\beta^*_{worst}$. A characterization of the prediction error of any convex regularized least-squares estimator is given. This characterization provides both a lower bound and an upper bound on the prediction error. This produces lower bounds that are applicable to any target vector and not only to a single, worst-case $\beta^*_{worst}$. Finally, these lower and upper bounds on the prediction error are applied to the Lasso in sparse linear regression. We obtain a lower bound involving the compatibility constant for any tuning parameter, matching upper and lower bounds for the universal choice of the tuning parameter, and a lower bound for the Lasso with a small tuning parameter.
0
0
1
1
0
0
Continual Lifelong Learning with Neural Networks: A Review
Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to the long-term memory consolidation and retrieval without catastrophic forgetting. Consequently, lifelong learning capabilities are crucial for autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.
0
0
0
1
1
0
Potential kernel, hitting probabilities and distributional asymptotics
Z^d-extensions of probability-preserving dynamical systems are themselves dynamical systems preserving an infinite measure, and generalize random walks. Using the method of moments, we prove a generalized central limit theorem for additive functionals of the extension with zero integral, under spectral assumptions. As a corollary, we obtain the fact that the Green-Kubo formula is invariant under induction. This allows us to relate the hitting probability of sites to the symmetrized potential kernel, giving an alternative proof and a generalization of a theorem of Spitzer. Finally, this relation is used to improve in turn the assumptions of the generalized central limit theorem. Applications to Lorentz gases with finite horizon and to the geodesic flow on abelian covers of compact manifolds of negative curvature are discussed.
0
0
1
0
0
0
On the economics of electrical storage for variable renewable energy sources
The use of renewable energy sources is a major strategy to mitigate climate change. Yet Sinn (2017) argues that excessive electrical storage requirements limit the further expansion of variable wind and solar energy. We question, and alter, strong implicit assumptions of Sinn's approach and find that storage needs are considerably lower, up to two orders of magnitude. First, we move away from corner solutions by allowing for combinations of storage and renewable curtailment. Second, we specify a parsimonious optimization model that explicitly considers an economic efficiency perspective. We conclude that electrical storage is unlikely to limit the transition to renewable energy.
1
0
0
0
0
0
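The storage-versus-curtailment trade-off discussed above can be illustrated with a deliberately tiny linear program. The two-period generation profile, costs and unit storage efficiency below are invented assumptions, not the paper's parameterization; the point is only that the optimum may mix some storage with some curtailment.

    from scipy.optimize import linprog

    g = [3.0, 0.0]      # renewable generation in periods 1 and 2 (assumed profile)
    d = [1.0, 1.0]      # demand in periods 1 and 2
    c_storage = 2.0     # cost per unit of storage energy capacity (assumed)
    c_curtail = 0.5     # cost per unit of curtailed energy (assumed)

    # Decision variables x = [capacity S, charge, curtail]; period-2 demand is met by discharging d[1].
    cost = [c_storage, 0.0, c_curtail]
    A_eq = [[0.0, 1.0, 1.0]]          # period-1 balance: charge + curtail = surplus
    b_eq = [g[0] - d[0]]
    A_ub = [[0.0, -1.0, 0.0],         # discharge d[1] cannot exceed what was charged
            [-1.0, 1.0, 0.0]]         # charge cannot exceed the capacity S
    b_ub = [-d[1], 0.0]

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
    print(dict(zip(["storage", "charge", "curtail"], res.x)))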
Deep Learning in Customer Churn Prediction: Unsupervised Feature Learning on Abstract Company Independent Feature Vectors
As companies increase their efforts in retaining customers, being able to predict accurately, ahead of time, whether a customer will churn in the foreseeable future is an extremely powerful tool for any marketing team. The paper describes in depth the application of Deep Learning to the problem of churn prediction. Using abstract feature vectors that can be generated from any subscription-based company's user event logs, the paper proves that, through the use of the intrinsic property of Deep Neural Networks (learning secondary features in an unsupervised manner), the complete pipeline can be applied to any subscription-based company with extremely good churn predictive performance. Furthermore, the research documented in the paper was performed for Framed Data (a company that sells churn prediction as a service for other companies) in conjunction with the Data Science Institute at Lancaster University, UK. This paper is the intellectual property of Framed Data.
1
0
0
1
0
0
Predicting and Discovering True Muonium
The recent observation of discrepancies in the muonic sector motivates searches for the yet undiscovered atom true muonium $(\mu^+\mu^-)$. To leverage potential experimental signals, precise theoretical calculations are required. I will present the on-going work to compute higher-order corrections to the hyperfine splitting and the Lamb shift. Further, possible detection in rare meson decay experiments like REDTOP and using true muonium production to constrain mesonic form factors will be discussed.
0
1
0
0
0
0
Prediction of half-metallic properties in TlCrS2 and TlCrSe2 based on density functional theory
The half-metallic properties of TlCrS2, TlCrSe2 and hypothetical TlCrSSe have been investigated by the first-principles all-electron full-potential linearized augmented plane wave plus local orbital (FP-LAPW+lo) method based on density functional theory (DFT). The results of the calculations show that TlCrS2 and TlCrSSe are half-metals with an energy gap (Eg) of ~0.12 eV in the spin-down channel. Strong hybridization of the chalcogen p states and the Cr d states leads to bonding and antibonding states and, subsequently, to the appearance of a gap in the spin-down channel of TlCrS2 and TlCrSSe. In the case of TlCrSe2, the hybridization is only partial and the p states are partially present in the DOS at the Fermi level, making this compound nearly half-metallic. The present calculations reveal that the total magnetic moment keeps its integer value over a relatively wide range of volume changes (-10% to +10%) for TlCrS2 and TlCrSSe, while the total magnetic moment of TlCrSe2 decreases with increasing volume, approaching the integer value of 3 $\mu_B$.
0
1
0
0
0
0
Giant interfacial perpendicular magnetic anisotropy in Fe/CuIn$_{1-x}$Ga$_x$Se$_2$ beyond Fe/MgO
We study interfacial magnetocrystalline anisotropies in various Fe/semiconductor heterostructures by means of first-principles calculations. We find that many of those systems show perpendicular magnetic anisotropy (PMA) with a positive value of the interfacial anisotropy constant $K_{\rm i}$. In particular, the Fe/CuInSe$_2$ interface has a large $K_{\rm i}$ of $\sim 2.3\,{\rm mJ/m^2}$, which is about 1.6 times larger than that of Fe/MgO known as a typical system with relatively large PMA. We also find that the values of $K_{\rm i}$ in almost all the systems studied in this work follow the well-known Bruno's relation, which indicates that minority-spin states around the Fermi level provide dominant contributions to the interfacial magnetocrystalline anisotropies. Detailed analyses of the local density of states and wave-vector-resolved anisotropy energy clarify that the large $K_{\rm i}$ in Fe/CuInSe$_2$ is attributed to the preferable $3d$-orbital configurations around the Fermi level in the minority-spin states of the interfacial Fe atoms. Moreover, we have shown that the locations of interfacial Se atoms are the key for such orbital configurations of the interfacial Fe atoms.
0
1
0
0
0
0