Dataset schema: each record consists of a title (string, 7-239 chars), an abstract (string, 7-2.76k chars), and six int64 subject flags taking values 0 or 1: cs, phy, math, stat, quantitative biology, quantitative finance. Each record below lists the title, the abstract, and a Labels line.
Application of generative autoencoder in de novo molecular design
A major challenge in computational chemistry is the generation of novel molecular structures with desirable pharmacological and physicochemical properties. In this work, we investigate the potential use of autoencoders, a deep learning methodology, for de novo molecular design. Various generative autoencoders were used to map molecule structures into a continuous latent space and vice versa, and their performance as structure generators was assessed. Our results show that the latent space preserves the chemical similarity principle and thus can be used for the generation of analogue structures. Furthermore, the latent space created by the autoencoders was searched systematically to generate novel compounds with predicted activity against dopamine receptor type 2, and compounds similar to known active compounds not included in the training set were identified.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Prospects for Measuring Cosmic Microwave Background Spectral Distortions in the Presence of Foregrounds
Measurements of cosmic microwave background spectral distortions have profound implications for our understanding of physical processes taking place over a vast window in cosmological history. Foreground contamination is unavoidable in such measurements and detailed signal-foreground separation will be necessary to extract cosmological science. We present MCMC-based spectral distortion detection forecasts in the presence of Galactic and extragalactic foregrounds for a range of possible experimental configurations, focusing on the Primordial Inflation Explorer (PIXIE) as a fiducial concept. We consider modifications to the baseline PIXIE mission (operating 12 months in distortion mode), searching for optimal configurations using a Fisher approach. Using only spectral information, we forecast an extended PIXIE mission to detect the expected average non-relativistic and relativistic thermal Sunyaev-Zeldovich distortions at high significance (194$\sigma$ and 11$\sigma$, respectively), even in the presence of foregrounds. The $\Lambda$CDM Silk damping $\mu$-type distortion is not detected without additional modifications of the instrument or external data. Galactic synchrotron radiation is the most problematic source of contamination in this respect, an issue that could be mitigated by combining PIXIE data with future ground-based observations at low frequencies ($\nu < 15-30$GHz). Assuming moderate external information on the synchrotron spectrum, we project an upper limit of $|\mu| < 3.6\times 10^{-7}$ (95\% c.l.), slightly more than one order of magnitude above the fiducial $\Lambda$CDM signal from the damping of small-scale primordial fluctuations, but a factor of $\simeq 250$ improvement over the current upper limit from COBE/FIRAS. This limit could be further reduced to $|\mu| < 9.4\times 10^{-8}$ (95\% c.l.) with more optimistic assumptions about low-frequency information. (Abridged)
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Fog of War: A Machine Learning Approach to Forecasting Weather on Mars
For over a decade, scientists at NASA's Jet Propulsion Laboratory (JPL) have been recording measurements from the Martian surface as a part of the Mars Exploration Rovers mission. One quantity of interest has been the opacity of Mars's atmosphere for its importance in day-to-day estimations of the amount of power available to the rover from its solar arrays. This paper proposes the use of neural networks as a method for forecasting Martian atmospheric opacity that is more effective than the current empirical model. The more accurate prediction provided by these networks would allow operators at JPL to make more accurate predictions of the amount of energy available to the rover when they plan activities for coming sols.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Substrate inhibition imposes fitness penalty at high protein stability
Proteins are only moderately stable. It has long been debated whether this narrow range of stabilities is solely a result of neutral drift towards lower stability, or whether purifying selection against excess stability is also at work - a hypothesis for which no experimental evidence had been found so far. Here we show that mutations outside the active site in the essential E. coli enzyme adenylate kinase result in a stability-dependent increase in substrate inhibition by AMP, thereby impairing overall enzyme activity at high stability. Such inhibition caused substantial fitness defects not only in the presence of excess substrate but also under physiological conditions. In the latter case, substrate inhibition caused differential accumulation of AMP in the stationary phase for the inhibition-prone mutants. Further, we show that changes in flux through Adk could accurately describe the variation in fitness effects. Taken together, these data suggest that selection against substrate inhibition, and hence against excess stability, may have resulted in the narrow range of optimal stability observed for modern proteins.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Exact relativistic Toda chain eigenfunctions from Separation of Variables and gauge theory
We provide a proposal, motivated by Separation of Variables and gauge theory arguments, for constructing exact solutions to the quantum Baxter equation associated to the $N$-particle relativistic Toda chain and test our proposal against numerical results. Quantum Mechanical non-perturbative corrections, essential in order to obtain a sensible solution, are taken into account in our gauge theory approach by considering codimension two defects on curved backgrounds (squashed $S^5$ and degenerate limits) rather than flat space; this setting also naturally incorporates exact quantization conditions and energy spectrum of the relativistic Toda chain as well as its modular dual structure.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Simple Compact Monotone Tree Drawings
A monotone drawing of a graph G is a straight-line drawing of G such that every pair of vertices is connected by a path that is monotone with respect to some direction. Trees, as a special class of graphs, have been the focus of several papers and, recently, He and He~\cite{mt:4} showed how to produce a monotone drawing of an arbitrary $n$-vertex tree that is contained in a $12n \times 12n$ grid. All monotone tree drawing algorithms that have appeared in the literature consider rooted ordered trees and draw them so that (i) the root of the tree is drawn at the origin of the drawing, (ii) the drawing is confined to the first quadrant, and (iii) the ordering/embedding of the tree is respected. In this paper, we provide a simple algorithm that has exactly the same characteristics and, given an $n$-vertex rooted tree $T$, outputs a monotone drawing of $T$ that fits on an $n \times n$ grid. For unrooted ordered trees, we present an algorithm that produces monotone drawings that respect the ordering and fit in an $(n+1) \times (\frac{n}{2} +1)$ grid, while for unrooted non-ordered trees we produce monotone drawings of good aspect ratio which fit on a grid of size at most $\left\lfloor \frac{3}{4} \left(n+2\right)\right\rfloor \times \left\lfloor \frac{3}{4} \left(n+2\right)\right\rfloor$.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Context-Free Path Querying by Matrix Multiplication
Graph data models are widely used in many areas, for example bioinformatics and graph databases. In these areas, it is often required to process queries over large graphs. Some of the most common graph queries are navigational queries. The result of query evaluation is a set of implicit relations between nodes of the graph, i.e. paths in the graph. A natural way to specify these relations is to specify paths using formal grammars over the alphabet of edge labels. An answer to a context-free path query in this approach is usually a set of triples (A, m, n) such that there is a path from node m to node n whose labeling is derived from a non-terminal A of the given context-free grammar. Queries of this type are evaluated using the relational query semantics. Another example of path query semantics is the single-path query semantics, which requires presenting a single path from node m to node n whose labeling is derived from a non-terminal A, for each triple (A, m, n) evaluated under the relational query semantics. There are a number of algorithms for query evaluation under these semantics, but all of them perform poorly on large graphs. One of the most common techniques for efficient big data processing is the use of a graphics processing unit (GPU) to perform computations, but these algorithms do not lend themselves to efficient GPU implementation. In this paper, we show how context-free path query evaluation under these query semantics can be reduced to the calculation of the matrix transitive closure. We also propose an algorithm for context-free path query evaluation that uses the relational query semantics and is based on matrix operations, making it possible to speed up computations by using a GPU. (A sketch of this reduction follows this record.)
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
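A minimal dense-boolean-matrix sketch of the reduction described in the abstract above, assuming a grammar already in Chomsky normal form; the nonterminal names, rule encoding, and toy graph are illustrative, not taken from the paper.

```python
import numpy as np

def cfpq(n, edges, terminal_rules, binary_rules):
    # One boolean n x n matrix per nonterminal; M[A][m, k] is True iff
    # some path from node m to node k derives from nonterminal A.
    nts = set(terminal_rules) | {x for rule in binary_rules for x in rule}
    M = {A: np.zeros((n, n), dtype=bool) for A in nts}
    for m, label, k in edges:                      # seed from rules A -> a
        for A, terms in terminal_rules.items():
            if label in terms:
                M[A][m, k] = True
    changed = True
    while changed:                                 # fixpoint, as in transitive closure
        changed = False
        for A, B, C in binary_rules:               # rules A -> B C
            new = M[A] | ((M[B].astype(int) @ M[C].astype(int)) > 0)
            if (new != M[A]).any():
                M[A], changed = new, True
    return M

# Toy query: S -> A S1 | A B, S1 -> S B, A -> a, B -> b  (language a^n b^n)
edges = [(0, "a", 1), (1, "a", 2), (2, "b", 3), (3, "b", 0)]
M = cfpq(4, edges, {"A": {"a"}, "B": {"b"}},
         [("S", "A", "S1"), ("S1", "S", "B"), ("S", "A", "B")])
print(np.argwhere(M["S"]))   # node pairs related by S: (0, 0) and (1, 3)
```

The boolean matrix product is the step that parallelizes well on a GPU, which is the speed-up the paper targets.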
Equivalence of sparse and Carleson coefficients for general sets
We remark that sparse and Carleson coefficients are equivalent for every countable collection of Borel sets and hence, in particular, for dyadic rectangles, the case relevant to the theory of bi-parameter singular integrals. The key observation is that a dual reformulation by I. E. Verbitsky for Carleson coefficients over dyadic cubes holds also for Carleson coefficients over general sets. We give a simple proof of this reformulation.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Discovering Business Rules from Business Process Models
Discovering business rules from business process models is advantageous for ensuring the compliance of business processes with business rules. Furthermore, it supports the agility of business processes when business rules evolve. Current approaches are limited in the types of rules that can be discovered. This paper analyses the expressive power of some popular business process modelling languages in embedding business rules in their presentation and provides indicators for extracting various types of business rules from business process models.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fast Learning and Prediction for Object Detection using Whitened CNN Features
We combine features extracted from pre-trained convolutional neural networks (CNNs) with the fast, linear Exemplar-LDA classifier to get the advantages of both: the high detection performance of CNNs, automatic feature engineering, fast model learning from few training samples and efficient sliding-window detection. The Adaptive Real-Time Object Detection System (ARTOS) has been refactored broadly to be used in combination with Caffe for the experimental studies reported in this work.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
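For context on the record above: the "whitened" linear classifier at the heart of Exemplar-LDA amounts to one linear solve against background feature statistics estimated once and shared across all detectors. A minimal sketch under that assumption; the function and variable names are ours, not from ARTOS or Caffe.

```python
import numpy as np

def lda_detector(pos_feats, bg_mean, bg_cov, reg=1e-3):
    """Linear detector w = Sigma^-1 (mu_pos - mu_bg) on CNN features.

    pos_feats: (n, d) features of the few positive training samples;
    bg_mean, bg_cov: background statistics shared across all detectors.
    """
    d = bg_cov.shape[0]
    mu_pos = pos_feats.mean(axis=0)
    w = np.linalg.solve(bg_cov + reg * np.eye(d), mu_pos - bg_mean)
    b = -0.5 * w @ (mu_pos + bg_mean)     # midpoint bias
    return w, b                           # score a window x as w @ x + b
```

Because the background mean and covariance are shared, training a new detector costs one mean and one solve, which is what makes learning from few samples fast.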
N-body simulations of planet formation via pebble accretion I: First Results
Context. Planet formation with pebbles has been proposed to solve a couple of long-standing issues in the classical formation model. Some sophisticated simulations have been done to confirm the efficiency of pebble accretion. However, there have been no global N-body simulations that compare the outcomes of planet formation via pebble accretion with observed extrasolar planetary systems. Aims. In this paper, we study the effects of a range of initial parameters of planet formation via pebble accretion, and present the first results of our simulations. Methods. We incorporate the pebble accretion model by Ida et al. (2016) in the N-body code SyMBA (Duncan et al. 1998), along with the effects of gas accretion, eccentricity and inclination damping, and planet migration in the disc. Results. We confirm that pebble accretion leads to a variety of planetary systems, but have difficulty in reproducing observed properties of exoplanetary systems, such as planetary mass, semimajor axis, and eccentricity distributions. The main reason behind this is overly efficient type I migration, which depends sensitively on the disc model. However, our simulations also lead to a few interesting predictions. First, we find that formation efficiencies of planets depend on the stellar metallicities, not only for giant planets, but also for Earths (Es) and Super-Earths (SEs). The dependency for Es/SEs is subtle: although higher metallicity environments lead to faster formation of a larger number of Es/SEs, these planets also tend to be lost later via dynamical instability. Second, our results indicate that the wide range of bulk densities observed for Es and SEs is a natural consequence of the dynamical evolution of planetary systems. Third, the ejection trend of our simulations suggests that one free-floating E/SE may be expected for every two smaller-mass planets.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Local character of Kim-independence
We show that NSOP$_{1}$ theories are exactly the theories in which Kim-independence satisfies a form of local character. In particular, we show that if $T$ is NSOP$_{1}$, $M\models T$, and $p$ is a type over $M$, then the collection of elementary substructures of size $\left|T\right|$ over which $p$ does not Kim-fork is a club of $\left[M\right]^{\left|T\right|}$ and that this characterizes NSOP$_{1}$. We also present a new phenomenon we call dual local-character for Kim-independence in NSOP$_{1}$-theories.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A Further Analysis of The Role of Heterogeneity in Coevolutionary Spatial Games
Heterogeneity has been studied as one of the most common explanations of the puzzle of cooperation in social dilemmas. A large number of papers have been published discussing the effects of increasing heterogeneity in structured populations of agents, where it has been established that heterogeneity may favour cooperative behaviour if it supports agents in locally coordinating their strategies. In this paper, assuming an existing model of a heterogeneous weighted network, we aim to further this analysis by exploring the relationship (if any) between heterogeneity and cooperation. We adopt a weighted network which is fully populated by agents playing either the Prisoner's Dilemma or the Optional Prisoner's Dilemma game with coevolutionary rules, i.e., not only the strategies but also the link weights evolve over time. Surprisingly, results show that the heterogeneity of link weights (states) on its own does not always promote cooperation; rather, cooperation is actually favoured by the increase in the number of overlapping states and not by the heterogeneity itself. We believe that these results can guide further research towards a more accurate analysis of the role of heterogeneity in social dilemmas.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Transport properties of the Azimuthal Magnetorotational Instability
The magnetorotational instability (MRI) is thought to be a powerful source of turbulence in Keplerian accretion disks. Motivated by recent laboratory experiments, we study the MRI driven by an azimuthal magnetic field in an electrically conducting fluid sheared between two concentric rotating cylinders. By adjusting the rotation rates of the cylinders, we approximate angular velocity profiles $\omega \propto r^{q}$. We perform direct numerical simulations of a steep profile close to the Rayleigh line $q \gtrsim -2 $ and a quasi-Keplerian profile $q \approx -3/2$ and cover wide ranges of Reynolds ($Re\le 4\cdot10^4$) and magnetic Prandtl numbers ($0\le Pm \le 1$). In the quasi-Keplerian case, the onset of instability depends on the magnetic Reynolds number, with $Rm_c \approx 50$, and angular momentum transport scales as $\sqrt{Pm} Re^2$ in the turbulent regime. The ratio of Maxwell to Reynolds stresses is set by $Rm$. At the onset of instability both stresses have similar magnitude, whereas the Reynolds stress vanishes or becomes even negative as $Rm$ increases. For the profile close to the Rayleigh line, the instability shares these properties as long as $Pm\gtrsim0.1$, but exhibits a markedly different character if $Pm\rightarrow 0$, where the onset of instability is governed by the Reynolds number, with $Re_c \approx 1250$, transport is via Reynolds stresses and scales as $Re^2$. At intermediate $Pm=0.01$ we observe a continuous transition from one regime to the other, with a crossover at $Rm=\mathcal{O}(100)$. Our results give a comprehensive picture of angular momentum transport of the MRI with an imposed azimuthal field.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
New bounds for the Probability of Causation in Mediation Analysis
An individual has been subjected to some exposure and has developed some outcome. Using data on similar individuals, we wish to evaluate, for this case, the probability that the outcome was in fact caused by the exposure. Even with the best possible experimental data on exposure and outcome, we typically cannot identify this "probability of causation" exactly, but we can provide information in the form of bounds for it. Under appropriate assumptions, these bounds can be tightened if we can make other observations (e.g., on non-experimental cases), measure additional variables (e.g., covariates), or measure complete mediators. In this work we propose new bounds for the case in which a third variable partially mediates the effect of the exposure on the outcome. (The classical bounds that serve as the starting point are quoted after this record.)
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
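For orientation, the classical bounds that mediator-based bounds of this kind refine can be stated compactly. Under exogeneity of a binary exposure $x$ and binary outcome $y$ (with $x'$, $y'$ denoting their absence), the Tian-Pearl-style bounds on the probability of causation read:

```latex
\max\left\{0,\; \frac{P(y \mid x) - P(y \mid x')}{P(y \mid x)}\right\}
\;\le\; \mathrm{PC} \;\le\;
\min\left\{1,\; \frac{P(y' \mid x')}{P(y \mid x)}\right\}.
```

This display is background, not the paper's new bound; the abstract's contribution is to tighten such intervals when a partial mediator is observed.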
Searching for a cosmological preferred direction with 147 rotationally supported galaxies
It is well known that Milgrom's MOND (modified Newtonian dynamics) explains well the mass discrepancy problem in galaxy rotation curves. MOND predicts a universal acceleration scale below which Newtonian dynamics is no longer valid. The universal acceleration scale we obtain from the SPARC dataset is $g_†=1.02\times10^{-10} \rm m~s^{-2}$. Milgrom suggested that the acceleration scale may be a fingerprint of cosmology on local dynamics, related to the Hubble constant as $g_†\sim cH_0$. In this paper, we use the hemisphere comparison method with the SPARC dataset to investigate the spatial anisotropy of the acceleration scale. We find that the hemisphere of the maximum acceleration scale is in the direction $(l,b) = ({175.5^\circ}^{+6^\circ}_{-10^\circ}, {-6.5^\circ}^{+8^\circ}_{-3^\circ})$ with $g_{†,max}=1.10\times10^{-10} \rm m~s^{-2}$, while the hemisphere of the minimum acceleration scale is in the opposite direction $(l,b) = ({355.5^\circ}^{+6^\circ}_{-10^\circ}, {6.5^\circ}^{+3^\circ}_{-8^\circ})$ with $g_{†,min}=0.76\times10^{-10} \rm m~s^{-2}$. The maximum anisotropy level reaches up to $0.37\pm0.04$. Robust tests show that such a level of anisotropy cannot be reproduced by statistically isotropic data. In addition, we show that the spatial anisotropy of the acceleration scale has little correlation with the non-uniform distribution of the SPARC data points in the sky. We also find that the maximum anisotropy direction is close to other cosmological preferred directions, especially the direction of the "Australia dipole" for the fine structure constant.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
MultiFIT: A Multivariate Multiscale Framework for Independence Tests
We present a framework for testing independence between two random vectors that is scalable to massive data. We break down the multivariate test into univariate tests of independence on a collection of $2\times 2$ contingency tables, constructed by sequentially discretizing the sample space. This transforms a complex problem that traditionally requires quadratic computational complexity with respect to the sample size into one that scales almost linearly with the sample size. We further consider the scenario when the dimensionality of the random vectors grows large, in which case the curse of dimensionality arises in the proposed framework through an explosion in the number of univariate tests to be completed. To overcome this difficulty we propose a data-adaptive version of our method that completes a fraction of the univariate tests judged to be more likely to contain evidence for dependency. We demonstrate the tremendous computational advantage of the algorithm in comparison to existing approaches while achieving desirable statistical power through an extensive simulation study. In addition, we illustrate how our method can be used for learning the nature of the underlying dependency. We demonstrate the use of our method through analyzing a data set from flow cytometry.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
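A sketch of the univariate building block from the abstract above: one $2\times 2$ contingency table obtained by dichotomizing a pair of margins at given cut points. The recursive organization of many such tables over the sample space is the part this sketch omits, and Fisher's exact test is our illustrative choice of cell test, not necessarily the paper's.

```python
import numpy as np
from scipy.stats import fisher_exact

def cell_pvalue(x, y, x_cut, y_cut):
    """Test one 2x2 table formed by dichotomizing margins x and y."""
    table = np.array([
        [np.sum((x <= x_cut) & (y <= y_cut)), np.sum((x <= x_cut) & (y > y_cut))],
        [np.sum((x > x_cut) & (y <= y_cut)),  np.sum((x > x_cut) & (y > y_cut))],
    ])
    return fisher_exact(table)[1]   # small p-value = evidence of dependence

# e.g. the coarsest split, at the medians of two margins
x, y = np.random.randn(1000), np.random.randn(1000)
print(cell_pvalue(x, y, np.median(x), np.median(y)))
```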
Stability of algebraic varieties and Kahler geometry
This is a survey article, based on the author's lectures in the 2015 AMS Summer Research Institute in Algebraic Geometry, and to appear in the Proceedings.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Scratch iridescence: Wave-optical rendering of diffractive surface structure
The surface of metal, glass and plastic objects is often characterized by microscopic scratches caused by manufacturing and/or wear. A closer look at such scratches reveals iridescent colors with a complex dependency on viewing and lighting conditions. The physics behind this phenomenon is well understood; it is caused by diffraction of the incident light by surface features on the order of the optical wavelength. Existing analytic models are able to reproduce spatially unresolved microstructure such as the iridescent appearance of compact disks and similar materials. Spatially resolved scratches, on the other hand, have proven elusive due to the highly complex wave-optical light transport simulations needed to account for their appearance. In this paper, we propose a wave-optical shading model based on non-paraxial scalar diffraction theory to render this class of effects. Our model expresses surface roughness as a collection of line segments. To shade a point on the surface, the individual diffraction patterns for contributing scratch segments are computed analytically and superimposed coherently. This provides natural transitions from localized glint-like iridescence to smooth BRDFs representing the superposition of many reflections at large viewing distances. We demonstrate that our model is capable of recreating the overall appearance as well as characteristic detail effects observed on real-world examples.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
An oscillation criterion for delay differential equations with several non-monotone arguments
The oscillatory behavior of the solutions to a differential equation with several non-monotone delay arguments and non-negative coefficients is studied. A new sufficient oscillation condition, involving lim sup, is obtained. An example illustrating the significance of the result is also given.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Item Recommendation with Variational Autoencoders and Heterogenous Priors
In recent years, Variational Autoencoders (VAEs) have been shown to be highly effective in both standard collaborative filtering applications and extensions such as incorporation of implicit feedback. We extend VAEs to collaborative filtering with side information, for instance when ratings are combined with explicit text feedback from the user. Instead of using a user-agnostic standard Gaussian prior, we incorporate user-dependent priors in the latent VAE space to encode users' preferences as functions of the review text. Taking into account both the rating and the text information to represent users in this multimodal latent space is promising to improve recommendation quality. Our proposed model is shown to outperform the existing VAE models for collaborative filtering (up to 29.41% relative improvement in ranking metric) along with other baselines that incorporate both user ratings and text for item recommendation.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Boltzmann Exploration Done Right
Boltzmann exploration is a classic strategy for sequential decision-making under uncertainty, and is one of the most standard tools in Reinforcement Learning (RL). Despite its widespread use, there is virtually no theoretical understanding about the limitations or the actual benefits of this exploration scheme. Does it drive exploration in a meaningful way? Is it prone to misidentifying the optimal actions or spending too much time exploring the suboptimal ones? What is the right tuning for the learning rate? In this paper, we address several of these questions in the classic setup of stochastic multi-armed bandits. One of our main results is showing that the Boltzmann exploration strategy with any monotone learning-rate sequence will induce suboptimal behavior. As a remedy, we offer a simple non-monotone schedule that guarantees near-optimal performance, albeit only when given prior access to key problem parameters that are typically not available in practical situations (like the time horizon $T$ and the suboptimality gap $\Delta$). More importantly, we propose a novel variant that uses different learning rates for different arms, and achieves a distribution-dependent regret bound of order $\frac{K\log^2 T}{\Delta}$ and a distribution-independent bound of order $\sqrt{KT}\log K$ without requiring such prior knowledge. To demonstrate the flexibility of our technique, we also propose a variant that guarantees the same performance bounds even if the rewards are heavy-tailed.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
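For reference, the classic Boltzmann exploration scheme analyzed in the abstract above is a softmax over empirical mean rewards with a learning-rate (inverse-temperature) schedule $\eta(t)$. A minimal bandit sketch of that classic strategy, not of the paper's proposed per-arm-learning-rate variant; names and the toy schedule are ours.

```python
import numpy as np

def boltzmann(pull, K, T, eta):
    """Softmax exploration over empirical mean rewards of K arms."""
    counts, means = np.zeros(K), np.zeros(K)
    for t in range(1, T + 1):
        logits = eta(t) * means
        p = np.exp(logits - logits.max())   # numerically stable softmax
        p /= p.sum()
        a = np.random.choice(K, p=p)
        counts[a] += 1
        means[a] += (pull(a) - means[a]) / counts[a]   # running mean update
    return counts

# Two Bernoulli arms with a monotone schedule of the kind the paper
# shows can induce suboptimal behavior.
counts = boltzmann(lambda a: float(np.random.rand() < (0.5, 0.6)[a]),
                   K=2, T=10_000, eta=lambda t: np.log(t + 1.0))
print(counts)
```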
Deep reinforcement learning from human preferences
For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Entanglement spectroscopy on a quantum computer
We present a quantum algorithm to compute the entanglement spectrum of arbitrary quantum states. The interesting universal part of the entanglement spectrum is typically contained in the largest eigenvalues of the density matrix which can be obtained from the lower Renyi entropies through the Newton-Girard method. Obtaining the $p$ largest eigenvalues ($\lambda_1>\lambda_2\ldots>\lambda_p$) requires a parallel circuit depth of $\mathcal{O}(p(\lambda_1/\lambda_p)^p)$ and $\mathcal{O}(p\log(N))$ qubits where up to $p$ copies of the quantum state defined on a Hilbert space of size $N$ are needed as the input. We validate this procedure for the entanglement spectrum of the topologically-ordered Laughlin wave function corresponding to the quantum Hall state at filling factor $\nu=1/3$. Our scaling analysis exposes the tradeoffs between time and number of qubits for obtaining the entanglement spectrum in the thermodynamic limit using finite-size digital quantum computers. We also illustrate the utility of the second Renyi entropy in predicting a topological phase transition and in extracting the localization length in a many-body localized system.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
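The Newton-Girard step mentioned in the abstract above can be made concrete: from the power sums $p_n = \mathrm{Tr}(\rho^n)$ (which the lower Renyi entropies give), Newton's identities yield the elementary symmetric polynomials, whose polynomial roots approximate the $p$ largest eigenvalues. A sketch with illustrative names; the approximation is good when the discarded eigenvalues are small, which is the regime the abstract's circuit-depth scaling quantifies.

```python
import numpy as np

def eigs_from_power_sums(p_sums):
    """Approximate the p largest eigenvalues of rho from
    p_sums = [Tr(rho), Tr(rho^2), ..., Tr(rho^p)] via Newton's identities."""
    p = len(p_sums)
    e = [1.0]                                 # elementary symmetric polynomials
    for k in range(1, p + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * p_sums[i - 1]
                for i in range(1, k + 1))
        e.append(s / k)
    coeffs = [(-1) ** k * e[k] for k in range(p + 1)]   # x^p - e1 x^(p-1) + ...
    return np.sort(np.real(np.roots(coeffs)))[::-1]

# sanity check: a decaying 6-level spectrum, keeping only p = 3 power sums
lam = np.array([0.5, 0.3, 0.15, 0.03, 0.015, 0.005])
print(eigs_from_power_sums([np.sum(lam ** n) for n in (1, 2, 3)]))  # ~ lam[:3]
```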
Light Attenuation Length of High Quality Linear Alkyl Benzene as Liquid Scintillator Solvent for the JUNO Experiment
The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino experiment with a 20 kt liquid scintillator detector designed to determine the neutrino mass hierarchy, and measure the neutrino oscillation parameters. Linear alkyl benzene (LAB) will be used as the solvent for the liquid scintillation system in the central detector of JUNO. For this purpose, we have prepared LAB samples, and have measured their light attenuation lengths, with one achieving a length of 25.8 m, comparable to the diameter of the JUNO detector.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
OIL: Observational Imitation Learning
Recent work has explored the problem of autonomous navigation by imitating a teacher and learning an end-to-end policy, which directly predicts controls from raw images. However, these approaches tend to be sensitive to mistakes by the teacher and do not scale well to other environments or vehicles. To this end, we propose Observational Imitation Learning (OIL), a novel imitation learning variant that supports online training and automatic selection of optimal behavior by observing multiple imperfect teachers. We apply our proposed methodology to the challenging problems of autonomous driving and UAV racing. For both tasks, we utilize the Sim4CV simulator that enables the generation of large amounts of synthetic training data and also allows for online learning and evaluation. We train a perception network to predict waypoints from raw image data and use OIL to train another network to predict controls from these waypoints. Extensive experiments demonstrate that our trained network outperforms its teachers, conventional imitation learning (IL) and reinforcement learning (RL) baselines and even humans in simulation. The project website is available at this https URL
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Multi-Scan Labeled Random Finite Set Model for Multi-object State Estimation
State space models in which the system state is a finite set--called the multi-object state--have generated considerable interest in recent years. Smoothing for state space models provides better estimation performance than filtering by using the full posterior rather than the filtering density. In multi-object state estimation, the Bayes multi-object filtering recursion admits an analytic solution known as the Generalized Labeled Multi-Bernoulli (GLMB) filter. In this work, we extend the analytic GLMB recursion to propagate the multi-object posterior. We also propose an implementation of this so-called multi-scan GLMB posterior recursion using a similar approach to the GLMB filter implementation.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
On the Complexity of Sampling Nodes Uniformly from a Graph
We study a number of graph exploration problems in the following natural scenario: an algorithm starts exploring an undirected graph from some seed node; the algorithm, for an arbitrary node $v$ that it is aware of, can ask an oracle to return the set of the neighbors of $v$. (In social network analysis, a call to this oracle corresponds to downloading the profile page of user $v$ in a social network.) The goal of the algorithm is to either learn something (e.g., average degree) about the graph, or to return some random function of the graph (e.g., a uniform-at-random node), while accessing/downloading as few nodes of the graph as possible. Motivated by practical applications, we study the complexities of a variety of problems in terms of the graph's mixing time and average degree -- two measures that are believed to be quite small in real-world social networks, and that have often been used in the applied literature to bound the performance of online exploration algorithms. Our main result is that the algorithm has to access $\Omega\left(t_{\rm mix} d_{\rm avg} \epsilon^{-2} \ln \delta^{-1}\right)$ nodes to obtain, with probability at least $1-\delta$, an $\epsilon$-additive approximation of the average of a bounded function on the nodes of a graph -- this lower bound matches the performance of an algorithm that was proposed in the literature. We also give tight bounds for the problem of returning a close-to-uniform-at-random node from the graph. Finally, we give lower bounds for the problems of estimating the average degree of the graph, and the number of nodes of the graph.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
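As context for the close-to-uniform-node problem in the record above, one standard oracle algorithm in this access model (whose cost the paper's lower bounds constrain) is a Metropolis-Hastings walk with degree correction. A minimal sketch, assuming `neighbors(v)` is the paper's neighbor oracle and the graph is connected.

```python
import random

def near_uniform_node(seed, neighbors, steps):
    """Random walk whose stationary distribution is uniform over nodes.

    Proposal: simple random walk; accept a move v -> u with probability
    min(1, deg(v)/deg(u)), which cancels the bias toward high-degree nodes.
    Run for a number of steps on the order of the graph's mixing time.
    """
    v = seed
    deg_v = len(neighbors(v))
    for _ in range(steps):
        u = random.choice(list(neighbors(v)))   # one oracle call per step
        deg_u = len(neighbors(u))
        if random.random() < deg_v / deg_u:     # always accepts if deg_u <= deg_v
            v, deg_v = u, deg_u
    return v
```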
A structure theorem for almost low-degree functions on the slice
The Fourier-Walsh expansion of a Boolean function $f \colon \{0,1\}^n \rightarrow \{0,1\}$ is its unique representation as a multilinear polynomial. The Kindler-Safra theorem (2002) asserts that if in the expansion of $f$, the total weight on coefficients beyond degree $k$ is very small, then $f$ can be approximated by a Boolean-valued function depending on at most $O(2^k)$ variables. In this paper we prove a similar theorem for Boolean functions whose domain is the `slice' ${{[n]}\choose{pn}} = \{x \in \{0,1\}^n\colon \sum_i x_i = pn\}$, where $0 \ll p \ll 1$, with respect to their unique representation as harmonic multilinear polynomials. We show that if in the representation of $f\colon {{[n]}\choose{pn}} \rightarrow \{0,1\}$, the total weight beyond degree $k$ is at most $\epsilon$, where $\epsilon = \min(p, 1-p)^{O(k)}$, then $f$ can be $O(\epsilon)$-approximated by a degree-$k$ Boolean function on the slice, which in turn depends on $O(2^{k})$ coordinates. This proves a conjecture of Filmus, Kindler, Mossel, and Wimmer (2015). Our proof relies on hypercontractivity, along with a novel kind of a shifting procedure. In addition, we show that the approximation rate in the Kindler-Safra theorem can be improved from $\epsilon + \exp(O(k)) \epsilon^{1/4}$ to $\epsilon+\epsilon^2 (2\ln(1/\epsilon))^k/k!$, which is tight in terms of the dependence on $\epsilon$ and misses at most a factor of $2^{O(k)}$ in the lower-order term.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Oxygen-vacancy driven electron localization and itinerancy in rutile-based TiO$_2$
Oxygen-deficient TiO$_2$ in the rutile structure, as well as the Ti$_3$O$_5$ Magnéli phase, is investigated within the charge self-consistent combination of density functional theory (DFT) with dynamical mean-field theory (DMFT). It is shown that an isolated oxygen vacancy (V$_{\rm O}$) in titanium dioxide is not sufficient to metallize the system at low temperatures. In a semiconducting phase, an in-gap state is identified at $\varepsilon_{\rm IG} \sim -0.75\,$eV, in excellent agreement with experimental data. Band-like impurity levels, resulting from a threefold V$_{\rm O}$-Ti coordination as well as entangled $(t_{2g},e_g)$ states, become localized due to site-dependent electronic correlations. Charge localization and strong orbital polarization occur in the V$_{\rm O}$-near Ti ions, the details of which can be modified by varying the correlated subspace. At higher oxygen vacancy concentration, a correlated metal is stabilized in the Magnéli phase. A V$_{\rm O}$-defect rutile structure of identical stoichiometry shows key differences in the orbital-resolved character and the spectral properties. Charge disproportionation is vital in the oxygen-deficient compounds, but obvious metal-insulator transitions driven or sustained by charge order are not identified.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
3D printable multimaterial cellular auxetics with tunable stiffness
Auxetic materials are a novel class of mechanical metamaterials which exhibit the interesting property of a negative Poisson ratio by virtue of their architecture rather than their composition. It has been well established that a wide range of negative Poisson ratios can be obtained by varying the geometry and architecture of the cellular materials. However, the limited range of stiffness values obtained from a given geometry restricts their applications. Research trials have revealed that multi-material cellular designs have the capability to generate a range of stiffness values as required by the application. With the advancements in 3D printing, multi-material cellular designs can be realized in practice. In this work, multi-material cellular designs are investigated using the finite element method. It was observed that introducing a material gradient/distribution in the cell provides a means to tune the cellular stiffness to the specific requirement. These results will aid in the design of wearable auxetic impact protection devices which rely on stiffness gradients and variable auxeticity.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
MACS J0416.1-2403: Impact of line-of-sight structures on strong gravitational lensing modelling of galaxy clusters
Exploiting the powerful tool of strong gravitational lensing by galaxy clusters to study the highest-redshift Universe and cluster mass distributions relies on precise lens mass modelling. In this work, we present the first attempt at modelling line-of-sight mass distribution in addition to that of the cluster, extending previous modelling techniques that assume mass distributions to be on a single lens plane. We focus on the Hubble Frontier Field cluster MACS J0416.1-2403, and our multi-plane model reproduces the observed image positions with a rms offset of ~0.53". Starting from this best-fitting model, we simulate a mock cluster that resembles MACS J0416.1-2403 in order to explore the effects of line-of-sight structures on cluster mass modelling. By systematically analysing the mock cluster under different model assumptions, we find that neglecting the lensing environment has a significant impact on the reconstruction of image positions (rms ~0.3"); accounting for line-of-sight galaxies as if they were at the cluster redshift can partially reduce this offset. Moreover, foreground galaxies are more important to include into the model than the background ones. While the magnification factors of the lensed multiple images are recovered within ~10% for ~95% of them, those ~5% that lie near critical curves can be significantly affected by the exclusion of the lensing environment in the models (up to a factor of ~200). In addition, line-of-sight galaxies cannot explain the apparent discrepancy in the properties of massive subhalos between MACS J0416.1-2403 and N-body simulated clusters. Since our model of MACS J0416.1-2403 with line-of-sight galaxies only reduced modestly the rms offset in the image positions, we conclude that additional complexities, such as more flexible halo shapes, would be needed in future models of MACS J0416.1-2403.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
What can you do with a rock? Affordance extraction via word embeddings
Autonomous agents must often detect affordances: the set of behaviors enabled by a situation. Affordance detection is particularly helpful in domains with large action spaces, allowing the agent to prune its search space by avoiding futile behaviors. This paper presents a method for affordance extraction via word embeddings trained on a Wikipedia corpus. The resulting word vectors are treated as a common knowledge database which can be queried using linear algebra. We apply this method to a reinforcement learning agent in a text-only environment and show that affordance-based action selection improves performance most of the time. Our method increases the computational complexity of each learning step but significantly reduces the total number of steps needed. In addition, the agent's action selections begin to resemble those a human would choose.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On Generalized Gibbs Ensembles with an infinite set of conserved charges
We revisit the question of whether and how the steady states arising after non-equilibrium time evolution in integrable models (and in particular in the XXZ spin chain) can be described by the so-called Generalized Gibbs Ensemble (GGE). It is known that the micro-canonical ensemble built on a complete set of charges correctly describes the long-time limit of local observables, and recently a canonical ensemble was built by Ilievski et al. using particle occupation number operators. Here we provide an alternative construction by considering truncated GGEs (tGGEs) that include only a finite number of well-localized conserved operators. It is shown that the tGGEs can approximate the steady states with arbitrary precision, i.e. all physical observables are exactly reproduced in the infinite truncation limit. In addition, we show that a complete canonical ensemble can in fact be built in terms of a new (discrete) set of charges constructed as linear combinations of the standard ones. Our general arguments are applied to concrete quench situations in the XXZ chain, where the initial states are simple two-site or four-site product states. Depending on the quench, we find that numerical results for the local correlators can be obtained with remarkable precision using truncated GGEs with only 10-100 charges. (The defining form of a tGGE is written out after this record.)
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
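To make the construction in the abstract above concrete: a truncated GGE keeps only the first $N$ (well-localized) conserved charges $\hat{Q}_j$, with Lagrange multipliers $\beta_j$ fixed by matching the charges' expectation values in the initial state $|\Psi_0\rangle$; in standard GGE notation:

```latex
\hat{\rho}^{(N)}_{\mathrm{tGGE}} \;=\; \frac{1}{Z_N}\,
\exp\Big(-\sum_{j=1}^{N}\beta_j \hat{Q}_j\Big),
\qquad
\operatorname{Tr}\!\big[\hat{\rho}^{(N)}_{\mathrm{tGGE}}\,\hat{Q}_j\big]
\;=\;\langle\Psi_0|\hat{Q}_j|\Psi_0\rangle,
\quad j=1,\dots,N.
```

The claim tested in the paper is that local observables computed from $\hat{\rho}^{(N)}_{\mathrm{tGGE}}$ converge to their steady-state values as $N\to\infty$, with 10-100 charges already accurate for the quenches considered.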
High resolution ion trap time-of-flight mass spectrometer for cold trapped ion experiments
Trapping molecular ions that have been sympathetically cooled with laser-cooled atomic ions is a useful platform for exploring cold ion chemistry. We designed and characterized a new experimental apparatus for probing chemical reaction dynamics between molecular cations and neutral radicals at temperatures below 1 K. The ions are trapped in a linear quadrupole radio-frequency trap and sympathetically cooled by co-trapped, laser-cooled, atomic ions. The ion trap is coupled to a time-of-flight mass spectrometer to readily identify product ion species, as well as to accurately determine trapped ion numbers. We discuss, and present in detail, the design of this ion trap time-of-flight mass spectrometer, as well as the electronics required for driving the trap and mass spectrometer. Furthermore, we measure the performance of this system, which yields mass resolutions of $m/\Delta{}m \geq 1100$ over a wide mass range, and discuss its relevance for future measurements in chemical reaction kinetics and dynamics.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The role of local-geometrical-orders on the growth of dynamic-length-scales in glass-forming liquids
The precise nature of complex structural relaxation as well as an explanation for the precipitous growth of relaxation time in cooling glass-forming liquids are essential to the understanding of vitrification of liquids. The dramatic increase of relaxation time is believed to be caused by the growth of one or more correlation lengths, which has received much attention recently. Here, we report a direct link between the growth of a specific local-geometrical-order and an increase of dynamic-length-scale as the atomic dynamics in metallic glass-forming liquids slow down. Although several types of local geometrical-orders are present in these metallic liquids, the growth of icosahedral ordering is found to be directly related to the increase of the dynamic-length-scale. This finding suggests an intriguing scenario that the transient icosahedral ordering could be the origin of the dynamic-length-scale in metallic glass-forming liquids.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Snake: a Stochastic Proximal Gradient Algorithm for Regularized Problems over Large Graphs
A regularized optimization problem over a large unstructured graph is studied, where the regularization term is tied to the graph geometry. Typical regularization examples include the total variation and the Laplacian regularizations over the graph. When applying the proximal gradient algorithm to solve this problem, there exist quite affordable methods to implement the proximity operator (backward step) in the special case where the graph is a simple path without loops. In this paper, an algorithm, referred to as "Snake", is proposed to solve such regularized problems over general graphs by taking advantage of these fast methods. The algorithm consists of properly selecting random simple paths in the graph and performing the proximal gradient algorithm over these simple paths. This algorithm is an instance of a new general stochastic proximal gradient algorithm, whose convergence is proven. Applications to trend filtering and graph inpainting are provided, among others. Numerical experiments are conducted over large graphs. (A sketch of the underlying forward-backward pattern follows this record.)
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
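The backward step referred to above is the proximity operator inside a standard proximal gradient (forward-backward) iteration. A generic sketch of that pattern; the Snake-specific part, evaluating the prox quickly along random simple paths, is abstracted into `prox_g`, and the lasso toy problem is ours.

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, step, iters):
    """Iterate x <- prox_g(x - step * grad_f(x), step)."""
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Toy instance: least squares + l1 penalty, whose prox is soft-thresholding
rng = np.random.default_rng(0)
A, b, lam = rng.normal(size=(20, 5)), rng.normal(size=20), 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of grad_f
x = proximal_gradient(lambda x: A.T @ (A @ x - b),
                      lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t * lam, 0),
                      np.zeros(5), step, 500)
print(x)
```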
Stability of the coexistent superconducting-nematic phase under the presence of intersite interactions
We analyze the effect of intersite-interaction terms on the stability of the coexisting superconducting-nematic phase (SC+N) within the extended Hubbard and $t$-$J$-$U$ models on the square lattice. In order to take the correlation effects into account with proper precision, we use the approach based on the \textit{diagrammatic expansion of the Gutzwiller wave function} (DE-GWF), which goes beyond the renormalized mean field theory (RMFT) in a systematic manner. As a starting point of our analysis we discuss the stability region of the SC+N phase on the intrasite Coulomb repulsion-hole doping plane for the case of the Hubbard model. Next, we show that the exchange interaction term enhances superconductivity while suppressing nematicity, whereas the intersite Coulomb repulsion term acts in the opposite manner. The competing character of the interplay between the SC and N phases is clearly visible throughout the analysis. A universal conclusion is that the nematic phase does not survive within the $t$-$J$-$U$ model for the value of the $J$ integral typical of the high-T$_C$ cuprates ($J\approx 0.1$eV). For the sake of completeness, the effect of the correlated hopping term is also analyzed. Thus the present discussion contains all relevant two-site interaction terms which appear in the parametrized one-band model within the second quantization scheme. At the end, the influence of the higher-order terms of the diagrammatic expansion on the rotational symmetry breaking is also shown by comparing the DE-GWF results with those corresponding to the RMFT.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Multi-Advisor Reinforcement Learning
We consider tackling a single-agent RL problem by distributing it to $n$ learners. These learners, called advisors, endeavour to solve the problem from a different focus. Their advice, taking the form of action values, is then communicated to an aggregator, which is in control of the system. We show that the local planning method for the advisors is critical and that none of the ones found in the literature is flawless: the egocentric planning overestimates values of states where the other advisors disagree, and the agnostic planning is inefficient around danger zones. We introduce a novel approach called empathic and discuss its theoretical aspects. We empirically examine and validate our theoretical findings on a fruit collection task.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Using CMB spectral distortions to distinguish between dark matter solutions to the small-scale crisis
The dissipation of small-scale perturbations in the early universe produces a distortion in the blackbody spectrum of cosmic microwave background photons. In this work, we propose to use these distortions as a probe of the microphysics of dark matter on scales $1\,\textrm{Mpc}^{-1}\lesssim k \lesssim 10^{4}\,\textrm{Mpc}^{-1}$. We consider in particular models in which the dark matter is kinetically coupled to either neutrinos or photons until shortly before recombination, and compute the photon heating rate and the resultant $\mu$-distortion in both cases. We show that the $\mu$-parameter is generally enhanced relative to $\Lambda$CDM for interactions with neutrinos, and may be either enhanced or suppressed in the case of interactions with photons. The deviations from the $\Lambda$CDM signal are potentially within the sensitivity reach of a PRISM-like experiment if $\sigma_{\textrm{DM}-\gamma} \gtrsim 1.1\times10^{-30} \left(m_{\textrm{DM}}/\textrm{GeV}\right) \textrm{cm}^{2}$ and $\sigma_{\textrm{DM}-\nu} \gtrsim 4.8\times 10^{-32} \left(m_{\textrm{DM}}/\textrm{GeV}\right) \textrm{cm}^{2}$ for time-independent cross sections, and $\sigma^{0}_{\textrm{DM}-\gamma} \gtrsim 1.8 \times 10^{-40} \left(m_{\textrm{DM}}/\textrm{GeV}\right) \textrm{cm}^{2}$ and $\sigma^{0}_{\textrm{DM}-\nu} \gtrsim 2.5 \times 10^{-47} \left(m_{\textrm{DM}}/\textrm{GeV}\right) \textrm{cm}^{2}$ for cross sections scaling as temperature squared, coinciding with the parameter regions in which late kinetic decoupling may serve as a solution to the small-scale crisis. Furthermore, these $\mu$-distortion signals differ from those of warm dark matter (no deviation from $\Lambda$CDM) and a suppressed primordial power spectrum (strongly suppressed or a negative $\mu$-parameter), demonstrating that CMB spectral distortion can potentially be used to distinguish between solutions to the small-scale crisis.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Gradient Hyperalignment for multi-subject fMRI data alignment
Multi-subject fMRI data analysis is an interesting and challenging problem in human brain decoding studies. The inherent anatomical and functional variability across subjects makes it necessary to perform both anatomical and functional alignment before classification analysis. Besides, when it comes to big data, time complexity becomes a problem that cannot be ignored. This paper proposes Gradient Hyperalignment (Gradient-HA) as a gradient-based functional alignment method that is suitable for multi-subject fMRI datasets with large numbers of samples and voxels. The advantage of Gradient-HA is that it can address independence and high-dimension problems by using Independent Component Analysis (ICA) and Stochastic Gradient Ascent (SGA). Validation using multi-classification tasks on big data demonstrates that the Gradient-HA method has lower time complexity and better or comparable performance compared with other state-of-the-art functional alignment methods.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
Using Task Descriptions in Lifelong Machine Learning for Improved Performance and Zero-Shot Transfer
Knowledge transfer between tasks can improve the performance of learned models, but requires an accurate estimate of the inter-task relationships to identify the relevant knowledge to transfer. These inter-task relationships are typically estimated based on training data for each task, which is inefficient in lifelong learning settings where the goal is to learn each consecutive task rapidly from as little data as possible. To reduce this burden, we develop a lifelong learning method based on coupled dictionary learning that utilizes high-level task descriptions to model the inter-task relationships. We show that using task descriptors improves the performance of the learned task policies, providing both theoretical justification for the benefit and empirical demonstration of the improvement across a variety of learning problems. Given only the descriptor for a new task, the lifelong learner is also able to accurately predict a model for the new task through zero-shot learning using the coupled dictionary, eliminating the need to gather training data before addressing the task.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Giant Unification Theory of the Grand Unification and Gravitation Theories
Because the grand unification theory of the gauge theories of the strong, weak and electromagnetic interactions is based on principal bundle theory, while gravitational theory is based on tangent vector bundle theory, the four basic interactions cannot be unified within principal bundle theory alone. This Letter presents a giant unification theory of the grand unification theory and gravitation theory, i.e., a giant unification theory of the strong, weak, electromagnetic and gravitational interactions, built on general fiber bundle theory, symmetry, the quantitative causal principle (QCP), and so on. Consequently, the research of this Letter rests on rigorous mathematical and physical foundations. The Lagrangians of the well-known fundamental physics interactions are uniformly deduced from the QCP and satisfy the gauge invariance principle for general gauge fields interacting with fermion and/or boson fields. The geometric and physical meanings of the gauge invariance of different physical systems are revealed, and it is found that all the Lagrangians of the well-known fundamental physics interactions are composed of the invariant quantities of the corresponding spacetime structures. The difficulty that the fundamental physics interactions and the Noether theorem could not previously be given a unified treatment is overcome: a unified description and origin of the fundamental physics interactions and the Noether theorem are given via the QCP, and their second-order general Euler-Lagrange equations and corresponding Noether conservation currents are derived in general curved spacetime. Therefore, using the new unification theory, much research across different branches of physics can be redone and expressed more simply, with clear quantitative causal physical meaning.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Generalized Rich-Club Ordering in Networks
Rich-club ordering refers to the tendency of nodes with a high degree to be more interconnected than expected. In this paper we consider the concept of rich-club ordering when generalized to structural measures that differ from the node degree and to non-structural measures (i.e. to node metadata). The differences in considering rich-club ordering (RCO) with respect to both structural and non-structural measures is then discussed in terms of employed coefficients and of appropriate null models (link rewiring vs metadata reshuffling). Once a framework for the evaluation of generalized rich-club ordering (GRCO) is defined, we investigate such a phenomenon in real networks provided with node metadata. By considering different notions of node richness, we compare structural and non-structural rich-club ordering, observing how external information about the network nodes is able to validate the presence of rich-clubs in networked systems.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
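The quantity generalized in the abstract above can be written down directly: for any notion of node richness $r(v)$ (degree, another structural measure, or node metadata), the rich-club coefficient is the link density among nodes richer than a threshold, compared against the appropriate null model (link rewiring for structural richness, metadata reshuffling for non-structural). A minimal sketch with illustrative names.

```python
def rich_club(adj, richness, r):
    """Link density among nodes whose richness exceeds r.

    adj:      {node: set(neighbours)} for an undirected graph
    richness: {node: value} -- degree or any node metadata
    """
    club = {v for v, x in richness.items() if x > r}
    n = len(club)
    if n < 2:
        return float("nan")
    links = sum(len(adj[v] & club) for v in club) // 2   # edges inside the club
    return 2.0 * links / (n * (n - 1))

# Normalized coefficient: rich_club(G)/rich_club(null), where the null is a
# degree-preserving rewired graph (or the same graph with reshuffled metadata).
```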
Integrating Proactive Mode Changes in Mixed Criticality Systems
In this work, we propose to integrate prediction algorithms into the scheduling of mode changes under Earliest-Deadline-First and Fixed-Priority scheduling in mixed-criticality real-time systems. The method proactively schedules a mode change in the system based on state variables such as laxity, i.e., the temporal distance between the completion time of a task instance and its respective deadline, expressed as a percentage of the deadline (D) stipulated for the task, in order to minimize deadline misses. The simulation model was validated against an analytical model prior to the logical integration of the Kalman-based prediction algorithm. Two case studies are presented, one covering the earliest-deadline-first and the other the fixed-priority scheduling approach. The results show the gains from adopting the prediction approach in both scheduling paradigms, with a significant reduction in the number of missed deadlines for low-criticality tasks.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Leveraging Continuous Material Averaging for Inverse Electromagnetic Design
Inverse electromagnetic design has emerged as a way of efficiently designing active and passive electromagnetic devices. This maturing strategy involves optimizing the shape or topology of a device in order to improve a figure of merit--a process which is typically performed using some form of steepest descent algorithm. Naturally, this requires that we compute the gradient of a figure of merit which describes device performance, potentially with respect to many design variables. In this paper, we introduce a new strategy based on smoothing abrupt material interfaces which enables us to efficiently compute these gradients with high accuracy irrespective of the resolution of the underlying simulation. This has advantages over previous approaches to shape and topology optimization in nanophotonics which are either prone to gradient errors or place important constraints on the shape of the device. As a demonstration of this new strategy, we optimize a non-adiabatic waveguide taper between a narrow and wide waveguide. This optimization leads to a non-intuitive design with a very low insertion loss of only 0.041 dB at 1550 nm.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Robust Speech Recognition Using Generative Adversarial Networks
This paper describes a general, scalable, end-to-end framework that uses the generative adversarial network (GAN) objective to enable robust speech recognition. Encoders trained with the proposed approach enjoy improved invariance by learning to map noisy audio to the same embedding space as that of clean audio. Unlike previous methods, the new framework does not rely on domain expertise or simplifying assumptions as are often needed in signal processing, and directly encourages robustness in a data-driven way. We show the new approach improves simulated far-field speech recognition of vanilla sequence-to-sequence models without specialized front-ends or preprocessing.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Minimal inequalities for an infinite relaxation of integer programs
We show that maximal $S$-free convex sets are polyhedra when $S$ is the set of integral points in some rational polyhedron of $\mathbb{R}^n$. This result extends a theorem of Lovász characterizing maximal lattice-free convex sets. Our theorem has implications in integer programming. In particular, we show that maximal $S$-free convex sets are in one-to-one correspondence with minimal inequalities.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The population of SNe/SNRs in the starburst galaxy Arp 220. A self-consistent analysis of 20 years of VLBI monitoring
The nearby ultra-luminous infrared galaxy (ULIRG) Arp 220 is an excellent laboratory for studies of extreme astrophysical environments. For 20 years, Very Long Baseline Interferometry (VLBI) has been used to monitor a population of compact sources thought to be supernovae (SNe), supernova remnants (SNRs) and possibly active galactic nuclei (AGNs). Using new and archival VLBI data spanning 20 years, we obtain 23 high-resolution radio images of Arp 220 at wavelengths from 18 cm to 2 cm. From model-fitting to the images we obtain estimates of flux densities and sizes of all detected sources. We detect radio continuum emission from 97 compact sources and present flux densities and sizes for all analysed observation epochs. We find evidence for an LD-relation within Arp 220, with larger sources being less luminous. We find a compact-source LF $n(L)\propto L^\beta$ with $\beta=-2.02\pm0.11$, similar to SNRs in normal galaxies. Based on simulations, we argue that there are many relatively large and weak sources below our detection threshold. The rapidly declining object 0.2227+0.482 is proposed as a possible AGN candidate. The observations can be explained by a mixed population of SNe and SNRs, where the former expand in a dense circumstellar medium (CSM) and the latter interact with the surrounding interstellar medium (ISM). Several sources (9 of the 94 with fitted sizes) are likely luminous type IIn SNe. This number of luminous SNe corresponds to a few percent of the total number of SNe in Arp 220, which is consistent with a total SN rate of 4 yr$^{-1}$ as inferred from the total radio emission, given a normal stellar initial mass function (IMF). Based on the fitted luminosity function, we argue that emission from all compact sources, including those below our detection threshold, makes up at most 20\% of the total radio emission at GHz frequencies.
0
1
0
0
0
0
Dirac and Chiral Quantum Spin Liquids on the Honeycomb Lattice in a Magnetic Field
Motivated by recent experimental observations in $\alpha$-RuCl$_3$, we study the $K$-$\Gamma$ model on the honeycomb lattice in an external magnetic field. By a slave-particle representation and Variational Monte Carlo calculations, we reproduce the phase transition from zigzag magnetic order to a field-induced disordered phase. The nature of this state depends crucially on the field orientation. For particular field directions in the honeycomb plane, we find a gapless Dirac spin liquid, in agreement with recent experiments on $\alpha$-RuCl$_3$. For a range of out-of-plane fields, we predict the existence of a Kalmeyer-Laughlin-type chiral spin liquid, which would show an integer-quantized thermal Hall effect.
0
1
0
0
0
0
Secular Dynamics of an Exterior Test Particle: The Inverse Kozai and Other Eccentricity-Inclination Resonances
The behavior of an interior test particle in the secular 3-body problem has been studied extensively. A well-known feature is the Lidov-Kozai resonance in which the test particle's argument of periapse librates about $\pm 90^\circ$ and large oscillations in eccentricity and inclination are possible. Less explored is the inverse problem: the dynamics of an exterior test particle and an interior perturber. We survey numerically the inverse secular problem, expanding the potential to hexadecapolar order and correcting an error in the published expansion. Four secular resonances are uncovered that persist in full $N$-body treatments (in what follows, $\varpi$ and $\Omega$ are the longitudes of periapse and of ascending node, $\omega$ is the argument of periapse, and subscripts 1 and 2 refer to the inner perturber and outer test particle): (i) an orbit-flipping quadrupole resonance requiring a non-zero perturber eccentricity $e_1$, in which $\Omega_2-\varpi_1$ librates about $\pm 90^\circ$; (ii) a hexadecapolar resonance (the "inverse Kozai" resonance) for perturbers that are circular or nearly so and inclined by $I \simeq 63^\circ/117^\circ$, in which $\omega_2$ librates about $\pm 90^\circ$ and which can vary the particle eccentricity by $\Delta e_2 \simeq 0.2$ and lead to orbit crossing; (iii) an octopole "apse-aligned" resonance at $I \simeq 46^\circ/107^\circ$ wherein $\varpi_2 - \varpi_1$ librates about $0^\circ$ and $\Delta e_2$ grows with $e_1$; and (iv) an octopole resonance at $I \simeq 73^\circ/134^\circ$ wherein $\varpi_2 + \varpi_1 - 2 \Omega_2$ librates about $0^\circ$ and $\Delta e_2$ can be as large as 0.3 for small $e_1 \neq 0$. The more eccentric the perturber, the more the particle's eccentricity and inclination vary; also, more polar orbits are more chaotic. Our inverse solutions may be applied to the Kuiper belt and debris disks, circumbinary planets, and stellar systems.
0
1
0
0
0
0
A pulsed, mono-energetic and angular-selective UV photo-electron source for the commissioning of the KATRIN experiment
The KATRIN experiment aims to determine the neutrino mass scale with a sensitivity of 200 meV/c^2 (90% C.L.) by a precision measurement of the shape of the tritium $\beta$-spectrum in the endpoint region. The energy analysis of the decay electrons is achieved by a MAC-E filter spectrometer. To determine the transmission properties of the KATRIN main spectrometer, a mono-energetic and angular-selective electron source has been developed. In preparation for the second commissioning phase of the main spectrometer, a measurement phase was carried out at the KATRIN monitor spectrometer where the device was operated in a MAC-E filter setup for testing. The results of these measurements are compared with simulations using the particle-tracking software "Kassiopeia", which was developed in the KATRIN collaboration over recent years.
0
1
0
0
0
0
Carrier frequency modulation of an acousto-optic modulator for laser stabilization
The stabilization of lasers to absolute frequency references is a fundamental requirement in several areas of atomic, molecular and optical physics. A range of techniques are available to produce a suitable reference onto which one can 'lock' the laser, many of which depend on the specific internal structure of the reference or are sensitive to laser intensity noise. We present a novel method using the frequency modulation of an acousto-optic modulator's carrier (drive) signal to generate two spatially separated beams, with a frequency difference of only a few MHz. These beams are used to probe a narrow absorption feature and the difference in their detected signals leads to a dispersion-like feature suitable for wavelength stabilization of a diode laser. This simple and versatile method only requires a narrow absorption line and is therefore suitable for both atomic and cavity based stabilization schemes. To demonstrate the suitability of this method we lock an external cavity diode laser near the $^{85}\mathrm{Rb}\,5S_{1/2}\rightarrow5P_{3/2}, F=3\rightarrow F^{\prime}=4$ transition using sub-Doppler pump-probe spectroscopy and also demonstrate excellent agreement between the measured signal and a theoretical model.
0
1
0
0
0
0
Solving a New 3D Bin Packing Problem with Deep Reinforcement Learning Method
In this paper, a new type of 3D bin packing problem (BPP) is proposed, in which a number of cuboid-shaped items must be put into a bin one by one orthogonally. The objective is to find a way to place these items that can minimize the surface area of the bin. This problem is based on the fact that there is no fixed-sized bin in many real business scenarios and the cost of a bin is proportional to its surface area. Our research shows that this problem is NP-hard. Based on previous research on 3D BPP, the surface area is determined by the sequence, spatial locations and orientations of items. Among these factors, the sequence of items plays a key role in minimizing the surface area. Inspired by recent achievements of deep reinforcement learning (DRL) techniques, especially Pointer Network, on combinatorial optimization problems such as TSP, a DRL-based method is applied to optimize the sequence of items to be packed into the bin. Numerical results show that the method proposed in this paper achieves about a 5% improvement over the heuristic method.
1
0
0
0
0
0
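The objective described in the abstract above, the surface area of the (non-fixed) bin, is straightforward to state in code. A minimal sketch under an assumed placement representation (minimum corner plus oriented dimensions per item; all names are hypothetical):

```python
def bin_surface_area(placements):
    """Surface area of the tightest axis-aligned bin containing all items.

    `placements` is a list of (x, y, z, l, w, h): the minimum corner and
    the oriented dimensions of each placed cuboid.
    """
    L = max(x + l for x, y, z, l, w, h in placements)
    W = max(y + w for x, y, z, l, w, h in placements)
    H = max(z + h for x, y, z, l, w, h in placements)
    return 2 * (L * W + L * H + W * H)

# Two unit cubes stacked along z form a 1 x 1 x 2 bin with area 10:
print(bin_surface_area([(0, 0, 0, 1, 1, 1), (0, 0, 1, 1, 1, 1)]))  # 10
```

A sequence model such as the Pointer Network mentioned above would then be trained to order the items so that this quantity, evaluated after a placement heuristic, is minimized.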
Entendendo o Pensamento Computacional
The goal of this article is to clarify the meaning of Computational Thinking. We differentiate logical from computational reasoning and discuss the importance of Computational Thinking in solving problems. The three pillars of Computational Thinking - Abstraction, Automation and Analysis - are outlined, highlighting the role of each one in developing the skills needed for the problem-solving process.
1
0
0
0
0
0
Estimation of samples relevance by their histograms
The problem of estimating the relevance of a set of histograms generated by samples of a discrete-time process is discussed on the basis of the variational principles proposed in the previous paper [1]. Some conditions for dimension reduction of the corresponding linear programming problems are also presented.
0
0
1
0
0
0
On the synthesis of acoustic sources with controllable near fields
In this paper we present a strategy for the synthesis of acoustic sources with controllable near fields in free space and in finite-depth homogeneous ocean environments. We first present the theoretical results at the basis of our discussion and then, to illustrate our findings, we focus on the following three particular examples: 1. an acoustic source approximating a prescribed field pattern in a given bounded sub-region of its near field; 2. an acoustic source approximating different prescribed field patterns in given disjoint bounded near-field sub-regions; 3. an acoustic source approximating a prescribed back-propagating field in a given bounded near-field sub-region while maintaining a very low far-field signature. For each of these three examples, we discuss the optimization scheme used to approximate their solutions and support our claims through relevant numerical simulations.
0
0
1
0
0
0
Inverse Protein Folding Problem via Quadratic Programming
This paper presents a method for reconstructing the primary structure of a protein that folds into a given geometrical shape. The method predicts the primary structure of a protein, restoring its linear sequence of amino acids in the polypeptide chain from the tertiary structure of the molecule. Unknown amino acids are determined according to the principle of energy minimization. The study formulates the inverse folding problem as a quadratic optimization problem and uses different relaxation techniques to reduce it to a convex optimization problem. A computational experiment compares the quality of these approaches on real protein structures.
0
0
1
0
0
0
Spoken Language Understanding on the Edge
We consider the problem of performing Spoken Language Understanding (SLU) on small devices typical of IoT applications. Our contributions are twofold. First, we outline the design of an embedded, private-by-design SLU system and show that it has performance on par with cloud-based commercial solutions. Second, we release the datasets used in our experiments in the interest of reproducibility and in the hope that they can prove useful to the SLU community.
1
0
0
0
0
0
On the link between column density distribution and density scaling relation in star formation regions
We present a method to derive the density scaling relation $\langle n\rangle \propto L^{-\alpha}$ in regions of star formation or in their turbulent vicinities from straightforward binning of the column-density distribution ($N$-pdf). The outcome of the method is studied for three types of $N$-pdf: power law ($7/5\le\alpha\le5/3$), lognormal ($0.7\lesssim\alpha\lesssim1.4$) and combination of lognormals. In the last case, the method of Stanchev et al. (2015) was also applied for comparison and a very weak (or close to zero) correlation was found. We conclude that the considered `binning approach' reflects the local morphology of the $N$-pdf rather than the physical conditions in the considered region. The rough consistency of the derived slopes with the widely adopted Larson's (1981) value $\alpha\sim1.1$ is suggested to support claims that the density-size relation in molecular clouds is indeed an artifact of the observed $N$-pdf.
0
1
0
0
0
0
Measuring Effectiveness of Video Advertisements
Advertisements are unavoidable in modern society. Times Square is notorious for its incessant display of advertisements. Its popularity is worldwide and smaller cities possess miniature versions of the display, such as Pittsburgh and its digital works in Oakland on Forbes Avenue. Tokyo's Ginza district recently rose to popularity due to its upscale shops and constant onslaught of advertisements to pedestrians. Advertisements arise in other mediums as well. For example, they help popular streaming services, such as Spotify, Hulu, and Youtube TV gather significant streams of revenue to reduce the cost of monthly subscriptions for consumers. Ads provide an additional source of money for companies and entire industries to allocate resources toward alternative business motives. They are attractive to companies and nearly unavoidable for consumers. One challenge for advertisers is examining an advertisement's effectiveness or usefulness in conveying a message to their targeted demographics. Rather than constructing a single, static image of content, a video advertisement possesses hundreds of frames of data with varying scenes, actors, objects, and complexity. Therefore, measuring the effectiveness of video advertisements is important to a billion-dollar industry. This paper explores the combination of human-annotated features and common video processing techniques to predict effectiveness ratings of advertisements collected from Youtube. This task is treated as a binary (effective vs. non-effective), four-way, and five-way machine learning classification task. The first findings on this small dataset in terms of accuracy and inference, representing some of the first research on video advertisements, are presented. Accuracies of 84\%, 65\%, and 55\% are reached on the binary, four-way, and five-way tasks respectively.
1
0
0
0
0
0
Multilinear compressive sensing and an application to convolutional linear networks
We study a deep linear network endowed with a structure. It takes the form of a matrix $X$ obtained by multiplying $K$ matrices (called factors and corresponding to the action of the layers). The action of each layer (i.e. a factor) is obtained by applying a fixed linear operator to a vector of parameters satisfying a constraint. The number of layers is not limited. Assuming that $X$ is given and factors have been estimated, the error between the product of the estimated factors and $X$ (i.e. the reconstruction error) is either the statistical or the empirical risk. In this paper, we provide necessary and sufficient conditions on the network topology under which a stability property holds. The stability property requires that the error on the parameters defining the factors (i.e. the stability of the recovered parameters) scales linearly with the reconstruction error (i.e. the risk). Therefore, under these conditions on the network topology, any successful learning task leads to stably defined features and therefore interpretable layers/network. In order to do so, we first evaluate how the Segre embedding and its inverse distort distances. Then, we show that any deep structured linear network can be cast as a generic multilinear problem (that uses the Segre embedding). This is the {\em tensorial lifting}. Using the tensorial lifting, we provide necessary and sufficient conditions for the identifiability of the factors (up to a scale rearrangement). We finally provide the necessary and sufficient condition called \NSPlong~(because of the analogy with the usual Null Space Property in the compressed sensing framework) which guarantees that the stability property holds. We illustrate the theory with a practical example where the deep structured linear network is a convolutional linear network. As expected, the conditions are rather strong but not empty. A simple test on the network topology can be implemented to test if the condition holds.
0
0
1
1
0
0
Normalization of closed Ekedahl-Oort strata
We apply our theory of partial flag spaces developed in previous articles to study a group-theoretical generalization of the canonical filtration of a truncated Barsotti-Tate group of level 1. As an application, we determine explicitly the normalization of the Zariski closures of Ekedahl-Oort strata of Shimura varieties of Hodge-type as certain closed coarse strata in the associated partial flag spaces.
0
0
1
0
0
0
Robustness of functional networks at criticality against structural defects
The robustness of dynamical properties of neuronal networks against structural damages is a central problem in computational and experimental neuroscience. Research has shown that the cortical network of a healthy brain works near a critical state, and moreover, that functional neuronal networks often have scale-free and small-world properties. In this work, we study how the robustness of simple functional networks at criticality is affected by structural defects. In particular, we consider a 2D Ising model at the critical temperature and investigate how its functional network changes with the increasing degree of structural defects. We show that the scale-free and small-world properties of the functional network at criticality are robust against large degrees of structural lesions while the system remains below the percolation limit. Although the Ising model is only a conceptual description of a two-state neuron, our research reveals fundamental robustness properties of functional networks derived from classical statistical mechanics models.
0
0
0
0
1
0
Semi-Supervised AUC Optimization based on Positive-Unlabeled Learning
Maximizing the area under the receiver operating characteristic curve (AUC) is a standard approach to imbalanced classification. So far, various supervised AUC optimization methods have been developed and they are also extended to semi-supervised scenarios to cope with small sample problems. However, existing semi-supervised AUC optimization methods rely on strong distributional assumptions, which are rarely satisfied in real-world problems. In this paper, we propose a novel semi-supervised AUC optimization method that does not require such restrictive assumptions. We first develop an AUC optimization method based only on positive and unlabeled data (PU-AUC) and then extend it to semi-supervised learning by combining it with a supervised AUC optimization method. We theoretically prove that, without the restrictive distributional assumptions, unlabeled data contribute to improving the generalization performance in PU and semi-supervised AUC optimization methods. Finally, we demonstrate the practical usefulness of the proposed methods through experiments.
1
0
0
1
0
0
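A hedged sketch of the PU ingredient described in the abstract above. One standard way to obtain an AUC-type risk from positive and unlabeled data is to rewrite the expectation over negatives via $E_N = (E_U - \theta E_P)/(1-\theta)$, where $\theta$ is the class prior; this is the general recipe rather than necessarily the paper's exact estimator, and the squared pairwise loss and all names below are illustrative assumptions.

```python
import numpy as np

def pu_auc_risk(score_p, score_u, prior):
    """AUC-type risk estimated from positive and unlabeled scores only.

    Applies the identity  E_N = (E_U - prior * E_P) / (1 - prior)  to the
    pairwise loss l(m) = (1 - m)^2 with margin m = f(x_pos) - f(x_neg).
    """
    loss = lambda m: (1.0 - m) ** 2
    pu = loss(score_p[:, None] - score_u[None, :]).mean()  # P vs U pairs
    pp = loss(score_p[:, None] - score_p[None, :]).mean()  # P vs P correction
    return (pu - prior * pp) / (1.0 - prior)

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, 200)                                     # positive scores
unl = rng.normal(np.where(rng.random(500) < 0.4, 1.0, -1.0), 1.0)   # unlabeled, 40% positive
print(pu_auc_risk(pos, unl, prior=0.4))
```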
Identifiability of Nonparametric Mixture Models and Bayes Optimal Clustering
Motivated by problems in data clustering, we establish general conditions under which families of nonparametric mixture models are identifiable, by introducing a novel framework involving clustering overfitted \emph{parametric} (i.e. misspecified) mixture models. These identifiability conditions generalize existing conditions in the literature, and are flexible enough to include for example mixtures of Gaussian mixtures. In contrast to the recent literature on estimating nonparametric mixtures, we allow for general nonparametric mixture components, and instead impose regularity assumptions on the underlying mixing measure. As our primary application, we apply these results to partition-based clustering, generalizing the notion of a Bayes optimal partition from classical parametric model-based clustering to nonparametric settings. Furthermore, this framework is constructive so that it yields a practical algorithm for learning identified mixtures, which is illustrated through several examples on real data. The key conceptual device in the analysis is the convex, metric geometry of probability measures on metric spaces and its connection to the Wasserstein convergence of mixing measures. The result is a flexible framework for nonparametric clustering with formal consistency guarantees.
0
0
0
1
0
0
The geometric classification of Leibniz algebras
We describe all rigid algebras and all irreducible components in the variety of four dimensional Leibniz algebras $\mathfrak{Leib}_4$ over $\mathbb{C}.$ In particular, we prove that the Grunewald--O'Halloran conjecture is not valid and the Vergne conjecture is valid for $\mathfrak{Leib}_4.$
0
0
1
0
0
0
Discrete fundamental groups of Warped Cones and expanders
In this paper we compute the discrete fundamental groups of warped cones. As an immediate consequence, this allows us to show that there exist coarsely simply-connected expanders and superexpanders. This also provides a strong coarse invariant of warped cones and implies that many warped cones cannot be coarsely equivalent to any box space.
0
0
1
0
0
0
Simulated Tempering Method in the Infinite Switch Limit with Adaptive Weight Learning
We investigate the theoretical foundations of the simulated tempering method and use our findings to design efficient algorithms. Employing a large deviation argument first used for replica exchange molecular dynamics [Plattner et al., J. Chem. Phys. 135:134111 (2011)], we demonstrate that the most efficient approach to simulated tempering is to vary the temperature infinitely rapidly. In this limit, we can replace the equations of motion for the temperature and physical variables by averaged equations for the latter alone, with the forces rescaled according to a position-dependent function defined in terms of temperature weights. The averaged equations are similar to those used in Gao's integrated-over-temperature method, except that we show that it is better to use a continuous rather than a discrete set of temperatures. We give a theoretical argument for the choice of the temperature weights as the reciprocal partition function, thereby relating simulated tempering to Wang-Landau sampling. Finally, we describe a self-consistent algorithm for simultaneously sampling the canonical ensemble and learning the weights during simulation. This algorithm is tested on a system of harmonic oscillators as well as a continuous variant of the Curie-Weiss model, where it is shown to perform well and to accurately capture the second-order phase transition observed in this model.
0
0
0
1
0
0
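A minimal sketch of the infinite-switch limit described above, under stated assumptions: differentiating the averaged potential $\bar V(x) = -\log\sum_i w_i e^{-\beta_i V(x)}$ gives a force rescaled by a position-dependent average inverse temperature, and the weights are taken as approximate reciprocal partition functions, as the abstract suggests. The harmonic oscillator, the discretized temperature grid and all parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
V  = lambda x: 0.5 * x**2            # harmonic oscillator
dV = lambda x: x
betas = np.linspace(0.2, 2.0, 20)    # a continuous beta range, discretized here
weights = np.sqrt(betas)             # ~ 1/Z(beta), since Z = sqrt(2*pi/beta)

def avg_beta(x):
    """Position-dependent force rescaling <beta>_x in the infinite-switch limit."""
    g = weights * np.exp(-betas * V(x))
    return (betas * g).sum() / g.sum()

# Overdamped Langevin on Vbar(x) = -log sum_i w_i exp(-beta_i V(x)):
x, dt, traj = 0.0, 1e-2, []
for _ in range(50_000):
    x += -avg_beta(x) * dV(x) * dt + np.sqrt(2 * dt) * rng.normal()
    traj.append(x)
print(np.var(traj))   # samples the weighted mixture over all temperatures at once
```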
Analysis and Modeling of 3D Indoor Scenes
We live in a 3D world, performing activities and interacting with objects in the indoor environments everyday. Indoor scenes are the most familiar and essential environments in everyone's life. In the virtual world, 3D indoor scenes are also ubiquitous in 3D games and interior design. With the fast development of VR/AR devices and the emerging applications, the demand for realistic 3D indoor scenes keeps growing rapidly. Currently, designing detailed 3D indoor scenes requires proficient 3D design and modeling skills and is often time-consuming. For novice users, creating realistic and complex 3D indoor scenes is even more difficult and challenging. Many efforts have been made in different research communities, e.g. computer graphics, vision and robotics, to capture, analyze and generate the 3D indoor data. This report mainly focuses on the recent research progress in graphics on geometry, structure and semantic analysis of 3D indoor data and different modeling techniques for creating plausible and realistic indoor scenes. We first review works on understanding and semantic modeling of scenes from captured 3D data of the real world. Then, we focus on the virtual scenes composed of 3D CAD models and study methods for 3D scene analysis and processing. After that, we survey various modeling paradigms for creating 3D indoor scenes and investigate human-centric scene analysis and modeling, which bridge indoor scene studies of graphics, vision and robotics. Lastly, we discuss open problems in indoor scene processing that might be of interest to graphics and all related communities.
1
0
0
0
0
0
Introduction to the Special Issue on Digital Signal Processing in Radio Astronomy
Advances in astronomy are intimately linked to advances in digital signal processing (DSP). This special issue is focused upon advances in DSP within radio astronomy. The trend within that community is to use off-the-shelf digital hardware where possible and leverage advances in high performance computing. In particular, graphics processing units (GPUs) and field programmable gate arrays (FPGAs) are being used in place of application-specific circuits (ASICs); high-speed Ethernet and Infiniband are being used for interconnect in place of custom backplanes. Further, to lower hurdles in digital engineering, communities have designed and released general-purpose FPGA-based DSP systems, such as the CASPER ROACH board, ASTRON Uniboard and CSIRO Redback board. In this introductory article, we give a brief historical overview, a summary of recent trends, and provide an outlook on future directions.
0
1
0
0
0
0
HPDedup: A Hybrid Prioritized Data Deduplication Mechanism for Primary Storage in the Cloud
Eliminating duplicate data in primary storage of clouds increases the cost-efficiency of cloud service providers as well as reduces the cost of users for using cloud services. Existing primary deduplication techniques either use inline caching to exploit locality in primary workloads or use post-processing deduplication running in system idle time to avoid the negative impact on I/O performance. However, neither of them works well in the cloud servers running multiple services or applications for the following two reasons: Firstly, the temporal locality of duplicate data writes may not exist in some primary storage workloads, thus inline caching often fails to achieve a good deduplication ratio. Secondly, post-processing deduplication allows duplicate data to be written into disks, and therefore does not provide the benefit of I/O deduplication and requires high peak storage capacity. This paper presents HPDedup, a Hybrid Prioritized data Deduplication mechanism to deal with the storage system shared by applications running in co-located virtual machines or containers by fusing an inline and a post-processing process for exact deduplication. In the inline deduplication phase, HPDedup provides a fingerprint caching mechanism that estimates the temporal locality of duplicates in data streams from different VMs or applications and prioritizes the cache allocation for these streams based on the estimation. HPDedup also allows different deduplication thresholds for streams based on their spatial locality to reduce disk fragmentation. The post-processing phase removes from disks those duplicates whose fingerprints could not be cached due to weak temporal locality. Our experimental results show that HPDedup clearly outperforms the state-of-the-art primary storage deduplication techniques in terms of inline cache efficiency and primary deduplication efficiency.
1
0
0
0
0
0
Two-domain and three-domain limit cycles in a typical aeroelastic system with freeplay in pitch
Freeplay is a significant source of nonlinearity in aeroelastic systems and is strictly regulated by airworthiness authorities. It splits the phase plane of such systems into three piecewise linear subdomains. Depending on the location of the freeplay, limit cycle oscillations can result that span either two or three of these subdomains. The purpose of this work is to demonstrate the existence of two-domain cycles both theoretically and experimentally. A simple aeroelastic system with pitch, plunge and control deflection degrees of freedom is investigated in the presence of freeplay in pitch. It is shown that two-domain and three-domain cycles can result from a grazing bifurcation and propagate in the decreasing airspeed direction. Close to the bifurcation, the two limit cycle branches interact with each other and aperiodic oscillations ensue. Equivalent linearization is used to derive the conditions of existence of each type of limit cycle and to predict their amplitudes and frequencies. Comparisons with measurements from wind tunnel experiments demonstrate that the theory describes these phenomena with accuracy.
0
1
0
0
0
0
Preference Modeling by Exploiting Latent Components of Ratings
Understanding user preference is essential to the optimization of recommender systems. As feedback on a user's taste, rating scores can directly reflect the preference of a given user for a given product. Uncovering the latent components of user ratings is thus of significant importance for learning user interests. In this paper, a new recommendation approach, called LCR, is proposed by investigating the latent components of user ratings. The basic idea is to decompose an existing rating into several components via a cost-sensitive learning strategy. Specifically, each rating is assigned to several latent factor models and each model is updated according to its predictive errors. Afterwards, the accumulated predictive errors of the models are utilized to decompose a rating into several components, each of which is treated as an independent part to retrain the latent factor models. Finally, all latent factor models are combined linearly to estimate predictive ratings for users. In contrast to existing methods, LCR provides an intuitive preference modeling strategy via multiple component analysis at an individual perspective. Meanwhile, it is verified by the experimental results on several benchmark datasets that the proposed method is superior to the state-of-the-art methods in terms of recommendation accuracy.
1
0
0
0
0
0
Inductive Representation Learning on Large Graphs
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
1
0
0
1
0
0
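The core inductive step described in the abstract above can be sketched in a few lines with the mean aggregator (one of the paper's aggregator choices); the two weight matrices below play the role of the concatenation-then-linear form, and the toy graph, shapes and values are assumptions for illustration. Because the same weights are reused for every node, the layer applies unchanged to nodes unseen at training time.

```python
import numpy as np

def sage_layer(H, adj_list, W_self, W_neigh):
    """One GraphSAGE-style layer with a mean aggregator.

    H        : (num_nodes, d_in) current node features
    adj_list : list of neighbor-index lists
    """
    agg = np.stack([H[nbrs].mean(axis=0) if nbrs else np.zeros(H.shape[1])
                    for nbrs in adj_list])          # mean of neighbor features
    out = H @ W_self + agg @ W_neigh                # combine self and neighborhood
    out = np.maximum(out, 0.0)                      # ReLU
    return out / (np.linalg.norm(out, axis=1, keepdims=True) + 1e-12)  # l2-normalize

rng = np.random.default_rng(3)
H = rng.normal(size=(4, 8))                 # 4 nodes, 8 input features
adj = [[1, 2], [0], [0, 3], [2]]
emb = sage_layer(H, adj, rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
print(emb.shape)  # (4, 16)
```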
Milnor and Tjurina numbers for a hypersurface germ with isolated singularity
Assume that $f:(\mathbb{C}^n,0) \to (\mathbb{C},0)$ is an analytic function germ at the origin with an isolated singularity. Let $\mu$ and $\tau$ be the corresponding Milnor and Tjurina numbers. We show that $\dfrac{\mu}{\tau} \leq n$. As an application, we give a lower bound for the Tjurina number in terms of $n$ and the multiplicity of $f$ at the origin.
0
0
1
0
0
0
Analysis of the Gibbs Sampler for Gaussian hierarchical models via multigrid decomposition
We study the convergence properties of the Gibbs Sampler in the context of posterior distributions arising from Bayesian analysis of Gaussian hierarchical models. We consider centred and non-centred parameterizations as well as their hybrids including the full family of partially non-centred parameterizations. We develop a novel methodology based on multi-grid decompositions to derive analytic expressions for the convergence rates of the algorithm for an arbitrary number of layers in the hierarchy, while previous work was typically limited to the two-level case. Our work gives a complete understanding for the three-level symmetric case and this gives rise to approximations for the non-symmetric case. We also give analogous, if less explicit, results for models of arbitrary level. This theory gives rise to simple and easy-to-implement guidelines for the practical implementation of Gibbs samplers on conditionally Gaussian hierarchical models.
0
0
0
1
0
0
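For concreteness, here is the two-level warm-up case underlying the abstract above, in its centred parameterization: a Gibbs sampler alternating the full conditionals of the group means and the top-level mean. The paper's analysis covers an arbitrary number of layers; this sketch and its parameter values are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two-level Gaussian model: y_ij ~ N(theta_i, sy2), theta_i ~ N(mu, st2), flat prior on mu
sy2, st2, J, n = 1.0, 1.0, 8, 5
theta_true = rng.normal(0.0, np.sqrt(st2), J)
y = theta_true[:, None] + rng.normal(0.0, np.sqrt(sy2), (J, n))

mu, theta, samples = 0.0, np.zeros(J), []
for it in range(5000):
    # theta_i | mu, y  ~  N(precision-weighted mean, 1/precision)
    prec = n / sy2 + 1.0 / st2
    mean = (y.sum(axis=1) / sy2 + mu / st2) / prec
    theta = mean + rng.normal(0.0, np.sqrt(1.0 / prec), J)
    # mu | theta  ~  N(mean(theta), st2 / J)   (flat prior on mu)
    mu = rng.normal(theta.mean(), np.sqrt(st2 / J))
    samples.append(mu)
print(np.mean(samples[1000:]), np.std(samples[1000:]))
```

The convergence rate of exactly this kind of alternating scheme, as a function of the variance ratios and of the parameterization (centred, non-centred, or partially non-centred), is what the multigrid decomposition in the abstract characterizes analytically.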
A Cosmic Selection Rule for Glueball Dark Matter Relic Density
We point out a unique mechanism to produce the relic abundance for glueball dark matter from a gauged $SU(N)_d$ hidden sector which is bridged to the standard model sector through heavy vectorlike quarks colored under gauge interactions from both sides. A necessary ingredient of our assumption is that the vectorlike quarks, produced either thermally or non-thermally, are abundant enough to dominate the universe for some time in the early universe. They later undergo dark color confinement and form unstable vectorlike-quarkonium states, which annihilate and decay, reheating the visible and dark sectors. The ratio of entropy dumped into the two sectors and the final energy budget in the dark glueballs are determined only by low-energy parameters, including the intrinsic scale of the dark $SU(N)_d$, $\Lambda_d$, and the number of dark colors, $N_d$, and depend only weakly on parameters in the ultraviolet such as the vectorlike quark mass or the initial condition. We call this a cosmic selection rule for the glueball dark matter relic density.
0
1
0
0
0
0
Multispectral and Hyperspectral Image Fusion Using a 3-D-Convolutional Neural Network
In this paper, we propose a method using a three-dimensional convolutional neural network (3-D-CNN) to fuse together multispectral (MS) and hyperspectral (HS) images to obtain a high-resolution hyperspectral image. Dimensionality reduction of the hyperspectral image is performed prior to fusion in order to significantly reduce the computational time and make the method more robust to noise. Experiments are performed on a data set simulated using a real hyperspectral image. The results obtained show that the proposed approach is very promising when compared to conventional methods. This is especially true when the hyperspectral image is corrupted by additive noise.
1
0
0
1
0
0
The fundamental Lepage form in variational theory for submanifolds
A setting for global variational geometry on Grassmann fibrations is presented. The integral variational functionals for finite dimensional immersed submanifolds are studied by means of the fundamental Lepage equivalent of a homogeneous Lagrangian, which can be regarded as a generalization of the well-known Hilbert form in classical mechanics. Prolongations of immersions, diffeomorphisms and vector fields to the Grassmann fibrations are introduced as geometric tools for the variations of immersions. The first infinitesimal variation formula together with its consequences, the Euler-Lagrange equations for extremal submanifolds and the Noether theorem for invariant variational functionals are proved. The theory is illustrated on the variational functional for minimal submanifolds.
0
0
1
0
0
0
Adding educational functionalities to classic board games
In this paper we revisit some classic board games like Pachisi or the Game of the Goose. The main contribution of the paper is to design and add some functionalities to the games in order to transform them into serious games, that is, into games with learning and educational purposes. To do that, at the beginning of the game, players choose one or several topics, and during the game, players have to answer questions on these topics in order to move their markers. We choose classic board games because a lot of people are familiar with them, so it is very easy to start playing without wasting time learning game rules, and we think that this is an important element to make the game more attractive to people. To enlarge the number of potential users we have implemented the games using just HTML and JavaScript, and the games can be used in any web browser, on any computer (including tablets), on any architecture (Windows, Mac, Linux), and no internet/server connection is required. The associated software is distributed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 licence and can be obtained at this http URL
1
0
0
0
0
0
Conceptual Frameworks for Building Online Citizen Science Projects
In recent years, citizen science has grown in popularity due to a number of reasons, including the emphasis on informal learning and creativity potential associated with these initiatives. Citizen science projects address research questions from various domains, ranging from Ecology to Astronomy. Due to the advancement of communication technologies, which makes outreach and engagement of wider communities easier, scientists are keen to turn their own research into citizen science projects. However, the development, deployment and management of these projects remains challenging. One of the most important challenges is building the project itself. There is no single tool or framework, which guides the step-by-step development of the project, since every project has specific characteristics, such as geographical constraints or volunteers' mode of participation. Therefore, in this article, we present a series of conceptual frameworks for categorisation, decision and deployment, which guide a citizen science project creator in every step of creating a new project starting from the research question to project deployment. The frameworks are designed with consideration to the properties of already existing citizen science projects and could be easily extended to include other dimensions, which are not currently perceived.
1
0
0
0
0
0
SDN Architecture and Southbound APIs for IPv6 Segment Routing Enabled Wide Area Networks
The SRv6 architecture (Segment Routing based on IPv6 data plane) is a promising solution to support services like Traffic Engineering, Service Function Chaining and Virtual Private Networks in IPv6 backbones and datacenters. The SRv6 architecture has interesting scalability properties as it reduces the amount of state information that needs to be configured in the nodes to support the network services. In this paper, we describe the advantages of complementing the SRv6 technology with an SDN based approach in backbone networks. We discuss the architecture of a SRv6 enabled network based on Linux nodes. In addition, we present the design and implementation of the Southbound API between the SDN controller and the SRv6 device. We have defined a data-model and four different implementations of the API, respectively based on gRPC, REST, NETCONF and remote Command Line Interface (CLI). Since it is important to support both the development and testing aspects, we have realized an intent-based emulation system to build realistic and reproducible experiments. This collection of tools automates most of the configuration aspects, relieving the experimenter from a significant effort. Finally, we have carried out an evaluation of some performance aspects of our architecture and of the different variants of the Southbound APIs, and we have analyzed the effects of the configuration updates in the SRv6 enabled nodes.
1
0
0
0
0
0
Topology-optimized Dual-Polarization Dirac Cones
We apply a large-scale computational technique, known as topology optimization, to the inverse design of photonic Dirac cones. In particular, we report on a variety of photonic crystal geometries, realizable in simple isotropic dielectric materials, which exhibit dual-polarization and dual-wavelength Dirac cones. We demonstrate the flexibility of this technique by designing photonic crystals of different symmetry types, such as ones with four-fold and six-fold rotational symmetry, which possess Dirac cones at different points within the Brillouin zone. The demonstrated and related optimization techniques could open new avenues to band-structure engineering and manipulating the propagation of light in periodic media, with possible applications in exotic optical phenomena such as effective zero-index media and topological photonics.
0
1
0
0
0
0
Reconstruction of Word Embeddings from Sub-Word Parameters
Pre-trained word embeddings improve the performance of a neural model at the cost of increasing the model size. We propose to benefit from this resource without paying the cost by operating strictly at the sub-lexical level. Our approach is quite simple: before task-specific training, we first optimize sub-word parameters to reconstruct pre-trained word embeddings using various distance measures. We report interesting results on a variety of tasks: word similarity, word analogy, and part-of-speech tagging.
1
0
0
0
0
0
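A hedged sketch of the reconstruction step described above: character n-grams as the sub-word units and squared Euclidean distance as the objective are one concrete choice among the "various distance measures" mentioned, and the tiny vocabulary with random "pre-trained" vectors is a stand-in for a real embedding table.

```python
import numpy as np

def ngrams(word, n=3):
    w = f"<{word}>"                       # boundary markers
    return [w[i:i + n] for i in range(len(w) - n + 1)]

rng = np.random.default_rng(4)
vocab = ["cat", "cats", "catalog"]
d = 16
pretrained = {w: rng.normal(size=d) for w in vocab}   # stand-in embeddings

# Sub-word table, trained so that the sum of a word's n-gram vectors
# reconstructs its pre-trained embedding (squared-distance objective).
table = {g: np.zeros(d) for w in vocab for g in ngrams(w)}
for epoch in range(500):
    for w in vocab:
        gs = ngrams(w)
        err = sum(table[g] for g in gs) - pretrained[w]
        for g in gs:
            table[g] -= 0.05 * err        # gradient of 0.5 * ||err||^2

recon = sum(table[g] for g in ngrams("cats"))
print(np.linalg.norm(recon - pretrained["cats"]))     # small after training
```

After this pre-fitting step, the sub-word table (not the word table) is what task-specific training would start from, which is the memory saving the abstract describes.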
Deep Submodular Functions
We start with an overview of a class of submodular functions called SCMMs (sums of concave composed with non-negative modular functions plus a final arbitrary modular). We then define a new class of submodular functions we call {\em deep submodular functions} or DSFs. We show that DSFs are a flexible parametric family of submodular functions that share many of the properties and advantages of deep neural networks (DNNs). DSFs can be motivated by considering a hierarchy of descriptive concepts over ground elements and where one wishes to allow submodular interaction throughout this hierarchy. Results in this paper show that DSFs constitute a strictly larger class of submodular functions than SCMMs. We show that, for any integer $k>0$, there are $k$-layer DSFs that cannot be represented by a $k'$-layer DSF for any $k'<k$. This implies that, like DNNs, there is a utility to depth, but unlike DNNs, the family of DSFs strictly increase with depth. Despite this, we show (using a "backpropagation" like method) that DSFs, even with arbitrarily large $k$, do not comprise all submodular functions. In offering the above results, we also define the notion of an antitone superdifferential of a concave function and show how this relates to submodular functions (in general), DSFs (in particular), negative second-order partial derivatives, continuous submodularity, and concave extensions. To further motivate our analysis, we provide various special case results from matroid theory, comparing DSFs with forms of matroid rank, in particular the laminar matroid. Lastly, we discuss strategies to learn DSFs, and define the classes of deep supermodular functions, deep difference of submodular functions, and deep multivariate submodular functions, and discuss where these can be useful in applications.
1
0
0
0
0
0
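The two building blocks named in the abstract above, sums of concave composed with non-negative modular functions (SCMMs) and their layered extension, are short to write down. A minimal sketch with square root as the concave function at every layer and hand-picked non-negative weights, plus a spot check of diminishing returns; the specific weights are illustrative only.

```python
import numpy as np

sqrt = np.sqrt  # the concave function used at every layer

# Ground set of 4 elements; rows of W1 are non-negative modular weights.
W1 = np.array([[1.0, 2.0, 0.0, 1.0],
               [0.0, 1.0, 3.0, 1.0]])
w2 = np.array([1.0, 0.5])        # non-negative layer-2 mixing weights

def scmm(S):
    """Sum of concave over modular: the 1-layer case."""
    return sum(sqrt(W1[j, list(S)].sum()) for j in range(2))

def dsf2(S):
    """2-layer DSF: concave of a non-negative mix of layer-1 outputs."""
    inner = np.array([sqrt(W1[j, list(S)].sum()) for j in range(2)])
    return sqrt(w2 @ inner)

# Diminishing returns: the gain of adding element 3 shrinks as the set grows.
for f in (scmm, dsf2):
    gain_small = f({0, 3}) - f({0})
    gain_big   = f({0, 1, 2, 3}) - f({0, 1, 2})
    print(gain_small >= gain_big)   # True, True
```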
Inverse problems for the wave equation with under-determined data
We consider the inverse problems of determining the potential or the damping coefficient appearing in the wave equation. We prove the unique determination of these coefficients from a one-point measurement. Since our problem is under-determined, some extra assumptions on the coefficients are required to prove uniqueness.
0
0
1
0
0
0
Spatial interactions and oscillatory tragedies of the commons
A tragedy of the commons (TOC) occurs when individuals acting in their own self-interest deplete commonly-held resources, leading to a worse outcome than had they cooperated. Over time, the depletion of resources can change incentives for subsequent actions. Here, we investigate long-term feedback between game and environment across a continuum of incentives in an individual-based framework. We identify payoff-dependent transition rules that lead to oscillatory TOCs in stochastic simulations and the mean field limit. Further extending the stochastic model, we find that spatially explicit interactions can lead to emergent, localized dynamics, including the propagation of cooperative wave fronts and cluster formation of both social context and resources. These dynamics suggest new mechanisms underlying how TOCs arise and how they might be averted.
0
0
0
0
1
0
An upper bound on transport
The linear growth of operators in local quantum systems leads to an effective lightcone even if the system is non-relativistic. We show that consistency of diffusive transport with this lightcone places an upper bound on the diffusivity: $D \lesssim v^2 \tau_\text{eq}$. The operator growth velocity $v$ defines the lightcone and $\tau_\text{eq}$ is the local equilibration timescale, beyond which the dynamics of conserved densities is diffusive. We verify that the bound is obeyed in various weakly and strongly interacting theories. In holographic models this bound establishes a relation between the hydrodynamic and leading non-hydrodynamic quasinormal modes of planar black holes. Our bound relates transport data --- including the electrical resistivity and the shear viscosity --- to the local equilibration time, even in the absence of a quasiparticle description. In this way, the bound sheds light on the observed $T$-linear resistivity of many unconventional metals, the shear viscosity of the quark-gluon plasma and the spin transport of unitary fermions.
0
1
0
0
0
0
Taylor expansion in linear logic is invertible
Each Multiplicative Exponential Linear Logic (MELL) proof-net can be expanded into a differential net, which is its Taylor expansion. We prove that two different MELL proof-nets have two different Taylor expansions. As a corollary, we prove a completeness result for MELL: We show that the relational model is injective for MELL proof-nets, i.e. the equality between MELL proof-nets in the relational model is exactly axiomatized by cut-elimination.
1
0
0
0
0
0
Collision-Free Multi Robot Trajectory Optimization in Unknown Environments using Decentralized Trajectory Planning
Multi robot systems have the potential to be utilized in a variety of applications. In most of the previous works, trajectory generation for multi robot systems is implemented in known environments. To overcome that, we present an online trajectory optimization algorithm that utilizes communication of the robots' current states to account for the other robots, while using local object-based maps to identify obstacles. Based upon this data, we predict the trajectories expected to be traversed by the robots and utilize them to avoid collisions by formulating regions of free space in which the robot can move without colliding with other robots and obstacles. A trajectory is optimized constraining the robot to remain within this region. The proposed method is tested in simulations on Gazebo using ROS.
1
0
0
0
0
0
On the bottom of spectra under coverings
For a Riemannian covering $M_1\to M_0$ of complete Riemannian manifolds with boundary (possibly empty) and respective fundamental groups $\Gamma_1\subseteq\Gamma_0$, we show that the bottoms of the spectra of $M_0$ and $M_1$ coincide if the right action of $\Gamma_0$ on $\Gamma_1\backslash\Gamma_0$ is amenable.
0
0
1
0
0
0
Theory of ground states for classical Heisenberg spin systems IV
We extend the theory of ground states of classical Heisenberg spin systems previously published to the case where the interaction with an external magnetic field is described by a Zeeman term. The ground state problem for the Heisenberg-Zeeman Hamiltonian can be reduced first to the relative ground state problem, and, in a second step, to the absolute ground state problem for pure Heisenberg Hamiltonians depending on an additional Lagrange parameter. We distinguish between continuous and discontinuous reduction. Moreover, there are various general statements about Heisenberg-Zeeman systems that will be proven under most general assumptions. One topic is the connection between the minimal energy functions $E_{min}$ for the Heisenberg energy and $H_{min}$ for the Heisenberg-Zeeman energy which turn out to be essentially mutual Legendre-Fenchel transforms. This generalization of the traditional Legendre transform is especially suited to cope with situations where the function $E_{min}$ is not convex and consequently there is a magnetization jump at a critical field. Another topic is magnetization and the occurrence of threshold fields $B_{thr}$ and saturation fields $B_{sat}$, where we provide a general formula for the latter. We suggest a distinction between ferromagnetic and anti-ferromagnetic systems based on the vanishing of $B_{sat}$ for the former ones. Parabolic systems are defined in such a way that $E_{min}$ and $H_{min}$ have a particularly simple form and studied in detail. For a large class of parabolic systems the relative ground states can be constructed from the absolute ground state by means of a so-called umbrella family. Finally we provide a counter-example of a parabolic system where this construction is not possible.
0
1
0
0
0
0
Aggressive Deep Driving: Model Predictive Control with a CNN Cost Model
We present a framework for vision-based model predictive control (MPC) for the task of aggressive, high-speed autonomous driving. Our approach uses deep convolutional neural networks to predict cost functions from input video which are directly suitable for online trajectory optimization with MPC. We demonstrate the method in a high speed autonomous driving scenario, where we use a single monocular camera and a deep convolutional neural network to predict a cost map of the track in front of the vehicle. Results are demonstrated on a 1:5 scale autonomous vehicle given the task of high speed, aggressive driving.
1
0
0
0
0
0
K-theory of line bundles and smooth varieties
We give a $K$-theoretic criterion for a quasi-projective variety to be smooth. If $\mathbb{L}$ is a line bundle corresponding to an ample invertible sheaf on $X$, it suffices that $K_q(X) = K_q(\mathbb{L})$ for all $q\le\dim(X)+1$.
0
0
1
0
0
0
Local Two-Sample Testing: A New Tool for Analysing High-Dimensional Astronomical Data
Modern surveys have provided the astronomical community with a flood of high-dimensional data, but analyses of these data often occur after their projection to lower-dimensional spaces. In this work, we introduce a local two-sample hypothesis test framework that an analyst may directly apply to data in their native space. In this framework, the analyst defines two classes based on a response variable of interest (e.g. higher-mass galaxies versus lower-mass galaxies) and determines at arbitrary points in predictor space whether the local proportions of objects that belong to the two classes significantly differs from the global proportion. Our framework has a potential myriad of uses throughout astronomy; here, we demonstrate its efficacy by applying it to a sample of 2487 i-band-selected galaxies observed by the HST ACS in four of the CANDELS program fields. For each galaxy, we have seven morphological summary statistics along with an estimated stellar mass and star-formation rate. We perform two studies: one in which we determine regions of the seven-dimensional space of morphological statistics where high-mass galaxies are significantly more numerous than low-mass galaxies, and vice-versa, and another study where we use SFR in place of mass. We find that we are able to identify such regions, and show how high-mass/low-SFR regions are associated with concentrated and undisturbed galaxies while galaxies in low-mass/high-SFR regions appear more extended and/or disturbed than their high-mass/low-SFR counterparts.
0
1
0
0
0
0
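One simple instantiation of the local test described above (not necessarily the paper's exact construction): take the k nearest neighbors of a query point in the native predictor space and compare the local class proportion with the global one via a normal approximation to the binomial. The synthetic seven-dimensional data and labels below are illustrative stand-ins for the morphology statistics and mass classes.

```python
import numpy as np

def local_two_sample_z(X, labels, x0, k=100):
    """z-score of the local class-1 proportion near x0 versus the global one.

    Local = the k nearest neighbors of x0 in the native predictor space;
    under the null, the local proportion matches the global proportion p.
    """
    p = labels.mean()
    idx = np.argsort(np.linalg.norm(X - x0, axis=1))[:k]
    p_local = labels[idx].mean()
    return (p_local - p) / np.sqrt(p * (1 - p) / k)

rng = np.random.default_rng(5)
X = rng.normal(size=(2487, 7))            # e.g. 7 morphology statistics
labels = (X[:, 0] + 0.3 * rng.normal(size=2487) > 0).astype(float)  # "high mass"
print(local_two_sample_z(X, labels, x0=np.full(7, 1.0)))   # strongly positive
print(local_two_sample_z(X, labels, x0=np.zeros(7)))       # near zero
```

In practice one would evaluate such a statistic on a grid (or at the data points themselves) and correct for the many tests performed, flagging regions where one class is significantly over-represented.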
Are You Tampering With My Data?
We propose a novel approach towards adversarial attacks on neural networks (NN), focusing on tampering the data used for training instead of generating attacks on trained models. Our network-agnostic method creates a backdoor during training which can be exploited at test time to force a neural network to exhibit abnormal behaviour. We demonstrate on two widely used datasets (CIFAR-10 and SVHN) that a universal modification of just one pixel per image for all the images of a class in the training set is enough to corrupt the training procedure of several state-of-the-art deep neural networks causing the networks to misclassify any images to which the modification is applied. Our aim is to bring to the attention of the machine learning community, the possibility that even learning-based methods that are personally trained on public datasets can be subject to attacks by a skillful adversary.
0
0
0
1
0
0
Answering Spatial Multiple-Set Intersection Queries Using 2-3 Cuckoo Hash-Filters
We show how to answer spatial multiple-set intersection queries in O(n(log w)/w + kt) expected time, where n is the total size of the t sets involved in the query, w is the number of bits in a memory word, k is the output size, and c is any fixed constant. This improves the asymptotic performance over previous solutions and is based on an interesting data structure, known as 2-3 cuckoo hash-filters. Our results apply in the word-RAM model (or practical RAM model), which allows for constant-time bit-parallel operations, such as bitwise AND, OR, NOT, and MSB (most-significant 1-bit), as exist in modern CPUs and GPUs. Our solutions apply to any multiple-set intersection queries in spatial data sets that can be reduced to one-dimensional range queries, such as spatial join queries for one-dimensional points or sets of points stored along space-filling curves, which are used in GIS applications.
1
0
0
0
0
0
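The word-RAM ingredient in the abstract above, answering intersections with bit-parallel operations, can be illustrated with packed bitsets; Python integers stand in for the machine words. This shows only the bit-parallel idea, not the 2-3 cuckoo hash-filter structure or its improved bounds.

```python
def intersect_bitsets(sets, universe):
    """Multiple-set intersection via bit-parallel AND over packed words.

    Each set is packed into one big integer; intersecting t sets costs
    t-1 AND operations over ~n/w machine words instead of element-wise
    probing, and the output is read back with lowest-set-bit tricks.
    """
    rank = {v: i for i, v in enumerate(universe)}
    masks = []
    for s in sets:
        m = 0
        for v in s:
            m |= 1 << rank[v]
        masks.append(m)
    out = masks[0]
    for m in masks[1:]:
        out &= m                       # the whole intersection, one AND at a time
    result = []
    while out:
        low = out & -out               # isolate the lowest set bit
        result.append(universe[low.bit_length() - 1])
        out ^= low
    return result

universe = list(range(64))
print(intersect_bitsets([{1, 5, 9, 33}, {5, 9, 60}, {9, 5}], universe))  # [5, 9]
```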
A framework for cascade size calculations on random networks
We present a framework to calculate the cascade size evolution for a large class of cascade models on random network ensembles in the limit of infinite network size. Our method is exact and applies to network ensembles with almost arbitrary degree distribution, degree-degree correlations and, in case of threshold models, with arbitrary threshold distribution. With our approach, we shift the perspective from the known branching process approximations to the iterative update of suitable probability distributions. Such distributions are key to capturing cascade dynamics that involve possibly continuous quantities and that depend on the cascade history, e.g. if load is accumulated over time. These distributions respect the Markovian nature of the studied random processes. Random variables capture the impact of nodes that have failed at any point in the past on their neighborhood. As a proof of concept, we provide two examples: (a) constant load models that cover many of the analytically tractable cascade models, and, as a highlight, (b) a fiber bundle model that was not tractable by branching process approximations before. Our derivations cover the whole cascade dynamics, not only their steady state. This makes it possible to include interventions in time or further model complexity in the analysis.
1
1
0
0
0
0
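For orientation, the scalar fixed-point iteration that the framework above generalizes (and recovers for constant-load threshold models) looks as follows on a z-regular configuration-model network; the update of full probability distributions described in the abstract replaces the scalar q below. The threshold rule and parameter values are illustrative.

```python
from math import comb

def cascade_size(z=5, K=2, rho0=0.05, iters=300):
    """Fixed-point iteration for an absolute-threshold cascade on a z-regular
    configuration-model network: a node fails if initially seeded (prob rho0)
    or if at least K of its z neighbors have failed.

    q = P(a randomly chosen edge points to a failed node); the edge message
    uses z-1 neighbors, the final node-level average uses all z.
    """
    q = rho0
    for _ in range(iters):
        fail = sum(comb(z - 1, m) * q**m * (1 - q)**(z - 1 - m)
                   for m in range(K, z))          # >= K failed among z-1
        q = rho0 + (1 - rho0) * fail
    rho = rho0 + (1 - rho0) * sum(comb(z, m) * q**m * (1 - q)**(z - m)
                                  for m in range(K, z + 1))
    return rho       # expected final fraction of failed nodes

print(cascade_size())
```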
A Gaussian Process Regression Model for Distribution Inputs
Monge-Kantorovich distances, otherwise known as Wasserstein distances, have received a growing attention in statistics and machine learning as a powerful discrepancy measure for probability distributions. In this paper, we focus on forecasting a Gaussian process indexed by probability distributions. For this, we provide a family of positive definite kernels built using transportation based distances. We provide a probabilistic understanding of these kernels and characterize the corresponding stochastic processes. We prove that the Gaussian processes indexed by distributions corresponding to these kernels can be efficiently forecast, opening new perspectives in Gaussian process modeling.
0
0
1
1
0
0
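A hedged sketch of the construction described above for the one-dimensional case, where the squared 2-Wasserstein distance between equal-size empirical measures reduces to matching sorted samples (quantiles), plugged into a Gaussian-type kernel and a GP-mean/kernel-ridge prediction. The data, the target functional and the bandwidth are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def w2_squared(x, y):
    """Squared 2-Wasserstein distance between two 1-D empirical measures
    with the same number of atoms: match sorted samples (quantiles)."""
    return np.mean((np.sort(x) - np.sort(y)) ** 2)

def kernel(x, y, sigma=1.0):
    """Gaussian-type kernel on distributions via the transport distance."""
    return np.exp(-w2_squared(x, y) / (2 * sigma**2))

# GP regression where each "input" is a distribution given by samples:
rng = np.random.default_rng(6)
inputs = [rng.normal(mu, 1.0, 300) for mu in np.linspace(-2, 2, 9)]
targets = np.array([np.mean(s**2) for s in inputs])      # some functional of the input

K = np.array([[kernel(a, b) for b in inputs] for a in inputs])
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(inputs)), targets)  # jittered solve
test = rng.normal(0.5, 1.0, 300)
pred = np.array([kernel(test, b) for b in inputs]) @ alpha
print(pred, 0.5**2 + 1.0)    # GP-mean prediction vs true E[X^2]
```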