Dataset schema (one record per paper; six binary topic labels):
- ID: int64 (values range from 1 to 21k)
- TITLE: string (7 to 239 characters)
- ABSTRACT: string (7 to 2.76k characters)
- Computer Science: int64, 0 or 1
- Physics: int64, 0 or 1
- Mathematics: int64, 0 or 1
- Statistics: int64, 0 or 1
- Quantitative Biology: int64, 0 or 1
- Quantitative Finance: int64, 0 or 1

Each record below is rendered as its ID, title, abstract, and a "Labels:" line listing the categories whose value is 1.
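Assuming the records are available as a flat file (the filename `arxiv_topics.csv` below is hypothetical; the column names mirror the schema above), a minimal sketch for loading the dataset and inspecting its label distribution:

```python
import pandas as pd

# Hypothetical export of the dataset; columns mirror the schema above.
df = pd.read_csv("arxiv_topics.csv")

label_cols = ["Computer Science", "Physics", "Mathematics",
              "Statistics", "Quantitative Biology", "Quantitative Finance"]

# Per-category frequency, and how many labels each paper carries
# (the records below show this is multi-label data).
print(df[label_cols].mean().sort_values(ascending=False))
print(df[label_cols].sum(axis=1).value_counts())
```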
ID: 16,401
Spitzer Secondary Eclipses of Qatar-1b
Previous secondary eclipse observations of the hot Jupiter Qatar-1b in the Ks band suggest that it may have an unusually high day side temperature, indicative of minimal heat redistribution. There have also been indications that the orbit may be slightly eccentric, possibly forced by another planet in the system. We investigate the day side temperature and orbital eccentricity using secondary eclipse observations with Spitzer. We observed the secondary eclipse with Spitzer/IRAC in subarray mode, in both 3.6 and 4.5 micron wavelengths. We used pixel-level decorrelation to correct for Spitzer's intra-pixel sensitivity variations and thereby obtain accurate eclipse depths and central phases. Our 3.6 micron eclipse depth is 0.149 +/- 0.051% and the 4.5 micron depth is 0.273 +/- 0.049%. Fitting a blackbody planet to our data and two recent Ks band eclipse depths indicates a brightness temperature of 1506 +/- 71K. Comparison to model atmospheres for the planet indicates that its degree of longitudinal heat redistribution is intermediate between fully uniform and day side only. The day side temperature of the planet is unlikely to be as high (1885K) as indicated by the ground-based eclipses in the Ks band, unless the planet's emergent spectrum deviates strongly from model atmosphere predictions. The average central phase for our Spitzer eclipses is 0.4984 +/- 0.0017, yielding e cos(omega) = -0.0028 +/- 0.0027. Our results are consistent with a circular orbit, and we constrain e cos(omega) much more strongly than has been possible with previous observations.
Labels: Physics
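As a worked cross-check of the numbers quoted above, the standard first-order relation between the secondary-eclipse central phase $\phi$ and the eccentricity (our addition; the abstract does not spell out the formula used) gives

$$ e\cos\omega \approx \frac{\pi}{2}(\phi - 0.5) = \frac{\pi}{2}(0.4984 - 0.5) \approx -0.0025, $$

consistent within uncertainty with the quoted $-0.0028 \pm 0.0027$; propagating the phase error the same way, $(\pi/2)\times 0.0017 \approx 0.0027$, reproduces the quoted uncertainty.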
ID: 16,402
Dielectric response of Anderson and pseudogapped insulators
Using a combination of analytic and numerical methods, we study the polarizability of a (non-interacting) Anderson insulator in one, two, and three dimensions and demonstrate that, in a wide range of parameters, it scales proportionally to the square of the localization length, contrary to earlier claims based on the effective-medium approximation. We further analyze the effect of electron-electron interactions on the dielectric constant in quasi-1D, quasi-2D and 3D materials with large localization length, including both Coulomb repulsion and phonon-mediated attraction. The phonon-mediated attraction (in the pseudogapped state on the insulating side of the Superconductor-Insulator Transition) produces a correction to the dielectric constant, which may be detected from the linear response of the dielectric constant to an external magnetic field.
Labels: Physics

ID: 16,403
Epidemiological modeling of the 2005 French riots: a spreading wave and the role of contagion
As a large-scale instance of dramatic collective behaviour, the 2005 French riots started in a poor suburb of Paris, then spread throughout France, lasting about three weeks. Remarkably, although there were no displacements of rioters, the riot activity did travel. Access to daily national police data has allowed us to explore the dynamics of riot propagation. Here we show that an epidemic-like model, with just a few parameters and a single sociological variable characterizing neighbourhood deprivation, accounts quantitatively for the full spatio-temporal dynamics of the riots. This is the first time that such data-driven modelling involving contagion both within and between cities (through geographic proximity or media) at the scale of a country, and on a daily basis, has been performed. Moreover, we give a precise mathematical characterization of the expression "wave of riots", and provide a visualization of the propagation around Paris, exhibiting the wave in a way not described before. The remarkable agreement between model and data demonstrates that geographic proximity played a major role in the propagation, even though information was readily available everywhere through media. Finally, we argue that our approach gives a general framework for the modelling of the dynamics of spontaneous collective uprisings.
Labels: Computer Science, Physics

ID: 16,404
Option Pricing in Illiquid Markets with Jumps
The classical linear Black--Scholes model for pricing derivative securities is a popular model in the financial industry. It relies on several restrictive assumptions, such as completeness and frictionlessness of the market, as well as on the assumption that the underlying asset price dynamics follow a geometric Brownian motion. The main purpose of this paper is to generalize the classical Black--Scholes model for pricing derivative securities by taking into account feedback effects due to the influence of a large trader on underlying asset price dynamics that exhibit random jumps. The assumption that an investor can trade large amounts of assets without affecting the underlying asset price itself is usually not satisfied, especially in illiquid markets. We generalize the Frey--Stremme nonlinear option pricing model to the case where the underlying asset follows a Levy stochastic process with jumps. We derive and analyze a fully nonlinear parabolic partial integro-differential equation for the price of the option contract. We propose a semi-implicit numerical discretization scheme and perform various numerical experiments showing the influence of the large trader and of the jump intensity on the option price.
Labels: Quantitative Finance
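For reference, the classical linear Black--Scholes equation that this abstract generalizes (standard textbook form, not quoted from the paper) is

$$ \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0, $$

where $V(S,t)$ is the derivative price, $\sigma$ the volatility and $r$ the risk-free rate. In the Frey--Stremme-type feedback setting, the diffusion coefficient becomes a nonlinear function of the large trader's strategy, and the jumps contribute a nonlocal integral term, which is what makes the resulting pricing equation a fully nonlinear partial integro-differential equation.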
ID: 16,405
A Knowledge-Based Analysis of the Blockchain Protocol
At the heart of Bitcoin is a blockchain protocol, a protocol for achieving consensus on a public ledger that records bitcoin transactions. To the extent that a blockchain protocol is used for applications such as contract signing and making certain transactions (such as house sales) public, we need to understand what guarantees the protocol gives us in terms of agents' knowledge. Here, we provide a complete characterization of agents' knowledge when running a blockchain protocol, using a variant of common knowledge that takes into account the facts that agents can enter and leave the system, that it is not known which agents are in fact following the protocol (some agents may want to deviate if they can gain by doing so), and that the guarantees provided by blockchain protocols are probabilistic. We then consider some scenarios involving contracts and show that this level of knowledge suffices for some scenarios, but not others.
Labels: Computer Science

ID: 16,406
Convergence Analysis of Optimization Algorithms
The regret bound of an optimization algorithm is one of the basic criteria for evaluating its performance. By inspecting the differences between the regret bounds of traditional algorithms and adaptive ones, we provide a guide for choosing an optimizer with respect to the given data set and the loss function. For the analysis, we assume that the loss function is convex and its gradient is Lipschitz continuous.
Labels: Computer Science, Mathematics, Statistics

ID: 16,407
On the combinatorics of the 2-class classification problem
A set of points $X = X_B \cup X_R \subseteq \mathbb{R}^d$ is linearly separable if the convex hulls of $X_B$ and $X_R$ are disjoint, hence there exists a hyperplane separating $X_B$ from $X_R$. Such a hyperplane provides a method for classifying new points, according to which side of the hyperplane the new points lie. When such a linear separation is not possible, it may still be possible to partition $X_B$ and $X_R$ into prespecified numbers of groups, in such a way that every group from $X_B$ is linearly separable from every group from $X_R$. We may also discard some points as outliers, and seek to minimize the number of outliers necessary to find such a partition. Based on these ideas, Bertsimas and Shioda proposed the classification and regression by integer optimization (CRIO) method in 2007. In this work we explore the integer programming aspects of the classification part of CRIO, in particular theoretical properties of the associated formulation. We are able to find facet-inducing inequalities coming from the stable set polytope, hence showing that this classification problem has exploitable combinatorial properties.
Labels: Computer Science
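The linear-separability notion defined above can be tested with a small feasibility linear program; a minimal sketch (our illustration, not the paper's CRIO formulation):

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X_B, X_R):
    """Feasibility LP for a hyperplane (w, b) with w.x + b <= -1 on X_B and
    w.x + b >= +1 on X_R; for finite point sets this is feasible exactly
    when the two convex hulls are disjoint."""
    d = X_B.shape[1]
    # Stack all constraints in A_ub @ [w; b] <= b_ub form.
    A_ub = np.vstack([np.hstack([X_B, np.ones((len(X_B), 1))]),
                      -np.hstack([X_R, np.ones((len(X_R), 1))])])
    b_ub = -np.ones(len(X_B) + len(X_R))
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.success

X_B = np.array([[0.0, 0.0], [1.0, 0.0]])
X_R = np.array([[3.0, 3.0], [4.0, 3.0]])
print(linearly_separable(X_B, X_R))  # True: the hulls are disjoint
```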
ID: 16,408
Laboratory evidence of dynamo amplification of magnetic fields in a turbulent plasma
Magnetic fields are ubiquitous in the Universe. Extragalactic disks, halos and clusters have consistently been shown, via diffuse radio-synchrotron emission and Faraday rotation measurements, to exhibit magnetic field strengths ranging from a few nG to tens of $\mu$G. The energy density of these fields is typically comparable to the energy density of the fluid motions of the plasma in which they are embedded, making magnetic fields essential players in the dynamics of the luminous matter. The standard theoretical model for the origin of these strong magnetic fields is through the amplification of tiny seed fields via turbulent dynamo to the level consistent with current observations. Here we demonstrate, using laser-produced colliding plasma flows, that turbulence is indeed capable of rapidly amplifying seed fields to near equipartition with the turbulent fluid motions. These results support the notion that turbulent dynamo is a viable mechanism responsible for the observed present-day magnetization of the Universe.
Labels: Physics

ID: 16,409
Radiating Electron Interaction with Multiple Colliding Electromagnetic Waves: Random Walk Trajectories, Levy Flights, Limit Circles, and Attractors (Survey of the Structurally Determinate Patterns)
The multiple colliding laser pulse concept formulated in Ref. [1] is beneficial for achieving an extremely high amplitude of coherent electromagnetic field. Since the topology of the time-oscillating electric and magnetic fields of multiple colliding laser pulses is far from trivial, and since radiation friction effects are significant in the high-field limit, the dynamics of charged particles interacting with the multiple colliding laser pulses demonstrates remarkable features corresponding to random walk trajectories, limit circles, attractors, regular patterns and Levy flights. Under extremely high intensity conditions the nonlinear dissipation mechanism stabilizes the particle motion, resulting in the charged particle trajectory being located within narrow regions and in the occurrence of a new class of regular patterns made by the particle ensembles.
Labels: Physics

ID: 16,410
Towards End-to-end Text Spotting with Convolutional Recurrent Neural Networks
In this work, we jointly address the problem of text detection and recognition in natural scene images based on convolutional recurrent neural networks. We propose a unified network that simultaneously localizes and recognizes text with a single forward pass, avoiding intermediate processes like image cropping and feature re-calculation, word separation, or character grouping. In contrast to existing approaches that consider text detection and recognition as two distinct tasks and tackle them one by one, the proposed framework settles these two tasks concurrently. The whole framework can be trained end-to-end, requiring only images, the ground-truth bounding boxes and text labels. Through end-to-end training, the learned features can be more informative, which improves the overall performance. The convolutional features are calculated only once and shared by both detection and recognition, which saves processing time. Our proposed method has achieved competitive performance on several benchmark datasets.
Labels: Computer Science

ID: 16,411
Evidence for OH or H2O on the surface of 433 Eros and 1036 Ganymed
Water and hydroxyl, once thought to be found only in the primitive airless bodies that formed beyond roughly 2.5-3 AU, have recently been detected on the Moon and Vesta, which both have surfaces dominated by evolved, non-primitive compositions. In both these cases, the water/OH is thought to be exogenic, either brought in via impacts with comets or hydrated asteroids, or created via solar wind interactions with silicates in the regolith, or both. Such exogenic processes should also be occurring on other airless body surfaces. To test this hypothesis, we used the NASA Infrared Telescope Facility (IRTF) to measure reflectance spectra (2.0 to 4.1 {\mu}m) of two large near-Earth asteroids (NEAs) with compositions generally interpreted as anhydrous: 433 Eros and 1036 Ganymed. OH is detected on both of these bodies in the form of absorption features near 3 {\mu}m. The spectra contain a component of thermal emission at longer wavelengths, from which we estimate thermal inertias of 167 +/- 98 J m$^{-2}$ s$^{-1/2}$ K$^{-1}$ for Eros (consistent with previous estimates) and 214 +/- 80 J m$^{-2}$ s$^{-1/2}$ K$^{-1}$ for Ganymed, the latter being the first reported measurement of thermal inertia for this object. These observations demonstrate that processes responsible for water/OH creation on large airless bodies also act on much smaller bodies.
Labels: Physics

ID: 16,412
Jump Locations of Jump-Diffusion Processes with State-Dependent Rates
We propose a general framework for studying jump-diffusion systems driven by both Gaussian noise and a jump process with state-dependent intensity. Of particular natural interest are the jump locations: the system evaluated at the jump times. However, the state-dependence of the jump rate provides direct coupling between the diffusion and jump components, making disentangling the two to study individually difficult. We provide an iterative map formulation of the sequence of distributions of jump locations. Computation of these distributions allows for the extraction of the interjump time statistics. These quantities reveal a relationship between the long-time distribution of jump location and the stationary density of the full process. We provide a few examples to demonstrate the analytical and numerical tools stemming from the results proposed in the paper, including an application that shows a non-monotonic dependence on the strength of diffusion.
Labels: Physics, Mathematics

ID: 16,413
Dust evolution with active galactic nucleus feedback in elliptical galaxies
We have recently suggested that dust growth in the cold gas phase dominates the dust abundance in elliptical galaxies while dust is efficiently destroyed in the hot X-ray emitting plasma (hot gas). In order to understand the dust evolution in elliptical galaxies, we construct a simple model that includes dust growth in the cold gas and dust destruction in the hot gas. We also take into account the effect of mass exchange between these two gas components induced by active galactic nucleus (AGN) feedback. We survey reasonable ranges of the relevant parameters in the model and find that AGN feedback cycles actually produce a variety in cold gas mass and dust-to-gas ratio. By comparing with an observational sample of nearby elliptical galaxies, we find that, although the dust-to-gas ratio varies by an order of magnitude in our model, the entire range of the observed dust-to-gas ratios is difficult to reproduce under a single parameter set. Variation of the dust growth efficiency is the most probable solution to explain the large variety in dust-to-gas ratio of the observational sample. Therefore, dust growth can play a central role in creating the variation in dust-to-gas ratio through the AGN feedback cycle and through the variation in dust growth efficiency.
Labels: Physics

ID: 16,414
Generative-Discriminative Variational Model for Visual Recognition
The paradigm shift from shallow classifiers with hand-crafted features to end-to-end trainable deep learning models has shown significant improvements on supervised learning tasks. Despite the promising power of deep neural networks (DNN), how to alleviate overfitting during training has been a research topic of interest. In this paper, we present a Generative-Discriminative Variational Model (GDVM) for visual classification, in which we introduce a latent variable inferred from inputs for exhibiting generative abilities towards prediction. In other words, our GDVM casts the supervised learning task as a generative learning process, with data discrimination to be jointly exploited for improved classification. In our experiments, we consider the tasks of multi-class classification, multi-label classification, and zero-shot learning. We show that our GDVM performs favorably against the baselines or recent generative DNN models.
Labels: Computer Science

ID: 16,415
PCA-Initialized Deep Neural Networks Applied To Document Image Analysis
In this paper, we present a novel approach for initializing deep neural networks, namely by turning PCA into neural layers. Usually, the initialization of the weights of a deep neural network is done in one of the three following ways: 1) with random values, 2) layer-wise, usually as a Deep Belief Network or as an auto-encoder, and 3) re-use of layers from another network (transfer learning). With the first two approaches, typically many training epochs are needed before meaningful weights are learned, while the third requires a rather similar dataset for seeding the fine-tuning. In this paper, we describe how to turn a PCA into an auto-encoder, by generating an encoder layer from the PCA parameters and furthermore adding a decoding layer. We analyze the initialization technique on real documents. First, we show that a PCA-based initialization is quick and leads to a very stable initialization. Furthermore, for the task of layout analysis we investigate the effectiveness of PCA-based initialization and show that it outperforms state-of-the-art random weight initialization methods.
Labels: Computer Science, Statistics
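A minimal numpy sketch of the core construction, turning PCA into a linear encoder/decoder pair (an illustration of the general idea; the paper's exact layer construction may differ):

```python
import numpy as np

def pca_autoencoder_init(X, k):
    """Initialize a linear autoencoder from the top-k principal components
    of the (N, d) data matrix X."""
    mu = X.mean(axis=0)
    # Principal directions from the SVD of the centered data.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W_enc = Vt[:k]           # encoder weights: rows are the top-k components
    b_enc = -W_enc @ mu      # encoder bias: centering folded into the layer
    W_dec = W_enc.T          # decoder weights: transpose reconstructs
    b_dec = mu               # decoder bias: add the mean back
    return (W_enc, b_enc), (W_dec, b_dec)

# Usage: code = X @ W_enc.T + b_enc; X_hat = code @ W_dec.T + b_dec
rng = np.random.default_rng(0)
(W_enc, b_enc), (W_dec, b_dec) = pca_autoencoder_init(
    rng.standard_normal((100, 8)), k=3)
```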
ID: 16,416
On a frame theoretic measure of quality of LTI systems
It is of practical significance to define the notion of a measure of quality of a control system, i.e., a quantitative extension of the classical notion of controllability. In this article we demonstrate that the three standard measures of quality involving the trace, minimum eigenvalue, and determinant of the controllability Gramian achieve their optimum values when the columns of the controllability matrix form a tight frame. Motivated by this, and in view of some recent developments in frame-theoretic signal processing, we provide a measure of quality for LTI systems based on a measure of tightness of the columns of the reachability matrix.
Labels: Computer Science, Mathematics

ID: 16,417
Fuzzy Clustering Data Given on the Ordinal Scale Based on Membership and Likelihood Functions Sharing
The task of clustering data given on the ordinal scale under conditions of overlapping clusters is considered. We propose an approach based on the sharing of membership and likelihood functions. A number of experiments demonstrate the effectiveness of the proposed method, which is robust to outliers owing to the way values are ordered while constructing the membership functions.
Labels: Computer Science

ID: 16,418
The ALICE O2 common driver for the C-RORC and CRU read-out cards
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the strongly interacting state of matter realized in relativistic heavy-ion collisions at the CERN Large Hadron Collider (LHC). A major upgrade of the experiment is planned during the 2019-2020 long shutdown. In order to cope with a data rate 100 times higher than during LHC Run 1 and with the continuous read-out of the Time Projection Chamber (TPC), it is necessary to upgrade the Online and Offline Computing to a new common system called O2. The O2 read-out chain will use commodity x86 Linux servers equipped with custom PCIe FPGA-based read-out cards. This paper discusses the driver architecture for the cards that will be used in O2: the PCIe v2 x8, Xilinx Virtex 6 based C-RORC (Common Readout Receiver Card) and the PCIe v3 x16, Intel Arria 10 based CRU (Common Readout Unit). Access to the PCIe cards is provided via three layers of software. Firstly, the low-level PCIe (PCI Express) layer is responsible for the userspace interface for low-level operations such as memory mapping the PCIe BAR (Base Address Registers) and creating scatter-gather lists; it is provided by the PDA (Portable Driver Architecture) library developed by the Frankfurt Institute for Advanced Studies (FIAS). Above that sits our userspace driver, which implements synchronization, controls the read-out card -- e.g. resetting and configuring the card, providing it with bus addresses to transfer data to, and checking for data arrival -- and presents a uniform, high-level C++ interface that abstracts over the differences between the C-RORC and CRU. This interface -- of which direct usage is principally intended for high-performance read-out processes -- allows users to configure and use the various aspects of the read-out cards, such as configuration, DMA transfers and commands to the front-end. [...]
Labels: Computer Science, Physics

ID: 16,419
Adapting control policies from simulation to reality using a pairwise loss
This paper proposes an approach to domain transfer based on a pairwise loss function that helps transfer control policies learned in simulation onto a real robot. We explore the idea in the context of a 'category level' manipulation task where a control policy is learned that enables a robot to perform a mating task involving novel objects. We explore the case where depth images are used as the main form of sensor input. Our experimental results demonstrate that the proposed method consistently outperforms baseline methods that train only in simulation or that combine real and simulated data in a naive way.
Labels: Computer Science

ID: 16,420
Evidence against a supervoid causing the CMB Cold Spot
We report the results of the 2dF-VST ATLAS Cold Spot galaxy redshift survey (2CSz) based on imaging from VST ATLAS and spectroscopy from 2dF AAOmega over the core of the CMB Cold Spot. We sparsely surveyed the inner 5$^{\circ}$ radius of the Cold Spot to a limit of $i_{AB} \le 19.2$, sampling $\sim7000$ galaxies at $z<0.4$. We have found voids at $z=$ 0.14, 0.26 and 0.30 but they are interspersed with small over-densities and the scale of these voids is insufficient to explain the Cold Spot through the $\Lambda$CDM ISW effect. Combining with previous data out to $z\sim1$, we conclude that the CMB Cold Spot could not have been imprinted by a void confined to the inner core of the Cold Spot. Additionally we find that our 'control' field GAMA G23 shows a similarity in its galaxy redshift distribution to the Cold Spot. Since the GAMA G23 line-of-sight shows no evidence of a CMB temperature decrement we conclude that the Cold Spot may have a primordial origin rather than being due to line-of-sight effects.
Labels: Physics

ID: 16,421
How to Produce an Arbitrarily Small Tensor to Scalar Ratio
We construct a toy model which demonstrates that large field single scalar inflation can produce an arbitrarily small tensor to scalar ratio in the window of e-foldings recoverable from CMB experiments. This is done by generalizing the $\alpha$-attractor models to allow the potential to approach a constant as rapidly as we desire for super-Planckian field values. This implies that a non-detection of r alone can never entirely rule out the theory of large field inflation.
Labels: Physics
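For context, the simplest T-model $\alpha$-attractor potential (standard form from the $\alpha$-attractor literature, not quoted from this abstract) already flattens to a constant at large field values,

$$ V(\phi) = V_0 \tanh^{2}\!\left(\frac{\phi}{\sqrt{6\alpha}\,M_{\mathrm{Pl}}}\right) \rightarrow V_0 \quad \text{for } \phi \gg \sqrt{6\alpha}\,M_{\mathrm{Pl}}, $$

and the construction described here generalizes how rapidly that constant is approached for super-Planckian $\phi$, which is what drives $r$ arbitrarily small.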
ID: 16,422
Meta Learning Shared Hierarchies
We develop a metalearning approach for learning hierarchically structured policies, improving sample efficiency on unseen tasks through the use of shared primitives---policies that are executed for large numbers of timesteps. Specifically, a set of primitives are shared within a distribution of tasks, and are switched between by task-specific policies. We provide a concrete metric for measuring the strength of such hierarchies, leading to an optimization problem for quickly reaching high reward on unseen tasks. We then present an algorithm to solve this problem end-to-end through the use of any off-the-shelf reinforcement learning method, by repeatedly sampling new tasks and resetting task-specific policies. We successfully discover meaningful motor primitives for the directional movement of four-legged robots, solely by interacting with distributions of mazes. We also demonstrate the transferability of primitives to solve long-timescale sparse-reward obstacle courses, and we enable 3D humanoid robots to robustly walk and crawl with the same policy.
Labels: Computer Science

ID: 16,423
The bright-star masks for the HSC-SSP survey
We present the procedure to build and validate the bright-star masks for the Hyper-Suprime-Cam Strategic Subaru Proposal (HSC-SSP) survey. To identify and mask the saturated stars in the full HSC-SSP footprint, we rely on the Gaia and Tycho-2 star catalogues. We first assemble a pure star catalogue down to $G_{\rm Gaia} < 18$ after removing $\sim1.5\%$ of sources that appear extended in the Sloan Digital Sky Survey (SDSS). We perform visual inspection on the early data from the S16A internal release of HSC-SSP, finding that our star catalogue is $99.2\%$ pure down to $G_{\rm Gaia} < 18$. Second, we build the mask regions in an automated way using stacked detected source measurements around bright stars binned per $G_{\rm Gaia}$ magnitude. Finally, we validate those masks from visual inspection and comparison with the literature of galaxy number counts and angular two-point correlation functions. This version (Arcturus) supersedes the previous version (Sirius) used in the S16A internal and DR1 public releases. We publicly release the full masks and tools to flag objects in the entire footprint of the planned HSC-SSP observations at this address: this ftp URL.
Labels: Physics

ID: 16,424
Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step
Generative adversarial networks (GANs) are a family of generative models that do not minimize a single training criterion. Unlike other generative models, the data distribution is learned via a game between a generator (the generative model) and a discriminator (a teacher providing training signal) that each minimize their own cost. GANs are designed to reach a Nash equilibrium at which each player cannot reduce their cost without changing the other players' parameters. One useful approach for the theory of GANs is to show that a divergence between the training distribution and the model distribution obtains its minimum value at equilibrium. Several recent research directions have been motivated by the idea that this divergence is the primary guide for the learning process and that every step of learning should decrease the divergence. We show that this view is overly restrictive. During GAN training, the discriminator provides learning signal in situations where the gradients of the divergences between distributions would not be useful. We provide empirical counterexamples to the view of GAN training as divergence minimization. Specifically, we demonstrate that GANs are able to learn distributions in situations where the divergence minimization point of view predicts they would fail. We also show that gradient penalties motivated from the divergence minimization perspective are equally helpful when applied in other contexts in which the divergence minimization perspective does not predict they would be helpful. This contributes to a growing body of evidence that GAN training may be more usefully viewed as approaching Nash equilibria via trajectories that do not necessarily minimize a specific divergence at each step.
Labels: Computer Science, Statistics

ID: 16,425
The Tensor Memory Hypothesis
We discuss memory models which are based on tensor decompositions using latent representations of entities and events. We show how episodic memory and semantic memory can be realized and discuss how new memory traces can be generated from sensory input: Existing memories are the basis for perception and new memories are generated via perception. We relate our mathematical approach to the hippocampal memory indexing theory. We describe the first detailed mathematical models for the complete processing pipeline from sensory input and its semantic decoding, i.e., perception, to the formation of episodic and semantic memories and their declarative semantic decodings. Our main hypothesis is that perception includes an active semantic decoding process, which relies on latent representations of entities and predicates, and that episodic and semantic memories depend on the same decoding process. We contribute to the debate between the leading memory consolidation theories, i.e., the standard consolidation theory (SCT) and the multiple trace theory (MTT). The latter is closely related to the complementary learning systems (CLS) framework. In particular, we show explicitly how episodic memory can teach the neocortex to form a semantic memory, which is a core issue in MTT and CLS.
Labels: Computer Science, Statistics

ID: 16,426
Data-driven Approach to Measuring the Level of Press Freedom Using Media Attention Diversity from Unfiltered News
Published by Reporters Without Borders every year, the Press Freedom Index (PFI) reflects the fear and tension in the newsroom pushed by the government and private sectors. While the PFI is invaluable in monitoring media environments worldwide, the current survey-based method has inherent limitations, in terms of cost and time, on how frequently it can be updated. In this work, we introduce an alternative way to measure the level of press freedom using media attention diversity compiled from Unfiltered News.
Labels: Computer Science

ID: 16,427
Advanced Satellite-based Frequency Transfer at the 10^{-16} Level
Advanced satellite-based frequency transfers by TWCP and IPPP have been performed between NICT and KRISS. We confirm that the disagreement between them is less than 1x10^{-16} at an averaging time of several days. Additionally, an intercontinental frequency ratio measurement of Sr and Yb optical lattice clocks was directly performed by TWCP. We achieved an uncertainty at the mid-10^{-16} level after a total measurement time of 12 hours. The frequency ratio was consistent with the recently reported values within the uncertainty.
Labels: Physics

ID: 16,428
Are there needles in a moving haystack? Adaptive sensing for detection of dynamically evolving signals
In this paper we investigate the problem of detecting dynamically evolving signals. We model the signal as an $n$ dimensional vector that is either zero or has $s$ non-zero components. At each time step $t\in \mathbb{N}$ the non-zero components change their location independently with probability $p$. The statistical problem is to decide whether the signal is a zero vector or in fact has non-zero components. This decision is based on $m$ noisy observations of individual signal components collected at times $t=1,\ldots,m$. We consider two different sensing paradigms, namely adaptive and non-adaptive sensing. For non-adaptive sensing the choice of components to measure has to be decided before the data collection process starts, while for adaptive sensing one can adjust the sensing process based on observations collected earlier. We characterize the difficulty of this detection problem in both sensing paradigms in terms of the aforementioned parameters, with particular interest in the speed of change of the active components. In addition we provide an adaptive sensing algorithm for this problem and contrast its performance to that of non-adaptive detection algorithms.
Labels: Mathematics, Statistics
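A short simulation sketch of the signal model as we read it from the abstract (our illustration with hypothetical parameters; the paper's exact observation protocol may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_observations(n, s, p, m, mu=1.0, sigma=1.0):
    """Dynamic sparse signal: s active components, each relocating
    independently with probability p per step; one noisy measurement of a
    uniformly random (i.e. non-adaptive) component per time step.
    Simplification: relocations may collide, which shrinks the support."""
    support = set(rng.choice(n, size=s, replace=False).tolist())
    obs = []
    for _ in range(m):
        support = {int(rng.integers(n)) if rng.random() < p else j
                   for j in support}
        k = int(rng.integers(n))  # non-adaptive probe location
        obs.append((k, (mu if k in support else 0.0)
                    + sigma * rng.standard_normal()))
    return obs

print(simulate_observations(n=100, s=5, p=0.1, m=10))
```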
ID: 16,429
Image-based Localization using Hourglass Networks
In this paper, we propose an encoder-decoder convolutional neural network (CNN) architecture for estimating camera pose (orientation and location) from a single RGB image. The architecture has an hourglass shape consisting of a chain of convolution and up-convolution layers followed by a regression part. The up-convolution layers are introduced to preserve the fine-grained information of the input image. Following common practice, we train our model in an end-to-end manner utilizing transfer learning from large scale classification data. The experiments demonstrate the performance of the approach on data exhibiting different lighting conditions, reflections, and motion blur. The results indicate a clear improvement over the previous state-of-the-art, even when compared to methods that utilize a sequence of test frames instead of a single frame.
Labels: Computer Science

ID: 16,430
Suppression of Decoherence of a Spin-Boson System by Time-Periodic Control
We consider a finite-dimensional quantum system coupled to the bosonic radiation field and subject to a time-periodic control operator. Assuming the validity of a certain dynamic decoupling condition we approximate the system's time evolution with respect to the non-interacting dynamics. For sufficiently small coupling constants $g$ and control periods $T$ we show that a certain deviation of coupled and uncoupled propagator may be estimated by $\mathcal{O}(gt \, T)$. Our approach relies on the concept of Kato stability and general theory on non-autonomous linear evolution equations.
Labels: Mathematics

ID: 16,431
Energy saving for building heating via a simple and efficient model-free control design: First steps with computer simulations
The model-based control of building heating systems for energy saving encounters severe physical, mathematical and calibration difficulties in the numerous attempts that have been published until now. This topic is addressed here via a new model-free control setting, where the need for any mathematical description disappears. Several convincing computer simulations are presented. Comparisons with classic PI controllers and flatness-based predictive control are provided.
Labels: Computer Science

ID: 16,432
Trace Properties from Separation Logic Specifications
We propose a formal approach for relating abstract separation logic library specifications with the trace properties they enforce on interactions between a client and a library. Separation logic with abstract predicates enforces a resource discipline that constrains when and how calls may be made between a client and a library. Intuitively, this can enforce a protocol on the interaction trace. This intuition is broadly used in the separation logic community but has not previously been formalised. We provide just such a formalisation. Our approach is based on using wrappers which instrument library code to induce execution traces for the properties under examination. By considering a separation logic extended with trace resources, we prove that when a library satisfies its separation logic specification then its wrapped version satisfies the same specification and, moreover, maintains the trace properties as an invariant. Consequently, any client and library implementation that are correct with respect to the separation logic specification will satisfy the trace properties.
Labels: Computer Science

ID: 16,433
Estimation of Local Degree Distributions via Local Weighted Averaging and Monte Carlo Cross-Validation
Owing to their capability of summarising interactions between elements of a system, networks have become a common type of data in many fields. As networks can be inhomogeneous, in that different regions of the network may exhibit different topologies, an important topic concerns their local properties. This paper focuses on the estimation of the local degree distribution of a vertex in an inhomogeneous network. The contributions are twofold: we propose an estimator based on local weighted averaging, and we set up a Monte Carlo cross-validation procedure to pick the parameters of this estimator. Under a specific modelling assumption we derive an oracle inequality that shows how the model parameters affect the precision of the estimator. We illustrate our method by several numerical experiments, on both real and synthetic data, showing in particular that the approach considerably improves upon the natural, empirical estimator.
Labels: Statistics

ID: 16,434
The ALMA Phasing System: A Beamforming Capability for Ultra-High-Resolution Science at (Sub)Millimeter Wavelengths
The Atacama Millimeter/submillimeter Array (ALMA) Phasing Project (APP) has developed and deployed the hardware and software necessary to coherently sum the signals of individual ALMA antennas and record the aggregate sum in Very Long Baseline Interferometry (VLBI) Data Exchange Format. These beamforming capabilities allow the ALMA array to collectively function as the equivalent of a single large aperture and participate in global VLBI arrays. The inclusion of phased ALMA in current VLBI networks operating at (sub)millimeter wavelengths provides an order of magnitude improvement in sensitivity, as well as enhancements in u-v coverage and north-south angular resolution. The availability of a phased ALMA enables a wide range of new ultra-high angular resolution science applications, including the resolution of supermassive black holes on event horizon scales and studies of the launch and collimation of astrophysical jets. It also provides a high-sensitivity aperture that may be used for investigations such as pulsar searches at high frequencies. This paper provides an overview of the ALMA Phasing System design, implementation, and performance characteristics.
Labels: Physics

ID: 16,435
Signatures of two-step impurity mediated vortex lattice melting in Bose-Einstein Condensates
We simulate a rotating 2D BEC to study the melting of a vortex lattice in the presence of random impurities. Impurities are introduced either through a protocol in which the vortex lattice is produced in an impurity potential, or by first creating the vortex lattice in the absence of random pinning and then cranking up the (co-rotating) impurity potential. We find that, for a fixed strength, pinning of vortices at randomly distributed impurities leads to new states of the vortex lattice. The vortex lattice is found to follow a two-step melting via loss of positional and orientational order. Comparisons between the states obtained in the two protocols also show that the vortex lattice states are metastable when impurities are introduced after the formation of an ordered vortex lattice. We also show the existence of metastable states which depend on the history of how the vortex lattice is created.
Labels: Physics

ID: 16,436
Cycle Consistent Adversarial Denoising Network for Multiphase Coronary CT Angiography
In coronary CT angiography, a series of CT images is taken at different levels of radiation dose during the examination. Although this reduces the total radiation dose, the image quality during the low-dose phases is significantly degraded. To address this problem, here we propose a novel semi-supervised learning technique that can remove the noise of the CT images obtained in the low-dose phases by learning from the CT images in the routine-dose phases. Although a supervised learning approach is not possible due to the differences in the underlying heart structure in the two phases, the images in the two phases are closely related, so we propose a cycle-consistent adversarial denoising network to learn the non-degenerate mapping between the low- and high-dose cardiac phases. Experimental results showed that the proposed method effectively reduces the noise in the low-dose CT images while preserving detailed texture and edge information. Moreover, thanks to the cyclic consistency and identity loss, the proposed network does not create any artificial features that are not present in the input images. Visual grading and quality evaluation also confirm that the proposed method provides significant improvement in diagnostic quality.
Labels: Statistics
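The cyclic consistency and identity terms mentioned at the end of the abstract have the generic CycleGAN form; a PyTorch-style sketch (generic form with hypothetical generator names; the paper's exact losses and weights may differ):

```python
import torch

def cycle_and_identity_losses(G_lh, G_hl, x_low, x_high):
    """G_lh maps low-dose -> routine-dose images, G_hl the reverse.
    Cycle term: translating there and back should return the input.
    Identity term: a generator applied to its own target domain is a no-op."""
    cycle = (torch.mean(torch.abs(G_hl(G_lh(x_low)) - x_low)) +
             torch.mean(torch.abs(G_lh(G_hl(x_high)) - x_high)))
    identity = (torch.mean(torch.abs(G_lh(x_high) - x_high)) +
                torch.mean(torch.abs(G_hl(x_low) - x_low)))
    return cycle, identity

# Placeholder generators (identity maps) give zero losses, as expected.
x = torch.randn(2, 1, 8, 8)
print(cycle_and_identity_losses(lambda t: t, lambda t: t, x, x))
```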
ID: 16,437
The multidimensional truncated Moment Problem: Carathéodory Numbers
Let $\mathcal{A}$ be a finite-dimensional subspace of $C(\mathcal{X};\mathbb{R})$, where $\mathcal{X}$ is a locally compact Hausdorff space, and $\mathsf{A}=\{f_1,\dots,f_m\}$ a basis of $\mathcal{A}$. A sequence $s=(s_j)_{j=1}^m$ is called a moment sequence if $s_j=\int f_j(x) \, d\mu(x)$, $j=1,\dots,m$, for some positive Radon measure $\mu$ on $\mathcal{X}$. Each moment sequence $s$ has a finitely atomic representing measure $\mu$. The smallest possible number of atoms is called the Carathéodory number $\mathcal{C}_{\mathsf{A}}(s)$. The largest number $\mathcal{C}_{\mathsf{A}}(s)$ among all moment sequences $s$ is the Carathéodory number $\mathcal{C}_{\mathsf{A}}$. In this paper the Carathéodory numbers $\mathcal{C}_{\mathsf{A}}(s)$ and $\mathcal{C}_{\mathsf{A}}$ are studied. In the case of differentiable functions methods from differential geometry are used. The main emphasis is on real polynomials. For a large class of spaces of polynomials in one variable the number $\mathcal{C}_{\mathsf{A}}$ is determined. In the multivariate case we obtain some lower bounds and we use results on zeros of positive polynomials to derive upper bounds for the Carathéodory numbers.
Labels: Mathematics

ID: 16,438
Inspiration, Captivation, and Misdirection: Emergent Properties in Networks of Online Navigation
The World Wide Web (WWW) has fundamentally changed the ways billions of people are able to access information. Thus, understanding how people seek information online is an important issue of study. Wikipedia is a hugely important part of information provision on the web, with hundreds of millions of users browsing and contributing to its network of knowledge. The study of navigational behaviour on Wikipedia, due to the site's popularity and breadth of content, can reveal more general information seeking patterns that may be applied beyond Wikipedia and the Web. Our work addresses the relative shortcomings of existing literature in relating how information structure influences patterns of navigation online. We study aggregated clickstream data for articles on the English Wikipedia in the form of a weighted, directed navigational network. We introduce two parameters that describe how articles act to source and spread traffic through the network, based on their in/out strength and entropy. From these, we construct a navigational phase space where different article types occupy different, distinct regions, indicating how the structure of information online has differential effects on patterns of navigation. Finally, we go on to suggest applications for this analysis in identifying and correcting deficiencies in the Wikipedia page network that may also be adapted to more general information networks.
Labels: Computer Science
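One plausible, computable reading of the two per-article parameters built from in/out strength and entropy (the paper's exact definitions may differ):

```python
import math
import networkx as nx

def strength_and_entropy(G):
    """For each node: total outgoing clickstream weight, and the entropy of
    the distribution of where that traffic goes (0 for sinks)."""
    out = {}
    for u in G.nodes:
        weights = [d["weight"] for _, _, d in G.out_edges(u, data=True)]
        s = sum(weights)
        ent = -sum((w / s) * math.log(w / s) for w in weights) if s else 0.0
        out[u] = (s, ent)
    return out

G = nx.DiGraph()
G.add_weighted_edges_from([("A", "B", 90.0), ("A", "C", 10.0), ("B", "C", 5.0)])
print(strength_and_entropy(G))  # "A" spreads traffic; "C" is a pure sink
```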
ID: 16,439
Multiphase Aluminum A356 Foam Formation Process Simulation Using Lattice Boltzmann Method
The Shan-Chen model is a numerical scheme for simulating multiphase fluid flows using the Lattice Boltzmann approach. The original Shan-Chen model suffers from an inability to accurately predict the behavior of air bubbles interacting in a non-aqueous fluid. In the present study, we extend the Shan-Chen model to take the effect of the attraction-repulsion barriers among bubbles into account. The proposed model corrects the interaction and coalescence criterion of the original Shan-Chen scheme in order to provide a more accurate simulation of bubble morphology in a metal foam. The model is based on the formation of a thin film (narrow channel) between merging bubbles during growth. Rupturing of the film occurs when an oscillation in velocity and pressure arises inside the channel, followed by merging of the bubbles. Comparison of numerical results obtained from the proposed model with metallography images of aluminum A356 demonstrated good consistency in mean bubble size and bubble distribution.
Labels: Computer Science, Physics

ID: 16,440
Deformations of coisotropic submanifolds in Jacobi manifolds
In this thesis, we study the deformation problem of coisotropic submanifolds in Jacobi manifolds. In particular we attach two algebraic invariants to any coisotropic submanifold $S$ in a Jacobi manifold, namely the $L_\infty[1]$-algebra and the BFV-complex of $S$. Our construction generalizes and unifies analogous constructions in symplectic, Poisson, and locally conformal symplectic geometry. As a new special case we also attach an $L_\infty[1]$-algebra and a BFV-complex to any coisotropic submanifold in a contact manifold. The $L_\infty[1]$-algebra of $S$ controls the formal coisotropic deformation problem of $S$, even under Hamiltonian equivalence. The BFV-complex of $S$ controls the non-formal coisotropic deformation problem of $S$, even under both Hamiltonian and Jacobi equivalence. In view of these results, we exhibit, in the contact setting, two examples of coisotropic submanifolds whose coisotropic deformation problem is obstructed.
Labels: Mathematics

ID: 16,441
Discriminate-and-Rectify Encoders: Learning from Image Transformation Sets
The complexity of a learning task is increased by transformations in the input space that preserve class identity. Visual object recognition for example is affected by changes in viewpoint, scale, illumination or planar transformations. While drastically altering the visual appearance, these changes are orthogonal to recognition and should not be reflected in the representation or feature encoding used for learning. We introduce a framework for weakly supervised learning of image embeddings that are robust to transformations and selective to the class distribution, using sets of transforming examples (orbit sets), deep parametrizations and a novel orbit-based loss. The proposed loss combines a discriminative, contrastive part for orbits with a reconstruction error that learns to rectify orbit transformations. The learned embeddings are evaluated in distance metric-based tasks, such as one-shot classification under geometric transformations, as well as face verification and retrieval under more realistic visual variability. Our results suggest that orbit sets, suitably computed or observed, can be used for efficient, weakly-supervised learning of semantically relevant image embeddings.
Labels: Computer Science, Statistics

ID: 16,442
Covering compact metric spaces greedily
A general greedy approach to construct coverings of compact metric spaces by metric balls is given and analyzed. The analysis is a continuous version of Chvatal's analysis of the greedy algorithm for the weighted set cover problem. The approach is demonstrated in an exemplary manner to construct efficient coverings of the n-dimensional sphere and n-dimensional Euclidean space to give short and transparent proofs of several best known bounds obtained from deterministic constructions in the literature on sphere coverings.
Labels: Mathematics
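A sketch of the greedy construction on a finite point set, mirroring the set-cover greedy whose analysis the abstract extends to the continuous setting (our illustration, unweighted case):

```python
import numpy as np

def greedy_cover(points, radius):
    """Repeatedly pick the center whose ball covers the most still-uncovered
    points, until every point lies in some ball."""
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    uncovered = set(range(len(points)))
    centers = []
    while uncovered:
        # Every uncovered point covers at least itself, so progress is made.
        gain, best = max((sum(1 for j in uncovered if dist[i, j] <= radius), i)
                         for i in range(len(points)))
        centers.append(best)
        uncovered -= {j for j in uncovered if dist[best, j] <= radius}
    return centers

pts = np.random.default_rng(1).standard_normal((200, 3))
print(len(greedy_cover(pts, radius=1.0)), "balls of radius 1 suffice")
```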
ID: 16,443
Gauge covariances and nonlinear optical responses
The formalism of the reduced density matrix is pursued in both the length and velocity gauges of the perturbation to the crystal Hamiltonian. The covariant derivative is introduced as a convenient representation of the position operator. This allows us to write compact expressions for the reduced density matrix in any order of the perturbation, which simplifies the calculation of nonlinear optical responses; as an example, we compute the first and third order contributions of monolayer graphene. Expressions obtained in both gauges share the same formal structure, allowing a comparison of the effects of truncation to a finite set of bands. This truncation breaks the equivalence between the two approaches: its proper implementation can be done directly in the expressions derived in the length gauge, but requires a revision of the equations of motion of the reduced density matrix in the velocity gauge.
Labels: Physics

ID: 16,444
Quasi-Static Internal Magnetic Field Detected in the Pseudogap Phase of Bi$_{2+x}$Sr$_{2-x}$CaCu$_2$O$_{8+\delta}$ by $\mu$SR
We report muon spin relaxation ($\mu$SR) measurements of optimally-doped and overdoped Bi$_{2+x}$Sr$_{2-x}$CaCu$_2$O$_{8+\delta}$ (Bi2212) single crystals that reveal the presence of a weak temperature-dependent quasi-static internal magnetic field of electronic origin in the superconducting (SC) and pseudogap (PG) phases. In both samples the internal magnetic field persists up to 160~K, but muon diffusion prevents following the evolution of the field to higher temperatures. We consider the evidence from our measurements in support of PG order parameter candidates, namely, electronic loop currents and magnetoelectric quadrupoles.
Labels: Physics

ID: 16,445
Auto-Encoding Total Correlation Explanation
Advances in unsupervised learning enable reconstruction and generation of samples from complex distributions, but this success is marred by the inscrutability of the representations learned. We propose an information-theoretic approach to characterizing disentanglement and dependence in representation learning using multivariate mutual information, also called total correlation. The principle of total Cor-relation Ex-planation (CorEx) has motivated successful unsupervised learning applications across a variety of domains, but under some restrictive assumptions. Here we relax those restrictions by introducing a flexible variational lower bound to CorEx. Surprisingly, we find that this lower bound is equivalent to the one in variational autoencoders (VAE) under certain conditions. This information-theoretic view of VAE deepens our understanding of hierarchical VAE and motivates a new algorithm, AnchorVAE, that makes latent codes more interpretable through information maximization and enables generation of richer and more realistic samples.
Labels: Statistics

ID: 16,446
A Characterisation of Open Bisimilarity using an Intuitionistic Modal Logic
Open bisimilarity is the original notion of bisimilarity for the $\pi$-calculus that is a congruence. In open bisimilarity, free names in processes are treated as variables that may be instantiated lazily, in contrast to early and late bisimilarity, where free names are constants. We build on the established line of work, due to Milner, Parrow, and Walker, on classical modal logics characterising early and late bisimilarity for the $\pi$-calculus. The important insight is that, to characterise open bisimilarity, we move to the setting of intuitionistic modal logics. The intuitionistic modal logic introduced, called OM, is such that modalities are closed under (respectful) substitutions, inducing a property known as intuitionistic hereditary. Intuitionistic hereditary reflects the lazy instantiation of names in open bisimilarity. The soundness proof for open bisimilarity with respect to the modal logic is mechanised in Abella. The constructive content of the completeness proof provides an algorithm for generating distinguishing formulae, where such formulae are useful as certificates explaining why two processes are not open bisimilar. We draw attention to the fact that open bisimilarity is not the only notion of bisimilarity that is a congruence: for name-passing calculi there is a classical/intuitionistic spectrum of bisimilarities.
Labels: Computer Science

ID: 16,447
Using data science as a community advocacy tool to promote equity in urban renewal programs: An analysis of Atlanta's Anti-Displacement Tax Fund
Cities across the United States are undergoing great transformation and urban growth. Data and data analysis have become an essential element of urban planning, as cities use data to plan land use and development. One great challenge is to use the tools of data science to promote equity along with growth. The city of Atlanta is an example site of large-scale urban renewal that aims to engage in development without displacement. On the Westside of downtown Atlanta, the construction of the new Mercedes-Benz Stadium and the conversion of an underutilized rail line into a multi-use trail may result in increased property values. In response to community residents' concerns and a commitment to development without displacement, the city and philanthropic partners announced an Anti-Displacement Tax Fund to subsidize future property tax increases of owner occupants for the next twenty years. To achieve greater transparency, accountability, and impact, residents expressed a desire for a tool that would help them determine eligibility and quantify this commitment. In support of this goal, we use machine learning techniques to analyze historical tax assessments and predict future tax assessments. We then apply eligibility estimates to our predictions to estimate the total cost for the first seven years of the program. These forecasts are also incorporated into an interactive tool for community residents to determine their eligibility for the fund and the expected increase in their home value over the next seven years.
Labels: Computer Science

ID: 16,448
General Bayesian inference schemes in infinite mixture models
Bayesian statistical models allow us to formalise our knowledge about the world and reason about our uncertainty, but there is a need for better procedures to accurately encode its complexity. One way to do so is through compositional models, which are formed by combining blocks consisting of simpler models. One can increase the complexity of the compositional model either by stacking more blocks or by using a not-so-simple model as a building block. This thesis is an example of the latter. A first aim is to expand the choice of Bayesian nonparametric (BNP) blocks for constructing tractable compositional models. So far, most of the models that have a Bayesian nonparametric component use a Dirichlet Process or a Pitman-Yor process because of the availability of tractable and compact representations. This thesis shows how to overcome certain intractabilities in order to obtain analogous compact representations for the class of Poisson-Kingman priors, which includes the Dirichlet and Pitman-Yor processes. A major impediment to the widespread use of Bayesian nonparametric building blocks is that inference is often costly, intractable or difficult to carry out. This is an active research area since dealing with the model's infinite dimensional component forbids the direct use of standard simulation-based methods. The main contribution of this thesis is a variety of inference schemes that tackle this problem: Markov chain Monte Carlo and Sequential Monte Carlo methods, which are exact inference schemes since they target the true posterior. The contributions of this thesis, in a larger context, provide general purpose exact inference schemes in the flavour of probabilistic programming: the user is able to choose from a variety of models, focusing only on the modelling part.
Labels: Statistics

ID: 16,449
Glow: Generative Flow with Invertible 1x1 Convolutions
Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1x1 convolution. Using our method we demonstrate a significant improvement in log-likelihood on standard benchmarks. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images. The code for our model is available at this https URL
Labels: Statistics
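A minimal numpy sketch of the invertible 1x1 convolution that gives Glow its name: a single shared $C \times C$ matrix applied at every spatial position, contributing $h \cdot w \cdot \log|\det W|$ to the log-likelihood (an illustration of the published idea, not the authors' code):

```python
import numpy as np

def invertible_1x1_conv(x, W):
    """Apply the same CxC matrix W to the channel vector at each position of
    an (H, W, C) activation; return the output and the log-det term."""
    h, w, _ = x.shape
    z = x @ W.T
    logdet = h * w * np.linalg.slogdet(W)[1]
    return z, logdet

def inverse(z, W):
    return z @ np.linalg.inv(W).T  # exact inverse: undo the channel mixing

rng = np.random.default_rng(0)
W = np.linalg.qr(rng.standard_normal((4, 4)))[0]  # random-rotation init, as in Glow
x = rng.standard_normal((8, 8, 4))
z, logdet = invertible_1x1_conv(x, W)
assert np.allclose(inverse(z, W), x)  # invertibility check
```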
ID: 16,450
Pohozaev identity for the fractional $p-$Laplacian on $\mathbb{R}^N$
By virtue of a suitable approximation argument, we prove a Pohozaev identity for nonlinear nonlocal problems on $\mathbb{R}^N$ involving the fractional $p-$Laplacian operator. Furthermore we provide an application of the identity to show that some relevant levels of the energy functional associated with the problem coincide.
Labels: Mathematics

ID: 16,451
A Second Wave of Expanders over Finite Fields
This is an expository survey on recent sum-product results in finite fields. We present a number of sum-product or "expander" results that say that if $|A| > p^{2/3}$ then some set determined by sums and products of elements of $A$ is nearly as large as possible, and if $|A|<p^{2/3}$ then the set in question is significantly larger than $A$. These results are based on a point-plane incidence bound of Rudnev, and are quantitatively stronger than a wave of earlier results following Bourgain, Katz, and Tao's breakthrough sum-product result. In addition, we present two geometric results, an incidence bound due to Stevens and de Zeeuw and a bound on collinear triples, as well as an example of an expander that breaks the threshold of $p^{2/3}$ required by the other results. We have simplified proofs wherever possible, and hope that this survey may serve as a compact guide to recent advances in arithmetic combinatorics over finite fields. We do not claim originality for any of the results.
Labels: Mathematics

ID: 16,452
Homology of torus knots
Using the method of Elias-Hogancamp and combinatorics of toric braids we give an explicit formula for the triply graded Khovanov-Rozansky homology of an arbitrary torus knot, thereby proving some of the conjectures of Aganagic-Shakirov, Cherednik, Gorsky-Negut and Oblomkov-Rasmussen-Shende.
Labels: Mathematics

ID: 16,453
2-associahedra
For any $r\geq 1$ and $\mathbf{n} \in \mathbb{Z}_{\geq0}^r \setminus \{\mathbf0\}$ we construct a poset $W_{\mathbf{n}}$ called a 2-associahedron. The 2-associahedra arose in symplectic geometry, where they are expected to control maps between Fukaya categories of different symplectic manifolds. We prove that the completion $\widehat{W_{\mathbf{n}}}$ is an abstract polytope of dimension $|\mathbf{n}|+r-3$. There are forgetful maps $W_{\mathbf{n}} \to K_r$, where $K_r$ is the $(r-2)$-dimensional associahedron, and the 2-associahedra specialize to the associahedra (in two ways) and to the multiplihedra. In an appendix, we work out the 2- and 3-dimensional associahedra in detail.
0
0
1
0
0
0
16,454
Anisotropic thermophoresis
Colloidal migration in a temperature gradient is referred to as thermophoresis. In contrast to particles with spherical shape, we show that elongated colloids may have a thermophoretic response that varies with the colloid orientation. Remarkably, this can translate into a non-vanishing thermophoretic force in the direction perpendicular to the temperature gradient. Unlike the friction force, the thermophoretic force on a rod oriented with the temperature gradient can be larger or smaller than when it is oriented perpendicular to it. The precise anisotropic thermophoretic behavior clearly depends on the colloidal rod aspect ratio, and also on its surface details, which provides an interesting tunability to devices constructed on this principle. By means of mesoscale hydrodynamic simulations, we characterize this effect for different types of rod-like colloids.
0
1
0
0
0
0
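As background for the abstract above (standard definitions from the thermophoresis literature, not equations taken from the paper): thermophoretic drift is usually written as $\mathbf{v}_T=-D_T\nabla T$ with Soret coefficient $S_T=D_T/D$; the anisotropy discussed here amounts to promoting $D_T$ to a tensor, so the drift need not be antiparallel to the gradient:

$$\mathbf{v}_T \;=\; -\,\mathbf{D}_T\,\nabla T,\qquad \mathbf{D}_T \;=\; D_T^{\parallel}\,\hat{\mathbf{n}}\hat{\mathbf{n}} \;+\; D_T^{\perp}\left(\mathbf{I}-\hat{\mathbf{n}}\hat{\mathbf{n}}\right),$$

where $\hat{\mathbf{n}}$ is the rod orientation; whenever $D_T^{\parallel}\neq D_T^{\perp}$ and $\hat{\mathbf{n}}$ is tilted with respect to $\nabla T$, a force component perpendicular to the gradient appears.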
16,455
Ultra-Wideband Aided Fast Localization and Mapping System
This paper proposes an ultra-wideband (UWB) aided localization and mapping system that leverages an inertial sensor and a depth camera. Inspired by the fact that a visual odometry (VO) system, regardless of its accuracy in the short term, still faces challenges with accumulated errors in the long run or under unfavourable environments, the UWB ranging measurements are fused to remove the visual drift and improve the robustness. A general framework is developed which consists of three parallel threads, two of which carry out the visual-inertial odometry (VIO) and UWB localization, respectively. The third, a mapping thread, integrates visual tracking constraints into a pose graph with the proposed smooth and virtual range constraints, such that an optimization is performed to provide a robust trajectory estimate. Experiments show that the proposed system is able to create dense drift-free maps in real time, even running on an ultra-low-power processor in featureless environments.
1
0
0
0
0
0
16,456
Modular Sensor Fusion for Semantic Segmentation
Sensor fusion is a fundamental process in robotic systems as it extends the perceptual range and increases robustness in real-world operations. Current multi-sensor deep learning based semantic segmentation approaches do not provide robustness to under-performing classes in one modality, or require a specific architecture with access to the full aligned multi-sensor training data. In this work, we analyze statistical fusion approaches for semantic segmentation that overcome these drawbacks while keeping a competitive performance. The studied approaches are modular by construction, allowing different training sets per modality, and only a much smaller subset is needed to calibrate the statistical models. We evaluate a range of statistical fusion approaches and report their performance against state-of-the-art baselines on both real-world and simulated data. In our experiments, the approach improves IoU over the best single-modality segmentation results by up to 5%. We make all implementations and configurations publicly available.
1
0
0
0
0
0
16,457
Thickness-dependent electronic and magnetic properties of $\gamma'$-Fe$_{4}$N atomic layers on Cu(001)
Growth, electronic and magnetic properties of $\gamma'$-Fe$_{4}$N atomic layers on Cu(001) are studied by scanning tunneling microscopy/spectroscopy and x-ray absorption spectroscopy/magnetic circular dichroism. A continuous film of ordered trilayer $\gamma'$-Fe$_{4}$N is obtained by Fe deposition under N$_{2}$ atmosphere onto monolayer Fe$_{2}$N/Cu(001), while the repetition of a bombardment with 0.5 keV N$^{+}$ ions during growth cycles results in imperfect bilayer $\gamma'$-Fe$_{4}$N. The increase in the sample thickness causes the change of the surface electronic structure, as well as the enhancement in the spin magnetic moment of Fe atoms reaching $\sim$ 1.4 $\mu_{\mathrm B}$/atom in the trilayer sample. The observed thickness-dependent properties of the system are well interpreted by layer-resolved density of states calculated using first principles, which demonstrates the strongly layer-dependent electronic states within each surface, subsurface, and interfacial plane of the $\gamma'$-Fe$_{4}$N atomic layers on Cu(001).
0
1
0
0
0
0
16,458
Error Characterization, Mitigation, and Recovery in Flash Memory Based Solid-State Drives
NAND flash memory is ubiquitous in everyday life today because its capacity has continuously increased and cost has continuously decreased over decades. This positive growth is a result of two key trends: (1) effective process technology scaling, and (2) multi-level (e.g., MLC, TLC) cell data coding. Unfortunately, the reliability of raw data stored in flash memory has also continued to become more difficult to ensure, because these two trends lead to (1) fewer electrons in the flash memory cell (floating gate) to represent the data and (2) larger cell-to-cell interference and disturbance effects. Without mitigation, worsening reliability can reduce the lifetime of NAND flash memory. As a result, flash memory controllers in solid-state drives (SSDs) have become much more sophisticated: they incorporate many effective techniques to ensure the correct interpretation of noisy data stored in flash memory cells. In this article, we review recent advances in SSD error characterization, mitigation, and data recovery techniques for reliability and lifetime improvement. We provide rigorous experimental data from state-of-the-art MLC and TLC NAND flash devices on various types of flash memory errors, to motivate the need for such techniques. Based on the understanding developed by the experimental characterization, we describe several mitigation and recovery techniques, including (1) cell-to-cell interference mitigation, (2) optimal multi-level cell sensing, (3) error correction using state-of-the-art algorithms and methods, and (4) data recovery when error correction fails. We quantify the reliability improvement provided by each of these techniques. Looking forward, we briefly discuss how flash memory and these techniques could evolve into the future.
1
0
0
0
0
0
16,459
Unified Backpropagation for Multi-Objective Deep Learning
A common practice in most deep convolutional neural architectures is to employ fully-connected layers followed by a Softmax activation to minimize the cross-entropy loss for classification. Recent studies show that substituting or augmenting the Softmax objective with the cost functions of support vector machines or linear discriminant analysis is highly beneficial for improving the classification performance of hybrid neural networks. We propose a novel paradigm to link the optimization of several hybrid objectives through unified backpropagation. This greatly alleviates the burden of extensive boosting for independent objective functions or complex formulation of multi-objective gradients. Hybrid loss functions are linked by basic probability assignment from evidence theory. We conduct our experiments for a variety of scenarios and standard datasets to evaluate the advantage of our proposed unification approach in delivering consistent improvements to the classification performance of deep convolutional neural networks.
1
0
0
1
0
0
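A minimal sketch of backpropagating one combined gradient through several classification objectives, as the abstract above describes. The fixed weights below stand in for the paper's evidence-theoretic (basic probability assignment) weighting, which is not reproduced here; the hinge term is the common Weston-Watkins multiclass variant.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def unified_grad(z, y_onehot, w_ce=0.7, w_hinge=0.3):
    """Gradient w.r.t. logits z of w_ce * cross-entropy + w_hinge * hinge."""
    n = len(z)
    g_ce = (softmax(z) - y_onehot) / n
    # Multiclass hinge: sum over j != y of max(0, z_j - z_y + 1).
    margins = z - (z * y_onehot).sum(axis=1, keepdims=True) + 1.0
    viol = ((margins > 0) & (y_onehot == 0)).astype(float)
    g_h = viol - y_onehot * viol.sum(axis=1, keepdims=True)
    return w_ce * g_ce + w_hinge * g_h / n    # one backward signal for the net

rng = np.random.default_rng(0)
z = rng.normal(size=(5, 3))
y = np.eye(3)[rng.integers(0, 3, size=5)]
print(unified_grad(z, y))
```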
16,460
Information-Theoretic Representation Learning for Positive-Unlabeled Classification
Recent advances in weakly supervised classification allow us to train a classifier only from positive and unlabeled (PU) data. However, existing PU classification methods typically require an accurate estimate of the class-prior probability, which is a critical bottleneck particularly for high-dimensional data. This problem has been commonly addressed by applying principal component analysis in advance, but such unsupervised dimension reduction can collapse underlying class structure. In this paper, we propose a novel representation learning method from PU data based on the information-maximization principle. Our method does not require class-prior estimation and thus can be used as a preprocessing method for PU classification. Through experiments, we demonstrate that our method combined with deep neural networks highly improves the accuracy of PU class-prior estimation, leading to state-of-the-art PU classification performance.
1
0
0
1
0
0
16,461
Aspects of Chaitin's Omega
The halting probability of a Turing machine, also known as Chaitin's Omega, is an algorithmically random number with many interesting properties. Since Chaitin's seminal work, many popular expositions have appeared, mainly focusing on the metamathematical or philosophical significance of Omega (or arguing against it). At the same time, a rich mathematical theory exploring the properties of Chaitin's Omega has been brewing in various technical papers, which quietly reveals the significance of this number to many aspects of contemporary algorithmic information theory. The purpose of this survey is to expose these developments and tell a story about Omega, which outlines its multifaceted mathematical properties and roles in algorithmic randomness.
1
0
1
0
0
0
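For reference, the standard definition behind the abstract above (textbook material, not a quotation from the survey): for a universal prefix-free Turing machine $U$,

$$\Omega_U \;=\; \sum_{p\,:\;U(p)\ \text{halts}} 2^{-|p|},$$

the probability that $U$ halts when the bits of its program $p$ are drawn by fair coin flips; $\Omega_U$ is algorithmically (Martin-Löf) random, so its binary expansion is incompressible.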
16,462
This Looks Like That: Deep Learning for Interpretable Image Recognition
When we are faced with challenging image classification tasks, we often explain our reasoning by dissecting the image and pointing out prototypical aspects of one class or another. The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture that reasons in a similar way: the network dissects the image by finding prototypical parts, and combines evidence from the prototypes to make a final classification. The model thus reasons in a way that is qualitatively similar to the way ornithologists, physicians, geologists, architects, and others would explain to people how to solve challenging image classification tasks. The network uses only image-level labels for training, meaning that there are no labels for parts of images. We demonstrate our method on the CUB-200-2011 dataset and the CBIS-DDSM dataset. Our experiments show that our interpretable network can achieve accuracy comparable to its analogous standard non-interpretable counterpart, as well as to other interpretable deep models.
0
0
0
1
0
0
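A toy NumPy sketch of the prototype-layer computation described above: distances between patch features and learned prototypes are turned into similarity scores (one per prototype) that a final linear layer would consume. The log-based activation is modeled on activations commonly used in prototype networks; this is a simplified illustration, not the authors' implementation.

```python
import numpy as np

def prototype_scores(feat_map, prototypes, eps=1e-4):
    """feat_map: (H, W, D) conv features; prototypes: (P, D) vectors.

    Returns one similarity score per prototype: how close its
    best-matching image patch is ('this part looks like that prototype').
    """
    H, W, D = feat_map.shape
    patches = feat_map.reshape(-1, D)                         # 1x1 patches
    d2 = ((patches[:, None, :] - prototypes[None]) ** 2).sum(-1)
    best = d2.min(axis=0)                                     # per prototype
    return np.log((best + 1.0) / (best + eps))                # small distance -> large score

rng = np.random.default_rng(1)
scores = prototype_scores(rng.normal(size=(7, 7, 16)), rng.normal(size=(10, 16)))
print(scores.round(3))   # would feed into a linear layer for class logits
```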
16,463
Deep Learning in the Automotive Industry: Applications and Tools
Deep Learning refers to a set of machine learning techniques that utilize neural networks with many hidden layers for tasks such as image classification, speech recognition and language understanding. Deep learning has been proven to be very effective in these domains and is pervasively used by many Internet services. In this paper, we describe different automotive use cases for deep learning, in particular in the domain of computer vision. We survey the current state-of-the-art in libraries, tools and infrastructures (e.g. GPUs and clouds) for implementing, training and deploying deep neural networks. We particularly focus on convolutional neural networks and computer vision use cases, such as the visual inspection process in manufacturing plants and the analysis of social media data. To train neural networks, curated and labeled datasets are essential. In particular, both the availability and scope of such datasets are typically very limited. A main contribution of this paper is the creation of an automotive dataset that allows us to learn and automatically recognize different vehicle properties. We describe an end-to-end deep learning application utilizing a mobile app for data collection and process support, and an Amazon-based cloud backend for storage and training. For training, we evaluate the use of cloud and on-premises infrastructures (including multiple GPUs) in conjunction with different neural network architectures and frameworks. We assess both the training times and the accuracy of the classifier. Finally, we demonstrate the effectiveness of the trained classifier in a real-world setting during the manufacturing process.
1
0
0
0
0
0
16,464
Design, Engineering and Optimization of a Grid-Tie Multicell Inverter for Energy Storage Applications
Multilevel converters have found many applications within renewable energy systems thanks to their unique capability of generating multiple voltage levels. However, these converters need multiple DC sources, and voltage balancing over the capacitors of these systems is cumbersome. In this work, a new grid-tie multicell inverter with a high level of safety has been designed, engineered and optimized for integrating energy storage devices into the electric grid. The multilevel converter proposed in this work is capable of maintaining the flying-capacitor voltage at the desired value. Solar cells are the primary energy sources for the proposed inverter, from which the maximum power density is obtained. Finally, the performance of the inverter and its control method was simulated using the PSCAD/EMTDC software package, and good agreement with experimental data was achieved.
0
1
0
0
0
0
16,465
An infinitely differentiable function with compact support: Definition and properties
This is the English translation of my old paper 'Definición y estudio de una función indefinidamente diferenciable de soporte compacto', Rev. Real Acad. Ciencias 76 (1982) 21-38. In it a function (essentially the Fabius function) is defined and its main properties are given, including: uniqueness, interpretation as a probability, partition of unity with its translates, formulas for its $n$-th derivatives, rationality of its values at dyadic points, formulas for the effective computation of these values, and some arithmetical properties of these values. Since I now need it as a reference, I have translated it.
0
0
1
0
0
0
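For context on the function named above (standard characterizations from the general literature on the Fabius function, not claims extracted from the translated paper): it solves a functional-differential equation and admits a probabilistic description,

$$f'(x) \;=\; 2\,f(2x)\ \ (0\le x\le 1),\qquad f(0)=0,\ f(1)=1,\qquad f(x)\;=\;\Pr\Bigl[\,\textstyle\sum_{n\ge 1} 2^{-n}U_n \le x\Bigr],$$

with $U_1,U_2,\dots$ independent and uniform on $[0,1]$; this is what underlies the "interpretation as a probability" in the abstract.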
16,466
Analysis of bacterial population growth using extended logistic growth model with distributed delay
In the present work, we develop a delayed logistic growth model to study the effects of decontamination on the bacterial population in the ambient environment. Using linear stability analysis, we study different case scenarios, in which the bacterial population may establish itself at the positive equilibrium or go extinct due to increased decontamination. The results are verified using numerical simulation of the model.
0
0
0
0
1
0
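A generic form of a logistic model with distributed delay, of the kind the abstract describes (the placement of the decontamination term $\mu N$ and the kernel normalization are illustrative assumptions, not the paper's exact equations):

$$\frac{dN}{dt} \;=\; r\,N(t)\left(1-\frac{1}{K}\int_{0}^{\infty} k(s)\,N(t-s)\,ds\right)\;-\;\mu\,N(t),\qquad \int_0^\infty k(s)\,ds = 1,$$

where $r$ is the growth rate, $K$ the carrying capacity, $k$ the delay kernel, and $\mu$ a decontamination (removal) rate; linear stability about the positive equilibrium then hinges on the Laplace transform of $k$.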
16,467
Reverse Quantum Annealing Approach to Portfolio Optimization Problems
We investigate a hybrid quantum-classical solution method to the mean-variance portfolio optimization problems. Starting from real financial data statistics and following the principles of the Modern Portfolio Theory, we generate parametrized samples of portfolio optimization problems that can be related to quadratic binary optimization forms programmable in the analog D-Wave Quantum Annealer 2000Q. The instances are also solvable by an industry-established Genetic Algorithm approach, which we use as a classical benchmark. We investigate several options to run the quantum computation optimally, ultimately discovering that the best results in terms of expected time-to-solution as a function of number of variables for the hardest instances set are obtained by seeding the quantum annealer with a solution candidate found by a greedy local search and then performing a reverse annealing protocol. The optimized reverse annealing protocol is found to be more than 100 times faster than the corresponding forward quantum annealing on average.
0
0
0
0
0
1
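For concreteness, one common way to cast mean-variance portfolio selection as a quadratic binary (QUBO) form programmable on a quantum annealer, as the abstract describes; the trade-off parameter $q$, budget penalty $\lambda$, and pick-$B$-of-$n$ encoding are illustrative assumptions, not the authors' exact parametrization:

$$\min_{x\in\{0,1\}^{n}}\; q\,x^{\top}\Sigma\,x \;-\;(1-q)\,\boldsymbol{\mu}^{\top}x\;+\;\lambda\Bigl(\textstyle\sum_{i=1}^{n}x_{i}-B\Bigr)^{2},$$

where $x_i=1$ selects asset $i$, $\Sigma$ is the return covariance, $\boldsymbol{\mu}$ the expected returns, and $B$ the number of assets to hold; expanding the penalty keeps the objective quadratic in $x$, as the QUBO form requires.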
16,468
GLoMo: Unsupervisedly Learned Relational Graphs as Transferable Representations
Modern deep transfer learning approaches have mainly focused on learning generic feature vectors from one task that are transferable to other tasks, such as word embeddings in language and pretrained convolutional features in vision. However, these approaches usually transfer unary features and largely ignore more structured graphical representations. This work explores the possibility of learning generic latent relational graphs that capture dependencies between pairs of data units (e.g., words or pixels) from large-scale unlabeled data and transferring the graphs to downstream tasks. Our proposed transfer learning framework improves performance on various tasks including question answering, natural language inference, sentiment analysis, and image classification. We also show that the learned graphs are generic enough to be transferred to different embeddings on which the graphs have not been trained (including GloVe embeddings, ELMo embeddings, and task-specific RNN hidden unit), or embedding-free units such as image pixels.
0
0
0
1
0
0
16,469
Learning to cluster in order to transfer across domains and tasks
This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster. The key insight is that, in addition to features, we can transfer similarity information, and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning. We begin by reducing categorical information to pairwise constraints, which only consider whether two instances belong to the same class or not. This similarity is category-agnostic and can be learned from data in the source domain using a similarity network. We then present two novel approaches for performing transfer learning using this similarity function. First, for unsupervised domain adaptation, we design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs. Second, for cross-task learning (i.e., unsupervised clustering with unseen categories), we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network. Since the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches. Using this method, we first show state-of-the-art results for the challenging cross-task problem, applied on Omniglot and ImageNet. Our results show that we can reconstruct semantic clusters with high accuracy. We then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets. Our approach does not explicitly deal with domain discrepancy; if combined with a domain adaptation loss, it shows further improvement.
1
0
0
0
0
0
16,470
Modeling Perceptual Aliasing in SLAM via Discrete-Continuous Graphical Models
Perceptual aliasing is one of the main causes of failure for Simultaneous Localization and Mapping (SLAM) systems operating in the wild. Perceptual aliasing is the phenomenon where different places generate a similar visual (or, in general, perceptual) footprint. This causes spurious measurements to be fed to the SLAM estimator, which typically results in incorrect localization and mapping results. The problem is exacerbated by the fact that those outliers are highly correlated, in the sense that perceptual aliasing creates a large number of mutually-consistent outliers. Another issue stems from the fact that most state-of-the-art techniques rely on a given trajectory guess (e.g., from odometry) to discern between inliers and outliers and this makes the resulting pipeline brittle, since the accumulation of error may result in incorrect choices and recovery from failures is far from trivial. This work provides a unified framework to model perceptual aliasing in SLAM and provides practical algorithms that can cope with outliers without relying on any initial guess. We present two main contributions. The first is a Discrete-Continuous Graphical Model (DC-GM) for SLAM: the continuous portion of the DC-GM captures the standard SLAM problem, while the discrete portion describes the selection of the outliers and models their correlation. The second contribution is a semidefinite relaxation to perform inference in the DC-GM that returns estimates with provable sub-optimality guarantees. Experimental results on standard benchmarking datasets show that the proposed technique compares favorably with state-of-the-art methods while not relying on an initial guess for optimization.
1
0
0
0
0
0
16,471
Sparse distributed representation, hierarchy, critical periods, metaplasticity: the keys to lifelong fixed-time learning and best-match retrieval
Among the more important hallmarks of human intelligence, which any artificial general intelligence (AGI) should have, are the following. 1. It must be capable of on-line learning, including with single/few trials. 2. Memories/knowledge must be permanent over lifelong durations, safe from catastrophic forgetting. Some confabulation, i.e., semantically plausible retrieval errors, may gradually accumulate over time. 3. The time to both: a) learn a new item, and b) retrieve the best-matching / most relevant item(s), i.e., do similarity-based retrieval, must remain constant throughout the lifetime. 4. The system should never become full: it must remain able to store new information, i.e., make new permanent memories, throughout very long lifetimes. No artificial computational system has been shown to have all these properties. Here, we describe a neuromorphic associative memory model, Sparsey, which does, in principle, possess them all. We cite prior results supporting possession of hallmarks 1 and 3 and sketch an argument, hinging on strongly recursive, hierarchical, part-whole compositional structure of natural data, that Sparsey also possesses hallmarks 2 and 4.
0
0
0
0
1
0
16,472
Efficient implementations of the modified Gram-Schmidt orthogonalization with a non-standard inner product
The modified Gram-Schmidt (MGS) orthogonalization is one of the most well-used algorithms for computing the thin QR factorization. MGS can be straightforwardly extended to a non-standard inner product with respect to a symmetric positive definite matrix $A$. For the thin QR factorization of an $m \times n$ matrix with the non-standard inner product, a naive implementation of MGS requires $2n$ matrix-vector multiplications (MV) with respect to $A$. In this paper, we propose $n$-MV implementations: a high accuracy (HA) type and a high performance (HP) type, of MGS. We also provide error bounds of the HA-type implementation. Numerical experiments and analysis indicate that the proposed implementations have competitive advantages over the naive implementation in terms of both computational cost and accuracy.
0
0
1
0
0
0
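A runnable sketch of the baseline the abstract above starts from: modified Gram-Schmidt under the $A$-inner product $\langle u,v\rangle_A = u^{\top}Av$. This is the straightforward textbook variant (one multiplication by $A$ per inner product); the paper's contribution is precisely to reorganize it into $n$-MV high-accuracy and high-performance versions, which are not reproduced here.

```python
import numpy as np

def mgs_A(X, A):
    """Thin QR of X with respect to <u, v> = u.T @ A @ v (A SPD).

    Returns Q, R with X = Q @ R and Q.T @ A @ Q = I.
    """
    m, n = X.shape
    Q = X.astype(float).copy()
    R = np.zeros((n, n))
    for j in range(n):
        for i in range(j):
            R[i, j] = Q[:, i] @ (A @ Q[:, j])    # A-inner product
            Q[:, j] -= R[i, j] * Q[:, i]
        R[j, j] = np.sqrt(Q[:, j] @ (A @ Q[:, j]))
        Q[:, j] /= R[j, j]
    return Q, R

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
A = M @ M.T + 6.0 * np.eye(6)                    # symmetric positive definite
Q, R = mgs_A(rng.normal(size=(6, 3)), A)
assert np.allclose(Q.T @ A @ Q, np.eye(3), atol=1e-10)
assert np.allclose(np.triu(R), R)                # R is upper triangular
```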
16,473
On the isoperimetric constant, covariance inequalities and $L_p$-Poincaré inequalities in dimension one
Firstly, we derive in dimension one a new covariance inequality of $L_{1}-L_{\infty}$ type that characterizes the isoperimetric constant as the best constant achieving the inequality. Secondly, we generalize our result to $L_{p}-L_{q}$ bounds for the covariance. Consequently, we recover Cheeger's inequality without using the co-area formula. We also prove a generalized weighted Hardy type inequality that is needed to derive our covariance inequalities and that is of independent interest. Finally, we explore some consequences of our covariance inequalities for $L_{p}$-Poincaré inequalities and moment bounds. In particular, we obtain optimal constants in general $L_{p}$-Poincaré inequalities for measures with finite isoperimetric constant, thus generalizing in dimension one Cheeger's inequality, which is a $L_{p}$-Poincaré inequality for $p=2$, to any real $p\geq 1$.
0
0
1
1
0
0
16,474
Faster arbitrary-precision dot product and matrix multiplication
We present algorithms for real and complex dot product and matrix multiplication in arbitrary-precision floating-point and ball arithmetic. A low-overhead dot product is implemented on the level of GMP limb arrays; it is about twice as fast as previous code in MPFR and Arb at precision up to several hundred bits. Up to 128 bits, it is 3-4 times as fast, costing 20-30 cycles per term for floating-point evaluation and 40-50 cycles per term for balls. We handle large matrix multiplications even more efficiently via blocks of scaled integer matrices. The new methods are implemented in Arb and significantly speed up polynomial operations and linear algebra.
1
0
0
0
0
0
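The paper's code lives in Arb (a C library); purely as a point of comparison, here is how an arbitrary-precision dot product looks from Python with the mpmath library. This is an independent illustration of the operation being benchmarked, not the paper's implementation, and makes no claim about its relative speed.

```python
from mpmath import mp, mpf, fdot

mp.prec = 256                                  # 256-bit working precision
n = 10_000
xs = [mpf(1) / (k + 1) for k in range(n)]      # 1, 1/2, 1/3, ...
ys = [mpf(2) ** (-k) for k in range(n)]        # 1, 1/2, 1/4, ...

# fdot evaluates the whole sum of products at the working precision,
# rather than rounding each partial sum in a hand-written Python loop.
print(fdot(xs, ys))
```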
16,475
Improving Distributed Representations of Tweets - Present and Future
Unsupervised representation learning for tweets is an important research field which helps in solving several business applications such as sentiment analysis, hashtag prediction, paraphrase detection and microblog ranking. A good tweet representation learning model must handle the idiosyncratic nature of tweets, which poses several challenges such as short length, informal words, unusual grammar and misspellings. However, there is a lack of prior work surveying representation learning models with a focus on tweets. In this work, we organize the models based on their objective functions, which aids the understanding of the literature. We also provide interesting future directions, which we believe are fruitful for advancing this field by building high-quality tweet representation learning models.
1
0
0
0
0
0
16,476
Tverberg-type theorems for matroids: A counterexample and a proof
Bárány, Kalai, and Meshulam recently obtained a topological Tverberg-type theorem for matroids, which guarantees multiple coincidences for continuous maps from a matroid complex to d-dimensional Euclidean space, if the matroid has sufficiently many disjoint bases. They make a conjecture on the connectivity of k-fold deleted joins of a matroid with many disjoint bases, which would yield a much tighter result - but we provide a counterexample already for the case of k=2, where a tight Tverberg-type theorem would be a topological Radon theorem for matroids. Nevertheless, we prove the topological Radon theorem for the counterexample family of matroids by an index calculation, despite the failure of the connectivity-based approach.
0
0
1
0
0
0
16,477
Near-Perfect Conversion of a Propagating Plane Wave into a Surface Wave Using Metasurfaces
In this paper, theoretical and numerical studies of perfect/nearly-perfect conversion of a plane wave into a surface wave are presented. The problem of determining the electromagnetic properties of an inhomogeneous lossless boundary which would fully transform an incident plane wave into a surface wave propagating along the boundary is considered. An approximate field solution which produces a slowly growing surface wave and satisfies the energy conservation law is discussed and numerically demonstrated. The results of the study are of great importance for the future development of such devices as perfect leaky-wave antennas and can potentially lead to many novel applications.
0
1
0
0
0
0
16,478
A National Research Agenda for Intelligent Infrastructure
Our infrastructure touches the day-to-day life of each of our fellow citizens, and its capabilities, integrity and sustainability are crucial to the overall competitiveness and prosperity of our country. Unfortunately, the current state of U.S. infrastructure is not good: the American Society of Civil Engineers' latest report on America's infrastructure ranked it at a D+ -- in need of $3.9 trillion in new investments. This dire situation constrains the growth of our economy, threatens our quality of life, and puts our global leadership at risk. The ASCE report called out three actions that need to be taken to address our infrastructure problem: 1) investment and planning in the system; 2) bold leadership by elected officials at the local, state and federal levels; and 3) planning for sustainability and resiliency in our infrastructure. While our immediate infrastructure needs are critical, it would be shortsighted to simply replicate more of what we have today. By doing so, we miss the opportunity to create Intelligent Infrastructure that will provide the foundation for increased safety and resilience, improved efficiencies and civic services, and broader economic opportunities and job growth. Indeed, our challenge is to proactively engage the declining, incumbent national infrastructure system and not merely repair it, but to enhance it; to create an internationally competitive cyber-physical system that provides an immediate opportunity for better services for citizens and that acts as a platform for a 21st century, high-tech economy and beyond.
1
0
0
0
0
0
16,479
Surface Edge Explorer (SEE): Planning Next Best Views Directly from 3D Observations
Surveying 3D scenes is a common task in robotics. Systems can do so autonomously by iteratively obtaining measurements. This process of planning observations to improve the model of a scene is called Next Best View (NBV) planning. NBV planning approaches often use either volumetric (e.g., voxel grids) or surface (e.g., triangulated meshes) representations. Volumetric approaches generalise well between scenes as they do not depend on surface geometry but do not scale to high-resolution models of large scenes. Surface representations can obtain high-resolution models at any scale but often require tuning of unintuitive parameters or multiple survey stages. This paper presents a scene-model-free NBV planning approach with a density representation. The Surface Edge Explorer (SEE) uses the density of current measurements to detect and explore observed surface boundaries. This approach is shown experimentally to provide better surface coverage in lower computation time than the evaluated state-of-the-art volumetric approaches while moving equivalent distances.
1
0
0
0
0
0
16,480
N/O abundance ratios in gamma-ray burst and supernova host galaxies at z<4. Comparison with AGN, starburst and HII regions
The distribution of N/O abundance ratios calculated by the detailed modelling of different galaxy spectra at z<4 is investigated. Supernova (SN) and long gamma-ray-burst (LGRB) host galaxies cover different redshift domains. N/O in SN hosts increases due to secondary N production towards low z (0.01) accompanying the growing trend of active galaxies (AGN, LINER). N/O in LGRB hosts decreases rapidly between z>1 and z ~0.1 following the N/H trend and reach the characteristic N/O ratios calculated for the HII regions in local and nearby galaxies. The few short period GRB (SGRB) hosts included in the galaxy sample show N/H <0.04 solar and O/H solar. They seem to continue the low bound N/H trend of SN hosts at z<0.3. The distribution of N/O as function of metallicity for SN and LGRB hosts is compared with star chemical evolution models. The results show that several LGRB hosts can be explained by star multi-bursting models when 12+log(O/H) <8.5, while some objects follow the trend of continuous star formation models. N/O in SN hosts at log(O/H)+12 <8.5 are not well explained by stellar chemical evolution models calculated for starburst galaxies. At 12+log(O/H) >8.5 many different objects are nested close to O/H solar with N/O ranging between the maximum corresponding to starburst galaxies and AGN and the minimum corresponding to HII regions and SGRB.
0
1
0
0
0
0
16,481
Baselines and a datasheet for the Cerema AWP dataset
This paper presents the recently published Cerema AWP (Adverse Weather Pedestrian) dataset for various machine learning tasks and its exports in a machine learning friendly format. We explain why this dataset can be interesting (mainly because it is a highly controlled and fully annotated image dataset) and present baseline results for various tasks. Moreover, we decided to follow the very recent suggestion of datasheets for datasets, trying to standardize all the available information about the dataset, with a transparency objective.
0
0
0
1
0
0
16,482
Characterizing Directed and Undirected Networks via Multidimensional Walks with Jumps
Estimating distributions of node characteristics (labels) such as number of connections or citizenship of users in a social network via edge and node sampling is a vital part of the study of complex networks. Due to its low cost, sampling via a random walk (RW) has been proposed as an attractive solution to this task. Most RW methods assume either that the network is undirected or that walkers can traverse edges regardless of their direction. Some RW methods have been designed for directed networks where edges coming into a node are not directly observable. In this work, we propose Directed Unbiased Frontier Sampling (DUFS), a sampling method based on a large number of coordinated walkers, each starting from a node chosen uniformly at random. It is applicable to directed networks with invisible incoming edges because it constructs, in real time, an undirected graph consistent with the walkers' trajectories, and due to the use of random jumps which prevent walkers from being trapped. DUFS generalizes previous RW methods and is suited to undirected networks and to directed networks regardless of in-edge visibility. We also propose an improved estimator of node label distributions that combines information from the initial walker locations with subsequent RW observations. We evaluate DUFS, compare it to other RW methods, investigate the impact of its parameters on estimation accuracy and provide practical guidelines for choosing them. In estimating out-degree distributions, DUFS yields significantly better estimates of the head of the distribution than other methods, while matching or exceeding estimation accuracy of the tail. Lastly, we show that DUFS outperforms uniform node sampling when estimating distributions of node labels of the top 10% largest-degree nodes, even when sampling a node uniformly has the same cost as RW steps.
1
1
0
0
0
0
16,483
Tensorial Recurrent Neural Networks for Longitudinal Data Analysis
Traditional Recurrent Neural Networks assume vectorized data as inputs. However, many data from modern science and technology come with structure, such as tensorial time series data. To apply recurrent neural networks to this type of data, a vectorisation step is necessary, yet such vectorisation loses the precise information of the spatial or longitudinal dimensions. In addition, vectorized data are not an optimal representation for learning from longitudinal data. In this paper, we propose a new variant of tensorial neural networks which directly takes tensorial time series data as input. We call this new variant the Tensorial Recurrent Neural Network (TRNN). The proposed TRNN is based on the tensor Tucker decomposition.
1
0
0
1
0
0
16,484
End-to-End Learning for the Deep Multivariate Probit Model
The multivariate probit model (MVP) is a popular classic model for studying binary responses of multiple entities. Nevertheless, the computational challenge of learning the MVP model, given that its likelihood involves integrating over a multidimensional constrained space of latent variables, significantly limits its application in practice. We propose a flexible deep generalization of the classic MVP, the Deep Multivariate Probit Model (DMVP), which is an end-to-end learning scheme that uses an efficient parallel sampling process of the multivariate probit model to exploit GPU-boosted deep neural networks. We present both theoretical and empirical analysis of the convergence behavior of DMVP's sampling process with respect to the resolution of the correlation structure. We provide convergence guarantees for DMVP and our empirical analysis demonstrates the advantages of DMVP's sampling compared with standard MCMC-based methods. We also show that when applied to multi-entity modelling problems, which are natural DMVP applications, DMVP trains faster than classical MVP, by at least an order of magnitude, captures rich correlations among entities, and further improves the joint likelihood of entities compared with several competitive models.
0
0
0
1
0
0
16,485
Purity and separation for oriented matroids
Leclerc and Zelevinsky, motivated by the study of quasi-commuting quantum flag minors, introduced the notions of strongly separated and weakly separated collections. These notions are closely related to the theory of cluster algebras, to the combinatorics of the double Bruhat cells, and to the totally positive Grassmannian. A key feature, called the purity phenomenon, is that every maximal by inclusion strongly (resp., weakly) separated collection of subsets in $[n]$ has the same cardinality. In this paper, we extend these notions and define $\mathcal{M}$-separated collections, for any oriented matroid $\mathcal{M}$. We show that maximal by size $\mathcal{M}$-separated collections are in bijection with fine zonotopal tilings (if $\mathcal{M}$ is a realizable oriented matroid), or with one-element liftings of $\mathcal{M}$ in general position (for an arbitrary oriented matroid). We introduce the class of pure oriented matroids for which the purity phenomenon holds: an oriented matroid $\mathcal{M}$ is pure if $\mathcal{M}$-separated collections form a pure simplicial complex, i.e., any maximal by inclusion $\mathcal{M}$-separated collection is also maximal by size. We pay closer attention to several special classes of oriented matroids: oriented matroids of rank $3$, graphical oriented matroids, and uniform oriented matroids. We classify pure oriented matroids in these cases. An oriented matroid of rank $3$ is pure if and only if it is a positroid (up to reorienting and relabeling its ground set). A graphical oriented matroid is pure if and only if its underlying graph is an outerplanar graph, that is, a subgraph of a triangulation of an $n$-gon. We give a simple conjectural characterization of pure oriented matroids by forbidden minors and prove it for the above classes of matroids (rank $3$, graphical, uniform).
0
0
1
0
0
0
16,486
Random Manifolds have no Totally Geodesic Submanifolds
For $n\geq 4$ we show that generic closed Riemannian $n$-manifolds have no nontrivial totally geodesic submanifolds, answering a question of Spivak. An immediate consequence is a severe restriction on the isometry group of a generic Riemannian metric. Both results are widely believed to be true, but we are not aware of any proofs in the literature.
0
0
1
0
0
0
16,487
Kinematically Redundant Octahedral Motion Platform for Virtual Reality Simulations
We propose a novel design of a parallel manipulator of Stewart-Gough type for virtual reality applications of single individuals; i.e. an omni-directional treadmill is mounted on the motion platform in order to improve VR immersion by giving feedback to the human body. For this purpose we modify the well-known octahedral manipulator in a way that gives it one degree of kinematic redundancy, namely an equiform reconfigurability of the base. The instantaneous kinematics and singularities of this mechanism are studied, and "unavoidable singularities" are characterized in particular. These are poses of the motion platform which can only be realized by singular configurations of the mechanism, despite its kinematic redundancy.
1
0
0
0
0
0
16,488
Loss Max-Pooling for Semantic Image Segmentation
We introduce a novel loss max-pooling concept for handling imbalanced training data distributions, applicable as an alternative loss layer in the context of deep neural networks for semantic image segmentation. Most real-world semantic segmentation datasets exhibit long-tail distributions with few object categories comprising the majority of data and consequently biasing the classifiers towards them. Our method adaptively re-weights the contributions of each pixel based on their observed losses, targeting under-performing classification results as often encountered for under-represented object classes. Our approach goes beyond conventional cost-sensitive learning attempts through adaptive considerations that allow us to indirectly address both inter- and intra-class imbalances. We provide a theoretical justification of our approach, complementary to experimental analyses on benchmark datasets. In our experiments on the Cityscapes and Pascal VOC 2012 segmentation datasets we find consistently improved results, demonstrating the efficacy of our approach.
1
0
0
1
0
0
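A compact NumPy sketch of the loss max-pooling idea from the abstract above: rather than averaging every per-pixel cross-entropy term, pool the loss over the hardest pixels so under-performing (often under-represented) classes contribute more. The hard top-$k$ selection is a common simplification used here for illustration; the paper derives smoother adaptive weights.

```python
import numpy as np

def max_pooled_ce(probs, labels, pool_frac=0.25):
    """probs: (N, C) per-pixel class probabilities; labels: (N,) ints.

    Averages only the hardest pool_frac of per-pixel CE losses.
    """
    px_loss = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    k = max(1, int(pool_frac * len(px_loss)))
    return np.sort(px_loss)[-k:].mean()          # pool over hardest pixels

rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 5, size=1000)
print(max_pooled_ce(probs, labels))              # exceeds the plain mean CE by design
```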
16,489
Nonlinear Traveling Internal Waves in Depth-Varying Currents
In this work, we study the nonlinear traveling waves in density stratified fluids with depth varying shear currents. Beginning the formulation of the water-wave problem due to [1], we extend the work of [4] and [18] to examine the interface between two fluids of differing densities and varying linear shear. We derive as systems of equations depending only on variables at the interface, and numerically solve for periodic traveling wave solutions using numerical continuation. Here we consider only branches which bifurcate from solutions where there is no slip in the tangential velocity at the interface for the trivial flow. The spectral stability of these solutions is then determined using a numerical Fourier-Floquet technique. We find that the strength of the linear shear in each fluid impacts the stability of the corresponding traveling wave solutions. Specifically, opposing shears may amplify or suppress instabilities.
0
1
0
0
0
0
16,490
Instabilities of Internal Gravity Wave Beams
Internal gravity waves play a primary role in geophysical fluids: they contribute significantly to mixing in the ocean and they redistribute energy and momentum in the middle atmosphere. Until recently, most studies were focused on plane wave solutions. However, these solutions are not a satisfactory description of most geophysical manifestations of internal gravity waves, and it is now recognized that internal wave beams with a confined profile are ubiquitous in the geophysical context. We will discuss the reason for the ubiquity of wave beams in stratified fluids, related to the fact that they are solutions of the nonlinear governing equations. We will focus more specifically on situations with a constant buoyancy frequency. Moreover, in light of recent experimental and analytical studies of internal gravity beams, it is timely to discuss the two main mechanisms of instability for those beams. i) The Triadic Resonant Instability generating two secondary wave beams. ii) The streaming instability corresponding to the spontaneous generation of a mean flow.
0
1
0
0
0
0
16,491
Multiphoton-Excited Fluorescence of Silicon-Vacancy Color Centers in Diamond
Silicon-vacancy color centers in nanodiamonds are promising as fluorescent labels for biological applications, with a narrow, non-bleaching emission line at 738\,nm. Two-photon excitation of this fluorescence offers the possibility of low-background detection at significant tissue depth with high three-dimensional spatial resolution. We have measured the two-photon fluorescence cross section of a negatively-charged silicon vacancy (SiV$^-$) in ion-implanted bulk diamond to be $0.74(19) \times 10^{-50}{\rm cm^4\;s/photon}$ at an excitation wavelength of 1040\,nm. In comparison to the diamond nitrogen vacancy (NV) center, the expected detection threshold of a two-photon excited SiV center is more than an order of magnitude lower, largely due to its much narrower linewidth. We also present measurements of two- and three-photon excitation spectra, finding an increase in the two-photon cross section with decreasing wavelength, and discuss the physical interpretation of the spectra in the context of existing models of the SiV energy-level structure.
0
1
0
0
0
0
16,492
Exploring Latent Semantic Factors to Find Useful Product Reviews
Online reviews provided by consumers are a valuable asset for e-Commerce platforms, influencing potential consumers in making purchasing decisions. However, these reviews are of varying quality, with the useful ones buried deep within a heap of non-informative reviews. In this work, we attempt to automatically identify review quality in terms of its helpfulness to the end consumers. In contrast to previous works in this domain exploiting a variety of syntactic and community-level features, we delve deep into the semantics of reviews as to what makes them useful, providing interpretable explanation for the same. We identify a set of consistency and semantic factors, all from the text, ratings, and timestamps of user-generated reviews, making our approach generalizable across all communities and domains. We explore review semantics in terms of several latent factors like the expertise of its author, his judgment about the fine-grained facets of the underlying product, and his writing style. These are cast into a Hidden Markov Model -- Latent Dirichlet Allocation (HMM-LDA) based model to jointly infer: (i) reviewer expertise, (ii) item facets, and (iii) review helpfulness. Large-scale experiments on five real-world datasets from Amazon show significant improvement over state-of-the-art baselines in predicting and ranking useful reviews.
1
0
0
1
0
0
16,493
Thermodynamically-consistent semi-classical $\ell$-changing rates
We compare the results of the semi-classical (SC) and quantum-mechanical (QM) formalisms for angular-momentum changing transitions in Rydberg atom collisions given by Vrinceanu & Flannery, J. Phys. B 34, L1 (2001), and Vrinceanu, Onofrio & Sadeghpour, ApJ 747, 56 (2012), with those of the SC formalism using a modified Monte Carlo realization. We find that this revised SC formalism agrees well with the QM results. This provides further evidence that the rates derived from the QM treatment are appropriate to be used when modelling recombination through Rydberg cascades, an important process in understanding the state of material in the early universe. The rates for $\Delta\ell=\pm1$ derived from the QM formalism diverge when integrated to sufficiently large impact parameter, $b$. Further to the empirical limits to the $b$ integration suggested by Pengelly & Seaton, MNRAS 127, 165 (1964), we suggest that the fundamental issue causing this divergence in the theory is that it does not fully cater for the finite time taken for such distant collisions to complete.
0
1
0
0
0
0
16,494
A Fourier transform for the quantum Toda lattice
We introduce an algebraic Fourier transform for the quantum Toda lattice.
0
0
1
0
0
0
16,495
Semi-supervised learning
Semi-supervised learning deals with the problem of how, if possible, to take advantage of a huge amount of unclassified data to perform classification in situations where, typically, the labelled data are few. Even though this is not always possible (it depends on how useful knowing the distribution of the unlabelled data is for inferring the labels), several algorithms have been proposed recently. A new algorithm is proposed that, under almost necessary conditions, attains asymptotically the performance of the best theoretical rule as the amount of unlabelled data tends to infinity. The set of necessary assumptions, although reasonable, shows that semi-supervised classification only works for very well conditioned problems.
0
0
1
1
0
0
16,496
The Cosmic-Ray Neutron Rover - Mobile Surveys of Field Soil Moisture and the Influence of Roads
Measurements of root-zone soil moisture across spatial scales of tens to thousands of meters have been a challenge for many decades. The mobile application of Cosmic-Ray Neutron Sensing (CRNS) is a promising approach to measure field soil moisture non-invasively by surveying large regions with a ground-based vehicle. Recently, concerns have been raised about a potentially biasing influence of local structures and roads. We employed neutron transport simulations and dedicated experiments to quantify the influence of different road types on the CRNS measurement. We found that the presence of roads introduces a bias in the CRNS estimation of field soil moisture compared to non-road scenarios. However, this effect becomes insignificant at distances beyond a few meters from the road. Measurements from the road could overestimate the field value by up to 40 % depending on road material, width, and the surrounding field water content. The bias could be successfully removed with an analytical correction function that accounts for these parameters. Additionally, an empirical approach is proposed that can be used on-the-fly without prior knowledge of field soil moisture. Tests at different study sites demonstrated good agreement between road-effect corrected measurements and field soil moisture observations. However, if knowledge about the road characteristics is missing, any measurements on the road could substantially reduce the accuracy of this method. Our results constitute a practical advancement of the mobile CRNS methodology, which is important for providing unbiased estimates of field-scale soil moisture to support applications in hydrology, remote sensing, and agriculture.
0
1
0
0
0
0
16,497
Invariance in Constrained Switching
We study discrete time linear constrained switching systems with additive disturbances, in which the switching may be on the system matrices, the disturbance sets, the state constraint sets or a combination of the above. In our general setting, a switching sequence is admissible if it is accepted by an automaton. For this family of systems, stability does not necessarily imply the existence of an invariant set. Nevertheless, it does imply the existence of an invariant multi-set, which is a relaxation of invariance and the object of our work. First, we establish basic results concerning the characterization, approximation and computation of the minimal and the maximal admissible invariant multi-set. Second, by exploiting the topological properties of the directed graph which defines the switching constraints, we propose invariant multi-set constructions with several benefits. We illustrate our results in benchmark problems in control.
1
0
1
0
0
0
16,498
On the choice of the low-dimensional domain for global optimization via random embeddings
The challenge of taking many variables into account in optimization problems may be overcome under the hypothesis of low effective dimensionality. Then, the search of solutions can be reduced to the random embedding of a low dimensional space into the original one, resulting in a more manageable optimization problem. Specifically, in the case of time consuming black-box functions and when the budget of evaluations is severely limited, global optimization with random embeddings appears as a sound alternative to random search. Yet, in the case of box constraints on the native variables, defining suitable bounds on a low dimensional domain appears to be complex. Indeed, a small search domain does not guarantee to find a solution even under restrictive hypotheses about the function, while a larger one may slow down convergence dramatically. Here we tackle the issue of low-dimensional domain selection based on a detailed study of the properties of the random embedding, giving insight on the aforementioned difficulties. In particular, we describe a minimal low-dimensional set in correspondence with the embedded search space. We additionally show that an alternative equivalent embedding procedure yields simultaneously a simpler definition of the low-dimensional minimal set and better properties in practice. Finally, the performance and robustness gains of the proposed enhancements for Bayesian optimization are illustrated on numerical examples.
0
0
1
1
0
0
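A bare-bones sketch of global optimization via a random embedding under the abstract's low-effective-dimensionality hypothesis. Plain random search stands in for the Bayesian optimizer, and the $[-\sqrt{d},\sqrt{d}]^d$ low-dimensional box is one common heuristic choice, precisely the kind of domain-selection question the paper studies; none of this is the authors' code.

```python
import numpy as np

def random_embedding_search(f, D, d, n_iter=500, box=1.0, seed=0):
    """Minimize f on [-box, box]^D by searching a random d-dim embedding."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(D, d))                      # random embedding matrix
    best_y, best_val = None, np.inf
    for _ in range(n_iter):
        y = rng.uniform(-np.sqrt(d), np.sqrt(d), size=d)
        x = np.clip(A @ y, -box, box)                # back to native box constraints
        val = f(x)
        if val < best_val:
            best_y, best_val = y, val
    return best_y, best_val

# Toy objective: only 2 of 20 native variables matter.
f = lambda x: (x[3] - 0.3) ** 2 + (x[11] + 0.2) ** 2
print(random_embedding_search(f, D=20, d=2)[1])      # close to 0
```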
16,499
Distributed resource allocation through utility design - Part II: applications to submodular, supermodular and set covering problems
A fundamental component of the game theoretic approach to distributed control is the design of local utility functions. In Part I of this work we showed how to systematically design local utilities so as to maximize the induced worst case performance. The purpose of the present manuscript is to specialize the general results obtained in Part I to a class of monotone submodular, supermodular and set covering problems. In the case of set covering problems, we show how any distributed algorithm capable of computing a Nash equilibrium inherits a performance certificate matching the well known 1-1/e approximation of Nemhauser. Relative to the class of submodular maximization problems considered here, we show how the performance offered by the game theoretic approach improves on existing approximation algorithms. We briefly discuss the algorithmic complexity of computing (pure) Nash equilibria and show how our approach generalizes and subsumes previously fragmented results in the area of optimal utility design. Two applications and corresponding numerics are presented: the vehicle target assignment problem and a coverage problem arising in distributed caching for wireless networks.
1
0
0
0
0
0
16,500
Quasiparticle band structure engineering in van der Waals heterostructures via dielectric screening
The idea of combining different two-dimensional (2D) crystals in van der Waals heterostructures (vdWHs) has led to a new paradigm for band structure engineering with atomic precision. Due to the weak interlayer couplings, the band structures of the individual 2D crystals are largely preserved upon formation of the heterostructure. However, regardless of the details of the interlayer hybridisation, the size of the 2D crystal band gaps are always reduced due to the enhanced dielectric screening provided by the surrounding layers. The effect can be on the order of electron volts, but its precise magnitude is non-trivial to predict because of the non-local nature of the screening in quasi-2D materials, and it is not captured by effective single-particle methods such as density functional theory. Here we present an efficient and general method for calculating the band gap renormalization of a 2D material embedded in an arbitrary vdWH. The method evaluates the change in the GW self-energy of the 2D material from the change in the screened Coulomb interaction. The latter is obtained using the quantum-electrostatic heterostructure (QEH) model. We benchmark the G$\Delta$W method against full first-principles GW calculations and use it to unravel the importance of screening-induced band structure renormalisation in various vdWHs. A main result is the observation that the size of the band gap reduction of a given 2D material when inserted into a heterostructure scales inversely with the polarisability of the 2D material. Our work demonstrates that dielectric engineering \emph{via} van der Waals heterostructuring represents a promising strategy for tailoring the band structure of 2D materials.
0
1
0
0
0
0