Dataset schema: title (string, lengths 7–239), abstract (string, lengths 7–2.76k), cs (int64, 0/1), phy (int64, 0/1), math (int64, 0/1), stat (int64, 0/1), quantitative biology (int64, 0/1), quantitative finance (int64, 0/1).
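The records below can be read into a tabular structure, with the six binary subject flags collapsed into a multi-label list per paper. A minimal sketch with pandas, using two of the records below as in-memory rows (the dataset's hosting location is not stated here, so nothing is downloaded; abstracts are truncated for brevity):

```python
import pandas as pd

# Columns mirror the schema above; the two example rows are copied from the
# records listed below (asteroid photometry -> phy, cryptocurrency returns -> q-fin).
columns = ["title", "abstract", "cs", "phy", "math", "stat",
           "quantitative biology", "quantitative finance"]
rows = [
    ("Characterizing spectral continuity in SDSS u'g'r'i'z' asteroid photometry",
     "Context. The 4th release of the SDSS Moving Object Catalog ...",
     0, 1, 0, 0, 0, 0),
    ("Multivariate stable distributions and their applications for modelling "
     "cryptocurrency-returns",
     "In this paper we extend the known methodology for fitting stable distributions ...",
     0, 0, 0, 0, 0, 1),
]
df = pd.DataFrame(rows, columns=columns)

# Each record is multi-labelled: collect the subject flags that are set to 1.
label_cols = columns[2:]
df["labels"] = df[label_cols].apply(
    lambda r: [c for c in label_cols if r[c] == 1], axis=1)
print(df["labels"].tolist())  # → [['phy'], ['quantitative finance']]
```

Note that several records below carry more than one positive flag (e.g. cs=1 and stat=1), so the task is multi-label rather than single-class.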
Characterizing spectral continuity in SDSS u'g'r'i'z' asteroid photometry
Context. The 4th release of the SDSS Moving Object Catalog (SDSSMOC) is presently the largest photometric dataset of asteroids. Up to this point, the release of large asteroid datasets has always been followed by a redefinition of asteroid taxonomy. In the years that followed the release of the first SDSSMOC, several classification schemes using its data were proposed, all using the taxonomic classes from previous taxonomies. However, no successful attempt has been made to derive a new taxonomic system directly from the SDSS dataset. Aims. The scope of this work is to propose a different interpretation scheme for gauging u'g'r'i'z' asteroid observations based on the continuity of spectral features. The scheme is integrated with previous taxonomic labels, but does not depend on them. Methods. We analyzed the behavior of asteroid sampling through principal component analysis to understand the role of uncertainties in the SDSSMOC. We identified that asteroids in this space follow two separate linear trends using reflectances in the visible, which is characteristic of their spectrophotometric features. Results. Introducing taxonomic classes, we are able to interpret both trends as representative of featured and featureless spectra. The evolution within a trend is connected mainly to the band depth for featured asteroids and to the spectral slope for featureless ones. We defined a different taxonomic system that allows us to classify asteroids by only two labels. Conclusions. We have classified 69% of the whole SDSSMOC sample, a robustness higher than that reached by previous SDSS classifications. Furthermore, as an example, we present the behavior of asteroid (5129) Groom, whose taxonomic labeling changes along one of the trends owing to phase reddening. Such behavior can now be characterized by the variation of a single parameter, its position along the trend.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Transverse-spin correlations of the random transverse-field Ising model
The critical behavior of the random transverse-field Ising model in finite dimensional lattices is governed by infinite disorder fixed points, several properties of which have already been calculated by the use of the strong disorder renormalization group (SDRG) method. Here we extend these studies and calculate the connected transverse-spin correlation function by a numerical implementation of the SDRG method in $d=1,2$ and $3$ dimensions. At the critical point an algebraic decay of the form $\sim r^{-\eta_t}$ is found, with a decay exponent being approximately $\eta_t \approx 2+2d$. In $d=1$ the results are related to dimer-dimer correlations in the random AF XX-chain and have been tested by numerical calculations using free-fermionic techniques.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Multivariate stable distributions and their applications for modelling cryptocurrency-returns
In this paper we extend the known methodology for fitting stable distributions to the multivariate case and apply the suggested method to the modelling of daily cryptocurrency-return data. The investigated time period is cut into 10 non-overlapping sections, thus the changes can also be observed. We apply bootstrap tests for checking the models and compare our approach to the more traditional extreme-value and copula models.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Beautiful and damned. Combined effect of content quality and social ties on user engagement
User participation in online communities is driven by the intertwinement of the social network structure with the crowd-generated content that flows along its links. These aspects are rarely explored jointly and at scale. By looking at how users generate and access pictures of varying beauty on Flickr, we investigate how the production of quality impacts the dynamics of online social systems. We develop a deep learning computer vision model to score images according to their aesthetic value and we validate its output through crowdsourcing. By applying it to over 15B Flickr photos, we study for the first time how image beauty is distributed over a large-scale social system. Beautiful images are evenly distributed in the network, although only a small core of people get social recognition for them. To study the impact of exposure to quality on user engagement, we set up matching experiments aimed at detecting causality from observational data. Exposure to beauty is double-edged: following people who produce high-quality content increases one's probability of uploading better photos; however, an excessive imbalance between the quality generated by a user and the user's neighbors leads to a decline in engagement. Our analysis has practical implications for improving link recommender systems.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
SySeVR: A Framework for Using Deep Learning to Detect Software Vulnerabilities
The detection of software vulnerabilities (or vulnerabilities for short) is an important problem that has yet to be tackled, as manifested by many vulnerabilities reported on a daily basis. This calls for machine learning methods to automate vulnerability detection. Deep learning is attractive for this purpose because it does not require human experts to manually define features. Despite the tremendous success of deep learning in other domains, its applicability to vulnerability detection is not systematically understood. In order to fill this void, we propose the first systematic framework for using deep learning to detect vulnerabilities. The framework, dubbed Syntax-based, Semantics-based, and Vector Representations (SySeVR), focuses on obtaining program representations that can accommodate syntax and semantic information pertinent to vulnerabilities. Our experiments with 4 software products demonstrate the usefulness of the framework: we detect 15 vulnerabilities that are not reported in the National Vulnerability Database. Among these 15 vulnerabilities, 7 are unknown and have been reported to the vendors, and the other 8 have been "silently" patched by the vendors when releasing newer versions of the products.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Differentially Private Learning of Undirected Graphical Models using Collective Graphical Models
We investigate the problem of learning discrete, undirected graphical models in a differentially private way. We show that the approach of releasing noisy sufficient statistics using the Laplace mechanism achieves a good trade-off between privacy, utility, and practicality. A naive learning algorithm that uses the noisy sufficient statistics "as is" outperforms general-purpose differentially private learning algorithms. However, it has three limitations: it ignores knowledge about the data generating process, rests on uncertain theoretical foundations, and exhibits certain pathologies. We develop a more principled approach that applies the formalism of collective graphical models to perform inference over the true sufficient statistics within an expectation-maximization framework. We show that this learns better models than competing approaches on both synthetic data and on real human mobility data used as a case study.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Phase unwinding, or invariant subspace decompositions of Hardy spaces
We consider orthogonal decompositions of invariant subspaces of Hardy spaces; these relate to the Blaschke-based phase unwinding decompositions. We prove convergence in $L^p$. In particular, we build an explicit multiscale wavelet basis. We also obtain an explicit unwinding decomposition for the singular inner function $\exp(2i\pi/x)$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
SalProp: Salient object proposals via aggregated edge cues
In this paper, we propose a novel object proposal generation scheme by formulating a graph-based salient edge classification framework that utilizes the edge context. In the proposed method, we construct a Bayesian probabilistic edge map to assign a saliency value to the edgelets by exploiting low-level edge features. A Conditional Random Field is then learned to effectively combine these features for edge classification with object/non-object labels. We propose an objectness score for the generated windows by analyzing the salient edge density inside the bounding box. Extensive experiments on the PASCAL VOC 2007 dataset demonstrate that the proposed method gives competitive performance against 10 popular generic object detection techniques while using a smaller number of proposals.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Quantum gap and spin-wave excitations in the Kitaev model on a triangular lattice
We study the effects of quantum fluctuations on the dynamical generation of a gap and on the evolution of the spin-wave spectra of a frustrated magnet on a triangular lattice with bond-dependent Ising couplings, analog of the Kitaev honeycomb model. The quantum fluctuations lift the subextensive degeneracy of the classical ground-state manifold by a quantum order-by-disorder mechanism. Nearest-neighbor chains remain decoupled and the surviving discrete degeneracy of the ground state is protected by a hidden model symmetry. We show how the four-spin interaction, emergent from the fluctuations, generates a spin gap shifting the nodal lines of the linear spin-wave spectrum to finite energies.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Sparse Gaussian ICA
Independent component analysis (ICA) is a cornerstone of modern data analysis. Its goal is to recover a latent random vector S with independent components from samples of X=AS where A is an unknown mixing matrix. Critically, all existing methods for ICA rely on and exploit strongly the assumption that S is not Gaussian as otherwise A becomes unidentifiable. In this paper, we show that in fact one can handle the case of Gaussian components by imposing structure on the matrix A. Specifically, we assume that A is sparse and generic in the sense that it is generated from a sparse Bernoulli-Gaussian ensemble. Under this condition, we give an efficient algorithm to recover the columns of A given only the covariance matrix of X as input even when S has several Gaussian components.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Supervisor Synthesis of POMDP based on Automata Learning
As a general and thus popular model for autonomous systems, the partially observable Markov decision process (POMDP) can capture uncertainties from different sources like sensing noises, actuation errors, and uncertain environments. However, its comprehensiveness makes planning and control in POMDPs difficult. Traditional POMDP planning problems aim to find the optimal policy that maximizes the expectation of accumulated rewards. But for safety-critical applications, guarantees of system performance described by formal specifications are desired, which motivates us to consider formal methods to synthesize supervisors for POMDPs. With system specifications given in Probabilistic Computation Tree Logic (PCTL), we propose a supervisory control framework with a type of deterministic finite automaton (DFA), the za-DFA, as the controller form. While the existing work mainly relies on optimization techniques to learn fixed-size finite state controllers (FSCs), we develop an $L^*$ learning based algorithm to determine both the state space and the transitions of the za-DFA. Membership queries and different oracles for conjectures are defined. The learning algorithm is sound and complete. An example is given in detailed steps to illustrate the supervisor synthesis algorithm.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Pseudo Knockoff Filter for Correlated Features
In 2015, Barber and Candes introduced a new variable selection procedure called the knockoff filter to control the false discovery rate (FDR) and prove that this method achieves exact FDR control. Inspired by the work of Barber and Candes (2015), we propose and analyze a pseudo-knockoff filter that inherits some advantages of the original knockoff filter and has more flexibility in constructing its knockoff matrix. Although we have not been able to obtain exact FDR control of the pseudo knockoff filter, we show that it satisfies an expectation inequality that offers some insight into FDR control. Moreover, we provide some partial analysis of the pseudo knockoff filter for the half Lasso and the least squares statistics. Our analysis indicates that the inverse of the covariance matrix of the feature matrix plays an important role in designing and analyzing the pseudo knockoff filter. Our preliminary numerical experiments show that the pseudo knockoff filter with the half Lasso statistic has FDR control. Moreover, our numerical experiments show that the pseudo-knockoff filter could offer more power than the original knockoff filter with the OMP or Lasso Path statistic when the features are correlated and non-sparse.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
$\mu$-constant monodromy groups and Torelli results for the quadrangle singularities and the bimodal series
This paper is a sequel to [He11] and [GH17]. In [He11] a notion of marking of isolated hypersurface singularities was defined, and a moduli space $M_\mu^{mar}$ for marked singularities in one $\mu$-homotopy class of isolated hypersurface singularities was established. It is an analogue of a Teichmüller space. It comes together with a $\mu$-constant monodromy group $G^{mar}\subset G_{\mathbb{Z}}$. Here $G_{\mathbb{Z}}$ is the group of automorphisms of a Milnor lattice which respect the Seifert form. It was conjectured that $M_\mu^{mar}$ is connected. This is equivalent to $G^{mar}= G_{\mathbb{Z}}$. Also Torelli type conjectures were formulated. In [He11] and [GH17] $M_\mu^{mar}, G_{\mathbb{Z}}$ and $G^{mar}$ were determined and all conjectures were proved for the simple, the unimodal and the exceptional bimodal singularities. In this paper the quadrangle singularities and the bimodal series are treated. The Torelli type conjectures are true. But the conjecture $G^{mar}= G_{\mathbb{Z}}$ and $M_\mu^{mar}$ connected does not hold for certain subseries of the bimodal series.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
SoaAlloc: Accelerating Single-Method Multiple-Objects Applications on GPUs
We propose SoaAlloc, a dynamic object allocator for Single-Method Multiple-Objects applications in CUDA. SoaAlloc is the first allocator for GPUs that (a) arranges allocations in a SIMD-friendly Structure of Arrays (SOA) data layout, (b) provides a do-all operation for maximizing the benefit of SOA, and (c) is on par with state-of-the-art memory allocators for raw (de)allocation time. Our benchmarks show that the SOA layout leads to significantly better memory bandwidth utilization, resulting in a 2x speedup of application code.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Probabilistic risk bounds for the characterization of radiological contamination
The radiological characterization of contaminated elements (walls, grounds, objects) from nuclear facilities often suffers from a too small number of measurements. In order to determine risk prediction bounds on the level of contamination, some classic statistical methods may then prove unsuited, as they rely upon strong assumptions (e.g., that the underlying distribution is Gaussian) which cannot be checked. Assuming that a set of measurements or their average value arises from a Gaussian distribution can sometimes lead to erroneous, possibly underconservative, conclusions. This paper presents several alternative statistical approaches which are based on much weaker hypotheses than Gaussianity. They result from general probabilistic inequalities and order-statistics-based formulas. Given a data sample, these inequalities make it possible to derive prediction intervals for a random variable, which can be directly interpreted as probabilistic risk bounds. For the sake of validation, they are first applied to synthetic data samples generated from several known theoretical distributions. Secondly, the proposed methods are applied to two data sets obtained from real radiological contamination measurements.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Automatic Backward Differentiation for American Monte-Carlo Algorithms (Conditional Expectation)
In this note we derive the backward (automatic) differentiation (adjoint [automatic] differentiation) for an algorithm containing a conditional expectation operator. As an example we consider the backward algorithm as it is used in Bermudan product valuation, but the method is applicable in full generality. The method relies on three simple properties: 1) a forward or backward (automatic) differentiation of an algorithm containing a conditional expectation operator results in a linear combination of the conditional expectation operators; 2) the differential of an expectation is the expectation of the differential, $\frac{d}{dx} E(Y) = E(\frac{d}{dx}Y)$; 3) if we are only interested in the expectation of the final result (as we are in all valuation problems), we may use $E(A \cdot E(B\vert\mathcal{F})) = E(E(A\vert\mathcal{F}) \cdot B)$, i.e., instead of applying the (conditional) expectation operator to a function of the underlying random variable (continuation values), it may be applied to the adjoint differential. The methodology not only allows for a very clean and simple implementation, but also offers the ability to use different conditional expectation estimators in the valuation and the differentiation.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Belgian repository of fundamental atomic data and stellar spectra (BRASS). I. Cross-matching atomic databases of astrophysical interest
Fundamental atomic parameters, such as oscillator strengths, play a key role in modelling and understanding the chemical composition of stars in the universe. Despite the significant work underway to produce these parameters for many astrophysically important ions, uncertainties in these parameters remain large and can propagate throughout the entire field of astronomy. The Belgian repository of fundamental atomic data and stellar spectra (BRASS) aims to provide the largest systematic and homogeneous quality assessment of atomic data to date in terms of wavelength, atomic and stellar parameter coverage. To prepare for it, we first compiled multiple literature occurrences of many individual atomic transitions, from several atomic databases of astrophysical interest, and assessed their agreement. Several atomic repositories were searched and their data retrieved and formatted in a consistent manner. Data entries from all repositories were cross-matched against our initial BRASS atomic line list to find multiple occurrences of the same transition. Where possible we used a non-parametric cross-match depending only on electronic configurations and total angular momentum values. We also checked for duplicate entries of the same physical transition, within each retrieved repository, using the non-parametric cross-match. We report the cross-matched transitions for each repository and compare their fundamental atomic parameters. We find differences in log(gf) values of up to 2 dex or more. We also find and report that ~2% of our line list and Vienna Atomic Line Database retrievals are composed of duplicate transitions. Finally we provide a number of examples of atomic spectral lines with different log(gf) values, and discuss the impact of these uncertain log(gf) values on quantitative spectroscopy. All cross-matched atomic data and duplicate transitions are available to download at brass.sdf.org.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
High-Precision Calculations in Strongly Coupled Quantum Field Theory with Next-to-Leading-Order Renormalized Hamiltonian Truncation
Hamiltonian Truncation (a.k.a. Truncated Spectrum Approach) is an efficient numerical technique to solve strongly coupled QFTs in d=2 spacetime dimensions. Further theoretical developments are needed to increase its accuracy and the range of applicability. With this goal in mind, here we present a new variant of Hamiltonian Truncation which exhibits smaller dependence on the UV cutoff than other existing implementations, and yields more accurate spectra. The key idea for achieving this consists in integrating out exactly a certain class of high energy states, which corresponds to performing renormalization at the cubic order in the interaction strength. We test the new method on the strongly coupled two-dimensional quartic scalar theory. Our work will also be useful for the future goal of extending Hamiltonian Truncation to higher dimensions d >= 3.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
State Distribution-aware Sampling for Deep Q-learning
A critical and challenging problem in reinforcement learning is how to learn the state-action value function from the experience replay buffer and simultaneously keep sample efficiency and faster convergence to a high quality solution. In prior works, transitions are uniformly sampled at random from the replay buffer or sampled based on their priority measured by temporal-difference (TD) error. However, these approaches do not fully take into consideration the intrinsic characteristics of transition distribution in the state space and could result in redundant and unnecessary TD updates, slowing down the convergence of the learning procedure. To overcome this problem, we propose a novel state distribution-aware sampling method to balance the replay times for transitions with skew distribution, which takes into account both the occurrence frequencies of transitions and the uncertainty of state-action values. Consequently, our approach could reduce the unnecessary TD updates and increase the TD updates for state-action value with more uncertainty, making the experience replay more effective and efficient. Extensive experiments are conducted on both classic control tasks and Atari 2600 games based on OpenAI gym platform and the experimental results demonstrate the effectiveness of our approach in comparison with the standard DQN approach.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Energy Trading between Microgrids: Individual Cost Minimization and Social Welfare Maximization
High penetration of renewable energy sources makes microgrids (MGs) environmentally friendly. However, the stochastic input from renewable energy resources makes it difficult to balance energy supply and demand. Purchasing extra energy from the macrogrid to cover energy shortages increases the MG energy cost. To mitigate the intermittent nature of renewable energy, energy trading and energy storage, which can exploit the diversity of renewable energy generation across space and time, are efficient and cost-effective methods. But the current energy storage control action impacts future control actions, which poses a challenge to energy management. In addition, because MGs participate in energy trading as prosumers, an efficient trading mechanism is called for. Therefore, this paper focuses on the problem of MG energy management and trading. The energy trading problem is formulated as a stochastic optimization problem with both individual profit and social welfare maximization. First, a Lyapunov optimization based algorithm is developed to solve the stochastic problem. Second, a double-auction based mechanism is provided to attract truthful MG bidding for buying and selling energy. Through theoretical analysis, we demonstrate that an individual MG can achieve a time-average energy cost close to the offline optimum, with a tradeoff between storage capacity and energy trading cost. Meanwhile, social welfare is also asymptotically maximized under the double auction. Simulation results based on real-world data show the effectiveness of our algorithm.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Learning Kolmogorov Models for Binary Random Variables
We summarize our recent findings, where we proposed a framework for learning a Kolmogorov model, for a collection of binary random variables. More specifically, we derive conditions that link outcomes of specific random variables, and extract valuable relations from the data. We also propose an algorithm for computing the model and show its first-order optimality, despite the combinatorial nature of the learning problem. We apply the proposed algorithm to recommendation systems, although it is applicable to other scenarios. We believe that the work is a significant step toward interpretable machine learning.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Möbius topological superconductivity in UPt$_3$
Intensive studies for more than three decades have elucidated multiple superconducting phases and odd-parity Cooper pairs in a heavy fermion superconductor UPt$_3$. We identify a time-reversal invariant superconducting phase of UPt$_3$ as a recently proposed topological nonsymmorphic superconductivity. Combining the band structure of UPt$_3$, order parameter of $E_{\rm 2u}$ representation allowed by $P6_3/mmc$ space group symmetry, and topological classification by $K$-theory, we demonstrate the nontrivial $Z_2$-invariant of three-dimensional DIII class enriched by glide symmetry. Correspondingly, double Majorana cone surface states appear at the surface Brillouin zone boundary. Furthermore, we show a variety of surface states and clarify the topological protection by crystal symmetry. Majorana arcs corresponding to tunable Weyl points appear in the time-reversal symmetry broken B-phase. Majorana cone protected by mirror Chern number and Majorana flat band by glide-winding number are also revealed.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Sparse Identification and Estimation of High-Dimensional Vector AutoRegressive Moving Averages
The Vector AutoRegressive Moving Average (VARMA) model is fundamental to the study of multivariate time series. However, estimation becomes challenging in even relatively low-dimensional VARMA models. With growing interest in the simultaneous modeling of large numbers of marginal time series, many authors have abandoned the VARMA model in favor of the Vector AutoRegressive (VAR) model, which is seen as a simpler alternative, both in theory and practice, in this high-dimensional context. However, even very simple VARMA models can be very complicated to represent using only VAR modeling. In this paper, we develop a new approach to VARMA identification and propose a two-phase method for estimation. Our identification and estimation strategies are linked in their use of sparsity-inducing convex regularizers, which favor VARMA models that have only a small number of nonzero parameters. We establish sufficient conditions for consistency of sparse infinite-order VAR estimates in high dimensions, a key ingredient for our two-phase sparse VARMA estimation strategy. The proposed framework has good estimation and forecast accuracy under numerous simulation settings. We illustrate the forecast performance of the sparse VARMA models for several application domains, including macro-economic forecasting, demand forecasting, and volatility forecasting. The proposed sparse VARMA estimator gives parsimonious forecast models that lead to important gains in relative forecast accuracy.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Information Retrieval and Criticality in Parity-Time-Symmetric Systems
By investigating information flow between a general parity-time (PT) -symmetric non-Hermitian system and an environment, we find that the complete information retrieval from the environment can be achieved in the PT-unbroken phase, whereas no information can be retrieved in the PT-broken phase. The PT-transition point thus marks the reversible-irreversible criticality of information flow, around which many physical quantities such as the recurrence time and the distinguishability between quantum states exhibit power-law behavior. Moreover, by embedding a PT-symmetric system into a larger Hilbert space so that the entire system obeys unitary dynamics, we reveal that behind the information retrieval lies a hidden entangled partner protected by PT symmetry. Possible experimental situations are also discussed.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A practical guide and software for analysing pairwise comparison experiments
Most popular strategies to capture subjective judgments from humans involve the construction of a unidimensional relative measurement scale, representing order preferences or judgments about a set of objects or conditions. This information is generally captured by means of direct scoring, either in the form of a Likert or cardinal scale, or by comparative judgments in pairs or sets. In this sense, the use of pairwise comparisons is becoming increasingly popular because of the simplicity of this experimental procedure. However, this strategy requires non-trivial data analysis to aggregate the comparison ranks into a quality scale and analyse the results, in order to take full advantage of the collected data. This paper explains the process of translating pairwise comparison data into a measurement scale, discusses the benefits and limitations of such scaling methods, and introduces publicly available software in Matlab. We improve on existing scaling methods by introducing outlier analysis, providing methods for computing confidence intervals and statistical testing, and introducing a prior, which reduces estimation error when the number of observers is low. Most of our examples focus on image quality assessment.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Web-Based Implementation of Travelling Salesperson Problem Using Genetic Algorithm
The world is connected through the Internet. With the abundance of Internet users connected to the Web and the popularity of cloud computing research, the need for Artificial Intelligence (AI) is growing. In this research, a Genetic Algorithm (GA), an AI optimization method based on natural selection and genetic evolution, is utilized. There are many applications of GAs, such as web mining, load balancing, routing, and scheduling or web service selection. Hence, it is a challenging task to discover whether the language technology, mainly server-side and web-based, affects the performance of a GA. The Travelling Salesperson Problem (TSP), an NP-hard problem, is used as the problem domain to be solved by the GA. While many scientists prefer Python for GA implementation, other popular high-level interpreted programming languages, PHP (PHP Hypertext Preprocessor) and Ruby, were benchmarked. Lines of code, file sizes, and performance based on the GA implementation and runtime were found to vary among these programming languages. Based on the results, the use of Ruby for GA implementation is recommended.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Random Spatial Networks: Small Worlds without Clustering, Traveling Waves, and Hop-and-Spread Disease Dynamics
Random network models play a prominent role in modeling, analyzing and understanding complex phenomena on real-life networks. However, a key property of networks is often neglected: many real-world networks exhibit spatial structure, the tendency of a node to select neighbors with a probability depending on physical distance. Here, we introduce a class of random spatial networks (RSNs) which generalizes many existing random network models but adds spatial structure. In these networks, nodes are placed randomly in space and joined in edges with a probability depending on their distance and their individual expected degrees, in a manner that crucially remains analytically tractable. We use this network class to propose a new generalization of small-world networks, where the average shortest path lengths in the graph are small, as in classical Watts-Strogatz small-world networks, but with close spatial proximity of nodes that are neighbors in the network playing the role of large clustering. Small-world effects are demonstrated on these spatial small-world networks without clustering. We are able to derive partial integro-differential equations governing susceptible-infectious-recovered disease spreading through an RSN, and we demonstrate the existence of traveling wave solutions. If the distance kernel governing edge placement decays slower than exponential, the population-scale dynamics are dominated by long-range hops followed by local spread of traveling waves. This provides a theoretical modeling framework for recent observations of how epidemics like Ebola evolve in modern connected societies, with long-range connections seeding new focal points from which the epidemic locally spreads in a wavelike manner.
0
1
0
0
0
0
The evolution of magnetic hot massive stars: Implementation of the quantitative influence of surface magnetic fields in modern models of stellar evolution
Large-scale dipolar surface magnetic fields have been detected in a fraction of OB stars; however, only a few stellar evolution models of massive stars have considered the impact of these fossil fields. We are performing 1D hydrodynamical model calculations taking into account evolutionary consequences of the magnetospheric-wind interactions in a simplified parametric way. Two effects are considered: i) the global mass-loss rates are reduced due to mass-loss quenching, and ii) the surface angular momentum loss is enhanced due to magnetic braking. As a result of the magnetic mass-loss quenching, the mass of magnetic massive stars remains close to their initial masses. Thus magnetic massive stars - even at Galactic metallicity - have the potential to be progenitors of `heavy' stellar mass black holes. Similarly, at Galactic metallicity, the formation of pair instability supernovae is plausible with a magnetic progenitor.
0
1
0
0
0
0
The finite gap method and the analytic description of the exact rogue wave recurrence in the periodic NLS Cauchy problem. 1
The focusing NLS equation is the simplest universal model describing the modulation instability (MI) of quasi-monochromatic waves in weakly nonlinear media, considered the main physical mechanism for the appearance of rogue (anomalous) waves (RWs) in Nature. In this paper we study, using the finite gap method, the NLS Cauchy problem for periodic initial perturbations of the unstable background solution of NLS exciting just one of the unstable modes. We distinguish two cases. In the case in which only the corresponding unstable gap is theoretically open, the solution describes an exact deterministic alternate recurrence of linear and nonlinear stages of MI, and the nonlinear RW stages are described by the 1-breather Akhmediev solution, whose parameters, different at each RW appearance, are always given in terms of the initial data through elementary functions. If the number of unstable modes is >1, this uniform in $t$ dynamics is appreciably affected by perturbations due to numerics and/or real experiments, provoking O(1) corrections to the result. In the second case, in which more than one unstable gap is open, a detailed investigation of all these gaps is necessary to get a uniform in $t$ dynamics, and this study is postponed to a subsequent paper. It is however possible to obtain the elementary description of the first nonlinear stage of MI, given again by the Akhmediev 1-breather solution, and to describe how perturbations due to numerics and/or real experiments can affect this result.
0
1
0
0
0
0
First Discoveries of z>6 Quasars with the DECam Legacy Survey and UKIRT Hemisphere Survey
We present the first discoveries from a survey of $z\gtrsim6$ quasars using imaging data from the DECam Legacy Survey (DECaLS) in the optical, the UKIRT Deep Infrared Sky Survey (UKIDSS) and a preliminary version of the UKIRT Hemisphere Survey (UHS) in the near-IR, and ALLWISE in the mid-IR. DECaLS will image 9000 deg$^2$ of sky down to $z_{\rm AB}\sim23.0$, while UKIDSS and UHS will map the northern sky at $0<DEC<+60^{\circ}$, reaching $J_{\rm VEGA}\sim19.6$ (5-$\sigma$). The combination of these datasets allows us to discover quasars at redshift $z\gtrsim7$ and to conduct a complete census of the faint quasar population at $z\gtrsim6$. In this paper, we report on the selection method of our search, and on the initial discoveries of two new, faint $z\gtrsim6$ quasars and one new $z=6.63$ quasar in our pilot spectroscopic observations. The two new $z\sim6$ quasars are at $z=6.07$ and $z=6.17$ with absolute magnitudes at rest-frame wavelength 1450 \AA\ being $M_{1450}=-25.83$ and $M_{1450}=-25.76$, respectively. These discoveries suggest that we can find quasars close to or fainter than the break magnitude of the Quasar Luminosity Function (QLF) at $z\gtrsim6$. The new $z=6.63$ quasar has an absolute magnitude of $M_{1450}=-25.95$. This demonstrates the potential of using the combined DECaLS and UKIDSS/UHS datasets to find $z\gtrsim7$ quasars. Extrapolating from previous QLF measurements, we predict that these combined datasets will yield $\sim200$ $z\sim6$ quasars to $z_{\rm AB} < 21.5$, $\sim1{,}000$ $z\sim6$ quasars to $z_{\rm AB}<23$, and $\sim 30$ quasars at $z>6.5$ to $J_{\rm VEGA}<19.5$.
0
1
0
0
0
0
On unique continuation for solutions of the Schrödinger equation on trees
We prove that if a solution of the time-dependent Schrödinger equation on a homogeneous tree with bounded potential decays fast at two distinct times then the solution is trivial. For the free Schrödinger operator, we use the spectral theory of the Laplacian and complex analysis and obtain a characterization of the initial conditions that lead to a sharp decay at any time. We then use the recent spectral decomposition of the Schrödinger operator with compactly supported potential due to Colin de Verdière and Truc to extend our results in the presence of such potentials. Finally, we use real variable methods first introduced by Escauriaza, Kenig, Ponce and Vega to establish a general sharp result in the case of bounded potentials.
0
0
1
0
0
0
Particle-hole Asymmetry in the Cuprate Pseudogap Measured with Time-Resolved Spectroscopy
One of the most puzzling features of high-temperature cuprate superconductors is the pseudogap state, which appears above the temperature at which superconductivity is destroyed. There remain fundamental questions regarding its nature and its relation to superconductivity. But to address these questions, we must first determine whether the pseudogap and superconducting states share a common property: particle-hole symmetry. We introduce a new technique to test particle-hole symmetry by using laser pulses to manipulate and measure the chemical potential on picosecond time scales. The results strongly suggest that the asymmetry in the density of states is inverted in the pseudogap state, implying a particle-hole asymmetric gap. Independent of interpretation, these results can test theoretical predictions of the density of states in cuprates.
0
1
0
0
0
0
Numerically modeling Brownian thermal noise in amorphous and crystalline thin coatings
Thermal noise is expected to be one of the noise sources limiting the astrophysical reach of Advanced LIGO (once commissioning is complete) and third-generation detectors. Adopting crystalline materials for thin, reflecting mirror coatings, rather than the amorphous coatings used in current-generation detectors, could potentially reduce thermal noise. Understanding and reducing thermal noise requires accurate theoretical models, but modeling thermal noise analytically is especially challenging with crystalline materials. Thermal noise models typically rely on the fluctuation-dissipation theorem, which relates the power spectral density of the thermal noise to an auxiliary elastic problem. In this paper, we present results from a new, open-source tool that numerically solves the auxiliary elastic problem to compute the Brownian thermal noise for both amorphous and crystalline coatings. We employ open-source frameworks to solve the auxiliary elastic problem using a finite-element method, adaptive mesh refinement, and parallel processing that enables us to use high resolutions capable of resolving the thin reflective coating. We compare with approximate analytic solutions for amorphous materials, and we verify that our solutions scale as expected. Finally, we model the crystalline coating thermal noise in an experiment reported by Cole and collaborators (2013), comparing our results to a simpler numerical calculation that treats the coating as an "effectively amorphous" material. We find that treating the coating as a cubic crystal instead of as an effectively amorphous material increases the thermal noise by about 3%. Our results are a step toward better understanding and reducing thermal noise to increase the reach of future gravitational-wave detectors. (Abstract abbreviated.)
0
1
0
0
0
0
Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks
In an effort to understand the meaning of the intermediate representations captured by deep networks, recent papers have tried to associate specific semantic concepts to individual neural network filter responses, where interesting correlations are often found, largely by focusing on extremal filter responses. In this paper, we show that this approach can favor easy-to-interpret cases that are not necessarily representative of the average behavior of a representation. A more realistic but harder-to-study hypothesis is that semantic representations are distributed, and thus filters must be studied in conjunction. In order to investigate this idea while enabling systematic visualization and quantification of multiple filter responses, we introduce the Net2Vec framework, in which semantic concepts are mapped to vectorial embeddings based on corresponding filter responses. By studying such embeddings, we are able to show that (1) in most cases, multiple filters are required to code for a concept; (2) filters are often not concept specific and help encode multiple concepts; and (3) compared to single filter activations, filter embeddings are able to better characterize the meaning of a representation and its relationship to other concepts.
0
0
0
1
0
0
Can the removal of molecular cloud envelopes by external feedback affect the efficiency of star formation?
We investigate how star formation efficiency can be significantly decreased by the removal of a molecular cloud's envelope by feedback from an external source. Feedback from star formation has difficulties halting the process in dense gas but can easily remove the less dense and warmer envelopes where star formation does not occur. However, the envelopes can play an important role keeping their host clouds bound by deepening the gravitational potential and providing a constraining pressure boundary. We use numerical simulations to show that removal of the cloud envelopes results in all cases in a fall in the star formation efficiency (SFE). At 1.38 free-fall times our 4 pc cloud simulation experienced a drop in the SFE from 16 to 6 per cent, while our 5 pc cloud fell from 27 to 16 per cent. At the same time, our 3 pc cloud (the least bound) fell from an SFE of 5.67 per cent to zero when the envelope was lost. The star formation efficiency per free-fall time varied from zero to $\approx$ 0.25 according to $\alpha$, defined to be the ratio of the kinetic plus thermal to gravitational energy, and irrespective of the absolute star forming mass available. Furthermore, the fall in SFE associated with the loss of the envelope is found to occur even at later times. We conclude that the SFE will always fall should a star forming cloud lose its envelope due to stellar feedback, with less bound clouds suffering the greatest decrease.
0
1
0
0
0
0
Joint Rate and Resource Allocation in Hybrid Digital-Analog Transmission over Fading Channels
In hybrid digital-analog (HDA) systems, resource allocation has been utilized to achieve desired distortion performance. However, existing studies on this issue assume error-free digital transmission, which is not valid for fading channels. With time-varying channel fading, the exact channel state information is not available at the transmitter. Thus, random outage and the resulting digital distortion cannot be ignored. Moreover, rate allocation should be considered in resource allocation, since it not only determines the amount of information for digital transmission and that for analog transmission, but also affects the outage probability. Based on the above observations, in this paper, we attempt to perform joint rate and resource allocation strategies to optimize system distortion in HDA systems over fading channels. We consider a bandwidth expansion scenario where a memoryless Gaussian source is transmitted in an HDA system with the entropy-constrained scalar quantizer (ECSQ). Firstly, we formulate the joint allocation problem as an expected system distortion minimization problem where both analog and digital distortion are considered. Then, in the limit of low outage probability, we decompose the problem into two coupled sub-problems based on the block coordinate descent method, and propose an iterative gradient algorithm to approach the optimal solution. Furthermore, we extend our work to the multivariate Gaussian source scenario, where a two-stage fast algorithm integrating rounding and greedy strategies is proposed to optimize the joint rate and resource allocation problem. Finally, simulation results demonstrate that the proposed algorithms can achieve up to 2.3 dB gains in terms of signal-to-distortion ratio over existing schemes under the single Gaussian source scenario, and up to 3.5 dB gains under the multivariate Gaussian source scenario.
1
0
0
0
0
0
Relaxation to a Phase-locked Equilibrium State in a One-dimensional Bosonic Josephson Junction
We present an experimental study on the non-equilibrium tunnel dynamics of two coupled one-dimensional Bose-Einstein quasi-condensates deep in the Josephson regime. Josephson oscillations are initiated by splitting a single one-dimensional condensate and imprinting a relative phase between the superfluids. Regardless of the initial state and experimental parameters, the dynamics of the relative phase and atom number imbalance shows a relaxation to a phase-locked steady state. The latter is characterized by a high phase coherence and reduced fluctuations with respect to the initial state. We propose an empirical model based on the analogy with the anharmonic oscillator to describe the effect of various experimental parameters. A microscopic theory compatible with our observations is still missing.
0
1
0
0
0
0
A Hybrid Model for Role-related User Classification on Twitter
To aid a variety of research studies, we propose TWIROLE, a hybrid model for role-related user classification on Twitter, which detects male-related, female-related, and brand-related (i.e., organization or institution) users. TWIROLE leverages features from tweet contents, user profiles, and profile images, and then applies our hybrid model to identify a user's role. To evaluate it, we used two existing large datasets about Twitter users, and conducted both intra- and inter-comparison experiments. TWIROLE outperforms existing methods and obtains more balanced results over the several roles. We also confirm that user names and profile images are good indicators for this task. Our research extends prior work that does not consider brand-related users, and is an aid to future evaluation efforts relative to investigations that rely upon self-labeled datasets.
1
0
0
0
0
0
SecureBoost: A Lossless Federated Learning Framework
The protection of user privacy is an important concern in machine learning, as evidenced by the rolling out of the General Data Protection Regulation (GDPR) in the European Union (EU) in May 2018. The GDPR is designed to give users more control over their personal data, which motivates us to explore machine learning frameworks with data sharing without violating user privacy. To meet this goal, in this paper, we propose a novel lossless privacy-preserving tree-boosting system known as SecureBoost in the setting of federated learning. This federated-learning system allows a learning process to be jointly conducted over multiple parties with partially common user samples but different feature sets, which corresponds to a vertically partitioned virtual data set. An advantage of SecureBoost is that it provides the same level of accuracy as the non-privacy-preserving approach while, at the same time, revealing no information about each private data provider. We theoretically prove that the SecureBoost framework is as accurate as other non-federated gradient tree-boosting algorithms that bring the data into one place. In addition, along with a proof of security, we discuss what would be required to make the protocols completely secure.
1
0
0
1
0
0
Selective reflection from Rb layer with thickness below $λ$/12 and applications
We have studied the peculiarities of selective reflection from a Rb vapor cell with thickness $L <$ 70 nm, which is over an order of magnitude smaller than the resonant wavelength for the Rb atomic D$_1$ line, $\lambda$ = 795 nm. A huge ($\approx$ 240 MHz) red shift and spectral broadening of the reflection signal are recorded for $L =$ 40 nm, caused by the atom-surface interaction. Also, a completely frequency-resolved hyperfine Paschen-Back splitting of atomic transitions into four components for $^{87}$Rb and six components for $^{85}$Rb is recorded in a strong magnetic field ($B >$ 2 kG).
0
1
0
0
0
0
The Complexity of Abstract Machines
The lambda-calculus is a peculiar computational model whose definition does not come with a notion of machine. Unsurprisingly, implementations of the lambda-calculus have been studied for decades. Abstract machines are implementation schemas for fixed evaluation strategies that are a compromise between theory and practice: they are concrete enough to provide a notion of machine and abstract enough to avoid the many intricacies of actual implementations. There is an extensive literature about abstract machines for the lambda-calculus, and yet-quite mysteriously-the efficiency of these machines with respect to the strategy that they implement has almost never been studied. This paper provides an unusual introduction to abstract machines, based on the complexity of their overhead with respect to the length of the implemented strategies. It is conceived to be a tutorial, focusing on the case study of implementing the weak head (call-by-name) strategy, and yet it is an original re-elaboration of known results. Moreover, some of the observations contained here have never appeared in print before.
1
0
0
0
0
0
Recent progress in the Zimmer program
This paper can be viewed as a sequel to the author's long survey on the Zimmer program \cite{F11} published in 2011. The sequel focuses on recent rapid progress on certain aspects of the program particularly concerning rigidity of Anosov actions and Zimmer's conjecture that there are no actions in low dimensions. Some emphasis is put on the surprising connections between these two different sets of developments and also on the key connections and ideas for future research that arise from these works taken together.
0
0
1
0
0
0
Restricted Causal Inference Algorithm
This paper proposes a new algorithm for recovery of belief network structure from data in the presence of hidden variables. It consists essentially of an extension of the CI algorithm of Spirtes et al., restricting the number of conditional dependencies checked to at most k variables, and of additional steps transforming the so-called partial including path graph into a belief network. Its correctness is demonstrated.
1
0
0
0
0
0
Fiber plucking by molecular motors yields large emergent contractility in stiff biopolymer networks
The mechanical properties of the cell depend crucially on the tension of its cytoskeleton, a biopolymer network that is put under stress by active motor proteins. While the fibrous nature of the network is known to strongly affect the transmission of these forces to the cellular scale, our understanding of this process remains incomplete. Here we investigate the transmission of forces through the network at the individual filament level, and show that active forces can be geometrically amplified as a transverse motor-generated force "plucks" the fiber and induces a nonlinear tension. In stiff and densely connected networks, this tension results in large network-wide tensile stresses that far exceed the expectation drawn from a linear elastic theory. This amplification mechanism competes with a recently characterized network-level amplification due to fiber buckling, suggesting that fiber networks provide several distinct pathways for living systems to amplify their molecular forces.
0
0
0
0
1
0
Demographics of News Sharing in the U.S. Twittersphere
The widespread adoption and dissemination of online news through social media systems have been revolutionizing many segments of our society and ultimately our daily lives. In these systems, users can play a central role as they share content to their friends. Despite that, little is known about news spreaders in social media. In this paper, we provide the first of its kind in-depth characterization of news spreaders in social media. In particular, we investigate their demographics, what kind of content they share, and the audience they reach. Among our main findings, we show that males and white users tend to be more active in terms of sharing news, biasing the news audience to the interests of these demographic groups. Our results also quantify differences in interests of news sharing across demographics, which has implications for personalized news digests.
1
0
0
0
0
0
Dynamics and fragmentation mechanism of (CH3-C5H4)Pt(CH3)3 on SiO2 Surfaces
The interaction of (CH3-C5H4)Pt(CH3)3 ((methylcyclopentadienyl)trimethylplatinum) molecules with fully and partially hydroxylated SiO2 surfaces, as well as the dynamics of this interaction, were investigated using density functional theory (DFT) and finite temperature DFT-based molecular dynamics simulations. Fully and partially hydroxylated surfaces represent substrates before and after electron beam treatment, and this study examines the role of electron beam pretreatment of the substrates in the initial stages of precursor dissociation and formation of Pt deposits. Our simulations show that on fully hydroxylated (untreated) surfaces the precursor molecules remain inactivated, while we observe fragmentation of (CH3-C5H4)Pt(CH3)3 on partially hydroxylated surfaces. The behavior of precursor molecules on the partially hydroxylated surfaces has been found to depend on the initial orientation of the molecule and the distribution of surface active sites. Based on the observations from the simulations and available experiments, we discuss possible dissociation channels of the precursor.
0
1
0
0
0
0
On the robustness of the H$β$ Lick index as a cosmic clock in passive early-type galaxies
We examine the H$\beta$ Lick index in a sample of $\sim 24000$ massive ($\rm log(M/M_{\odot})>10.75$) and passive early-type galaxies extracted from SDSS at z<0.3, in order to assess the reliability of this index to constrain the epoch of formation and age evolution of these systems. We further investigate the possibility of exploiting this index as a "cosmic chronometer", i.e. to derive the Hubble parameter from its differential evolution with redshift, hence constraining cosmological models independently of other probes. We find that the H$\beta$ strength increases with redshift as expected in passive evolution models, and shows at each redshift weaker values in more massive galaxies. However, a detailed comparison of the observed index with the predictions of stellar population synthesis models highlights a significant tension, with the observed index being systematically lower than expected. By analyzing the stacked spectra, we find a weak [NII]$\lambda6584$ emission line (not detectable in the single spectra) which anti-correlates with the mass, which can be interpreted as a hint of the presence of ionized gas. We estimated the correction of the H$\beta$ index by the residual emission component exploiting different approaches, but find it very uncertain and model-dependent. We conclude that, while the qualitative trends of the observed H$\beta$-z relations are consistent with the expected passive and downsizing scenario, the possible presence of ionized gas even in the most massive and passive galaxies prevents the use of this index for a quantitative estimate of the age evolution and for cosmological applications.
0
1
0
0
0
0
Finite-size effects in a stochastic Kuramoto model
We present a collective coordinate approach to study the collective behaviour of a finite ensemble of $N$ stochastic Kuramoto oscillators using two degrees of freedom; one describing the shape dynamics of the oscillators and one describing their mean phase. Contrary to the thermodynamic limit $N\to\infty$ in which the mean phase of the cluster of globally synchronized oscillators is constant in time, the mean phase of a finite-size cluster experiences Brownian diffusion with a variance proportional to $1/N$. This finite-size effect is quantitatively well captured by our collective coordinate approach.
0
1
0
0
0
0
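The finite-size Kuramoto dynamics described above can be sketched with a plain-Python Euler-Maruyama integrator in the standard mean-field form of the coupling. All parameter values and function names here are illustrative assumptions, not those used in the paper:

```python
import cmath
import math
import random

def simulate_kuramoto(n=20, coupling=4.0, noise=0.1, dt=0.01, steps=2000, seed=1):
    """Euler-Maruyama integration of n stochastic Kuramoto oscillators:
        dtheta_i = (omega_i + (K/n) * sum_j sin(theta_j - theta_i)) dt + sigma dW_i
    using the mean-field identity r*exp(i*psi) = (1/n) * sum_j exp(i*theta_j).
    Returns the final phases and the order parameter r = |mean(exp(i*theta))|.
    """
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 0.2) for _ in range(n)]     # natural frequencies
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    sqdt = math.sqrt(dt)
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n   # complex order parameter
        r, psi = abs(z), cmath.phase(z)
        theta = [t + (omega[i] + coupling * r * math.sin(psi - t)) * dt
                 + noise * sqdt * rng.gauss(0.0, 1.0)   # stochastic increment
                 for i, t in enumerate(theta)]
    z = sum(cmath.exp(1j * t) for t in theta) / n
    return theta, abs(z)
```

Repeating such runs over many seeds and several values of n is the kind of numerical experiment that exposes the $1/N$ scaling of the mean-phase variance discussed in the abstract.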
The normal distribution is freely selfdecomposable
The class of selfdecomposable distributions in free probability theory was introduced by Barndorff-Nielsen and the third named author. It constitutes a fairly large subclass of the freely infinitely divisible distributions, but so far specific examples have been limited to Wigner's semicircle distributions, the free stable distributions, two kinds of free gamma distributions and a few other examples. In this paper, we prove that the (classical) normal distributions are freely selfdecomposable. More generally it is established that the Askey-Wimp-Kerov distribution $\mu_c$ is freely selfdecomposable for any $c$ in $[-1,0]$. The main ingredient in the proof is a general characterization of the freely selfdecomposable distributions in terms of the derivative of their free cumulant transform.
0
0
1
0
0
0
Simplex Queues for Hot-Data Download
In cloud storage systems, hot data is usually replicated over multiple nodes in order to accommodate simultaneous access by multiple users as well as increase the fault tolerance of the system. Recent cloud storage research has proposed using availability codes, a special class of erasure codes, as a more storage efficient way to store hot data. These codes enable data recovery from multiple, small disjoint groups of servers. The number of the recovery groups is referred to as the availability and the size of each group as the locality of the code. Until now, we have had very limited knowledge of how code locality and availability affect data access time. Data download from these systems involves multiple fork-join queues operating in parallel, making the analysis of access time a very challenging problem. In this paper, we present an approximate analysis of data access time in storage systems that employ simplex codes, which are an important and in a certain sense optimal class of availability codes. We consider and compare three strategies for assigning download requests to servers; the first aggressively exploits the storage availability for faster download, the second implements only load balancing, and the last employs storage availability only for hot-data download without incurring any negative impact on cold-data download.
1
0
0
0
0
0
Voting power of political parties in the Senate of Chile during the whole binomial system period: 1990-2017
The binomial system is an electoral system unique in the world. It was used to elect the senators and deputies of Chile during 27 years, from the return of democracy in 1990 until 2017. In this paper we study the real voting power of the different political parties in the Senate of Chile during the whole binomial period. We not only consider the different legislative periods, but also any party changes between one period and the next. The real voting power is measured by considering power indices from cooperative game theory, which are based on the capability of the political parties to form winning coalitions. With this approach, we can do an analysis that goes beyond the simple count of parliamentary seats.
0
0
0
0
0
1
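The abstract's notion of real voting power, measured through winning coalitions, can be illustrated with the normalized Banzhaf index, one standard power index from cooperative game theory. The weighted-voting setup below is a generic toy example, not the Chilean Senate data:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf index: each player's share of all swing votes.

    A player is a 'swing' in a winning coalition if removing that player
    makes the coalition's total weight drop below the quota.
    """
    n = len(weights)
    swings = [0] * n
    for r in range(1, n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total < quota:
                continue                       # losing coalition, no swings
            for i in coalition:
                if total - weights[i] < quota:
                    swings[i] += 1             # player i is critical here
    total_swings = sum(swings)
    return [s / total_swings for s in swings]
```

The example in the test below shows why power indices go "beyond the simple count of parliamentary seats": with weights 50, 49, 1 and quota 51, the two small players have identical power despite very different seat counts.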
A uniform stability principle for dual lattices
We prove a highly uniform stability or "almost-near" theorem for dual lattices of lattices $L \subseteq \Bbb R^n$. More precisely, we show that, for a vector $x$ from the linear span of a lattice $L \subseteq \Bbb R^n$, subject to $\lambda_1(L) \ge \lambda > 0$, to be $\varepsilon$-close to some vector from the dual lattice $L'$ of $L$, it is enough that the inner products $u\,x$ are $\delta$-close (with $\delta < 1/3$) to some integers for all vectors $u \in L$ satisfying $\| u \| \le r$, where $r > 0$ depends on $n$, $\lambda$, $\delta$ and $\varepsilon$, only. This generalizes an earlier analogous result proved for integral vector lattices by M. Mačaj and the second author. The proof is nonconstructive, using the ultraproduct construction and a slight portion of nonstandard analysis.
0
0
1
0
0
0
pandapower - an Open Source Python Tool for Convenient Modeling, Analysis and Optimization of Electric Power Systems
pandapower is a Python-based, BSD-licensed power system analysis tool aimed at automation of static and quasi-static analysis and optimization of balanced power systems. It provides power flow, optimal power flow, state estimation, topological graph searches and short circuit calculations according to IEC 60909. pandapower includes a Newton-Raphson power flow solver formerly based on PYPOWER, which has been accelerated with just-in-time compilation. Additional enhancements to the solver include the capability to model constant current loads, grids with multiple reference nodes and a connectivity check. The pandapower network model is based on electric elements, such as lines, two and three-winding transformers or ideal switches. All elements can be defined with nameplate parameters and are internally processed with equivalent circuit models, which have been validated against industry standard software tools. The tabular data structure used to define networks is based on the Python library pandas, which allows comfortable handling of input and output parameters. The implementation in Python makes pandapower easy to use and allows comfortable extension with third-party libraries. pandapower has been successfully applied in several grid studies as well as for educational purposes. A comprehensive, publicly available case study demonstrates a possible application of pandapower in an automated time series calculation.
1
0
0
0
0
0
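pandapower's own solvers are not reproduced here; as a hedged illustration of the kind of computation a power-flow routine performs, the following is a toy linearized DC power flow (solving B' theta = P with the slack-bus angle fixed at zero) using plain Gaussian elimination. All names and the per-unit example are assumptions for illustration:

```python
def solve_linear(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def dc_power_flow(n_bus, lines, injections, slack=0):
    """Toy DC power flow.  `lines` holds (from_bus, to_bus, susceptance),
    `injections` the net power at each bus (generation minus load, p.u.).
    Returns bus angles and per-line flows; the slack bus absorbs the mismatch."""
    B = [[0.0] * n_bus for _ in range(n_bus)]
    for f, t, b in lines:                      # assemble the bus susceptance matrix
        B[f][f] += b; B[t][t] += b
        B[f][t] -= b; B[t][f] -= b
    keep = [i for i in range(n_bus) if i != slack]
    a = [[B[r][c] for c in keep] for r in keep]
    p = [injections[i] for i in keep]
    sol = solve_linear(a, p)
    theta = [0.0] * n_bus
    for i, bus in enumerate(keep):
        theta[bus] = sol[i]
    flows = [b * (theta[f] - theta[t]) for f, t, b in lines]
    return theta, flows
```

A full AC Newton-Raphson solver, as in pandapower, iterates a similar linear solve on the Jacobian of the nonlinear power balance equations; the DC version is the one-step linear approximation of that process.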
Temporal oscillations of light transmission through dielectric microparticles subjected to optically induced motion
We consider light-induced binding and motion of dielectric microparticles in an optical waveguide that gives rise to a back-action effect such as light transmission oscillating with time. Modeling the particles by dielectric slabs allows us to solve the problem analytically and obtain a rich variety of dynamical regimes both for Newtonian and damped motion. This variety is clearly reflected in temporal oscillations of the light transmission. The characteristic frequencies of the oscillations are within the ultrasound range, of the order of $10^{5}$ Hz, for micron size particles and injected power of the order of 100 mW. In addition, we consider the dynamics of a dielectric particle inside a Fabry-Perot resonator driven by propagating light. These phenomena pave the way for optical driving and monitoring of motion of particles in waveguides and resonators.
0
1
0
0
0
0
Berry-Esséen bounds for parameter estimation of general Gaussian processes
We study rates of convergence in central limit theorems for the partial sum of squares of general Gaussian sequences, using tools from analysis on Wiener space. No assumption of stationarity, asymptotically or otherwise, is made. The main theoretical tool is the so-called Optimal Fourth Moment Theorem \cite{NP2015}, which provides a sharp quantitative estimate of the total variation distance on Wiener chaos to the normal law. The only assumptions made on the sequence are the existence of an asymptotic variance, that a least-squares-type estimator for this variance parameter has a bias and a variance which can be controlled, and that the sequence's auto-correlation function, which may exhibit long memory, has memory no worse than that of fractional Brownian motion with Hurst parameter $H<3/4$. Our main result is explicit, exhibiting the trade-off between bias, variance, and memory. We apply our result to study drift parameter estimation problems for subfractional Ornstein-Uhlenbeck and bifractional Ornstein-Uhlenbeck processes with fixed-time-step observations. These are processes which fail to be stationary or self-similar, but for which detailed calculations result in explicit formulas for the estimators' asymptotic normality.
0
0
1
1
0
0
Anomaly Detection via Minimum Likelihood Generative Adversarial Networks
Anomaly detection aims to detect abnormal events using a model of normality. It plays an important role in many domains such as network intrusion detection and criminal activity identification. With the rapidly growing size of accessible training data and high computational capacity, deep-learning-based anomaly detection has become more and more popular. In this paper, a new domain-based anomaly detection method based on generative adversarial networks (GAN) is proposed. Minimum likelihood regularization is proposed to make the generator produce more anomalies and prevent it from converging to the normal data distribution. A proper ensemble of anomaly scores is shown to improve the stability of the discriminator effectively. The proposed method achieves significant improvements over other anomaly detection methods on the Cifar10 and UCI datasets.
0
0
0
1
0
0
Supercurrent as a Probe for Topological Superconductivity in Magnetic Adatom Chains
A magnetic adatom chain, proximity coupled to a conventional superconductor with spin-orbit coupling, exhibits locally an odd-parity, spin-triplet pairing amplitude. We show that the singlet-triplet junction, thus formed, leads to a net spin accumulation in the near vicinity of the chain. The accumulated spins are polarized along the direction of the local $\mathbf{d}$-vector for triplet pairing and generate an enhanced persistent current flowing around the chain. The spin polarization and the "supercurrent" reverse their directions beyond a critical exchange coupling strength at which the singlet superconducting order changes its sign on the chain. The current is strongly enhanced in the topological superconducting regime where Majorana bound states appear at the chain ends. The current and the spin profile offer alternative routes to characterize the topological superconducting state in adatom chains and islands.
0
1
0
0
0
0
Experimental statistics of veering triangulations
Certain fibered hyperbolic 3-manifolds admit a \emph{layered veering triangulation}, which can be constructed algorithmically given the stable lamination of the monodromy. These triangulations were introduced by Agol in 2011, and have been further studied by several others in the years since. We obtain experimental results which shed light on the combinatorial structure of veering triangulations, and its relation to certain topological invariants of the underlying manifold.
0
0
1
0
0
0
Concept Drift and Anomaly Detection in Graph Streams
Graph representations offer powerful and intuitive ways to describe data in a multitude of application domains. Here, we consider stochastic processes generating graphs and propose a methodology for detecting changes in stationarity of such processes. The methodology is general and considers a process generating attributed graphs with a variable number of vertices/edges, without the need to assume one-to-one correspondence between vertices at different time steps. The methodology acts by embedding every graph of the stream into a vector domain, where a conventional multivariate change detection procedure can be easily applied. We ground the soundness of our proposal by proving several theoretical results. In addition, we provide a specific implementation of the methodology and evaluate its effectiveness on several detection problems involving attributed graphs representing biological molecules and drawings. Experimental results are contrasted with respect to suitable baseline methods, demonstrating the effectiveness of our approach.
1
0
0
1
0
0
Lat-Net: Compressing Lattice Boltzmann Flow Simulations using Deep Neural Networks
Computational Fluid Dynamics (CFD) is a hugely important subject with applications in almost every engineering field, however, fluid simulations are extremely computationally and memory demanding. Towards this end, we present Lat-Net, a method for compressing both the computation time and memory usage of Lattice Boltzmann flow simulations using deep neural networks. Lat-Net employs convolutional autoencoders and residual connections in a fully differentiable scheme to compress the state size of a simulation and learn the dynamics on this compressed form. The result is a computationally and memory efficient neural network that can be iterated and queried to reproduce a fluid simulation. We show that once Lat-Net is trained, it can generalize to large grid sizes and complex geometries while maintaining accuracy. We also show that Lat-Net is a general method for compressing other Lattice Boltzmann based simulations such as Electromagnetism.
0
1
0
1
0
0
Second-Order Optimization for Non-Convex Machine Learning: An Empirical Study
While first-order optimization methods such as stochastic gradient descent (SGD) are popular in machine learning (ML), they come with well-known deficiencies, including relatively-slow convergence, sensitivity to the settings of hyper-parameters such as learning rate, stagnation at high training errors, and difficulty in escaping flat regions and saddle points. These issues are particularly acute in highly non-convex settings such as those arising in neural networks. Motivated by this, there has been recent interest in second-order methods that aim to alleviate these shortcomings by capturing curvature information. In this paper, we report detailed empirical evaluations of a class of Newton-type methods, namely sub-sampled variants of trust region (TR) and adaptive regularization with cubics (ARC) algorithms, for non-convex ML problems. In doing so, we demonstrate that these methods not only can be computationally competitive with hand-tuned SGD with momentum, obtaining comparable or better generalization performance, but also they are highly robust to hyper-parameter settings. Further, in contrast to SGD with momentum, we show that the manner in which these Newton-type methods employ curvature information allows them to seamlessly escape flat regions and saddle points.
1
0
0
1
0
0
Computational Approaches for Stochastic Shortest Path on Succinct MDPs
We consider the stochastic shortest path (SSP) problem for succinct Markov decision processes (MDPs), where the MDP consists of a set of variables, and a set of nondeterministic rules that update the variables. First, we show that several examples from the AI literature can be modeled as succinct MDPs. Then we present computational approaches for upper and lower bounds for the SSP problem: (a)~for computing upper bounds, our method is polynomial-time in the implicit description of the MDP; (b)~for lower bounds, we present a polynomial-time (in the size of the implicit description) reduction to quadratic programming. Our approach is applicable even to infinite-state MDPs. Finally, we present experimental results to demonstrate the effectiveness of our approach on several classical examples from the AI literature.
1
0
0
0
0
0
One-dimensional in-plane edge domain walls in ultrathin ferromagnetic films
We study existence and properties of one-dimensional edge domain walls in ultrathin ferromagnetic films with uniaxial in-plane magnetic anisotropy. In these materials, the magnetization vector is constrained to lie entirely in the film plane, with the preferred directions dictated by the magnetocrystalline easy axis. We consider magnetization profiles in the vicinity of a straight film edge oriented at an arbitrary angle with respect to the easy axis. To minimize the micromagnetic energy, these profiles form transition layers in which the magnetization vector rotates away from the direction of the easy axis to align with the film edge. We prove existence of edge domain walls as minimizers of the appropriate one-dimensional micromagnetic energy functional and show that they are classical solutions of the associated Euler-Lagrange equation with Dirichlet boundary condition at the edge. We also perform a numerical study of these one-dimensional domain walls and uncover further properties of these domain wall profiles.
0
1
1
0
0
0
Nontrivial Turmites are Turing-universal
A Turmite is a Turing machine that works over a two-dimensional grid, that is, an agent that moves, and reads and writes symbols over the cells of the grid. Its state is an arrow and, depending on the symbol that it reads, it turns to the left or to the right, switching the symbol at the same time. Several symbols are admitted, and the rule is specified by the turning sense that the machine has over each symbol. Turmites are a generalization of Langton's ant, and they present very complex and diverse behaviors. We prove that any Turmite, except for those whose rule does not depend on the symbol, can simulate any Turing machine. We also prove the P-completeness of predicting their future behavior by explicitly giving a log-space reduction from the Topological Circuit Value Problem. A similar result was already established for Langton's ant; here we use a similar technique but prove a stronger notion of simulation, and for a more general family.
1
1
0
0
0
0
Dandelion: Redesigning the Bitcoin Network for Anonymity
Bitcoin and other cryptocurrencies have surged in popularity over the last decade. Although Bitcoin does not claim to provide anonymity for its users, it enjoys a public perception of being a `privacy-preserving' financial system. In reality, cryptocurrencies publish users' entire transaction histories in plaintext, albeit under a pseudonym; this is required for transaction validation. Therefore, if a user's pseudonym can be linked to their human identity, the privacy fallout can be significant. Recently, researchers have demonstrated deanonymization attacks that exploit weaknesses in the Bitcoin network's peer-to-peer (P2P) networking protocols. In particular, the P2P network currently forwards content in a structured way that allows observers to deanonymize users. In this work, we redesign the P2P network from first principles with the goal of providing strong, provable anonymity guarantees. We propose a simple networking policy called Dandelion, which achieves nearly-optimal anonymity guarantees at minimal cost to the network's utility. We also provide a practical implementation of Dandelion.
1
0
0
0
0
0
A characterisation of Lie algebras amongst anti-commutative algebras
Let $\mathbb{K}$ be an infinite field. We prove that if a variety of anti-commutative $\mathbb{K}$-algebras - not necessarily associative, where $xx=0$ is an identity - is locally algebraically cartesian closed, then it must be a variety of Lie algebras over $\mathbb{K}$. In particular, $\mathsf{Lie}_{\mathbb{K}}$ is the largest such. Thus, for a given variety of anti-commutative $\mathbb{K}$-algebras, the Jacobi identity becomes equivalent to a categorical condition: it is an identity in~$\mathcal{V}$ if and only if $\mathcal{V}$ is a subvariety of a locally algebraically cartesian closed variety of anti-commutative $\mathbb{K}$-algebras. This is based on a result saying that an algebraically coherent variety of anti-commutative $\mathbb{K}$-algebras is either a variety of Lie algebras or a variety of anti-associative algebras over $\mathbb{K}$.
0
0
1
0
0
0
Characteristics of a magneto-optical trap of molecules
We present the properties of a magneto-optical trap (MOT) of CaF molecules. We study the process of loading the MOT from a decelerated buffer-gas-cooled beam, and how best to slow this molecular beam in order to capture the most molecules. We determine how the number of molecules, the photon scattering rate, the oscillation frequency, damping constant, temperature, cloud size and lifetime depend on the key parameters of the MOT, especially the intensity and detuning of the main cooling laser. We compare our results to analytical and numerical models, to the properties of standard atomic MOTs, and to MOTs of SrF molecules. We load up to $2 \times 10^4$ molecules, and measure a maximum scattering rate of $2.5 \times 10^6$ s$^{-1}$ per molecule, a maximum oscillation frequency of 100 Hz, a maximum damping constant of 500 s$^{-1}$, and a minimum MOT rms radius of 1.5 mm. A minimum temperature of 730 $\mu$K is obtained by ramping down the laser intensity to low values. The lifetime, typically about 100 ms, is consistent with a leak out of the cooling cycle with a branching ratio of about $6 \times 10^{-6}$. The MOT has a capture velocity of about 11 m/s.
0
1
0
0
0
0
Reconstructing a Lattice Equation: a Non-Autonomous Approach to the Hietarinta Equation
In this paper we construct a non-autonomous version of the Hietarinta equation [Hietarinta J., J. Phys. A: Math. Gen. 37 (2004), L67-L73] and study its integrability properties. We show that this equation possess linear growth of the degrees of iterates, generalized symmetries depending on arbitrary functions, and that it is Darboux integrable. We use the first integrals to provide a general solution of this equation. In particular we show that this equation is a sub-case of the non-autonomous $Q_{\rm V}$ equation, and we provide a non-autonomous Möbius transformation to another equation found in [Hietarinta J., J. Nonlinear Math. Phys. 12 (2005), suppl. 2, 223-230] and appearing also in Boll's classification [Boll R., Ph.D. Thesis, Technische Universität Berlin, 2012].
0
1
0
0
0
0
Adversarial Training Versus Weight Decay
Performance-critical machine learning models should be robust to input perturbations not seen during training. Adversarial training is a method for improving a model's robustness to some perturbations by including them in the training process, but this tends to exacerbate other vulnerabilities of the model. The adversarial training framework has the effect of translating the data with respect to the cost function, while weight decay has a scaling effect. Although weight decay could be considered a crude regularization technique, it appears superior to adversarial training as it remains stable over a broader range of regimes and reduces all generalization errors. Equipped with these abstractions, we provide key baseline results and methodology for characterizing robustness. The two approaches can be combined to yield one small model that demonstrates good robustness to several white-box attacks associated with different metrics.
0
0
0
1
0
0
Ray tracing method for stereo image synthesis using CUDA
This paper presents an implementation of an approach to spatial 3D stereo visualization of 3D images using a parallel graphics processing unit (GPU). Experiments on synthesizing images of a 3D scene by ray tracing on a GPU with the Compute Unified Device Architecture (CUDA) show that approximately 60% of the time is spent on solving the computational problem, while the remaining part (40%) is spent on transferring data between the central processing unit and the GPU and on organizing the visualization process. A study of how increasing the size of the GPU grid affects computation speed showed the importance of correctly structuring the parallel computing network and of the general parallelization mechanism. Keywords: Volumetric 3D visualization, stereo 3D visualization, ray tracing, parallel computing on GPU, CUDA
1
0
0
0
0
0
Numerical investigation of gapped edge states in fractional quantum Hall-superconductor heterostructures
Fractional quantum Hall-superconductor heterostructures may provide a platform towards non-abelian topological modes beyond Majoranas. However their quantitative theoretical study remains extremely challenging. We propose and implement a numerical setup for studying edge states of fractional quantum Hall droplets with a superconducting instability. The fully gapped edges carry a topological degree of freedom that can encode quantum information protected against local perturbations. We simulate such a system numerically using exact diagonalization by restricting the calculation to the quasihole-subspace of a (time-reversal symmetric) bilayer fractional quantum Hall system of Laughlin $\nu=1/3$ states. We show that the edge ground states are permuted by spin-dependent flux insertion and demonstrate their fractional $6\pi$ Josephson effect, evidencing their topological nature and the Cooper pairing of fractionalized quasiparticles.
0
1
0
0
0
0
Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving
In this paper we present the first results of a pilot experiment in the capture and interpretation of multimodal signals of human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detection of human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict the ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one. By combining body posture, visual attention and emotion, the multimodal approach reaches up to 93% accuracy when determining a player's chess expertise, while the unimodal approach reaches 86%. Finally, this experiment validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
1
0
0
1
0
0
Theoretical limitations of Encoder-Decoder GAN architectures
Encoder-decoder GAN architectures (e.g., BiGAN and ALI) seek to add an inference mechanism to the GAN setup, consisting of a small encoder deep net that maps data points to their succinct encodings. The intuition is that being forced to train an encoder alongside the usual generator forces the system to learn meaningful mappings from the code to the data point and vice versa, which should improve the learning of the target distribution and ameliorate mode collapse. It should also yield meaningful codes that are useful as features for downstream tasks. The current paper shows rigorously that even on real-life distributions of images, the encoder-decoder GAN training objectives (a) cannot prevent mode collapse, i.e., the objective can be near-optimal even when the generated distribution has low and finite support, and (b) cannot prevent learning meaningless codes for data -- essentially white noise. Thus if encoder-decoder GANs do indeed work then it must be due to reasons as yet not understood, since the training objective can be low even for meaningless solutions.
1
0
0
1
0
0
Ground state sign-changing solutions for a class of nonlinear fractional Schrödinger-Poisson system in $\mathbb{R}^{3}$
In this paper, we are concerned with the existence of the least energy sign-changing solutions for the following fractional Schrödinger-Poisson system: \begin{align*} \left\{ \begin{aligned} &(-\Delta)^{s} u+V(x)u+\lambda\phi(x)u=f(x, u),\quad &\text{in}\, \ \mathbb{R}^{3},\\ &(-\Delta)^{t}\phi=u^{2},& \text{in}\,\ \mathbb{R}^{3}, \end{aligned} \right. \end{align*} where $\lambda\in \mathbb{R}^{+}$ is a parameter, $s, t\in (0, 1)$ and $4s+2t>3$, and $(-\Delta)^{s}$ stands for the fractional Laplacian. By the constraint variational method and a quantitative deformation lemma, we prove that the above problem has one least energy sign-changing solution. Moreover, for any $\lambda>0$, we show that the energy of the least energy sign-changing solutions is strictly larger than two times the ground state energy. Finally, we consider $\lambda$ as a parameter and study the convergence property of the least energy sign-changing solutions as $\lambda\searrow 0$.
0
0
1
0
0
0
Monads on higher monoidal categories
We study the action of monads on categories equipped with several monoidal structures. We identify the structure and conditions that guarantee that the higher monoidal structure is inherited by the category of algebras over the monad. Monoidal monads and comonoidal monads appear as the base cases in this hierarchy. Monads acting on duoidal categories constitute the next case. We cover the general case of $n$-monoidal categories and discuss several naturally occurring examples in which $n\leq 3$.
0
0
1
0
0
0
A Review of Augmented Reality Applications for Building Evacuation
Evacuation is one of the main disaster management solutions to reduce the impact of man-made and natural threats on building occupants. To date, several modern technologies and gamification concepts, e.g. immersive virtual reality and serious games, have been used to enhance building evacuation preparedness and effectiveness. Those tools have been used both to investigate human behavior during building emergencies and to train building occupants on how to cope with building evacuations. Augmented Reality (AR) is a novel technology that can enhance this process by providing building occupants with virtual content to improve their evacuation performance. This work aims at reviewing existing AR applications developed for building evacuation. The review identifies the disasters and types of buildings those tools have been applied to. Moreover, the application goals, hardware and evacuation stages affected by AR are also investigated. Finally, this review aims at identifying the challenges to be faced in the further development of AR evacuation tools.
1
0
0
0
0
0
Topology in time-reversal symmetric crystals
The discovery of topological insulators has reformed modern materials science, promising to be a platform for tabletop relativistic physics, electronic transport without scattering, and stable quantum computation. Topological invariants are used to label distinct types of topological insulators. But it is not generally known how many or which invariants can exist in any given crystalline material. Using a new and efficient counting algorithm, we study the topological invariants that arise in time-reversal symmetric crystals. This results in a unified picture that explains the relations between all known topological invariants in these systems. It also predicts new topological phases and one entirely new topological invariant. We present explicitly the classification of all two-dimensional crystalline fermionic materials, and give a straightforward procedure for finding the analogous result in any three-dimensional structure. Our study represents a single, intuitive physical picture applicable to all topological invariants in real materials, with crystal symmetries.
0
1
0
0
0
0
Deep Convolutional Neural Network Inference with Floating-point Weights and Fixed-point Activations
Deep convolutional neural network (CNN) inference requires a significant amount of memory and computation, which limits its deployment on embedded devices. To alleviate these problems to some extent, prior research utilizes low-precision fixed-point numbers to represent the CNN weights and activations. However, the minimum required data precision of fixed-point weights varies across different networks and also across different layers of the same network. In this work, we propose using floating-point numbers for representing the weights and fixed-point numbers for representing the activations. We show that using floating-point representation for weights is more efficient than fixed-point representation for the same bit-width and demonstrate it on popular large-scale CNNs such as AlexNet, SqueezeNet, GoogLeNet and VGG-16. We also show that such a representation scheme enables compact hardware multiply-and-accumulate (MAC) unit design. Experimental results show that the proposed scheme reduces the weight storage by up to 36% and the power consumption of the hardware multiplier by up to 50%.
1
0
0
0
0
0
Voids in the Cosmic Web as a probe of dark energy
The formation of large voids in the Cosmic Web from the initial adiabatic cosmological perturbations of the space-time metric, density and velocity of matter is investigated in a cosmological model with dynamical dark energy accelerating the expansion of the Universe. It is shown that negative density perturbations with an initial radius of about 50 Mpc in coordinates comoving with the cosmological background and an amplitude corresponding to the r.m.s. temperature fluctuations of the cosmic microwave background lead to the formation of voids with a density contrast up to $-$0.9, a maximal peculiar velocity of about 400 km/s and a radius close to the initial one. An important feature of void formation from the analyzed initial amplitudes and profiles is the establishment of a surrounding overdensity shell. We have shown that the ratio of the peculiar velocity in units of the Hubble flow to the density contrast in the central part of a void does not depend, or depends only weakly, on the distance from the center of the void. It is also shown that this ratio is sensitive to the values of the dark energy parameters and can be used to find them based on observational data on the mass density and peculiar velocities of galaxies in voids.
0
1
0
0
0
0
Templated ligation can create a hypercycle replication network
The stability of sequence replication was crucial for the emergence of molecular evolution and early life. Exponential replication with a first-order growth dynamics show inherent instabilities such as the error catastrophe and the dominance by the fastest replicators. This favors less structured and short sequences. The theoretical concept of hypercycles has been proposed to solve these problems. Their higher-order growth kinetics leads to frequency-dependent selection and stabilizes the replication of majority molecules. However, many implementations of hypercycles are unstable or require special sequences with catalytic activity. Here, we demonstrate the spontaneous emergence of higher-order cooperative replication from a network of simple ligation chain reactions (LCR). We performed long-term LCR experiments from a mixture of sequences under molecule degrading conditions with a ligase protein. At the chosen temperature cycling, a network of positive feedback loops arose from both the cooperative ligation of matching sequences and the emerging increase in sequence length. It generated higher-order replication with frequency-dependent selection. The experiments matched a complete simulation using experimentally determined ligation rates and the hypercycle mechanism was also confirmed by abstracted modeling. Since templated ligation is a most basic reaction of oligonucleotides, the described mechanism could have been implemented under microthermal convection on early Earth.
0
0
0
0
1
0
Variational characterization of H^p
In this paper we obtain the variational characterization of the Hardy space $H^p$ for $p\in(\frac n{n+1},1]$ and get estimates for the oscillation operator and the $\lambda$-jump operator associated with approximate identities acting on $H^p$ for $p\in(\frac n{n+1},1]$. Moreover, we give counterexamples to show that the oscillation and $\lambda$-jump associated with some approximate identity cannot be used to characterize $H^p$ for $p\in(\frac n{n+1},1]$.
0
0
1
0
0
0
Towards Deep Learning Models Resistant to Adversarial Attacks
Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.
1
0
0
1
0
0
Internal delensing of Planck CMB temperature and polarization
We present a first internal delensing of CMB maps, both in temperature and polarization, using the public foreground-cleaned (SMICA) Planck 2015 maps. After forming quadratic estimates of the lensing potential, we use the corresponding displacement field to undo the lensing on the same data. We build differences of the delensed spectra to the original data spectra specifically to look for delensing signatures. After taking into account reconstruction noise biases in the delensed spectra, we find an expected sharpening of the power spectrum acoustic peaks with a delensing efficiency of $29\,\%$ ($TT$) $25\,\%$ ($TE$) and $22\,\%$ ($EE$). The detection significance of the delensing effects is very high in all spectra: $12\,\sigma$ in $EE$ polarization; $18\,\sigma$ in $TE$; and $20\,\sigma$ in $TT$. The null hypothesis of no lensing in the maps is rejected at $26\,\sigma$. While direct detection of the power in lensing $B$-modes themselves is not possible at high significance at Planck noise levels, we do detect (at $4.5\,\sigma$ under the null hypothesis) delensing effects in the $B$-mode map, with $7\,\%$ reduction in lensing power. Our results provide a first demonstration of polarization delensing, and generally of internal CMB delensing, and stand in agreement with the baseline $\Lambda$CDM Planck 2015 cosmology expectations.
0
1
0
0
0
0
Renaissance: Self-Stabilizing Distributed SDN Control Plane
By introducing programmability, automated verification, and innovative debugging tools, Software-Defined Networks (SDNs) are poised to meet the increasingly stringent dependability requirements of today's communication networks. However, the design of fault-tolerant SDNs remains an open challenge. This paper considers the design of dependable SDNs through the lenses of self-stabilization - a very strong notion of fault-tolerance. In particular, we develop algorithms for an in-band and distributed control plane for SDNs, called Renaissance, which tolerate a wide range of (concurrent) controller, link, and communication failures. Our self-stabilizing algorithms ensure that after the occurrence of an arbitrary combination of failures, (i) every non-faulty SDN controller can eventually reach any switch in the network within a bounded communication delay (in the presence of a bounded number of concurrent failures) and (ii) every switch is managed by at least one non-faulty controller. We evaluate Renaissance through a rigorous worst-case analysis as well as a prototype implementation (based on OVS and Floodlight), and we report on our experiments using Mininet.
1
0
0
0
0
0
Robust and Scalable Power System State Estimation via Composite Optimization
In today's cyber-enabled smart grids, high penetration of uncertain renewables, purposeful manipulation of meter readings, and the need for wide-area situational awareness, call for fast, accurate, and robust power system state estimation. The least-absolute-value (LAV) estimator is known for its robustness relative to the weighted least-squares (WLS) one. However, due to nonconvexity and nonsmoothness, existing LAV solvers based on linear programming are typically slow, hence inadequate for real-time system monitoring. This paper develops two novel algorithms for efficient LAV estimation, which draw from recent advances in composite optimization. The first is a deterministic linear proximal scheme that handles a sequence of convex quadratic problems, each efficiently solvable either via off-the-shelf algorithms or through the alternating direction method of multipliers. Leveraging the sparse connectivity inherent to power networks, the second scheme is stochastic, and updates only \emph{a few} entries of the complex voltage state vector per iteration. In particular, when voltage magnitude and (re)active power flow measurements are used only, this number reduces to one or two, \emph{regardless of} the number of buses in the network. This computational complexity evidently scales well to large-size power systems. Furthermore, by carefully \emph{mini-batching} the voltage and power flow measurements, accelerated implementation of the stochastic iterations becomes possible. The developed algorithms are numerically evaluated using a variety of benchmark power networks. Simulated tests corroborate that improved robustness can be attained at comparable or markedly reduced computation times for medium- or large-size networks relative to the "workhorse" WLS-based Gauss-Newton iterations.
1
0
0
0
0
0
A new algorithm for constraint satisfaction problems with few subpowers templates
In this article, we provide a new algorithm for solving constraint satisfaction problems over templates with few subpowers, by reducing the problem to the combination of solvability of a polynomial number of systems of linear equations over finite fields and reductions via absorbing subuniverses.
0
0
1
0
0
0
Testing of General Relativity with Geodetic VLBI
The geodetic VLBI technique is capable of measuring the deflection of light from distant radio sources by the Sun's gravity around the whole sky. This light deflection is equivalent to the conventional gravitational delay used for the reduction of geodetic VLBI data. While numerous tests based on a global set of VLBI data have shown that the parameter 'gamma' of the post-Newtonian approximation is equal to unity with a precision of about 0.02 percent, more detailed analysis reveals some systematic deviations depending on the angular elongation from the Sun. In this paper a limited set of VLBI observations near the Sun was adjusted to obtain an estimate of the parameter 'gamma' free of the elongation-angle impact. The parameter 'gamma' is still found to be close to unity with a precision of 0.06 percent; however, two subsets of VLBI data measured at short and long baselines produce some statistical inconsistency.
0
1
0
0
0
0
An unbiased estimator for the ellipticity from image moments
An unbiased estimator for the ellipticity of an object in a noisy image is given in terms of the image moments. Three assumptions are made: i) the pixel noise is normally distributed, although with arbitrary covariance matrix, ii) the image moments are taken about a fixed centre, and iii) the point-spread function is known. The relevant combinations of image moments are then jointly normal and their covariance matrix can be computed. A particular estimator for the ratio of the means of jointly normal variates is constructed and used to provide the unbiased estimator for the ellipticity. Furthermore, an unbiased estimate of the covariance of the new estimator is also given.
0
1
1
1
0
0
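The image-moment definition that the ellipticity abstract above builds on can be illustrated with a short numerical sketch. This is a minimal, noise-free example of the standard unweighted-quadrupole ellipticity (e1, e2) about a fixed centre, not the paper's bias-corrected estimator; all function and variable names are ours.

```python
import numpy as np

def ellipticity_from_moments(image, x0, y0):
    """Ellipticity (e1, e2) from unweighted second image moments
    taken about a fixed centre (x0, y0)."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    flux = image.sum()
    qxx = (image * (x - x0) ** 2).sum() / flux
    qyy = (image * (y - y0) ** 2).sum() / flux
    qxy = (image * (x - x0) * (y - y0)).sum() / flux
    denom = qxx + qyy
    return (qxx - qyy) / denom, 2.0 * qxy / denom

# Noise-free elliptical Gaussian with sigma_x = 2, sigma_y = 1:
# Qxx = 4, Qyy = 1, so e1 = 3/5 and e2 = 0.
n = 81
c = (n - 1) / 2
yy, xx = np.mgrid[0:n, 0:n]
img = np.exp(-((xx - c) ** 2 / (2 * 2.0 ** 2) + (yy - c) ** 2 / (2 * 1.0 ** 2)))
e1, e2 = ellipticity_from_moments(img, c, c)
```

With pixel noise added, this naive ratio of moments becomes biased, which is exactly the problem the estimator in the abstract addresses.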
Spectral estimation of the percolation transition in clustered networks
There have been several spectral bounds for the percolation transition in networks, using the spectrum of matrices associated with the network, such as the adjacency matrix and the non-backtracking matrix. However, they are far from tight when the network is sparse and displays clustering or transitivity, which is represented by the existence of short loops, e.g., triangles. In this work, for bond percolation, we first propose a message-passing algorithm for calculating the size of percolating clusters that accounts for the effects of triangles; we then relate the percolation transition to the leading eigenvalue of a matrix that we name the triangle-non-backtracking matrix, by analyzing the stability of the message-passing equations. We establish that our method gives a tighter lower bound on the bond percolation transition than previous spectral bounds, and that it becomes exact for an infinite network with no loops longer than 3. We evaluate our methods numerically on synthetic and real-world networks, and discuss further generalizations of our approach to include higher-order sub-structures.
1
1
0
1
0
0
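As a point of reference for the percolation abstract above, here is a minimal sketch of the classical (loop-blind) non-backtracking bound that the triangle-aware matrix is designed to tighten: the standard spectral lower bound p_c >= 1/lambda_max(B). The example graph and helper names are ours.

```python
import numpy as np

def nb_matrix(edges):
    """Non-backtracking matrix B indexed by directed edges:
    B[(i->j), (k->l)] = 1 iff j == k and l != i."""
    directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
    m = len(directed)
    B = np.zeros((m, m))
    for a, (i, j) in enumerate(directed):
        for b, (k, l) in enumerate(directed):
            if j == k and l != i:
                B[a, b] = 1.0
    return B

# Bond-percolation bound p_c >= 1 / lambda_max(B); exact on trees,
# loosened by short loops (the motivation for a triangle-aware matrix).
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
lam = np.linalg.eigvals(nb_matrix(K4)).real.max()
p_c_bound = 1.0 / lam  # K4 is 3-regular, so lambda_max = 2
```

For a d-regular graph the leading non-backtracking eigenvalue is d - 1, which is why the dense K4 example gives a bound of 1/2.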
Collapsibility to a subcomplex of a given dimension is NP-complete
In this paper we extend the works of Tancer and of Malgouyres and Francés, showing that $(d,k)$-collapsibility is NP-complete for $d\geq k+2$ except $(2,0)$. By $(d,k)$-collapsibility we mean the following problem: determine whether a given $d$-dimensional simplicial complex can be collapsed to some $k$-dimensional subcomplex. The question of establishing the complexity status of $(d,k)$-collapsibility was asked by Tancer, who proved NP-completeness of $(d,0)$ and $(d,1)$-collapsibility (for $d\geq 3$). Our extended result, together with the known polynomial-time algorithms for $(2,0)$ and $d=k+1$, answers the question completely.
1
0
1
0
0
0
Run, skeleton, run: skeletal model in a physics-based simulation
In this paper, we present our approach to a physics-based reinforcement learning challenge, "Learning to Run", whose objective is to train a physiologically-based human model to navigate a complex obstacle course as quickly as possible. The environment is computationally expensive, has a high-dimensional continuous action space, and is stochastic. We benchmark state-of-the-art policy-gradient methods and test several improvements, such as layer normalization, parameter noise, and action and state reflecting, to stabilize training and improve its sample-efficiency. We found that the Deep Deterministic Policy Gradient method is the most efficient method for this environment and that the improvements we introduced help to stabilize training. Learned models are able to generalize to new physical scenarios, e.g., different obstacle courses.
1
0
0
1
0
0
Affect Estimation in 3D Space Using Multi-Task Active Learning for Regression
Acquisition of labeled training samples for affective computing is usually costly and time-consuming, as affects are intrinsically subjective, subtle and uncertain, and hence multiple human assessors are needed to evaluate each affective sample. Particularly, for affect estimation in the 3D space of valence, arousal and dominance, each assessor has to perform the evaluations in three dimensions, which makes the labeling problem even more challenging. Many sophisticated machine learning approaches have been proposed to reduce the data labeling requirement in various other domains, but so far few have considered affective computing. This paper proposes two multi-task active learning for regression approaches, which select the most beneficial samples to label, by considering the three affect primitives simultaneously. Experimental results on the VAM corpus demonstrated that our optimal sample selection approaches can result in better estimation performance than random selection and several traditional single-task active learning approaches. Thus, they can help alleviate the data labeling problem in affective computing, i.e., better estimation performance can be obtained from fewer labeling queries.
1
0
0
1
0
0
Sparse Markov Decision Processes with Causal Sparse Tsallis Entropy Regularization for Reinforcement Learning
In this paper, a sparse Markov decision process (MDP) with novel causal sparse Tsallis entropy regularization is proposed. The proposed policy regularization induces a sparse and multi-modal optimal policy distribution of a sparse MDP. The full mathematical analysis of the proposed sparse MDP is provided. We first analyze the optimality condition of a sparse MDP. Then, we propose a sparse value iteration method which solves a sparse MDP and prove the convergence and optimality of sparse value iteration using the Banach fixed-point theorem. The proposed sparse MDP is compared to soft MDPs which utilize causal entropy regularization. We show that the performance error of a sparse MDP has a constant bound, while the error of a soft MDP increases logarithmically with respect to the number of actions, where this performance error is caused by the introduced regularization term. In experiments, we apply sparse MDPs to reinforcement learning problems. The proposed method outperforms existing methods in terms of convergence speed and performance.
1
0
0
1
0
0
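The sparse, multi-modal policy distribution mentioned in the abstract above is closely related to the sparsemax projection onto the probability simplex, which zeroes out dominated actions. The sketch below is an illustrative stand-in under that assumption, not the paper's sparse value iteration.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex;
    unlike softmax, it can assign exactly zero probability."""
    z = np.asarray(z, dtype=float)
    zs = np.sort(z)[::-1]              # sort scores descending
    css = np.cumsum(zs)
    k = np.arange(1, len(z) + 1)
    support = 1.0 + k * zs > css       # entries above the threshold
    k_z = k[support][-1]               # size of the support set
    tau = (css[support][-1] - 1.0) / k_z
    return np.maximum(z - tau, 0.0)

# A large gap in Q-values puts all mass on the best action,
# while ties are shared evenly.
p = sparsemax([2.0, 0.0, -1.0])
```

A soft (entropy-regularized) policy would keep nonzero mass on every action here, which is the contrast the abstract's error bounds quantify.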
Mid-infrared Spectroscopic Observations of the Dust-forming Classical Nova V2676 Oph
The dust-forming nova V2676 Oph is unique in that it was the first nova to provide evidence of C_2 and CN molecules during its near-maximum phase and evidence of CO molecules during its early decline phase. Observations of this nova have revealed the slow evolution of its lightcurves and have also shown low isotopic ratios of carbon (12C/13C) and nitrogen (14N/15N) in its nova envelope. These behaviors indicate that the white dwarf (WD) star hosting V2676 Oph is a CO-rich WD rather than an ONe-rich WD (typically larger in mass than the former). We performed mid-infrared spectroscopic and photometric observations of V2676 Oph in 2013 and 2014 (respectively 452 and 782 days after its discovery). No significant [Ne II] emission at 12.8 micron was detected at either epoch. These provided evidence for a CO-rich WD star hosting V2676 Oph. Both carbon-rich and oxygen-rich grains were detected in addition to an unidentified infrared feature at 11.4 micron originating from polycyclic aromatic hydrocarbon molecules or hydrogenated amorphous carbon grains in the envelope of V2676 Oph.
0
1
0
0
0
0
An invitation to 2D TQFT and quantization of Hitchin spectral curves
This article consists of two parts. In Part 1, we present a formulation of two-dimensional topological quantum field theories in terms of a functor from a category of ribbon graphs to the endofunctor category of a monoidal category. The key point is that the category of ribbon graphs produces all Frobenius objects. The necessary background on Frobenius algebras, topological quantum field theories, and cohomological field theories is reviewed. A result on Frobenius-algebra-twisted topological recursion is included at the end of Part 1. In Part 2, we explain a geometric theory of quantum curves. The focus is placed on the process of quantization as a passage from families of Hitchin spectral curves to families of opers. To make the presentation simpler, we unfold the story using SL_2(\mathbb{C})-opers and rank-2 Higgs bundles defined on a compact Riemann surface $C$ of genus greater than $1$. In this case, quantum curves, opers, and projective structures on $C$ all become the same notion. Background material on projective coordinate systems, Higgs bundles, opers, and the non-Abelian Hodge correspondence is explained.
0
0
1
0
0
0
An Ensemble Classification Algorithm Based on Information Entropy for Data Streams
The data stream mining problem has drawn wide concern in the areas of machine learning and data mining. In several recent studies, ensemble classification has been widely used for concept drift detection; however, most of them regard classification accuracy as the criterion for judging whether concept drift has happened. Information entropy is an important and effective method for measuring uncertainty. Based on information entropy theory, a new algorithm that uses information entropy to evaluate a classification result is developed. It uses ensemble classification techniques, and the weight of each classifier is decided through the entropy of the result produced by the ensemble classifier system. When the concept in the data stream changes, classifiers whose weights fall below a threshold value are abandoned at once so as to adapt to the new concept. In the experimental analysis section, six datasets and four contrastive algorithms are evaluated. The results show that the proposed method can not only handle concept drift effectively, but also achieves better classification accuracy and time performance than the contrastive algorithms.
1
0
0
0
0
0
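The entropy-based weighting described in the abstract above can be sketched roughly as follows; the specific weighting formula (one minus normalised Shannon entropy) and the pruning threshold are our illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def entropy_weight(prob, eps=1e-12):
    """Weight a classifier by the certainty of its prediction:
    1 minus normalised Shannon entropy (1 = certain, 0 = uniform)."""
    prob = np.asarray(prob, dtype=float)
    h = -np.sum(prob * np.log2(prob + eps))
    return 1.0 - h / np.log2(len(prob))

def weighted_vote(member_probs, threshold=0.2):
    """Entropy-weighted ensemble vote; members whose weight drops
    below the threshold (e.g. after concept drift) are discarded."""
    probs = np.asarray(member_probs, dtype=float)
    w = np.array([entropy_weight(p) for p in probs])
    w = np.where(w >= threshold, w, 0.0)   # prune uncertain members
    return int(np.argmax(w @ probs))

# The maximally uncertain middle member is pruned from the vote.
pred = weighted_vote([[0.9, 0.1], [0.5, 0.5], [0.8, 0.2]])
```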
Competition between disorder and interaction effects in 3D Weyl semimetals
We investigate the low-energy scaling behavior of an interacting 3D Weyl semimetal in the presence of disorder. In order to achieve a renormalization group analysis of the theory, we focus on the effects of a short-ranged-correlated disorder potential, checking nevertheless that this choice is not essential to locate the different phases of the Weyl semimetal. We show that there is a line of fixed-points in the renormalization group flow of the interacting theory, corresponding to the disorder-driven transition to a diffusive metal phase. Along that boundary, the critical disorder strength undergoes a strong increase with respect to the noninteracting theory, as a consequence of the unconventional screening of the Coulomb and disorder-induced interactions. A complementary resolution of the Schwinger-Dyson equations allows us to determine the full phase diagram of the system, showing the prevalence of a renormalized semimetallic phase in the regime of intermediate interaction strength, and adjacent to the non-Fermi liquid phase characteristic of the strong interaction regime of 3D Weyl semimetals.
0
1
0
0
0
0
Mixing properties and central limit theorem for associated point processes
Positively (resp. negatively) associated point processes are a class of point processes that induce attraction (resp. inhibition) between the points. As an important example, determinantal point processes (DPPs) are negatively associated. We prove $\alpha$-mixing properties for associated spatial point processes by controlling their $\alpha$-coefficients in terms of the first two intensity functions. A central limit theorem for functionals of associated point processes is deduced, using both the association and the $\alpha$-mixing properties. We discuss in detail the case of DPPs, for which we obtain the limiting distribution of sums, over subsets of close enough points of the process, of any bounded function of the DPP. As an application, we get the asymptotic properties of the parametric two-step estimator of some inhomogeneous DPPs.
0
0
1
1
0
0
Computational landscape of user behavior on social media
With the increasing abundance of 'digital footprints' left by human interactions in online environments, e.g., social media and app use, the ability to model complex human behavior has become increasingly possible. Many approaches have been proposed, however, most previous model frameworks are fairly restrictive. We introduce a new social modeling approach that enables the creation of models directly from data with minimal a priori restrictions on the model class. In particular, we infer the minimally complex, maximally predictive representation of an individual's behavior when viewed in isolation and as driven by a social input. We then apply this framework to a heterogeneous catalog of human behavior collected from fifteen thousand users on the microblogging platform Twitter. The models allow us to describe how a user processes their past behavior and their social inputs. Despite the diversity of observed user behavior, most models inferred fall into a small subclass of all possible finite-state processes. Thus, our work demonstrates that user behavior, while quite complex, belies simple underlying computational structures.
1
0
0
1
0
0
Simulating optical coherence tomography for observing nerve activity: a finite difference time domain bi-dimensional model
We present a finite-difference time-domain (FDTD) model for the computation of A-line scans in time-domain optical coherence tomography (OCT). By simulating only the ends of the two arms of the interferometer and computing the interference signal in post-processing, it is possible to reduce the computational time required by the simulations and, thus, to simulate much larger environments. Moreover, it is possible to simulate successive A-lines and thus obtain a cross-section of the sample considered. In this paper we present the model applied to two different samples: a glass rod filled with water-sucrose solutions at different concentrations, and a peripheral nerve. This work demonstrates the feasibility of using OCT for non-invasive, direct optical monitoring of peripheral nerve activity, which is a long-sought goal of neuroscience.
0
1
0
0
0
0