Columns:
- title: string (length 7 to 239)
- abstract: string (length 7 to 2.76k)
- cs: int64 (0 or 1)
- phy: int64 (0 or 1)
- math: int64 (0 or 1)
- stat: int64 (0 or 1)
- quantitative biology: int64 (0 or 1)
- quantitative finance: int64 (0 or 1)
MRA - Proof of Concept of a Multilingual Report Annotator Web Application
MRA (Multilingual Report Annotator) is a web application that translates radiology text and annotates it with RadLex terms. Its goal is to explore translation of non-English radiology reports as a way to address the problem that most text-mining tools are developed for English. In this brief paper we explain the language-barrier problem and briefly describe the application. MRA can be found at this https URL.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Contagions in Social Networks: Effects of Monophilic Contagion, Friendship Paradox and Reactive Networks
We consider SIS contagion processes over networks, where a classical assumption is that individuals' decisions to adopt a contagion are based on their immediate neighbors. However, recent literature shows that some attributes are more correlated between two-hop neighbors, a concept referred to as monophily. This motivates us to explore monophilic contagion, the case where a contagion (e.g. a product, disease) is adopted by considering two-hop neighbors instead of immediate neighbors (e.g. you ask your friend about the new iPhone and she passes on the opinion of one of her friends). We show that the phenomenon called the friendship paradox makes it easier for the monophilic contagion to spread widely. We also consider the case where the underlying network stochastically evolves in response to the state of the contagion (e.g. depending on the severity of a flu virus, people restrict their interactions with others to avoid getting infected) and show that the dynamics of such a process can be approximated by a differential equation whose trajectory satisfies an algebraic constraint restricting it to a manifold. Our results shed light on how graph-theoretic effects shape contagions and provide simple deterministic models to approximate the collective dynamics of contagions over stochastic graph processes.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
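Below is a toy Python sketch of the two contagion modes mentioned in the abstract above: adoption driven by immediate neighbors versus by two-hop neighbors. The update rule, parameters, and graph are illustrative assumptions, not the paper's exact model.

```python
import random
import networkx as nx

rng = random.Random(0)

def sis_step(G, infected, beta=0.4, delta=0.1, two_hop=False):
    """One SIS update; infection pressure comes from 1-hop or 2-hop neighbors."""
    new_infected = set()
    for v in G:
        if v in infected:
            if rng.random() > delta:                 # recover with probability delta
                new_infected.add(v)
        else:
            if two_hop:
                nbrs = {u for w in G[v] for u in G[w] if u != v}
            else:
                nbrs = set(G[v])
            frac = sum(u in infected for u in nbrs) / max(len(nbrs), 1)
            if rng.random() < beta * frac:           # adopt in proportion to exposed fraction
                new_infected.add(v)
    return new_infected

G = nx.barabasi_albert_graph(200, 3, seed=1)
infected = set(rng.sample(list(G.nodes), 5))
for _ in range(50):
    infected = sis_step(G, infected, two_hop=True)
print("infected fraction:", len(infected) / G.number_of_nodes())
```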
Derivation of a multilayer approach to model suspended sediment transport: application to hyperpycnal and hypopycnal plumes
We propose a multi-layer approach to simulate hyperpycnal and hypopycnal plumes in flows with a free surface. The model allows us to compute the vertical profiles of the horizontal and vertical components of the fluid velocity. It can also describe the vertical profile of the sediment concentration and the velocity components of each of the sediment species that form the turbidity current. To do so, it takes into account the settling velocity of the particles and their interaction with the fluid. This describes the phenomena better than a single-layer approach, is in better agreement with the physics of the problem, and gives promising results. The numerical simulation is carried out by rewriting the multi-layer approach in a compact formulation, which corresponds to a system with non-conservative products, and using a path-conservative numerical scheme. Numerical results are presented in order to show the potential of the model.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Continued fractions and conformal mappings for domains with angle points
Here we construct conformal mappings with the help of continued fraction approximations. These approximations converge to the algebraic roots $\sqrt[N]{z}$ for $N \in \mathbb{N}$ and $z$ in the right half-plane of the complex plane. We estimate both the convergence rate and the compact set of convergence. We also give examples that illustrate the introduced technique of conformal mapping construction.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Stochastic Activation Pruning for Robust Adversarial Defense
Neural networks are known to be vulnerable to adversarial examples. Carefully chosen perturbations to real images, while imperceptible to humans, induce misclassification and threaten the reliability of deep learning systems in the wild. To guard against adversarial examples, we take inspiration from game theory and cast the problem as a minimax zero-sum game between the adversary and the model. In general, for such games, the optimal strategy for both players requires a stochastic policy, also known as a mixed strategy. In this light, we propose Stochastic Activation Pruning (SAP), a mixed strategy for adversarial defense. SAP prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate. We can apply SAP to pretrained networks, including adversarially trained models, without fine-tuning, providing robustness against adversarial examples. Experiments demonstrate that SAP confers robustness against attacks, increasing accuracy and preserving calibration.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
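A minimal NumPy sketch of the pruning step described above. The particular keep-probability rule (proportional to magnitude, capped at 1) and the inverse-probability rescaling are one plausible reading of "preferentially pruning those with smaller magnitude and scaling up the survivors", not necessarily the authors' exact sampling scheme.

```python
import numpy as np

def stochastic_activation_pruning(h, keep_frac=0.5, rng=None):
    """Randomly drop activations, favouring small magnitudes, and rescale the survivors."""
    rng = np.random.default_rng(0) if rng is None else rng
    flat = h.ravel()
    mag = np.abs(flat)
    # keep probability proportional to magnitude, capped at 1
    p_keep = np.minimum(1.0, keep_frac * flat.size * mag / (mag.sum() + 1e-12))
    keep = rng.random(flat.size) < p_keep
    out = np.zeros_like(flat)
    out[keep] = flat[keep] / p_keep[keep]   # inverse-probability rescaling keeps the expected value
    return out.reshape(h.shape)

h = np.random.randn(4, 8).astype(np.float32)
print(stochastic_activation_pruning(h))
```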
Canonical bases of modules over one dimensional k-algebras
Let K be a field and denote by K[t] the polynomial ring with coefficients in K. Set $A = K[f_1, \ldots, f_s]$, with $f_1, \ldots, f_s \in K[t]$. We give a procedure to calculate the monoid of degrees of the K-algebra $M = F_1 A + \cdots + F_r A$ with $F_1, \ldots, F_r \in K[t]$. We show some applications to the problem of the classification of plane polynomial curves (that is, plane algebraic curves parametrized by polynomials) with respect to some of their invariants, using the module of Kähler differentials.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Detecting Adversarial Samples from Artifacts
Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations--small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93% ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
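A small PyTorch sketch of the dropout-uncertainty signal mentioned above; the toy network and the variance-based score are illustrative, and the density estimation in the learned feature subspace described in the abstract is not reproduced here.

```python
import torch
import torch.nn as nn

# Toy classifier with dropout; in practice this would be the trained network under attack.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Dropout(0.5), nn.Linear(128, 10))

def mc_dropout_uncertainty(x, T=50):
    """Predictive variance over T stochastic forward passes with dropout left on."""
    model.train()                                   # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    return probs.var(dim=0).sum(dim=-1)             # one uncertainty score per input

x = torch.randn(8, 784)
print(mc_dropout_uncertainty(x))                    # higher scores get flagged as suspicious
```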
Bots sustain and inflate striking opposition in online social systems
Societies are complex systems which tend to polarize into sub-groups of individuals with dramatically opposite perspectives. This phenomenon is reflected -- and often amplified -- in online social networks where, however, humans are no longer the only players and coexist alongside social bots, i.e. software-controlled accounts. Analyzing large-scale social data collected during the Catalan referendum for independence on October 1, 2017, consisting of nearly 4 million Twitter posts generated by almost 1 million users, we identify the two polarized groups of Independentists and Constitutionalists and quantify the structural and emotional roles played by social bots. We show that bots act from peripheral areas of the social system to target influential humans of both groups, mostly bombarding Independentists with negative and violent content, sustaining and inflating instability in this online society. These results quantify the potentially dangerous influence of political bots during voting processes.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Approximation of full-boundary data from partial-boundary electrode measurements
Measurements on a subset of the boundary are common in electrical impedance tomography; in particular, any electrode model can be interpreted as a partial-boundary problem. The information obtained differs from full-boundary measurements as modeled by the ideal continuum model. In this study we discuss an approach to approximate full-boundary data from partial-boundary measurements that is based on the knowledge of the involved projections. The approximate full-boundary data can then be obtained as the solution of a suitable optimization problem on the coefficients of the Neumann-to-Dirichlet map. By this procedure we are able to improve the reconstruction quality of continuum-model-based algorithms; in particular, we demonstrate the effectiveness with a D-bar method. Reconstructions are presented for noisy simulated and real measurement data.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A Random Block-Coordinate Douglas-Rachford Splitting Method with Low Computational Complexity for Binary Logistic Regression
In this paper, we propose a new optimization algorithm for sparse logistic regression based on a stochastic version of the Douglas-Rachford splitting method. Our algorithm sweeps the training set by randomly selecting a mini-batch of data at each iteration, and it allows us to update the variables in a block coordinate manner. Our approach leverages the proximity operator of the logistic loss, which is expressed with the generalized Lambert W function. Experiments carried out on standard datasets demonstrate the efficiency of our approach w.r.t. stochastic gradient-like methods.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Corrective Re-gridding Techniques for Non-Uniform Sampling in Time Domain Terahertz Spectroscopy
Time domain terahertz spectroscopy typically uses mechanical delay stages that inherently suffer from non-uniform sampling positions. We review, simulate, and experimentally test the ability of corrective cubic spline and Shannon re-gridding algorithms to mitigate the inherent sampling position noise. We present simulations and experimental results that show re-gridding algorithms can increase the signal to noise ratio within the frequency range of 100 GHz to 2 THz. We also predict that re-gridding corrections will become increasingly important to both spectroscopy and imaging as THz technology continues to improve and higher frequencies become experimentally accessible.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Optimistic Robust Optimization With Applications To Machine Learning
Robust Optimization has traditionally taken a pessimistic, or worst-case viewpoint of uncertainty which is motivated by a desire to find sets of optimal policies that maintain feasibility under a variety of operating conditions. In this paper, we explore an optimistic, or best-case view of uncertainty and show that it can be a fruitful approach. We show that these techniques can be used to address a wide variety of problems. First, we apply our methods in the context of robust linear programming, providing a method for reducing conservatism in intuitive ways that encode economically realistic modeling assumptions. Second, we look at problems in machine learning and find that this approach is strongly connected to the existing literature. Specifically, we provide a new interpretation for popular sparsity inducing non-convex regularization schemes. Additionally, we show that successful approaches for dealing with outliers and noise can be interpreted as optimistic robust optimization problems. Although many of the problems resulting from our approach are non-convex, we find that DCA or DCA-like optimization approaches can be intuitive and efficient.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
High Dimensional Time Series Generators
Multidimensional time series are sequences of real-valued vectors. They occur in different areas, for example handwritten characters, GPS tracking, and gestures of modern virtual reality motion controllers. Within these areas, a common task is to search for similar time series. Dynamic Time Warping (DTW) is a common distance function to compare two time series. The Edit Distance with Real Penalty (ERP) and the Dog Keeper Distance (DK) are two more distance functions on time series. Their behaviour has been analyzed on 1-dimensional time series. However, it is not easy to evaluate their behaviour in relation to growing dimensionality. For this reason we propose two new data synthesizers generating multidimensional time series. The first synthesizer extends the well-known cylinder-bell-funnel (CBF) dataset to multidimensional time series. Here, each time series has an arbitrary type (cylinder, bell, or funnel) in each dimension, thus for $d$-dimensional time series there are $3^{d}$ different classes. The second synthesizer (RAM) creates time series with ideas adapted from Brownian motion, which is a common model of movement in physics. Finally, we evaluate the applicability of a 1-nearest neighbor classifier using DTW on datasets generated by our synthesizers.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
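A rough Python sketch of a multidimensional cylinder-bell-funnel generator, using a commonly cited CBF parameterisation; the paper's synthesizer may use different constants, and the RAM synthesizer is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def cbf_channel(kind, length=128):
    """One 1-D cylinder/bell/funnel series following the usual CBF recipe."""
    t = np.arange(length)
    a = rng.integers(16, 32)
    b = a + rng.integers(32, 96)
    eta, eps = rng.normal(), rng.normal(size=length)
    on = ((t >= a) & (t <= b)).astype(float)
    shape = {"cylinder": on,
             "bell": on * (t - a) / (b - a),
             "funnel": on * (b - t) / (b - a)}[kind]
    return (6 + eta) * shape + eps

def cbf_multivariate(d, length=128):
    """d-dimensional series: an independent random CBF type per dimension."""
    kinds = rng.choice(["cylinder", "bell", "funnel"], size=d)
    return np.stack([cbf_channel(k, length) for k in kinds]), kinds

series, kinds = cbf_multivariate(d=3)
print(series.shape, kinds)   # (3, 128) and the class label per dimension
```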
An isoperimetric inequality for Laplace eigenvalues on the sphere
We show that for any positive integer k, the k-th nonzero eigenvalue of the Laplace-Beltrami operator on the two-dimensional sphere endowed with a Riemannian metric of unit area is maximized in the limit by a sequence of metrics converging to a union of k touching identical round spheres. This proves a conjecture posed by the second author in 2002 and yields a sharp isoperimetric inequality for all nonzero eigenvalues of the Laplacian on a sphere. Earlier, the result was known only for k=1 (J. Hersch, 1970), k=2 (N. Nadirashvili, 2002; R. Petrides, 2014) and k=3 (N. Nadirashvili and Y. Sire, 2017). In particular, we argue that for any k>=2, the supremum of the k-th nonzero eigenvalue on a sphere of unit area is not attained in the class of Riemannian metrics which are smooth outside a finite set of conical singularities. The proof uses certain properties of harmonic maps between spheres, the key new ingredient being a bound on the harmonic degree of a harmonic map into a sphere obtained by N. Ejiri.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Numerical solutions of an unsteady 2-D incompressible flow with heat and mass transfer at low, moderate, and high Reynolds numbers
In this paper, we have proposed a modified Marker-And-Cell (MAC) method to investigate the problem of an unsteady 2-D incompressible flow with heat and mass transfer at low, moderate, and high Reynolds numbers with no-slip and slip boundary conditions. We have used this method to solve the governing equations along with the boundary conditions and thereby to compute the flow variables, viz. $u$-velocity, $v$-velocity, $P$, $T$, and $C$. We have used the staggered grid approach of this method to discretize the governing equations of the problem. A modified MAC algorithm was proposed and used to compute the numerical solutions of the flow variables for Reynolds numbers $Re = 10$, 500, and 50,000, corresponding to low, moderate, and high Reynolds numbers. We have also used Prandtl $(Pr)$ and Schmidt $(Sc)$ numbers appropriate to the physical problem considered. We have executed this modified MAC algorithm with the aid of a computer program developed and run using a C compiler. We have also computed numerical solutions of the local Nusselt $(Nu)$ and Sherwood $(Sh)$ numbers along the horizontal line through the geometric center at low, moderate, and high Reynolds numbers for fixed $Pr = 6.62$ and $Sc = 340$ for two grid systems at time $t = 0.0001s$. Our numerical solutions for the $u$ and $v$ velocities along the vertical and horizontal lines through the geometric center of the square cavity for $Re = 100$ have been compared with benchmark solutions available in the literature and found to be in good agreement. The present numerical results indicate that, as we move along the horizontal line through the geometric center of the domain, the heat and mass transfer decreases up to the geometric center and then increases symmetrically.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Dynamic Programming Solution to Bounded Dejittering Problems
We propose a dynamic programming solution to image dejittering problems with bounded displacements and obtain efficient algorithms for the removal of line jitter, line pixel jitter, and pixel jitter.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Delayed pull-in transitions in overdamped MEMS devices
We consider the dynamics of overdamped MEMS devices undergoing the pull-in instability. Numerous previous experiments and numerical simulations have shown a significant increase in the pull-in time under DC voltages close to the pull-in voltage. Here the transient dynamics slow down as the device passes through a meta-stable or bottleneck phase, but this slowing down is not well understood quantitatively. Using a lumped parallel-plate model, we perform a detailed analysis of the pull-in dynamics in this regime. We show that the bottleneck phenomenon is a type of critical slowing down arising from the pull-in transition. This allows us to show that the pull-in time obeys an inverse square-root scaling law as the transition is approached; moreover we determine an analytical expression for this pull-in time. We then compare our prediction to a wide range of pull-in time data reported in the literature, showing that the observed slowing down is well captured by our scaling law, which appears to be generic for overdamped pull-in under DC loads. This realization provides a useful design rule with which to tune dynamic response in applications, including state-of-the-art accelerometers and pressure sensors that use pull-in time as a sensing mechanism. We also propose a method to estimate the pull-in voltage based only on data of the pull-in times.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
How ConvNets model Non-linear Transformations
In this paper, we theoretically address three fundamental problems involving deep convolutional networks regarding invariance, depth and hierarchy. We introduce the paradigm of Transformation Networks (TN), which are a direct generalization of Convolutional Networks (ConvNets). Theoretically, we show that TNs (and thereby ConvNets) can be invariant to non-linear transformations of the input despite pooling over mere local translations. Our analysis provides clear insights into the increase in invariance with depth in these networks. Deeper networks are able to model much richer classes of transformations. We also find that a hierarchical architecture allows the network to generate invariance much more efficiently than a non-hierarchical network. Our results provide useful insight into these three fundamental problems in deep learning using ConvNets.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Volcano transition in a solvable model of oscillator glass
In 1992 a puzzling transition was discovered in simulations of randomly coupled limit-cycle oscillators. This so-called volcano transition has resisted analysis ever since. It was originally conjectured to mark the emergence of an oscillator glass, but here we show it need not. We introduce and solve a simpler model with a qualitatively identical volcano transition and find, unexpectedly, that its supercritical state is not glassy. We discuss the implications for the original model and suggest experimental systems in which a volcano transition and oscillator glass may appear.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Recent advances and open questions on the susy structure of the chiral de Rham Complex
We review different constructions of the supersymmetry subalgebras of the chiral de Rham complex on special holonomy manifolds. We describe the difference between the holomorphic-anti-holomorphic sectors based on a local free ghost system vs the decomposition in left-right sectors from a local Boson-Fermion system. We describe the topological twist in the case of $G_2$ and $Spin_7$ manifolds. We describe the construction of these algebras as quantum Hamiltonian reduction of Lie superalgebras at the minimal or superprincipal nilpotent.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Toda maps, cocycles, and canonical systems
I present a discussion of the hierarchy of Toda flows that gives center stage to the associated cocycles and the maps they induce on the $m$ functions. In the second part, these ideas are then applied to canonical systems; an important feature of this discussion will be my proposal that the role of the shift on Jacobi matrices should now be taken over by the more general class of twisted shifts.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The Gibbs paradox, the Landauer principle and the irreversibility associated with tilted observers
It is well known that, in the context of General Relativity, some spacetimes, when described by a congruence of comoving observers, may consist of a distribution of a perfect (non-dissipative) fluid, whereas the same spacetime as seen by a "tilted" (Lorentz-boosted) congruence of observers may exhibit the presence of dissipative processes. As we shall see, the appearance of entropy-producing processes is related to the tight dependence of entropy on the specific congruence of observers. This fact is well illustrated by the Gibbs paradox. The appearance of such dissipative processes, as required by the Landauer principle, is necessary in order to erase the different amount of information stored by comoving observers with respect to tilted ones.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
PlumX As a Potential Tool to Assess the Macroscopic Multidimensional Impact of Books
The main purpose of this macro-study is to shed light on the broad impact of books. For this purpose, the impact of a very large collection of books has been analyzed using PlumX, an analytical tool that aggregates a great number of different metrics from various sources.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Action Robust Reinforcement Learning and Applications in Continuous Control
A policy is said to be robust if it maximizes the reward while considering a bad, or even adversarial, model. In this work we formalize two new criteria of robustness to action uncertainty. Specifically, we consider two scenarios in which the agent attempts to perform an action $\mathbf{a}$, and (i) with probability $\alpha$, an alternative adversarial action $\bar{\mathbf{a}}$ is taken, or (ii) an adversary adds a perturbation to the selected action in the case of continuous action space. We show that our criteria are related to common forms of uncertainty in robotics domains, such as the occurrence of abrupt forces, and suggest algorithms in the tabular case. Building on the suggested algorithms, we generalize our approach to deep reinforcement learning (DRL) and provide extensive experiments in the various MuJoCo domains. Our experiments show that not only does our approach produce robust policies, but it also improves the performance in the absence of perturbations. This generalization indicates that action-robustness can be thought of as implicit regularization in RL problems.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
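A small sketch of criterion (i) above: a gym-style environment wrapper in which, with probability alpha, the adversary's action is executed instead of the agent's. The wrapper interface and the adversary policy are placeholders, not the authors' implementation.

```python
import numpy as np

class ProbabilisticActionRobustEnv:
    """Wraps an environment: with probability alpha the adversary's action is executed."""
    def __init__(self, env, adversary_policy, alpha=0.1, seed=0):
        self.env, self.adversary_policy, self.alpha = env, adversary_policy, alpha
        self.rng = np.random.default_rng(seed)
        self._obs = None

    def reset(self):
        self._obs = self.env.reset()
        return self._obs

    def step(self, agent_action):
        if self.rng.random() < self.alpha:
            action = self.adversary_policy(self._obs)   # adversarial takeover of the action
        else:
            action = agent_action
        self._obs, reward, done, info = self.env.step(action)
        return self._obs, reward, done, info
```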
A characterization of finite vector bundles on Gauduchon astheno-Kahler manifolds
A vector bundle E on a projective variety X is called finite if it satisfies a nontrivial polynomial equation with integral coefficients. A theorem of Nori implies that E is finite if and only if the pullback of E to some finite etale Galois covering of X is trivial. We prove the same statement when X is a compact complex manifold admitting a Gauduchon astheno-Kahler metric.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
APSYNSIM: An Interactive Tool To Learn Interferometry
The APerture SYNthesis SIMulator is a simple interactive tool to help students visualize and understand the basics of the Aperture Synthesis technique, applied to astronomical interferometers. The users can load many different interferometers and source models (and also create their own), change the observing parameters (e.g., source coordinates, observing wavelength, antenna location, integration time, etc.), and even deconvolve the interferometric images and corrupt the data with gain errors (amplitude and phase). The program is fully interactive and all the figures are updated in real time. APSYNSIM has already been used in several interferometry schools and has received very positive feedback from the students.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The dimensionless dissipation rate and the Kolmogorov (1941) hypothesis of local stationarity in freely decaying isotropic turbulence
An expression for the dimensionless dissipation rate was derived from the Karman-Howarth equation by asymptotic expansion of the second- and third- order structure functions in powers of the inverse Reynolds number. The implications of the time-derivative term for the assumption of local stationarity (or local equilibrium) which underpins the derivation of the Kolmogorov `4/5' law for the third-order structure function were studied. It was concluded that neglect of the time-derivative cannot be justified by reason of restriction to certain scales (the inertial range) nor to large Reynolds numbers. In principle, therefore, the hypothesis cannot be correct, although it may be a good approximation. It follows, at least in principle, that the quantitative aspects of the hypothesis of local stationarity could be tested by a comparison of the asymptotic dimensionless dissipation rate for free decay with that for the stationary case. But in practice this is complicated by the absence of an agreed evolution time for making the measurements during the decay. However, we can assess the quantitative error involved in using the hypothesis by comparing the exact asymptotic value of the dimensionless dissipation in free decay calculated on the assumption of local stationarity to the experimentally determined value (e.g. by means of direct numerical simulation), as this relationship holds for all measuring times. Should the assumption of local stationarity lead to significant error, then the `4/5' law needs to be corrected. Despite this, scale invariance in wavenumber space appears to hold in the formal limit of infinite Reynolds numbers, which implies that the `-5/3' energy spectrum does not require correction in this limit.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Consistent estimation of the spectrum of trace class data augmentation algorithms
Markov chain Monte Carlo is widely used in a variety of scientific applications to generate approximate samples from intractable distributions. A thorough understanding of the convergence and mixing properties of these Markov chains can be obtained by studying the spectrum of the associated Markov operator. While several methods to bound/estimate the second largest eigenvalue are available in the literature, very few general techniques for consistent estimation of the entire spectrum have been proposed. Existing methods for this purpose require the Markov transition density to be available in closed form, which is often not true in practice, especially in modern statistical applications. In this paper, we propose a novel method to consistently estimate the entire spectrum of a general class of Markov chains arising from a popular and widely used statistical approach known as Data Augmentation. The transition densities of these Markov chains can often only be expressed as intractable integrals. We illustrate the applicability of our method using real and simulated data.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Balancing Selection Pressures, Multiple Objectives, and Neural Modularity to Coevolve Cooperative Agent Behavior
Previous research using evolutionary computation in Multi-Agent Systems indicates that assigning fitness based on team vs. individual behavior has a strong impact on the ability of evolved teams of artificial agents to exhibit teamwork in challenging tasks. However, such research only made use of single-objective evolution. In contrast, when a multiobjective evolutionary algorithm is used, populations can be subject to individual-level objectives, team-level objectives, or combinations of the two. This paper explores the performance of cooperatively coevolved teams of agents controlled by artificial neural networks subject to these types of objectives. Specifically, predator agents are evolved to capture scripted prey agents in a torus-shaped grid world. Because of the tension between individual and team behaviors, multiple modes of behavior can be useful, and thus the effect of modular neural networks is also explored. Results demonstrate that fitness rewarding individual behavior is superior to fitness rewarding team behavior, despite being applied to a cooperative task. However, the use of networks with multiple modules allows predators to discover intelligent behavior, regardless of which type of objective is used.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
High-throughput nanofluidic device for one-dimensional confined detection of single fluorophores
Ensemble averaging experiments may conceal many fundamental molecular interactions. To overcome this, high-throughput detection of single molecules or colloidal nanodots is crucial for biomedical, nanoelectronic, and solid-state applications. One-dimensional (1D) discrete flow of nanoscale objects is an efficient approach in this direction. The development of simple and cost-effective nanofluidic devices is a critical step to realise 1D flow. This letter presents a nanofabrication technique using shadow-angle-electron-beam-deposition for high-throughput preparation of parallel nanofluidic channels. These were used to flow and detect DNA, carbon nanodots, and organic fluorophores. The 1D molecular mass transport was performed using electro-osmotic flow. The 1D flow behaviour was identified and analysed using two-focus fluorescence correlation spectroscopy (2fFCS). A range of flow velocities of single molecules was achieved. The transitions of single molecules or nanodots through the two foci were quantitatively analysed using confocal scanning imaging, correlative photon detection, and burst size distribution analysis. The results show that an efficient nanofabrication technique has been developed to prepare nanofluidic devices. This first demonstration of a high-throughput nanochannel fabrication process combined with 2fFCS-based single-molecule flow detection should have an impact on ultra-sensitive biomedical diagnostics and on studying biomolecular interactions as well as nanomaterials.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Coupled Graphs and Tensor Factorization for Recommender Systems and Community Detection
Joint analysis of data from multiple information repositories facilitates uncovering the underlying structure in heterogeneous datasets. Single and coupled matrix-tensor factorization (CMTF) has been widely used in this context for imputation-based recommendation from ratings, social network, and other user-item data. When this side information is in the form of item-item correlation matrices or graphs, existing CMTF algorithms may fall short. Alleviating current limitations, we introduce a novel model coined coupled graph-tensor factorization (CGTF) that judiciously accounts for graph-related side information. The CGTF model has the potential to overcome practical challenges, such as missing slabs from the tensor and/or missing rows/columns from the correlation matrices. A novel alternating direction method of multipliers (ADMM) is also developed that recovers the nonnegative factors of CGTF. Our algorithm enjoys closed-form updates that result in reduced computational complexity and allow for convergence claims. A novel direction is further explored by employing the interpretable factors to detect graph communities having the tensor as side information. The resulting community detection approach is successful even when some links in the graphs are missing. Results with real data sets corroborate the merits of the proposed methods relative to state-of-the-art competing factorization techniques in providing recommendations and detecting communities.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A computer algebra system for R: Macaulay2 and the m2r package
Algebraic methods have a long history in statistics. The most prominent manifestation of modern algebra in statistics can be seen in the field of algebraic statistics, which brings tools from commutative algebra and algebraic geometry to bear on statistical problems. Now over two decades old, algebraic statistics has applications in a wide range of theoretical and applied statistical domains. Nevertheless, algebraic statistical methods are still not mainstream, mostly due to a lack of easy off-the-shelf implementations. In this article we debut m2r, an R package that connects R to Macaulay2 through a persistent back-end socket connection running locally or on a cloud server. Topics range from basic use of m2r to applications and design philosophy.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Transversal magnetoresistance and Shubnikov-de Haas oscillations in Weyl semimetals
We explore theoretically the magnetoresistance of Weyl semimetals in transversal magnetic fields away from charge neutrality. The analysis within the self-consistent Born approximation is done for the two different models of disorder: (i) short-range impurities and (ii) charged (Coulomb) impurities. For these models of disorder, we calculate the conductivity away from the charge neutrality point as well as the Hall conductivity, and analyze the transversal magnetoresistance (TMR) and Shubnikov-de Haas oscillations for both types of disorder. We further consider a model with Weyl nodes shifted in energy with respect to each other (as found in various materials) with the chemical potential corresponding to the total charge neutrality. In the experimentally most relevant case of Coulomb impurities, we find in this model a large TMR in a broad range of quantizing magnetic fields. More specifically, in the ultra-quantum limit, where only the zeroth Landau level is effective, the TMR is linear in magnetic field. In the regime of moderate (but still quantizing) magnetic fields, where the higher Landau levels are relevant, the rapidly growing TMR is supplemented by strong Shubnikov-de Haas oscillations, consistent with experimental observations.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
In Search of Lost (Mixing) Time: Adaptive Markov chain Monte Carlo schemes for Bayesian variable selection with very large p
The availability of data sets with large numbers of variables is rapidly increasing. The effective application of Bayesian variable selection methods for regression with these data sets has proved difficult since available Markov chain Monte Carlo methods do not perform well in typical problem sizes of interest. The current paper proposes new adaptive Markov chain Monte Carlo algorithms to address this shortcoming. The adaptive design of these algorithms exploits the observation that in large $p$ small $n$ settings, the majority of the $p$ variables will be approximately uncorrelated a posteriori. The algorithms adaptively build suitable non-local proposals that result in moves with squared jumping distance significantly larger than standard methods. Their performance is studied empirically in high-dimensional problems (with both simulated and actual data) and speedups of up to 4 orders of magnitude are observed. The proposed algorithms are easily implementable on multi-core architectures and are well suited for parallel tempering or sequential Monte Carlo implementations.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Network growth models: A behavioural basis for attachment proportional to fitness
Several growth models have been proposed in the literature for scale-free complex networks, with a range of fitness-based attachment models gaining prominence recently. However, the processes by which such fitness-based attachment behaviour can arise are less well understood, making it difficult to compare the relative merits of such models. This paper analyses an evolutionary mechanism that would give rise to a fitness-based attachment process. In particular, it is proven by analytical and numerical methods that in homogeneous networks, the minimisation of maximum exposure to node unfitness leads to attachment probabilities that are proportional to node fitness. This result is then extended to heterogeneous networks, with supply chain networks being used as an example.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
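A minimal sketch of attachment proportional to fitness, the rule the abstract derives from exposure minimisation. The fitness distribution, seed graph, and number of links per new node are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def grow_fitness_network(n=500, m=3, seed=0):
    """Grow a network where attachment probability is proportional to node fitness."""
    rng = np.random.default_rng(seed)
    G = nx.complete_graph(m)                       # small seed network
    fitness = {v: rng.uniform(0.1, 1.0) for v in G}
    for new in range(m, n):
        nodes = list(G)
        w = np.array([fitness[v] for v in nodes])
        targets = rng.choice(nodes, size=m, replace=False, p=w / w.sum())
        fitness[new] = rng.uniform(0.1, 1.0)
        G.add_edges_from((new, int(t)) for t in targets)
    return G, fitness

G, fitness = grow_fitness_network()
print(G.number_of_nodes(), G.number_of_edges())
```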
On Convergence Property of Implicit Self-paced Objective
Self-paced learning (SPL) is a new methodology that simulates the learning principle of humans/animals by starting with easier aspects of a learning task and then gradually taking more complex examples into training. This learning regime has been empirically substantiated to be effective in various computer vision and pattern recognition tasks. Recently, it has been proved that the SPL regime has a close relationship to an implicit self-paced objective function. While this implicit objective could provide helpful interpretations of the effectiveness, and especially the robustness, of the SPL paradigm, there are still no theoretical results that strictly verify such a relationship. To address this issue, in this paper we provide some convergence results on this implicit objective of SPL. Specifically, we prove that the learning process of SPL always converges to critical points of this implicit objective under some mild conditions. This result verifies the intrinsic relationship between SPL and the implicit objective, and makes the previous robustness analysis of SPL complete and theoretically rational.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
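A tiny sketch of the alternating SPL step whose convergence the abstract analyses, using the hard-threshold self-paced weights; soft regularizers and the actual model update are elided.

```python
import numpy as np

def spl_weights(losses, lam):
    """Hard self-paced weights: include a sample only if its loss is below the age parameter."""
    return (losses < lam).astype(float)

# Alternate between weighting samples and re-fitting the model on the weighted set,
# gradually increasing lam so that harder examples enter training later.
losses = np.array([0.2, 1.5, 0.7, 3.0])
for lam in [0.5, 1.0, 2.0, 4.0]:
    v = spl_weights(losses, lam)
    print(lam, v)   # the model update on {i : v_i = 1} would go here
```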
Graphene quantum dots prevent alpha-synucleinopathy in Parkinson's disease
While the emerging evidence indicates that the pathogenesis of Parkinson's disease (PD) is strongly correlated to the accumulation of alpha-synuclein ({\alpha}-syn) aggregates, there has been no clinical success in anti-aggregation agents for the disease to date. Here we show that graphene quantum dots (GQDs) exhibit anti-amyloid activity via direct interaction with {\alpha}-syn. Employing biophysical, biochemical, and cell-based assays as well as molecular dynamics (MD) simulation, we find that GQDs have notable potency in not only inhibiting fibrillization of {\alpha}-syn but also disaggregating mature fibrils in a time-dependent manner. Remarkably, GQDs rescue neuronal death and synaptic loss, reduce Lewy body (LB)/Lewy neurite (LN) formation, ameliorate mitochondrial dysfunctions, and prevent neuron-to-neuron transmission of {\alpha}-syn pathology induced by {\alpha}-syn preformed fibrils (PFFs) in neurons. In addition, in vivo administration of GQDs protects against {\alpha}-syn PFFs-induced loss of dopamine neurons, LB/LN pathology, and behavioural deficits through the penetration of the blood-brain barrier (BBB). The finding that GQDs function as an anti-aggregation agent provides a promising novel therapeutic target for the treatment of PD and related {\alpha}-synucleinopathies.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Liu-Nagel phase diagrams in infinite dimension
We study Harmonic Soft Spheres as a model of thermal structural glasses in the limit of infinite dimensions. We show that cooling, compressing and shearing a glass lead to a Gardner transition and, hence, to a marginally stable amorphous solid as found for Hard Sphere systems. A general outcome of our results is that a reduced stability of the glass favors the appearance of the Gardner transition. Therefore using strong perturbations, e.g. shear and compression, on standard glasses or using weak perturbations on weakly stable glasses, e.g. the ones prepared close to the jamming point, are the generic ways to induce a Gardner transition. The formalism that we discuss allows us to study general perturbations, including strain deformations that are important for studying soft glassy rheology at the mean field level.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Networks of planar Hamiltonian systems
We introduce diffusively coupled networks where the dynamical system at each vertex is planar Hamiltonian. The problems we address are synchronisation and an analogue of diffusion-driven Turing instability for time-dependent homogeneous states. As a consequence of the underlying Hamiltonian structure there exist unusual behaviours compared with networks of coupled limit cycle oscillators or activator-inhibitor systems.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Robust Tracking Using Region Proposal Networks
Recent advances in visual tracking showed that deep Convolutional Neural Networks (CNN) trained for image classification can be strong feature extractors for discriminative trackers. However, due to the drastic difference between image classification and tracking, extra treatments such as model ensembling and feature engineering must be carried out to bridge the two domains. Such procedures are either time consuming or hard to generalize well across datasets. In this paper we discovered that the internal structure of the Region Proposal Network (RPN)'s top-layer feature can be utilized for robust visual tracking. We showed that such a property has to be unleashed by a novel loss function which simultaneously considers classification accuracy and bounding box quality. Without ensembling or any extra treatment of feature maps, our proposed method achieved state-of-the-art results on several large-scale benchmarks including OTB50, OTB100 and VOT2016. We will make our code publicly available.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Recovering sparse graphs
We construct a fixed parameter algorithm parameterized by d and k that takes as input a graph G' obtained from a d-degenerate graph G by complementing on at most k arbitrary subsets of the vertex set of G and outputs a graph H such that G and H agree on all but f(d,k) vertices. Our work is motivated by the first order model checking in graph classes that are first order interpretable in classes of sparse graphs. We derive as a corollary that if G_0 is a graph class with bounded expansion, then the first order model checking is fixed parameter tractable in the class of all graphs that can be obtained from a graph G in G_0 by complementing on at most k arbitrary subsets of the vertex set of G; this implies an earlier result that the first order model checking is fixed parameter tractable in graph classes interpretable in classes of graphs with bounded maximum degree.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Remark on arithmetic topology
We formalize the arithmetic topology, i.e. a relationship between knots and primes. Namely, using the notion of a cluster C*-algebra we construct a functor from the category of 3-dimensional manifolds M to a category of algebraic number fields K, such that the prime ideals (ideals, resp.) in the ring of integers of K correspond to knots (links, resp.) in M. It is proved that the functor realizes all axioms of the arithmetic topology conjectured in the 1960's by Manin, Mazur and Mumford.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Stable Architectures for Deep Neural Networks
Deep neural networks have become invaluable tools for supervised machine learning, e.g., classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Important issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper we propose new forward propagation techniques inspired by systems of Ordinary Differential Equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
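A small sketch of the ODE view of forward propagation referred to above: an explicit-Euler residual step $y_{j+1} = y_j + h\,\sigma(K_j y_j + b_j)$. The random weights are placeholders, and the stability-enforcing constraints on the $K_j$ proposed in the paper are not imposed here.

```python
import numpy as np

def forward_euler_network(y0, Ks, bs, h=0.1, act=np.tanh):
    """Forward propagation as explicit Euler steps of y' = act(K(t) y + b(t))."""
    y = y0
    for K, b in zip(Ks, bs):
        y = y + h * act(y @ K.T + b)
    return y

rng = np.random.default_rng(0)
depth, width = 20, 16
Ks = [rng.normal(scale=0.1, size=(width, width)) for _ in range(depth)]
bs = [np.zeros(width) for _ in range(depth)]
y0 = rng.normal(size=(4, width))    # batch of 4 feature vectors
print(forward_euler_network(y0, Ks, bs).shape)
```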
Geometry and Arithmetic of Crystallographic Sphere Packings
We introduce the notion of a "crystallographic sphere packing," defined to be one whose limit set is that of a geometrically finite hyperbolic reflection group in one higher dimension. We exhibit for the first time an infinite family of conformally inequivalent such packings with all radii being reciprocals of integers. We then prove a result in the opposite direction: the "superintegral" ones exist only in finitely many "commensurability classes," all in dimensions below 30.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On the global convergence of the Jacobi method for symmetric matrices of order 4 under parallel strategies
The paper analyzes special cyclic Jacobi methods for symmetric matrices of order $4$. Only those cyclic pivot strategies that enable full parallelization of the method are considered. These strategies, unlike the serial pivot strategies, can force the method to be very slow or very fast within one cycle, depending on the underlying matrix. Hence, for the global convergence proof one has to consider two or three adjacent cycles. It is proved that for any symmetric matrix $A$ of order~$4$ the inequality $S(A^{[2]})\leq(1-10^{-5})S(A)$ holds, where $A^{[2]}$ results from $A$ by applying two cycles of a particular parallel method. Here $S(A)$ stands for the Frobenius norm of the strictly upper-triangular part of $A$. The result holds for two special parallel strategies and implies the global convergence of the method under all possible fully parallel strategies. It is also proved that for every $\epsilon>0$ and $n\geq4$ there exist a symmetric matrix $A(\epsilon)$ of order $n$ and a cyclic strategy, such that upon completion of the first cycle of the appropriate Jacobi method the inequality $S(A^{[1]})> (1-\epsilon)S(A(\epsilon))$ holds.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Counting Motifs with Graph Sampling
Applied researchers often construct a network from a random sample of nodes in order to infer properties of the parent network. Two of the most widely used sampling schemes are subgraph sampling, where we sample each vertex independently with probability $p$ and observe the subgraph induced by the sampled vertices, and neighborhood sampling, where we additionally observe the edges between the sampled vertices and their neighbors. In this paper, we study the problem of estimating the number of motifs as induced subgraphs under both models from a statistical perspective. We show that: for any connected $h$ on $k$ vertices, to estimate $s=\mathsf{s}(h,G)$, the number of copies of $h$ in the parent graph $G$ of maximum degree $d$, with a multiplicative error of $\epsilon$, (a) For subgraph sampling, the optimal sampling ratio $p$ is $\Theta_{k}(\max\{ (s\epsilon^2)^{-\frac{1}{k}}, \; \frac{d^{k-1}}{s\epsilon^{2}} \})$, achieved by Horvitz-Thompson type of estimators. (b) For neighborhood sampling, we propose a family of estimators, encompassing and outperforming the Horvitz-Thompson estimator and achieving the sampling ratio $O_{k}(\min\{ (\frac{d}{s\epsilon^2})^{\frac{1}{k-1}}, \; \sqrt{\frac{d^{k-2}}{s\epsilon^2}}\})$. This is shown to be optimal for all motifs with at most $4$ vertices and cliques of all sizes. The matching minimax lower bounds are established using certain algebraic properties of subgraph counts. These results quantify how much more informative neighborhood sampling is than subgraph sampling, as empirically verified by experiments on both synthetic and real-world data. We also address the issue of adaptation to the unknown maximum degree, and study specific problems for parent graphs with additional structures, e.g., trees or planar graphs.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
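A minimal sketch of the Horvitz-Thompson-type rescaling under subgraph sampling for the special case of triangles (k = 3): each copy survives with probability $p^3$, so dividing the observed count by $p^3$ is unbiased. This illustrates the scaling only, not the paper's refined neighborhood-sampling estimators.

```python
import networkx as nx
import numpy as np

def ht_triangle_estimate(G, p, seed=0):
    """Subgraph sampling: keep each vertex w.p. p, count triangles, rescale by p^-3."""
    rng = np.random.default_rng(seed)
    sampled = [v for v in G if rng.random() < p]
    H = G.subgraph(sampled).copy()
    observed = sum(nx.triangles(H).values()) // 3   # each triangle is counted at its 3 vertices
    return observed / p**3

G = nx.erdos_renyi_graph(300, 0.05, seed=1)
true_count = sum(nx.triangles(G).values()) // 3
print(true_count, ht_triangle_estimate(G, p=0.5))
```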
Optimizing noise level for perturbing geo-location data
With the tremendous increase in the number of smart phones, app stores have been overwhelmed with applications requiring geo-location access in order to provide their users better services through personalization. Revealing a user's location to these third party apps, no matter at what frequency, is a severe privacy breach which can have unpleasant social consequences. In order to prevent inference attacks derived from geo-location data, a number of location obfuscation techniques have been proposed in the literature. However, none of them provides any objective measure of privacy guarantee. Some work has been done to define differential privacy for geo-location data in the form of geo-indistinguishability with l privacy guarantee. These techniques do not utilize any prior background information about the Points of Interest (PoIs) of a user and apply Laplacian noise to perturb all the location coordinates. Intuitively, the utility of such a mechanism can be improved if the noise distribution is derived after considering some prior information about PoIs. In this paper, we apply the standard definition of differential privacy on geo-location data. We use first principles to model various privacy and utility constraints, prior background information available about the PoIs (distribution of PoI locations in a 1D plane) and the granularity of the input required by different types of apps, in order to produce a more accurate and utility-maximizing differentially private algorithm for geo-location data at the OS level. We investigate this for a particular category of apps and for some specific scenarios. This will also help us to verify whether Laplacian noise is still the optimal perturbation when we have such prior information.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
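A small sketch of the baseline perturbation the abstract starts from: Laplace noise calibrated to sensitivity/epsilon added to a 1-D location coordinate. The reshaping of this noise using PoI priors, which is the paper's contribution, is not reproduced here, and the example location and sensitivity are illustrative.

```python
import numpy as np

def perturb_location(x, epsilon, sensitivity=1.0, rng=None):
    """epsilon-differentially-private release of a bounded 1-D coordinate via the Laplace mechanism."""
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return x + rng.laplace(loc=0.0, scale=scale)

true_location_km = 3.7          # position along a 1-D road segment (illustrative)
for eps in [0.1, 1.0, 10.0]:
    print(eps, perturb_location(true_location_km, eps))
```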
Hochschild Cohomology and Deformation Quantization of Affine Toric Varieties
For an affine toric variety $\mathrm{Spec}(A)$, we give a convex geometric description of the Hodge decomposition of its Hochschild cohomology. Under certain assumptions we compute the dimensions of the Hodge summands $T^1_{(i)}(A)$, generalizing the existing results about the Andre-Quillen cohomology group $T^1_{(1)}(A)$. We prove that every Poisson structure on a possibly singular affine toric variety can be quantized in the sense of deformation quantization.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Adversarial Examples for Semantic Image Segmentation
Machine learning methods in general and Deep Neural Networks in particular have been shown to be vulnerable to adversarial perturbations. So far this phenomenon has mainly been studied in the context of whole-image classification. In this contribution, we analyse how adversarial perturbations can affect the task of semantic segmentation. We show how existing adversarial attackers can be transferred to this task and that it is possible to create imperceptible adversarial perturbations that lead a deep network to misclassify almost all pixels of a chosen class while leaving network prediction nearly unchanged outside this class.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Graphical virtual links and a polynomial of signed cyclic graphs
For a signed cyclic graph G, we can construct a unique virtual link L by taking the medial construction and converting 4-valent vertices of the medial graph to crossings according to the signs. If a virtual link can occur in this way, then we say that the virtual link is graphical. In this article we prove that a virtual link L is graphical if and only if it is checkerboard colorable. On the other hand, we introduce a polynomial F[G] for signed cyclic graphs, which is defined via a deletion-marking recursion. We establish the relationship between F[G] of a signed cyclic graph G and the bracket polynomial of one of the virtual link diagrams associated with G. Finally we give a spanning subgraph expansion for F[G].
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Exploring the Single-Particle Mobility Edge in a One-Dimensional Quasiperiodic Optical Lattice
A single-particle mobility edge (SPME) marks a critical energy separating extended from localized states in a quantum system. In one-dimensional systems with uncorrelated disorder, a SPME cannot exist, since all single-particle states localize for arbitrarily weak disorder strengths. However, if correlations are present in the disorder potential, the localization transition can occur at a finite disorder strength and SPMEs become possible. In this work, we find experimental evidence for the existence of such a SPME in a one-dimensional quasi-periodic optical lattice. Specifically, we find a regime where extended and localized single-particle states coexist, in good agreement with theoretical simulations, which predict a SPME in this regime.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Integral field observations of the blue compact galaxy Haro14. Star formation and feedback in dwarf galaxies
(Abridged) Low-luminosity, gas-rich blue compact galaxies (BCG) are ideal laboratories to investigate many aspects of the star formation in galaxies. We study the morphology, stellar content, kinematics, and the nebular excitation and ionization mechanism in the BCG Haro 14 by means of integral field observations with VIMOS in the VLT. From these data we build maps in continuum and in the brighter emission lines, produce line-ratio maps, and obtain the velocity and velocity dispersion fields. We also generate the integrated spectrum of the major HII regions and young stellar clusters identified in the maps to determine reliable physical parameters and oxygen abundances. We find as follows: i) the current star formation in Haro 14 is spatially extended with the major HII regions placed along a linear structure, elongated in the north-south direction, and in a horseshoe-like curvilinear feature that extends about 760 pc eastward; the continuum emission is more concentrated and peaks close to the galaxy center; ii) two different episodes of star formation are present: the recent starburst, with ages $\leq$ 6 Myrs and the intermediate-age clusters, with ages between 10 and 30 Myrs; these stellar components rest on a several Gyr old underlying host galaxy; iii) the H$\alpha$/H$\beta$ pattern is inhomogeneous, with excess color values varying from E(B-V)=0.04 up to E(B-V)=1.09; iv) shocks play a significant role in the galaxy; and v) the velocity field displays a complicated pattern with regions of material moving toward us in the east and north galaxy areas. The morphology of Haro 14, its irregular velocity field, and the presence of shocks speak in favor of a scenario of triggered star formation. Ages of the knots are consistent with the ongoing burst being triggered by the collective action of stellar winds and supernovae originated in the central clusters.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Framework for Automated Cellular Network Tuning with Reinforcement Learning
Tuning cellular network performance against always occurring wireless impairments can dramatically improve reliability to end users. In this paper, we formulate cellular network performance tuning as a reinforcement learning (RL) problem and provide a solution to improve the signal to interference-plus-noise ratio (SINR) for indoor and outdoor environments. By leveraging the ability of Q-learning to estimate future SINR improvement rewards, we propose two algorithms: (1) voice over LTE (VoLTE) downlink closed loop power control (PC) and (2) self-organizing network (SON) fault management. The VoLTE PC algorithm uses RL to adjust the indoor base station transmit power so that the effective SINR meets the target SINR. The SON fault management algorithm uses RL to improve the performance of an outdoor cluster by resolving faults in the network through configuration management. Both algorithms exploit measurements from the connected users, wireless impairments, and relevant configuration parameters to solve a non-convex SINR optimization problem using RL. Simulation results show that our proposed RL based algorithms outperform the industry standards today in realistic cellular communication environments.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
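A toy sketch of the closed-loop power-control idea: tabular Q-learning over a discretised SINR error, with power-step actions. The state/action discretisation, reward, and SINR model are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
power_steps = np.array([-1.0, 0.0, +1.0])           # dB adjustments (actions)
n_states = 21                                        # discretised SINR error in dB
Q = np.zeros((n_states, len(power_steps)))

def to_state(sinr_err_db):
    return int(np.clip(round(sinr_err_db), -10, 10)) + 10

def q_update(s, a, r, s_next, alpha=0.1, gamma=0.9):
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# one illustrative episode against a crude SINR model: sinr = tx_power + channel + noise
target, tx_power, channel = 15.0, 10.0, -5.0
for _ in range(200):
    sinr = tx_power + channel + rng.normal(scale=0.5)
    s = to_state(target - sinr)
    a = rng.integers(3) if rng.random() < 0.1 else int(Q[s].argmax())   # epsilon-greedy
    tx_power += power_steps[a]
    sinr_next = tx_power + channel + rng.normal(scale=0.5)
    r = -abs(target - sinr_next)                     # reward: stay close to the target SINR
    q_update(s, a, r, to_state(target - sinr_next))
print(tx_power)
```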
A Survey of Augmented Reality Navigation
Navigation has been a popular area of research in both academia and industry. Combined with maps and different localization technologies, navigation systems have become robust and more usable. By combining navigation with augmented reality, it can be improved further to become realistic and user-friendly. This paper surveys existing research carried out in this area, describes existing techniques for building augmented reality navigation systems, and discusses the problems faced.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Strong Landau-quantization effects in high-magnetic-field superconductivity of a two-dimensional multiple-band metal near the Lifshitz transition
We investigate the onset of superconductivity in magnetic field for a clean two-dimensional multiple-band superconductor in the vicinity of the Lifshitz transition when one of the bands is very shallow. Due to small number of carriers in this band, the quasiclassical Werthamer-Helfand approximation breaks down and Landau quantization has to be taken into account. We found that the transition temperature TC2(H) has giant oscillations and is resonantly enhanced at the magnetic fields corresponding to full occupancy of the Landau levels in the shallow band. This enhancement is especially pronounced for the lowest Landau level. As a consequence, the reentrant superconducting regions in the temperature-field phase diagram emerge at low temperatures near the magnetic fields at which the chemical potential matches the Landau levels. The specific behavior depends on the relative strength of the intraband and interband pairing interactions and the reentrance is most pronounced in the purely interband coupling scenario. The reentrant behavior is suppressed by the Zeeman spin splitting in the shallow band, the separated regions disappear already for very small spin-splitting factors. On the other hand, the reentrance is restored in the resonance cases when the spin-splitting energy exactly matches the separation between the Landau levels. The predicted behavior may realize in the gate-tuned FeSe monolayer.
0
1
0
0
0
0
On boundary extension of mappings in metric spaces in terms of prime ends
We study the boundary behavior of the so-called ring $Q$-mappings obtained as a natural generalization of mappings with bounded distortion. We establish a series of conditions imposed on a function $Q(x)$ for the continuous extension of given mappings with respect to prime ends in domains with regular boundaries in metric spaces.
0
0
1
0
0
0
Temporally Evolving Community Detection and Prediction in Content-Centric Networks
In this work, we consider the problem of combining link, content and temporal analysis for community detection and prediction in evolving networks. Such temporal and content-rich networks occur in many real-life settings, such as bibliographic networks and question answering forums. Most of the work in the literature (that uses both content and structure) deals with static snapshots of networks, and they do not reflect the dynamic changes occurring over multiple snapshots. Incorporating dynamic changes in the communities into the analysis can also provide useful insights about the changes in the network such as the migration of authors across communities. In this work, we propose Chimera, a shared factorization model that can simultaneously account for graph links, content, and temporal analysis. This approach works by extracting the latent semantic structure of the network in multidimensional form, but in a way that takes into account the temporal continuity of these embeddings. Such an approach simplifies temporal analysis of the underlying network by using the embedding as a surrogate. A consequence of this simplification is that it is also possible to use this temporal sequence of embeddings to predict future communities. We present experimental results illustrating the effectiveness of the approach.
1
0
0
1
0
0
A Robust Utility Learning Framework via Inverse Optimization
In many smart infrastructure applications flexibility in achieving sustainability goals can be gained by engaging end-users. However, these users often have heterogeneous preferences that are unknown to the decision-maker tasked with improving operational efficiency. Modeling user interaction as a continuous game between non-cooperative players, we propose a robust parametric utility learning framework that employs constrained feasible generalized least squares estimation with heteroskedastic inference. To improve forecasting performance, we extend the robust utility learning scheme by employing bootstrapping with bagging, bumping, and gradient boosting ensemble methods. Moreover, we estimate the noise covariance which provides approximated correlations between players which we leverage to develop a novel correlated utility learning framework. We apply the proposed methods both to a toy example arising from Bertrand-Nash competition between two firms as well as to data from a social game experiment designed to encourage energy efficient behavior amongst smart building occupants. Using occupant voting data for shared resources such as lighting, we simulate the game defined by the estimated utility functions to demonstrate the performance of the proposed methods.
1
0
1
0
0
0
Evaluating stochastic seeding strategies in networks
When trying to maximize the adoption of a behavior in a population connected by a social network, it is common to strategize about where in the network to seed the behavior, often with an element of randomness. Selecting seeds uniformly at random is a basic but compelling strategy in that it distributes seeds broadly throughout the network. A more sophisticated stochastic strategy, one-hop targeting, is to select random network neighbors of random individuals; this exploits a version of the friendship paradox, whereby the friend of a random individual is expected to have more friends than a random individual, with the hope that seeding a behavior at more connected individuals leads to more adoption. Many seeding strategies have been proposed, but empirical evaluations have demanded large field experiments designed specifically for this purpose and have yielded relatively imprecise comparisons of strategies. Here we show how stochastic seeding strategies can be evaluated more efficiently in such experiments, how they can be evaluated "off-policy" using existing data arising from experiments designed for other purposes, and how to design more efficient experiments. In particular, we consider contrasts between stochastic seeding strategies and analyze nonparametric estimators adapted from policy evaluation and importance sampling. We use simulations on real networks to show that the proposed estimators and designs can dramatically increase precision while yielding valid inference. We then apply our proposed estimators to two field experiments, one that assigned households to an intensive marketing intervention and one that assigned students to an anti-bullying intervention.
1
0
0
0
0
0
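The one-hop targeting strategy described in the abstract above can be illustrated with a short script that compares the degrees reached by uniform random seeding and by seeding a random neighbor of a random node (the friendship paradox); the graph model and seed budget are arbitrary choices, not the field experiments analyzed in the paper.

```python
import random
import networkx as nx

random.seed(1)
G = nx.barabasi_albert_graph(n=2000, m=3)     # heavy-tailed degree distribution, illustrative only
k = 50                                        # seed budget (assumed)

# Uniform random seeding: k nodes chosen uniformly at random.
uniform_seeds = random.sample(list(G.nodes()), k)

# One-hop targeting: pick k random nodes, then seed a random neighbor of each,
# exploiting the friendship paradox (a random friend tends to have more friends).
one_hop_seeds = [random.choice(list(G.neighbors(v)))
                 for v in random.sample(list(G.nodes()), k)]

def mean_degree(nodes):
    return sum(G.degree(v) for v in nodes) / len(nodes)

print(f"mean degree, uniform seeds: {mean_degree(uniform_seeds):.1f}")
print(f"mean degree, one-hop seeds: {mean_degree(one_hop_seeds):.1f}")
```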
Flux noise in a superconducting transmission line
We study a superconducting transmission line (TL) formed by distributed LC oscillators and excited by external magnetic fluxes arising from random magnetization (A) placed in the substrate or (B) distributed at the interfaces of a two-wire TL. The low-frequency dynamics of the random magnetic field is described by a diffusion Langevin equation with a short-range source caused by (a) the random amplitude or (b) the gradient of the magnetization. For a TL modeled as a two-port network with open and shorted ends, the effective magnetic flux at the open end depends non-locally on the noise distribution along the TL. The flux-flux correlation function is evaluated and analyzed for the regimes (Aa), (Ab), (Ba), and (Bb). Significant frequency dispersion takes place around the inverse diffusion time of the random flux along the TL. Typically, the noise effect increases with size faster than the area of the TL. The flux-flux correlator can be verified both via the population relaxation rate of the qubit, which is formed by the Josephson junction shunted by the TL with flux noises, and via the random voltage at the open end of the TL.
0
1
0
0
0
0
Relativistic Newtonian Dynamics for Objects and Particles
Relativistic Newtonian Dynamics (RND) was introduced in a series of recent papers by the author, in partial cooperation with J. M. Steiner. RND is capable of describing non-classical behavior of motion under a central attracting force. RND incorporates the influence of potential energy on spacetime in Newtonian dynamics, treating gravity as a force in flat spacetime. It was shown that this dynamics accurately predicts gravitational time dilation, the anomalous precession of Mercury, and the periastron advance of any binary. In this paper the model is further refined and extended to describe the motion of both objects with non-zero mass and massless particles under a conservative attracting force. It is shown that for any conservative force a properly defined energy is conserved on the trajectories, and if this force is central, the angular momentum is also preserved. An RND equation of motion is derived for motion under a conservative force. As an application, it is shown that RND also accurately predicts the Shapiro time delay - the fourth test of GR.
0
1
0
0
0
0
Spatio-Temporal Structured Sparse Regression with Hierarchical Gaussian Process Priors
This paper introduces a new sparse spatio-temporal structured Gaussian process regression framework for online and offline Bayesian inference. This is the first framework that gives a time-evolving representation of the interdependencies between the components of the sparse signal of interest. A hierarchical Gaussian process describes such structure and the interdependencies are represented via the covariance matrices of the prior distributions. The inference is based on the expectation propagation method and the theoretical derivation of the posterior distribution is provided in the paper. The inference framework is thoroughly evaluated over synthetic, real video and electroencephalography (EEG) data where the spatio-temporal evolving patterns need to be reconstructed with high accuracy. It is shown that it achieves 15% improvement of the F-measure compared with the alternating direction method of multipliers, spatio-temporal sparse Bayesian learning method and one-level Gaussian process model. Additionally, the required memory for the proposed algorithm is less than in the one-level Gaussian process model. This structured sparse regression framework is of broad applicability to source localisation and object detection problems with sparse signals.
0
0
0
1
0
0
Answer Set Solving with Bounded Treewidth Revisited
Parameterized algorithms are a way to solve hard problems more efficiently, given that a specific parameter of the input is small. In this paper, we apply this idea to the field of answer set programming (ASP). To this end, we propose two kinds of graph representations of programs to exploit their treewidth as a parameter. Treewidth roughly measures to which extent the internal structure of a program resembles a tree. Our main contribution is the design of parameterized dynamic programming algorithms, which run in linear time if the treewidth and weights of the given program are bounded. Compared to previous work, our algorithms handle the full syntax of ASP. Finally, we report on an empirical evaluation that shows good runtime behaviour for benchmark instances of low treewidth, especially for counting answer sets.
1
0
0
0
0
0
Community structure: A comparative evaluation of community detection methods
Discovering community structure in complex networks is a mature field since a tremendous number of community detection methods have been introduced in the literature. Nevertheless, it is still very challenging for practitioners to determine which method would be suitable to get insights into the structural information of the networks they study. Many recent efforts have been devoted to investigating various quality scores of the community structure, but the problem of distinguishing between different types of communities is still open. In this paper, we propose a comparative, extensive and empirical study to investigate what types of communities many state-of-the-art and well-known community detection methods are producing. Specifically, we provide comprehensive analyses on computation time, community size distribution, a comparative evaluation of methods according to their optimisation schemes as well as a comparison of their partitioning strategy through validation metrics. We perform our analyses on a very large corpus of hundreds of networks from five different network categories and propose ways to classify community detection methods, helping a potential user to navigate the complex landscape of community detection.
1
0
0
0
0
0
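As a tiny, self-contained example of the kind of comparison described in the abstract above, the script below runs two community detection methods shipped with networkx on a small benchmark graph and scores each partition against a known grouping with normalized mutual information; the chosen graph, methods, and metric are ours and do not reproduce the paper's corpus or evaluation protocol.

```python
import networkx as nx
from networkx.algorithms import community
from sklearn.metrics import normalized_mutual_info_score

G = nx.karate_club_graph()
truth = [G.nodes[v]["club"] for v in G]              # the known two-faction split

def to_labels(partition):
    # Convert a list of node sets into one label per node, in node order.
    labels = {}
    for cid, nodes in enumerate(partition):
        for v in nodes:
            labels[v] = cid
    return [labels[v] for v in G]

methods = {
    "greedy modularity": community.greedy_modularity_communities(G),
    "label propagation": community.label_propagation_communities(G),
}
for name, part in methods.items():
    labels = to_labels(list(part))
    nmi = normalized_mutual_info_score(truth, labels)
    print(f"{name}: {len(set(labels))} communities, NMI vs. factions = {nmi:.2f}")
```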
Classification of isoparametric submanifolds admitting a reflective focal submanifold in symmetric spaces of non-compact type
In this paper, we assume that all isoparametric submanifolds have flat sections. The main purpose of this paper is to prove that, if a full irreducible complete isoparametric submanifold of codimension greater than one in a symmetric space of non-compact type admits a reflective focal submanifold and if it is real analytic, then it is a principal orbit of a Hermann type action on the symmetric space. A hyperpolar action on a symmetric space of non-compact type admits a reflective singular orbit if and only if it is a Hermann type action. Hence the assumption that the isoparametric submanifold admits a reflective focal submanifold is not superfluous. Also, we prove that, if a full irreducible complete isoparametric submanifold of codimension greater than one in a symmetric space of non-compact type satisfies some additional conditions, then it is a principal orbit of the isotropy action of the symmetric space, where we need not assume that the submanifold is real analytic. We use building theory in the proof.
0
0
1
0
0
0
Training Group Orthogonal Neural Networks with Privileged Information
Learning rich and diverse representations is critical for the performance of deep convolutional neural networks (CNNs). In this paper, we consider how to use privileged information to promote the inherent diversity of a single CNN model such that the model can learn better representations and offer stronger generalization ability. To this end, we propose a novel group orthogonal convolutional neural network (GoCNN) that learns untangled representations within each layer by exploiting provided privileged information and enhances representation diversity effectively. We take image classification as an example where image segmentation annotations are used as privileged information during the training process. Experiments on two benchmark datasets -- ImageNet and PASCAL VOC -- clearly demonstrate the strong generalization ability of our proposed GoCNN model. On the ImageNet dataset, GoCNN improves the performance of the state-of-the-art ResNet-152 model by an absolute value of 1.2% while using privileged information for only 10% of the training images, confirming the effectiveness of GoCNN in utilizing available privileged knowledge to train better CNNs.
1
0
0
0
0
0
Antiunitary representations and modular theory
Antiunitary representations of Lie groups take values in the group of unitary and antiunitary operators on a Hilbert space H. In quantum physics, antiunitary operators implement time inversion or a PCT symmetry, and in the modular theory of operator algebras they arise as modular conjugations from cyclic separating vectors of von Neumann algebras. We survey some of the key concepts at the borderline between the theory of local observables (Quantum Field Theory (QFT) in the sense of Araki--Haag--Kastler) and modular theory of operator algebras from the perspective of antiunitary group representations. Here a central point is to encode modular objects in standard subspaces V in H which in turn are in one-to-one correspondence with antiunitary representations of the multiplicative group R^x. Half-sided modular inclusions and modular intersections of standard subspaces correspond to antiunitary representations of Aff(R), and these provide the basic building blocks for a general theory started in the 90s with the ground breaking work of Borchers and Wiesbrock and developed in various directions in the QFT context. The emphasis of these notes lies on the translation between configurations of standard subspaces as they arise in the context of modular localization developed by Brunetti, Guido and Longo, and the more classical context of von Neumann algebras with cyclic separating vectors. Our main point is that configurations of standard subspaces can be studied from the perspective of antiunitary Lie group representations and the geometry of the corresponding spaces, which are often fiber bundles over ordered symmetric spaces. We expect this perspective to provide new and systematic insight into the much richer configurations of nets of local observables in QFT.
0
0
1
0
0
0
ARTENOLIS: Automated Reproducibility and Testing Environment for Licensed Software
Motivation: Automatically testing changes to code is an essential feature of continuous integration. For open-source code without licensed dependencies, a variety of continuous integration services exist. The COnstraint-Based Reconstruction and Analysis (COBRA) Toolbox is a suite of open-source code for computational modelling with dependencies on licensed software. A novel automated framework of continuous integration in a semi-licensed environment is required for the development of the COBRA Toolbox and related tools of the COBRA community. Results: ARTENOLIS is a general-purpose infrastructure software application that implements continuous integration for open-source software with licensed dependencies. It uses a master-slave framework and tests code on multiple operating systems and with multiple versions of licensed software dependencies. ARTENOLIS ensures the stability, integrity, and cross-platform compatibility of code in the COBRA Toolbox and related tools. Availability and Implementation: The continuous integration server, the core of the reproducibility and testing infrastructure, can be freely accessed at artenolis.lcsb.uni.lu. The continuous integration framework code is located in the /.ci directory and at the root of the repository, which is freely available under github.com/opencobra/cobratoolbox.
1
0
0
0
0
0
Droplet states in quantum XXZ spin systems on general graphs
We study XXZ spin systems on general graphs. In particular, we describe the formation of droplet states near the bottom of the spectrum in the Ising phase of the model, where the Z-term dominates the XX-term. As key tools we use particle number conservation of XXZ systems and symmetric products of graphs with their associated adjacency matrices and Laplacians. Of particular interest to us are strips and multi-dimensional Euclidean lattices, for which we discuss the existence of spectral gaps above the droplet regime. We also prove a Combes-Thomas bound which shows that the eigenstates in the droplet regime are exponentially small perturbations of strict (classical) droplets.
0
0
1
0
0
0
Meta-learners for Estimating Heterogeneous Treatment Effects using Machine Learning
There is growing interest in estimating and analyzing heterogeneous treatment effects in experimental and observational studies. We describe a number of meta-algorithms that can take advantage of any supervised learning or regression method in machine learning and statistics to estimate the Conditional Average Treatment Effect (CATE) function. Meta-algorithms build on base algorithms---such as Random Forests (RF), Bayesian Additive Regression Trees (BART) or neural networks---to estimate the CATE, a function that the base algorithms are not designed to estimate directly. We introduce a new meta-algorithm, the X-learner, that is provably efficient when the number of units in one treatment group is much larger than in the other, and can exploit structural properties of the CATE function. For example, if the CATE function is linear and the response functions in treatment and control are Lipschitz continuous, the X-learner can still achieve the parametric rate under regularity conditions. We then introduce versions of the X-learner that use RF and BART as base learners. In extensive simulation studies, the X-learner performs favorably, although none of the meta-learners is uniformly the best. In two persuasion field experiments from political science, we demonstrate how our new X-learner can be used to target treatment regimes and to shed light on underlying mechanisms. A software package is provided that implements our methods.
0
0
1
1
0
0
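A minimal sketch of the X-learner recipe summarized in the abstract above, with random forests as base learners and a constant propensity score appropriate for a randomized experiment; the simulated data and hyperparameters are illustrative assumptions, not the paper's implementation or its accompanying software package.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
W = rng.integers(0, 2, size=n)                        # randomized treatment assignment
tau_true = 1.0 + X[:, 0]                              # illustrative heterogeneous effect
Y = X[:, 1] + W * tau_true + rng.normal(0.0, 1.0, n)

X0, Y0 = X[W == 0], Y[W == 0]
X1, Y1 = X[W == 1], Y[W == 1]

# Stage 1: separate outcome models for the control and treated groups.
mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X0, Y0)
mu1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X1, Y1)

# Stage 2: impute individual treatment effects, then fit a CATE model on each arm.
D1 = Y1 - mu0.predict(X1)
D0 = mu1.predict(X0) - Y0
tau1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X1, D1)
tau0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X0, D0)

# Stage 3: combine the two CATE estimates with a propensity weight g(x);
# here g = 0.5 because treatment was assigned at random.
g = 0.5
cate_hat = g * tau0.predict(X) + (1.0 - g) * tau1.predict(X)
print("RMSE of the CATE estimate:", np.sqrt(np.mean((cate_hat - tau_true) ** 2)))
```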
Counting triangles formula for the first Chern class of a circle bundle
We consider the problem of the combinatorial computation of the first Chern class of a circle bundle. N. Mnev found such a formula in terms of canonical shellings. It represents a certain invariant of a triangulation computed by analyzing a cyclic word in a 3-character alphabet associated to the bundle. This curvature is a kind of discretization of Kontsevich's curvature differential 2-form. We find a new expression for Mnev's curvature by counting triangles in a cyclic word. Our formula is different from that of Mnev; in particular, it is cyclically invariant by its very form. We also present some sample computations of this invariant and provide a small Mathematica code for its computation.
0
0
1
0
0
0
Probing homogeneity with standard candles
We show that standard candles can provide some valuable information about the density contrast, which could be particularly important at redshifts where other observations are not available. We use an inversion method to reconstruct the local radial density profile from luminosity distance observations assuming background cosmological parameters obtained from large scale observations. Using type Ia supernovae (SNe), Cepheids and the cosmological parameters from the Planck mission we reconstruct the radial density profiles along two different directions of the sky. We compare these profiles to other density maps obtained from luminosity density, in particular Keenan et al. 2013 and the 2M++ galaxy catalogue. The method independently confirms the existence of inhomogeneities, could be particularly useful to correctly normalize density maps from galaxy surveys with respect to the average density of the Universe, and could clarify the apparent discrepancy between local and large scale estimations of the Hubble constant. When better observational supernova data become available, the accuracy of the reconstructed density profiles will improve and will allow further investigation of the existence of structures whose size is beyond the reach of galaxy surveys.
0
1
0
0
0
0
Fabrication tolerant chalcogenide mid-infrared multimode interference coupler design with application for Bracewell nulling interferometry
Understanding exoplanet formation and finding potentially habitable exoplanets is vital to an enhanced understanding of the universe. The use of nulling interferometry to strongly attenuate the central starlight provides the opportunity to see objects closer to the star than ever before. Given that exoplanets are usually warm, the 4 micron mid-infrared region is advantageous for such observations. The key performance parameters for a nulling interferometer are the extinction ratio it can attain and how well that is maintained across the operational bandwidth. Both parameters depend on the design and fabrication accuracy of the subcomponents and their wavelength dependence. Via detailed simulation it is shown in this paper that a planar chalcogenide photonic chip, consisting of three highly fabrication tolerant multimode interference couplers, can exceed an extinction ratio of 60 dB in double nulling operation and up to 40 dB for a single nulling operation across a wavelength window of 3.9 to 4.2 microns. This provides a beam combiner with sufficient performance, in theory, to image exoplanets.
0
1
0
0
0
0
Accurate Bayesian Data Classification without Hyperparameter Cross-validation
We extend the standard Bayesian multivariate Gaussian generative data classifier by considering a generalization of the conjugate, normal-Wishart prior distribution and by deriving the hyperparameters analytically via evidence maximization. The behaviour of the optimal hyperparameters is explored in the high-dimensional data regime. The classification accuracy of the resulting generalized model is competitive with state-of-the art Bayesian discriminant analysis methods, but without the usual computational burden of cross-validation.
1
0
1
1
0
0
Offline Biases in Online Platforms: a Study of Diversity and Homophily in Airbnb
How diverse are sharing economy platforms? Are they fair marketplaces, where all participants operate on a level playing field, or are they large-scale online aggregators of offline human biases? Often portrayed as easy-to-access digital spaces whose participants receive equal opportunities, such platforms have recently come under fire due to reports of discriminatory behaviours among their users, and have been associated with gentrification phenomena that exacerbate preexisting inequalities along racial lines. In this paper, we focus on the Airbnb sharing economy platform, and analyse the diversity of its user base across five large cities. We find it to be predominantly young, female, and white. Notably, we find this to be true even in cities with a diverse racial composition. We then introduce a method based on the statistical analysis of networks to quantify behaviours of homophily, heterophily and avoidance between Airbnb hosts and guests. Depending on cities and property types, we do find signals of such behaviours relating both to race and gender. We use these findings to provide platform design recommendations, aimed at exposing and possibly reducing the biases we detect, in support of a more inclusive growth of sharing economy platforms.
1
0
0
0
0
0
Parameter Estimation for Thurstone Choice Models
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error.
0
0
1
1
0
0
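To make the top-1 Luce/Bradley-Terry setting concrete, the sketch below performs plain maximum-likelihood estimation with the classical minorization-maximization (MM) update of Hunter (2004); it illustrates the choice model itself rather than the paper's error characterization or rank-breaking estimators, and the synthetic data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_obs = 6, 3000
w_true = np.exp(rng.normal(0.0, 1.0, n_items))        # latent item strengths (illustrative)

# Each observation: a random comparison set of 3 items and the item chosen
# with probability proportional to its strength (the Luce choice rule).
observations = []
for _ in range(n_obs):
    A = rng.choice(n_items, size=3, replace=False)
    p = w_true[A] / w_true[A].sum()
    observations.append((A, A[rng.choice(3, p=p)]))

wins = np.zeros(n_items)
for A, winner in observations:
    wins[winner] += 1

# MM update: w_i <- wins_i / sum over comparison sets containing i of 1 / (set's total strength).
w = np.ones(n_items)
for _ in range(100):
    denom = np.zeros(n_items)
    for A, _ in observations:
        denom[A] += 1.0 / w[A].sum()
    w = wins / denom
    w /= w.sum()

print("true strengths (normalized):", np.round(w_true / w_true.sum(), 3))
print("MLE estimates:              ", np.round(w, 3))
```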
Quantitative modeling and analysis of bifurcation-induced bursting
Modeling and parameter estimation for neuronal dynamics are often challenging because many parameters can range over orders of magnitude and are difficult to measure experimentally. Moreover, selecting a suitable model complexity requires a sufficient understanding of the model's potential use, such as highlighting essential mechanisms underlying qualitative behavior or precisely quantifying realistic dynamics. We present a novel approach that can guide model development and tuning to achieve desired qualitative and quantitative solution properties. Our approach relies on the presence of disparate time scales and employs techniques of separating the dynamics of fast and slow variables, which are well known in the analysis of qualitative solution features. We build on these methods to show how it is also possible to obtain quantitative solution features by imposing designed dynamics for the slow variables in the form of specified two-dimensional paths in a bifurcation-parameter landscape.
0
1
1
0
0
0
Uniqueness and radial symmetry of minimizers for a nonlocal variational problem
In this paper we prove the uniqueness and radial symmetry of minimizers for variational problems that model several phenomena. The uniqueness is a consequence of the convexity of the functional. The main technique is Fourier transform of tempered distributions.
0
0
1
0
0
0
Improved Accounting for Differentially Private Learning
We consider the problem of differential privacy accounting, i.e. estimation of privacy loss bounds, in machine learning in a broad sense. We propose two versions of a generic privacy accountant suitable for a wide range of learning algorithms. Both versions are derived in a simple and principled way using well-known tools from probability theory, such as concentration inequalities. We demonstrate that our privacy accountant is able to achieve state-of-the-art estimates of DP guarantees and can be applied to new areas like variational inference. Moreover, we show that the latter enjoys differential privacy at minor cost.
1
0
0
1
0
0
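To give a concrete flavor of privacy-loss accounting, the snippet below compares naive composition with the classical advanced composition bound for k repetitions of an (eps, delta)-differentially-private step; these are textbook bounds, not the accountant proposed in the paper, and the numerical values are arbitrary.

```python
import math

def naive_composition(eps, delta, k):
    # Basic composition: per-step privacy losses simply add up.
    return k * eps, k * delta

def advanced_composition(eps, delta, k, delta_prime=1e-6):
    # Advanced composition (Dwork-Rothblum-Vadhan): a tighter epsilon for many
    # steps, at the cost of an extra delta_prime slack in the delta term.
    eps_total = math.sqrt(2 * k * math.log(1.0 / delta_prime)) * eps \
                + k * eps * (math.exp(eps) - 1.0)
    return eps_total, k * delta + delta_prime

eps, delta, k = 0.1, 1e-7, 500        # per-step guarantee and number of steps (illustrative)
print("naive composition:   ", naive_composition(eps, delta, k))
print("advanced composition:", advanced_composition(eps, delta, k))
```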
Scattertext: a Browser-Based Tool for Visualizing how Corpora Differ
Scattertext is an open source tool for visualizing linguistic variation between document categories in a language-independent way. The tool presents a scatterplot, where each axis corresponds to the rank-frequency at which a term occurs in a category of documents. Through a tie-breaking strategy, the tool is able to display thousands of visible term-representing points and find space to legibly label hundreds of them. Scattertext also lends itself to a query-based visualization of how the use of terms with similar embeddings differs between document categories, as well as a visualization for comparing the importance scores of bag-of-words features to univariate metrics.
1
0
0
0
0
0
Tight contact structures on Seifert surface complements
We consider complements of standard Seifert surfaces of special alternating links. On these handlebodies, we use Honda's method to enumerate those tight contact structures whose dividing sets are isotopic to the link, and find their number to be the leading coefficient of the Alexander polynomial. The Euler classes of the contact structures are identified with hypertrees in a certain hypergraph. Using earlier work, this establishes a connection between contact topology and the Homfly polynomial. We also show that the contact invariants of our tight contact structures form a basis for sutured Floer homology. Finally, we relate our methods and results to Kauffman's formal knot theory.
0
0
1
0
0
0
Modular invariant representations of the $\mathcal{N}=2$ superconformal algebra
We compute the modular transformation formula of the characters for a certain family of (finitely or uncountably many) simple modules over the simple $\mathcal{N}=2$ vertex operator superalgebra of central charge $c_{p,p'}=3\left(1-\frac{2p'}{p}\right),$ where $(p,p')$ is a pair of coprime positive integers such that $p\geq2$. When $p'=1$, the formula coincides with that of the $\mathcal{N}=2$ unitary minimal series found by F. Ravanini and S.-K. Yang. In addition, we study the properties of the corresponding "modular $S$-matrix", which is no longer a matrix if $p'\geq2$.
0
0
1
0
0
0
The CODALEMA/EXTASIS experiment: Contributions to the 35th International Cosmic Ray Conference (ICRC 2017)
Contributions of the CODALEMA/EXTASIS experiment to the 35th International Cosmic Ray Conference, 12-20 July 2017, Busan, South Korea.
0
1
0
0
0
0
Distributed Kernel K-Means for Large Scale Clustering
Clustering samples according to an effective metric and/or vector space representation is a challenging unsupervised learning task with a wide spectrum of applications. Among several clustering algorithms, k-means and its kernelized version still have a wide audience because of their conceptual simplicity and efficacy. However, the systematic application of the kernelized version of k-means is hampered by its inherent quadratic scaling in memory with the number of samples. In this contribution, we devise an approximate strategy to minimize the kernel k-means cost function in which the trade-off between accuracy and velocity is automatically governed by the available system memory. Moreover, we define an ad-hoc parallelization scheme well suited for hybrid CPU-GPU state-of-the-art parallel architectures. We prove the effectiveness both of the approximation scheme and of the parallelization method on standard UCI datasets and on molecular dynamics (MD) data in the realm of computational chemistry. In this applicative domain, clustering can play a key role both for quantitatively estimating kinetic rates via Markov State Models and for giving a qualitative, human-readable summarization of the underlying chemical phenomenon under study. For these reasons, we selected it as a valuable real-world application scenario.
0
0
0
1
0
0
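For reference, here is a plain (non-distributed, exact) kernel k-means sketch with an RBF kernel, showing how per-point cluster distances are computed directly from the kernel matrix; it only spells out the baseline cost function discussed in the abstract above, not the paper's memory-aware approximation or CPU-GPU parallelization, and the toy data and bandwidth are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two noisy concentric rings: not linearly separable, a classic kernel k-means case.
theta = rng.uniform(0.0, 2.0 * np.pi, 400)
radius = np.r_[np.full(200, 1.0), np.full(200, 3.0)] + rng.normal(0.0, 0.1, 400)
X = np.c_[radius * np.cos(theta), radius * np.sin(theta)]

# RBF kernel matrix (illustrative bandwidth).
gamma = 2.0
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq_dists)

k = 2
labels = rng.integers(0, k, len(X))
for _ in range(30):
    dist = np.empty((len(X), k))
    for c in range(k):
        idx = np.where(labels == c)[0]
        if idx.size == 0:                      # guard against an emptied cluster
            dist[:, c] = np.inf
            continue
        # ||phi(x_i) - m_c||^2 = K_ii - (2/|c|) sum_j K_ij + (1/|c|^2) sum_{j,l} K_jl
        dist[:, c] = (np.diag(K)
                      - 2.0 * K[:, idx].mean(axis=1)
                      + K[np.ix_(idx, idx)].mean())
    new_labels = dist.argmin(axis=1)
    if np.array_equal(new_labels, labels):
        break
    labels = new_labels

print("cluster sizes:", np.bincount(labels, minlength=k))
```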
The Origin of Solar Filament Plasma Inferred from in situ Observations of Elemental Abundances
Solar filaments/prominences are one of the most common features in the corona, which may lead to energetic coronal mass ejections (CMEs) and flares when they erupt. Filaments are about one hundred times cooler and denser than the coronal material, and physical understanding of their material origin remains controversial. Two types of scenarios have been proposed: one argues that the filament plasma is brought into the corona from photosphere or chromosphere through a siphon or evaporation/injection process, while the other suggests that the material condenses from the surrounding coronal plasma due to thermal instability. The elemental abundance analysis is a reasonable clue to constrain the models, as the siphon or evaporation/injection model would predict that the filament material abundances are close to the photospheric or chromospheric ones, while the condensation model should have coronal abundances. In this letter, we analyze the elemental abundances of a magnetic cloud that contains the ejected filament material. The corresponding filament eruption occurred on 1998 April 29, accompanying an M6.8 class soft X-ray flare located at the heliographic coordinates S18E20 (NOAA 08210) and a fast halo CME with the linear velocity of 1374 km s$^{-1}$ near the Sun. We find that the abundance ratios of elements with low and high First Ionization Potential such as Fe/O, Mg/O, and Si/O are 0.150, 0.050, and 0.070, respectively, approaching their corresponding photospheric values 0.065, 0.081, and 0.066, which does not support the coronal origin of the filament plasma.
0
1
0
0
0
0
Distributed Stochastic Optimization via Adaptive SGD
Stochastic convex optimization algorithms are the most popular way to train machine learning models on large-scale data. Scaling up the training process of these models is crucial, but the most popular algorithm, Stochastic Gradient Descent (SGD), is a serial method that is surprisingly hard to parallelize. In this paper, we propose an efficient distributed stochastic optimization method by combining adaptivity with variance reduction techniques. Our analysis yields a linear speedup in the number of machines, constant memory footprint, and only a logarithmic number of communication rounds. Critically, our approach is a black-box reduction that parallelizes any serial online learning algorithm, streamlining prior analysis and allowing us to leverage the significant progress that has been made in designing adaptive algorithms. In particular, we achieve optimal convergence rates without any prior knowledge of smoothness parameters, yielding a more robust algorithm that reduces the need for hyperparameter tuning. We implement our algorithm in the Spark distributed framework and exhibit dramatic performance gains on large-scale logistic regression problems.
0
0
0
1
0
0
Bilipschitz Equivalence of Trees and Hyperbolic Fillings
We combine conditions found in [Wh] with results from [MPR] to show that quasi-isometries between uniformly discrete bounded geometry spaces that satisfy linear isoperimetric inequalities are within bounded distance to bilipschitz equivalences. We apply this result to regularly branching trees and hyperbolic fillings of metric spaces.
0
0
1
0
0
0
On the contribution of thermal excitation to the total 630.0 nm emissions in the northern cusp ionosphere
Direct impact excitation by precipitating electrons is believed to be the main source of 630.0 nm emissions in the cusp ionosphere. However, this paper investigates a different source: 630.0 nm emissions caused by thermally excited atomic oxygen O$(^{1}$D) when high electron temperatures prevail in the cusp. On 22 January 2012 and 14 January 2013, the European Incoherent Scatter Scientific Association (EISCAT) radar on Svalbard measured electron temperature enhancements exceeding 3000 K near magnetic noon in the cusp ionosphere over Svalbard. The electron temperature enhancements corresponded to electron density enhancements exceeding $10^{11}$m$^{-3}$ accompanied by intense 630.0 nm emissions in a field of view common to both the EISCAT Svalbard radar and a meridian scanning photometer. This offered an excellent opportunity to investigate the role of thermally excited O$(^{1}$D) 630.0 nm emissions in the cusp ionosphere. The thermal component was derived from the EISCAT radar measurements and compared with optical data. For both events the calculated thermal component had a correlation coefficient greater than 0.8 with the total observed 630.0 nm intensity, which contains both thermal and particle impact components. Despite fairly constant solar wind, the calculated thermal component intensity fluctuated, possibly due to dayside transients in the aurora.
0
1
0
0
0
0
Dose finding for new vaccines: the role for immunostimulation/immunodynamic modelling
Current methods to optimize vaccine dose are purely empirically based, whereas in the drug development field, dosing determinations use far more advanced quantitative methodology to accelerate decision-making. Applying these established methods in the field of vaccine development may reduce the currently large clinical trial sample sizes, long time frames and high costs, and ultimately has a better potential to save lives. We propose the field of immunostimulation/immunodynamic (IS/ID) modelling, which aims to translate mathematical frameworks used for drug dosing towards optimizing vaccine dose decision-making. Analogous to PK/PD modelling, IS/ID modelling approaches apply mathematical models to describe the underlying mechanisms by which the immune response is stimulated by vaccination (IS) and the resulting measured immune response dynamics (ID). To move IS/ID modelling forward, existing datasets and further data on vaccine allometry and dose-dependent dynamics need to be generated and collated, requiring a collaborative environment with input from academia, industry, regulators, and governmental and non-governmental agencies to share modelling expertise and connect modellers to vaccine data.
0
0
0
0
1
0
On Learning Mixtures of Well-Separated Gaussians
We consider the problem of efficiently learning mixtures of a large number of spherical Gaussians, when the components of the mixture are well separated. In the most basic form of this problem, we are given samples from a uniform mixture of $k$ standard spherical Gaussians, and the goal is to estimate the means up to accuracy $\delta$ using $poly(k,d, 1/\delta)$ samples. In this work, we study the following question: what is the minimum separation needed between the means for solving this task? The best known algorithm due to Vempala and Wang [JCSS 2004] requires a separation of roughly $\min\{k,d\}^{1/4}$. On the other hand, Moitra and Valiant [FOCS 2010] showed that with separation $o(1)$, exponentially many samples are required. We address the significant gap between these two bounds, by showing the following results. 1. We show that with separation $o(\sqrt{\log k})$, super-polynomially many samples are required. In fact, this holds even when the $k$ means of the Gaussians are picked at random in $d=O(\log k)$ dimensions. 2. We show that with separation $\Omega(\sqrt{\log k})$, $poly(k,d,1/\delta)$ samples suffice. Note that the bound on the separation is independent of $\delta$. This result is based on a new and efficient "accuracy boosting" algorithm that takes as input coarse estimates of the true means and in time $poly(k,d, 1/\delta)$ outputs estimates of the means up to arbitrary accuracy $\delta$ assuming the separation between the means is $\Omega(\min\{\sqrt{\log k},\sqrt{d}\})$ (independently of $\delta$). We also present a computationally efficient algorithm in $d=O(1)$ dimensions with only $\Omega(\sqrt{d})$ separation. These results together essentially characterize the optimal order of separation between components that is needed to learn a mixture of $k$ spherical Gaussians with polynomial samples.
1
0
1
0
0
0
Sylvester's Problem and Mock Heegner Points
We prove that if $p \equiv 4,7 \pmod{9}$ is prime and $3$ is not a cube modulo $p$, then both of the equations $x^3+y^3=p$ and $x^3+y^3=p^2$ have a solution with $x,y \in \mathbb{Q}$.
0
0
1
0
0
0
Community Detection in the Network of German Princes in 1225: a Case Study
Many social networks exhibit some underlying community structure. In particular, in the context of historical research, clustering of different groups into warring or friendly factions can lead to a better understanding of how conflicts may arise, and whether they could be avoided or not. In this work we study the crisis that started in 1225 when the Emperor of the Holy Roman Empire, Frederick II and his son Henry VII got into a conflict which almost led to the rupture and dissolution of the Empire. We use a spin-glass-based community detection algorithm to see how good this method is in detecting this rift and compare the results with an analysis performed by one of the authors (Gramsch) using standard social balance theory applied to History.
1
1
0
0
0
0
Coupon Advertising in Online Social Systems: Algorithms and Sampling Techniques
Online social systems have become important platforms for viral marketing, where the advertising of products is carried out through the communication of users. After adopting a product, the seed buyers may spread the information to their friends via online messages, e.g. posts and tweets. Separately, electronic coupon systems are one of the relevant promotion vehicles that help manufacturers and retailers attract more potential customers. By offering coupons to seed buyers, there is a chance to convince influential users who are, however, at first not very interested in the product. In this paper, we propose a coupon based online influence model and consider the problem of how to maximize the profit by selecting appropriate seed buyers. The considered problem is markedly different from other influence related problems as its objective function is not monotone. We provide an algorithmic analysis and give several algorithms designed with different sampling techniques. In particular, we propose the RA-T and RA-S algorithms, which are not only provably effective but also scalable on large datasets. The proposed theoretical results are evaluated by extensive experiments on large-scale real-world social networks. The analysis of this paper also provides an algorithmic framework for non-monotone submodular maximization problems in social networks.
1
0
0
0
0
0
Statistical study of auroral omega bands
The presence of very few statistical studies on auroral omega bands motivated us to test-use a semi-automatic method for identifying large-scale undulations of the diffuse aurora boundary and to investigate their occurrence. Five identical all-sky cameras with overlapping fields of view provided data for 438 auroral omega-like structures over Fennoscandian Lapland from 1996 to 2007. The results from this set of omega band events agree remarkably well with previous observations of omega band occurrence in magnetic local time (MLT), lifetime, location between the region 1 and 2 field-aligned currents, as well as current density estimates. The average peak emission height of omega forms corresponds to the estimated precipitation energies of a few keV, which experienced no significant change during the events. Analysis of both local and global magnetic indices demonstrates that omega bands are observed during substorm expansion and recovery phases that are more intense than average substorm expansion and recovery phases in the same region. The omega occurrence with respect to the substorm expansion and recovery phases is in a very good agreement with an earlier observed distribution of fast earthward flows in the plasma sheet during expansion and recovery phases. These findings support the theory that omegas are produced by fast earthward flows and auroral streamers, despite the rarity of good conjugate observations.
0
1
0
0
0
0
Efficient Lightweight Encryption Algorithm for Smart Video Applications
Future generation networks such as the Internet of Things (IoT), in combination with advanced computer vision techniques, pose new challenges for securing videos for end-users. Visual devices generally have constrained resources with respect to their low computation power, small memory and limited power supply. Therefore, to facilitate video security in smart environments, lightweight security schemes are required instead of the inefficient traditional cryptography algorithms. This research paper provides a solution to overcome such problems. A novel lightweight cipher algorithm is proposed here which targets multimedia in IoT, with the in-house name EXPer, i.e. Extended Permutation with eXclusive OR (XOR). EXPer is a symmetric stream cipher that consists of simple XOR and left shift operations with three keys of 128 bits. The proposed cipher algorithm has been tested on various sample videos. The proposed algorithm has been compared with the traditional cipher algorithms XOR and the Advanced Encryption Standard (AES). Visual results confirm that EXPer provides a security level equivalent to the AES algorithm at a lower computational cost than AES. Therefore, it can easily be perceived that EXPer is a better replacement for AES for securing real-time video applications in IoT.
1
0
0
0
0
0
Inter-site pair superconductivity: origins and recent validation experiments
The challenge of understanding high-temperature superconductivity has led to a plethora of ideas, but 30 years after its discovery in cuprates, very few have achieved convincing experimental validation. While Hubbard and t-J models were given a lot of attention, a number of recent experiments appear to give decisive support to the model of real-space inter-site pairing and percolative superconductivity in cuprates. Systematic measurements of the doping dependence of the superfluid density show a linear dependence on superfluid density - rather than doping - over the entire phase diagram, in accordance with the model's predictions. The doping-dependence of the anomalous lattice dynamics of in-plane Cu-O mode vibrations observed by inelastic neutron scattering, gives remarkable reciprocal space signature of the inter-site pairing interaction whose doping dependence closely follows the predicted pair density. Symmetry-specific time-domain spectroscopy shows carrier localization, polaron formation, pairing and superconductivity to be distinct processes occurring on distinct timescales throughout the entire superconducting phase diagram. The three diverse experimental results confirm non-trivial predictions made more than a decade ago by the inter-site pairing model in the cuprates, remarkably also confirming some of the fundamental notions mentioned in the seminal paper on the discovery of high-temperature superconductivity in cuprates.
0
1
0
0
0
0
Non-standard FDTD implementation of the Schrödinger equation
In this work, we apply Cole's non-standard form of the FDTD to solve the time dependent Schrödinger equation. We deduce the equations for the non-standard FDTD considering an electronic wave function in the presence of potentials that can be higher or lower than the energy of the electron. The non-standard term is found to be almost the same, except for a sine function, which is transformed into a hyperbolic sine function, as the argument is imaginary when the potential has higher energy than the electron. Perfectly Matched Layers using this methodology are also presented.
0
1
0
0
0
0
The magnetic and electronic properties of Oxyselenides - influence of transition metal ions and lanthanides
Magnetic oxyselenides have been the topic of research for several decades, being first of interest in the context of photoconductivity and thermoelectricity owing to their intrinsic semiconducting properties and the ability to tune the energy gap through metal ion substitution. More recently, interest in the oxyselenides has experienced a resurgence owing to the possible relation to strongly correlated phenomena, given the fact that many oxyselenides share a similar structure to unconventional superconducting pnictides and chalcogenides. The two dimensional nature of many oxyselenide systems also draws an analogy to cuprate physics, where a strong interplay between unconventional electronic phases and localised magnetism has been studied for several decades. It is therefore timely to review the physics of the oxyselenides in the context of the broader field of strongly correlated magnetism and electronic phenomena. Here we review the current status and progress in this area of research with a focus on the influence of lanthanides and transition metal ions on the intertwined magnetic and electronic properties of oxyselenides. The emphasis of the review is on the magnetic properties, and comparisons are made with iron based pnictide and chalcogenide systems.
0
1
0
0
0
0
Localizing the Object Contact through Matching Tactile Features with Visual Map
This paper presents a novel framework for integration of vision and tactile sensing by localizing tactile readings in a visual object map. Intuitively, there are some correspondences, e.g., prominent features, between visual and tactile object identification. To apply it in robotics, we propose to localize tactile readings in visual images by sharing same sets of feature descriptors through two sensing modalities. It is then treated as a probabilistic estimation problem solved in a framework of recursive Bayesian filtering. Feature-based measurement model and Gaussian based motion model are thus built. In our tests, a tactile array sensor is utilized to generate tactile images during interaction with objects and the results have proven the feasibility of our proposed framework.
1
0
0
0
0
0
Scattering Cross Section in a Cylindrical anisotropic layered metamaterial
To design a uniaxial anisotropic metamaterial, a layered cylindrical metamaterial is introduced for TE polarization. Unlike previous work, in which the layers were in the radial direction, here the layers are in the azimuthal direction. The scattering efficiency of this metamaterial at different frequencies is analyzed by solving Maxwell's wave equation. It is observed that at some frequencies, when the effective permittivity of the structure goes to zero, the scattering efficiency becomes negligible. This result confirms previous predictions. It is also found that the scattering cancellation depends on the relative permittivity of the medium surrounding the cylinder. Finite element simulations also confirm the results.
0
1
0
0
0
0