Dataset schema (each record below is one arXiv paper: a title line, an abstract line, and a labels line with six binary subject indicators):

  title                  string   (length 7 to 239)
  abstract               string   (length 7 to 2.76k)
  cs                     int64    (0 or 1)
  phy                    int64    (0 or 1)
  math                   int64    (0 or 1)
  stat                   int64    (0 or 1)
  quantitative biology   int64    (0 or 1)
  quantitative finance   int64    (0 or 1)
Calabi-Yau metrics on canonical bundles of complex flag manifolds
In the present paper we provide a description of complete Calabi-Yau metrics on the canonical bundle of generalized complex flag manifolds. By means of Lie theory we give an explicit description of complete Ricci-flat Kähler metrics obtained through the Calabi ansatz technique. We use this approach to provide several explicit examples of noncompact complete Calabi-Yau manifolds; these examples include canonical bundles of non-toric flag manifolds (e.g. Grassmann manifolds and full flag manifolds).
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The ratio of normalizing constants for Bayesian graphical Gaussian model selection
Many graphical Gaussian selection methods in a Bayesian framework use the G-Wishart as the conjugate prior on the precision matrix. The Bayes factor to compare a model governed by a graph G and a model governed by the neighboring graph G-e, derived from G by deleting an edge e, is a function of the ratios of prior and posterior normalizing constants of the G-Wishart for G and G-e. While more recent methods avoid the computation of the posterior ratio, computing the ratio of prior normalizing constants, (2) below, has remained a computational stumbling block. In this paper, we propose an explicit analytic approximation to (2) which is equal to the ratio of two Gamma functions evaluated at (delta+d)/2 and (delta+d+1)/2 respectively, where delta is the shape parameter of the G-Wishart and d is the number of paths of length two between the endpoints of e. This approximation allows us to avoid Monte Carlo methods, is computationally inexpensive and is scalable to high-dimensional problems. We show that the ratio of the approximation to the true value is always between zero and one, so one cannot incur wild errors. In the particular case where the paths between the endpoints of e are disjoint, we show that the approximation is very good. When the paths between these two endpoints are not disjoint we give a sufficient condition for the approximation to be good. Numerical results show that the ratio of the approximation to the true value of the prior ratio is always between 0.55 and 1 and very often close to 1. We compare the results obtained with a model search using our approximation and a search using the double Metropolis-Hastings algorithm to compute the prior ratio. The results are extremely close.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
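A minimal numerical sketch of the approximation stated in the abstract above: the ratio of two Gamma functions evaluated at (delta+d)/2 and (delta+d+1)/2, computed in log space for stability. This assumes only what the abstract states; any constant factors appearing in the paper's equation (2) are not reproduced here, and the input values are illustrative.

```python
from scipy.special import gammaln
import numpy as np

def prior_ratio_approx(delta: float, d: int) -> float:
    """Gamma-ratio approximation to the ratio of G-Wishart prior
    normalizing constants for G versus G-e, where d is the number of
    length-two paths between the endpoints of the deleted edge e."""
    log_ratio = gammaln((delta + d) / 2.0) - gammaln((delta + d + 1) / 2.0)
    return float(np.exp(log_ratio))

print(prior_ratio_approx(delta=3.0, d=2))  # illustrative values
```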
Angiogenic Factors produced by Hypoxic Cells are a leading driver of Anastomoses in Sprouting Angiogenesis---a computational study
Angiogenesis - the growth of new blood vessels from a pre-existing vasculature - is key both in physiological processes and in several pathological scenarios such as cancer progression or diabetic retinopathy. For the new vascular networks to be functional, it is required that the growing sprouts merge either with an existing functional mature vessel or with another growing sprout. This process is called anastomosis. We present a systematic 2D and 3D computational study of vessel growth in a tissue to address the capability of angiogenic factor gradients to drive anastomosis formation. We consider that these growth factors are produced only by tissue cells in hypoxia, i.e. until nearby vessels merge and become capable of carrying blood and irrigating their vicinity. We demonstrate that this increased production of angiogenic factors by hypoxic cells is able to promote vessel anastomosis events in both 2D and 3D. The simulations also verify that the morphology of these networks has an increased resilience toward variations in the endothelial cells' proliferation and chemotactic response. The distribution of tissue cells and the concentration of the growth factors they produce are the major factors in determining the final morphology of the network.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Henri Bénard: Thermal convection and vortex shedding
We present in this article the work of Henri Bénard (1874-1939), a French physicist who began the systematic experimental study of two hydrodynamic systems: the thermal convection of fluids heated from below (the Rayleigh-Bénard convection and the Bénard-Marangoni convection) and the periodic vortex shedding behind a bluff body in a flow (the Bénard-Kármán vortex street). Through his scientific biography, we review the interplay between experiments and theory in these two major subjects of fluid mechanics.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Surjective H-Colouring: New Hardness Results
A homomorphism from a graph G to a graph H is a vertex mapping f from the vertex set of G to the vertex set of H such that there is an edge between vertices f(u) and f(v) of H whenever there is an edge between vertices u and v of G. The H-Colouring problem is to decide whether or not a graph G allows a homomorphism to a fixed graph H. We continue a study on a variant of this problem, namely the Surjective H-Colouring problem, which imposes the homomorphism to be vertex-surjective. We build upon previous results and show that this problem is NP-complete for every connected graph H that has exactly two vertices with a self-loop as long as these two vertices are not adjacent. As a result, we can classify the computational complexity of Surjective H-Colouring for every graph H on at most four vertices.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
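A brute-force illustration (exponential time, tiny graphs only) of the definitions in the abstract above: a homomorphism must map edges of G to edges of H, and the Surjective H-Colouring variant additionally requires every H-vertex to be hit. Self-loops in H are represented as edges (h, h). This is a didactic sketch, not an algorithm from the paper.

```python
from itertools import product

def is_surjective_homomorphism(f, G_edges, H_vertices, H_edges):
    """Check that f hits every vertex of H and preserves edges."""
    if set(f.values()) != set(H_vertices):
        return False
    return all((f[u], f[v]) in H_edges or (f[v], f[u]) in H_edges
               for (u, v) in G_edges)

def surjective_h_colourable(G_vertices, G_edges, H_vertices, H_edges):
    """Try all |H|^|G| vertex mappings from G to H."""
    for images in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, images))
        if is_surjective_homomorphism(f, G_edges, H_vertices, H_edges):
            return True
    return False

# a 4-cycle maps vertex-surjectively onto a single edge (i.e., a 2-colouring)
print(surjective_h_colourable([0, 1, 2, 3], {(0, 1), (1, 2), (2, 3), (3, 0)},
                              ["a", "b"], {("a", "b")}))  # True
```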
Subband adaptive filter trained by differential evolution for channel estimation
The normalized subband adaptive filter (NSAF) is widely accepted as a preeminent adaptive filtering algorithm because of its efficiency under colored excitation. However, the convergence rate of the NSAF is slow. To address this drawback, in this paper, a variant of the NSAF, called the differential evolution (DE) NSAF (DE-NSAF), is proposed for channel estimation based on the DE strategy. It is worth noticing that there are several papers concerning the design of DE strategies for adaptive filters. However, their signal models are still the single fullband adaptive filter model rather than the subband adaptive filter model considered in this paper. Thus, the problem considered in our work is quite different from those. The proposed DE-NSAF algorithm is based on real-valued manipulations and has a fast convergence rate for searching the global solution of the optimized weight vector. Moreover, the design steps of the new algorithm are given in detail. Simulation results demonstrate the improved performance of the proposed DE-NSAF algorithm in terms of the convergence rate.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Distributionally Robust Games: f-Divergence and Learning
In this paper we introduce the novel framework of distributionally robust games. These are multi-player games where each player models the state of nature using a worst-case distribution, also called an adversarial distribution. Thus each player's payoff depends on the other players' decisions and on the decision of a virtual player (nature) who selects an adversarial distribution of scenarios. This paper provides three main contributions. Firstly, the distributionally robust game is formulated using the statistical notion of $f$-divergence between two distributions, here represented by the adversarial distribution and the exact distribution. Secondly, the complexity of the problem is significantly reduced by means of triality theory. Thirdly, stochastic Bregman learning algorithms are proposed to speed up the computation of robust equilibria. Finally, the theoretical findings are illustrated in a convex setting and their limitations are tested with a non-convex non-concave function.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Modeling and Reasoning About Wireless Networks: A Graph-based Calculus Approach
We propose a graph-based process calculus for modeling and reasoning about wireless networks with local broadcasts. Graphs are used at the syntactic level to describe the topological structures of networks. The calculus is equipped with a reduction semantics and a labelled transition semantics. The former is used to define weak barbed congruence. The latter is used to define a parameterized weak bisimulation emphasizing locations and local broadcasts. We prove that weak bisimilarity implies weak barbed congruence. The potential applications are illustrated by some examples and two case studies.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Weyl's law on $RCD^*(K,N)$ metric measure spaces
In this paper, we prove Weyl's law, the asymptotic formula for the Dirichlet eigenvalues, on metric measure spaces with generalized Ricci curvature bounded from below.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The area of the Mandelbrot set and Zagier's conjecture
We prove Zagier's conjecture regarding the 2-adic valuation of the coefficients $\{b_m\}$ that appear in Ewing and Schober's series formula for the area of the Mandelbrot set in the case where $m\equiv 2 \mod 4$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Accelerating Cross-Validation in Multinomial Logistic Regression with $\ell_1$-Regularization
We develop an approximate formula for evaluating a cross-validation estimator of predictive likelihood for multinomial logistic regression regularized by an $\ell_1$-norm. This allows us to avoid repeated optimizations required for literally conducting cross-validation; hence, the computational time can be significantly reduced. The formula is derived through a perturbative approach employing the largeness of the data size and the model dimensionality. An extension to the elastic net regularization is also addressed. The usefulness of the approximate formula is demonstrated on simulated data and the ISOLET dataset from the UCI machine learning repository.
labels: cs=0, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
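For contrast, a minimal sketch of the literal cross-validation the paper's approximate formula avoids: each fold refits an $\ell_1$-regularized multinomial logistic regression from scratch. The dataset, solver, and hyperparameters here are illustrative stand-ins (scikit-learn), not the authors' setup.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = load_digits(return_X_y=True)
fold_ll = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # one full optimization per fold -- the cost the approximation removes
    model = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=1000)
    model.fit(X[train], y[train])
    probs = model.predict_proba(X[test])
    cols = np.searchsorted(model.classes_, y[test])
    fold_ll.append(np.log(probs[np.arange(len(test)), cols]).mean())
print("CV predictive log-likelihood:", np.mean(fold_ll))
```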
Realizing an optimization approach inspired from Piaget's theory on cognitive development
The objective of this paper is to introduce an artificial intelligence based optimization approach inspired by Piaget's theory of cognitive development. The approach has been designed according to essential processes that an individual may experience while learning something new or improving his or her knowledge. These processes are associated with Piaget's ideas on an individual's cognitive development. The approach expressed in this paper is a simple algorithm employing swarm intelligence oriented tasks in order to solve single-objective optimization problems. For evaluating the effectiveness of this early version of the algorithm, tests have been performed on several benchmark functions. The obtained results show that the approach / algorithm can be an alternative to existing approaches for single-objective optimization. The authors have suggested the name Cognitive Development Optimization Algorithm (CoDOA) for the related intelligent optimization approach.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Catalyst design using actively learned machine with non-ab initio input features towards CO2 reduction reactions
In the conventional chemisorption model, the d-band center theory (augmented sometimes with the upper edge of the d-band for improved accuracy) plays a central role in predicting adsorption energies and catalytic activity as a function of the d-band center of the solid surfaces, but it requires density functional calculations that can be quite costly for large-scale materials screening purposes. In this work, we propose to use the d-band width of the muffin-tin orbital theory (to account for the local coordination environment) plus electronegativity (to account for adsorbate renormalization) as a simple set of alternative descriptors for chemisorption, which do not demand ab initio calculations. This pair of descriptors is then combined with machine learning methods, namely, artificial neural networks (ANN) and kernel ridge regression (KRR), to allow large-scale materials screening. We show, for a toy set of 263 alloy systems, that the CO adsorption energy can be predicted with a remarkably small mean absolute deviation error of 0.05 eV, a significantly improved result as compared to 0.13 eV obtained with descriptors including costly d-band center calculations in the literature. We achieved this high accuracy by utilizing an active learning algorithm, without which the error was 0.18 eV. As a practical application of this machine, we identified Cu3Y@Cu as a highly active and cost-effective electrochemical CO2 reduction catalyst to produce CO with an overpotential 0.37 V lower than that of the Au catalyst.
labels: cs=0, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
Clustering with Temporal Constraints on Spatio-Temporal Data of Human Mobility
Extracting significant places or places of interest (POIs) from individuals' spatio-temporal data is of fundamental importance for human mobility analysis. Classical clustering methods have been used in prior work for detecting POIs, but without considering temporal constraints. Usually, the involved clustering parameters are difficult to determine, e.g., the optimal cluster number in hierarchical clustering. Currently, researchers either choose heuristic values or use spatial distance-based optimization to determine an appropriate parameter set. We argue that existing research does not optimally address temporal information and thus leaves much room for improvement. Considering temporal constraints in human mobility, we introduce an effective clustering approach - namely POI clustering with temporal constraints (PC-TC) - to extract POIs from spatio-temporal data of human mobility. Reflecting the nature of human mobility in modern society, our approach aims to extract both global POIs (e.g., workplace or university) and local POIs (e.g., library, lab, and canteen). Based on two publicly available datasets including 193 individuals, our evaluation results show that PC-TC has much potential for next place prediction in terms of granularity (i.e., the number of extracted POIs) and predictability.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
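A sketch of the classical, purely spatial baseline the abstract contrasts with: agglomerative clustering of visit coordinates, where the distance threshold is a heuristic parameter of exactly the kind the paper criticizes. The temporal constraints of PC-TC are not reproduced; the coordinates and threshold are invented toy values.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# toy visit locations (metres in a local projection) around three true POIs
points = np.vstack([rng.normal(loc=c, scale=20.0, size=(50, 2))
                    for c in [(0, 0), (500, 0), (0, 800)]])

# heuristic distance threshold -- the parameter that is hard to set well
labels = AgglomerativeClustering(n_clusters=None, distance_threshold=100.0,
                                 linkage="single").fit_predict(points)
print("extracted POIs:", len(set(labels)))  # 3 for this easy toy example
```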
The maximum number of zeros of $r(z) - \overline{z}$ revisited
Generalizing several previous results in the literature on rational harmonic functions, we derive bounds on the maximum number of zeros of functions $f(z) = \frac{p(z)}{q(z)} - \overline{z}$, which depend on both $\mathrm{deg}(p)$ and $\mathrm{deg}(q)$. Furthermore, we prove that any function that attains one of these upper bounds is regular.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
High Luminosity Large Hadron Collider HL-LHC
HL-LHC federates the efforts and R&D of a large international community towards the ambitious HL-LHC objectives and contributes to establishing the European Research Area (ERA) as a focal point of global research cooperation and a leader in frontier knowledge and technologies. HL-LHC relies on strong participation from various partners, in particular from leading US and Japanese laboratories. This participation will be required for the execution of the construction phase as a global project. In particular, the US LHC Accelerator R&D Program (LARP) has developed some of the key technologies for the HL-LHC, such as the large-aperture niobium-tin (Nb$_{3}$Sn) quadrupoles and the crab cavities. The proposed governance model is tailored accordingly and should pave the way for the organization of the construction phase.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Adversarial Discriminative Sim-to-real Transfer of Visuo-motor Policies
Various approaches have been proposed to learn visuo-motor policies for real-world robotic applications. One solution is first learning in simulation then transferring to the real world. In the transfer, most existing approaches need real-world images with labels. However, the labelling process is often expensive or even impractical in many robotic applications. In this paper, we propose an adversarial discriminative sim-to-real transfer approach to reduce the cost of labelling real data. The effectiveness of the approach is demonstrated with modular networks in a table-top object reaching task where a 7 DoF arm is controlled in velocity mode to reach a blue cuboid in clutter through visual observations. The adversarial transfer approach reduced the labelled real data requirement by 50%. Policies can be transferred to real environments with only 93 labelled and 186 unlabelled real images. The transferred visuo-motor policies are robust to novel (not seen in training) objects in clutter and even a moving target, achieving a 97.8% success rate and 1.8 cm control accuracy.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Algebraic Bethe ansatz for the trigonometric sl(2) Gaudin model with triangular boundary
In the derivation of the generating function of the Gaudin Hamiltonians with boundary terms, we follow the same approach used previously in the rational case, which in turn was based on Sklyanin's method in the periodic case. Our derivation is centered on the quasi-classical expansion of the linear combination of the transfer matrix of the XXZ Heisenberg spin chain and the central element, the so-called Sklyanin determinant. The corresponding Gaudin Hamiltonians with boundary terms are obtained as the residues of the generating function. By defining the appropriate Bethe vectors which yield strikingly simple off-shell action of the generating function, we fully implement the algebraic Bethe ansatz, obtaining the spectrum of the generating function and the corresponding Bethe equations.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Marked Temporal Dynamics Modeling based on Recurrent Neural Network
We are now witnessing the increasing availability of event stream data, i.e., a sequence of events with each event typically being denoted by the time it occurs and its mark information (e.g., event type). A fundamental problem is to model and predict such kind of marked temporal dynamics, i.e., when the next event will take place and what its mark will be. Existing methods either predict only the mark or the time of the next event, or predict both of them, yet separately. Indeed, in marked temporal dynamics, the time and the mark of the next event are highly dependent on each other, requiring a method that could simultaneously predict both of them. To tackle this problem, in this paper, we propose to model marked temporal dynamics by using a mark-specific intensity function to explicitly capture the dependency between the mark and the time of the next event. Extensive experiments on two datasets demonstrate that the proposed method outperforms state-of-the-art methods at predicting marked temporal dynamics.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Bayesian framework for distributed estimation of arrival rates in asynchronous networks
In this paper we consider a network of agents monitoring a spatially distributed arrival process. Each node measures the number of arrivals seen at its monitoring point in a given time-interval with the objective of estimating the unknown local arrival rate. We propose an asynchronous distributed approach based on a Bayesian model with unknown hyperparameter, where each node computes the minimum mean square error (MMSE) estimator of its local arrival rate in a distributed way. As a result, the estimation at each node "optimally" fuses the information from the whole network through a distributed optimization algorithm. Moreover, we propose an ad-hoc distributed estimator, based on a consensus algorithm for time-varying and directed graphs, which exhibits reduced complexity and exponential convergence. We analyze the performance of the proposed distributed estimators, showing that they: (i) are reliable even in the presence of limited local data, and (ii) improve the estimation accuracy compared to the purely decentralized setup. Finally, we provide a statistical characterization of the proposed estimators. In particular, for the ad-hoc estimator, we show that as the number of nodes goes to infinity its mean square error converges to the optimal one. Numerical Monte Carlo simulations confirm the theoretical characterization and highlight the appealing performance of the estimators.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
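A hedged illustration of the Bayesian building block described above: with a conjugate Gamma prior on a local Poisson arrival rate, the MMSE estimator given the observed counts is the posterior mean. The conjugate-Gamma choice and the parameter values are assumptions for illustration; the paper's distributed fusion of the unknown hyperparameter across the network is not reproduced.

```python
import numpy as np

def mmse_arrival_rate(counts, alpha=1.0, beta=1.0):
    """Posterior mean (= MMSE estimate) of a Poisson rate under a
    Gamma(alpha, beta) prior, with beta the rate parameter of the prior."""
    counts = np.asarray(counts)
    return (alpha + counts.sum()) / (beta + counts.size)

# four measurement intervals at one monitoring node
print(mmse_arrival_rate([3, 5, 4, 6]))  # shrinks the empirical mean toward alpha/beta
```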
Rigidity of branching microstructures in shape memory alloys
We analyze generic sequences for which the geometrically linear energy \[E_\eta(u,\chi):= \eta^{-\frac{2}{3}}\int_{B_{0}(1)} \left| e(u)- \sum_{i=1}^3 \chi_ie_i\right|^2 d x+\eta^\frac{1}{3} \sum_{i=1}^3 |D\chi_i|(B_{0}(1))\] remains bounded in the limit $\eta \to 0$. Here $ e(u) :=1/2(Du + Du^T)$ is the (linearized) strain of the displacement $u$, the strains $e_i$ correspond to the martensite strains of a shape memory alloy undergoing cubic-to-tetragonal transformations and $\chi_i:B_{0}(1) \to \{0,1\}$ is the partition into phases. In this regime it is known that in addition to simple laminates also branched structures are possible, which if austenite was present would enable the alloy to form habit planes. In an ansatz-free manner we prove that the alignment of macroscopic interfaces between martensite twins is as predicted by well-known rank-one conditions. Our proof proceeds via the non-convex, non-discrete-valued differential inclusion \[e(u) \in \bigcup_{1\leq i\neq j\leq 3} \operatorname{conv} \{e_i,e_j\}\] satisfied by the weak limits of bounded energy sequences and of which we classify all solutions. In particular, there exist no convex integration solutions of the inclusion with complicated geometric structures.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Study and Observation of the Variation of Accuracies of KNN, SVM, LMNN, ENN Algorithms on Eleven Different Datasets from UCI Machine Learning Repository
Machine learning enables computers to learn from data without being explicitly programmed [1, 2]. Machine learning can be classified into supervised and unsupervised learning. In supervised learning, computers learn a function that maps an input to an output based on training input-output pairs [3]. Among the most efficient and widely used supervised learning algorithms are K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Large Margin Nearest Neighbor (LMNN), and Extended Nearest Neighbor (ENN). The main contribution of this paper is to implement these elegant learning algorithms on eleven different datasets from the UCI machine learning repository to observe the variation of accuracies for each of the algorithms on all datasets. Analyzing the accuracy of the algorithms will give us a brief idea about the relationship between the machine learning algorithms and the data dimensionality. All the algorithms are developed in MATLAB. Based on these accuracy observations, a comparison can be built among KNN, SVM, LMNN, and ENN regarding their performances on each dataset.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
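A minimal Python analogue of the comparison workflow (the paper itself uses MATLAB): loop classifiers over datasets and record cross-validated accuracy. Only KNN and SVM are covered here since they ship with scikit-learn; LMNN and ENN would require additional packages (e.g., metric-learn for LMNN), and the datasets are stand-ins for the eleven UCI ones.

```python
from sklearn.datasets import load_breast_cancer, load_iris, load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

datasets = {"iris": load_iris(), "wine": load_wine(),
            "breast_cancer": load_breast_cancer()}
models = {"KNN": KNeighborsClassifier(n_neighbors=5),
          "SVM": SVC(kernel="rbf")}

for dname, data in datasets.items():
    for mname, model in models.items():
        acc = cross_val_score(model, data.data, data.target, cv=5).mean()
        print(f"{dname:>13}  {mname}: {acc:.3f}")
```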
Network Classification in Temporal Networks Using Motifs
Network classification has a variety of applications, such as detecting communities within networks and finding similarities between networks representing different aspects of the real world. However, most existing work in this area focuses on examining static undirected networks without considering directed edges or temporality. In this paper, we propose a new methodology that utilizes feature representation for network classification based on the temporal motif distribution of the network and a null model for comparing against random graphs. Experimental results show that our method improves accuracy by up to $10\%$ compared to the state-of-the-art embedding method in network classification, for tasks such as classifying network type, identifying communities in an email exchange network, and identifying users given their app-switching behaviors.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The strength of Ramsey's theorem for pairs and arbitrarily many colors
In this paper, we show that $\mathrm{RT}^{2}+\mathsf{WKL}_0$ is a $\Pi^{1}_{1}$-conservative extension of $\mathrm{B}\Sigma^0_3$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Joint Power Allocation and Beamforming for Energy-Efficient Two-Way Multi-Relay Communications
This paper considers the joint design of user power allocation and relay beamforming in relaying communications, in which multiple pairs of single-antenna users exchange information with each other via multiple-antenna relays in two time slots. All users transmit their signals to the relays in the first time slot while the relays broadcast the beamformed signals to all users in the second time slot. The aim is to maximize the system's energy efficiency (EE) subject to quality-of-service (QoS) constraints in terms of exchange throughput requirements. The QoS constraints are nonconvex with many nonlinear cross-terms, so finding a feasible point is already computationally challenging. The sum throughput appears in the numerator while the total consumption power appears in the denominator of the EE objective function. The former is a nonconcave function and the latter is a nonconvex function, making fractional programming useless for EE optimization. Nevertheless, efficient low-complexity iterations for obtaining optimized solutions are developed. The performances of multiple-user and multiple-relay networks under various scenarios are evaluated to show the merit of the proposed development.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Forecasting Internally Displaced Population Migration Patterns in Syria and Yemen
Armed conflict has led to an unprecedented number of internally displaced persons (IDPs) - individuals who are forced out of their homes but remain within their country. IDPs often urgently require shelter, food, and healthcare, yet prediction of when large fluxes of IDPs will cross into an area remains a major challenge for aid delivery organizations. Accurate forecasting of IDP migration would empower humanitarian aid groups to more effectively allocate resources during conflicts. We show that monthly flow of IDPs from province to province in both Syria and Yemen can be accurately forecasted one month in advance, using publicly available data. We model monthly IDP flow using data on food price, fuel price, wage, geospatial, and news data. We find that machine learning approaches can more accurately forecast migration trends than baseline persistence models. Our findings thus potentially enable proactive aid allocation for IDPs in anticipation of forecasted arrivals.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
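A sketch of the persistence baseline the paper compares against: forecast next month's IDP flow on each route as this month's flow. The data here is randomly generated toy data, not the Syria or Yemen figures.

```python
import numpy as np

rng = np.random.default_rng(1)
flows = rng.poisson(lam=1000, size=(24, 10)).astype(float)  # 24 months x 10 routes

# persistence model: month t's observed flow is the forecast for month t+1
forecast = flows[:-1]
actual = flows[1:]
mae = np.abs(forecast - actual).mean()
print(f"persistence baseline MAE: {mae:.1f}")
```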
Entropy? Honest!
Here we deconstruct, and then in a reasoned way reconstruct, the concept of "entropy of a system," paying particular attention to where the randomness may be coming from. We start with the core concept of entropy as a COUNT associated with a DESCRIPTION; this count (traditionally expressed in logarithmic form for a number of good reasons) is in essence the number of possibilities---specific instances or "scenarios"---that MATCH that description. Very natural (and virtually inescapable) generalizations of the idea of description are the probability distribution and its quantum mechanical counterpart, the density operator. We track the process of dynamically updating entropy as a system evolves. Three factors may cause entropy to change: (1) the system's INTERNAL DYNAMICS; (2) unsolicited EXTERNAL INFLUENCES on it; and (3) the approximations one has to make when one tries to predict the system's future state. The latter task is usually hampered by hard-to-quantify aspects of the original description, limited data storage and processing resources, and possibly algorithmic inadequacy. Factors 2 and 3 introduce randomness into one's predictions and accordingly degrade them. When forecasting, as long as the entropy bookkeeping is conducted in an HONEST fashion, this degradation will ALWAYS lead to an entropy increase. To clarify the above point we introduce the notion of HONEST ENTROPY, which coalesces much of what is of course already done, often tacitly, in responsible entropy-bookkeeping practice. This notion, we believe, will help to fill an expressivity gap in scientific discourse. With its help we shall prove that ANY dynamical system---not just our physical universe---strictly obeys Clausius's original formulation of the second law of thermodynamics IF AND ONLY IF it is invertible. Thus this law is a TAUTOLOGICAL PROPERTY of invertible systems!
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
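A two-function computational rendering of the abstract's core notion, as a hedged illustration: entropy as the log-count of scenarios matching a description, and its generalization to a probability distribution (Shannon entropy), both in bits.

```python
import numpy as np

def log_count_entropy(n_matching_scenarios: int) -> float:
    """Entropy of a description matched by n equally plausible scenarios."""
    return float(np.log2(n_matching_scenarios))

def shannon_entropy(p) -> float:
    """Generalization when the scenarios carry a probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 * log 0 = 0 by convention
    return float(-(p * np.log2(p)).sum())

print(log_count_entropy(8))                # 3.0 bits for 8 scenarios
print(shannon_entropy([0.5, 0.25, 0.25]))  # 1.5 bits
```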
Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors
We study the problem of generating adversarial examples in a black-box setting in which only loss-oracle access to a model is available. We introduce a framework that conceptually unifies much of the existing work on black-box attacks, and we demonstrate that the current state-of-the-art methods are optimal in a natural sense. Despite this optimality, we show how to improve black-box attacks by bringing a new element into the problem: gradient priors. We give a bandit optimization-based algorithm that allows us to seamlessly integrate any such priors, and we explicitly identify and incorporate two examples. The resulting methods use two to four times fewer queries and fail two to five times less often than the current state-of-the-art.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
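A generic loss-oracle gradient estimator (antithetic Gaussian finite differences) of the kind black-box attacks build on, sanity-checked on a toy quadratic. The paper's bandit optimization and gradient priors are not reproduced here, and `loss_oracle` is a hypothetical stand-in for query access to a model.

```python
import numpy as np

def estimate_gradient(loss_oracle, x, n_samples=50, sigma=0.01):
    """Two-query antithetic estimate of the gradient of the loss at x."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)
        delta = loss_oracle(x + sigma * u) - loss_oracle(x - sigma * u)
        grad += delta / (2.0 * sigma) * u
    return grad / n_samples

x = np.ones(5)
g = estimate_gradient(lambda z: float((z ** 2).sum()), x)
print(g)  # should be close to the true gradient 2*x = [2, 2, 2, 2, 2]
```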
A Dynamic Model of Central Counterparty Risk
We introduce a dynamic model of the default waterfall of derivatives CCPs and propose a risk-sensitive method for sizing the initial margin (IM) and the default fund (DF), as well as the DF's allocation among clearing members. Using a Markovian structure model of joint credit migrations, our evaluation of the DF takes into account the joint credit quality of clearing members as they evolve over time. Another important aspect of the proposed methodology is the use of time-consistent dynamic risk measures for the computation of the IM and DF. We carry out a comprehensive numerical study in which, in particular, we analyze the advantages of the proposed methodology and compare it with the methods currently prevailing in industry.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Half-Duplex Base Station with Adaptive Scheduling of the in-Band Uplink-Receptions and Downlink-Transmissions
In this paper, we propose a novel reception/transmission scheme for half-duplex base stations (BSs). In particular, we propose a half-duplex BS that employs in-band uplink-receptions from user 1 and downlink-transmissions to user 2, which occur in different time slots. Furthermore, we propose optimal adaptive scheduling of the in-band uplink-receptions and downlink-transmissions of the BS such that the uplink-downlink rate/throughput region is maximized and the outage probabilities of the uplink and downlink channels are minimized. Practically, this results in selecting whether in a given time slot the BS should receive from user 1 or transmit to user 2, based on the qualities of the in-band uplink-reception and downlink-transmission channels. Compared to the performance achieved with a conventional frequency-division duplex (FDD) base station, two main gains can be highlighted: 1) an increased uplink-downlink rate/throughput region; 2) a doubling of the diversity gain of both the uplink and downlink channels.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
HAT-P-26b: A Neptune-Mass Exoplanet with a Well Constrained Heavy Element Abundance
A correlation between giant-planet mass and atmospheric heavy elemental abundance was first noted in the past century from observations of planets in our own Solar System, and has served as a cornerstone of planet formation theory. Using data from the Hubble and Spitzer Space Telescopes from 0.5 to 5 microns, we conducted a detailed atmospheric study of the transiting Neptune-mass exoplanet HAT-P-26b. We detected prominent H2O absorption bands with a maximum base-to-peak amplitude of 525ppm in the transmission spectrum. Using the water abundance as a proxy for metallicity, we measured HAT-P-26b's atmospheric heavy element content [4.8 (-4.0 +21.5) times solar]. This likely indicates that HAT-P-26b's atmosphere is primordial and obtained its gaseous envelope late in its disk lifetime, with little contamination from metal-rich planetesimals.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Robust Localization Using Range Measurements with Unknown and Bounded Errors
Cooperative geolocation has attracted significant research interest in recent years. A large number of localization algorithms rely on the availability of statistical knowledge of measurement errors, which is often difficult to obtain in practice. Compared with the statistical knowledge of measurement errors, it can often be easier to obtain the measurement error bound. This work investigates a localization problem assuming an unknown measurement error distribution except for a bound on the error. We first formulate this localization problem as an optimization problem to minimize the worst-case estimation error, which is shown to be a non-convex optimization problem. Then, relaxation is applied to transform it into a convex one. Furthermore, we propose a distributed algorithm to solve the problem, which converges in a few iterations. Simulation results show that the proposed algorithms are more robust to large measurement errors than existing algorithms in the literature. A geometrical analysis providing additional insights is also presented.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Intuitionistic Non-Normal Modal Logics: A general framework
We define a family of intuitionistic non-normal modal logics; they can be seen as intuitionistic counterparts of classical ones. We first consider monomodal logics, which contain only one of Necessity and Possibility. We then consider the more important case of bimodal logics, which contain both modal operators. In this case we define several interactions between Necessity and Possibility of increasing strength, although weaker than duality. For all logics we provide both a Hilbert axiomatisation and a cut-free sequent calculus, on the basis of which we also prove their decidability. We then give a semantic characterisation of our logics in terms of neighbourhood models. Our semantic framework captures modularly not only our systems but also already-known intuitionistic non-normal modal logics such as Constructive K (CK) and the propositional fragment of Wijesekera's Constructive Concurrent Dynamic Logic.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Externalities in Socially-Based Resource Sharing Network
This paper investigates the impact of link formation between a pair of agents on the resource availability of other agents in a social cloud network, which is a special case of socially-based resource sharing systems. Specifically, we study the correlation between externalities, network size, and network density. We first conjecture, and experimentally support, that if an agent experiences positive externalities, then its closeness (harmonic centrality measure) should increase. Next, we show the following for ring networks: in less populated networks no agent experiences positive externalities; in more populated networks a set of agents experience positive externalities, and the larger the distance between the agents forming a link, the larger the number of beneficiaries; and the number of beneficiaries is always less than the number of non-beneficiaries. Finally, we show that network density is inversely proportional to positive externalities, and further, that it plays a crucial role in determining the kind of externalities.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits
Consider the problem: given data pair $(\mathbf{x}, \mathbf{y})$ drawn from a population with $f_*(x) = \mathbf{E}[\mathbf{y} | \mathbf{x} = x]$, specify a neural network and run gradient flow on the weights over time until reaching any stationarity. How does $f_t$, the function computed by the neural network at time $t$, relate to $f_*$, in terms of approximation and representation? What are the provable benefits of the adaptive representation by neural networks compared to the pre-specified fixed basis representation in the classical nonparametric literature? We answer the above questions via a dynamic reproducing kernel Hilbert space (RKHS) approach indexed by the training process of neural networks. We show that when reaching any local stationarity, gradient flow learns an adaptive RKHS representation, and performs the global least squares projection onto the adaptive RKHS, simultaneously. In addition, we prove that as the RKHS is data-adaptive and task-specific, the residual for $f_*$ lies in a subspace that is smaller than the orthogonal complement of the RKHS, formalizing the representation and approximation benefits of neural networks.
labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
On utility maximization without passing by the dual problem
We treat utility maximization from terminal wealth for an agent with utility function $U:\mathbb{R}\to\mathbb{R}$ who dynamically invests in a continuous-time financial market and receives a possibly unbounded random endowment. We prove the existence of an optimal investment without introducing the associated dual problem. We rely on a recent result of Orlicz space theory, due to Delbaen and Owari, which leads to a simple and transparent proof. Our results apply to non-smooth utilities, and even strict concavity can be relaxed. We can handle certain random endowments with non-hedgeable risks, complementing earlier papers. Constraints on the terminal wealth can also be incorporated. As examples, we treat frictionless markets with finitely many assets and large financial markets.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Stochastic Generative Hashing
Learning-based binary hashing has become a powerful paradigm for fast search and retrieval in massive databases. However, due to the requirement of discrete outputs for the hash functions, learning such functions is known to be very challenging. In addition, the objective functions adopted by existing hashing techniques are mostly chosen heuristically. In this paper, we propose a novel generative approach to learn hash functions through Minimum Description Length principle such that the learned hash codes maximally compress the dataset and can also be used to regenerate the inputs. We also develop an efficient learning algorithm based on the stochastic distributional gradient, which avoids the notorious difficulty caused by binary output constraints, to jointly optimize the parameters of the hash function and the associated generative model. Extensive experiments on a variety of large-scale datasets show that the proposed method achieves better retrieval results than the existing state-of-the-art methods.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Nonlinear learning and learning advantages in evolutionary games
The idea of incompetence as a learning or adaptation function was introduced in the context of evolutionary games as a fixed parameter. However, live organisms usually exhibit nonlinear adaptation functions such as a power law or exponential fitness growth. Here, we examine how the functional form of the learning process may affect the social competition between different behavioral types. Further, we extend our results to evolutionary games where fluctuations in the environment affect the behavioral adaptation of competing species, and we demonstrate the importance of the starting level of incompetence for survival. Hence, we define a new concept of learning advantages that becomes crucial when environments are constantly changing and require rapid adaptation from species. This may lead to an evolutionarily weak phase in which even evolutionarily stable populations become vulnerable to invasions.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Information-theoretic Limits for Community Detection in Network Models
We analyze the information-theoretic limits for the recovery of node labels in several network models. These include the Stochastic Block Model, the Exponential Random Graph Model, the Latent Space Model, the Directed Preferential Attachment Model, and the Directed Small-world Model. For the Stochastic Block Model, the non-recoverability condition depends on the probabilities of having edges inside a community and between different communities. For the Latent Space Model, the non-recoverability condition depends on the dimension of the latent space, and on how far apart and how spread out the communities are in the latent space. For the Directed Preferential Attachment Model and the Directed Small-world Model, the non-recoverability condition depends on the ratio between homophily and neighborhood size. We also consider dynamic versions of the Stochastic Block Model and the Latent Space Model.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
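A toy instance of the Stochastic Block Model discussed above, generated with networkx: the within-community probability p_in versus the between-community probability p_out is exactly the pair of quantities the recoverability condition is stated in terms of. The sizes and probabilities are illustrative.

```python
import networkx as nx

sizes = [50, 50]              # two communities of 50 nodes each
p_in, p_out = 0.30, 0.05      # edge probabilities within / between communities
probs = [[p_in, p_out],
         [p_out, p_in]]

G = nx.stochastic_block_model(sizes, probs, seed=0)
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```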
Proofs of life: molecular-biology reasoning simulates cell behaviors from first principles
We axiomatize the molecular-biology reasoning style, verify compliance of the standard reference: Ptashne, A Genetic Switch, and present proof-theory-induced technologies to predict phenotypes and life cycles from genotypes. The key is to note that `reductionist discipline' entails constructive reasoning, i.e., that any argument for a compound property is constructed from more basic arguments. Proof theory makes explicit the inner structure of the axiomatized reasoning style and allows the permissible dynamics to be presented as a mode of computation that can be executed and analyzed. Constructivity and executability guarantee simulation when working over domain-specific languages. Here, we exhibit phenotype properties for genotype reasons: a molecular-biology argument is an open-system concurrent computation that results in compartment changes and is performed among processes of physiology change as determined from the molecular programming of given DNA. Life cycles are the possible sequentializations of the processes. A main implication of our construction is that technical correctness provides a complementary perspective on science that is as fundamental there as it is for pure mathematics, provided mature reductionism exists.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Beyond recursion operators
We briefly recall the history of the Nijenhuis torsion of (1,1)-tensors on manifolds and of the lesser-known Haantjes torsion. We then show how the Haantjes manifolds of Magri and the symplectic-Haantjes structures of Tempesta and Tondo generalize the classical approach to integrable systems in the bi-hamiltonian and symplectic-Nijenhuis formalisms, the sequence of powers of the recursion operator being replaced by a family of commuting Haantjes operators.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On Classical Integrability of the Hydrodynamics of Quantum Integrable Systems
Recently, a hydrodynamic description of local equilibrium dynamics in quantum integrable systems was discovered. In the diffusionless limit, this is equivalent to a certain "Bethe-Boltzmann" kinetic equation, which has the form of an integro-differential conservation law in $(1+1)$D. The purpose of the present work is to investigate the sense in which the Bethe-Boltzmann equation defines an "integrable kinetic equation". To this end, we study a class of $N$ dimensional systems of evolution equations that arise naturally as finite-dimensional approximations to the Bethe-Boltzmann equation. We obtain non-local Poisson brackets and Hamiltonian densities for these equations and derive an infinite family of first integrals, parameterized by $N$ functional degrees of freedom. We find that the conserved charges arising from quantum integrability map to Casimir invariants of the hydrodynamic bracket and their group velocities map to Hamiltonian flows. Some results from the finite-dimensional setting extend to the underlying integro-differential equation, providing evidence for its integrability in the hydrodynamic sense.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Transition to turbulence when the Tollmien-Schlichting and bypass routes coexist
Plane Poiseuille flow, the pressure-driven flow between parallel plates, shows a route to turbulence connected with a linear instability to Tollmien-Schlichting (TS) waves, and another one, the bypass transition, that is triggered by finite-amplitude perturbations. We use direct numerical simulations to explore the arrangement of the different routes to turbulence among the set of initial conditions. For plates that are a distance $2H$ apart and in a domain of width $2\pi H$ and length $2\pi H$, the subcritical instability to TS waves sets in at $Re_{c}=5815$ and extends down to $Re_{TS}\approx4884$. The bypass route becomes available above $Re_E=459$ with the appearance of three-dimensional finite-amplitude traveling waves. The bypass transition covers a large set of finite-amplitude perturbations. Below $Re_c$, TS waves appear for a tiny set of initial conditions that grows with increasing Reynolds number. Above $Re_c$ the previously stable region becomes unstable via TS waves, but a sharp transition to the bypass route can still be identified. Both routes lead to the same turbulent state in the final stage of the transition, but on different time scales. Similar phenomena can be expected in other flows where two or more routes to turbulence compete.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Interactive Exploration and Discovery of Scientific Publications with PubVis
With an exponentially growing number of scientific papers published each year, advanced tools for exploring and discovering publications of interest are becoming indispensable. To empower users beyond the simple keyword search provided e.g. by Google Scholar, we present the novel web application PubVis. Powered by a variety of machine learning techniques, it combines essential features to help researchers find the content most relevant to them. An interactive visualization of a large collection of scientific publications provides an overview of the field and encourages the user to explore articles beyond a narrow research focus. This is augmented by personalized content-based article recommendations as well as an advanced full-text search to discover relevant references. The open-sourced implementation of the app can easily be set up and run locally on a desktop computer to provide access to content tailored to the specific needs of individual users. Additionally, a PubVis demo with access to a collection of 10,000 papers can be tested online.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Improving phase II oncology trials using best observed RECIST response as an endpoint by modelling continuous tumour measurements
In many phase II trials in solid tumours, patients are assessed using endpoints based on the Response Evaluation Criteria in Solid Tumours (RECIST) scale. Often, analyses are based on the response rate. This is the proportion of patients who have an observed tumour shrinkage above a pre-defined level and no new tumour lesions. The augmented binary method has been proposed to improve the precision of the estimator of the response rate. The method involves modelling the tumour shrinkage to avoid dichotomising it. However, in many trials the best observed response is used as the primary outcome. In such trials, patients are followed until progression, and their best observed RECIST outcome is used as the primary endpoint. In this paper, we propose a method that extends the augmented binary method so that it can be used when the outcome is best observed response. We show through simulated data and data from a real phase II cancer trial that this method improves power in both single-arm and randomised trials. The average gain in power compared to the traditional analysis is equivalent to approximately a 35% increase in sample size. A modified version of the method is proposed to reduce the computational effort required. We show this modified method maintains much of the efficiency advantage.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Evans-Selberg potential on planar domains
We provide explicit formulas of Evans kernels, Evans-Selberg potentials and fundamental metrics on potential-theoretically parabolic planar domains.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Bridging the Gap Between Computational Photography and Visual Recognition
What is the current state-of-the-art for image restoration and enhancement applied to degraded images acquired under less than ideal circumstances? Can the application of such algorithms as a pre-processing step improve image interpretability for manual analysis or automatic visual recognition of scene content? While there have been important advances in the area of computational photography to restore or enhance the visual quality of an image, the capabilities of such techniques have not always translated in a useful way to visual recognition tasks. Consequently, there is a pressing need for the development of algorithms that are designed for the joint problem of improving visual appearance and recognition, which will be an enabling factor for the deployment of visual recognition tools in many real-world scenarios. To address this, we introduce the UG^2 dataset as a large-scale benchmark composed of video imagery captured under challenging conditions, and two enhancement tasks designed to test algorithmic impact on visual quality and automatic object recognition. Furthermore, we propose a set of metrics to evaluate the joint improvement of such tasks as well as individual algorithmic advances, including a novel psychophysics-based evaluation regime for human assessment and a realistic set of quantitative measures for object recognition performance. We introduce six new algorithms for image restoration or enhancement, which were created as part of the IARPA sponsored UG^2 Challenge workshop held at CVPR 2018. Under the proposed evaluation regime, we present an in-depth analysis of these algorithms and a host of deep learning-based and classic baseline approaches. From the observed results, it is evident that we are in the early days of building a bridge between computational photography and visual recognition, leaving many opportunities for innovation in this area.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On some conjectures of Samuels and Feige
Let $\mu_1 \ge \dotsc \ge \mu_n > 0$ and $\mu_1 + \dotsm + \mu_n = 1$. Let $X_1, \dotsc, X_n$ be independent non-negative random variables with $EX_1 = \dotsc = EX_n = 1$, and let $Z = \sum_{i=1}^n \mu_i X_i$. Let $M = \max_{1 \le i \le n} \mu_i = \mu_1$, and let $\delta > 0$ and $T = 1 + \delta$. Both Samuels and Feige formulated conjectures bounding the probability $P(Z < T)$ from above. We prove that Samuels' conjecture implies a conjecture of Feige.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
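A Monte Carlo sketch of the quantity the conjectures bound: $P(Z < T)$ for $Z = \sum_i \mu_i X_i$ with independent non-negative mean-one $X_i$ and $T = 1 + \delta$. Exponential variables are one admissible choice of the $X_i$; the weights and $\delta$ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.5, 0.3, 0.2])   # mu_1 >= ... >= mu_n > 0, summing to 1
delta = 0.5
T = 1 + delta

X = rng.exponential(scale=1.0, size=(1_000_000, mu.size))  # E X_i = 1
Z = X @ mu
print("estimated P(Z < T):", (Z < T).mean())
```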
Adaptive Behavior Generation for Autonomous Driving using Deep Reinforcement Learning with Compact Semantic States
Making the right decision in traffic is a challenging task that is highly dependent on individual preferences as well as the surrounding environment. Therefore it is hard to model solely based on expert knowledge. In this work we use Deep Reinforcement Learning to learn maneuver decisions based on a compact semantic state representation. This ensures a consistent model of the environment across scenarios as well as a behavior adaptation function, enabling on-line changes of desired behaviors without re-training. The input for the neural network is a simulated object list similar to that of Radar or Lidar sensors, superimposed by a relational semantic scene description. The state as well as the reward are extended by a behavior adaptation function and a parameterization, respectively. With little expert knowledge and a set of mid-level actions, the agent proves capable of adhering to traffic rules and learns to drive safely in a variety of situations.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Sequential Neural Likelihood: Fast Likelihood-free Inference with Autoregressive Flows
We present Sequential Neural Likelihood (SNL), a new method for Bayesian inference in simulator models, where the likelihood is intractable but simulating data from the model is possible. SNL trains an autoregressive flow on simulated data in order to learn a model of the likelihood in the region of high posterior density. A sequential training procedure guides simulations and reduces simulation cost by orders of magnitude. We show that SNL is more robust, more accurate and requires less tuning than related neural-based methods, and we discuss diagnostics for assessing calibration, convergence and goodness-of-fit.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Creativity: Generating Diverse Questions using Variational Autoencoders
Generating diverse questions for given images is an important task for computational education, entertainment and AI assistants. Different from many conventional prediction techniques is the need for algorithms to generate a diverse set of plausible questions, which we refer to as "creativity". In this paper we propose a creative algorithm for visual question generation which combines the advantages of variational autoencoders with long short-term memory networks. We demonstrate that our framework is able to generate a large set of varying questions given a single input image.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Cooling dynamics of a single trapped ion via elastic collisions with small-mass atoms
We demonstrated sympathetic cooling of a single ion in a buffer gas of ultracold atoms with small mass. Efficient collisional cooling was realized by suppressing collision-induced heating. We attempt to explain the experimental results with a simple rate equation model and provide a quantitative discussion of the cooling efficiency per collision. The knowledge we obtained in this work is an important ingredient for advancing the technique of sympathetic cooling of ions with neutral atoms.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Modulational instability in the full-dispersion Camassa-Holm equation
We determine the stability and instability of a sufficiently small and periodic traveling wave to long wavelength perturbations, for a nonlinear dispersive equation which extends a Camassa-Holm equation to include all the dispersion of water waves and the Whitham equation to include nonlinearities of medium amplitude waves. In the absence of the effects of surface tension, the result qualitatively agrees with the Benjamin-Feir instability of a Stokes wave. In the presence of the effects of surface tension, it qualitatively agrees with those from formal asymptotic expansions of the physical problem and it improves upon that for the Whitham equation, correctly predicting the limit of strong surface tension. We discuss the modulational stability and instability in the Camassa-Holm equation and related models.
labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Evaluating and Modelling Hanabi-Playing Agents
Agent modelling involves considering how other agents will behave in order to influence your own actions. In this paper, we explore the use of agent modelling in the hidden-information, collaborative card game Hanabi. We implement a number of rule-based agents, both from the literature and of our own devising, in addition to an Information Set Monte Carlo Tree Search (IS-MCTS) agent. We observe poor results from IS-MCTS, so we construct a new predictor version that uses a model of the agents with which it is paired. We observe a significant improvement in game-playing strength from this agent in comparison to IS-MCTS, resulting from its consideration of what the other agents in a game would do. In addition, we create a flawed rule-based agent to highlight the predictor's capabilities with such an agent.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Rank-related dimension bounds for subspaces of bilinear forms over finite fields
Let q be a power of a prime and let V be a vector space of finite dimension n over the field of order q. Let Bil(V) denote the set of all bilinear forms defined on V x V, let Symm(V) denote the subspace of Bil(V) consisting of symmetric bilinear forms, and Alt(V) denote the subspace of alternating bilinear forms. Let M denote a subspace of any of the spaces Bil(V), Symm(V), or Alt(V). In this paper we investigate hypotheses on the rank of the non-zero elements of M which lead to reasonable bounds for dim M. Typically, we look at the case where exactly two or three non-zero ranks occur, one of which is usually n. In the case that M achieves the maximal dimension predicted by the dimension bound, we try to enumerate the number of forms of a given rank in M and describe geometric properties of the radicals of the degenerate elements of M.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Multi-objective optimization to explicitly account for model complexity when learning Bayesian Networks
Bayesian Networks have been widely used in the last decades in many fields, to describe statistical dependencies among random variables. In general, learning the structure of such models is a problem with considerable theoretical interest that still poses many challenges. On the one hand, this is a well-known NP-complete problem, which is practically hardened by the huge search space of possible solutions. On the other hand, the phenomenon of I-equivalence, i.e., different graphical structures underpinning the same set of statistical dependencies, may lead to multimodal fitness landscapes further hindering maximum likelihood approaches to solve the task. Despite all these difficulties, greedy search methods based on a likelihood score coupled with a regularization term to account for model complexity, have been shown to be surprisingly effective in practice. In this paper, we consider the formulation of the task of learning the structure of Bayesian Networks as an optimization problem based on a likelihood score. Nevertheless, our approach does not adjust this score by means of any of the complexity terms proposed in the literature; instead, it accounts directly for the complexity of the discovered solutions by exploiting a multi-objective optimization procedure. To this end, we adopt NSGA-II and define the first objective function to be the likelihood of a solution and the second to be the number of selected arcs. We thoroughly analyze the behavior of our method on a wide set of simulated data, and we discuss the performance considering the goodness of the inferred solutions both in terms of their objective functions and with respect to the retrieved structure. Our results show that NSGA-II can converge to solutions characterized by better likelihood and less arcs than classic approaches, although paradoxically frequently characterized by a lower similarity to the target network.
0
0
0
1
0
0
Simultaneous Localization and Layout Model Selection in Manhattan Worlds
In this paper, we demonstrate how Manhattan structure can be exploited to transform the Simultaneous Localization and Mapping (SLAM) problem, which is typically solved by nonlinear optimization over feature positions, into a model selection problem solved by convex optimization over higher-order layout structures, namely walls, floors, and ceilings. Furthermore, we show how our novel formulation leads to an optimization procedure that automatically performs data association and loop closure and that ultimately produces the simplest model of the environment consistent with the available measurements. We verify our method on real-world datasets collected with various sensing modalities.
1
0
0
0
0
0
Robust method for finding sparse solutions to linear inverse problems using an L2 regularization
We analyzed the performance of a biologically inspired algorithm called the Corrected Projections Algorithm (CPA) when a sparseness constraint is required to unambiguously reconstruct an observed signal using atoms from an overcomplete dictionary. By changing the geometry of the estimation problem, CPA gives an analytical expression for a binary variable that indicates the presence or absence of a dictionary atom using an L2 regularizer. The regularized solution can be implemented using an efficient real-time Kalman-filter type of algorithm. The smoother L2 regularization of CPA makes it very robust to noise, and CPA outperforms other methods in identifying known atoms in the presence of strong novel atoms in the signal.
1
0
0
1
0
0
Humanoid Robot: Application and Influence
Application of humanoid robots has become common in the fields of healthcare and education. Humanoid robots have been used recurrently to improve social behavior and reduce distress levels among children with autism, cancer, and cerebral palsy. This article discusses the same from a human factors perspective. It shows how people of different ages and genders hold different opinions towards the application and acceptance of humanoid robots. Additionally, this article highlights the influence of cerebral condition and social interaction on a user's behavior and attitude towards humanoid robots. Our study performed a literature review and found that (a) children and elderly individuals prefer humanoid robots due to inactive social interaction, (b) the deterministic behavior of humanoid robots can be exploited to improve the social behavior of autistic children, and (c) trust in humanoid robots is strongly driven by the robot's application and a user's age, gender, and social life.
1
0
0
0
0
0
Onsager's Conjecture for the Incompressible Euler Equations in Bounded Domains
The goal of this note is to show that, also in a bounded domain $\Omega \subset \mathbb{R}^n$, with $\partial \Omega\in C^2$, any weak solution, $(u(x,t),p(x,t))$, of the Euler equations of ideal incompressible fluid in $\Omega\times (0,T) \subset \mathbb{R}^n\times\mathbb{R}_t$, with the impermeability boundary condition: $u\cdot \vec n =0$ on $\partial\Omega\times(0,T)$, is of constant energy on the interval $(0,T)$ provided the velocity field $u \in L^3((0,T); C^{0,\alpha}(\overline{\Omega}))$, with $\alpha>\frac13\,.$
0
1
1
0
0
0
Fuzzy logic based approaches for gene regulatory network inference
The rapid advancement in high-throughput techniques has fueled the fast, low-cost generation of large volumes of biological data. Among these techniques are microarrays and next-generation sequencing, which provide genome-level insight into living cells. As a result, the size of most biological databases, such as NCBI-GEO and NCBI-SRA, is growing exponentially. These biological data are analyzed using computational techniques for knowledge discovery, which is one of the objectives of bioinformatics research. A gene regulatory network (GRN) is a gene-gene interaction network which plays a pivotal role in understanding gene regulation processes and in disease studies. Over the last couple of decades, researchers have been interested in developing computational algorithms for GRN inference (GRNI) using high-throughput experimental data. Several computational approaches have been applied to inferring GRNs from gene expression data, including statistical techniques (correlation coefficients), information theory (mutual information), regression-based approaches, probabilistic approaches (Bayesian networks, naive Bayes), artificial neural networks, and fuzzy logic. Fuzzy logic, along with its hybridizations with other intelligent approaches, is well studied in GRNI due to its several advantages. In this paper, we present a consolidated review of fuzzy logic and its hybrid approaches for GRNI developed during the last two decades.
0
0
0
0
1
0
Instrumentation for nuclear magnetic resonance in zero and ultralow magnetic field
We review instrumentation for nuclear magnetic resonance (NMR) in zero and ultra-low magnetic field (ZULF, below 0.1 $\mu$T) where detection is based on a low-cost, non-cryogenic, spin-exchange relaxation free (SERF) $^{87}$Rb atomic magnetometer. The typical sensitivity is 20-30 fT/Hz$^{1/2}$ for signal frequencies below 1 kHz and NMR linewidths range from Hz all the way down to tens of mHz. These features enable precision measurements of chemically informative nuclear spin-spin couplings as well as nuclear spin precession in ultra-low magnetic fields.
0
1
0
0
0
0
Countable dense homogeneity and the Cantor set
It is shown that CH implies the existence of a compact Hausdorff space that is countable dense homogeneous, crowded and does not contain topological copies of the Cantor set. This contrasts with a previous result by the author which says that for any crowded Hausdorff space $X$ of countable $\pi$-weight, if ${}^\omega{X}$ is countable dense homogeneous, then $X$ must contain a topological copy of the Cantor set.
0
0
1
0
0
0
Dimension Estimation Using Random Connection Models
Information about intrinsic dimension is crucial to perform dimensionality reduction, compress information, design efficient algorithms, and do statistical adaptation. In this paper we propose an estimator for the intrinsic dimension of a data set. The estimator is based on binary neighbourhood information about the observations in the form of two adjacency matrices, and does not require any explicit distance information. The underlying graph is modelled according to a subset of a specific random connection model, sometimes referred to as the Poisson blob model. Computationally the estimator scales like n log n, and we specify its asymptotic distribution and rate of convergence. A simulation study on both real and simulated data shows that our approach compares favourably with some competing methods from the literature, including approaches that rely on distance information.
0
0
1
1
0
0
Optimal projection of observations in a Bayesian setting
Optimal dimensionality reduction methods are proposed for the Bayesian inference of a Gaussian linear model with additive noise in the presence of overabundant data. Three different optimal projections of the observations are proposed based on information theory: the projection that minimizes the Kullback-Leibler divergence between the posterior distributions of the original and the projected models, the one that minimizes the expected Kullback-Leibler divergence between the same distributions, and the one that maximizes the mutual information between the parameter of interest and the projected observations. The first two optimization problems are formulated as the determination of an optimal subspace, and therefore the solution is computed using Riemannian optimization algorithms on the Grassmann manifold. Regarding the maximization of the mutual information, it is shown that there exists an optimal subspace that minimizes the entropy of the posterior distribution of the reduced model; that a basis of the subspace can be computed as the solution to a generalized eigenvalue problem; that an a priori error estimate on the mutual information is available for this particular solution; and that the dimensionality of the subspace needed to exactly conserve the mutual information between the input and the output of the models is less than the number of parameters to be inferred. Numerical applications to linear and nonlinear models are used to assess the efficiency of the proposed approaches, and to highlight their advantages compared to standard approaches based on the principal component analysis of the observations.
0
0
1
1
0
0
Notes on the Multiplicative Ergodic Theorem
The Oseledets Multiplicative Ergodic theorem is a basic result with numerous applications throughout dynamical systems. These notes provide an introduction to this theorem, as well as subsequent generalizations. They are based on lectures at summer schools in Brazil, France, and Russia.
0
0
1
0
0
0
Targeted Learning with Daily EHR Data
Electronic health records (EHR) data provide a cost- and time-effective opportunity to conduct cohort studies of the effects of multiple time-point interventions in the diverse patient population found in real-world clinical settings. Because the computational cost of analyzing EHR data at daily (or more granular) scale can be quite high, a pragmatic approach has been to partition the follow-up into coarser intervals of pre-specified length. Current guidelines suggest employing a 'small' interval, but the feasibility and practical impact of this recommendation have not been evaluated and no formal methodology to inform this choice has been developed. We start filling these gaps by leveraging large-scale EHR data from a diabetes study to develop and illustrate a fast and scalable targeted learning approach that allows us to follow the current recommendation and study its practical impact on inference. More specifically, we map daily EHR data into four analytic datasets using 90, 30, 15 and 5-day intervals. We apply a semi-parametric and doubly robust estimation approach, the longitudinal TMLE, to estimate the causal effects of four dynamic treatment rules with each dataset, and compare the resulting inferences. To overcome the computational challenges presented by the size of these data, we propose a novel TMLE implementation, the 'long-format TMLE', and rely on the latest advances in scalable data-adaptive machine-learning software, xgboost and h2o, for estimation of the TMLE nuisance parameters.
0
0
0
1
0
0
Stripe-Based Fragility Analysis of Concrete Bridge Classes Using Machine Learning Techniques
A framework for the generation of bridge-specific fragility curves utilizing the capabilities of machine learning and a stripe-based approach is presented in this paper. The proposed methodology, using random forests, helps to generate or update fragility curves for a new set of input parameters with less computational effort and without expensive re-simulation. The methodology does not place any assumptions on the demand model of the various components and helps to identify the relative importance of each uncertain variable in their seismic demand model. The methodology is demonstrated through case studies of multi-span concrete bridges in California. Geometric, material and structural uncertainties are accounted for in the generation of bridge models and fragility curves. It is also noted that the traditional lognormality assumption on the demand model leads to unrealistic fragility estimates. Fragility curves obtained with the proposed methodology can be deployed in risk assessment platforms such as HAZUS for regional loss estimation.
0
0
0
1
0
0
Accountability of AI Under the Law: The Role of Explanation
The ubiquity of systems using artificial intelligence or "AI" has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before---applications range from clinical decision support to autonomous driving and predictive policing. That said, there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems. There are many ways to hold AI systems accountable. In this work, we focus on one: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation, and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. In this work, we review contexts in which explanation is currently required under the law, and then list the technical considerations that must be addressed if we desire AI systems that can provide the kinds of explanations currently required of humans.
1
0
0
1
0
0
Frequency-oriented sub-sampling by photonic Fourier transform and I/Q demodulation
Sub-sampling can directly acquire a passband within a broad radio frequency (RF) range, avoiding down-conversion and a low-phase-noise tunable local oscillator (LO). However, sub-sampling suffers from band folding and self-image interference. In this paper we propose a frequency-oriented sub-sampling scheme to solve these two problems. With an ultrashort optical pulse and a pair of chromatic dispersions, the broadband RF signal is first short-time Fourier-transformed to a spectrum-spread pulse. Then a time slot, corresponding to the target spectrum slice, is coherently optically sampled with in-phase/quadrature (I/Q) demodulation. We demonstrate the novel bandpass sampling with a numerical example, which shows the desired uneven intensity response, i.e. pre-filtering. We show in theory that an appropriate time-stretch capacity from dispersion can result in a pre-filtering bandwidth less than the sampling rate. Image rejection due to I/Q sampling is also analyzed. A proof-of-concept experiment, based on a time-lens sampling source and chirped fiber Bragg gratings (CFBGs), shows center-frequency-tunable pre-filtered sub-sampling with a bandwidth of around 6 GHz, as well as image rejection larger than 26 dB. Our technique may benefit future broadband RF receivers for frequency-agile radar or channelization.
0
1
0
0
0
0
A moment map picture of relative balanced metrics on extremal Kähler manifolds
We give a moment map interpretation of some relatively balanced metrics. As an application, we extend a result of S. K. Donaldson on constant scalar curvature Kähler metrics to the case of extremal metrics. Namely, we show that a given extremal metric is the limit of some specific relatively balanced metrics. As a corollary, we recover uniqueness and splitting results for extremal metrics in the polarized case.
0
0
1
0
0
0
Traffic models with adversarial vehicle behaviour
We examine the impact of adversarial actions on vehicles in traffic. Current advances in assisted/autonomous driving technologies are expected to reduce the number of casualties, but this goal is pursued despite the recently proven insecurity of in-vehicle communication buses and components. Fortunately, to some extent, while compromised cars have become a reality, the numerous attacks reported so far on in-vehicle electronics have been exclusively concerned with impairing a single target. In this work we place adversarial behavior in a more complex scenario, where driving decisions misled by corrupted electronics can affect more than one vehicle. In particular, we focus our attention on chain collisions involving multiple vehicles that can be amplified by simple adversarial interventions, e.g., delaying taillights or falsifying speedometer readings. We provide metrics for assessing adversarial impact and consider safety margins against adversarial actions. Moreover, we discuss intelligent adversarial behaviour by which the creation of rogue platoons is possible and speed manipulations become stealthy to human drivers. We emphasize that our work does not try to show the mere fact that imprudent speeds and headways lead to chain collisions, but points out that an adversary may favour such scenarios (possibly keeping his actions stealthy to human drivers); it further asks how to quantify the impact of adversarial activity and whether existing traffic regulations are prepared for such situations.
1
0
0
0
0
0
Robust Cooperative Manipulation without Force/Torque Measurements: Control Design and Experiments
This paper presents two novel control methodologies for the cooperative manipulation of an object by N robotic agents. Firstly, we design an adaptive control protocol which employs quaternion feedback for the object orientation to avoid potential representation singularities. Secondly, we propose a control protocol that guarantees predefined transient and steady-state performance for the object trajectory. Both methodologies are decentralized, since the agents calculate their own signals without communicating with each other, as well as robust to external disturbances and model uncertainties. Moreover, we consider that the grasping points are rigid, and avoid the need for force/torque measurements. Load distribution is also included via a grasp matrix pseudo-inverse to account for potential differences in the agents' power capabilities. Finally, simulation and experimental results with two robotic arms verify the theoretical findings.
1
0
0
0
0
0
Conditionally conjugate mean-field variational Bayes for logistic models
Variational Bayes (VB) is a common strategy for approximate Bayesian inference, but simple methods are only available for specific classes of models including, in particular, representations having conditionally conjugate constructions within an exponential family. Models with logit components are an apparently notable exception to this class, due to the absence of conjugacy between the logistic likelihood and the Gaussian priors for the coefficients in the linear predictor. To facilitate approximate inference within this widely used class of models, Jaakkola and Jordan (2000) proposed a simple variational approach which relies on a family of tangent quadratic lower bounds of logistic log-likelihoods, thus restoring conjugacy between these approximate bounds and the Gaussian priors. This strategy is still implemented successfully, but fewer attempts have been made to formally understand the reasons underlying its excellent performance. To fill this key gap, we provide a formal connection between the above bound and a recent Pólya-gamma data augmentation for logistic regression. Such a result places the computational methods associated with the aforementioned bounds within the framework of variational inference for conditionally conjugate exponential family models, thereby allowing recent advances for this class to be inherited also by the methods relying on Jaakkola and Jordan (2000).
0
0
1
1
0
0
A $\frac{3}{2}$-Approximation Algorithm for Tree Augmentation via Chvátal-Gomory Cuts
The weighted tree augmentation problem (WTAP) is a fundamental network design problem. We are given an undirected tree $G = (V,E)$, an additional set of edges $L$ called links and a cost vector $c \in \mathbb{R}^L_{\geq 1}$. The goal is to choose a minimum cost subset $S \subseteq L$ such that $G = (V, E \cup S)$ is $2$-edge-connected. In the unweighted case, that is, when we have $c_\ell = 1$ for all $\ell \in L$, the problem is called the tree augmentation problem (TAP). Both problems are known to be APX-hard, and the best known approximation factors are $2$ for WTAP by (Frederickson and JáJá, '81) and $\tfrac{3}{2}$ for TAP due to (Kortsarz and Nutov, TALG '16). In the case where all link costs are bounded by a constant $M$, (Adjiashvili, SODA '17) recently gave a $\approx 1.96418+\varepsilon$-approximation algorithm for WTAP under this assumption. This is the first approximation with a better guarantee than $2$ that does not require restrictions on the structure of the tree or the links. In this paper, we improve Adjiashvili's approximation to a $\frac{3}{2}+\varepsilon$-approximation for WTAP under the bounded cost assumption. We achieve this by introducing a strong LP that combines $\{0,\frac{1}{2}\}$-Chvátal-Gomory cuts for the standard LP for the problem with bundle constraints from Adjiashvili. We show that our LP can be solved efficiently and that it is exact for some instances that arise at the core of Adjiashvili's approach. This results in the improved guarantee of $\frac{3}{2}+\varepsilon$. For TAP, this is the best known LP-based result, and matches the bound of $\frac{3}{2}+\varepsilon$ achieved by the best SDP-based algorithm due to (Cheriyan and Gao, arXiv '15).
1
0
0
0
0
0
On Identifying Disaster-Related Tweets: Matching-based or Learning-based?
Social media such as tweets are emerging as platforms contributing to situational awareness during disasters. Information shared on Twitter both by the affected population (e.g., requesting assistance, warning) and by those outside the impact zone (e.g., providing assistance) can help first responders, decision makers, and the public to understand the situation first-hand. Effective use of such information requires timely selection and analysis of tweets that are relevant to a particular disaster. Even though abundant tweets are promising as a data source, it is challenging to automatically identify relevant messages since tweets are short and unstructured, resulting in unsatisfactory classification performance of conventional learning-based approaches. Thus, we propose a simple yet effective algorithm to identify relevant messages based on matching keywords and hashtags, and provide a comparison between matching-based and learning-based approaches. To evaluate the two approaches, we put them into a framework specifically proposed for analyzing disaster-related tweets. Analysis results on eleven datasets with various disaster types show that our technique provides relevant tweets of higher quality and more interpretable results for sentiment analysis tasks when compared to the learning-based approach.
1
0
0
0
0
0
Self-Stabilizing Disconnected Components Detection and Rooted Shortest-Path Tree Maintenance in Polynomial Steps
We deal with the problem of maintaining a shortest-path tree rooted at some process r in a network that may be disconnected after topological changes. The goal is then to maintain a shortest-path tree rooted at r in its connected component, V_r, and make all processes of other components detect that r is not part of their connected component. We propose, in the composite atomicity model, a silent self-stabilizing algorithm for this problem working in semi-anonymous networks, where edges have strictly positive weights. This algorithm does not require any a priori knowledge about global parameters of the network. We prove its correctness assuming the distributed unfair daemon, the most general daemon. Its stabilization time in rounds is at most 3 n_max + D, where n_max is the maximum number of non-root processes in a connected component and D is the hop-diameter of V_r. Furthermore, if we additionally assume that edge weights are positive integers, then it stabilizes in a polynomial number of steps: namely, we exhibit a bound in O(w_max n_max^3 n), where w_max is the maximum weight of an edge and n is the number of processes.
1
0
0
0
0
0
Optimizing wearable assistive devices with neuromuscular models and optimal control
The coupling of human movement dynamics with the function and design of wearable assistive devices is vital to better understand the interaction between the two. Advanced neuromuscular models and optimal control formulations provide the possibility to study and improve this interaction. In addition, optimal control can also be used to generate predictive simulations that produce novel movements for the human model under varying optimization criteria.
1
0
0
0
0
0
Preferential placement for community structure formation
Various models have been proposed recently to reflect and predict different properties of complex networks. However, the community structure, which is one of the most important properties, is not well studied or modeled. In this paper, we suggest a principle called "preferential placement", which allows us to model a realistic clustering structure. We provide an extensive empirical analysis of the obtained structure as well as some theoretical results.
1
1
0
0
0
0
Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization
In this paper, we consider the problem of sequentially optimizing a black-box function $f$ based on noisy samples and bandit feedback. We assume that $f$ is smooth in the sense of having a bounded norm in some reproducing kernel Hilbert space (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian process bandit optimization. We provide algorithm-independent lower bounds on the simple regret, measuring the suboptimality of a single point reported after $T$ rounds, and on the cumulative regret, measuring the sum of regrets over the $T$ chosen points. For the isotropic squared-exponential kernel in $d$ dimensions, we find that an average simple regret of $\epsilon$ requires $T = \Omega\big(\frac{1}{\epsilon^2} (\log\frac{1}{\epsilon})^{d/2}\big)$, and the average cumulative regret is at least $\Omega\big( \sqrt{T(\log T)^{d/2}} \big)$, thus matching existing upper bounds up to the replacement of $d/2$ by $2d+O(1)$ in both cases. For the Matérn-$\nu$ kernel, we give analogous bounds of the form $\Omega\big( (\frac{1}{\epsilon})^{2+d/\nu}\big)$ and $\Omega\big( T^{\frac{\nu + d}{2\nu + d}} \big)$, and discuss the resulting gaps to the existing upper bounds.
1
0
0
1
0
0
Hirota bilinear equations for Painlevé transcendents
We present some observations on the tau-function for the fourth Painlevé equation. By considering a Hirota bilinear equation of order four for this tau-function, we describe the general form of the Taylor expansion around an arbitrary movable zero. The corresponding Taylor series for the tau-functions of the first and second Painlevé equations, as well as that for the Weierstrass sigma function, arise naturally as special cases, by setting certain parameters to zero.
0
1
1
0
0
0
Gradient-based Representational Similarity Analysis with Searchlight for Analyzing fMRI Data
Representational Similarity Analysis (RSA) aims to explore similarities between neural activities of different stimuli. Classical RSA techniques employ the inverse of the covariance matrix to explore a linear model between the neural activities and task events. However, calculating the inverse of a large-scale covariance matrix is time-consuming and can reduce the stability and robustness of the final analysis; notably, this becomes severe when the number of samples is large. To address this shortcoming, this paper proposes a novel RSA method called gradient-based RSA (GRSA). Moreover, the proposed method is not restricted to a linear model. In fact, there is a growing interest in finding more effective ways of using multi-subject and whole-brain fMRI data. The searchlight technique can extend RSA from localized brain regions to whole-brain regions with a smaller memory footprint in each process. Based on searchlight, we propose a new method called Spatiotemporal Searchlight GRSA (SSL-GRSA) that generalizes our ROI-based GRSA algorithm to whole-brain data. Further, our approach can handle some of the computational challenges of dealing with large-scale, multi-subject fMRI data. Experimental studies on multi-subject datasets confirm that both proposed approaches achieve superior performance to other state-of-the-art RSA algorithms.
0
0
0
1
1
0
Diffeomorphic random sampling using optimal information transport
In this article we explore an algorithm for diffeomorphic random sampling of nonuniform probability distributions on Riemannian manifolds. The algorithm is based on optimal information transport (OIT)---an analogue of optimal mass transport (OMT). Our framework uses the deep geometric connections between the Fisher-Rao metric on the space of probability densities and the right-invariant information metric on the group of diffeomorphisms. The resulting sampling algorithm is a promising alternative to OMT, in particular as our formulation is semi-explicit, free of the nonlinear Monge-Ampère equation. Compared to Markov Chain Monte Carlo methods, we expect our algorithm to stand up well when a large number of samples from a low-dimensional nonuniform distribution is needed.
0
0
1
1
0
0
On approximations by trigonometric polynomials of classes of functions defined by moduli of smoothness
In this paper, we give a characterization of Nikol'ski\u{\i}-Besov type classes of functions, given by integral representations of moduli of smoothness, in terms of series over the moduli of smoothness. Also, necessary and sufficient conditions, in terms of monotone or lacunary Fourier coefficients, for a function to belong to such a class are given. In order to prove our results, we make use of certain recent reverse Copson- and Leindler-type inequalities.
0
0
1
0
0
0
Topologically protected Dirac plasmons in graphene
Topological optical states exhibit unique immunity to defects and the ability to propagate without losses, rendering them ideal for photonic applications. A powerful class of such states is based on time-reversal symmetry breaking of the optical response. However, existing proposals either involve sophisticated and bulky structural designs or can only operate in the microwave regime. Here, we propose and provide a theoretical proof-of-principle demonstration for highly confined topologically protected optical states to be realized at infrared frequencies in a simple 2D material structure, a periodically patterned graphene monolayer, subject to a magnetic field below 1 tesla. In our graphene honeycomb superlattice structures, plasmons exhibit substantial nonreciprocal behavior at the superlattice junctions, leading to the emergence of topologically protected edge states and localized bulk modes enabled by the strong magneto-optical response of this material, which leads to time-reversal-symmetry breaking already at moderate static magnetic fields. The proposed approach is simple and robust for realizing topologically nontrivial 2D optical states, not only in graphene but also in other 2D atomic layers, and could pave the way for realizing fast, nanoscale, defect-immune devices for integrated photonics applications.
0
1
0
0
0
0
The K-Nearest Neighbour UCB algorithm for multi-armed bandits with covariates
In this paper we propose and explore the k-Nearest Neighbour UCB algorithm for multi-armed bandits with covariates. We focus on a setting where the covariates are supported on a metric space of low intrinsic dimension, such as a manifold embedded within a high dimensional ambient feature space. The algorithm is conceptually simple and straightforward to implement. The k-Nearest Neighbour UCB algorithm does not require prior knowledge of either the intrinsic dimension of the marginal distribution or the time horizon. We prove a regret bound for the k-Nearest Neighbour UCB algorithm which is minimax optimal up to logarithmic factors. In particular, the algorithm automatically takes advantage of both low intrinsic dimensionality of the marginal distribution over the covariates and low noise in the data, expressed as a margin condition. In addition, focusing on the case of bounded rewards, we give corresponding regret bounds for the k-Nearest Neighbour KL-UCB algorithm, which is an analogue of the KL-UCB algorithm adapted to the setting of multi-armed bandits with covariates. Finally, we present empirical results which demonstrate the ability of both the k-Nearest Neighbour UCB and k-Nearest Neighbour KL-UCB to take advantage of situations where the data is supported on an unknown sub-manifold of a high-dimensional feature space.
0
0
0
1
0
0
Proportional Mean Residual Life Model with Censored Survival Data under Case-cohort Design
The proportional mean residual life model is studied for analysing survival data from the case-cohort design. To simultaneously estimate the regression parameters and the baseline mean residual life function, weighted estimating equations based on an inverse selection probability are proposed. The resulting regression coefficient estimates are shown to be consistent and asymptotically normal, with an easily estimated variance-covariance matrix. Simulation studies show that the proposed estimators perform very well. An application to a real dataset from the South Wales nickel refiners study is also given to illustrate the methodology.
0
0
1
1
0
0
Shape differentiation of a steady-state reaction-diffusion problem arising in Chemical Engineering: the case of non-smooth kinetic with dead core
In this paper we consider an extension of the results on shape differentiation of semilinear equations with smooth nonlinearity presented in J.I. Díaz and D. Gómez-Castro: An Application of Shape Differentiation to the Effectiveness of a Steady State Reaction-Diffusion Problem Arising in Chemical Engineering, Electron. J. Differ. Equations, 2015, to the case in which the nonlinearities might be less smooth. Namely, we show that Gateaux shape derivatives exist when the nonlinearity is only Lipschitz continuous, and we give a definition of the derivative when the nonlinearity has a blow-up. In this direction, we study the case of root-type nonlinearities.
0
0
1
0
0
0
Analytic evaluation of Coulomb integrals for one, two and three-electron distance operators, $R_{C1}^{-n}R_{D1}^{-m}$, $R_{C1}^{-n}r_{12}^{-m}$ and $r_{12}^{-n}r_{13}^{-m}$ with $n, m=0,1,2$
The state of the art in integral evaluation is that analytical solutions to integrals are far more useful than numerical solutions. We evaluate analytically certain integrals that are necessary in some approaches in quantum chemistry. In the title, where R stands for nucleus-electron and r for electron-electron distances, the $(n,m)=(0,0)$ case is trivial, while the $(n,m)=(1,0)$ and (0,1) cases are well known, constitute a fundamental milestone in integration, are widely used in computational chemistry, and are based on the Laplace transformation with integrand exp(-$a^2t^2$). The remaining cases are new and also require the other Laplace transformation, with integrand exp(-$a^2t$); in addition, a two-dimensional version of the Boys function arises in some cases. These analytic expressions (up to a Gaussian function integrand) are useful for manipulating higher moments of inter-electronic distances, for example in correlation calculations.
0
1
0
0
0
0
Herschel-PACS photometry of faint stars
Our aims are to determine flux densities and their photometric accuracy for a set of seventeen stars that range in flux from intermediately bright (<2.5 Jy) to faint (>5 mJy) in the far-infrared (FIR). We also aim to derive the signal-to-noise dependence on flux and time, and compare the results with predictions from the Herschel exposure-time calculation tool. The PACS faint star sample has allowed a comprehensive sensitivity assessment of the PACS photometer. Accurate photometry allows us to establish a set of five FIR primary standard candidates, namely alpha Ari, epsilon Lep, omega Cap, HD41047 and 42Dra, which are 2 to 20 times fainter than the faintest PACS fiducial standard (gamma Dra), with absolute accuracy of <6%. For three of these primary standard candidates, essential stellar parameters are known, meaning that a dedicated flux model code may be run.
0
1
0
0
0
0
Role of zero synapses in unsupervised feature learning
Synapses in real neural circuits can take discrete values, including zero (silent or potential) synapses. The computational role of zero synapses in unsupervised feature learning of unlabeled noisy data is still unclear; it is thus important to understand how the sparseness of synaptic activity is shaped during learning and how it relates to receptive field formation. Here, we formulate this kind of sparse feature learning by a statistical mechanics approach. We find that learning decreases the fraction of zero synapses, and when the fraction decreases rapidly around a critical data size, an intrinsically structured receptive field starts to develop. Further increasing the data size refines the receptive field, while a very small fraction of zero synapses remain to act as contour detectors. This phenomenon is discovered not only in learning a handwritten digits dataset, but also in learning retinal neural activity measured in a natural-movie-stimuli experiment.
1
1
0
0
0
0
On the Pervasiveness of Difference-Convexity in Optimization and Statistics
With the increasing interest in applying the methodology of difference-of-convex (dc) optimization to diverse problems in engineering and statistics, this paper establishes the dc property of many well-known functions not previously known to be of this class. Motivated by a quadratic programming based recourse function in two-stage stochastic programming, we show that the (optimal) value function of a copositive (thus not necessarily convex) quadratic program is dc on the domain of finiteness of the program when the matrix in the objective function's quadratic term and the constraint matrix are fixed. The proof of this result is based on a dc decomposition of a piecewise LC1 function (i.e., a function with Lipschitz gradient). Armed with these new results and known properties of dc functions from the literature, we show that many composite statistical functions in risk analysis, including the value-at-risk (VaR), conditional value-at-risk (CVaR), expectation-based, VaR-based, and CVaR-based random deviation functions, are all dc. Adding the known class of dc surrogate sparsity functions that are employed as approximations of the l_0 function in statistical learning, our work significantly expands the family of dc functions and positions them for fruitful applications.
0
0
1
0
0
0
Stochastic Dynamic Optimal Power Flow in Distribution Network with Distributed Renewable Energy and Battery Energy Storage
The penetration of distributed renewable energy (DRE) greatly raises operational risks in distribution networks, such as in peak shaving and voltage stability. Battery energy storage (BES) has been widely accepted as the most promising technology to cope with the challenge of high DRE penetration. To cope with the uncertainties and variability of DRE, a stochastic day-ahead dynamic optimal power flow (DOPF) model and its solution algorithm are proposed. Overall economy is achieved by fully considering the DRE, BES, electricity purchasing and active power losses. A rainflow-algorithm-based cycle counting method for BES is incorporated in the DOPF model to capture cell degradation, greatly extending the expected BES lifetime and achieving better economy. DRE scenarios are generated to account for uncertainties and correlations based on Copula theory. To solve the DOPF model, we propose a Lagrange relaxation-based algorithm, which has significantly reduced complexity with respect to existing techniques. For this reason, the proposed algorithm enables many more scenarios to be incorporated in the DOPF model and better captures the DRE uncertainties and correlations. Finally, numerical studies of the day-ahead DOPF on the IEEE 123-node test feeder are presented to demonstrate the merits of the proposed method. Results show that the BES life expectancy under the proposed model increases to 4.89 times that of traditional approaches. The problems caused by DRE are greatly alleviated by fully capturing the uncertainties and correlations with the proposed method.
0
0
1
0
0
0
Achievable Rate Region of the Zero-Forcing Precoder in a 2 X 2 MU-MISO Broadcast VLC Channel with Per-LED Peak Power Constraint and Dimming Control
In this paper, we consider the 2 X 2 multi-user multiple-input-single-output (MU-MISO) broadcast visible light communication (VLC) channel with two light emitting diodes (LEDs) at the transmitter and a single photo diode (PD) at each of the two users. We propose an achievable rate region of the Zero-Forcing (ZF) precoder in this 2 X 2 MU-MISO VLC channel under a per-LED peak and average power constraint, where the average optical power emitted from each LED is fixed for constant lighting, but is controllable (referred to as dimming control in IEEE 802.15.7 standard on VLC). We analytically characterize the proposed rate region boundary and show that it is Pareto-optimal. Further analysis reveals that the largest rate region is achieved when the fixed per-LED average optical power is half of the allowed per-LED peak optical power. We also propose a novel transceiver architecture where the channel encoder and dimming control are separated which greatly simplifies the complexity of the transceiver. A case study of an indoor VLC channel with the proposed transceiver reveals that the achievable information rates are sensitive to the placement of the LEDs and the PDs. An interesting observation is that for a given placement of LEDs in a 5 m X 5 m X 3 m room, even with a substantial displacement of the users from their optimum placement, reduction in the achievable rates is not significant. This observation could therefore be used to define "coverage zones" within a room where the reduction in the information rates to the two users is within an acceptable tolerance limit.
1
0
1
0
0
0
Autonomous Electric Race Car Design
Autonomous driving and electric vehicles are nowadays very active research and development areas. In this paper we present the conversion of a standard Kyburz eRod into an autonomous vehicle that can be operated in challenging environments such as Swiss mountain passes. The overall hardware and software architectures are described in detail, with a special emphasis on the sensor requirements for autonomous vehicles operating in partially structured environments. Furthermore, the design process itself and the finalized system architecture are presented. The work shows state-of-the-art results in localization and control for self-driving high-performance electric vehicles. Test results of the overall system are presented, which show the importance of generalizable state estimation algorithms to handle a plethora of conditions.
1
0
0
0
0
0
Temporal Stable Community in Time-Varying Networks
Identifying the community structure of a complex network provides insight into the interdependence between the network topology and emergent collective behaviors of networks, while detecting such invariant communities in a time-varying network is more challenging. In this paper, we define the temporal stable community and propose the new concept of dynamic modularity to evaluate stable community structures in time-varying networks, which is robust against small changes, as verified on several empirical time-varying network datasets. Besides, using the volatility features of temporal stable communities in functional brain networks, we efficiently differentiate ADHD (Attention Deficit Hyperactivity Disorder) patients from healthy controls.
0
0
0
0
1
0
The shape of a rapidly rotating polytrope with index unity
We show that the solutions obtained in the paper `An exact solution for arbitrarily rotating gaseous polytropes with index unity' by Kong, Zhang, and Schubert represent only approximate solutions of the free-boundary Euler-Poisson system of equations describing uniformly rotating, self-gravitating polytropes with index unity. We discuss the quality of such solutions as approximations to the rigidly rotating equilibrium polytropic configurations.
0
1
0
0
0
0
Measurement of the Planck constant at the National Institute of Standards and Technology from 2015 to 2017
Researchers at the National Institute of Standards and Technology(NIST) have measured the value of the Planck constant to be $h =6.626\,069\,934(89)\times 10^{-34}\,$J$\,$s (relative standard uncertainty $13\times 10^{-9}$). The result is based on over 10$\,$000 weighings of masses with nominal values ranging from 0.5$\,$kg to 2$\,$kg with the Kibble balance NIST-4. The uncertainty has been reduced by more than twofold relative to a previous determination because of three factors: (1) a much larger data set than previously available, allowing a more realistic, and smaller, Type A evaluation; (2) a more comprehensive measurement of the back action of the weighing current on the magnet by weighing masses up to 2$\,$kg, decreasing the uncertainty associated with magnet non-linearity; (3) a rigorous investigation of the dependence of the geometric factor on the coil velocity reducing the uncertainty assigned to time-dependent leakage of current in the coil.
0
1
0
0
0
0
Event Stream-Based Process Discovery using Abstract Representations
The aim of process discovery, originating from the area of process mining, is to discover a process model based on business process execution data. A majority of process discovery techniques relies on an event log as an input. An event log is a static source of historical data capturing the execution of a business process. In this paper we focus on process discovery relying on online streams of business process execution events. Learning process models from event streams poses both challenges and opportunities: we need to handle unlimited amounts of data using finite memory and, preferably, constant time. We propose a generic architecture that allows for adopting several classes of existing process discovery techniques in the context of event streams. Moreover, we provide several instantiations of the architecture, accompanied by implementations in the process mining tool-kit ProM (this http URL). Using these instantiations, we evaluate several dimensions of stream-based process discovery. The evaluation shows that the proposed architecture allows us to lift process discovery to the streaming domain.
1
0
0
1
0
0
On $C$-bases, partition pairs and filtrations for induced or restricted Specht modules
We obtain alternative explicit Specht filtrations for the induced and the restricted Specht modules in the Hecke algebra of the symmetric group (defined over the ring $A=\mathbb Z[q^{1/2},q^{-1/2}]$ where $q$ is an indeterminate) using $C$-bases for these modules. Moreover, we provide a link between a certain $C$-basis for the induced Specht module and the notion of pairs of partitions.
0
0
1
0
0
0