Dataset schema (field / type):

  text              null
  inputs            dict
  prediction        null
  prediction_agent  null
  annotation        list
  annotation_agent  null
  multi_label       bool (1 class)
  explanation       null
  id                string (lengths 1 to 5)
  metadata          null
  status            string (2 classes: Default, Validated)
  event_timestamp   null
  metrics           null
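The records below all share the shape declared above: an inputs dict holding a paper title and abstract, an id, a status of either Default or Validated, and mostly-null auxiliary fields. As a minimal sketch of how rows with this shape might be consumed (the JSON-lines layout and the file name records.jsonl are assumptions for illustration, not part of this dump), one could filter out just the validated, annotated examples:

```python
import json

def load_validated(path="records.jsonl"):
    """Collect records that carry a human annotation and a Validated status.

    Assumes each line of the (hypothetical) file is one JSON object with the
    fields listed in the schema above.
    """
    validated = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            rec = json.loads(line)
            # Keep only rows that were reviewed and labelled.
            if rec.get("status") == "Validated" and rec.get("annotation"):
                validated.append({
                    "id": rec["id"],
                    "title": rec["inputs"]["title"],
                    "abstract": rec["inputs"]["abstract"],
                    "labels": rec["annotation"],       # e.g. ["Mathematics"]
                    "multi_label": rec["multi_label"],
                })
    return validated

if __name__ == "__main__":
    for rec in load_validated():
        print(rec["id"], rec["labels"], rec["title"][:60])
```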
Records:

text: null
inputs:
{ "abstract": " In the present paper we provide a description of complete Calabi-Yau metrics\non the canonical bundle of generalized complex flag manifolds. By means of Lie\ntheory we give an explicit description of complete Ricci-flat Kähler metrics\nobtained through the Calabi ansatz technique. We use this approach to provide\nseveral explicit examples of noncompact complete Calabi-Yau manifolds, these\nexamples include canonical bundles of non-toric flag manifolds (e.g. Grassmann\nmanifolds and full flag manifolds).\n", "title": "Calabi-Yau metrics on canonical bundles of complex flag manifolds" }
prediction: null, prediction_agent: null, annotation: [ "Mathematics" ], annotation_agent: null, multi_label: true
explanation: null, id: 18701, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Many graphical Gaussian selection methods in a Bayesian framework use the\nG-Wishart as the conjugate prior on the precision matrix. The Bayes factor to\ncompare a model governed by a graph G and a model governed by the neighboring\ngraph G-e, derived from G by deleting an edge e, is a function of the ratios of\nprior and posterior normalizing constants of the G-Wishart for G and G-e.\nWhile more recent methods avoid the computation of the posterior ratio,\ncomputing the ratio of prior normalizing constants, (2) below, has remained a\ncomputational stumbling block. In this paper, we propose an explicit analytic\napproximation to (2) which is equal to the ratio of two Gamma functions\nevaluated at (delta+d)/2 and (delta+d+1)/2 respectively, where delta is the\nshape parameter of the G-Wishart and d is the number of paths of length two\nbetween the endpoints of e. This approximation allows us to avoid Monte Carlo\nmethods, is computationally inexpensive and is scalable to high-dimensional\nproblems. We show that the ratio of the approximation to the true value is\nalways between zero and one and so, one cannot incur wild errors.\nIn the particular case where the paths between the endpoints of e are\ndisjoint, we show that the approximation is very good. When the paths between\nthese two endpoints are not disjoint we give a sufficient condition for the\napproximation to be good. Numerical results show that the ratio of the\napproximation to the true value of the prior ratio is always between .55 and 1\nand very often close to 1. We compare the results obtained with a model search\nusing our approximation and a search using the double Metropolis-Hastings\nalgorithm to compute the prior ratio. The results are extremely close.\n", "title": "The ratio of normalizing constants for Bayesian graphical Gaussian model selection" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18702, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Angiogenesis - the growth of new blood vessels from a pre-existing\nvasculature - is key in both physiological processes and on several\npathological scenarios such as cancer progression or diabetic retinopathy. For\nthe new vascular networks to be functional, it is required that the growing\nsprouts merge either with an existing functional mature vessel or with another\ngrowing sprout. This process is called anastomosis. We present a systematic 2D\nand 3D computational study of vessel growth in a tissue to address the\ncapability of angiogenic factor gradients to drive anastomosis formation. We\nconsider that these growth factors are produced only by tissue cells in\nhypoxia, i.e. until nearby vessels merge and become capable of carrying blood\nand irrigating their vicinity. We demonstrate that this increased production of\nangiogenic factors by hypoxic cells is able to promote vessel anastomoses\nevents in both 2D and 3D. The simulations also verify that the morphology of\nthese networks has an increased resilience toward variations in the endothelial\ncell's proliferation and chemotactic response. The distribution of tissue\ncell`s and the concentration of the growth factors they produce are the major\nfactors in determining the final morphology of the network.\n", "title": "Angiogenic Factors produced by Hypoxic Cells are a leading driver of Anastomoses in Sprouting Angiogenesis---a computational study" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18703, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We present in this article the work of Henri Bénard (1874-1939), French\nphysicist who began the systematic experimental study of two hydrodynamic\nsystems: the thermal convection of fluids heated from below (the\nRayleigh-Bénard convection and the Bénard-Marangoni convection) and the\nperiodical vortex shedding behind a bluff body in a flow (the\nBénard-Kármán vortex street). Across his scientific biography, we review\nthe interplay between experiments and theory in these two major subjects of\nfluid mechanics.\n", "title": "Henri Bénard: Thermal convection and vortex shedding" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18704, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " A homomorphism from a graph G to a graph H is a vertex mapping f from the\nvertex set of G to the vertex set of H such that there is an edge between\nvertices f(u) and f(v) of H whenever there is an edge between vertices u and v\nof G. The H-Colouring problem is to decide whether or not a graph G allows a\nhomomorphism to a fixed graph H. We continue a study on a variant of this\nproblem, namely the Surjective H-Colouring problem, which imposes the\nhomomorphism to be vertex-surjective. We build upon previous results and show\nthat this problem is NP-complete for every connected graph H that has exactly\ntwo vertices with a self-loop as long as these two vertices are not adjacent.\nAs a result, we can classify the computational complexity of Surjective\nH-Colouring for every graph H on at most four vertices.\n", "title": "Surjective H-Colouring: New Hardness Results" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science", "Mathematics" ], annotation_agent: null, multi_label: true
explanation: null, id: 18705, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " The normalized subband adaptive filter (NSAF) is widely accepted as a\npreeminent adaptive filtering algorithm because of its efficiency under the\ncolored excitation. However, the convergence rate of NSAF is slow. To address\nthis drawback, in this paper, a variant of the NSAF, called the differential\nevolution (DE)-NSAF (DE-NSAF), is proposed for channel estimation based on DE\nstrategy. It is worth noticing that there are several papers concerning\ndesigning DE strategies for adaptive filter. But their signal models are still\nthe single adaptive filter model rather than the fullband adaptive filter model\nconsidered in this paper. Thus, the problem considered in our work is quite\ndifferent from those. The proposed DE-NSAF algorithm is based on real-valued\nmanipulations and has fast convergence rate for searching the global solution\nof optimized weight vector. Moreover, a design step of new algorithm is given\nin detail. Simulation results demonstrate the improved performance of the\nproposed DE-NSAF algorithm in terms of the convergence rate.\n", "title": "Subband adaptive filter trained by differential evolution for channel estimation" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18706, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " In this paper we introduce the novel framework of distributionally robust\ngames. These are multi-player games where each player models the state of\nnature using a worst-case distribution, also called adversarial distribution.\nThus each player's payoff depends on the other players' decisions and on the\ndecision of a virtual player (nature) who selects an adversarial distribution\nof scenarios. This paper provides three main contributions. Firstly, the\ndistributionally robust game is formulated using the statistical notions of\n$f$-divergence between two distributions, here represented by the adversarial\ndistribution, and the exact distribution. Secondly, the complexity of the\nproblem is significantly reduced by means of triality theory. Thirdly,\nstochastic Bregman learning algorithms are proposed to speedup the computation\nof robust equilibria. Finally, the theoretical findings are illustrated in a\nconvex setting and its limitations are tested with a non-convex non-concave\nfunction.\n", "title": "Distributionally Robust Games: f-Divergence and Learning" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18707, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We propose a graph-based process calculus for modeling and reasoning about\nwireless networks with local broadcasts. Graphs are used at syntactical level\nto describe the topological structures of networks. This calculus is equipped\nwith a reduction semantics and a labelled transition semantics. The former is\nused to define weak barbed congruence. The latter is used to define a\nparameterized weak bisimulation emphasizing locations and local broadcasts. We\nprove that weak bisimilarity implies weak barbed congruence. The potential\napplications are illustrated by some examples and two case studies.\n", "title": "Modeling and Reasoning About Wireless Networks: A Graph-based Calculus Approach" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18708, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " In this paper, we will prove the Weyl's law for the asymptotic formula of\nDirichlet eigenvalues on metric measure spaces with generalized Ricci curvature\nbounded from below.\n", "title": "Weyl's law on $RCD^*(K,N)$ metric measure spaces" }
prediction: null, prediction_agent: null, annotation: [ "Mathematics" ], annotation_agent: null, multi_label: true
explanation: null, id: 18709, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We prove Zagier's conjecture regarding the 2-adic valuation of the\ncoefficients $\\{b_m\\}$ that appear in Ewing and Schober's series formula for\nthe area of the Mandelbrot set in the case where $m\\equiv 2 \\mod 4$.\n", "title": "The area of the Mandelbrot set and Zagier's conjecture" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18710, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We develop an approximate formula for evaluating a cross-validation estimator\nof predictive likelihood for multinomial logistic regression regularized by an\n$\\ell_1$-norm. This allows us to avoid repeated optimizations required for\nliterally conducting cross-validation; hence, the computational time can be\nsignificantly reduced. The formula is derived through a perturbative approach\nemploying the largeness of the data size and the model dimensionality. An\nextension to the elastic net regularization is also addressed. The usefulness\nof the approximate formula is demonstrated on simulated data and the ISOLET\ndataset from the UCI machine learning repository.\n", "title": "Accelerating Cross-Validation in Multinomial Logistic Regression with $\\ell_1$-Regularization" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18711, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " The objective of this paper is to introduce an artificial intelligence based\noptimization approach, which is inspired from Piagets theory on cognitive\ndevelopment. The approach has been designed according to essential processes\nthat an individual may experience while learning something new or improving his\n/ her knowledge. These processes are associated with the Piagets ideas on an\nindividuals cognitive development. The approach expressed in this paper is a\nsimple algorithm employing swarm intelligence oriented tasks in order to\novercome single-objective optimization problems. For evaluating effectiveness\nof this early version of the algorithm, test operations have been done via some\nbenchmark functions. The obtained results show that the approach / algorithm\ncan be an alternative to the literature in terms of single-objective\noptimization. The authors have suggested the name: Cognitive Development\nOptimization Algorithm (CoDOA) for the related intelligent optimization\napproach.\n", "title": "Realizing an optimization approach inspired from Piagets theory on cognitive development" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18712, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " In conventional chemisorption model, the d-band center theory (augmented\nsometimes with the upper edge of d-band for imporved accuarcy) plays a central\nrole in predicting adsorption energies and catalytic activity as a function of\nd-band center of the solid surfaces, but it requires density functional\ncalculations that can be quite costly for large scale screening purposes of\nmaterials. In this work, we propose to use the d-band width of the muffin-tin\norbital theory (to account for local coordination environment) plus\nelectronegativity (to account for adsorbate renormalization) as a simple set of\nalternative descriptors for chemisorption, which do not demand the ab initio\ncalculations. This pair of descriptors are then combined with machine learning\nmethods, namely, artificial neural network (ANN) and kernel ridge regression\n(KRR), to allow large scale materials screenings. We show, for a toy set of 263\nalloy systems, that the CO adsorption energy can be predicted with a remarkably\nsmall mean absolute deviation error of 0.05 eV, a significantly improved result\nas compared to 0.13 eV obtained with descriptors including costly d-band center\ncalculations in literature. We achieved this high accuracy by utilizing an\nactive learning algorithm, without which the accuracy was 0.18 eV otherwise. As\na practical application of this machine, we identified Cu3Y@Cu as a highly\nactive and cost-effective electrochemical CO2 reduction catalyst to produce CO\nwith the overpotential 0.37 V lower than Au catalyst.\n", "title": "Catalyst design using actively learned machine with non-ab initio input features towards CO2 reduction reactions" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18713, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Extracting significant places or places of interest (POIs) using individuals'\nspatio-temporal data is of fundamental importance for human mobility analysis.\nClassical clustering methods have been used in prior work for detecting POIs,\nbut without considering temporal constraints. Usually, the involved parameters\nfor clustering are difficult to determine, e.g., the optimal cluster number in\nhierarchical clustering. Currently, researchers either choose heuristic values\nor use spatial distance-based optimization to determine an appropriate\nparameter set. We argue that existing research does not optimally address\ntemporal information and thus leaves much room for improvement. Considering\ntemporal constraints in human mobility, we introduce an effective clustering\napproach - namely POI clustering with temporal constraints (PC-TC) - to extract\nPOIs from spatio-temporal data of human mobility. Following human mobility\nnature in modern society, our approach aims to extract both global POIs (e.g.,\nworkplace or university) and local POIs (e.g., library, lab, and canteen).\nBased on two publicly available datasets including 193 individuals, our\nevaluation results show that PC-TC has much potential for next place prediction\nin terms of granularity (i.e., the number of extracted POIs) and\npredictability.\n", "title": "Clustering with Temporal Constraints on Spatio-Temporal Data of Human Mobility" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18714, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Generalizing several previous results in the literature on rational harmonic\nfunctions, we derive bounds on the maximum number of zeros of functions $f(z) =\n\\frac{p(z)}{q(z)} - \\overline{z}$, which depend on both $\\mathrm{deg}(p)$ and\n$\\mathrm{deg}(q)$. Furthermore, we prove that any function that attains one of\nthese upper bounds is regular.\n", "title": "The maximum number of zeros of $r(z) - \\overline{z}$ revisited" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18715, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " HL-LHC federates the efforts and R&D of a large international community\ntowards the ambitious HL- LHC objectives and contributes to establishing the\nEuropean Research Area (ERA) as a focal point of global research cooperation\nand a leader in frontier knowledge and technologies. HL-LHC relies on strong\nparticipation from various partners, in particular from leading US and Japanese\nlaboratories. This participation will be required for the execution of the\nconstruction phase as a global project. In particular, the US LHC Accelerator\nR&D Program (LARP) has developed some of the key technologies for the HL-LHC,\nsuch as the large-aperture niobium-tin ($Nb_{3}Sn) quadrupoles and the crab\ncavities. The proposed governance model is tailored accordingly and should pave\nthe way for the organization of the construction phase.\n", "title": "High Luminosity Large Hadron Collider HL-LHC" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18716, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Various approaches have been proposed to learn visuo-motor policies for\nreal-world robotic applications. One solution is first learning in simulation\nthen transferring to the real world. In the transfer, most existing approaches\nneed real-world images with labels. However, the labelling process is often\nexpensive or even impractical in many robotic applications. In this paper, we\npropose an adversarial discriminative sim-to-real transfer approach to reduce\nthe cost of labelling real data. The effectiveness of the approach is\ndemonstrated with modular networks in a table-top object reaching task where a\n7 DoF arm is controlled in velocity mode to reach a blue cuboid in clutter\nthrough visual observations. The adversarial transfer approach reduced the\nlabelled real data requirement by 50%. Policies can be transferred to real\nenvironments with only 93 labelled and 186 unlabelled real images. The\ntransferred visuo-motor policies are robust to novel (not seen in training)\nobjects in clutter and even a moving target, achieving a 97.8% success rate and\n1.8 cm control accuracy.\n", "title": "Adversarial Discriminative Sim-to-real Transfer of Visuo-motor Policies" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18717, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " In the derivation of the generating function of the Gaudin Hamiltonians with\nboundary terms, we follow the same approach used previously in the rational\ncase, which in turn was based on Sklyanin's method in the periodic case. Our\nderivation is centered on the quasi-classical expansion of the linear\ncombination of the transfer matrix of the XXZ Heisenberg spin chain and the\ncentral element, the so-called Sklyanin determinant. The corresponding Gaudin\nHamiltonians with boundary terms are obtained as the residues of the generating\nfunction. By defining the appropriate Bethe vectors which yield strikingly\nsimple off-shell action of the generating function, we fully implement the\nalgebraic Bethe ansatz, obtaining the spectrum of the generating function and\nthe corresponding Bethe equations.\n", "title": "Algebraic Bethe ansatz for the trigonometric sl(2) Gaudin model with triangular boundary" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18718, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We are now witnessing the increasing availability of event stream data, i.e.,\na sequence of events with each event typically being denoted by the time it\noccurs and its mark information (e.g., event type). A fundamental problem is to\nmodel and predict such kind of marked temporal dynamics, i.e., when the next\nevent will take place and what its mark will be. Existing methods either\npredict only the mark or the time of the next event, or predict both of them,\nyet separately. Indeed, in marked temporal dynamics, the time and the mark of\nthe next event are highly dependent on each other, requiring a method that\ncould simultaneously predict both of them. To tackle this problem, in this\npaper, we propose to model marked temporal dynamics by using a mark-specific\nintensity function to explicitly capture the dependency between the mark and\nthe time of the next event. Extensive experiments on two datasets demonstrate\nthat the proposed method outperforms state-of-the-art methods at predicting\nmarked temporal dynamics.\n", "title": "Marked Temporal Dynamics Modeling based on Recurrent Neural Network" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18719, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " In this paper we consider a network of agents monitoring a spatially\ndistributed arrival process. Each node measures the number of arrivals seen at\nits monitoring point in a given time-interval with the objective of estimating\nthe unknown local arrival rate. We propose an asynchronous distributed approach\nbased on a Bayesian model with unknown hyperparameter, where each node computes\nthe minimum mean square error (MMSE) estimator of its local arrival rate in a\ndistributed way. As a result, the estimation at each node \"optimally\" fuses the\ninformation from the whole network through a distributed optimization\nalgorithm. Moreover, we propose an ad-hoc distributed estimator, based on a\nconsensus algorithm for time-varying and directed graphs, which exhibits\nreduced complexity and exponential convergence. We analyze the performance of\nthe proposed distributed estimators, showing that they: (i) are reliable even\nin presence of limited local data, and (ii) improve the estimation accuracy\ncompared to the purely decentralized setup. Finally, we provide a statistical\ncharacterization of the proposed estimators. In particular, for the ad-hoc\nestimator, we show that as the number of nodes goes to infinity its mean square\nerror converges to the optimal one. Numerical Monte Carlo simulations confirm\nthe theoretical characterization and highlight the appealing performances of\nthe estimators.\n", "title": "A Bayesian framework for distributed estimation of arrival rates in asynchronous networks" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18720, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We analyze generic sequences for which the geometrically linear energy\n\\[E_\\eta(u,\\chi):= \\eta^{-\\frac{2}{3}}\\int_{B_{0}(1)} \\left| e(u)-\n\\sum_{i=1}^3 \\chi_ie_i\\right|^2 d x+\\eta^\\frac{1}{3} \\sum_{i=1}^3\n|D\\chi_i|(B_{0}(1))\\] remains bounded in the limit $\\eta \\to 0$. Here $ e(u)\n:=1/2(Du + Du^T)$ is the (linearized) strain of the displacement $u$, the\nstrains $e_i$ correspond to the martensite strains of a shape memory alloy\nundergoing cubic-to-tetragonal transformations and $\\chi_i:B_{0}(1) \\to\n\\{0,1\\}$ is the partition into phases. In this regime it is known that in\naddition to simple laminates also branched structures are possible, which if\naustenite was present would enable the alloy to form habit planes.\nIn an ansatz-free manner we prove that the alignment of macroscopic\ninterfaces between martensite twins is as predicted by well-known rank-one\nconditions. Our proof proceeds via the non-convex, non-discrete-valued\ndifferential inclusion \\[e(u) \\in \\bigcup_{1\\leq i\\neq j\\leq 3}\n\\operatorname{conv} \\{e_i,e_j\\}\\] satisfied by the weak limits of bounded\nenergy sequences and of which we classify all solutions. In particular, there\nexist no convex integration solutions of the inclusion with complicated\ngeometric structures.\n", "title": "Rigidity of branching microstructures in shape memory alloys" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18721, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Machine learning qualifies computers to assimilate with data, without being\nsolely programmed [1, 2]. Machine learning can be classified as supervised and\nunsupervised learning. In supervised learning, computers learn an objective\nthat portrays an input to an output hinged on training input-output pairs [3].\nMost efficient and widely used supervised learning algorithms are K-Nearest\nNeighbors (KNN), Support Vector Machine (SVM), Large Margin Nearest Neighbor\n(LMNN), and Extended Nearest Neighbor (ENN). The main contribution of this\npaper is to implement these elegant learning algorithms on eleven different\ndatasets from the UCI machine learning repository to observe the variation of\naccuracies for each of the algorithms on all datasets. Analyzing the accuracy\nof the algorithms will give us a brief idea about the relationship of the\nmachine learning algorithms and the data dimensionality. All the algorithms are\ndeveloped in Matlab. Upon such accuracy observation, the comparison can be\nbuilt among KNN, SVM, LMNN, and ENN regarding their performances on each\ndataset.\n", "title": "Study and Observation of the Variation of Accuracies of KNN, SVM, LMNN, ENN Algorithms on Eleven Different Datasets from UCI Machine Learning Repository" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18722, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Network classification has a variety of applications, such as detecting\ncommunities within networks and finding similarities between those representing\ndifferent aspects of the real world. However, most existing work in this area\nfocus on examining static undirected networks without considering directed\nedges or temporality. In this paper, we propose a new methodology that utilizes\nfeature representation for network classification based on the temporal motif\ndistribution of the network and a null model for comparing against random\ngraphs. Experimental results show that our method improves accuracy by up\n$10\\%$ compared to the state-of-the-art embedding method in network\nclassification, for tasks such as classifying network type, identifying\ncommunities in email exchange network, and identifying users given their\napp-switching behaviors.\n", "title": "Network Classification in Temporal Networks Using Motifs" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18723, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " In this paper, we show that $\\mathrm{RT}^{2}+\\mathsf{WKL}_0$ is a\n$\\Pi^{1}_{1}$-conservative extension of $\\mathrm{B}\\Sigma^0_3$.\n", "title": "The strength of Ramsey's theorem for pairs and arbitrarily many colors" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18724, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " This paper considers the joint design of user power allocation and relay\nbeamforming in relaying communications, in which multiple pairs of\nsingle-antenna users exchange information with each other via multiple-antenna\nrelays in two time slots. All users transmit their signals to the relays in the\nfirst time slot while the relays broadcast the beamformed signals to all users\nin the second time slot. The aim is to maximize the system's energy efficiency\n(EE) subject to quality-of-service (QoS) constraints in terms of exchange\nthroughput requirements. The QoS constraints are nonconvex with many nonlinear\ncross-terms, so finding a feasible point is already computationally\nchallenging. The sum throughput appears in the numerator while the total\nconsumption power appears in the denominator of the EE objective function. The\nformer is a nonconcave function and the latter is a nonconvex function, making\nfractional programming useless for EE optimization. Nevertheless, efficient\niterations of low complexity to obtain its optimized solutions are developed.\nThe performances of the multiple-user and multiple-relay networks under various\nscenarios are evaluated to show the merit of the paper development.\n", "title": "Joint Power Allocation and Beamforming for Energy-Efficient Two-Way Multi-Relay Communications" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18725, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Armed conflict has led to an unprecedented number of internally displaced\npersons (IDPs) - individuals who are forced out of their homes but remain\nwithin their country. IDPs often urgently require shelter, food, and\nhealthcare, yet prediction of when large fluxes of IDPs will cross into an area\nremains a major challenge for aid delivery organizations. Accurate forecasting\nof IDP migration would empower humanitarian aid groups to more effectively\nallocate resources during conflicts. We show that monthly flow of IDPs from\nprovince to province in both Syria and Yemen can be accurately forecasted one\nmonth in advance, using publicly available data. We model monthly IDP flow\nusing data on food price, fuel price, wage, geospatial, and news data. We find\nthat machine learning approaches can more accurately forecast migration trends\nthan baseline persistence models. Our findings thus potentially enable\nproactive aid allocation for IDPs in anticipation of forecasted arrivals.\n", "title": "Forecasting Internally Displaced Population Migration Patterns in Syria and Yemen" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18726, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Here we deconstruct, and then in a reasoned way reconstruct, the concept of\n\"entropy of a system,\" paying particular attention to where the randomness may\nbe coming from. We start with the core concept of entropy as a COUNT associated\nwith a DESCRIPTION; this count (traditionally expressed in logarithmic form for\na number of good reasons) is in essence the number of possibilities---specific\ninstances or \"scenarios,\" that MATCH that description. Very natural (and\nvirtually inescapable) generalizations of the idea of description are the\nprobability distribution and of its quantum mechanical counterpart, the density\noperator.\nWe track the process of dynamically updating entropy as a system evolves.\nThree factors may cause entropy to change: (1) the system's INTERNAL DYNAMICS;\n(2) unsolicited EXTERNAL INFLUENCES on it; and (3) the approximations one has\nto make when one tries to predict the system's future state. The latter task is\nusually hampered by hard-to-quantify aspects of the original description,\nlimited data storage and processing resource, and possibly algorithmic\ninadequacy. Factors 2 and 3 introduce randomness into one's predictions and\naccordingly degrade them. When forecasting, as long as the entropy bookkeping\nis conducted in an HONEST fashion, this degradation will ALWAYS lead to an\nentropy increase.\nTo clarify the above point we introduce the notion of HONEST ENTROPY, which\ncoalesces much of what is of course already done, often tacitly, in responsible\nentropy-bookkeping practice. This notion, we believe, will help to fill an\nexpressivity gap in scientific discourse. With its help we shall prove that ANY\ndynamical system---not just our physical universe---strictly obeys Clausius's\noriginal formulation of the second law of thermodynamics IF AND ONLY IF it is\ninvertible. Thus this law is a TAUTOLOGICAL PROPERTY of invertible systems!\n", "title": "Entropy? Honest!" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18727, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We study the problem of generating adversarial examples in a black-box\nsetting in which only loss-oracle access to a model is available. We introduce\na framework that conceptually unifies much of the existing work on black-box\nattacks, and we demonstrate that the current state-of-the-art methods are\noptimal in a natural sense. Despite this optimality, we show how to improve\nblack-box attacks by bringing a new element into the problem: gradient priors.\nWe give a bandit optimization-based algorithm that allows us to seamlessly\nintegrate any such priors, and we explicitly identify and incorporate two\nexamples. The resulting methods use two to four times fewer queries and fail\ntwo to five times less often than the current state-of-the-art.\n", "title": "Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18728, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We introduce a dynamic model of the default waterfall of derivatives CCPs and\npropose a risk sensitive method for sizing the initial margin (IM), and the\ndefault fund (DF) and its allocation among clearing members. Using a Markovian\nstructure model of joint credit migrations, our evaluation of DF takes into\naccount the joint credit quality of clearing members as they evolve over time.\nAnother important aspect of the proposed methodology is the use of the time\nconsistent dynamic risk measures for computation of IM and DF. We carry out a\ncomprehensive numerical study, where, in particular, we analyze the advantages\nof the proposed methodology and its comparison with the currently prevailing\nmethods used in industry.\n", "title": "A Dynamic Model of Central Counterparty Risk" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18729, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " In this paper, we propose a novel reception/transmission scheme for\nhalf-duplex base stations (BSs). In particular, we propose a half-duplex BS\nthat employes in-band uplink-receptions from user 1 and downlink-transmissions\nto user 2, which occur in different time slots. Furthermore, we propose optimal\nadaptive scheduling of the in-band uplink-receptions and downlink-transmissions\nof the BS such that the uplink-downlink rate/throughput region is maximized and\nthe outage probabilities of the uplink and downlink channels are minimized.\nPractically, this results in selecting whether in a given time slot the BS\nshould receive from user 1 or transmit to user 2 based on the qualities of the\nin-band uplink-reception and downlink-transmission channels. Compared to the\nperformance achieved with a conventional full-duplex division (FDD) base\nstation, two main gains can be highlighted: 1) Increased uplink-downlink\nrate/throughput region; 2) Doubling of the diversity gain of both the uplink\nand downlink channels.\n", "title": "Half-Duplex Base Station with Adaptive Scheduling of the in-Band Uplink-Receptions and Downlink-Transmissions" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18730, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " A correlation between giant-planet mass and atmospheric heavy elemental\nabundance was first noted in the past century from observations of planets in\nour own Solar System, and has served as a cornerstone of planet formation\ntheory. Using data from the Hubble and Spitzer Space Telescopes from 0.5 to 5\nmicrons, we conducted a detailed atmospheric study of the transiting\nNeptune-mass exoplanet HAT-P-26b. We detected prominent H2O absorption bands\nwith a maximum base-to-peak amplitude of 525ppm in the transmission spectrum.\nUsing the water abundance as a proxy for metallicity, we measured HAT-P-26b's\natmospheric heavy element content [4.8 (-4.0 +21.5) times solar]. This likely\nindicates that HAT-P-26b's atmosphere is primordial and obtained its gaseous\nenvelope late in its disk lifetime, with little contamination from metal-rich\nplanetesimals.\n", "title": "HAT-P-26b: A Neptune-Mass Exoplanet with a Well Constrained Heavy Element Abundance" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18731, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Cooperative geolocation has attracted significant research interests in\nrecent years. A large number of localization algorithms rely on the\navailability of statistical knowledge of measurement errors, which is often\ndifficult to obtain in practice. Compared with the statistical knowledge of\nmeasurement errors, it can often be easier to obtain the measurement error\nbound. This work investigates a localization problem assuming unknown\nmeasurement error distribution except for a bound on the error. We first\nformulate this localization problem as an optimization problem to minimize the\nworst-case estimation error, which is shown to be a non-convex optimization\nproblem. Then, relaxation is applied to transform it into a convex one.\nFurthermore, we propose a distributed algorithm to solve the problem, which\nwill converge in a few iterations. Simulation results show that the proposed\nalgorithms are more robust to large measurement errors than existing algorithms\nin the literature. Geometrical analysis providing additional insights is also\nprovided.\n", "title": "Robust Localization Using Range Measurements with Unknown and Bounded Errors" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18732, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We define a family of intuitionistic non-normal modal logics; they can bee\nseen as intuitionistic counterparts of classical ones. We first consider\nmonomodal logics, which contain only one between Necessity and Possibility. We\nthen consider the more important case of bimodal logics, which contain both\nmodal operators. In this case we define several interactions between Necessity\nand Possibility of increasing strength, although weaker than duality. For all\nlogics we provide both a Hilbert axiomatisation and a cut-free sequent\ncalculus, on its basis we also prove their decidability. We then give a\nsemantic characterisation of our logics in terms of neighbourhood models. Our\nsemantic framework captures modularly not only our systems but also already\nknown intuitionistic non-normal modal logics such as Constructive K (CK) and\nthe propositional fragment of Wijesekera's Constructive Concurrent Dynamic\nLogic.\n", "title": "Intuitionistic Non-Normal Modal Logics: A general framework" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18733, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " This paper investigates the impact of link formation between a pair of agents\non resource availability of other agents in a social cloud network, which is a\nspecial case of socially-based resource sharing systems. Specifically, we study\nthe correlation between externalities, network size, and network density.\nWe first conjecture and experimentally support that if an agent experiences\npositive externalities, then its closeness (harmonic centrality measure) should\nincrease. Next, we show the following for ring networks: in less populated\nnetworks no agent experiences positive externalities; in more populated\nnetworks a set of agents experience positive externalities, and larger the\ndistance between agents forming a link, more the number of beneficiaries; and\nthe number of beneficiaries is always less than the number of\nnon-beneficiaries. Finally, we show that network density is inversely\nproportional to positive externalities, and further, it plays a crucial role in\ndetermining the kind of externalities.\n", "title": "Externalities in Socially-Based Resource Sharing Network" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18734, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Consider the problem: given data pair $(\\mathbf{x}, \\mathbf{y})$ drawn from a\npopulation with $f_*(x) = \\mathbf{E}[\\mathbf{y} | \\mathbf{x} = x]$, specify a\nneural network and run gradient flow on the weights over time until reaching\nany stationarity. How does $f_t$, the function computed by the neural network\nat time $t$, relate to $f_*$, in terms of approximation and representation?\nWhat are the provable benefits of the adaptive representation by neural\nnetworks compared to the pre-specified fixed basis representation in the\nclassical nonparametric literature? We answer the above questions via a dynamic\nreproducing kernel Hilbert space (RKHS) approach indexed by the training\nprocess of neural networks. We show that when reaching any local stationarity,\ngradient flow learns an adaptive RKHS representation, and performs the global\nleast squares projection onto the adaptive RKHS, simultaneously. In addition,\nwe prove that as the RKHS is data-adaptive and task-specific, the residual for\n$f_*$ lies in a subspace that is smaller than the orthogonal complement of the\nRKHS, formalizing the representation and approximation benefits of neural\nnetworks.\n", "title": "Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18735, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We treat utility maximization from terminal wealth for an agent with utility\nfunction $U:\\mathbb{R}\\to\\mathbb{R}$ who dynamically invests in a\ncontinuous-time financial market and receives a possibly unbounded random\nendowment. We prove the existence of an optimal investment without introducing\nthe associated dual problem. We rely on a recent result of Orlicz space theory,\ndue to Delbaen and Owari which leads to a simple and transparent proof.\nOur results apply to non-smooth utilities and even strict concavity can be\nrelaxed. We can handle certain random endowments with non-hedgeable risks,\ncomplementing earlier papers. Constraints on the terminal wealth can also be\nincorporated.\nAs examples, we treat frictionless markets with finitely many assets and\nlarge financial markets.\n", "title": "On utility maximization without passing by the dual problem" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18736, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Learning-based binary hashing has become a powerful paradigm for fast search\nand retrieval in massive databases. However, due to the requirement of discrete\noutputs for the hash functions, learning such functions is known to be very\nchallenging. In addition, the objective functions adopted by existing hashing\ntechniques are mostly chosen heuristically. In this paper, we propose a novel\ngenerative approach to learn hash functions through Minimum Description Length\nprinciple such that the learned hash codes maximally compress the dataset and\ncan also be used to regenerate the inputs. We also develop an efficient\nlearning algorithm based on the stochastic distributional gradient, which\navoids the notorious difficulty caused by binary output constraints, to jointly\noptimize the parameters of the hash function and the associated generative\nmodel. Extensive experiments on a variety of large-scale datasets show that the\nproposed method achieves better retrieval results than the existing\nstate-of-the-art methods.\n", "title": "Stochastic Generative Hashing" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science", "Statistics" ], annotation_agent: null, multi_label: true
explanation: null, id: 18737, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " The idea of incompetence as a learning or adaptation function was introduced\nin the context of evolutionary games as a fixed parameter. However, live\norganisms usually perform different nonlinear adaptation functions such as a\npower law or exponential fitness growth. Here, we examine how the functional\nform of the learning process may affect the social competition between\ndifferent behavioral types. Further, we extend our results for the evolutionary\ngames where fluctuations in the environment affect the behavioral adaptation of\ncompeting species and demonstrate importance of the starting level of\nincompetence for survival. Hence, we define a new concept of learning\nadvantages that becomes crucial when environments are constantly changing and\nrequiring rapid adaptation from species. This may lead to the evolutionarily\nweak phase when even evolutionary stable populations become vulnerable to\ninvasions.\n", "title": "Nonlinear learning and learning advantages in evolutionary games" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18738, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We analyze the information-theoretic limits for the recovery of node labels\nin several network models. This includes the Stochastic Block Model, the\nExponential Random Graph Model, the Latent Space Model, the Directed\nPreferential Attachment Model, and the Directed Small-world Model. For the\nStochastic Block Model, the non-recoverability condition depends on the\nprobabilities of having edges inside a community, and between different\ncommunities. For the Latent Space Model, the non-recoverability condition\ndepends on the dimension of the latent space, and how far and spread are the\ncommunities in the latent space. For the Directed Preferential Attachment Model\nand the Directed Small-world Model, the non-recoverability condition depends on\nthe ratio between homophily and neighborhood size. We also consider dynamic\nversions of the Stochastic Block Model and the Latent Space Model.\n", "title": "Information-theoretic Limits for Community Detection in Network Models" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18739, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We axiomatize the molecular-biology reasoning style, verify compliance of the\nstandard reference: Ptashne, A Genetic Switch, and present proof-theory-induced\ntechnologies to predict phenotypes and life cycles from genotypes. The key is\nto note that `reductionist discipline' entails constructive reasoning, i.e.,\nthat any argument for a compound property is constructed from more basic\narguments. Proof theory makes explicit the inner structure of the axiomatized\nreasoning style and allows the permissible dynamics to be presented as a mode\nof computation that can be executed and analyzed. Constructivity and\nexecutability guarantee simulation when working over domain-specific languages.\nHere, we exhibit phenotype properties for genotype reasons: a molecular-biology\nargument is an open-system concurrent computation that results in compartment\nchanges and is performed among processes of physiology change as determined\nfrom the molecular programming of given DNA. Life cycles are the possible\nsequentializations of the processes. A main implication of our construction is\nthat technical correctness provides a complementary perspective on science that\nis as fundamental there as it is for pure mathematics, provided mature\nreductionism exists.\n", "title": "Proofs of life: molecular-biology reasoning simulates cell behaviors from first principles" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18740, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We briefly recall the history of the Nijenhuis torsion of (1,1)-tensors on\nmanifolds and of the lesser-known Haantjes torsion. We then show how the\nHaantjes manifolds of Magri and the symplectic-Haantjes structures of Tempesta\nand Tondo generalize the classical approach to integrable systems in the\nbi-hamiltonian and symplectic-Nijenhuis formalisms, the sequence of powers of\nthe recursion operator being replaced by a family of commuting Haantjes\noperators.\n", "title": "Beyond recursion operators" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18741, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Recently, a hydrodynamic description of local equilibrium dynamics in quantum\nintegrable systems was discovered. In the diffusionless limit, this is\nequivalent to a certain \"Bethe-Boltzmann\" kinetic equation, which has the form\nof an integro-differential conservation law in $(1+1)$D. The purpose of the\npresent work is to investigate the sense in which the Bethe-Boltzmann equation\ndefines an \"integrable kinetic equation\". To this end, we study a class of $N$\ndimensional systems of evolution equations that arise naturally as\nfinite-dimensional approximations to the Bethe-Boltzmann equation. We obtain\nnon-local Poisson brackets and Hamiltonian densities for these equations and\nderive an infinite family of first integrals, parameterized by $N$ functional\ndegrees of freedom. We find that the conserved charges arising from quantum\nintegrability map to Casimir invariants of the hydrodynamic bracket and their\ngroup velocities map to Hamiltonian flows. Some results from the\nfinite-dimensional setting extend to the underlying integro-differential\nequation, providing evidence for its integrability in the hydrodynamic sense.\n", "title": "On Classical Integrability of the Hydrodynamics of Quantum Integrable Systems" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18742, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Plane Poiseuille flow, the pressure driven flow between parallel plates,\nshows a route to turbulence connected with a linear instability to\nTollmien-Schlichting (TS) waves, and another one, the bypass transition, that\nis triggered with finite amplitude perturbation. We use direct numerical\nsimulations to explore the arrangement of the different routes to turbulence\namong the set of initial conditions. For plates that are a distance $2H$ apart\nand in a domain of width $2\\pi H$ and length $2\\pi H$ the subcritical\ninstability to TS waves sets in at $Re_{c}=5815$ that extends down to\n$Re_{TS}\\approx4884$. The bypass route becomes available above $Re_E=459$ with\nthe appearance of three-dimensional finite-amplitude traveling waves. The\nbypass transition covers a large set of finite amplitude perturbations. Below\n$Re_c$, TS appear for a tiny set of initial conditions that grows with\nincreasing Reynolds number. Above $Re_c$ the previously stable region becomes\nunstable via TS waves, but a sharp transition to the bypass route can still be\nidentified. Both routes lead to the same turbulent in the final stage of the\ntransition, but on different time scales. Similar phenomena can be expected in\nother flows where two or more routes to turbulence compete.\n", "title": "Transition to turbulence when the Tollmien-Schlichting and bypass routes coexist" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18743, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " With an exponentially growing number of scientific papers published each\nyear, advanced tools for exploring and discovering publications of interest are\nbecoming indispensable. To empower users beyond a simple keyword search\nprovided e.g. by Google Scholar, we present the novel web application PubVis.\nPowered by a variety of machine learning techniques, it combines essential\nfeatures to help researchers find the content most relevant to them. An\ninteractive visualization of a large collection of scientific publications\nprovides an overview of the field and encourages the user to explore articles\nbeyond a narrow research focus. This is augmented by personalized content based\narticle recommendations as well as an advanced full text search to discover\nrelevant references. The open sourced implementation of the app can be easily\nset up and run locally on a desktop computer to provide access to content\ntailored to the specific needs of individual users. Additionally, a PubVis demo\nwith access to a collection of 10,000 papers can be tested online.\n", "title": "Interactive Exploration and Discovery of Scientific Publications with PubVis" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science" ], annotation_agent: null, multi_label: true
explanation: null, id: 18744, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " In many phase II trials in solid tumours, patients are assessed using\nendpoints based on the Response Evaluation Criteria in Solid Tumours (RECIST)\nscale. Often, analyses are based on the response rate. This is the proportion\nof patients who have an observed tumour shrinkage above a pre-defined level and\nno new tumour lesions. The augmented binary method has been proposed to improve\nthe precision of the estimator of the response rate. The method involves\nmodelling the tumour shrinkage to avoid dichotomising it. However, in many\ntrials the best observed response is used as the primary outcome. In such\ntrials, patients are followed until progression, and their best observed RECIST\noutcome is used as the primary endpoint. In this paper, we propose a method\nthat extends the augmented binary method so that it can be used when the\noutcome is best observed response. We show through simulated data and data from\na real phase II cancer trial that this method improves power in both single-arm\nand randomised trials. The average gain in power compared to the traditional\nanalysis is equivalent to approximately a 35% increase in sample size. A\nmodified version of the method is proposed to reduce the computational effort\nrequired. We show this modified method maintains much of the efficiency\nadvantages.\n", "title": "Improving phase II oncology trials using best observed RECIST response as an endpoint by modelling continuous tumour measurements" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18745, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We provide explicit formulas of Evans kernels, Evans-Selberg potentials and\nfundamental metrics on potential-theoretically parabolic planar domains.\n", "title": "Evans-Selberg potential on planar domains" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18746, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " What is the current state-of-the-art for image restoration and enhancement\napplied to degraded images acquired under less than ideal circumstances? Can\nthe application of such algorithms as a pre-processing step to improve image\ninterpretability for manual analysis or automatic visual recognition to\nclassify scene content? While there have been important advances in the area of\ncomputational photography to restore or enhance the visual quality of an image,\nthe capabilities of such techniques have not always translated in a useful way\nto visual recognition tasks. Consequently, there is a pressing need for the\ndevelopment of algorithms that are designed for the joint problem of improving\nvisual appearance and recognition, which will be an enabling factor for the\ndeployment of visual recognition tools in many real-world scenarios. To address\nthis, we introduce the UG^2 dataset as a large-scale benchmark composed of\nvideo imagery captured under challenging conditions, and two enhancement tasks\ndesigned to test algorithmic impact on visual quality and automatic object\nrecognition. Furthermore, we propose a set of metrics to evaluate the joint\nimprovement of such tasks as well as individual algorithmic advances, including\na novel psychophysics-based evaluation regime for human assessment and a\nrealistic set of quantitative measures for object recognition performance. We\nintroduce six new algorithms for image restoration or enhancement, which were\ncreated as part of the IARPA sponsored UG^2 Challenge workshop held at CVPR\n2018. Under the proposed evaluation regime, we present an in-depth analysis of\nthese algorithms and a host of deep learning-based and classic baseline\napproaches. From the observed results, it is evident that we are in the early\ndays of building a bridge between computational photography and visual\nrecognition, leaving many opportunities for innovation in this area.\n", "title": "Bridging the Gap Between Computational Photography and Visual Recognition" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18747, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Let $\\mu_1 \\ge \\dotsc \\ge \\mu_n > 0$ and $\\mu_1 + \\dotsm + \\mu_n = 1$. Let\n$X_1, \\dotsc, X_n$ be independent non-negative random variables with $EX_1 =\n\\dotsc = EX_n = 1$, and let $Z = \\sum_{i=1}^n \\mu_i X_i$. Let $M = \\max_{1 \\le\ni \\le n} \\mu_i = \\mu_1$, and let $\\delta > 0$ and $T = 1 + \\delta$. Both\nSamuels and Feige formulated conjectures bounding the probability $P(Z < T)$\nfrom above. We prove that Samuels' conjecture implies a conjecture of Feige.\n", "title": "On some conjectures of Samuels and Feige" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18748, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Making the right decision in traffic is a challenging task that is highly\ndependent on individual preferences as well as the surrounding environment.\nTherefore it is hard to model solely based on expert knowledge. In this work we\nuse Deep Reinforcement Learning to learn maneuver decisions based on a compact\nsemantic state representation. This ensures a consistent model of the\nenvironment across scenarios as well as a behavior adaptation function,\nenabling on-line changes of desired behaviors without re-training. The input\nfor the neural network is a simulated object list similar to that of Radar or\nLidar sensors, superimposed by a relational semantic scene description. The\nstate as well as the reward are extended by a behavior adaptation function and\na parameterization respectively. With little expert knowledge and a set of\nmid-level actions, it can be seen that the agent is capable to adhere to\ntraffic rules and learns to drive safely in a variety of situations.\n", "title": "Adaptive Behavior Generation for Autonomous Driving using Deep Reinforcement Learning with Compact Semantic States" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18749, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " We present Sequential Neural Likelihood (SNL), a new method for Bayesian\ninference in simulator models, where the likelihood is intractable but\nsimulating data from the model is possible. SNL trains an autoregressive flow\non simulated data in order to learn a model of the likelihood in the region of\nhigh posterior density. A sequential training procedure guides simulations and\nreduces simulation cost by orders of magnitude. We show that SNL is more\nrobust, more accurate and requires less tuning than related neural-based\nmethods, and we discuss diagnostics for assessing calibration, convergence and\ngoodness-of-fit.\n", "title": "Sequential Neural Likelihood: Fast Likelihood-free Inference with Autoregressive Flows" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true
explanation: null, id: 18750, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
inputs:
{ "abstract": " Generating diverse questions for given images is an important task for\ncomputational education, entertainment and AI assistants. Different from many\nconventional prediction techniques is the need for algorithms to generate a\ndiverse set of plausible questions, which we refer to as \"creativity\". In this\npaper we propose a creative algorithm for visual question generation which\ncombines the advantages of variational autoencoders with long short-term memory\nnetworks. We demonstrate that our framework is able to generate a large set of\nvarying questions given a single input image.\n", "title": "Creativity: Generating Diverse Questions using Variational Autoencoders" }
null
null
null
null
true
null
18751
null
Default
null
null
null
{ "abstract": " We demonstrated sympathetic cooling of a single ion in a buffer gas of\nultracold atoms with small mass. Efficient collisional cooling was realized by\nsuppressing collision-induced heating. We attempt to explain the experimental\nresults with a simple rate equation model and provide a quantitative discussion\nof the cooling efficiency per collision. The knowledge we obtained in this work\nis an important ingredient for advancing the technique of sympathetic cooling\nof ions with neutral atoms.\n", "title": "Cooling dynamics of a single trapped ion via elastic collisions with small-mass atoms" }
null
null
null
null
true
null
18752
null
Default
null
null
null
{ "abstract": " We determine the stability and instability of a sufficiently small and\nperiodic traveling wave to long wavelength perturbations, for a nonlinear\ndispersive equation which extends a Camassa-Holm equation to include all the\ndispersion of water waves and the Whitham equation to include nonlinearities of\nmedium amplitude waves. In the absence of the effects of surface tension, the\nresult qualitatively agrees with the Benjamin-Feir instability of a Stokes\nwave. In the presence of the effects of surface tension, it qualitatively\nagrees with those from formal asymptotic expansions of the physical problem and\nit improves upon that for the Whitham equation, correctly predicting the limit\nof strong surface tension. We discuss the modulational stability and\ninstability in the Camassa-Holm equation and related models.\n", "title": "Modulational instability in the full-dispersion Camassa-Holm equation" }
null
null
[ "Physics", "Mathematics" ]
null
true
null
18753
null
Validated
null
null
null
{ "abstract": " Agent modelling involves considering how other agents will behave, in order\nto influence your own actions. In this paper, we explore the use of agent\nmodelling in the hidden-information, collaborative card game Hanabi. We\nimplement a number of rule-based agents, both from the literature and of our\nown devising, in addition to an Information Set Monte Carlo Tree Search\n(IS-MCTS) agent. We observe poor results from IS-MCTS, so construct a new,\npredictor version that uses a model of the agents with which it is paired. We\nobserve a significant improvement in game-playing strength from this agent in\ncomparison to IS-MCTS, resulting from its consideration of what the other\nagents in a game would do. In addition, we create a flawed rule-based agent to\nhighlight the predictor's capabilities with such an agent.\n", "title": "Evaluating and Modelling Hanabi-Playing Agents" }
null
null
null
null
true
null
18754
null
Default
null
null
null
{ "abstract": " Let q be a power of a prime and let V be a vector space of finite dimension n\nover the field of order q. Let Bil(V) denote the set of all bilinear forms\ndefined on V x V, let Symm(V) denote the subspace of Bil(V) consisting of\nsymmetric bilinear forms, and Alt(V) denote the subspace of alternating\nbilinear forms. Let M denote a subspace of any of the spaces Bil(V), Symm(V),\nor Alt(V). In this paper we investigate hypotheses on the rank of the non-zero\nelements of M which lead to reasonable bounds for dim M. Typically, we look at\nthe case where exactly two or three non-zero ranks occur, one of which is\nusually n. In the case that M achieves the maximal dimension predicted by the\ndimension bound, we try to enumerate the number of forms of a given rank in M\nand describe geometric properties of the radicals of the degenerate elements of\nM.\n", "title": "Rank-related dimension bounds for subspaces of bilinear forms over finite fields" }
null
null
null
null
true
null
18755
null
Default
null
null
null
{ "abstract": " Bayesian Networks have been widely used in the last decades in many fields,\nto describe statistical dependencies among random variables. In general,\nlearning the structure of such models is a problem with considerable\ntheoretical interest that still poses many challenges. On the one hand, this is\na well-known NP-complete problem, which is practically hardened by the huge\nsearch space of possible solutions. On the other hand, the phenomenon of\nI-equivalence, i.e., different graphical structures underpinning the same set\nof statistical dependencies, may lead to multimodal fitness landscapes further\nhindering maximum likelihood approaches to solve the task. Despite all these\ndifficulties, greedy search methods based on a likelihood score coupled with a\nregularization term to account for model complexity, have been shown to be\nsurprisingly effective in practice. In this paper, we consider the formulation\nof the task of learning the structure of Bayesian Networks as an optimization\nproblem based on a likelihood score. Nevertheless, our approach do not adjust\nthis score by means of any of the complexity terms proposed in the literature;\ninstead, it accounts directly for the complexity of the discovered solutions by\nexploiting a multi-objective optimization procedure. To this extent, we adopt\nNSGA-II and define the first objective function to be the likelihood of a\nsolution and the second to be the number of selected arcs. We thoroughly\nanalyze the behavior of our method on a wide set of simulated data, and we\ndiscuss the performance considering the goodness of the inferred solutions both\nin terms of their objective functions and with respect to the retrieved\nstructure. Our results show that NSGA-II can converge to solutions\ncharacterized by better likelihood and less arcs than classic approaches,\nalthough paradoxically frequently characterized by a lower similarity to the\ntarget network.\n", "title": "Multi-objective optimization to explicitly account for model complexity when learning Bayesian Networks" }
null
null
null
null
true
null
18756
null
Default
null
null
null
{ "abstract": " In this paper, we will demonstrate how Manhattan structure can be exploited\nto transform the Simultaneous Localization and Mapping (SLAM) problem, which is\ntypically solved by a nonlinear optimization over feature positions, into a\nmodel selection problem solved by a convex optimization over higher order\nlayout structures, namely walls, floors, and ceilings. Furthermore, we show how\nour novel formulation leads to an optimization procedure that automatically\nperforms data association and loop closure and which ultimately produces the\nsimplest model of the environment that is consistent with the available\nmeasurements. We verify our method on real world data sets collected with\nvarious sensing modalities.\n", "title": "Simultaneous Localization and Layout Model Selection in Manhattan Worlds" }
null
null
null
null
true
null
18757
null
Default
null
null
null
{ "abstract": " We analyzed the performance of a biologically inspired algorithm called the\nCorrected Projections Algorithm (CPA) when a sparseness constraint is required\nto unambiguously reconstruct an observed signal using atoms from an\novercomplete dictionary. By changing the geometry of the estimation problem,\nCPA gives an analytical expression for a binary variable that indicates the\npresence or absence of a dictionary atom using an L2 regularizer. The\nregularized solution can be implemented using an efficient real-time\nKalman-filter type of algorithm. The smoother L2 regularization of CPA makes it\nvery robust to noise, and CPA outperforms other methods in identifying known\natoms in the presence of strong novel atoms in the signal.\n", "title": "Robust method for finding sparse solutions to linear inverse problems using an L2 regularization" }
null
null
null
null
true
null
18758
null
Default
null
null
null
{ "abstract": " Application of humanoid robots has been common in the field of healthcare and\neducation. It has been recurrently used to improve social behavior and mollify\ndistress level among children with autism, cancer and cerebral palsy. This\narticle discusses the same from a human factors perspective. It shows how\npeople of different age and gender have a different opinion towards the\napplication and acceptance of humanoid robots. Additionally, this article\nhighlights the influence of cerebral condition and social interaction on a user\nbehavior and attitude towards humanoid robots. Our study performed a literature\nreview and found that (a) children and elderly individuals prefer humanoid\nrobots due to inactive social interaction, (b) The deterministic behavior of\nhumanoid robots can be acknowledged to improve social behavior of autistic\nchildren, (c) Trust on humanoid robots is highly driven by its application and\na user age, gender, and social life.\n", "title": "Humanoid Robot-Application and Influence" }
null
null
null
null
true
null
18759
null
Default
null
null
null
{ "abstract": " The goal of this note is to show that, also in a bounded domain $\\Omega\n\\subset \\mathbb{R}^n$, with $\\partial \\Omega\\in C^2$, any weak solution,\n$(u(x,t),p(x,t))$, of the Euler equations of ideal incompressible fluid in\n$\\Omega\\times (0,T) \\subset \\mathbb{R}^n\\times\\mathbb{R}_t$, with the\nimpermeability boundary condition: $u\\cdot \\vec n =0$ on\n$\\partial\\Omega\\times(0,T)$, is of constant energy on the interval $(0,T)$\nprovided the velocity field $u \\in L^3((0,T);\nC^{0,\\alpha}(\\overline{\\Omega}))$, with $\\alpha>\\frac13\\,.$\n", "title": "Onsager's Conjecture for the Incompressible Euler Equations in Bounded Domains" }
null
null
null
null
true
null
18760
null
Default
null
null
null
{ "abstract": " The rapid advancement in high-throughput techniques has fueled the generation\nof large volume of biological data rapidly with low cost. Some of these\ntechniques are microarray and next generation sequencing which provides genome\nlevel insight of living cells. As a result, the size of most of the biological\ndatabases, such as NCBI-GEO, NCBI-SRA, is exponentially growing. These\nbiological data are analyzed using computational techniques for knowledge\ndiscovery - which is one of the objectives of bioinformatics research. Gene\nregulatory network (GRN) is a gene-gene interaction network which plays pivotal\nrole in understanding gene regulation process and disease studies. From the\nlast couple of decades, the researchers are interested in developing\ncomputational algorithms for GRN inference (GRNI) using high-throughput\nexperimental data. Several computational approaches have been applied for\ninferring GRN from gene expression data including statistical techniques\n(correlation coefficient), information theory (mutual information), regression\nbased approaches, probabilistic approaches (Bayesian networks, naive byes),\nartificial neural networks, and fuzzy logic. The fuzzy logic, along with its\nhybridization with other intelligent approach, is well studied in GRNI due to\nits several advantages. In this paper, we present a consolidated review on\nfuzzy logic and its hybrid approaches for GRNI developed during last two\ndecades.\n", "title": "Fuzzy logic based approaches for gene regulatory network inference" }
null
null
null
null
true
null
18761
null
Default
null
null
null
{ "abstract": " We review instrumentation for nuclear magnetic resonance (NMR) in zero and\nultra-low magnetic field (ZULF, below 0.1 $\\mu$T) where detection is based on a\nlow-cost, non-cryogenic, spin-exchange relaxation free (SERF) $^{87}$Rb atomic\nmagnetometer. The typical sensitivity is 20-30 fT/Hz$^{1/2}$ for signal\nfrequencies below 1 kHz and NMR linewidths range from Hz all the way down to\ntens of mHz. These features enable precision measurements of chemically\ninformative nuclear spin-spin couplings as well as nuclear spin precession in\nultra-low magnetic fields.\n", "title": "Instrumentation for nuclear magnetic resonance in zero and ultralow magnetic field" }
null
null
null
null
true
null
18762
null
Default
null
null
null
{ "abstract": " It is shown that CH implies the existence of a compact Hausdorff space that\nis countable dense homogeneous, crowded and does not contain topological copies\nof the Cantor set. This contrasts with a previous result by the author which\nsays that for any crowded Hausdorff space $X$ of countable $\\pi$-weight, if\n${}^\\omega{X}$ is countable dense homogeneous, then $X$ must contain a\ntopological copy of the Cantor set.\n", "title": "Countable dense homogeneity and the Cantor set" }
null
null
null
null
true
null
18763
null
Default
null
null
null
{ "abstract": " Information about intrinsic dimension is crucial to perform dimensionality\nreduction, compress information, design efficient algorithms, and do\nstatistical adaptation. In this paper we propose an estimator for the intrinsic\ndimension of a data set. The estimator is based on binary neighbourhood\ninformation about the observations in the form of two adjacency matrices, and\ndoes not require any explicit distance information. The underlying graph is\nmodelled according to a subset of a specific random connection model, sometimes\nreferred to as the Poisson blob model. Computationally the estimator scales\nlike n log n, and we specify its asymptotic distribution and rate of\nconvergence. A simulation study on both real and simulated data shows that our\napproach compares favourably with some competing methods from the literature,\nincluding approaches that rely on distance information.\n", "title": "Dimension Estimation Using Random Connection Models" }
null
null
null
null
true
null
18764
null
Default
null
null
null
{ "abstract": " Optimal dimensionality reduction methods are proposed for the Bayesian\ninference of a Gaussian linear model with additive noise in presence of\noverabundant data. Three different optimal projections of the observations are\nproposed based on information theory: the projection that minimizes the\nKullback-Leibler divergence between the posterior distributions of the original\nand the projected models, the one that minimizes the expected Kullback-Leibler\ndivergence between the same distributions, and the one that maximizes the\nmutual information between the parameter of interest and the projected\nobservations. The first two optimization problems are formulated as the\ndetermination of an optimal subspace and therefore the solution is computed\nusing Riemannian optimization algorithms on the Grassmann manifold. Regarding\nthe maximization of the mutual information, it is shown that there exists an\noptimal subspace that minimizes the entropy of the posterior distribution of\nthe reduced model; a basis of the subspace can be computed as the solution to a\ngeneralized eigenvalue problem; an a priori error estimate on the mutual\ninformation is available for this particular solution; and that the\ndimensionality of the subspace to exactly conserve the mutual information\nbetween the input and the output of the models is less than the number of\nparameters to be inferred. Numerical applications to linear and nonlinear\nmodels are used to assess the efficiency of the proposed approaches, and to\nhighlight their advantages compared to standard approaches based on the\nprincipal component analysis of the observations.\n", "title": "Optimal projection of observations in a Bayesian setting" }
null
null
null
null
true
null
18765
null
Default
null
null
null
{ "abstract": " The Oseledets Multiplicative Ergodic theorem is a basic result with numerous\napplications throughout dynamical systems. These notes provide an introduction\nto this theorem, as well as subsequent generalizations. They are based on\nlectures at summer schools in Brazil, France, and Russia.\n", "title": "Notes on the Multiplicative Ergodic Theorem" }
null
null
[ "Mathematics" ]
null
true
null
18766
null
Validated
null
null
null
{ "abstract": " Electronic health records (EHR) data provide a cost and time-effective\nopportunity to conduct cohort studies of the effects of multiple time-point\ninterventions in the diverse patient population found in real-world clinical\nsettings. Because the computational cost of analyzing EHR data at daily (or\nmore granular) scale can be quite high, a pragmatic approach has been to\npartition the follow-up into coarser intervals of pre-specified length. Current\nguidelines suggest employing a 'small' interval, but the feasibility and\npractical impact of this recommendation has not been evaluated and no formal\nmethodology to inform this choice has been developed. We start filling these\ngaps by leveraging large-scale EHR data from a diabetes study to develop and\nillustrate a fast and scalable targeted learning approach that allows to follow\nthe current recommendation and study its practical impact on inference. More\nspecifically, we map daily EHR data into four analytic datasets using 90, 30,\n15 and 5-day intervals. We apply a semi-parametric and doubly robust estimation\napproach, the longitudinal TMLE, to estimate the causal effects of four dynamic\ntreatment rules with each dataset, and compare the resulting inferences. To\novercome the computational challenges presented by the size of these data, we\npropose a novel TMLE implementation, the 'long-format TMLE', and rely on the\nlatest advances in scalable data-adaptive machine-learning software, xgboost\nand h2o, for estimation of the TMLE nuisance parameters.\n", "title": "Targeted Learning with Daily EHR Data" }
null
null
null
null
true
null
18767
null
Default
null
null
null
{ "abstract": " A framework for the generation of bridge-specific fragility utilizing the\ncapabilities of machine learning and stripe-based approach is presented in this\npaper. The proposed methodology using random forests helps to generate or\nupdate fragility curves for a new set of input parameters with less\ncomputational effort and expensive re-simulation. The methodology does not\nplace any assumptions on the demand model of various components and helps to\nidentify the relative importance of each uncertain variable in their seismic\ndemand model. The methodology is demonstrated through the case studies of\nmulti-span concrete bridges in California. Geometric, material and structural\nuncertainties are accounted for in the generation of bridge models and\nfragility curves. It is also noted that the traditional lognormality assumption\non the demand model leads to unrealistic fragility estimates. Fragility results\nobtained the proposed methodology curves can be deployed in risk assessment\nplatform such as HAZUS for regional loss estimation.\n", "title": "Stripe-Based Fragility Analysis of Concrete Bridge Classes Using Machine Learning Techniques" }
null
null
null
null
true
null
18768
null
Default
null
null
null
{ "abstract": " The ubiquity of systems using artificial intelligence or \"AI\" has brought\nincreasing attention to how those systems should be regulated. The choice of\nhow to regulate AI systems will require care. AI systems have the potential to\nsynthesize large amounts of data, allowing for greater levels of\npersonalization and precision than ever before---applications range from\nclinical decision support to autonomous driving and predictive policing. That\nsaid, there exist legitimate concerns about the intentional and unintentional\nnegative consequences of AI systems. There are many ways to hold AI systems\naccountable. In this work, we focus on one: explanation. Questions about a\nlegal right to explanation from AI systems was recently debated in the EU\nGeneral Data Protection Regulation, and thus thinking carefully about when and\nhow explanation from AI systems might improve accountability is timely. In this\nwork, we review contexts in which explanation is currently required under the\nlaw, and then list the technical considerations that must be considered if we\ndesired AI systems that could provide kinds of explanations that are currently\nrequired of humans.\n", "title": "Accountability of AI Under the Law: The Role of Explanation" }
null
null
null
null
true
null
18769
null
Default
null
null
null
{ "abstract": " Sub-sampling can acquire directly a passband within a broad radio frequency\n(RF) range, avoiding down-conversion and low-phase-noise tunable local\noscillation (LO). However, sub-sampling suffers from band folding and\nself-image interference. In this paper we propose a frequency-oriented\nsub-sampling to solve the two problems. With ultrashort optical pulse and a\npair of chromatic dispersions, the broadband RF signal is firstly short-time\nFourier-transformed to a spectrum-spread pulse. Then a time slot, corresponding\nto the target spectrum slice, is coherently optical-sampled with\nin-phase/quadrature (I/Q) demodulation. We demonstrate the novel bandpass\nsampling by a numerical example, which shows the desired uneven intensity\nresponse, i.e. pre-filtering. We show in theory that appropriate time-stretch\ncapacity from dispersion can result in pre-filtering bandwidth less than\nsampling rate. Image rejection due to I/Q sampling is also analyzed. A\nproof-of-concept experiment, which is based on a time-lens sampling source and\nchirped fiber Bragg gratings (CFBGs), shows the center-frequency-tunable\npre-filtered sub-sampling with bandwidth of 6 GHz around, as well as imaging\nrejection larger than 26 dB. Our technique may benefit future broadband RF\nreceivers for frequency-agile Radar or channelization.\n", "title": "Frequency-oriented sub-sampling by photonic Fourier transform and I/Q demodulation" }
null
null
null
null
true
null
18770
null
Default
null
null
null
{ "abstract": " We give a moment map interpretation of some relatively balanced metrics. As\nan application, we extend a result of S. K. Donaldson on constant scalar\ncurvature Kähler metrics to the case of extremal metrics. Namely, we show\nthat a given extremal metric is the limit of some specific relatively balanced\nmetrics. As a corollary, we recover uniqueness and splitting results for\nextremal metrics in the polarized case.\n", "title": "A moment map picture of relative balanced metrics on extremal Kähler manifolds" }
null
null
[ "Mathematics" ]
null
true
null
18771
null
Validated
null
null
null
{ "abstract": " We examine the impact of adversarial actions on vehicles in traffic. Current\nadvances in assisted/autonomous driving technologies are supposed to reduce the\nnumber of casualties, but this seems to be desired despite the recently proved\ninsecurity of in-vehicle communication buses or components. Fortunately to some\nextent, while compromised cars have become a reality, the numerous attacks\nreported so far on in-vehicle electronics are exclusively concerned with\nimpairments of a single target. In this work we put adversarial behavior under\na more complex scenario where driving decisions deluded by corrupted\nelectronics can affect more than one vehicle. Particularly, we focus our\nattention on chain collisions involving multiple vehicles that can be amplified\nby simple adversarial interventions, e.g., delaying taillights or falsifying\nspeedometer readings. We provide metrics for assessing adversarial impact and\nconsider safety margins against adversarial actions. Moreover, we discuss\nintelligent adversarial behaviour by which the creation of rogue platoons is\npossible and speed manipulations become stealthy to human drivers. We emphasize\nthat our work does not try to show the mere fact that imprudent speeds and\nheadways lead to chain-collisions, but points out that an adversary may favour\nsuch scenarios (eventually keeping his actions stealthy for human drivers) and\nfurther asks for quantifying the impact of adversarial activity or whether\nexisting traffic regulations are prepared for such situations.\n", "title": "Traffic models with adversarial vehicle behaviour" }
null
null
null
null
true
null
18772
null
Default
null
null
null
{ "abstract": " This paper presents two novel control methodologies for the cooperative\nmanipulation of an object by N robotic agents. Firstly, we design an adaptive\ncontrol protocol which employs quaternion feedback for the object orientation\nto avoid potential representation singularities. Secondly, we propose a control\nprotocol that guarantees predefined transient and steady-state performance for\nthe object trajectory. Both methodologies are decentralized, since the agents\ncalculate their own signals without communicating with each other, as well as\nrobust to external disturbances and model uncertainties. Moreover, we consider\nthat the grasping points are rigid, and avoid the need for force/torque\nmeasurements. Load distribution is also included via a grasp matrix\npseudo-inverse to account for potential differences in the agents' power\ncapabilities. Finally, simulation and experimental results with two robotic\narms verify the theoretical findings.\n", "title": "Robust Cooperative Manipulation without Force/Torque Measurements: Control Design and Experiments" }
null
null
null
null
true
null
18773
null
Default
null
null
null
{ "abstract": " Variational Bayes (VB) is a common strategy for approximate Bayesian\ninference, but simple methods are only available for specific classes of models\nincluding, in particular, representations having conditionally conjugate\nconstructions within an exponential family. Models with logit components are an\napparently notable exception to this class, due to the absence of conjugacy\nbetween the logistic likelihood and the Gaussian priors for the coefficients in\nthe linear predictor. To facilitate approximate inference within this widely\nused class of models, Jaakkola and Jordan (2000) proposed a simple variational\napproach which relies on a family of tangent quadratic lower bounds of logistic\nlog-likelihoods, thus restoring conjugacy between these approximate bounds and\nthe Gaussian priors. This strategy is still implemented successfully, but less\nattempts have been made to formally understand the reasons underlying its\nexcellent performance. To cover this key gap, we provide a formal connection\nbetween the above bound and a recent Pólya-gamma data augmentation for\nlogistic regression. Such a result places the computational methods associated\nwith the aforementioned bounds within the framework of variational inference\nfor conditionally conjugate exponential family models, thereby allowing recent\nadvances for this class to be inherited also by the methods relying on Jaakkola\nand Jordan (2000).\n", "title": "Conditionally conjugate mean-field variational Bayes for logistic models" }
null
null
null
null
true
null
18774
null
Default
null
null
null
{ "abstract": " The weighted tree augmentation problem (WTAP) is a fundamental network design\nproblem. We are given an undirected tree $G = (V,E)$, an additional set of\nedges $L$ called links and a cost vector $c \\in \\mathbb{R}^L_{\\geq 1}$. The\ngoal is to choose a minimum cost subset $S \\subseteq L$ such that $G = (V, E\n\\cup S)$ is $2$-edge-connected. In the unweighted case, that is, when we have\n$c_\\ell = 1$ for all $\\ell \\in L$, the problem is called the tree augmentation\nproblem (TAP).\nBoth problems are known to be APX-hard, and the best known approximation\nfactors are $2$ for WTAP by (Frederickson and JáJá, '81) and $\\tfrac{3}{2}$\nfor TAP due to (Kortsarz and Nutov, TALG '16). In the case where all link costs\nare bounded by a constant $M$, (Adjiashvili, SODA '17) recently gave a $\\approx\n1.96418+\\varepsilon$-approximation algorithm for WTAP under this assumption.\nThis is the first approximation with a better guarantee than $2$ that does not\nrequire restrictions on the structure of the tree or the links.\nIn this paper, we improve Adjiashvili's approximation to a\n$\\frac{3}{2}+\\varepsilon$-approximation for WTAP under the bounded cost\nassumption. We achieve this by introducing a strong LP that combines\n$\\{0,\\frac{1}{2}\\}$-Chvátal-Gomory cuts for the standard LP for the problem\nwith bundle constraints from Adjiashvili. We show that our LP can be solved\nefficiently and that it is exact for some instances that arise at the core of\nAdjiashvili's approach. This results in the improved guarantee of\n$\\frac{3}{2}+\\varepsilon$. For TAP, this is the best known LP-based result, and\nmatches the bound of $\\frac{3}{2}+\\varepsilon$ achieved by the best SDP-based\nalgorithm due to (Cheriyan and Gao, arXiv '15).\n", "title": "A $\\frac{3}{2}$-Approximation Algorithm for Tree Augmentation via Chvátal-Gomory Cuts" }
null
null
[ "Computer Science" ]
null
true
null
18775
null
Validated
null
null
null
{ "abstract": " Social media such as tweets are emerging as platforms contributing to\nsituational awareness during disasters. Information shared on Twitter by both\naffected population (e.g., requesting assistance, warning) and those outside\nthe impact zone (e.g., providing assistance) would help first responders,\ndecision makers, and the public to understand the situation first-hand.\nEffective use of such information requires timely selection and analysis of\ntweets that are relevant to a particular disaster. Even though abundant tweets\nare promising as a data source, it is challenging to automatically identify\nrelevant messages since tweet are short and unstructured, resulting to\nunsatisfactory classification performance of conventional learning-based\napproaches. Thus, we propose a simple yet effective algorithm to identify\nrelevant messages based on matching keywords and hashtags, and provide a\ncomparison between matching-based and learning-based approaches. To evaluate\nthe two approaches, we put them into a framework specifically proposed for\nanalyzing disaster-related tweets. Analysis results on eleven datasets with\nvarious disaster types show that our technique provides relevant tweets of\nhigher quality and more interpretable results of sentiment analysis tasks when\ncompared to learning approach.\n", "title": "On Identifying Disaster-Related Tweets: Matching-based or Learning-based?" }
null
null
null
null
true
null
18776
null
Default
null
null
null
{ "abstract": " We deal with the problem of maintaining a shortest-path tree rooted at some\nprocess r in a network that may be disconnected after topological changes. The\ngoal is then to maintain a shortest-path tree rooted at r in its connected\ncomponent, V\\_r, and make all processes of other components detecting that r is\nnot part of their connected component. We propose, in the composite atomicity\nmodel, a silent self-stabilizing algorithm for this problem working in\nsemi-anonymous networks, where edges have strictly positive weights. This\nalgorithm does not require any a priori knowledge about global parameters of\nthe network. We prove its correctness assuming the distributed unfair daemon,\nthe most general daemon. Its stabilization time in rounds is at most 3nmax+D,\nwhere nmax is the maximum number of non-root processes in a connected component\nand D is the hop-diameter of V\\_r. Furthermore, if we additionally assume that\nedge weights are positive integers, then it stabilizes in a polynomial number\nof steps: namely, we exhibit a bound in O(maxi nmax^3 n), where maxi is the\nmaximum weight of an edge and n is the number of processes.\n", "title": "Self-Stabilizing Disconnected Components Detection and Rooted Shortest-Path Tree Maintenance in Polynomial Steps" }
null
null
null
null
true
null
18777
null
Default
null
null
null
{ "abstract": " The coupling of human movement dynamics with the function and design of\nwearable assistive devices is vital to better understand the interaction\nbetween the two. Advanced neuromuscular models and optimal control formulations\nprovide the possibility to study and improve this interaction. In addition,\noptimal control can also be used to generate predictive simulations that\ngenerate novel movements for the human model under varying optimization\ncriterion.\n", "title": "Optimizing wearable assistive devices with neuromuscular models and optimal control" }
null
null
null
null
true
null
18778
null
Default
null
null
null
{ "abstract": " Various models have been recently proposed to reflect and predict different\nproperties of complex networks. However, the community structure, which is one\nof the most important properties, is not well studied and modeled. In this\npaper, we suggest a principle called \"preferential placement\", which allows to\nmodel a realistic clustering structure. We provide an extensive empirical\nanalysis of the obtained structure as well as some theoretical results.\n", "title": "Preferential placement for community structure formation" }
null
null
null
null
true
null
18779
null
Default
null
null
null
{ "abstract": " In this paper, we consider the problem of sequentially optimizing a black-box\nfunction $f$ based on noisy samples and bandit feedback. We assume that $f$ is\nsmooth in the sense of having a bounded norm in some reproducing kernel Hilbert\nspace (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian\nprocess bandit optimization. We provide algorithm-independent lower bounds on\nthe simple regret, measuring the suboptimality of a single point reported after\n$T$ rounds, and on the cumulative regret, measuring the sum of regrets over the\n$T$ chosen points. For the isotropic squared-exponential kernel in $d$\ndimensions, we find that an average simple regret of $\\epsilon$ requires $T =\n\\Omega\\big(\\frac{1}{\\epsilon^2} (\\log\\frac{1}{\\epsilon})^{d/2}\\big)$, and the\naverage cumulative regret is at least $\\Omega\\big( \\sqrt{T(\\log T)^{d/2}}\n\\big)$, thus matching existing upper bounds up to the replacement of $d/2$ by\n$2d+O(1)$ in both cases. For the Matérn-$\\nu$ kernel, we give analogous\nbounds of the form $\\Omega\\big( (\\frac{1}{\\epsilon})^{2+d/\\nu}\\big)$ and\n$\\Omega\\big( T^{\\frac{\\nu + d}{2\\nu + d}} \\big)$, and discuss the resulting\ngaps to the existing upper bounds.\n", "title": "Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
18780
null
Validated
null
null
null
{ "abstract": " We present some observations on the tau-function for the fourth Painlevé\nequation. By considering a Hirota bilinear equation of order four for this\ntau-function, we describe the general form of the Taylor expansion around an\narbitrary movable zero. The corresponding Taylor series for the tau-functions\nof the first and second Painlevé equations, as well as that for the\nWeierstrass sigma function, arise naturally as special cases, by setting\ncertain parameters to zero.\n", "title": "Hirota bilinear equations for Painlevé transcendents" }
null
null
null
null
true
null
18781
null
Default
null
null
null
{ "abstract": " Representational Similarity Analysis (RSA) aims to explore similarities\nbetween neural activities of different stimuli. Classical RSA techniques employ\nthe inverse of the covariance matrix to explore a linear model between the\nneural activities and task events. However, calculating the inverse of a\nlarge-scale covariance matrix is time-consuming and can reduce the stability\nand robustness of the final analysis. Notably, it becomes severe when the\nnumber of samples is too large. For facing this shortcoming, this paper\nproposes a novel RSA method called gradient-based RSA (GRSA). Moreover, the\nproposed method is not restricted to a linear model. In fact, there is a\ngrowing interest in finding more effective ways of using multi-subject and\nwhole-brain fMRI data. Searchlight technique can extend RSA from the localized\nbrain regions to the whole-brain regions with smaller memory footprint in each\nprocess. Based on Searchlight, we propose a new method called Spatiotemporal\nSearchlight GRSA (SSL-GRSA) that generalizes our ROI-based GRSA algorithm to\nthe whole-brain data. Further, our approach can handle some computational\nchallenges while dealing with large-scale, multi-subject fMRI data.\nExperimental studies on multi-subject datasets confirm that both proposed\napproaches achieve superior performance to other state-of-the-art RSA\nalgorithms.\n", "title": "Gradient-based Representational Similarity Analysis with Searchlight for Analyzing fMRI Data" }
null
null
[ "Statistics", "Quantitative Biology" ]
null
true
null
18782
null
Validated
null
null
null
{ "abstract": " In this article we explore an algorithm for diffeomorphic random sampling of\nnonuniform probability distributions on Riemannian manifolds. The algorithm is\nbased on optimal information transport (OIT)---an analogue of optimal mass\ntransport (OMT). Our framework uses the deep geometric connections between the\nFisher-Rao metric on the space of probability densities and the right-invariant\ninformation metric on the group of diffeomorphisms. The resulting sampling\nalgorithm is a promising alternative to OMT, in particular as our formulation\nis semi-explicit, free of the nonlinear Monge--Ampere equation. Compared to\nMarkov Chain Monte Carlo methods, we expect our algorithm to stand up well when\na large number of samples from a low dimensional nonuniform distribution is\nneeded.\n", "title": "Diffeomorphic random sampling using optimal information transport" }
null
null
null
null
true
null
18783
null
Default
null
null
null
{ "abstract": " In this paper, we give a characterization of Nikol'ski\\u{\\i}-Besov type\nclasses of functions, given by integral representations of moduli of\nsmoothness, in terms of series over the moduli of smoothness. Also, necessary\nand sufficient conditions in terms of monotone or lacunary Fourier coefficients\nfor a function to belong to a such a class are given. In order to prove our\nresults, we make use of certain recent reverse Copson- and Leindler-type\ninequalities.\n", "title": "On approximations by trigonometric polynomials of classes of functions defined by moduli of smoothness" }
null
null
null
null
true
null
18784
null
Default
null
null
null
{ "abstract": " Topological optical states exhibit unique immunity to defects and the ability\nto propagate without losses rendering them ideal for photonic applications.A\npowerful class of such states is based on time-reversal symmetry breaking of\nthe optical response.However, existing proposals either involve sophisticated\nand bulky structural designs or can only operate in the microwave regime. Here,\nwe propose and provide a theoretical proof-of-principle demonstration for\nhighly confined topologically protected optical states to be realized at\ninfrared frequencies in a simple 2D material structure-a periodically patterned\ngraphene monolayer-subject to a magnetic field below 1 tesla. In our graphene\nhoneycomb superlattice structures plasmons exhibit substantial nonreciprocal\nbehavior at the superlattice junctions, leading to the emergence of\ntopologically protected edge states and localized bulk modes enabled by the\nstrong magneto-optical response of this material, which leads to\ntime-reversal-symmetry breaking already at moderate static magnetic fields. The\nproposed approach is simple and robust for realizing topologically nontrivial\n2D optical states, not only in graphene, but also in other 2D atomic layers,\nand could pave the way for realizing fast, nanoscale, defect-immune devices for\nintegrated photonics applications.\n", "title": "Topologically protected Dirac plasmons in graphene" }
null
null
null
null
true
null
18785
null
Default
null
null
null
{ "abstract": " In this paper we propose and explore the k-Nearest Neighbour UCB algorithm\nfor multi-armed bandits with covariates. We focus on a setting where the\ncovariates are supported on a metric space of low intrinsic dimension, such as\na manifold embedded within a high dimensional ambient feature space. The\nalgorithm is conceptually simple and straightforward to implement. The\nk-Nearest Neighbour UCB algorithm does not require prior knowledge of the\neither the intrinsic dimension of the marginal distribution or the time\nhorizon. We prove a regret bound for the k-Nearest Neighbour UCB algorithm\nwhich is minimax optimal up to logarithmic factors. In particular, the\nalgorithm automatically takes advantage of both low intrinsic dimensionality of\nthe marginal distribution over the covariates and low noise in the data,\nexpressed as a margin condition. In addition, focusing on the case of bounded\nrewards, we give corresponding regret bounds for the k-Nearest Neighbour KL-UCB\nalgorithm, which is an analogue of the KL-UCB algorithm adapted to the setting\nof multi-armed bandits with covariates. Finally, we present empirical results\nwhich demonstrate the ability of both the k-Nearest Neighbour UCB and k-Nearest\nNeighbour KL-UCB to take advantage of situations where the data is supported on\nan unknown sub-manifold of a high-dimensional feature space.\n", "title": "The K-Nearest Neighbour UCB algorithm for multi-armed bandits with covariates" }
null
null
null
null
true
null
18786
null
Default
null
null
null
{ "abstract": " Proportional mean residual life model is studied for analysing survival data\nfrom the case-cohort design. To simultaneously estimate the regression\nparameters and the baseline mean residual life function, weighted estimating\nequations based on an inverse selection probability are proposed. The resulting\nregression coefficients estimates are shown to be consistent and asymptotic\nnormal with easily estimated variance-covariance. Simulation studies show that\nthe proposed estimators perform very well. An application to a real dataset\nfrom the South Welsh nickel refiners study is also given to illustrate the\nmethodology.\n", "title": "Proportional Mean Residual Life Model with Censored Survival Data under Case-cohort Design" }
null
null
null
null
true
null
18787
null
Default
null
null
null
{ "abstract": " In this paper we consider an extension of the results in shape\ndifferentiation of semilinear equations with smooth nonlinearity presented in\nJ.I. Díaz and D. Gómez-Castro: An Application of Shape Differentiation to\nthe Effectiveness of a Steady State Reaction-Diffusion Problem Arising in\nChemical Engineering. Electron. J. Differ. Equations in 2015 to the case in\nwhich the nonlinearities might be less smooth. Namely we will show that Gateaux\nshape derivatives exists when the nonlinearity is only Lipschitz continuous,\nand we will give a definition of the derivative when the nonlinearity has a\nblow up. In this direction, we will study the case of root-type nonlinearities.\n", "title": "Shape differentiation of a steady-state reaction-diffusion problem arising in Chemical Engineering: the case of non-smooth kinetic with dead core" }
null
null
null
null
true
null
18788
null
Default
null
null
null
{ "abstract": " The state of the art for integral evaluation is that analytical solutions to\nintegrals are far more useful than numerical solutions. We evaluate certain\nintegrals analytically that are necessary in some approaches in quantum\nchemistry. In the title, where R stands for nucleus-electron and r for\nelectron-electron distances, the $(n,m)=(0,0)$ case is trivial, the\n$(n,m)=(1,0)$ and (0,1) cases are well known, fundamental milestone in\nintegration and widely used in computation chemistry, as well as based on\nLaplace transformation with integrand exp(-$a^2t^2$). The rest of the cases are\nnew and need the other Laplace transformation with integrand exp(-$a^2t$) also,\nas well as the necessity of a two dimensional version of Boys function comes up\nin case. These analytic expressions (up to Gaussian function integrand) are\nuseful for manipulation with higher moments of inter-electronic distances, for\nexample in correlation calculations.\n", "title": "Analytic evaluation of Coulomb integrals for one, two and three-electron distance operators, $R_{C1}^{-n}R_{D1}^{-m}$, $R_{C1}^{-n}r_{12}^{-m}$ and $r_{12}^{-n}r_{13}^{-m}$ with $n, m=0,1,2$" }
null
null
null
null
true
null
18789
null
Default
null
null
null
{ "abstract": " Our aims are to determine flux densities and their photometric accuracy for a\nset of seventeen stars that range in flux from intermediately bright (<2.5 Jy)\nto faint (>5 mJy) in the far-infrared (FIR). We also aim to derive\nsignal-to-noise dependence with flux and time, and compare the results with\npredictions from the Herschel exposure-time calculation tool. The PACS faint\nstar sample has allowed a comprehensive sensitivity assessment of the PACS\nphotometer. Accurate photometry allows us to establish a set of five FIR\nprimary standard candidates, namely alpha Ari, epsilon Lep, omega,Cap, HD41047\nand 42Dra, which are 2 -- 20 times fainter than the faintest PACS fiducial\nstandard (gamma Dra) with absolute accuracy of <6%. For three of these primary\nstandard candidates, essential stellar parameters are known, meaning that a\ndedicated flux model code may be run.\n", "title": "Herschel-PACS photometry of faint stars" }
null
null
null
null
true
null
18790
null
Default
null
null
null
{ "abstract": " Synapses in real neural circuits can take discrete values, including zero\n(silent or potential) synapses. The computational role of zero synapses in\nunsupervised feature learning of unlabeled noisy data is still unclear, thus it\nis important to understand how the sparseness of synaptic activity is shaped\nduring learning and its relationship with receptive field formation. Here, we\nformulate this kind of sparse feature learning by a statistical mechanics\napproach. We find that learning decreases the fraction of zero synapses, and\nwhen the fraction decreases rapidly around a critical data size, an\nintrinsically structured receptive field starts to develop. Further increasing\nthe data size refines the receptive field, while a very small fraction of zero\nsynapses remain to act as contour detectors. This phenomenon is discovered not\nonly in learning a handwritten digits dataset, but also in learning retinal\nneural activity measured in a natural-movie-stimuli experiment.\n", "title": "Role of zero synapses in unsupervised feature learning" }
null
null
null
null
true
null
18791
null
Default
null
null
null
{ "abstract": " With the increasing interest in applying the methodology of\ndifference-of-convex (dc) optimization to diverse problems in engineering and\nstatistics, this paper establishes the dc property of many well-known functions\nnot previously known to be of this class. Motivated by a quadratic programming\nbased recourse function in two-stage stochastic programming, we show that the\n(optimal) value function of a copositive (thus not necessarily convex)\nquadratic program is dc on the domain of finiteness of the program when the\nmatrix in the objective function's quadratic term and the constraint matrix are\nfixed. The proof of this result is based on a dc decomposition of a piecewise\nLC1 function (i.e., functions with Lipschitz gradients). Armed with these new\nresults and known properties of dc functions existed in the literature, we show\nthat many composite statistical functions in risk analysis, including the\nvalue-at-risk (VaR), conditional value-at-risk (CVaR), expectation-based,\nVaR-based, and CVaR-based random deviation functions are all dc. Adding the\nknown class of dc surrogate sparsity functions that are employed as\napproximations of the l_0 function in statistical learning, our work\nsignificantly expands the family of dc functions and positions them for\nfruitful applications.\n", "title": "On the Pervasiveness of Difference-Convexity in Optimization and Statistics" }
null
null
null
null
true
null
18792
null
Default
null
null
null
{ "abstract": " The penetration of distributed renewable energy (DRE) greatly raises the risk\nof distribution network operation such as peak shaving and voltage stability.\nBattery energy storage (BES) has been widely accepted as the most potential\napplication to cope with the challenge of high penetration of DRE. To cope with\nthe uncertainties and variability of DRE, a stochastic day-ahead dynamic\noptimal power flow (DOPF) and its algorithm are proposed. The overall economy\nis achieved by fully considering the DRE, BES, electricity purchasing and\nactive power losses. The rainflow algorithm-based cycle counting method of BES\nis incorporated in the DOPF model to capture the cell degradation, greatly\nextending the expected BES lifetime and achieving a better economy. DRE\nscenarios are generated to consider the uncertainties and correlations based on\nthe Copula theory. To solve the DOPF model, we propose a Lagrange\nrelaxation-based algorithm, which has a significantly reduced complexity with\nrespect to the existing techniques. For this reason, the proposed algorithm\nenables much more scenarios incorporated in the DOPF model and better captures\nthe DRE uncertainties and correlations. Finally, numerical studies for the\nday-ahead DOPF in the IEEE 123-node test feeder are presented to demonstrate\nthe merits of the proposed method. Results show that the actual BES life\nexpectancy of the proposed model has increased to 4.89 times compared with the\ntraditional ones. The problems caused by DRE are greatly alleviated by fully\ncapturing the uncertainties and correlations with the proposed method.\n", "title": "Stochastic Dynamic Optimal Power Flow in Distribution Network with Distributed Renewable Energy and Battery Energy Storage" }
null
null
null
null
true
null
18793
null
Default
null
null
null
{ "abstract": " In this paper, we consider the 2 X 2 multi-user multiple-input-single-output\n(MU-MISO) broadcast visible light communication (VLC) channel with two light\nemitting diodes (LEDs) at the transmitter and a single photo diode (PD) at each\nof the two users. We propose an achievable rate region of the Zero-Forcing (ZF)\nprecoder in this 2 X 2 MU-MISO VLC channel under a per-LED peak and average\npower constraint, where the average optical power emitted from each LED is\nfixed for constant lighting, but is controllable (referred to as dimming\ncontrol in IEEE 802.15.7 standard on VLC). We analytically characterize the\nproposed rate region boundary and show that it is Pareto-optimal. Further\nanalysis reveals that the largest rate region is achieved when the fixed\nper-LED average optical power is half of the allowed per-LED peak optical\npower. We also propose a novel transceiver architecture where the channel\nencoder and dimming control are separated which greatly simplifies the\ncomplexity of the transceiver. A case study of an indoor VLC channel with the\nproposed transceiver reveals that the achievable information rates are\nsensitive to the placement of the LEDs and the PDs. An interesting observation\nis that for a given placement of LEDs in a 5 m X 5 m X 3 m room, even with a\nsubstantial displacement of the users from their optimum placement, reduction\nin the achievable rates is not significant. This observation could therefore be\nused to define \"coverage zones\" within a room where the reduction in the\ninformation rates to the two users is within an acceptable tolerance limit.\n", "title": "Achievable Rate Region of the Zero-Forcing Precoder in a 2 X 2 MU-MISO Broadcast VLC Channel with Per-LED Peak Power Constraint and Dimming Control" }
null
null
null
null
true
null
18794
null
Default
null
null
null
{ "abstract": " Autonomous driving and electric vehicles are nowadays very active research\nand development areas. In this paper we present the conversion of a standard\nKyburz eRod into an autonomous vehicle that can be operated in challenging\nenvironments such as Swiss mountain passes. The overall hardware and software\narchitectures are described in detail with a special emphasis on the sensor\nrequirements for autonomous vehicles operating in partially structured\nenvironments. Furthermore, the design process itself and the finalized system\narchitecture are presented. The work shows state of the art results in\nlocalization and controls for self-driving high-performance electric vehicles.\nTest results of the overall system are presented, which show the importance of\ngeneralizable state estimation algorithms to handle a plethora of conditions.\n", "title": "Autonomous Electric Race Car Design" }
null
null
[ "Computer Science" ]
null
true
null
18795
null
Validated
null
null
null
{ "abstract": " Identifying community structure of a complex network provides insight to the\ninterdependence between the network topology and emergent collective behaviors\nof networks, while detecting such invariant communities in a time-varying\nnetwork is more challenging. In this paper, we define the temporal stable\ncommunity and newly propose the concept of dynamic modularity to evaluate the\nstable community structures in time-varying networks, which is robust against\nsmall changes as verified by several empirical time-varying network datasets.\nBesides, using the volatility features of temporal stable communities in\nfunctional brain networks, we successfully differentiate the ADHD (Attention\nDeficit Hyperactivity Disorder) patients and healthy controls efficiently.\n", "title": "Temporal Stable Community in Time-Varying Networks" }
null
null
null
null
true
null
18796
null
Default
null
null
null
{ "abstract": " We show that the solutions obtained in the paper `An exact solution for\narbitrarily rotating gaseous polytropes with index unity' by Kong, Zhang, and\nSchubert represent only approximate solutions of the free-boundary\nEuler-Poisson system of equations describing uniformly rotating,\nself-gravitating polytropes with index unity. We discuss the quality of such\nsolutions as approximations to the rigidly rotating equilibrium polytropic\nconfigurations.\n", "title": "The shape of a rapidly rotating polytrope with index unity" }
null
null
null
null
true
null
18797
null
Default
null
null
null
{ "abstract": " Researchers at the National Institute of Standards and Technology(NIST) have\nmeasured the value of the Planck constant to be $h =6.626\\,069\\,934(89)\\times\n10^{-34}\\,$J$\\,$s (relative standard uncertainty $13\\times 10^{-9}$). The\nresult is based on over 10$\\,$000 weighings of masses with nominal values\nranging from 0.5$\\,$kg to 2$\\,$kg with the Kibble balance NIST-4. The\nuncertainty has been reduced by more than twofold relative to a previous\ndetermination because of three factors: (1) a much larger data set than\npreviously available, allowing a more realistic, and smaller, Type A\nevaluation; (2) a more comprehensive measurement of the back action of the\nweighing current on the magnet by weighing masses up to 2$\\,$kg, decreasing the\nuncertainty associated with magnet non-linearity; (3) a rigorous investigation\nof the dependence of the geometric factor on the coil velocity reducing the\nuncertainty assigned to time-dependent leakage of current in the coil.\n", "title": "Measurement of the Planck constant at the National Institute of Standards and Technology from 2015 to 2017" }
null
null
null
null
true
null
18798
null
Default
null
null
null
{ "abstract": " The aim of process discovery, originating from the area of process mining, is\nto discover a process model based on business process execution data. A\nmajority of process discovery techniques relies on an event log as an input. An\nevent log is a static source of historical data capturing the execution of a\nbusiness process. In this paper we focus on process discovery relying on online\nstreams of business process execution events. Learning process models from\nevent streams poses both challenges and opportunities, i.e. we need to handle\nunlimited amounts of data using finite memory and, preferably, constant time.\nWe propose a generic architecture that allows for adopting several classes of\nexisting process discovery techniques in context of event streams. Moreover, we\nprovide several instantiations of the architecture, accompanied by\nimplementations in the process mining tool-kit ProM (this http URL).\nUsing these instantiations, we evaluate several dimensions of stream-based\nprocess discovery. The evaluation shows that the proposed architecture allows\nus to lift process discovery to the streaming domain.\n", "title": "Event Stream-Based Process Discovery using Abstract Representations" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
18799
null
Validated
null
null
null
{ "abstract": " We obtain alternative explicit Specht filtrations for the induced and the\nrestricted Specht modules in the Hecke algebra of the symmetric group (defined\nover the ring $A=\\mathbb Z[q^{1/2},q^{-1/2}]$ where $q$ is an indeterminate)\nusing $C$-bases for these modules. Moreover, we provide a link between a\ncertain $C$-basis for the induced Specht module and the notion of pairs of\npartitions.\n", "title": "On $C$-bases, partition pairs and filtrations for induced or restricted Specht modules" }
null
null
null
null
true
null
18800
null
Default
null
null