Dataset schema (field: type):
  text: null
  inputs: dict
  prediction: null
  prediction_agent: null
  annotation: list
  annotation_agent: null
  multi_label: bool (1 class)
  explanation: null
  id: string (length 1 to 5)
  metadata: null
  status: string (2 classes)
  event_timestamp: null
  metrics: null
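As a minimal sketch of how one record under this schema might be represented and checked, the snippet below builds a single JSON record using the field names from the schema listing above. The example values (the id, the status, the annotation labels) mirror the rows in this dump; this is illustrative only, not a loader for any particular dataset library.

```python
import json

# Hypothetical record following the schema above: field names are taken from
# the schema listing; values are modeled on the rows in this dump.
raw = json.dumps({
    "text": None,
    "inputs": {"abstract": "...", "title": "..."},
    "prediction": None,
    "prediction_agent": None,
    "annotation": ["Computer Science"],
    "annotation_agent": None,
    "multi_label": True,
    "explanation": None,
    "id": "16001",
    "metadata": None,
    "status": "Validated",
    "event_timestamp": None,
    "metrics": None,
})

record = json.loads(raw)

# Consistency checks implied by the schema: id is a string of length 1 to 5,
# status takes one of two values, annotation is a list when present.
assert 1 <= len(record["id"]) <= 5
assert record["status"] in {"Default", "Validated"}
assert record["annotation"] is None or isinstance(record["annotation"], list)
```

In the rows below, unannotated records carry `status: Default` with `annotation: null`, while annotated ones carry `status: Validated` with a list of labels.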
text: null
{ "abstract": " For a unimodular random graph $(G,\\rho)$, we consider deformations of its\nintrinsic path metric by a (random) weighting of its vertices. This leads to\nthe notion of the {\\em conformal growth exponent of $(G,\\rho)$}, which is the\nbest asymptotic degree of volume growth of balls that can be achieved by such a\nreweighting. Under moment conditions on the degree of the root, we show that\nthe conformal growth exponent of a unimodular random graph bounds the almost\nsure spectral dimension.\nIn two dimensions, one obtains more precise information. If $(G,\\rho)$ has a\nproperty we call {\\em quadratic conformal growth}, then the following holds: If\nthe degree of the root is uniformly bounded almost surely, then $G$ is almost\nsurely recurrent. Since limits of finite $H$-minor-free graphs have gauged\nquadratic conformal growth, such limits are almost surely recurrent; this\naffirms a conjecture of Benjamini and Schramm (2001). For the special case of\nplanar graphs, this gives a proof of the Benjamini-Schramm Recurrence Theorem\nthat does not proceed via the analysis of circle packings.\nGurel-Gurevich and Nachmias (2013) resolved a central open problem by showing\nthat the uniform infinite planar triangulation (UIPT) and quadrangulation\n(UIPQ) are almost surely recurrent. They proved that this holds for any\ndistributional limit of planar graphs in which the degree of the root has\nexponential tails (which is known to hold for UIPT and UIPQ). We use the\nquadratic conformal growth property to give a new proof of this result that\nholds for distributional limits of finite $H$-minor-free graphs. Moreover, our\narguments yield quantitative bounds on the heat kernel in terms of the degree\ndistribution at the root. 
This also yields a new approach to subdiffusivity of\nthe random walk on UIPT/UIPQ, using only the volume growth profile of balls in\nthe intrinsic metric.\n", "title": "Conformal growth rates and spectral geometry on distributional limits of graphs" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16001, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " This paper presents a self-supervised method for detecting the active speaker\nin a multi-person spoken interaction scenario. We argue that this capability is\na fundamental prerequisite for any artificial cognitive system attempting to\nacquire language in social settings. Our methods are able to detect an\narbitrary number of possibly overlapping active speakers based exclusively on\nvisual information about their face. Our methods do not rely on external\nannotations, thus complying with cognitive development. Instead, they use\ninformation from the auditory modality to support learning in the visual\ndomain. The methods have been extensively evaluated on a large multi-person\nface-to-face interaction dataset. The results reach an accuracy of 80% on a\nmulti-speaker setting. We believe this system represents an essential component\nof any artificial cognitive system or robotic platform engaging in social\ninteraction.\n", "title": "Self-Supervised Vision-Based Detection of the Active Speaker as a Prerequisite for Socially-Aware Language Acquisition" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16002, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " TraQuad is an autonomous tracking quadcopter capable of tracking any moving\n(or static) object like cars, humans, other drones or any other object\non-the-go. This article describes the applications and advantages of TraQuad\nand the reduction in cost (to about 250$) that has been achieved so far using\nthe hardware and software capabilities and our custom algorithms wherever\nneeded. This description is backed by strong data and the research analyses\nwhich have been drawn out of extant information or conducted on own when\nnecessary. This also describes the development of completely autonomous (even\nGPS is optional) low-cost drone which can act as a major platform for further\ndevelopments in automation, transportation, reconnaissance and more. We\ndescribe our ROS Gazebo simulator and our STATUS algorithms which form the core\nof our development of our object tracking drone for generic purposes.\n", "title": "Monocular Imaging-based Autonomous Tracking for Low-cost Quad-rotor Design - TraQuad" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16003, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " Persistence diagrams have been widely recognized as a compact descriptor for\ncharacterizing multiscale topological features in data. When many datasets are\navailable, statistical features embedded in those persistence diagrams can be\nextracted by applying machine learnings. In particular, the ability for\nexplicitly analyzing the inverse in the original data space from those\nstatistical features of persistence diagrams is significantly important for\npractical applications. In this paper, we propose a unified method for the\ninverse analysis by combining linear machine learning models with persistence\nimages. The method is applied to point clouds and cubical sets, showing the\nability of the statistical inverse analysis and its advantages.\n", "title": "Persistence Diagrams with Linear Machine Learning Models" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16004, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " In a pair of recent papers, Andrews, Fraenkel and Sellers provide a complete\ncharacterization for the number of $m$-ary partitions modulo $m$, with and\nwithout gaps. In this paper we extend these results to the case of coloured\n$m$-ary partitions, with and without gaps. Our method of proof is different,\ngiving explicit expansions for the generating functions modulo $m$\n", "title": "Characterizing the number of coloured $m$-ary partitions modulo $m$, with and without gaps" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16005, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " Mobile computing is one of the main drivers of innovation, yet the future\ngrowth of mobile computing capabilities remains critically threatened by\nhardware constraints, such as the already extremely dense transistor packing\nand limited battery capacity. The breakdown of Dennard scaling and stagnating\nenergy storage improvements further amplify these threats. However, the\ncomputational burden we put on our mobile devices is not always justified. In a\nmyriad of situations the result of a computation is further manipulated,\ninterpreted, and finally acted upon. This allows for the computation to be\nrelaxed, so that the result is calculated with \"good enough\", not perfect\naccuracy. For example, results of a Web search may be perfectly acceptable even\nif the order of the last few listed items is shuffled, as an end user decides\nwhich of the available links to follow. Similarly, the quality of a\nvoice-over-IP call may be acceptable, despite being imperfect, as long as the\ntwo involved parties can clearly understand each other. This novel way of\nthinking about computation is termed Approximate Computing (AC) and promises to\nreduce resource usage, while ensuring that satisfactory performance is\ndelivered to end-users. AC is already experimented with on various levels of\ndesktop computer architecture, from the hardware level where incorrect adders\nhave been designed to sacrifice result correctness for reduced energy\nconsumption, to compiler-level optimisations that omit certain lines of code to\nspeed up video encoding. AC is yet to be attempted on mobile devices and in\nthis article we examine the potential benefits of mobile AC and present an\noverview of AC techniques applicable in the mobile domain.\n", "title": "Towards Approximate Mobile Computing" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16006, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " The mid-infrared (MIR) spectral range, pertaining to important applications\nsuch as molecular 'fingerprint' imaging, remote sensing, free space\ntelecommunication and optical radar, is of particular scientific interest and\ntechnological importance. However, state-of-the-art materials for MIR detection\nare limited by intrinsic noise and inconvenient fabrication processes,\nresulting in high cost photodetectors requiring cryogenic operation. We report\nblack arsenic-phosphorus-based long wavelength infrared photodetectors with\nroom temperature operation up to 8.2 um, entering the second MIR atmospheric\ntransmission window. Combined with a van der Waals heterojunction, room\ntemperature specific detectivity higher than 4.9*10^9 Jones was obtained in the\n3-5 um range. The photodetector works in a zero-bias photovoltaic mode,\nenabling fast photoresponse and low dark noise. Our van der Waals\nheterojunction photodector not only exemplify black arsenic-phosphorus as a\npromising candidate for MIR opto-electronic applications, but also pave the way\nfor a general strategy to suppress 1/f noise in photonic devices.\n", "title": "Room-temperature high detectivity mid-infrared photodetectors based on black arsenic phosphorus" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16007, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " Networks of elastic fibers are ubiquitous in biological systems and often\nprovide mechanical stability to cells and tissues. Fiber reinforced materials\nare also common in technology. An important characteristic of such materials is\ntheir resistance to failure under load. Rupture occurs when fibers break under\nexcessive force and when that failure propagates. Therefore it is crucial to\nunderstand force distributions. Force distributions within such networks are\ntypically highly inhomogeneous and are not well understood. Here we construct a\nsimple one-dimensional model system with periodic boundary conditions by\nrandomly placing linear springs on a circle. We consider ensembles of such\nnetworks that consist of $N$ nodes and have an average degree of connectivity\n$z$, but vary in topology. Using a graph-theoretical approach that accounts for\nthe full topology of each network in the ensemble, we show that, surprisingly,\nthe force distributions can be fully characterized in terms of the parameters\n$(N,z)$. Despite the universal properties of such $(N,z)$-ensembles, our\nanalysis further reveals that a classical mean-field approach fails to capture\nforce distributions correctly. We demonstrate that network topology is a\ncrucial determinant of force distributions in elastic spring networks.\n", "title": "Topology determines force distributions in one-dimensional random spring networks" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16008, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " An elliptic curve $E$ defined over a $p$-adic field $K$ with a $p$-isogeny\n$\\phi:E\\rightarrow E^\\prime$ comes equipped with an invariant $\\alpha_{\\phi/K}$\nthat measures the valuation of the leading term of the formal group\nhomomorphism $\\Phi:\\hat E \\rightarrow \\hat E^\\prime$. We prove that if\n$K/\\mathbb{Q}_p$ is unramified and $E$ has additive, potentially supersingular\nreduction, then $\\alpha_{\\phi/K}$ is determined by the number of distinct\ngeometric components on the special fibers of the minimal proper regular models\nof $E$ and $E^\\prime$.\n", "title": "On a local invariant of elliptic curves with a p-isogeny" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16009, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " Android apps cooperate through message passing via intents. However, when\napps do not have identical sets of privileges inter-app communication (IAC) can\naccidentally or maliciously be misused, e.g., to leak sensitive information\ncontrary to users expectations. Recent research considered static program\nanalysis to detect dangerous data leaks due to inter-component communication\n(ICC) or IAC, but suffers from shortcomings with respect to precision,\nsoundness, and scalability. To solve these issues we propose a novel approach\nfor static ICC/IAC analysis. We perform a fixed-point iteration of ICC/IAC\nsummary information to precisely resolve intent communication with more than\ntwo apps involved. We integrate these results with information flows generated\nby a baseline (i.e. not considering intents) information flow analysis, and\nresolve if sensitive data is flowing (transitively) through components/apps in\norder to be ultimately leaked. Our main contribution is the first fully\nautomatic sound and precise ICC/IAC information flow analysis that is scalable\nfor realistic apps due to modularity, avoiding combinatorial explosion: Our\napproach determines communicating apps using short summaries rather than\ninlining intent calls, which often requires simultaneously analyzing all tuples\nof apps. We evaluated our tool IIFA in terms of scalability, precision, and\nrecall. Using benchmarks we establish that precision and recall of our\nalgorithm are considerably better than prominent state-of-the-art analyses for\nIAC. But foremost, applied to the 90 most popular applications from the Google\nPlaystore, IIFA demonstrated its scalability to a large corpus of real-world\napps. IIFA reports 62 problematic ICC-/IAC-related information flows via two or\nmore apps/components.\n", "title": "IIFA: Modular Inter-app Intent Information Flow Analysis of Android Applications" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16010, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " The usability of small devices such as smartphones or interactive watches is\noften hampered by the limited size of command vocabularies. This paper is an\nattempt at better understanding how finger identification may help users invoke\ncommands on touch screens, even without recourse to multi-touch input. We\ndescribe how finger identification can increase the size of input vocabularies\nunder the constraint of limited real estate, and we discuss some visual cues to\ncommunicate this novel modality to novice users. We report a controlled\nexperiment that evaluated, over a large range of input-vocabulary sizes, the\nefficiency of single-touch command selections with vs. without finger\nidentification. We analyzed the data not only in terms of traditional time and\nerror metrics, but also in terms of a throughput measure based on Shannon's\ntheory, which we show offers a synthetic and parsimonious account of users'\nperformance. The results show that the larger the input vocabulary needed by\nthe designer, the more promising the identification of individual fingers.\n", "title": "Glass+Skin: An Empirical Evaluation of the Added Value of Finger Identification to Basic Single-Touch Interaction on Touch Screens" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16011, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " We construct a complexity-based morphospace to study systems-level properties\nof conscious & intelligent systems. The axes of this space label 3 complexity\ntypes: autonomous, cognitive & social. Given recent proposals to synthesize\nconsciousness, a generic complexity-based conceptualization provides a useful\nframework for identifying defining features of conscious & synthetic systems.\nBased on current clinical scales of consciousness that measure cognitive\nawareness and wakefulness, we take a perspective on how contemporary\nartificially intelligent machines & synthetically engineered life forms measure\non these scales. It turns out that awareness & wakefulness can be associated to\ncomputational & autonomous complexity respectively. Subsequently, building on\ninsights from cognitive robotics, we examine the function that consciousness\nserves, & argue the role of consciousness as an evolutionary game-theoretic\nstrategy. This makes the case for a third type of complexity for describing\nconsciousness: social complexity. Having identified these complexity types,\nallows for a representation of both, biological & synthetic systems in a common\nmorphospace. A consequence of this classification is a taxonomy of possible\nconscious machines. We identify four types of consciousness, based on\nembodiment: (i) biological consciousness, (ii) synthetic consciousness, (iii)\ngroup consciousness (resulting from group interactions), & (iv) simulated\nconsciousness (embodied by virtual agents within a simulated reality). This\ntaxonomy helps in the investigation of comparative signatures of consciousness\nacross domains, in order to highlight design principles necessary to engineer\nconscious machines. This is particularly relevant in the light of recent\ndevelopments at the crossroads of cognitive neuroscience, biomedical\nengineering, artificial intelligence & biomimetics.\n", "title": "The Morphospace of Consciousness" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science", "Physics" ], annotation_agent: null, multi_label: true, explanation: null, id: 16012, metadata: null, status: Validated, event_timestamp: null, metrics: null
text: null
{ "abstract": " We use the coupled cluster method (CCM) to study a frustrated\nspin-$\\frac{1}{2}$ $J_{1}$--$J_{2}$--$J_{1}^{\\perp}$ Heisenberg antiferromagnet\non a bilayer honeycomb lattice with $AA$ stacking. Both nearest-neighbor (NN)\nand frustrating next-nearest-neighbor antiferromagnetic (AFM) exchange\ninteractions are present in each layer, with respective exchange coupling\nconstants $J_{1}>0$ and $J_{2} \\equiv \\kappa J_{1} > 0$. The two layers are\ncoupled with NN AFM exchanges with coupling strength $J_{1}^{\\perp}\\equiv\n\\delta J_{1}>0$. We calculate to high orders of approximation within the CCM\nthe zero-field transverse magnetic susceptibility $\\chi$ in the Néel phase.\nWe thus obtain an accurate estimate of the full boundary of the Néel phase in\nthe $\\kappa\\delta$ plane for the zero-temperature quantum phase diagram. We\ndemonstrate explicitly that the phase boundary derived from $\\chi$ is fully\nconsistent with that obtained from the vanishing of the Néel magnetic order\nparameter. We thus conclude that at all points along the Néel phase boundary\nquasiclassical magnetic order gives way to a nonclassical paramagnetic phase\nwith a nonzero energy gap. The Néel phase boundary exhibits a marked\nreentrant behavior, which we discuss in detail.\n", "title": "Transverse Magnetic Susceptibility of a Frustrated Spin-$\\frac{1}{2}$ $J_{1}$--$J_{2}$--$J_{1}^{\\perp}$ Heisenberg Antiferromagnet on a Bilayer Honeycomb Lattice" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16013, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " We study a polyhedron with $n$ vertices of fixed volume having minimum\nsurface area. Completing the proof of Toth, we show that all faces of a minimum\npolyhedron are triangles, and further prove that a minimum polyhedron does not\nallow deformation of a single vertex. We also present possible minimum shapes\nfor $n\\le 12$, some of them are quite unexpected, in particular $n=8$.\n", "title": "Minimum polyhedron with $n$ vertices" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16014, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " We develop a theory of weakly interacting fermionic atoms in shaken optical\nlattices based on the orbital mixing in the presence of time-periodic\nmodulations. Specifically, we focus on fermionic atoms in circularly shaken\nsquare lattice with near resonance frequencies, i.e., tuned close to the energy\nseparation between $s$-band and the $p$-bands. First, we derive a\ntime-independent four-band effective Hamiltonian in the non-interacting limit.\nDiagonalization of the effective Hamiltonian yields a quasi-energy spectrum\nconsistent with the full numerical Floquet solution that includes all higher\nbands. In particular, we find that the hybridized $s$-band develops multiple\nminima and therefore non-trivial Fermi surfaces at different fillings. We then\nobtain the effective interactions for atoms in the hybridized $s$-band\nanalytically and show that they acquire momentum dependence on the Fermi\nsurface even though the bare interaction is contact-like. We apply the theory\nto find the phase diagram of fermions with weak attractive interactions and\ndemonstrate that the pairing symmetry is $s+d$-wave. Our theory is valid for a\nrange of shaking frequencies near resonance, and it can be generalized to other\nphases of interacting fermions in shaken lattices.\n", "title": "Theory of interacting fermions in shaken square optical lattice" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16015, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " The heterogeneity-gap between different modalities brings a significant\nchallenge to multimedia information retrieval. Some studies formalize the\ncross-modal retrieval tasks as a ranking problem and learn a shared multi-modal\nembedding space to measure the cross-modality similarity. However, previous\nmethods often establish the shared embedding space based on linear mapping\nfunctions which might not be sophisticated enough to reveal more complicated\ninter-modal correspondences. Additionally, current studies assume that the\nrankings are of equal importance, and thus all rankings are used\nsimultaneously, or a small number of rankings are selected randomly to train\nthe embedding space at each iteration. Such strategies, however, always suffer\nfrom outliers as well as reduced generalization capability due to their lack of\ninsightful understanding of procedure of human cognition. In this paper, we\ninvolve the self-paced learning theory with diversity into the cross-modal\nlearning to rank and learn an optimal multi-modal embedding space based on\nnon-linear mapping functions. This strategy enhances the model's robustness to\noutliers and achieves better generalization via training the model gradually\nfrom easy rankings by diverse queries to more complex ones. An efficient\nalternative algorithm is exploited to solve the proposed challenging problem\nwith fast convergence in practice. Extensive experimental results on several\nbenchmark datasets indicate that the proposed method achieves significant\nimprovements over the state-of-the-arts in this literature.\n", "title": "Simple to Complex Cross-modal Learning to Rank" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science", "Statistics" ], annotation_agent: null, multi_label: true, explanation: null, id: 16016, metadata: null, status: Validated, event_timestamp: null, metrics: null
text: null
{ "abstract": " This paper analyzes the iteration-complexity of a generalized alternating\ndirection method of multipliers (G-ADMM) for solving linearly constrained\nconvex problems. This ADMM variant, which was first proposed by Bertsekas and\nEckstein, introduces a relaxation parameter $\\alpha \\in (0,2)$ into the second\nADMM subproblem. Our approach is to show that the G-ADMM is an instance of a\nhybrid proximal extragradient framework with some special properties, and, as a\nby product, we obtain ergodic iteration-complexity for the G-ADMM with\n$\\alpha\\in (0,2]$, improving and complementing related results in the\nliterature. Additionally, we also present pointwise iteration-complexity for\nthe G-ADMM.\n", "title": "Iteration-complexity analysis of a generalized alternating direction method of multipliers" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16017, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " Vehicle bypassing is known to negatively affect delays at traffic diverges.\nHowever, due to the complexities of this phenomenon, accurate and yet simple\nmodels of such lane change maneuvers are hard to develop. In this work, we\npresent a macroscopic model for predicting the number of vehicles that bypass\nat a traffic diverge. We take into account the selfishness of vehicles in\nselecting their lanes; every vehicle selects lanes such that its own cost is\nminimized. We discuss how we model the costs experienced by the vehicles. Then,\ntaking into account the selfish behavior of the vehicles, we model the lane\nchoice of vehicles at a traffic diverge as a Wardrop equilibrium. We state and\nprove the properties of Wardrop equilibrium in our model. We show that there\nalways exists an equilibrium for our model. Moreover, unlike most nonlinear\nasymmetrical routing games, we prove that the equilibrium is unique under mild\nassumptions. We discuss how our model can be easily calibrated by running a\nsimple optimization problem. Using our calibrated model, we validate it through\nsimulation studies and demonstrate that our model successfully predicts the\naggregate lane change maneuvers that are performed by vehicles for bypassing at\na traffic diverge. We further discuss how our model can be employed to obtain\nthe optimal lane choice behavior of the vehicles, where the social or total\ncost of vehicles is minimized. Finally, we demonstrate how our model can be\nutilized in scenarios where a central authority can dictate the lane choice and\ntrajectory of certain vehicles so as to increase the overall vehicle mobility\nat a traffic diverge. Examples of such scenarios include the case when both\nhuman driven and autonomous vehicles coexist in the network. 
We show how\ncertain decisions of the central authority can affect the total delays in such\nscenarios via an example.\n", "title": "A Game Theoretic Macroscopic Model of Bypassing at Traffic Diverges with Applications to Mixed Autonomy Networks" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16018, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " We use a secular model to describe the non-resonant dynamics of\ntrans-Neptunian objects in the presence of an external ten-earth-mass\nperturber. The secular dynamics is analogous to an \"eccentric Kozai mechanism\"\nbut with both an inner component (the four giant planets) and an outer one (the\neccentric distant perturber). By the means of Poincaré sections, the cases of\na non-inclined or inclined outer planet are successively studied, making the\nconnection with previous works. In the inclined case, the problem is reduced to\ntwo degrees of freedom by assuming a non-precessing argument of perihelion for\nthe perturbing body.\nThe size of the perturbation is typically ruled by the semi-major axis of the\nsmall body: we show that the classic integrable picture is still valid below\nabout 70 AU, but it is progressively destroyed when we get closer to the\nexternal perturber. In particular, for a>150 AU, large-amplitude orbital flips\nbecome possible, and for a>200 AU, the Kozai libration islands are totally\nsubmerged by the chaotic sea. Numerous resonance relations are highlighted. The\nmost large and persistent ones are associated to apsidal alignments or\nanti-alignments with the orbit of the distant perturber.\n", "title": "Non-resonant secular dynamics of trans-Neptunian objects perturbed by a distant super-Earth" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16019, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " This note investigates the stability of both linear and nonlinear switched\nsystems with average dwell time. Two new analysis methods are proposed.\nDifferent from existing approaches, the proposed methods take into account the\nsequence in which the subsystems are switched. Depending on the predecessor or\nsuccessor subsystems to be considered, sequence-based average preceding dwell\ntime (SBAPDT) and sequence-based average subsequence dwell time (SBASDT)\napproaches are proposed and discussed for both continuous and discrete time\nsystems. These proposed methods, when considering the switch sequence, have the\npotential to further reduce the conservativeness of the existing approaches. A\ncomparative numerical example is also given to demonstrate the advantages of\nthe proposed approaches.\n", "title": "Stability Analysis for Switched Systems with Sequence-based Average Dwell Time" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science" ], annotation_agent: null, multi_label: true, explanation: null, id: 16020, metadata: null, status: Validated, event_timestamp: null, metrics: null
text: null
{ "abstract": " We develop terminology and methods for working with maximally oriented\npartially directed acyclic graphs (maximal PDAGs). Maximal PDAGs arise from\nimposing restrictions on a Markov equivalence class of directed acyclic graphs,\nor equivalently on its graphical representation as a completed partially\ndirected acyclic graph (CPDAG), for example when adding background knowledge\nabout certain edge orientations. Although maximal PDAGs often arise in\npractice, causal methods have been mostly developed for CPDAGs. In this paper,\nwe extend such methodology to maximal PDAGs. In particular, we develop\nmethodology to read off possible ancestral relationships, we introduce a\ngraphical criterion for covariate adjustment to estimate total causal effects,\nand we adapt the IDA and joint-IDA frameworks to estimate multi-sets of\npossible causal effects. We also present a simulation study that illustrates\nthe gain in identifiability of total causal effects as the background knowledge\nincreases. All methods are implemented in the R package pcalg.\n", "title": "Interpreting and using CPDAGs with background knowledge" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16021, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " This paper introduces a new surgical end-effector probe, which allows to\naccurately apply a contact force on a tissue, while at the same time allowing\nfor high resolution and highly repeatable probe movement. These are achieved by\nimplementing a cable-driven parallel manipulator arrangement, which is deployed\nat the distal-end of a robotic instrument. The combination of the offered\nqualities can be advantageous in several ways, with possible applications\nincluding: large area endomicroscopy and multi-spectral imaging, micro-surgery,\ntissue palpation, safe energy-based and conventional tissue resection. To\ndemonstrate the concept and its adaptability, the probe is integrated with a\nmodified da Vinci robot instrument.\n", "title": "A cable-driven parallel manipulator with force sensing capabilities for high-accuracy tissue endomicroscopy" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science" ], annotation_agent: null, multi_label: true, explanation: null, id: 16022, metadata: null, status: Validated, event_timestamp: null, metrics: null
text: null
{ "abstract": " Assisted by the availability of data and high performance computing, deep\nlearning techniques have achieved breakthroughs and surpassed human performance\nempirically in difficult tasks, including object recognition, speech\nrecognition, and natural language processing. As they are being used in\ncritical applications, understanding underlying mechanisms for their successes\nand limitations is imperative. In this paper, we show that overfitting, one of\nthe fundamental issues in deep neural networks, is due to continuous gradient\nupdating and scale sensitiveness of cross entropy loss. By separating samples\ninto correctly and incorrectly classified ones, we show that they behave very\ndifferently, where the loss decreases in the correct ones and increases in the\nincorrect ones. Furthermore, by analyzing dynamics during training, we propose\na consensus-based classification algorithm that enables us to avoid overfitting\nand significantly improve the classification accuracy especially when the\nnumber of training samples is limited. As each trained neural network depends\non extrinsic factors such as initial values as well as training data, requiring\nconsensus among multiple models reduces extrinsic factors substantially; for\nstatistically independent models, the reduction is exponential. Compared to\nensemble algorithms, the proposed algorithm avoids overgeneralization by not\nclassifying ambiguous inputs. Systematic experimental results demonstrate the\neffectiveness of the proposed algorithm. For example, using only 1000 training\nsamples from MNIST dataset, the proposed algorithm achieves 95% accuracy,\nsignificantly higher than any of the individual models, with 90% of the test\nsamples classified.\n", "title": "Overfitting Mechanism and Avoidance in Deep Neural Networks" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 16023, metadata: null, status: Default, event_timestamp: null, metrics: null
text: null
{ "abstract": " Can deep learning (DL) guide our understanding of computations happening in\nbiological brain? We will first briefly consider how DL has contributed to the\nresearch on visual object recognition. In the main part we will assess whether\nDL could also help us to clarify the computations underlying higher cognitive\nfunctions such as Theory of Mind. In addition, we will compare the objectives\nand learning signals of brains and machines, leading us to conclude that simply\nscaling up the current DL algorithms will not lead to human level mindreading\nskills. We then provide some insights about how to fairly compare human and DL\nperformance. In the end we find that DL can contribute to our understanding of\nbiological computations by providing an example of an end-to-end algorithm that\nsolves the same problems the biological agents face.\n", "title": "What deep learning can tell us about higher cognitive functions like mindreading?" }
null
null
[ "Quantitative Biology" ]
null
true
null
16024
null
Validated
null
null
null
{ "abstract": " Quantum functional inequalities (e.g. the logarithmic Sobolev- and Poincaré\ninequalities) have found widespread application in the study of the behavior of\nprimitive quantum Markov semigroups. The classical counterparts of these\ninequalities are related to each other via a so-called transportation cost\ninequality of order 2 (TC2). The latter inequality relies on the notion of a\nmetric on the set of probability distributions called the Wasserstein distance\nof order 2. (TC2) in turn implies a transportation cost inequality of order 1\n(TC1). In this paper, we introduce quantum generalizations of the inequalities\n(TC1) and (TC2), making use of appropriate quantum versions of the Wasserstein\ndistances, one recently defined by Carlen and Maas and the other defined by us.\nWe establish that these inequalities are related to each other, and to the\nquantum modified logarithmic Sobolev- and Poincaré inequalities, as in the\nclassical case. We also show that these inequalities imply certain\nconcentration-type results for the invariant state of the underlying semigroup.\nWe consider the example of the depolarizing semigroup to derive concentration\ninequalities for any finite dimensional full-rank quantum state. These\ninequalities are then applied to derive upper bounds on the error probabilities\noccurring in the setting of finite blocklength quantum parameter estimation.\n", "title": "Concentration of quantum states from quantum functional and transportation cost inequalities" }
null
null
null
null
true
null
16025
null
Default
null
null
null
{ "abstract": " This paper proposes a new family of algorithms for training neural networks\n(NNs). These are based on recent developments in the field of non-convex\noptimization, going under the general name of successive convex approximation\n(SCA) techniques. The basic idea is to iteratively replace the original\n(non-convex, highly dimensional) learning problem with a sequence of (strongly\nconvex) approximations, which are both accurate and simple to optimize.\nDifferently from similar ideas (e.g., quasi-Newton algorithms), the\napproximations can be constructed using only first-order information of the\nneural network function, in a stochastic fashion, while exploiting the overall\nstructure of the learning problem for a faster convergence. We discuss several\nuse cases, based on different choices for the loss function (e.g., squared loss\nand cross-entropy loss), and for the regularization of the NN's weights. We\nexperiment on several medium-sized benchmark problems, and on a large-scale\ndataset involving simulated physical data. The results show how the algorithm\noutperforms state-of-the-art techniques, providing faster convergence to a\nbetter minimum. Additionally, we show how the algorithm can be easily\nparallelized over multiple computational units without hindering its\nperformance. In particular, each computational unit can optimize a tailored\nsurrogate function defined on a randomly assigned subset of the input\nvariables, whose dimension can be selected depending entirely on the available\ncomputational power.\n", "title": "Stochastic Training of Neural Networks via Successive Convex Approximations" }
null
null
null
null
true
null
16026
null
Default
null
null
null
{ "abstract": " We propose a generalization of the best arm identification problem in\nstochastic multi-armed bandits (MAB) to the setting where every pull of an arm\nis associated with delayed feedback. The delay in feedback increases the\neffective sample complexity of standard algorithms, but can be offset if we\nhave access to partial feedback received before a pull is completed. We propose\na general framework to model the relationship between partial and delayed\nfeedback, and as a special case we introduce efficient algorithms for settings\nwhere the partial feedback are biased or unbiased estimators of the delayed\nfeedback. Additionally, we propose a novel extension of the algorithms to the\nparallel MAB setting where an agent can control a batch of arms. Our\nexperiments in real-world settings, involving policy search and hyperparameter\noptimization in computational sustainability domains for fast charging of\nbatteries and wildlife corridor construction, demonstrate that exploiting the\nstructure of partial feedback can lead to significant improvements over\nbaselines in both sequential and parallel MAB.\n", "title": "Best arm identification in multi-armed bandits with delayed feedback" }
null
null
null
null
true
null
16027
null
Default
null
null
null
{ "abstract": " In this paper, we study a new class of Finsler metrics, F=\\alpha\\phi(b^2,s),\ns:=\\beta/\\alpha, defined by a Riemannian metric \\alpha and a 1-form \\beta,\ncalled general (\\alpha, \\beta) metrics. We assume that \\phi satisfies a suitable\ncondition in s and that \\beta is closed and conformal. We find a necessary and\nsufficient condition for a metric of relatively isotropic mean Landsberg\ncurvature to be Berwald.\n", "title": "General $(α, β)$ metrics with relatively isotropic mean Landsberg curvature" }
null
null
null
null
true
null
16028
null
Default
null
null
null
{ "abstract": " Optimal Transport has recently gained interest in machine learning for\napplications ranging from domain adaptation, sentence similarities to deep\nlearning. Yet, its ability to capture frequently occurring structure beyond the\n\"ground metric\" is limited. In this work, we develop a nonlinear generalization\nof (discrete) optimal transport that is able to reflect much additional\nstructure. We demonstrate how to leverage the geometry of this new model for\nfast algorithms, and explore connections and properties. Illustrative\nexperiments highlight the benefit of the induced structured couplings for tasks\nin domain adaptation and natural language processing.\n", "title": "Structured Optimal Transport" }
null
null
null
null
true
null
16029
null
Default
null
null
null
{ "abstract": " We present a novel analysis of the metal-poor star sample in the complete\nRadial Velocity Experiment (RAVE) Data Release 5 catalog with the goal of\nidentifying and characterizing all very metal-poor stars observed by the\nsurvey. Using a three-stage method, we first identified the candidate stars\nusing only their spectra as input information. We employed an algorithm called\nt-SNE to construct a low-dimensional projection of the spectrum space and\nisolate the region containing metal-poor stars. Following this step, we\nmeasured the equivalent widths of the near-infrared CaII triplet lines with a\nmethod based on flexible Gaussian processes to model the correlated noise\npresent in the spectra. In the last step, we constructed a calibration relation\nthat converts the measured equivalent widths and the color information coming\nfrom the 2MASS and WISE surveys into metallicity and temperature estimates. We\nidentified 877 stars with at least a 50% probability of being very metal-poor\n$(\\rm [Fe/H] < -2\\,\\rm dex)$, out of which 43 are likely extremely metal-poor\n$(\\rm [Fe/H] < -3\\,\\rm dex )$. The comparison of the derived values to a small\nsubsample of stars with literature metallicity values shows that our method\nworks reliably and correctly estimates the uncertainties, which typically have\nvalues $\\sigma_{\\rm [Fe/H]} \\approx 0.2\\,\\mathrm{dex}$. In addition, when\ncompared to the metallicity results derived using the RAVE DR5 pipeline, it is\nevident that we achieve better accuracy than the pipeline and therefore more\nreliably evaluate the very metal-poor subsample. Based on the repeated\nobservations of the same stars, our method gives very consistent results. The\nmethod used in this work can also easily be extended to other large-scale data\nsets, including to the data from the Gaia mission and the upcoming 4MOST\nsurvey.\n", "title": "Very metal-poor stars observed by the RAVE survey" }
null
null
null
null
true
null
16030
null
Default
null
null
null
{ "abstract": " Recently, the intervention calculus when the DAG is absent (IDA) method was\ndeveloped to estimate lower bounds of causal effects from observational\nhigh-dimensional data. Originally it was introduced to assess the effect of\nbaseline biomarkers which do not vary over time. However, in many clinical\nsettings, measurements of biomarkers are repeated at fixed time points during\ntreatment exposure and, therefore, this method needs to be extended. The purpose\nof this paper is then to extend the first step of the IDA, the Peter-Clark\n(PC) algorithm, to a time-dependent exposure in the context of a binary\noutcome. We generalised the PC-algorithm to take into account the\nchronological order of repeated measurements of the exposure and propose to\napply the IDA with our new version, the chronologically ordered PC-algorithm\n(COPC-algorithm). A simulation study has been performed before applying the\nmethod for estimating causal effects of time-dependent immunological biomarkers\non toxicity, death and progression in patients with metastatic melanoma. The\nsimulation study showed that the completed partially directed acyclic graphs\n(CPDAGs) obtained using the COPC-algorithm were structurally closer to the true\nCPDAG than CPDAGs obtained using the PC-algorithm. Also, causal effects were more\naccurate when they were estimated based on CPDAGs obtained using the\nCOPC-algorithm. Moreover, CPDAGs obtained by the COPC-algorithm allowed removing\nnon-chronological arrows with a variable measured at a time t pointing to a\nvariable measured at a time t' where t' < t. Bidirected edges were less present\nin CPDAGs obtained with the COPC-algorithm, supporting the fact that there was\nless variability in causal effects estimated from these CPDAGs. The\nCOPC-algorithm provided CPDAGs that keep the chronological structure present in\nthe data, thus allowing the estimation of lower bounds of the causal effect of\ntime-dependent biomarkers.\n", "title": "Estimating causal effects of time-dependent exposures on a binary endpoint in a high-dimensional setting" }
null
null
[ "Statistics" ]
null
true
null
16031
null
Validated
null
null
null
{ "abstract": " In this note, we present a new proof that the cyclotomic integers constitute\nthe full ring of integers in the cyclotomic field.\n", "title": "A Note on Cyclotomic Integers" }
null
null
null
null
true
null
16032
null
Default
null
null
null
{ "abstract": " We study the stochastic homogenization for a Cauchy problem for a first-order\nHamilton-Jacobi equation whose operator is not coercive w.r.t. the gradient\nvariable. We look at Hamiltonians like $H(x,\\sigma(x)p,\\omega)$ where\n$\\sigma(x)$ is a matrix associated to a Carnot group. The rescaling considered\nis consistent with the underlying Carnot group structure, thus anisotropic. We\nwill prove that under suitable assumptions for the Hamiltonian, the solutions\nof the $\\varepsilon$-problem converge to a deterministic function which can be\ncharacterized as the unique (viscosity) solution of a suitable deterministic\nHamilton-Jacobi problem.\n", "title": "Stochastic homogenization for functionals with anisotropic rescaling and non-coercive Hamilton-Jacobi equations" }
null
null
null
null
true
null
16033
null
Default
null
null
null
{ "abstract": " This article presents a survey on automatic software repair. Automatic\nsoftware repair consists of automatically finding a solution to software bugs\nwithout human intervention. This article considers all kinds of repairs. First,\nit discusses behavioral repair where test suites, contracts, models, and\ncrashing inputs are taken as oracle. Second, it discusses state repair, also\nknown as runtime repair or runtime recovery, with techniques such as checkpoint\nand restart, reconfiguration, and invariant restoration. The uniqueness of this\narticle is that it spans the research communities that contribute to this body\nof knowledge: software engineering, dependability, operating systems,\nprogramming languages, and security. It provides a novel and structured\noverview of the diversity of bug oracles and repair operators used in the\nliterature.\n", "title": "Automatic Software Repair: a Bibliography" }
null
null
null
null
true
null
16034
null
Default
null
null
null
{ "abstract": " The technical details of a stratospheric balloon mission aimed at\nmeasuring the Schumann resonances are described. The gondola is designed\nspecifically for measuring the faint effects of ELF (Extremely Low Frequency\nelectromagnetic waves) phenomena. The prototype met the design requirements.\nThe ELF measuring system worked properly for the entire mission; however, the level\nof signal amplification, chosen on the basis of ground-level measurements, was\ntoo high. Movement of the gondola in the Earth's magnetic field\ninduced a signal in the antenna that saturated the measuring system. This\neffect will be taken into account in the planning of future missions. A large\ntelemetry dataset was gathered during the experiment and is currently being\nprocessed. The payload also included biological material as well as\nelectronic equipment that was tested under extreme conditions.\n", "title": "The design and the performance of stratospheric mission in the search for the Schumann resonances" }
null
null
null
null
true
null
16035
null
Default
null
null
null
{ "abstract": " Self-driving technology is advancing rapidly --- albeit with significant\nchallenges and limitations. This progress is largely due to recent developments\nin deep learning algorithms. To date, however, there has been no systematic\ncomparison of how different deep learning architectures perform at such tasks,\nor an attempt to determine a correlation between classification performance and\nperformance in an actual vehicle, a potentially critical factor in developing\nself-driving systems. Here, we introduce the first controlled comparison of\nmultiple deep-learning architectures in an end-to-end autonomous driving task\nacross multiple testing conditions. We compared performance, under identical\ndriving conditions, across seven architectures including a fully-connected\nnetwork, a simple 2-layer CNN, AlexNet, VGG-16, Inception-V3, ResNet, and an\nLSTM by assessing the number of laps each model was able to successfully\ncomplete without crashing while traversing an indoor racetrack. We compared\nperformance across models when the conditions exactly matched those in training\nas well as when the local environment and track were configured differently and\nobjects that were not included in the training dataset were placed on the track\nin various positions. In addition, we considered performance using several\ndifferent data types for training and testing including single grayscale and\ncolor frames, and multiple grayscale frames stacked together in sequence. With\nthe exception of a fully-connected network, all models performed reasonably\nwell (around or above 80\\%) and most very well (~95\\%) on at least one input\ntype but with considerable variation across models and inputs. Overall,\nAlexNet, operating on single color frames as input, achieved the best level of\nperformance (100\\% success rate in phase one and 55\\% in phase two) while\nVGG-16 performed well most consistently across image types.\n", "title": "A Systematic Comparison of Deep Learning Architectures in an Autonomous Vehicle" }
null
null
null
null
true
null
16036
null
Default
null
null
null
{ "abstract": " A synoptic view on the long-established theory of light propagation in\ncrystalline dielectrics is presented, providing a new exact solution for the\nmicroscopic local electromagnetic field thus disclosing the role of the\ndivergence-free (transversal) and curl-free (longitudinal) parts of the\nelectromagnetic field inside a material as a function of the density of\npolarizable atoms. Our results enable fast and efficient calculation of the\nphotonic bandstructure and also the (non-local) dielectric tensor, solely with\nthe crystalline symmetry and atom-individual polarizabilities as input.\n", "title": "On the Theory of Light Propagation in Crystalline Dielectrics" }
null
null
[ "Physics" ]
null
true
null
16037
null
Validated
null
null
null
{ "abstract": " The nucleation and growth of calcite is an important research topic in both\nscientific and industrial fields. Both macroscopic and microscopic observations of\ncalcite growth have been reported. Now, with the development of microfluidic\ndevices, we can focus on the nucleation and growth of a single calcite crystal. By\nchanging the flow rate, the concentration of the fluid is controlled. We\nintroduced a new method to study calcite growth in situ and measured the growth\nrate of calcite in a microfluidic channel.\n", "title": "Microfluidic control of nucleation and growth of calcite" }
null
null
null
null
true
null
16038
null
Default
null
null
null
{ "abstract": " The Galactic magnetic field (GMF) plays a role in many astrophysical\nprocesses and is a significant foreground to cosmological signals, such as the\nEpoch of Reionization (EoR), but is not yet well understood. Dispersion and\nFaraday rotation measurements (DMs and RMs, respectively) towards a large\nnumber of pulsars provide an efficient method to probe the three-dimensional\nstructure of the GMF. Low-frequency polarisation observations with large\nfractional bandwidth can be used to measure precise DMs and RMs. This is\ndemonstrated by a catalogue of RMs (corrected for ionospheric Faraday rotation)\nfrom the Low Frequency Array (LOFAR), with a growing complementary catalogue in\nthe southern hemisphere from the Murchison Widefield Array (MWA). These data\nfurther our knowledge of the three-dimensional GMF, particularly towards the\nGalactic halo. Recently constructed or upgraded pathfinder and precursor\ntelescopes, such as LOFAR and the MWA, have reinvigorated low-frequency science\nand represent progress towards the construction of the Square Kilometre Array\n(SKA), which will make significant advancements in studies of astrophysical\nmagnetic fields in the future. A key science driver for the SKA-Low is to study\nthe EoR, for which pulsar and polarisation data can provide valuable insights\nin terms of Galactic foreground conditions.\n", "title": "Using low-frequency pulsar observations to study the 3-D structure of the Galactic magnetic field" }
null
null
null
null
true
null
16039
null
Default
null
null
null
{ "abstract": " The mechanisms by which organs acquire their functional structure and maintain\nit over time (homeostasis) are still largely unknown. In this paper, we\ninvestigate this question in adipose tissue. Adipose tissue can\nrepresent 20 to 50% of the body weight. Its investigation is key to overcoming a\nlarge array of metabolic disorders that heavily affect populations worldwide.\nAdipose tissue consists of lobular clusters of adipocytes surrounded by an\norganized collagen fiber network. By supplying the substrates needed for\nadipogenesis, the vasculature was believed to induce the grouping of adipocytes\nnear capillary extremities. This paper shows that the emergence of these\nstructures could be explained by simple mechanical interactions between the\nadipocytes and the collagen fibers. Our assumption is that the fiber network\nresists the pressure induced by the growing adipocytes and forces them to\nregroup into clusters. Reciprocally, cell clusters force the fibers to merge\ninto a well-organized network. We validate this hypothesis by means of a\ntwo-dimensional Individual Based Model (IBM) of interacting adipocytes and\nextra-cellular-matrix fiber elements. The model produces structures that\ncompare quantitatively well with the experimental observations. Our model seems\nto indicate that cell clusters could spontaneously emerge as a result of simple\nmechanical interactions between cells and fibers and, surprisingly, vasculature\nis not directly needed for these structures to emerge.\n", "title": "Simple mechanical cues could explain adipose tissue morphology" }
null
null
[ "Physics" ]
null
true
null
16040
null
Validated
null
null
null
{ "abstract": " With the National Toxicology Program issuing its final report on cancer, rats\nand cell phone radiation, one can draw the following conclusions from their\ndata. There is a roughly linear relationship between gliomas (brain cancers)\nand schwannomas (cancers of the nerve sheaths around the heart) with increased\nabsorption of 900 MHz radiofrequency radiation for male rats. The rate of these\ncancers in female rats is about one third the rate in male rats; the rate of\ngliomas in female humans is about two thirds the rate in male humans. Both of\nthese observations can be explained by a decrease in sensitivity to chemical\ncarcinogenesis in both female rats and female humans. The increase in male rat\nlife spans with increased radiofrequency absorption is due to a reduction in\nkidney failure from a decrease in food intake. No such similar increase in the\nlife span of humans who use cell phones is expected.\n", "title": "Comments on the National Toxicology Program Report on Cancer, Rats and Cell Phone Radiation" }
null
null
null
null
true
null
16041
null
Default
null
null
null
{ "abstract": " We answer the following long-standing question of Kolchin: given a system of\nalgebraic-differential equations $\\Sigma(x_1,\\dots,x_n)=0$ in $m$ derivatives\nover a differential field of characteristic zero, is there a computable bound\nthat only depends on the order of the system (and on the fixed data $m$ and\n$n$) for the typical differential dimension of any prime component of\n$\\Sigma$? We give a positive answer in a strong form; that is, we compute\nlower and upper bounds for all the coefficients of the Kolchin polynomial of\nevery such prime component. We then show that, if we look at those components\nof a specified differential type, we can compute a significantly better bound\nfor the typical differential dimension. This latter improvement comes from new\ncombinatorial results on characteristic sets, in combination with the classical\ntheorems of Macaulay and Gotzmann on the growth of Hilbert-Samuel functions.\n", "title": "Estimates for the coefficients of differential dimension polynomials" }
null
null
null
null
true
null
16042
null
Default
null
null
null
{ "abstract": " We investigated an out-of-plane exchange bias system that is based on the\nantiferromagnet MnN. Polycrystalline, highly textured film stacks of Ta / MnN /\nCoFeB / MgO / Ta were grown on SiO$_x$ by (reactive) magnetron sputtering and\nstudied by x-ray diffraction and Kerr magnetometry. Nontrivial modifications of\nthe exchange bias and the perpendicular magnetic anisotropy were observed both\nas functions of film thicknesses as well as field cooling temperatures. In\noptimized film stacks, a giant perpendicular exchange bias of 3600 Oe and a\ncoercive field of 350 Oe were observed at room temperature. The effective\ninterfacial exchange energy is estimated to be $J_\\mathrm{eff} = 0.24$ mJ/m$^2$\nand the effective uniaxial anisotropy constant of the antiferromagnet is\n$K_\\mathrm{eff} = 24$ kJ/m$^3$. The maximum effective perpendicular anisotropy\nfield of the CoFeB layer is $H_\\mathrm{ani} = 3400$ Oe. These values are larger\nthan any previously reported values. These results possibly open a route to\nmagnetically stable, exchange biased perpendicularly magnetized spin valves.\n", "title": "Giant perpendicular exchange bias with antiferromagnetic MnN" }
null
null
null
null
true
null
16043
null
Default
null
null
null
{ "abstract": " Recommendation systems are widely used by different user service providers,\nespecially those that interact with a large community of users. This\npaper introduces a recommender system based on community detection. The\nrecommendation is provided using the local and global similarities between\nusers. The local information is obtained from communities, and the global\ninformation is based on the ratings. Here, a new fuzzy community detection\nmethod using the personalized PageRank metaphor is introduced. The fuzzy\nmembership values of the users in the communities are utilized to define a\nsimilarity measure. The method is evaluated on two well-known datasets:\nMovieLens and FilmTrust. The results show that our method outperforms recent\nrecommender systems.\n", "title": "A Fuzzy Community-Based Recommender System Using PageRank" }
null
null
null
null
true
null
16044
null
Default
null
null
null
{ "abstract": " Complex Event Processing (CEP) has emerged as the unifying field for\ntechnologies that require processing and correlating distributed data sources\nin real-time. CEP finds applications in diverse domains, which has resulted in\na large number of proposals for expressing and processing complex events.\nHowever, existing CEP languages lack a clear semantics, making them hard\nto understand and generalize. Moreover, there are no general techniques for\nevaluating CEP query languages with clear performance guarantees.\nIn this paper we embark on the task of giving a rigorous and efficient\nframework to CEP. We propose a formal language for specifying complex events,\ncalled CEL, that contains the main features used in the literature and has a\ndenotational and compositional semantics. We also formalize the so-called\nselection strategies, which had only been presented as by-design extensions to\nexisting frameworks. With a well-defined semantics at hand, we study how to\nefficiently evaluate CEL for processing complex events in the case of unary\nfilters. We start by studying the syntactical properties of CEL and propose\nrewriting optimization techniques for simplifying the evaluation of formulas.\nThen, we introduce a formal computational model for CEP, called complex event\nautomata (CEA), and study how to compile CEL formulas into CEA. Furthermore, we\nprovide efficient algorithms for evaluating CEA over event streams using\nconstant time per event followed by constant-delay enumeration of the results.\nBy gathering these results together, we propose a framework for efficiently\nevaluating CEL with unary filters. Finally, we show experimentally that this\nframework consistently outperforms the competition, and even over trivial\nqueries can be orders of magnitude more efficient.\n", "title": "Foundations of Complex Event Processing" }
null
null
null
null
true
null
16045
null
Default
null
null
null
{ "abstract": " In electroencephalography (EEG) source imaging, the inverse source estimates\nare depth biased in such a way that their maxima are often close to the\nsensors. This depth bias can be quantified by inspecting the statistics (mean\nand covariance) of these estimates. In this paper, we find weighting factors,\nwithin a Bayesian framework, for the L1/L2 sparsity prior such that the\nresulting maximum a posteriori (MAP) estimates do not favor any particular\nsource location. Due to the lack of an analytical expression for the MAP\nestimate when this sparsity prior is used, we solve for the weights indirectly.\nFirst, we calculate the Gaussian prior variances that lead to depth-unbiased\nMAP estimates. Subsequently, we approximate the\ncorresponding weight factors in the sparsity prior based on the solved Gaussian\nprior variances. Finally, we reconstruct focal source configurations using the\nsparsity prior with the proposed weights and two other commonly used choices of\nweights that can be found in the literature.\n", "title": "Prior Variances and Depth Un-Biased Estimators in EEG Focal Source Imaging" }
null
null
[ "Physics" ]
null
true
null
16046
null
Validated
null
null
null
{ "abstract": " We present a multi-wavelength compilation of new and previously-published\nphotometry for 55 Galactic field RR Lyrae variables. Individual studies,\nspanning a time baseline of up to 30 years, are self-consistently phased to\nproduce light curves in 10 photometric bands covering the wavelength range from\n0.4 to 4.5 microns. Data smoothing via the GLOESS technique is described and\napplied to generate high-fidelity light curves, from which mean magnitudes,\namplitudes, rise-times, and times of minimum and maximum light are derived.\n60,000 observations were acquired using the new robotic Three-hundred\nMilliMeter Telescope (TMMT), which was first deployed at the Carnegie\nObservatories in Pasadena, CA, and is now permanently installed and operating\nat Las Campanas Observatory in Chile. We provide a full description of the TMMT\nhardware, software, and data reduction pipeline. Archival photometry\ncontributed approximately 31,000 observations. Photometric data are given in\nthe standard Johnson UBV, Kron-Cousins RI, 2MASS JHK, and Spitzer [3.6] & [4.5]\nbandpasses.\n", "title": "Standard Galactic Field RR Lyrae. I. Optical to Mid-infrared Phased Photometry" }
null
null
null
null
true
null
16047
null
Default
null
null
null
{ "abstract": " The concept of distance covariance/correlation was introduced recently to\ncharacterize dependence among vectors of random variables. We review some\nstatistical aspects of distance covariance/correlation function and we\ndemonstrate its applicability to time series analysis. We will see that the\nauto-distance covariance/correlation function is able to identify nonlinear\nrelationships and can be employed for testing the i.i.d.\\ hypothesis.\nComparisons with other measures of dependence are included.\n", "title": "An Updated Literature Review of Distance Correlation and its Applications to Time Series" }
null
null
null
null
true
null
16048
null
Default
null
null
null
{ "abstract": " This letter presents a novel method to estimate the relative poses between\nRGB-D cameras with minimally overlapping fields of view in a panoramic RGB-D\ncamera system. This calibration problem is relevant to applications such as\nindoor 3D mapping and robot navigation that can benefit from a 360$^\\circ$\nfield of view using RGB-D cameras. The proposed approach relies on\ndescriptor-based patterns to provide well-matched 2D keypoints in the case of a\nminimally overlapping field of view between cameras. Integrating the matched 2D\nkeypoints with corresponding depth values, a set of 3D matched keypoints is\nconstructed to calibrate multiple RGB-D cameras. Experiments validated the\naccuracy and efficiency of the proposed calibration approach, both superior to\nthose of existing methods (800 ms vs. 5 seconds; rotation error of 0.56 degrees\nvs. 1.6 degrees; and translation error of 1.80 cm vs. 2.5 cm).\n", "title": "A Novel Method for Extrinsic Calibration of Multiple RGB-D Cameras Using Descriptor-Based Patterns" }
null
null
[ "Computer Science" ]
null
true
null
16049
null
Validated
null
null
null
{ "abstract": " We show that, in contrast to the free-electron model (standard BCS model), a\nparticular gap in the spectrum of multiband superconductors opens at some\ndistance from the Fermi energy if the conduction band is composed of hybridized\natomic orbitals of different symmetries. This gap has a composite\nsuperconducting-hybridization origin, because it exists only if both\nsuperconductivity and hybridization between different kinds of orbitals are\npresent. Such spectral changes should therefore take place in many classes of\nsuperconductors with a multiorbital structure. These particular changes in the\nspectrum at some distance from the Fermi level result in slow convergence of\nthe spectral weight of the optical conductivity even in quite conventional\nsuperconductors with an isotropic s-wave pairing mechanism.\n", "title": "Particular type of gap in the spectrum of multiband superconductors" }
null
null
null
null
true
null
16050
null
Default
null
null
null
{ "abstract": " Decision-makers are faced with the challenge of estimating what is likely to\nhappen when they take an action. For instance, if I choose not to treat this\npatient, are they likely to die? Practitioners commonly use supervised learning\nalgorithms to fit predictive models that help decision-makers reason about\nlikely future outcomes, but we show that this approach is unreliable, and\nsometimes even dangerous. The key issue is that supervised learning algorithms\nare highly sensitive to the policy used to choose actions in the training data,\nwhich causes the model to capture relationships that do not generalize. We\npropose using a different learning objective that predicts counterfactuals\ninstead of predicting outcomes under an existing action policy as in supervised\nlearning. To support decision-making in temporal settings, we introduce the\nCounterfactual Gaussian Process (CGP) to predict the counterfactual future\nprogression of continuous-time trajectories under sequences of future actions.\nWe demonstrate the benefits of the CGP on two important decision-support tasks:\nrisk prediction and \"what if?\" reasoning for individualized treatment planning.\n", "title": "Reliable Decision Support using Counterfactual Models" }
null
null
null
null
true
null
16051
null
Default
null
null
null
{ "abstract": " We present a new Q-function operator for temporal difference (TD) learning\nmethods that explicitly encodes robustness against significant rare events\n(SRE) in critical domains. The operator, which we call the $\\kappa$-operator,\nallows learning a safe policy in a model-based fashion without actually\nobserving the SRE. We introduce single- and multi-agent robust TD methods using\nthe operator $\\kappa$. We prove convergence of the operator to the optimal safe\nQ-function with respect to the model using the theory of Generalized Markov\nDecision Processes. In addition, we prove convergence to the optimal Q-function\nof the original MDP given that the probability of SREs vanishes. Empirical\nevaluations demonstrate the superior performance of $\\kappa$-based TD methods\nboth in the early learning phase as well as in the final converged stage. In\naddition, we show robustness of the proposed method to small model errors, as\nwell as its applicability in a multi-agent context.\n", "title": "Robust temporal difference learning for critical domains" }
null
null
null
null
true
null
16052
null
Default
null
null
null
{ "abstract": " We study the entanglement entropy of gapped phases of matter in three spatial\ndimensions. We focus in particular on size-independent contributions to the\nentropy across entanglement surfaces of arbitrary topologies. We show that for\nlow energy fixed-point theories, the constant part of the entanglement entropy\nacross any surface can be reduced to a linear combination of the entropies\nacross a sphere and a torus. We first derive our results using strong\nsub-additivity inequalities along with assumptions about the entanglement\nentropy of fixed-point models, and identify the topological contribution by\nconsidering the renormalization group flow; in this way we give an explicit\ndefinition of topological entanglement entropy $S_{\\mathrm{topo}}$ in (3+1)D,\nwhich sharpens previous results. We illustrate our results using several\nconcrete examples and independent calculations, and show that adding \\"twist\\"\nterms to the Lagrangian can change $S_{\\mathrm{topo}}$ in (3+1)D. For the\ngeneralized Walker-Wang models, we find that the ground state degeneracy on a\n3-torus is given by $\\exp(-3S_{\\mathrm{topo}}[T^2])$ in terms of the topological\nentanglement entropy across a 2-torus. We conjecture that a similar\nrelationship holds for Abelian theories in $(d+1)$ dimensional spacetime, with\nthe ground state degeneracy on the $d$-torus given by\n$\\exp(-dS_{\\mathrm{topo}}[T^{d-1}])$.\n", "title": "Structure of the Entanglement Entropy of (3+1)D Gapped Phases of Matter" }
null
null
null
null
true
null
16053
null
Default
null
null
null
{ "abstract": " This paper is concerned with two frequency-dependent SIS epidemic\nreaction-diffusion models in heterogeneous environment, with a cross-diffusion\nterm modeling the effect that susceptible individuals tend to move away from\nhigher concentration of infected individuals. It is first shown that the\ncorresponding Neumann initial-boundary value problem in an $n$-dimensional\nbounded smooth domain possesses a unique global classical solution which is\nuniformly-in-time bounded regardless of the strength of the cross-diffusion and\nthe spatial dimension $n$. It is further shown that, even in the presence of\ncross-diffusion, the models still admit threshold-type dynamics in terms of the\nbasic reproduction number $\\mathcal R_0$; that is, the unique disease free\nequilibrium is globally stable if $\\mathcal R_0<1$, while if $\\mathcal R_0>1$,\nthe disease is uniformly persistent and there is an endemic equilibrium, which\nis globally stable in some special cases with weak chemotactic sensitivity. Our\nresults on the asymptotic profiles of endemic equilibrium illustrate that\nrestricting the motility of susceptible population may eliminate the infectious\ndisease entirely for the first model with constant total population but fails\nfor the second model with varying total population. In particular, this implies\nthat such cross-diffusion does not contribute to the elimination of the\ninfectious disease modelled by the second one.\n", "title": "Dynamics and asymptotic profiles of endemic equilibrium for two frequency-dependent SIS epidemic models with cross-diffusion" }
null
null
null
null
true
null
16054
null
Default
null
null
null
{ "abstract": " Gravitational clustering in the nonlinear regime remains poorly understood.\nA gravity dual of gravitational clustering has recently been proposed as a means\nto study the nonlinear regime. The stable clustering ansatz remains a key\ningredient of our understanding of gravitational clustering in the highly\nnonlinear regime. We study certain aspects of the violation of the stable\nclustering ansatz in the gravity dual of Large Scale Structure (LSS). We extend\nthe recent studies of gravitational clustering using an AdS gravity dual to take\ninto account possible departures from the stable clustering ansatz, and to\narbitrary dimensions. Next, we extend the recently introduced consistency\nrelations to arbitrary dimensions. We use the consistency relations to test\ncommonly used models of gravitational clustering, including halo models and\nhierarchical ansätze. In particular, we establish a tower of consistency\nrelations for the hierarchical amplitudes $Q, R_a, R_b, S_a, S_b, S_c$, etc., as\nfunctions of the scaled peculiar velocity $h$. We also study variants of\npopular halo models in this context. In contrast to recent claims, none of\nthese models, in their simplest incarnation, seems to satisfy the consistency\nrelations in the soft limit.\n", "title": "Stable Clustering Ansatz, Consistency Relations and Gravity Dual of Large-Scale Structure" }
null
null
null
null
true
null
16055
null
Default
null
null
null
{ "abstract": " Independent Component Analysis (ICA) is the problem of learning a square\nmatrix $A$, given samples of $X=AS$, where $S$ is a random vector with\nindependent coordinates. Most existing algorithms are provably efficient only\nwhen each $S_i$ has finite and moderately valued fourth moment. However, there\nare practical applications where this assumption need not be true, such as\nspeech and finance. Algorithms have been proposed for heavy-tailed ICA, but\nthey are not practical, using random walks and the full power of the ellipsoid\nalgorithm multiple times. The main contributions of this paper are:\n(1) A practical algorithm for heavy-tailed ICA that we call HTICA. We provide\ntheoretical guarantees and show that it outperforms other algorithms in some\nheavy-tailed regimes, both on real and synthetic data. Like the current\nstate-of-the-art, the new algorithm is based on the centroid body (a first\nmoment analogue of the covariance matrix). Unlike the state-of-the-art, our\nalgorithm is practically efficient. To achieve this, we use explicit analytic\nrepresentations of the centroid body, which bypasses the use of the ellipsoid\nmethod and random walks.\n(2) We study how heavy tails affect different ICA algorithms, including\nHTICA. Somewhat surprisingly, we show that some algorithms that use the\ncovariance matrix or higher moments can successfully solve a range of ICA\ninstances with infinite second moment. We study this theoretically and\nexperimentally, with both synthetic and real-world heavy-tailed data.\n", "title": "Heavy-Tailed Analogues of the Covariance Matrix for ICA" }
null
null
null
null
true
null
16056
null
Default
null
null
null
{ "abstract": " The round trip time of the light pulse limits the maximum detectable\nfrequency response range of vibration in phase-sensitive optical time domain\nreflectometry ({\\phi}-OTDR). We propose a method to break the frequency\nresponse range restriction of the {\\phi}-OTDR system by modulating the light\npulse interval randomly, which enables random sampling for every vibration\npoint in a long sensing fiber. This sub-Nyquist randomized sampling method is\nsuited for detecting sparse-wideband-frequency vibration signals. Resonance\nvibration signals up to MHz with dozens of frequency components and a 1.153 MHz\nsingle-frequency vibration signal are clearly identified for a sensing range of\n9.6 km with a 10 kHz maximum sampling rate.\n", "title": "Breaking through the bandwidth barrier in distributed fiber vibration sensing by sub-Nyquist randomized sampling" }
null
null
null
null
true
null
16057
null
Default
null
null
null
{ "abstract": " The continuity of the gauge fixing condition $n\\cdot\\partial n\\cdot A=0$ for\n$SU(2)$ gauge theory on the manifold $R\\bigotimes S^{1}\\bigotimes\nS^{1}\\bigotimes S^{1}$ is studied here, where $n^{\\mu}$ stands for the\ndirectional vector along the $x_{i}$-axis ($i=1,2,3$). It is proved that the\ngauge fixing condition is continuous given that the gauge potentials are\ndifferentiable with continuous derivatives on the manifold $R\\bigotimes\nS^{1}\\bigotimes S^{1}\\bigotimes S^{1}$, which is compact.\n", "title": "The Continuity of the Gauge Fixing Condition $n\\cdot\\partial n\\cdot A=0$ for $SU(2)$ Gauge Theory" }
null
null
null
null
true
null
16058
null
Default
null
null
null
{ "abstract": " Asymptotic theory for approximate martingale estimating functions is\ngeneralised to diffusions with finite-activity jumps, when the sampling\nfrequency and terminal sampling time go to infinity. Rate optimality and\nefficiency are of particular concern. Under mild assumptions, it is shown that\nestimators of drift, diffusion, and jump parameters are consistent and\nasymptotically normal, as well as rate-optimal for the drift and jump\nparameters. Additional conditions are derived, which ensure rate-optimality for\nthe diffusion parameter as well as efficiency for all parameters. The findings\nindicate a potentially fruitful direction for the further development of\nestimation for jump-diffusions.\n", "title": "Estimating functions for jump-diffusions" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
16059
null
Validated
null
null
null
{ "abstract": " Malignant melanoma has one of the most rapidly increasing incidences in the\nworld and has a considerable mortality rate. Early diagnosis is particularly\nimportant since melanoma can be cured with prompt excision. Dermoscopy images\nplay an important role in the non-invasive early detection of melanoma [1].\nHowever, melanoma detection using human vision alone can be subjective,\ninaccurate and poorly reproducible even among experienced dermatologists. This\nis attributed to the challenges in interpreting images with diverse\ncharacteristics including lesions of varying sizes and shapes, lesions that may\nhave fuzzy boundaries, different skin colors and the presence of hair [2].\nTherefore, the automatic analysis of dermoscopy images is a valuable aid for\nclinical decision making and for image-based diagnosis to identify diseases\nsuch as melanoma [1-4]. Deep residual networks (ResNets) have achieved\nstate-of-the-art results in image classification and detection related problems\n[5-8]. In this ISIC 2017 skin lesion analysis challenge [9], we propose to\nexploit deep ResNets for robust visual feature learning and\nrepresentation.\n", "title": "Automatic Skin Lesion Analysis using Large-scale Dermoscopy Images and Deep Residual Networks" }
null
null
null
null
true
null
16060
null
Default
null
null
null
{ "abstract": " We consider the problem of improving kernel approximation via randomized\nfeature maps. These maps arise as Monte Carlo approximation to integral\nrepresentations of kernel functions and scale up kernel methods for larger\ndatasets. Based on an efficient numerical integration technique, we propose a\nunifying approach that reinterprets the previous random features methods and\nextends to better estimates of the kernel approximation. We derive the\nconvergence behaviour and conduct an extensive empirical study that supports\nour hypothesis.\n", "title": "Quadrature-based features for kernel approximation" }
null
null
null
null
true
null
16061
null
Default
null
null
null
{ "abstract": " Various measures can be used to estimate bias or unfairness in a predictor.\nPrevious work has already established that some of these measures are\nincompatible with each other. Here we show that, when groups differ in\nprevalence of the predicted event, several intuitive, reasonable measures of\nfairness (probability of positive prediction given occurrence or\nnon-occurrence; probability of occurrence given prediction or non-prediction;\nand ratio of predictions over occurrences for each group) are all mutually\nexclusive: if one of them is equal among groups, the other two must differ. The\nonly exceptions are for perfect, or trivial (always-positive or\nalways-negative) predictors. As a consequence, any non-perfect, non-trivial\npredictor must necessarily be \\"unfair\\" under two out of three reasonable sets\nof criteria. This result readily generalizes to a wide range of well-known\nstatistical quantities (sensitivity, specificity, false positive rate,\nprecision, etc.), all of which can be divided into three mutually exclusive\ngroups. Importantly, the result applies to all predictors, whether algorithmic\nor human. We conclude with possible ways to handle this effect when assessing\nand designing prediction methods.\n", "title": "The impossibility of \\"fairness\\": a generalized impossibility result for decisions" }
null
null
null
null
true
null
16062
null
Default
null
null
null
{ "abstract": " The entropy of a quantum system is a measure of its randomness, and has\napplications in measuring quantum entanglement. We study the problem of\nmeasuring the von Neumann entropy, $S(\\rho)$, and Rényi entropy,\n$S_\\alpha(\\rho)$ of an unknown mixed quantum state $\\rho$ in $d$ dimensions,\ngiven access to independent copies of $\\rho$.\nWe provide an algorithm with copy complexity $O(d^{2/\\alpha})$ for estimating\n$S_\\alpha(\\rho)$ for $\\alpha<1$, and copy complexity $O(d^{2})$ for estimating\n$S(\\rho)$, and $S_\\alpha(\\rho)$ for non-integral $\\alpha>1$. These bounds are\nat least quadratic in $d$, which is the order dependence on the number of\ncopies required for learning the entire state $\\rho$. For integral $\\alpha>1$,\non the other hand, we provide an algorithm for estimating $S_\\alpha(\\rho)$ with\na sub-quadratic copy complexity of $O(d^{2-2/\\alpha})$. We characterize the\ncopy complexity for integral $\\alpha>1$ up to constant factors by providing\nmatching lower bounds. For other values of $\\alpha$, and the von Neumann\nentropy, we show lower bounds on the algorithm that achieves the upper bound.\nThis shows that we either need new algorithms for better upper bounds, or\nbetter lower bounds to tighten the results.\nFor non-integral $\\alpha$, and the von Neumann entropy, we consider the well\nknown Empirical Young Diagram (EYD) algorithm, which is the analogue of\nempirical plug-in estimator in classical distribution estimation. As a\ncorollary, we strengthen a lower bound on the copy complexity of the EYD\nalgorithm for learning the maximally mixed state by showing that the lower\nbound holds with exponential probability (which was previously known to hold\nwith a constant probability). For integral $\\alpha>1$, we provide new\nconcentration results of certain polynomials that arise in Kerov algebra of\nYoung diagrams.\n", "title": "Measuring Quantum Entropy" }
null
null
null
null
true
null
16063
null
Default
null
null
null
{ "abstract": " When studying tropical cyclones using the $f$-plane, axisymmetric, gradient\nbalanced model, there arises a second-order elliptic equation for the\ntransverse circulation. Similarly, when studying zonally symmetric meridional\ncirculations near the equator (the tropical Hadley cells) or the katabatically\nforced meridional circulation over Antarctica, there also arises a second order\nelliptic equation. These elliptic equations are usually derived in the pressure\ncoordinate or the potential temperature coordinate, since the thermal wind\nequation has simple non-Jacobian forms in these two vertical coordinates.\nBecause of the large variations in surface pressure that can occur in tropical\ncyclones and over the Antarctic ice sheet, there is interest in using other\nvertical coordinates, e.g., the height coordinate, the classical\n$\\sigma$-coordinate, or some type of hybrid coordinate typically used in global\nnumerical weather prediction or climate models. Because the thermal wind\nequation in these coordinates takes a Jacobian form, the derivation of the\nelliptic transverse circulation equation is not as simple. Here we present a\nmethod for deriving the elliptic transverse circulation equation in a\ngeneralized vertical coordinate, which allows for many particular vertical\ncoordinates, such as height, pressure, log-pressure, potential temperature,\nclassical $\\sigma$, and most hybrid cases. Advantages and disadvantages of the\nvarious coordinates are discussed.\n", "title": "Elliptic Transverse Circulation Equations for Balanced Models in a Generalized Vertical Coordinate" }
null
null
null
null
true
null
16064
null
Default
null
null
null
{ "abstract": " We present the Voice Conversion Challenge 2018, designed as a follow up to\nthe 2016 edition with the aim of providing a common framework for evaluating\nand comparing different state-of-the-art voice conversion (VC) systems. The\nobjective of the challenge was to perform speaker conversion (i.e. transform\nthe vocal identity) of a source speaker to a target speaker while maintaining\nlinguistic information. As an update to the previous challenge, we considered\nboth parallel and non-parallel data to form the Hub and Spoke tasks,\nrespectively. A total of 23 teams from around the world submitted their\nsystems, 11 of them additionally participated in the optional Spoke task. A\nlarge-scale crowdsourced perceptual evaluation was then carried out to rate the\nsubmitted converted speech in terms of naturalness and similarity to the target\nspeaker identity. In this paper, we present a brief summary of the\nstate-of-the-art techniques for VC, followed by a detailed explanation of the\nchallenge tasks and the results that were obtained.\n", "title": "The Voice Conversion Challenge 2018: Promoting Development of Parallel and Nonparallel Methods" }
null
null
[ "Statistics" ]
null
true
null
16065
null
Validated
null
null
null
{ "abstract": " In a companion paper, we developed an efficient algebraic method for\ncomputing the Fourier transforms of certain functions defined on prehomogeneous\nvector spaces over finite fields, and we carried out these computations in a\nvariety of cases.\nHere we develop a method, based on Fourier analysis and algebraic geometry,\nwhich exploits these Fourier transform formulas to yield level of distribution\nresults, in the sense of analytic number theory. Such results are of the shape\ntypically required for a variety of sieve methods. As an example of such an\napplication we prove that there are $\\gg$ X/log(X) quartic fields whose\ndiscriminant is squarefree, bounded above by X, and has at most eight prime\nfactors.\n", "title": "Levels of distribution for sieve problems in prehomogeneous vector spaces" }
null
null
null
null
true
null
16066
null
Default
null
null
null
{ "abstract": " We introduce Imagination-Augmented Agents (I2As), a novel architecture for\ndeep reinforcement learning combining model-free and model-based aspects. In\ncontrast to most existing model-based reinforcement learning and planning\nmethods, which prescribe how a model should be used to arrive at a policy, I2As\nlearn to interpret predictions from a learned environment model to construct\nimplicit plans in arbitrary ways, by using the predictions as additional\ncontext in deep policy networks. I2As show improved data efficiency,\nperformance, and robustness to model misspecification compared to several\nbaselines.\n", "title": "Imagination-Augmented Agents for Deep Reinforcement Learning" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
16067
null
Validated
null
null
null
{ "abstract": " We study the impact of quenched disorder (random exchange couplings or site\ndilution) on easy-plane pyrochlore antiferromagnets. In the clean system,\norder-by-disorder selects a magnetically ordered state from a classically\ndegenerate manifold. In the presence of randomness, however, different orders\ncan be chosen locally depending on details of the disorder configuration. Using\na combination of analytical considerations and classical Monte-Carlo\nsimulations, we argue that any long-range-ordered magnetic state is destroyed\nbeyond a critical level of randomness where the system breaks into magnetic\ndomains due to random exchange anisotropies, becoming, therefore, a glass of\nspin clusters, in accordance with the available experimental data. These random\nanisotropies originate from off-diagonal exchange couplings in the microscopic\nHamiltonian, establishing their relevance to other magnets with strong\nspin-orbit coupling.\n", "title": "Cluster-glass phase in pyrochlore XY antiferromagnets with quenched disorder" }
null
null
null
null
true
null
16068
null
Default
null
null
null
{ "abstract": " A remote-sensing system that can determine the position of hidden objects has\napplications in many critical real-life scenarios, such as search and rescue\nmissions and safe autonomous driving. Previous work has shown the ability to\nrange and image objects hidden from the direct line of sight, employing\nadvanced optical imaging technologies aimed at small objects at short range. In\nthis work we demonstrate a long-range tracking system based on single laser\nillumination and single-pixel single-photon detection. This enables us to track\none or more people hidden from view at a stand-off distance of over 50 m. These\nresults pave the way towards next-generation LiDAR systems that will\nreconstruct not only the direct-view scene but also the main elements hidden\nbehind walls or corners.\n", "title": "Non-line-of-sight tracking of people at long range" }
null
null
null
null
true
null
16069
null
Default
null
null
null
{ "abstract": " Can an algorithm create original and compelling fashion designs to serve as\nan inspirational assistant? To help answer this question, we design and\ninvestigate different image generation models associated with different loss\nfunctions to boost creativity in fashion generation. The dimensions of our\nexplorations include: (i) different Generative Adversarial Network\narchitectures that start from noise vectors to generate fashion items, (ii)\nnovel loss functions that encourage novelty, inspired by the Sharma-Mittal\ndivergence, a generalized mutual information measure for the widely used\nrelative entropies such as Kullback-Leibler, and (iii) a generation process\nfollowing the key elements of fashion design (disentangling shape and texture\ncomponents). A key challenge of this study is the evaluation of generated\ndesigns and the retrieval of the best ones, hence we put together an evaluation\nprotocol associating automatic metrics and human experimental studies that we\nhope will help ease future research. We show that our proposed creativity\ncriterion yields better overall appreciation than the one employed in Creative\nAdversarial Networks. In the end, about 61% of our images are thought to be\ncreated by human designers rather than by a computer, while also being\nconsidered original per our human subject experiments, and our proposed loss\nscores the highest compared to existing losses in both novelty and likability.\n", "title": "DeSIGN: Design Inspiration from Generative Networks" }
null
null
null
null
true
null
16070
null
Default
null
null
null
{ "abstract": " The paper discusses the challenges of faceted vocabulary organization in\nuniversal classifications which treat the universe of knowledge as a coherent\nwhole and in which the concepts and subjects in different disciplines are\nshared, related and combined. The authors illustrate the challenges of the\nfacet analytical approach using, as an example, the revision of class 72 in\nUDC. The paper reports on the research undertaken in 2013 as preparation for\nthe revision. This consisted of analysis of concept organization in the UDC\nschedules in comparison with the Art & Architecture Thesaurus and class W of\nthe Bliss Bibliographic Classification. The paper illustrates how such research\ncan contribute to a better understanding of the field and may lead to\nimprovements in the facet structure of this segment of the UDC vocabulary.\n", "title": "Challenges of facet analysis and concept placement in universal classifications: the example of architecture in UDC" }
null
null
null
null
true
null
16071
null
Default
null
null
null
{ "abstract": " The reconstruction of a species phylogeny from genomic data faces two\nsignificant hurdles: 1) the trees describing the evolution of each individual\ngene--i.e., the gene trees--may differ from the species phylogeny and 2) the\nmolecular sequences corresponding to each gene often provide limited\ninformation about the gene trees themselves. In this paper we consider an\napproach to species tree reconstruction that addresses both these hurdles.\nSpecifically, we propose an algorithm for phylogeny reconstruction under the\nmultispecies coalescent model with a standard model of site substitution. The\nmultispecies coalescent is commonly used to model gene tree discordance due to\nincomplete lineage sorting, a well-studied population-genetic effect.\nIn previous work, an information-theoretic trade-off was derived in this\ncontext between the number of loci, $m$, needed for an accurate reconstruction\nand the length of the locus sequences, $k$. It was shown that to reconstruct an\ninternal branch of length $f$, one needs $m$ to be of the order of $1/[f^{2}\n\\sqrt{k}]$. That previous result was obtained under the molecular clock\nassumption, i.e., under the assumption that mutation rates (as well as\npopulation sizes) are constant across the species phylogeny.\nHere we generalize this result beyond the restrictive molecular clock\nassumption, and obtain a new reconstruction algorithm that has the same data\nrequirement (up to log factors). Our main contribution is a novel reduction to\nthe molecular clock case under the multispecies coalescent. As a corollary, we\nalso obtain a new identifiability result of independent interest: for any\nspecies tree with $n \\geq 3$ species, the rooted species tree can be identified\nfrom the distribution of its unrooted weighted gene trees even in the absence\nof a molecular clock.\n", "title": "Coalescent-based species tree estimation: a stochastic Farris transform" }
null
null
[ "Computer Science", "Mathematics", "Statistics" ]
null
true
null
16072
null
Validated
null
null
null
{ "abstract": " A new amortized variance-reduced gradient (AVRG) algorithm was developed in\n\\cite{ying2017convergence}, which has constant storage requirement in\ncomparison to SAGA and balanced gradient computations in comparison to SVRG.\nOne key advantage of the AVRG strategy is its amenability to decentralized\nimplementations. In this work, we show how AVRG can be extended to the network\ncase where multiple learning agents are assumed to be connected by a graph\ntopology. In this scenario, each agent observes data that is spatially\ndistributed and all agents are only allowed to communicate with direct\nneighbors. Moreover, the amount of data observed by the individual agents may\ndiffer drastically. For such situations, the balanced gradient computation\nproperty of AVRG becomes a real advantage in reducing idle time caused by\nunbalanced local data storage requirements, which is characteristic of other\nreduced-variance gradient algorithms. The resulting diffusion-AVRG algorithm is\nshown to have linear convergence to the exact solution, and is much more memory\nefficient than other alternative algorithms. In addition, we propose a\nmini-batch strategy to balance the communication and computation efficiency for\ndiffusion-AVRG. When a proper batch size is employed, it is observed in\nsimulations that diffusion-AVRG is more computationally efficient than exact\ndiffusion or EXTRA while maintaining almost the same communication efficiency.\n", "title": "Variance-Reduced Stochastic Learning by Networked Agents under Random Reshuffling" }
null
null
null
null
true
null
16073
null
Default
null
null
null
{ "abstract": " This paper considers a novel framework to detect communities in a graph from\nthe observation of signals at its nodes. We model the observed signals as noisy\noutputs of an unknown network process -- represented as a graph filter -- that\nis excited by a set of low-rank inputs. Rather than learning the precise\nparameters of the graph itself, the proposed method retrieves the community\nstructure directly. Furthermore, as in blind system identification methods, it\ndoes not require knowledge of the system excitation. The paper shows that\ncommunities can be detected by applying spectral clustering to the low-rank\noutput covariance matrix obtained from the graph signals. The performance\nanalysis indicates that the community detection accuracy depends on the\nspectral properties of the graph filter considered. Furthermore, we show that\nthe accuracy can be improved via a low-rank matrix decomposition method when\nthe excitation signals are known. Numerical experiments demonstrate that our\napproach is effective for analyzing network data from diffusion, consumer, and\nsocial dynamics.\n", "title": "Blind Community Detection from Low-rank Excitations of a Graph Filter" }
null
null
null
null
true
null
16074
null
Default
null
null
null
{ "abstract": " Nanostructures with open shell transition metal or molecular constituents\noften host strong electronic correlations and are highly sensitive to atomistic\nmaterial details. This tutorial review discusses method developments and\napplications of theoretical approaches for the realistic description of the\nelectronic and magnetic properties of nanostructures with correlated electrons.\nFirst, the implementation of a flexible interface between density functional\ntheory and a variant of dynamical mean field theory (DMFT) highly suitable for\nthe simulation of complex correlated structures is explained and illustrated.\nOn the DMFT side, this interface is largely based on recent developments of\nquantum Monte Carlo and exact diagonalization techniques allowing for efficient\ndescriptions of general four-fermion Coulomb interactions, reduced symmetries\nand spin-orbit coupling, which are explained here. With the examples of the Cr\n(001) surfaces, magnetic adatoms, and molecular systems, it is shown how the\ninterplay of the Hubbard U and Hund's J determines charge and spin fluctuations\nand how these interactions drive different sorts of correlation effects in\nnanosystems. Non-local interactions and correlations present a particular\nchallenge for the theory of low dimensional systems. We present our method\ndevelopments addressing these two challenges, i.e., advancements of the\ndynamical vertex approximation and a combination of the constrained random\nphase approximation with continuum medium theories. We demonstrate how\nnon-local interaction and correlation phenomena are controlled not only by\ndimensionality but also by coupling to the environment, which is typically\nimportant for determining the physics of nanosystems.\n", "title": "Realistic theory of electronic correlations in nanoscopic systems" }
null
null
null
null
true
null
16075
null
Default
null
null
null
{ "abstract": " In the inverse problem of the calculus of variations one is asked to find a\nLagrangian and a multiplier so that a given differential equation, after\nmultiplying with the multiplier, becomes the Euler--Lagrange equation for the\nLagrangian. An answer to this problem for the case of a scalar ordinary\ndifferential equation of order $2n, n\\geq 2,$ is proposed.\n", "title": "The Multiplier Problem of the Calculus of Variations for Scalar Ordinary Differential Equations" }
null
null
null
null
true
null
16076
null
Default
null
null
null
{ "abstract": " Let $\\mathbb{K}$ be an algebraically closed field of characteristic $0$. We\nstudy a monoidal category $\\mathbb{T}_\\alpha$ which is universal among all\nsymmetric $\\mathbb{K}$-linear monoidal categories generated by two objects $A$\nand $B$ such that $A$ has a, possibly transfinite, filtration. We construct\n$\\mathbb{T}_\\alpha$ as a category of representations of the Lie algebra\n$\\mathfrak{gl}^M(V_*,V)$ consisting of endomorphisms of a fixed diagonalizable\npairing $V_*\\otimes V\\to \\mathbb{K}$ of vector spaces $V_*$ and $V$ of\ndimension $\\alpha$. Here $\\alpha$ is an arbitrary cardinal number. We describe\nexplicitly the simple and the injective objects of $\\mathbb{T}_\\alpha$ and\nprove that the category $\\mathbb{T}_\\alpha$ is Koszul. We pay special attention\nto the case where the filtration on $A$ is finite. In this case\n$\\alpha=\\aleph_t$ for $t\\in\\mathbb{Z}_{\\geq 0}$.\n", "title": "Representation categories of Mackey Lie algebras as universal monoidal categories" }
null
null
null
null
true
null
16077
null
Default
null
null
null
{ "abstract": " Conventional textbook treatments of electromagnetic wave propagation consider\nthe induced charge and current densities as \"bound\", and therefore absorb them\ninto a refractive index. In principle it must also be possible to treat the\nmedium as vacuum, but with explicit charge and current densities. This gives a\nmore direct, physical description. However, since the induced waves propagate\nin vacuum in this picture, it is not straightforward to realize that the\nwavelength differs from that in vacuum. We provide an\nexplanation, and also associated time-domain simulations. As an extra bonus, the\nresults turn out to illuminate the behavior of metamaterials.\n", "title": "Dielectric media considered as vacuum with sources" }
null
null
null
null
true
null
16078
null
Default
null
null
null
{ "abstract": " With a large number of sensors and control units in networked systems,\ndistributed support vector machines (DSVMs) play a fundamental role in scalable\nand efficient multi-sensor classification and prediction tasks. However, DSVMs\nare vulnerable to adversaries who can modify and generate data to deceive the\nsystem into misclassification and misprediction. This work aims to design defense\nstrategies for the DSVM learner against a potential adversary. We establish a\ngame-theoretic framework to capture the conflicting interests between the DSVM\nlearner and the attacker. The Nash equilibrium of the game allows predicting\nthe outcome of learning algorithms in adversarial environments, and enhancing\nthe resilience of machine learning through dynamic distributed learning\nalgorithms. We show that the DSVM learner is less vulnerable when it uses a\nbalanced network with fewer nodes and higher degree. We also show that adding\nmore training samples is an efficient defense strategy against an attacker. We\npresent secure and resilient DSVM algorithms with verification and\nrejection methods, and show their resiliency against an adversary with numerical\nexperiments.\n", "title": "Game-Theoretic Design of Secure and Resilient Distributed Support Vector Machines with Adversaries" }
null
null
null
null
true
null
16079
null
Default
null
null
null
{ "abstract": " Confidence interval procedures used in low dimensional settings are often\ninappropriate for high dimensional applications. When a large number of\nparameters are estimated, marginal confidence intervals associated with the\nmost significant estimates have very low coverage rates: They are too small and\ncentered at biased estimates. The problem of forming confidence intervals in\nhigh dimensional settings has previously been studied through the lens of\nselection adjustment. In this framework, the goal is to control the proportion\nof non-covering intervals formed for selected parameters.\nIn this paper we approach the problem by considering the relationship between\nrank and coverage probability. Marginal confidence intervals have very low\ncoverage rates for significant parameters and high rates for parameters with\nmore boring estimates. Many selection adjusted intervals display the same\npattern. This connection motivates us to propose a new coverage criterion for\nconfidence intervals in multiple testing/covering problems --- the rank\nconditional coverage (RCC). This is the expected coverage rate of an interval\ngiven the significance ranking for the associated estimator. We propose\ninterval construction via bootstrapping, which produces small intervals whose\nrank conditional coverage is close to the nominal level. These methods are\nimplemented in the R package rcc.\n", "title": "Rank conditional coverage and confidence intervals in high dimensional problems" }
null
null
null
null
true
null
16080
null
Default
null
null
null
{ "abstract": " Surface-functionalized nanomaterials can act as theranostic agents that\ndetect disease and track biological processes using hyperpolarized magnetic\nresonance imaging (MRI). Candidate materials are sparse however, requiring\nspinful nuclei with long spin-lattice relaxation (T1) and spin-dephasing times\n(T2), together with a reservoir of electrons to impart hyperpolarization. Here,\nwe demonstrate the versatility of the nanodiamond material system for\nhyperpolarized 13C MRI, making use of its intrinsic paramagnetic defect\ncenters, hours-long nuclear T1 times, and T2 times suitable for spatially\nresolving millimeter-scale structures. Combining these properties, we enable a\nnew imaging modality that exploits the phase-contrast between spins encoded\nwith a hyperpolarization that is aligned, or anti-aligned with the external\nmagnetic field. The use of phase-encoded hyperpolarization allows nanodiamonds\nto be tagged and distinguished in an MRI based on their spin-orientation alone,\nand could permit the action of specific bio-functionalized complexes to be\ndirectly compared and imaged.\n", "title": "Phase-Encoded Hyperpolarized Nanodiamond for Magnetic Resonance Imaging" }
null
null
null
null
true
null
16081
null
Default
null
null
null
{ "abstract": " Nyquist ghost artifacts in EPI images originate from a phase mismatch\nbetween the even and odd echoes. However, conventional correction methods using\nreference scans often produce erroneous results especially in high-field MRI\ndue to the non-linear and time-varying local magnetic field changes. Recently,\nit was shown that the problem of ghost correction can be transformed into a\nk-space data interpolation problem that can be solved using the annihilating\nfilter-based low-rank Hankel structured matrix completion approach (ALOHA).\nAnother recent discovery has shown that the deep convolutional neural network\nis closely related to the data-driven Hankel matrix decomposition. By\nsynergistically combining these findings, here we propose a k-space deep\nlearning approach that immediately corrects the k-space phase mismatch without\na reference scan. Reconstruction results using 7T in vivo data showed that the\nproposed reference-free k-space deep learning approach for EPI ghost correction\nsignificantly improves the image quality compared to the existing methods, and\nthe computing time is several orders of magnitude faster.\n", "title": "k-Space Deep Learning for Reference-free EPI Ghost Correction" }
null
null
null
null
true
null
16082
null
Default
null
null
null
{ "abstract": " In the classical binary search in a path the aim is to detect an unknown\ntarget by asking as few queries as possible, where each query reveals the\ndirection to the target. This binary search algorithm has been recently\nextended by [Emamjomeh-Zadeh et al., STOC, 2016] to the problem of detecting a\ntarget in an arbitrary graph. Similarly to the classical case in the path, the\nalgorithm of Emamjomeh-Zadeh et al. maintains a candidates' set for the target,\nwhile each query asks an appropriately chosen vertex-- the \"median\"--which\nminimises a potential $\\Phi$ among the vertices of the candidates' set. In this\npaper we address three open questions posed by Emamjomeh-Zadeh et al., namely\n(a) detecting a target when the query response is a direction to an\napproximately shortest path to the target, (b) detecting a target when querying\na vertex that is an approximate median of the current candidates' set (instead\nof an exact one), and (c) detecting multiple targets, for which to the best of\nour knowledge no progress has been made so far. We resolve questions (a) and\n(b) by providing appropriate upper and lower bounds, as well as a new potential\n$\\Gamma$ that guarantees efficient target detection even by querying an\napproximate median each time. With respect to (c), we initiate a systematic\nstudy for detecting two targets in graphs and we identify sufficient conditions\non the queries that allow for strong (linear) lower bounds and strong\n(polylogarithmic) upper bounds for the number of queries. All of our positive\nresults can be derived using our new potential $\\Gamma$ that allows querying\napproximate medians.\n", "title": "Binary Search in Graphs Revisited" }
null
null
null
null
true
null
16083
null
Default
null
null
null
{ "abstract": " The ADAM optimizer is exceedingly popular in the deep learning community.\nOften it works very well, sometimes it doesn't. Why? We interpret ADAM as a\ncombination of two aspects: for each weight, the update direction is determined\nby the sign of stochastic gradients, whereas the update magnitude is determined\nby an estimate of their relative variance. We disentangle these two aspects and\nanalyze them in isolation, gaining insight into the mechanisms underlying ADAM.\nThis analysis also extends recent results on adverse effects of ADAM on\ngeneralization, isolating the sign aspect as the problematic one. Transferring\nthe variance adaptation to SGD gives rise to a novel method, completing the\npractitioner's toolbox for problems where ADAM fails.\n", "title": "Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients" }
null
null
null
null
true
null
16084
null
Default
null
null
null
{ "abstract": " Wholesale electricity markets are increasingly integrated via high voltage\ninterconnectors, and inter-regional trade in electricity is growing. To model\nthis, we consider a spatial equilibrium model of price formation, where\nconstraints on inter-regional flows result in three distinct equilibria in\nprices. We use this to motivate an econometric model for the distribution of\nobserved electricity spot prices that captures many of their unique empirical\ncharacteristics. The econometric model features supply and inter-regional trade\ncost functions, which are estimated using Bayesian monotonic regression\nsmoothing methodology. A copula multivariate time series model is employed to\ncapture additional dependence -- both cross-sectional and serial -- in regional\nprices. The marginal distributions are nonparametric, with means given by the\nregression means. The model has the advantage of preserving the heavy\nright-hand tail in the predictive densities of price. We fit the model to\nhalf-hourly spot price data in the five interconnected regions of the\nAustralian national electricity market. The fitted model is then used to\nmeasure how both supply and price shocks in one region are transmitted to the\ndistribution of prices in all regions in subsequent periods. Finally, to\nvalidate our econometric model, we show that prices forecast using the proposed\nmodel compare favorably with those from some benchmark alternatives.\n", "title": "Econometric Modeling of Regional Electricity Spot Prices in the Australian Market" }
null
null
null
null
true
null
16085
null
Default
null
null
null
{ "abstract": " It has been recently demonstrated that textured closed surfaces which are\nmade out of perfect electric conductors (PECs) can mimic highly localized\nsurface plasmons (LSPs). Here, we propose an effective medium which can\naccurately model LSP resonances in a two-dimensional periodically decorated PEC\ncylinder. The accuracy of previous models is limited to structures with a\nhigh number of deep-subwavelength grooves. However, we show that our model\ncan successfully predict the ultra-sharp LSP resonances which exist in\nstructures with a relatively lower number of grooves. Such resonances are not\ncorrectly predicted by previous models, which instead give spurious resonances.\nThe success of the proposed model is due to the incorporation of an\neffective surface conductivity which is created at the interface of the\ncylinder and the homogeneous medium surrounding the structure. This surface\nconductivity models the effect of higher diffracted orders which are excited in\nthe periodic structure. The validity of the proposed model is verified by\nfull-wave simulations.\n", "title": "Accurate Effective Medium Theory for the Analysis of Spoof Localized Surface Plasmons in Textured Metallic Cylinders" }
null
null
null
null
true
null
16086
null
Default
null
null
null
{ "abstract": " Research on automated image enhancement has gained momentum in recent years,\npartially due to the need for easy-to-use tools for enhancing pictures captured\nby ubiquitous cameras on mobile devices. Many of the existing leading methods\nemploy machine-learning-based techniques, by which some enhancement parameters\nfor a given image are found by relating the image to the training images with\nknown enhancement parameters. While knowing the structure of the parameter\nspace can facilitate search for the optimal solution, none of the existing\nmethods has explicitly modeled and learned that structure. This paper presents\nan end-to-end, novel joint regression and ranking approach to model the\ninteraction between desired enhancement parameters and images to be processed,\nemploying a Gaussian process (GP). GP allows searching for ideal parameters\nusing only the image features. The model naturally leads to a ranking technique\nfor comparing images in the induced feature space. Comparative evaluation using\nthe ground-truth based on the MIT-Adobe FiveK dataset plus subjective tests on\nan additional data-set were used to demonstrate the effectiveness of the\nproposed approach.\n", "title": "Joint Regression and Ranking for Image Enhancement" }
null
null
null
null
true
null
16087
null
Default
null
null
null
{ "abstract": " The radially outward flow of fluid through a porous medium occurs in many\npractical problems, from transport across vascular walls to the pressurisation\nof boreholes in the subsurface. When the driving pressure is non-negligible\nrelative to the stiffness of the solid structure, the poromechanical coupling\nbetween the fluid and the solid can control both the steady-state and the\ntransient mechanics of the system. Very large pressures or very soft materials\nlead to large deformations of the solid skeleton, which introduce kinematic and\nconstitutive nonlinearity that can have a nontrivial impact on these mechanics.\nHere, we study the transient response of a poroelastic cylinder to sudden fluid\ninjection. We consider the impacts of kinematic and constitutive nonlinearity,\nboth separately and in combination, and we highlight the central role of\ndriving method in the evolution of the response. We show that the various\nfacets of nonlinearity may either accelerate or decelerate the transient\nresponse relative to linear poroelasticity, depending on the boundary\nconditions and the initial geometry, and that an imposed fluid pressure leads\nto a much faster response than an imposed fluid flux.\n", "title": "From arteries to boreholes: Transient response of a poroelastic cylinder to fluid injection" }
null
null
[ "Physics" ]
null
true
null
16088
null
Validated
null
null
null
{ "abstract": " Even though transitivity is a central structural feature of social networks,\nits influence on epidemic spread on coevolving networks has remained relatively\nunexplored. Here we introduce and study an adaptive SIS epidemic model wherein\nthe infection and the network coevolve, with a non-trivial probability of closing\ntriangles during edge rewiring, leading to substantial reinforcement of network\ntransitivity. This new model provides a unique opportunity to study the role of\ntransitivity in altering the SIS dynamics on a coevolving network. Using\nnumerical simulations and Approximate Master Equations (AME), we identify and\nexamine a rich set of dynamical features in the new model. In many cases, the\nAME including transitivity reinforcement provide accurate predictions of\nstationary-state disease prevalence and network degree distributions.\nFurthermore, for some parameter settings, the AME accurately trace the temporal\nevolution of the system. We show that higher transitivity reinforcement in the\nmodel leads to lower levels of infective individuals in the population when\nclosing a triangle is the only rewiring mechanism. These methods and results\nmay be useful in developing ideas and modeling strategies for controlling SIS\ntype epidemics.\n", "title": "Social Clustering in Epidemic Spread on Coevolving Networks" }
null
null
null
null
true
null
16089
null
Default
null
null
null
{ "abstract": " The Hohenberg-Kohn theorem plays a fundamental role in density functional\ntheory, which has become a basic tool for the study of electronic structure of\nmatter. In this article, we study the Hohenberg-Kohn theorem for a class of\nexternal potentials based on a unique continuation principle.\n", "title": "A Mathematical Aspect of Hohenberg-Kohn Theorem" }
null
null
null
null
true
null
16090
null
Default
null
null
null
{ "abstract": " Digital sculpting is a popular means to create 3D models but remains a\nchallenging task for many users. This can be alleviated by recent advances in\ndata-driven and procedural modeling, albeit bounded by the underlying data and\nprocedures. We propose a 3D sculpting system that assists users in freely\ncreating models without predefined scope. With a brushing interface similar to\ncommon sculpting tools, our system silently records and analyzes users'\nworkflows, and predicts what they might or should do in the future to reduce\ninput labor or enhance output quality. Users can accept, ignore, or modify the\nsuggestions and thus maintain full control and individual style. They can also\nexplicitly select and clone past workflows over output model regions. Our key\nidea is to consider how a model is authored via dynamic workflows in addition\nto what shape it takes in static geometry, for more accurate analysis of user\nintentions and more general synthesis of shape structures. The workflows\ncontain potential repetitions for analysis and synthesis, including user inputs\n(e.g. pen strokes on a pressure sensing tablet), model outputs (e.g. extrusions\non an object surface), and camera viewpoints. We evaluate our method via user\nfeedback and authored models.\n", "title": "Autocomplete 3D Sculpting" }
null
null
[ "Computer Science" ]
null
true
null
16091
null
Validated
null
null
null
{ "abstract": " Since a tweet is limited to 140 characters, it is ambiguous and difficult for\ntraditional Natural Language Processing (NLP) tools to analyse. This research\npresents KeyXtract, which enhances the machine learning based Stanford CoreNLP\nPart-of-Speech (POS) tagger with the Twitter model to extract essential\nkeywords from a tweet. The system was developed using rule-based parsers and\ntwo corpora. The data for the research was obtained from a Twitter profile of a\ntelecommunication company. The system development consisted of two stages. At\nthe initial stage, a domain specific corpus was compiled after analysing the\ntweets. The POS tagger extracted the Noun Phrases and Verb Phrases while the\nparsers removed noise and extracted any other keywords missed by the POS\ntagger. The system was evaluated using the Turing Test. After it was tested and\ncompared against Stanford CoreNLP, the second stage of the system was developed\naddressing the shortcomings of the first stage. It was enhanced using Named\nEntity Recognition and Lemmatization. The second stage was also tested using\nthe Turing test and its pass rate increased from 50.00% to 83.33%. The\nperformance of the final system output was measured using the F1 score.\nStanford CoreNLP with the Twitter model had an average F1 of 0.69 while the\nimproved system had an F1 of 0.77. The accuracy of the system could be improved\nby using a complete domain specific corpus. Since the system used linguistic\nfeatures of a sentence, it could be applied to other NLP tools.\n", "title": "KeyXtract Twitter Model - An Essential Keywords Extraction Model for Twitter Designed using NLP Tools" }
null
null
[ "Computer Science" ]
null
true
null
16092
null
Validated
null
null
null
{ "abstract": " In this paper, we consider precoder designs for multiuser\nmultiple-input-single-output (MISO) broadcasting channels. Instead of using a\ntraditional linear zero-forcing (ZF) precoder, we propose a generalized ZF\n(GZF) precoder in conjunction with successive dirty-paper coding (DPC) for\ndata-transmissions, namely, the GZF-DP precoder, where the suffix \\lq{}DP\\rq{}\nstands for \\lq{}dirty-paper\\rq{}. The GZF-DP precoder is designed to generate a\nband-shaped and lower-triangular effective channel $\\vec{F}$ such that only the\nentries along the main diagonal and the $\\nu$ first lower-diagonals can take\nnon-zero values. Utilizing the successive DPC, the known non-causal inter-user\ninterferences from the other (up to) $\\nu$ users are canceled through\nsuccessive encoding. We analyze optimal GZF-DP precoder designs both for\nsum-rate and minimum user-rate maximizations. Utilizing Lagrange multipliers,\nthe optimal precoders for both cases are solved in closed-forms in relation to\noptimal power allocations. For the sum-rate maximization, the optimal power\nallocation can be found through water-filling, but with modified water-levels\ndepending on the parameter $\\nu$. While for the minimum user-rate maximization\nthat measures the quality of the service (QoS), the optimal power allocation is\ndirectly solved in closed-form which also depends on $\\nu$. Moreover, we\npropose two low-complexity user-ordering algorithms for the GZF-DP precoder\ndesigns for both maximizations, respectively. We show through numerical results\nthat, the proposed GZF-DP precoder with a small $\\nu$ ($\\leq\\!3$) renders\nsignificant rate increments compared to the previous precoder designs such as\nthe linear ZF and user-grouping based DPC (UG-DP) precoders.\n", "title": "A Generalized Zero-Forcing Precoder with Successive Dirty-Paper Coding in MISO Broadcast Channels" }
null
null
null
null
true
null
16093
null
Default
null
null
null
{ "abstract": " Sampling random graphs is essential in many applications, and often\nalgorithms use Markov chain Monte Carlo methods to sample uniformly from the\nspace of graphs. However, there is often a need to sample graphs with some\nproperty, which standard approaches cannot do, or can do only inefficiently. In\nthis paper, we are interested in sampling graphs from a\nconditional ensemble of the underlying graph model. We present an algorithm to\ngenerate samples from an ensemble of connected random graphs using a\nMetropolis-Hastings framework. The algorithm extends to a general framework for\nsampling from a known distribution of graphs, conditioned on a desired\nproperty. We demonstrate the method to generate connected spatially embedded\nrandom graphs, specifically the well known Waxman network, and illustrate the\nconvergence and practicalities of the algorithm.\n", "title": "Generating Connected Random Graphs" }
null
null
null
null
true
null
16094
null
Default
null
null
null
{ "abstract": " Forecasting fault failure is a fundamental but elusive goal in earthquake\nscience. Here we show that by listening to the acoustic signal emitted by a\nlaboratory fault, machine learning can predict the time remaining before it\nfails with great accuracy. These predictions are based solely on the\ninstantaneous physical characteristics of the acoustical signal, and do not\nmake use of its history. Surprisingly, machine learning identifies a signal\nemitted from the fault zone previously thought to be low-amplitude noise that\nenables failure forecasting throughout the laboratory quake cycle. We\nhypothesize that applying this approach to continuous seismic data may lead to\nsignificant advances in identifying currently unknown signals, in providing new\ninsights into fault physics, and in placing bounds on fault failure times.\n", "title": "Machine Learning Predicts Laboratory Earthquakes" }
null
null
null
null
true
null
16095
null
Default
null
null
null
{ "abstract": " A new upper limit on the mixing parameter for hidden photons with a mass from 5\neV to 10 keV has been obtained from the results of measurements during 78\ndays in two configurations R1 and R2 of a multicathode counter. For the region of\nmaximal sensitivity from 10 eV to 30 eV the upper limit obtained is less\nthan 4 x 10^-11. The measurements have been performed at three temperatures:\n26C, 31C and 36C. A positive effect for the spontaneous emission of single\nelectrons has been obtained at the level of more than 7{\\sigma}. A falling\ntendency of the temperature dependence of the spontaneous emission rate indicates\nthat the effect of thermal emission from a copper cathode can be neglected.\n", "title": "New results of the search for hidden photons by means of a multicathode counter" }
null
null
null
null
true
null
16096
null
Default
null
null
null
{ "abstract": " The three gap theorem, also known as the Steinhaus conjecture or three\ndistance theorem, states that the gaps in the fractional parts of\n$\\alpha,2\\alpha,\\ldots, N\\alpha$ take at most three distinct values. Motivated\nby a question of Erdős, Geelen and Simpson, we explore a higher-dimensional\nvariant, which asks for the number of gaps between the fractional parts of a\nlinear form. Using the ergodic properties of the diagonal action on the space\nof lattices, we prove that for almost all parameter values the number of\ndistinct gaps in the higher dimensional problem is unbounded. Our results in\nparticular improve earlier work by Boshernitzan, Dyson and Bleher et al. We\nfurthermore discuss a close link with the Littlewood conjecture in\nmultiplicative Diophantine approximation. Finally, we also demonstrate how our\nmethods can be adapted to obtain similar results for gaps between return times\nof translations to shrinking regions on higher dimensional tori.\n", "title": "Higher dimensional Steinhaus and Slater problems via homogeneous dynamics" }
null
null
null
null
true
null
16097
null
Default
null
null
null
{ "abstract": " 21st century astrophysicists are confronted with the herculean task of\ndistilling the maximum scientific return from extremely expensive and complex\nspace- or ground-based instrumental projects. This paper concentrates on the\nmining of the time series catalog produced by the European Space Agency Gaia\nmission, launched in December 2013. We tackle in particular the problem of\ninferring the true distribution of the variability properties of Cepheid stars\nin the Milky Way satellite galaxy known as the Large Magellanic Cloud (LMC).\nClassical Cepheid stars are the first step in the so-called distance ladder: a\nseries of techniques to measure cosmological distances and decipher the\nstructure and evolution of our Universe. In this work we attempt to unbias the\ncatalog by modelling the aliasing phenomenon that distorts the true\ndistribution of periods. We have represented the problem by a 2-level\ngenerative Bayesian graphical model and used a Markov chain Monte Carlo (MCMC)\nalgorithm for inference (classification and regression). Our results with\nsynthetic data show that the system successfully removes systematic biases and\nis able to infer the true hyperparameters of the frequency and magnitude\ndistributions.\n", "title": "Bayesian Unbiasing of the Gaia Space Mission Time Series Database" }
null
null
null
null
true
null
16098
null
Default
null
null
null
{ "abstract": " We give an elementary proof for the fact that an irreducible hyperbolic\npolynomial has only one pair of hyperbolicity cones.\n", "title": "On the connectivity of the hyperbolicity region of irreducible polynomials" }
null
null
null
null
true
null
16099
null
Default
null
null
null
{ "abstract": " Predicting unobserved entries of a partially observed matrix has found wide\napplicability in several areas, such as recommender systems, computational\nbiology, and computer vision. Many scalable methods with rigorous theoretical\nguarantees have been developed for algorithms where the matrix is factored into\nlow-rank components, and embeddings are learned for the row and column\nentities. While there has been recent research on incorporating explicit side\ninformation in the low-rank matrix factorization setting, often implicit\ninformation can be gleaned from the data, via higher-order interactions among\nentities. Such implicit information is especially useful in cases where the\ndata is very sparse, as is often the case in real-world datasets. In this\npaper, we design a method to learn embeddings in the context of recommendation\nsystems, using the observation that higher powers of a graph transition\nprobability matrix encode the probability that a random walker will hit that\nnode in a given number of steps. We develop a coordinate descent algorithm to\nsolve the resulting optimization, that makes explicit computation of the higher\norder powers of the matrix redundant, preserving sparsity and making\ncomputations efficient. Experiments on several datasets show that our method,\nthat can use higher order information, outperforms methods that only use\nexplicitly available side information, those that use only second-order\nimplicit information and in some cases, methods based on deep neural networks\nas well.\n", "title": "Matrix Completion via Factorizing Polynomials" }
null
null
null
null
true
null
16100
null
Default
null
null