Columns (name: type summary):
text: null
inputs: dict
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 class)
explanation: null
id: string (lengths 1 to 5)
metadata: null
status: string (2 classes: Default, Validated)
event_timestamp: null
metrics: null
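The thirteen columns above describe one classification record per row: an inputs dict holding a paper's abstract and title, an optional annotation list of subject labels, a short string id, and a status that is either Default (unlabeled) or Validated (labeled). The Python sketch below is only an illustration of that record layout, not tooling shipped with the dataset; the Record dataclass, the helper variable names, and the two example rows are hypothetical re-typings of rows 6001 and 6002 from the listing that follows (abstracts omitted for brevity).

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Record:
    """One row of the dump; field names mirror the column list above."""
    inputs: dict                           # {"abstract": "...", "title": "..."}
    id: str                                # stored as a short string, e.g. "6001"
    multi_label: bool = True               # constant across the rows shown here
    annotation: Optional[list] = None      # e.g. ["Computer Science"]; None when unlabeled
    status: str = "Default"                # "Default" or "Validated"
    # Columns that are null in every row of this slice:
    text: Optional[str] = None
    prediction: Optional[list] = None
    prediction_agent: Optional[str] = None
    annotation_agent: Optional[str] = None
    explanation: Optional[str] = None
    metadata: Optional[dict] = None
    event_timestamp: Optional[str] = None
    metrics: Optional[dict] = None


# Rows 6001 and 6002 re-typed from the listing below (abstracts omitted).
records = [
    Record(inputs={"title": "Tidal disruptions by rotating black holes: "
                            "relativistic hydrodynamics with Newtonian codes"},
           id="6001"),
    Record(inputs={"title": "A promise checked is a promise kept: Inspection Testing"},
           id="6002", annotation=["Computer Science"], status="Validated"),
]

# Typical use of such a dump: keep only human-validated rows as labeled data.
labeled = [r for r in records if r.status == "Validated" and r.annotation]
for r in labeled:
    print(r.id, r.annotation, r.inputs["title"])
```

Filtering on status == "Validated" mirrors how the listing itself separates annotated rows (e.g. 6002, 6006, 6015) from the unlabeled Default rows.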
Records:

text: null
inputs:
{ "abstract": " We propose an approximate approach for studying the relativistic regime of\nstellar tidal disruptions by rotating massive black holes. It combines an exact\nrelativistic description of the hydrodynamical evolution of a test fluid in a\nfixed curved spacetime with a Newtonian treatment of the fluid's self-gravity.\nExplicit expressions for the equations of motion are derived for Kerr spacetime\nusing two different coordinate systems. We implement the new methodology within\nan existing Newtonian Smoothed Particle Hydrodynamics code and show that\nincluding the additional physics involves very little extra computational cost.\nWe carefully explore the validity of the novel approach by first testing its\nability to recover geodesic motion, and then by comparing the outcome of tidal\ndisruption simulations against previous relativistic studies. We further\ncompare simulations in Boyer--Lindquist and Kerr--Schild coordinates and\nconclude that our approach allows accurate simulation even of tidal disruption\nevents where the star penetrates deeply inside the tidal radius of a rotating\nblack hole. Finally, we use the new method to study the effect of the black\nhole spin on the morphology and fallback rate of the debris streams resulting\nfrom tidal disruptions, finding that while the spin has little effect on the\nfallback rate, it does imprint heavily on the stream morphology, and can even\nbe a determining factor in the survival or disruption of the star itself. Our\nmethodology is discussed in detail as a reference for future astrophysical\napplications.\n", "title": "Tidal disruptions by rotating black holes: relativistic hydrodynamics with Newtonian codes" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6001
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Occasionally, developers need to ensure that the compiler treats their code\nin a specific way that is only visible by inspecting intermediate or final\ncompilation artifacts. This is particularly common with carefully crafted\ncompositional libraries, where certain usage patterns are expected to trigger\nan intricate sequence of compiler optimizations -- stream fusion is a\nwell-known example.\nThe developer of such a library has to manually inspect build artifacts and\ncheck for the expected properties. Because this is too tedious to do often, it\nwill likely go unnoticed if the property is broken by a change to the library\ncode, its dependencies or the compiler. The lack of automation has led to\nreleased versions of such libraries breaking their documented promises.\nThis indicates that there is an unrecognized need for a new testing paradigm,\ninspection testing, where the programmer declaratively describes non-functional\nproperties of an compilation artifact and the compiler checks these properties.\nWe define inspection testing abstractly, implement it in the context of Haskell\nand show that it increases the quality of such libraries.\n", "title": "A promise checked is a promise kept: Inspection Testing" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science" ]
annotation_agent: null
multi_label: true
explanation: null
id: 6002
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Classification of sequence data is the topic of interest for dynamic Bayesian\nmodels and Recurrent Neural Networks (RNNs). While the former can explicitly\nmodel the temporal dependencies between class variables, the latter have a\ncapability of learning representations. Several attempts have been made to\nimprove performance by combining these two approaches or increasing the\nprocessing capability of the hidden units in RNNs. This often results in\ncomplex models with a large number of learning parameters. In this paper, a\ncompact model is proposed which offers both representation learning and\ntemporal inference of class variables by rolling Restricted Boltzmann Machines\n(RBMs) and class variables over time. We address the key issue of\nintractability in this variant of RBMs by optimising a conditional\ndistribution, instead of a joint distribution. Experiments reported in the\npaper on melody modelling and optical character recognition show that the\nproposed model can outperform the state-of-the-art. Also, the experimental\nresults on optical character recognition, part-of-speech tagging and text\nchunking demonstrate that our model is comparable to recurrent neural networks\nwith complex memory gates while requiring far fewer parameters.\n", "title": "Linear-Time Sequence Classification using Restricted Boltzmann Machines" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6003
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In this paper, we introduce a notion of a central $U(1)$-extension of a\ndouble Lie groupoid and show that it defines a cocycle in the certain triple\ncomplex.\n", "title": "A central $U(1)$-extension of a double Lie groupoid" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6004
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The aim of this paper is to establish some metrical coincidence and common\nfixed point theorems with an arbitrary relation under an implicit contractive\ncondition which is general enough to cover a multitude of well known\ncontraction conditions in one go besides yielding several new ones. We also\nprovide an example to demonstrate the generality of our results over several\nwell known corresponding results of the existing literature. Finally, we\nutilize our results to prove an existence theorem for ensuring the solution of\nan integral equation.\n", "title": "Common fixed point theorems under an implicit contractive condition on metric spaces endowed with an arbitrary binary relation and an application" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6005
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The nonnegative inverse eigenvalue problem (NIEP) asks which lists of $n$\ncomplex numbers (counting multiplicity) occur as the eigenvalues of some\n$n$-by-$n$ entry-wise nonnegative matrix. The NIEP has a long history and is a\nknown hard (perhaps the hardest in matrix analysis?) and sought after problem.\nThus, there are many subproblems and relevant results in a variety of\ndirections. We survey most work on the problem and its several variants, with\nan emphasis on recent results, and include 130 references. The survey is\ndivided into: a) the single eigenvalue problems; b) necessary conditions; c)\nlow dimensional results; d) sufficient conditions; e) appending 0's to achieve\nrealizability; f) the graph NIEP's; g) Perron similarities; and h) the\nrelevance of Jordan structure.\n", "title": "The NIEP" }
prediction: null
prediction_agent: null
annotation: [ "Mathematics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 6006
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " This paper replicates, extends, and refutes conclusions made in a study\npublished in PLoS ONE (\"Even Good Bots Fight\"), which claimed to identify\nsubstantial levels of conflict between automated software agents (or bots) in\nWikipedia using purely quantitative methods. By applying an integrative\nmixed-methods approach drawing on trace ethnography, we place these alleged\ncases of bot-bot conflict into context and arrive at a better understanding of\nthese interactions. We found that overwhelmingly, the interactions previously\ncharacterized as problematic instances of conflict are typically better\ncharacterized as routine, productive, even collaborative work. These results\nchallenge past work and show the importance of qualitative/quantitative\ncollaboration. In our paper, we present quantitative metrics and qualitative\nheuristics for operationalizing bot-bot conflict. We give thick descriptions of\nkinds of events that present as bot-bot reverts, helping distinguish conflict\nfrom non-conflict. We computationally classify these kinds of events through\npatterns in edit summaries. By interpreting found/trace data in the\nsocio-technical contexts in which people give that data meaning, we gain more\nfrom quantitative measurements, drawing deeper understandings about the\ngovernance of algorithmic systems in Wikipedia. We have also released our data\ncollection, processing, and analysis pipeline, to facilitate computational\nreproducibility of our findings and to help other researchers interested in\nconducting similar mixed-method scholarship in other platforms and contexts.\n", "title": "Operationalizing Conflict and Cooperation between Automated Software Agents in Wikipedia: A Replication and Expansion of 'Even Good Bots Fight'" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6007
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Bidirectional transformations between different data representations occur\nfrequently in modern software systems. They appear as serializers and\ndeserializers, as database views and view updaters, and more. Manually building\nbidirectional transformations---by writing two separate functions that are\nintended to be inverses---is tedious and error prone. A better approach is to\nuse a domain-specific language in which both directions can be written as a\nsingle expression. However, these domain-specific languages can be difficult to\nprogram in, requiring programmers to manage fiddly details while working in a\ncomplex type system.\nTo solve this, we present Optician, a tool for type-directed synthesis of\nbijective string transformers. The inputs to Optician are two ordinary regular\nexpressions representing two data formats and a few concrete examples for\ndisambiguation. The output is a well-typed program in Boomerang (a\nbidirectional language based on the theory of lenses). The main technical\nchallenge involves navigating the vast program search space efficiently enough.\nUnlike most prior work on type-directed synthesis, our system operates in the\ncontext of a language with a rich equivalence relation on types (the theory of\nregular expressions). We synthesize terms of a equivalent language and convert\nthose generated terms into our lens language. We prove the correctness of our\nsynthesis algorithm. We also demonstrate empirically that our new language\nchanges the synthesis problem from one that admits intractable solutions to one\nthat admits highly efficient solutions. We evaluate Optician on a benchmark\nsuite of 39 examples including both microbenchmarks and realistic examples\nderived from other data management systems including Flash Fill, a tool for\nsynthesizing string transformations in spreadsheets, and Augeas, a tool for\nbidirectional processing of Linux system configuration files.\n", "title": "Synthesizing Bijective Lenses" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6008
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Recent advances in microelectromechanical systems often require\nmultifunctional materials, which are designed so as to optimize more than one\nproperty. Using density functional theory calculations for alloyed nitride\nsystems, we illustrate how co-alloying a piezoelectric material (AlN) with\ndifferent nitrides helps tune both its piezoelectric and mechanical properties\nsimultaneously. Wurtzite AlN-YN alloys display increased piezoelectric response\nwith YN concentration, accompanied by mechanical softening along the\ncrystallographic c direction. Both effects increase the electromechanical\ncoupling coefficients relevant for transducers and actuators. Resonator\napplications, however, require superior stiffness, thus leading to the need to\ndecouple the increased piezoelectric response from a softened lattice. We show\nthat co-alloying of AlN with YN and BN results in improved elastic properties\nwhile retaining most of the piezoelectric enhancements from YN alloying. This\nfinding may lead to new avenues for tuning the design properties of\npiezoelectrics through composition-property maps.\nKeywords: piezoelectricity, electromechanical coupling, density functional\ntheory, co-alloying\n", "title": "Tuning the piezoelectric and mechanical properties of the AlN system via alloying with YN and BN" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6009
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We present an embedding approach for semiconductors and insulators based on\nor- bital rotations in the space of occupied Kohn-Sham orbitals. We have\nimplemented our approach in the popular VASP software package. We demonstrate\nits power for defect structures in silicon and polaron formation in titania,\ntwo challenging cases for conventional Kohn-Sham density functional theory.\n", "title": "Embedding for bulk systems using localized atomic orbitals" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6010
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In the last decade, the use of simple rating and comparison surveys has\nproliferated on social and digital media platforms to fuel recommendations.\nThese simple surveys and their extrapolation with machine learning algorithms\nshed light on user preferences over large and growing pools of items, such as\nmovies, songs and ads. Social scientists have a long history of measuring\nperceptions, preferences and opinions, often over smaller, discrete item sets\nwith exhaustive rating or ranking surveys. This paper introduces simple surveys\nfor social science application. We ran experiments to compare the predictive\naccuracy of both individual and aggregate comparative assessments using four\ntypes of simple surveys: pairwise comparisons and ratings on 2, 5 and\ncontinuous point scales in three distinct contexts: perceived Safety of Google\nStreetview Images, Likeability of Artwork, and Hilarity of Animal GIFs. Across\ncontexts, we find that continuous scale ratings best predict individual\nassessments but consume the most time and cognitive effort. Binary choice\nsurveys are quick and perform best to predict aggregate assessments, useful for\ncollective decision tasks, but poorly predict personalized preferences, for\nwhich they are currently used by Netflix to recommend movies. Pairwise\ncomparisons, by contrast, perform well to predict personal assessments, but\npoorly predict aggregate assessments despite being widely used to crowdsource\nideas and collective preferences. We demonstrate how findings from these\nsurveys can be visualized in a low-dimensional space that reveals distinct\nrespondent interpretations of questions asked in each context. We conclude by\nreflecting on differences between sparse, incomplete simple surveys and their\ntraditional survey counterparts in terms of efficiency, information elicited\nand settings in which knowing less about more may be critical for social\nscience.\n", "title": "Simple Surveys: Response Retrieval Inspired by Recommendation Systems" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6011
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We introduce a reduction from the distinct distances problem in ${\\mathbb\nR}^d$ to an incidence problem with $(d-1)$-flats in ${\\mathbb R}^{2d-1}$.\nDeriving the conjectured bound for this incidence problem (the bound predicted\nby the polynomial partitioning technique) would lead to a tight bound for the\ndistinct distances problem in ${\\mathbb R}^d$. The reduction provides a large\namount of information about the $(d-1)$-flats, and a framework for deriving\nmore restrictions that these satisfy. Our reduction is based on introducing a\nLie group that is a double cover of the special Euclidean group. This group can\nbe seen as a variant of the Spin group, and a large part of our analysis\ninvolves studying its properties.\n", "title": "A Reduction for the Distinct Distances Problem in ${\\mathbb R}^d$" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6012
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We report the preparation of the interface between graphene and the strong\nRashba-split BiAg$_2$ surface alloy and investigatigation of its structure as\nwell as the electronic properties by means of scanning tunneling\nmicroscopy/spectroscopy and density functional theory calculations. Upon\nevaluation of the quasiparticle interference patterns the unpertrubated linear\ndispersion for the $\\pi$ band of $n$-doped graphene is observed. Our results\nalso reveal the intact nature of the giant Rashba-split surface states of the\nBiAg$_2$ alloy, which demonstrate only a moderate downward energy shift upon\nthe presence of graphene. This effect is explained in the framework of density\nfunctional theory by an inward relaxation of the Bi atoms at the interface and\nsubsequent delocalisation of the wave function of the surface states. Our\nfindings demonstrate a realistic pathway to prepare a graphene protected giant\nRashba-split BiAg$_2$ for possible spintronic applications.\n", "title": "Local electronic properties of the graphene-protected giant Rashba-split BiAg$_2$ surface" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6013
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Trapping and manipulation of particles using laser beams has become an\nimportant tool in diverse fields of research. In recent years, particular\ninterest is given to the problem of conveying optically trapped particles over\nextended distances either down or upstream the direction of the photons\nmomentum flow. Here, we propose and demonstrate experimentally an optical\nanalogue of the famous Archimedes' screw where the rotation of a\nhelical-intensity beam is transferred to the axial motion of optically-trapped\nmicro-meter scale airborne carbon based particles. With this optical screw,\nparticles were easily conveyed with controlled velocity and direction, upstream\nor downstream the optical flow, over a distance of half a centimeter. Our\nresults offer a very simple optical conveyor that could be adapted to a wide\nrange of optical trapping scenarios.\n", "title": "Particle trapping and conveying using an optical Archimedes' screw" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6014
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Nested Chinese Restaurant Process (nCRP) topic models are powerful\nnonparametric Bayesian methods to extract a topic hierarchy from a given text\ncorpus, where the hierarchical structure is automatically determined by the\ndata. Hierarchical Latent Dirichlet Allocation (hLDA) is a popular instance of\nnCRP topic models. However, hLDA has only been evaluated at small scale,\nbecause the existing collapsed Gibbs sampling and instantiated weight\nvariational inference algorithms either are not scalable or sacrifice inference\nquality with mean-field assumptions. Moreover, an efficient distributed\nimplementation of the data structures, such as dynamically growing count\nmatrices and trees, is challenging.\nIn this paper, we propose a novel partially collapsed Gibbs sampling (PCGS)\nalgorithm, which combines the advantages of collapsed and instantiated weight\nalgorithms to achieve good scalability as well as high model quality. An\ninitialization strategy is presented to further improve the model quality.\nFinally, we propose an efficient distributed implementation of PCGS through\nvectorization, pre-processing, and a careful design of the concurrent data\nstructures and communication strategy.\nEmpirical studies show that our algorithm is 111 times more efficient than\nthe previous open-source implementation for hLDA, with comparable or even\nbetter model quality. Our distributed implementation can extract 1,722 topics\nfrom a 131-million-document corpus with 28 billion tokens, which is 4-5 orders\nof magnitude larger than the previous largest corpus, with 50 machines in 7\nhours.\n", "title": "Scalable Inference for Nested Chinese Restaurant Process Topic Models" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science", "Statistics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 6015
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Following work of Keel and Tevelev, we give explicit polynomials in the Cox\nring of $\\mathbb{P}^1\\times\\cdots\\times\\mathbb{P}^{n-3}$ that, conjecturally,\ndetermine $\\overline{M}_{0,n}$ as a subscheme. Using Macaulay2, we prove that\nthese equations generate the ideal for $n=5, 6, 7, 8$. For $n \\leq 6$ we give a\ncohomological proof that these polynomials realize $\\overline{M}_{0,n}$ as a\nprojective variety, embedded in $\\mathbb{P}^{(n-2)!-1}$ by the complete log\ncanonical linear system.\n", "title": "Equations of $\\,\\overline{M}_{0,n}$" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6016
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " An improved wetting boundary implementation strategy is proposed based on\nlattice Boltzmann color-gradient model in this paper. In this strategy, an\nextra interface force condition is demonstrated based on the diffuse interface\nassumption and is employed in contact line region. It has been validated by\nthree benchmark problems: static droplet wetting on a flat surface and a curved\nsurface, and dynamic capillary filling. Good performances are shown in all\nthree cases. Relied on the strict validation to our scheme, the viscous\nfingering phenomenon of immiscible fluids displacement in a two-dimensional\nchannel has been restudied in this paper. High viscosity ratio, wide range\ncontact angle, accurate moving contact line and mutual independence between\nsurface tension and viscosity are the obvious advantages of our model. We find\nthe linear relationship between the contact angle and displacement velocity or\nvariation of finger length. When the viscosity ratio is smaller than 20, the\ndisplacement velocity is increasing with increasing viscosity ratio and\nreducing capillary number, and when the viscosity ratio is larger than 20, the\ndisplacement velocity tends to a specific constant. A similar conclusion is\nobtained on the variation of finger length.\n", "title": "Lattice Boltzmann simulation of viscous fingering of immiscible displacement in a channel using an improved wetting scheme" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6017
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In this paper, we propose a sex-structured entomological model that serves as\na basis for design of control strategies relying on releases of sterile male\nmosquitoes (Aedes spp) and aiming at elimination of the wild vector population\nin some target locality. We consider different types of releases (constant and\nperiodic impulsive), providing necessary conditions to reach elimination.\nHowever, the main part of the paper is focused on the study of the periodic\nimpulsive control in different situations. When the size of wild mosquito\npopulation cannot be assessed in real time, we propose the so-called open-loop\ncontrol strategy that relies on periodic impulsive releases of sterile males\nwith constant release size. Under this control mode, global convergence towards\nthe mosquito-free equilibrium is proved on the grounds of sufficient condition\nthat relates the size and frequency of releases. If periodic assessments\n(either synchronized with releases or more sparse) of the wild population size\nare available in real time, we propose the so-called closed-loop control\nstrategy, which is adjustable in accordance with reliable estimations of the\nwild population sizes. Under this control mode, global convergence to the\nmosquito-free equilibrium is proved on the grounds of another sufficient\ncondition that relates not only the size and frequency of periodic releases but\nalso the frequency of sparse measurements taken on wild populations. Finally,\nwe propose a mixed control strategy that combines open-loop and closed-loop\nstrategies. This control mode renders the best result, in terms of overall time\nneeded to reach elimination and the number of releases to be effectively\ncarried out during the whole release campaign, while requiring for a reasonable\namount of released sterile insects.\n", "title": "Implementation of Control Strategies for Sterile Insect Techniques" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6018
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " A Boolean algebra carries a strictly positive exhaustive submeasure if and\nonly if it has a sequential topology that is uniformly Frechet.\n", "title": "Convergence and submeasures in Boolean algebras" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6019
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " This work verifies the instrumental characteristics of the CCD detector which\nis part of the UNI astronomical observatory. We measured the linearity of the\nCCD detector of the SBIG STXL6303E camera, along with the associated gain and\nreadout noise. The linear response to the incident light of the detector is\nextremely linear (R2 =99.99%), its effective gain is 1.65 +/- 0.01 e-/ADU and\nits readout noise is 12.2 e-. These values are in agreement with the\nmanufacturer. We confirm that this detector is extremely precise to make\nmeasurements for astronomical purposes.\n", "title": "Characterizing a CCD detector for astronomical purposes: OAUNI Project" }
prediction: null
prediction_agent: null
annotation: [ "Physics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 6020
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Polarization-based filtering in fiber lasers is well-known to enable spectral\ntunability and a wide range of dynamical operating states. This effect is\nrarely exploited in practical systems, however, because optimization of cavity\nparameters is non-trivial and evolves due to environmental sensitivity. Here,\nwe report a genetic algorithm-based approach, utilizing electronic control of\nthe cavity transfer function, to autonomously achieve broad wavelength tuning\nand the generation of Q-switched pulses with variable repetition rate and\nduration. The practicalities and limitations of simultaneous spectral and\ntemporal self-tuning from a simple fiber laser are discussed, paving the way to\non-demand laser properties through algorithmic control and machine learning\nschemes.\n", "title": "Genetic algorithm-based control of birefringent filtering for self-tuning, self-pulsing fiber lasers" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6021
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The rising attention to the spreading of fake news and unsubstantiated rumors\non online social media and the pivotal role played by confirmation bias led\nresearchers to investigate different aspects of the phenomenon. Experimental\nevidence showed that confirmatory information gets accepted even if containing\ndeliberately false claims while dissenting information is mainly ignored or\nmight even increase group polarization. It seems reasonable that, to address\nmisinformation problem properly, we have to understand the main determinants\nbehind content consumption and the emergence of narratives on online social\nmedia. In this paper we address such a challenge by focusing on the discussion\naround the Italian Constitutional Referendum by conducting a quantitative,\ncross-platform analysis on both Facebook public pages and Twitter accounts. We\nobserve the spontaneous emergence of well-separated communities on both\nplatforms. Such a segregation is completely spontaneous, since no\ncategorization of contents was performed a priori. By exploring the dynamics\nbehind the discussion, we find that users tend to restrict their attention to a\nspecific set of Facebook pages/Twitter accounts. Finally, taking advantage of\nautomatic topic extraction and sentiment analysis techniques, we are able to\nidentify the most controversial topics inside and across both platforms. We\nmeasure the distance between how a certain topic is presented in the\nposts/tweets and the related emotional response of users. Our results provide\ninteresting insights for the understanding of the evolution of the core\nnarratives behind different echo chambers and for the early detection of\nmassive viral phenomena around false claims.\n", "title": "Public discourse and news consumption on online social media: A quantitative, cross-platform analysis of the Italian Referendum" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6022
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The recent discovery of gravitational waves by the LIGO-Virgo collaboration\ncreated renewed interest in the investigation of alternative gravitational\ndetector designs, such as small scale resonant detectors. In this article, it\nis shown how proposed small scale detectors can be tested by generating\ndynamical gravitational fields with appropriate distributions of moving masses.\nA series of interesting experiments will be possible with this setup. In\nparticular, small scale detectors can be tested very early in the development\nphase and tests can be used to progress quickly in their development. This\ncould contribute to the emerging field of gravitational wave astronomy.\n", "title": "Testing small scale gravitational wave detectors with dynamical mass distributions" }
prediction: null
prediction_agent: null
annotation: [ "Physics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 6023
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Approximate probabilistic inference algorithms are central to many fields.\nExamples include sequential Monte Carlo inference in robotics, variational\ninference in machine learning, and Markov chain Monte Carlo inference in\nstatistics. A key problem faced by practitioners is measuring the accuracy of\nan approximate inference algorithm on a specific data set. This paper\nintroduces the auxiliary inference divergence estimator (AIDE), an algorithm\nfor measuring the accuracy of approximate inference algorithms. AIDE is based\non the observation that inference algorithms can be treated as probabilistic\nmodels and the random variables used within the inference algorithm can be\nviewed as auxiliary variables. This view leads to a new estimator for the\nsymmetric KL divergence between the approximating distributions of two\ninference algorithms. The paper illustrates application of AIDE to algorithms\nfor inference in regression, hidden Markov, and Dirichlet process mixture\nmodels. The experiments show that AIDE captures the qualitative behavior of a\nbroad class of inference algorithms and can detect failure modes of inference\nalgorithms that are missed by standard heuristics.\n", "title": "AIDE: An algorithm for measuring the accuracy of probabilistic inference algorithms" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science", "Statistics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 6024
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Data-driven spatial filtering algorithms optimize scores such as the contrast\nbetween two conditions to extract oscillatory brain signal components. Most\nmachine learning approaches for filter estimation, however, disregard\nwithin-trial temporal dynamics and are extremely sensitive to changes in\ntraining data and involved hyperparameters. This leads to highly variable\nsolutions and impedes the selection of a suitable candidate for,\ne.g.,~neurotechnological applications. Fostering component introspection, we\npropose to embrace this variability by condensing the functional signatures of\na large set of oscillatory components into homogeneous clusters, each\nrepresenting specific within-trial envelope dynamics.\nThe proposed method is exemplified by and evaluated on a complex hand force\ntask with a rich within-trial structure. Based on electroencephalography data\nof 18 healthy subjects, we found that the components' distinct temporal\nenvelope dynamics are highly subject-specific. On average, we obtained seven\nclusters per subject, which were strictly confined regarding their underlying\nfrequency bands. As the analysis method is not limited to a specific spatial\nfiltering algorithm, it could be utilized for a wide range of\nneurotechnological applications, e.g., to select and monitor functionally\nrelevant features for brain-computer interface protocols in stroke\nrehabilitation.\n", "title": "Mining within-trial oscillatory brain dynamics to address the variability of optimized spatial filters" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6025
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Bayesian Optimisation (BO) refers to a class of methods for global\noptimisation of a function $f$ which is only accessible via point evaluations.\nIt is typically used in settings where $f$ is expensive to evaluate. A common\nuse case for BO in machine learning is model selection, where it is not\npossible to analytically model the generalisation performance of a statistical\nmodel, and we resort to noisy and expensive training and validation procedures\nto choose the best model. Conventional BO methods have focused on Euclidean and\ncategorical domains, which, in the context of model selection, only permits\ntuning scalar hyper-parameters of machine learning algorithms. However, with\nthe surge of interest in deep learning, there is an increasing demand to tune\nneural network \\emph{architectures}. In this work, we develop NASBOT, a\nGaussian process based BO framework for neural architecture search. To\naccomplish this, we develop a distance metric in the space of neural network\narchitectures which can be computed efficiently via an optimal transport\nprogram. This distance might be of independent interest to the deep learning\ncommunity as it may find applications outside of BO. We demonstrate that NASBOT\noutperforms other alternatives for architecture search in several cross\nvalidation based model selection tasks on multi-layer perceptrons and\nconvolutional neural networks.\n", "title": "Neural Architecture Search with Bayesian Optimisation and Optimal Transport" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6026
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " A number of improvements have been added to the existing analytical model of\nhysteresis loop defined in parametric form. In particular, three phase shifts\nare included in the model, which permits to tilt the hysteresis loop smoothly\nby the required angle at the split point as well as to smoothly change the\ncurvature of the loop. As a result, the error of approximation of a hysteresis\nloop by the improved model does not exceed 1%, which is several times less than\nthe error of the existing model. The improved model is capable of approximating\nmost of the known types of rate-independent symmetrical hysteresis loops\nencountered in the practice of physical measurements. The model allows building\nsmooth, piecewise-linear, hybrid, minor, mirror-reflected, inverse, reverse,\ndouble and triple loops. One of the possible applications of the model\ndeveloped is linearization of a probe microscope piezoscanner. The improved\nmodel can be found useful for the tasks of simulation of scientific instruments\nthat contain hysteresis elements.\n", "title": "An improved parametric model for hysteresis loop approximation" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6027
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Selten's game is a kidnapping model where the probability of capturing the\nkidnapper is independent of whether the hostage has been released or executed.\nMost often, in view of the elevated sensitivities involved, authorities put\ngreater effort and resources into capturing the kidnapper if the hostage has\nbeen executed, in contrast to the case when a ransom is paid to secure the\nhostage's release. In this paper, we study the asymmetric game when the\nprobability of capturing the kidnapper depends on whether the hostage has been\nexecuted or not and find a new uniquely determined perfect equilibrium point in\nSelten's game.\n", "title": "Kidnapping Model: An Extension of Selten's Game" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6028
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Model pruning seeks to induce sparsity in a deep neural network's various\nconnection matrices, thereby reducing the number of nonzero-valued parameters\nin the model. Recent reports (Han et al., 2015; Narang et al., 2017) prune deep\nnetworks at the cost of only a marginal loss in accuracy and achieve a sizable\nreduction in model size. This hints at the possibility that the baseline models\nin these experiments are perhaps severely over-parameterized at the outset and\na viable alternative for model compression might be to simply reduce the number\nof hidden units while maintaining the model's dense connection structure,\nexposing a similar trade-off in model size and accuracy. We investigate these\ntwo distinct paths for model compression within the context of energy-efficient\ninference in resource-constrained environments and propose a new gradual\npruning technique that is simple and straightforward to apply across a variety\nof models/datasets with minimal tuning and can be seamlessly incorporated\nwithin the training process. We compare the accuracy of large, but pruned\nmodels (large-sparse) and their smaller, but dense (small-dense) counterparts\nwith identical memory footprint. Across a broad range of neural network\narchitectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find\nlarge-sparse models to consistently outperform small-dense models and achieve\nup to 10x reduction in number of non-zero parameters with minimal loss in\naccuracy.\n", "title": "To prune, or not to prune: exploring the efficacy of pruning for model compression" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6029
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Accurate software defect prediction could help software practitioners\nallocate test resources to defect-prone modules effectively and efficiently. In\nthe last decades, much effort has been devoted to build accurate defect\nprediction models, including developing quality defect predictors and modeling\ntechniques. However, current widely used defect predictors such as code metrics\nand process metrics could not well describe how software modules change over\nthe project evolution, which we believe is important for defect prediction. In\norder to deal with this problem, in this paper, we propose to use the\nHistorical Version Sequence of Metrics (HVSM) in continuous software versions\nas defect predictors. Furthermore, we leverage Recurrent Neural Network (RNN),\na popular modeling technique, to take HVSM as the input to build software\nprediction models. The experimental results show that, in most cases, the\nproposed HVSM-based RNN model has a significantly better effort-aware ranking\neffectiveness than the commonly used baseline models.\n", "title": "Connecting Software Metrics across Versions to Predict Defects" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6030
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " This paper presents an estimator for semiparametric models that uses a\nfeed-forward neural network to fit the nonparametric component. Unlike many\nmethodologies from the machine learning literature, this approach is suitable\nfor longitudinal/panel data. It provides unbiased estimation of the parametric\ncomponent of the model, with associated confidence intervals that have\nnear-nominal coverage rates. Simulations demonstrate (1) efficiency, (2) that\nparametric estimates are unbiased, and (3) coverage properties of estimated\nintervals. An application section demonstrates the method by predicting\ncounty-level corn yield using daily weather data from the period 1981-2015,\nalong with parametric time trends representing technological change. The method\nis shown to out-perform linear methods such as OLS and ridge/lasso, as well as\nrandom forest. The procedures described in this paper are implemented in the R\npackage panelNNET.\n", "title": "Semiparametric panel data models using neural networks" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6031
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We propose a new sentence simplification task (Split-and-Rephrase) where the\naim is to split a complex sentence into a meaning preserving sequence of\nshorter sentences. Like sentence simplification, splitting-and-rephrasing has\nthe potential of benefiting both natural language processing and societal\napplications. Because shorter sentences are generally better processed by NLP\nsystems, it could be used as a preprocessing step which facilitates and\nimproves the performance of parsers, semantic role labellers and machine\ntranslation systems. It should also be of use for people with reading\ndisabilities because it allows the conversion of longer sentences into shorter\nones. This paper makes two contributions towards this new task. First, we\ncreate and make available a benchmark consisting of 1,066,115 tuples mapping a\nsingle complex sentence to a sequence of sentences expressing the same meaning.\nSecond, we propose five models (vanilla sequence-to-sequence to\nsemantically-motivated models) to understand the difficulty of the proposed\ntask.\n", "title": "Split and Rephrase" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6032
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We review a (constructive) approach first introduced in [6] and further\ndeveloped in [7, 8, 38, 9] for hydrodynamic limits of asymmetric attractive\nparticle systems, in a weak or in a strong (that is, almost sure) sense, in an\nhomogeneous or in a quenched disordered setting.\n", "title": "Constructive Euler hydrodynamics for one-dimensional attractive particle systems" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6033
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Heterogeneous wireless networks (HWNs) composed of densely deployed base\nstations of different types with various radio access technologies have become\na prevailing trend to accommodate ever-increasing traffic demand in enormous\nvolume. Nowadays, users rely heavily on HWNs for ubiquitous network access that\ncontains valuable and critical information such as financial transactions,\ne-health, and public safety. Cyber risks, representing one of the most\nsignificant threats to network security and reliability, are increasing in\nseverity. To address this problem, this article introduces the concept of cyber\ninsurance to transfer the cyber risk (i.e., service outage, as a consequence of\ncyber risks in HWNs) to a third party insurer. Firstly, a review of the\nenabling technologies for HWNs and their vulnerabilities to cyber risks is\npresented. Then, the fundamentals of cyber insurance are introduced, and\nsubsequently, a cyber insurance framework for HWNs is presented. Finally, open\nissues are discussed and the challenges are highlighted for integrating cyber\ninsurance as a service of next generation HWNs.\n", "title": "Cyber Insurance for Heterogeneous Wireless Networks" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6034
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " This paper focuses on multi-scale approaches for variational methods and\ncorresponding gradient flows. Recently, for convex regularization functionals\nsuch as total variation, new theory and algorithms for nonlinear eigenvalue\nproblems via nonlinear spectral decompositions have been developed. Those\nmethods open new directions for advanced image filtering. However, for an\neffective use in image segmentation and shape decomposition, a clear\ninterpretation of the spectral response regarding size and intensity scales is\nneeded but lacking in current approaches. In this context, $L^1$ data\nfidelities are particularly helpful due to their interesting multi-scale\nproperties such as contrast invariance. Hence, the novelty of this work is the\ncombination of $L^1$-based multi-scale methods with nonlinear spectral\ndecompositions. We compare $L^1$ with $L^2$ scale-space methods in view of\nspectral image representation and decomposition. We show that the contrast\ninvariant multi-scale behavior of $L^1-TV$ promotes sparsity in the spectral\nresponse providing more informative decompositions. We provide a numerical\nmethod and analyze synthetic and biomedical images at which decomposition leads\nto improved segmentation.\n", "title": "Combining Contrast Invariant L1 Data Fidelities with Nonlinear Spectral Image Decomposition" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6035
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We present a novel approach for the prediction of anticancer compound\nsensitivity by means of multi-modal attention-based neural networks (PaccMann).\nIn our approach, we integrate three key pillars of drug sensitivity, namely,\nthe molecular structure of compounds, transcriptomic profiles of cancer cells\nas well as prior knowledge about interactions among proteins within cells. Our\nmodels ingest a drug-cell pair consisting of SMILES encoding of a compound and\nthe gene expression profile of a cancer cell and predicts an IC50 sensitivity\nvalue. Gene expression profiles are encoded using an attention-based encoding\nmechanism that assigns high weights to the most informative genes. We present\nand study three encoders for SMILES string of compounds: 1) bidirectional\nrecurrent 2) convolutional 3) attention-based encoders. We compare our devised\nmodels against a baseline model that ingests engineered fingerprints to\nrepresent the molecular structure. We demonstrate that using our\nattention-based encoders, we can surpass the baseline model. The use of\nattention-based encoders enhance interpretability and enable us to identify\ngenes, bonds and atoms that were used by the network to make a prediction.\n", "title": "PaccMann: Prediction of anticancer compound sensitivity with multi-modal attention-based neural networks" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6036
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Dark matter with momentum- or velocity-dependent interactions with nuclei has\nshown significant promise for explaining the so-called Solar Abundance Problem,\na longstanding discrepancy between solar spectroscopy and helioseismology. The\nbest-fit models are all rather light, typically with masses in the range of 3-5\nGeV. This is exactly the mass range where dark matter evaporation from the Sun\ncan be important, but to date no detailed calculation of the evaporation of\nsuch models has been performed. Here we carry out this calculation, for the\nfirst time including arbitrary velocity- and momentum-dependent interactions,\nthermal effects, and a completely general treatment valid from the optically\nthin limit all the way through to the optically thick regime. We find that\ndepending on the dark matter mass, interaction strength and type, the mass\nbelow which evaporation is relevant can vary from 1 to 4 GeV. This has the\neffect of weakening some of the better-fitting solutions to the Solar Abundance\nProblem, but also improving a number of others. As a by-product, we also\nprovide an improved derivation of the capture rate that takes into account\nthermal and optical depth effects, allowing the standard result to be smoothly\nmatched to the well-known saturation limit.\n", "title": "Evaporation and scattering of momentum- and velocity-dependent dark matter in the Sun" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6037
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " With the rapid development of spaceborne imaging techniques, object detection\nin optical remote sensing imagery has drawn much attention in recent decades.\nWhile many advanced works have been developed with powerful learning\nalgorithms, the incomplete feature representation still cannot meet the demand\nfor effectively and efficiently handling image deformations, particularly\nobjective scaling and rotation. To this end, we propose a novel object\ndetection framework, called optical remote sensing imagery detector (ORSIm\ndetector), integrating diverse channel features extraction, feature learning,\nfast image pyramid matching, and boosting strategy. ORSIm detector adopts a\nnovel spatial-frequency channel feature (SFCF) by jointly considering the\nrotation-invariant channel features constructed in frequency domain and the\noriginal spatial channel features (e.g., color channel, gradient magnitude).\nSubsequently, we refine SFCF using learning-based strategy in order to obtain\nthe high-level or semantically meaningful features. In the test phase, we\nachieve a fast and coarsely-scaled channel computation by mathematically\nestimating a scaling factor in the image domain. Extensive experimental results\nconducted on the two different airborne datasets are performed to demonstrate\nthe superiority and effectiveness in comparison with previous state-of-the-art\nmethods.\n", "title": "ORSIm Detector: A Novel Object Detection Framework in Optical Remote Sensing Imagery Using Spatial-Frequency Channel Features" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6038
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Large-scale variations still pose a challenge in unconstrained face\ndetection. To the best of our knowledge, no current face detection algorithm\ncan detect a face as large as 800 x 800 pixels while simultaneously detecting\nanother one as small as 8 x 8 pixels within a single image with equally high\naccuracy. We propose a two-stage cascaded face detection framework, Multi-Path\nRegion-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a\ndeep neural network with a classic learning strategy, to tackle this challenge.\nThe first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes\nfaces at three different scales. It simultaneously utilizes three parallel\noutputs of the convolutional feature maps to predict multi-scale candidate face\nregions. The \"atrous\" convolution trick (convolution with up-sampled filters)\nand a newly proposed sampling layer for \"hard\" examples are embedded in MP-RPN\nto further boost its performance. The second stage is a Boosted Forests\nclassifier, which utilizes deep facial features pooled from inside the\ncandidate face regions as well as deep contextual features pooled from a larger\nregion surrounding the candidate face regions. This step is included to further\nremove hard negative samples. Experiments show that this approach achieves\nstate-of-the-art face detection performance on the WIDER FACE dataset \"hard\"\npartition, outperforming the former best result by 9.6% for the Average\nPrecision.\n", "title": "Multi-Path Region-Based Convolutional Neural Network for Accurate Detection of Unconstrained \"Hard Faces\"" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6039
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The naturally occurring radioisotope $^{32}$Si represents a potentially\nlimiting background in future dark matter direct-detection experiments. We\ninvestigate sources of $^{32}$Si and the vectors by which it comes to reside in\nsilicon crystals used for fabrication of radiation detectors. We infer that the\n$^{32}$Si concentration in commercial single-crystal silicon is likely\nvariable, dependent upon the specific geologic and hydrologic history of the\nsource (or sources) of silicon \"ore\" and the details of the silicon-refinement\nprocess. The silicon production industry is large, highly segmented by refining\nstep, and multifaceted in terms of final product type, from which we conclude\nthat production of $^{32}$Si-mitigated crystals requires both targeted silicon\nmaterial selection and a dedicated refinement-through-crystal-production\nprocess. We review options for source material selection, including quartz from\nan underground source and silicon isotopically reduced in $^{32}$Si. To\nquantitatively evaluate the $^{32}$Si content in silicon metal and precursor\nmaterials, we propose analytic methods employing chemical processing and\nradiometric measurements. Ultimately, it appears feasible to produce silicon\ndetectors with low levels of $^{32}$Si, though significant assay method\ndevelopment is required to validate this claim and thereby enable a quality\nassurance program during an actual controlled silicon-detector production\ncycle.\n", "title": "Naturally occurring $^{32}$Si and low-background silicon dark matter detectors" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6040
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " With the discovery of the first transiting extrasolar planetary system back\nto 1999, a great number of projects started to hunt for other similar systems.\nBecause of the incidence rate of such systems was unknown and the length of the\nshallow transit events is only a few percent of the orbital period, the goal\nwas to monitor continuously as many stars as possible for at least a period of\na few months. Small aperture, large field of view automated telescope systems\nhave been installed with a parallel development of new data reduction and\nanalysis methods, leading to better than 1% per data point precision for\nthousands of stars. With the successful launch of the photometric satellites\nCoRot and Kepler, the precision increased further by one-two orders of\nmagnitude. Millions of stars have been analyzed and searched for transits. In\nthe history of variable star astronomy this is the biggest undertaking so far,\nresulting in photometric time series inventories immensely valuable for the\nwhole field. In this review we briefly discuss the methods of data analysis\nthat were inspired by the main science driver of these surveys and highlight\nsome of the most interesting variable star results that impact the field of\nvariable star astronomy.\n", "title": "Synergies between Exoplanet Surveys and Variable Star Research" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6041
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Graph games with {\\omega}-regular winning conditions provide a mathematical\nframework to analyze a wide range of problems in the analysis of reactive\nsystems and programs (such as the synthesis of reactive systems, program\nrepair, and the verification of branching time properties). Parity conditions\nare canonical forms to specify {\\omega}-regular winning conditions. Graph games\nwith parity conditions are equivalent to {\\mu}-calculus model checking, and\nthus a very important algorithmic problem. Symbolic algorithms are of great\nsignificance because they provide scalable algorithms for the analysis of large\nfinite-state systems, as well as algorithms for the analysis of infinite-state\nsystems with finite quotient. A set-based symbolic algorithm uses the basic set\noperations and the one-step predecessor operators. We consider graph games with\n$n$ vertices and parity conditions with $c$ priorities. While many explicit\nalgorithms exist for graph games with parity conditions, for set-based symbolic\nalgorithms there are only two algorithms (notice that we use space to refer to\nthe number of sets stored by a symbolic algorithm): (a) the basic algorithm\nthat requires $O(n^c)$ symbolic operations and linear space; and (b) an\nimproved algorithm that requires $O(n^{c/2+1})$ symbolic operations but also\n$O(n^{c/2+1})$ space (i.e., exponential space). In this work we present two\nset-based symbolic algorithms for parity games: (a) our first algorithm\nrequires $O(n^{c/2+1})$ symbolic operations and only requires linear space; and\n(b) developing on our first algorithm, we present an algorithm that requires\n$O(n^{c/3+1})$ symbolic operations and only linear space. We also present the\nfirst linear space set-based symbolic algorithm for parity games that requires\nat most a sub-exponential number of symbolic operations.\n", "title": "Improved Set-based Symbolic Algorithms for Parity Games" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6042
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Word embedding has become a fundamental component to many NLP tasks such as\nnamed entity recognition and machine translation. However, popular models that\nlearn such embeddings are unaware of the morphology of words, so it is not\ndirectly applicable to highly agglutinative languages such as Korean. We\npropose a syllable-based learning model for Korean using a convolutional neural\nnetwork, in which word representation is composed of trained syllable vectors.\nOur model successfully produces morphologically meaningful representation of\nKorean words compared to the original Skip-gram embeddings. The results also\nshow that it is quite robust to the Out-of-Vocabulary problem.\n", "title": "A Syllable-based Technique for Word Embeddings of Korean Words" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6043
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The concept of an evolutionarily stable strategy (ESS), introduced by Smith\nand Price, is a refinement of Nash equilibrium in 2-player symmetric games in\norder to explain counter-intuitive natural phenomena, whose existence is not\nguaranteed in every game. The problem of deciding whether a game possesses an\nESS has been shown to be $\\Sigma_{2}^{P}$-complete by Conitzer using the\npreceding important work by Etessami and Lochbihler. The latter, among other\nresults, proved that deciding the existence of ESS is both NP-hard and\ncoNP-hard. In this paper we introduce a \"reduction robustness\" notion and we\nshow that deciding the existence of an ESS remains coNP-hard for a wide range\nof games even if we arbitrarily perturb within some intervals the payoff values\nof the game under consideration. In contrast, ESS exist almost surely for large\ngames with random and independent payoffs chosen from the same distribution.\n", "title": "Existence of Evolutionarily Stable Strategies Remains Hard to Decide for a Wide Range of Payoff Values" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 6044
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We construct an estimator of the Lévy density of a pure jump Lévy\nprocess, possibly of infinite variation, from the discrete observation of one\ntrajectory at high frequency. The novelty of our procedure is that we directly\nestimate the Lévy density relying on a pathwise strategy, whereas existing\nprocedures rely on spectral techniques. By taking advantage of a compound\nPoisson approximation of the Lévy density, we circumvent the use of spectral\ntechniques and in particular of the Lévy-Khintchine formula. A linear wavelet\nestimators is built and its performance is studied in terms of $L_p$ loss\nfunctions, $p\\geq 1$, over Besov balls. The resulting rates are minimax-optimal\nfor a large class of Lévy processes. We discuss the robustness of the\nprocedure to the presence of a Brownian part and to the estimation set getting\nclose to the critical value 0.\n", "title": "Compound Poisson approximation to estimate the Lévy density" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
6045
null
Validated
null
null
null
{ "abstract": " Given $k\\in\\mathbb N$, we study the vanishing of the Dirichlet series\n$$D_k(s,f):=\\sum_{n\\geq1} d_k(n)f(n)n^{-s}$$ at the point $s=1$, where $f$ is a\nperiodic function modulo a prime $p$. We show that if $(k,p-1)=1$ or\n$(k,p-1)=2$ and $p\\equiv 3\\mod 4$, then there are no odd rational-valued\nfunctions $f\\not\\equiv 0$ such that $D_k(1,f)=0$, whereas in all other cases\nthere are examples of odd functions $f$ such that $D_k(1,f)=0$.\nAs a consequence, we obtain, for example, that the set of values\n$L(1,\\chi)^2$, where $\\chi$ ranges over odd characters mod $p$, are linearly\nindependent over $\\mathbb Q$.\n", "title": "On the non-vanishing of certain Dirichlet series" }
null
null
null
null
true
null
6046
null
Default
null
null
null
{ "abstract": " For a prime $p$, let $\\hat F_p$ be a finitely generated free pro-$p$-group of\nrank $\\geq 2$. We show that the second discrete homology group $H_2(\\hat\nF_p,\\mathbb Z/p)$ is an uncountable $\\mathbb Z/p$-vector space. This answers a\nproblem of A.K. Bousfield.\n", "title": "On discrete homology of a free pro-$p$-group" }
null
null
null
null
true
null
6047
null
Default
null
null
null
{ "abstract": " A control model is typically classified into three forms: conceptual,\nmathematical and simulation (computer). This paper analyzes a conceptual\nmodeling application with respect to an inventory management system. Today,\nmost organizations utilize computer systems for inventory control that provide\nprotection when interruptions or breakdowns occur within work processes.\nModeling the inventory processes is an active area of research that utilizes\nmany diagrammatic techniques, including data flow diagrams, Universal Modeling\nLanguage (UML) diagrams and Integration DEFinition (IDEF). We claim that\ncurrent conceptual modeling frameworks lack uniform notions and have inability\nto appeal to designers and analysts. We propose modeling an inventory system as\nan abstract machine, called a Thinging Machine (TM), with five operations:\ncreation, processing, receiving, releasing and transferring. The paper provides\nside-by-side contrasts of some existing examples of conceptual modeling\nmethodologies that apply to TM. Additionally, TM is applied in a case study of\nan actual inventory system that uses IBM Maximo. The resulting conceptual\ndepictions point to the viability of FM as a valuable tool for developing a\nhigh-level representation of inventory processes.\n", "title": "Conceptual Modeling of Inventory Management Processes as a Thinging Machine" }
null
null
null
null
true
null
6048
null
Default
null
null
null
{ "abstract": " We present the first approach for 3D point-cloud to image translation based\non conditional Generative Adversarial Networks (cGAN). The model handles\nmulti-modal information sources from different domains, i.e. raw point-sets and\nimages. The generator is capable of processing three conditions, whereas the\npoint-cloud is encoded as raw point-set and camera projection. An image\nbackground patch is used as constraint to bias environmental texturing. A\nglobal approximation function within the generator is directly applied on the\npoint-cloud (Point-Net). Hence, the representative learning model incorporates\nglobal 3D characteristics directly at the latent feature space. Conditions are\nused to bias the background and the viewpoint of the generated image. This\nopens up new ways in augmenting or texturing 3D data to aim the generation of\nfully individual images. We successfully evaluated our method on the Kitti and\nSunRGBD dataset with an outstanding object detection inception score.\n", "title": "Points2Pix: 3D Point-Cloud to Image Translation using conditional Generative Adversarial Networks" }
null
null
[ "Computer Science" ]
null
true
null
6049
null
Validated
null
null
null
{ "abstract": " Chapter 11 in High-Luminosity Large Hadron Collider (HL-LHC) : Preliminary\nDesign Report. The Large Hadron Collider (LHC) is one of the largest scientific\ninstruments ever built. Since opening up a new energy frontier for exploration\nin 2010, it has gathered a global user community of about 7,000 scientists\nworking in fundamental particle physics and the physics of hadronic matter at\nextreme temperature and density. To sustain and extend its discovery potential,\nthe LHC will need a major upgrade in the 2020s. This will increase its\nluminosity (rate of collisions) by a factor of five beyond the original design\nvalue and the integrated luminosity (total collisions created) by a factor ten.\nThe LHC is already a highly complex and exquisitely optimised machine so this\nupgrade must be carefully conceived and will require about ten years to\nimplement. The new configuration, known as High Luminosity LHC (HL-LHC), will\nrely on a number of key innovations that push accelerator technology beyond its\npresent limits. Among these are cutting-edge 11-12 tesla superconducting\nmagnets, compact superconducting cavities for beam rotation with ultra-precise\nphase control, new technology and physical processes for beam collimation and\n300 metre-long high-power superconducting links with negligible energy\ndissipation. The present document describes the technologies and components\nthat will be used to realise the project and is intended to serve as the basis\nfor the detailed engineering design of HL-LHC.\n", "title": "11 T Dipole for the Dispersion Suppressor Collimators" }
null
null
null
null
true
null
6050
null
Default
null
null
null
{ "abstract": " Jets from boosted heavy particles have a typical angular scale which can be\nused to distinguish them from QCD jets. We introduce a machine learning\nstrategy for jet substructure analysis using a spectral function on the angular\nscale. The angular spectrum allows us to scan energy deposits over the angle\nbetween a pair of particles in a highly visual way. We set up an artificial\nneural network (ANN) to find out characteristic shapes of the spectra of the\njets from heavy particle decays. By taking the Higgs jets and QCD jets as\nexamples, we show that the ANN of the angular spectrum input has similar\nperformance to existing taggers. In addition, some improvement is seen when\nadditional extra radiations occur. Notably, the new algorithm automatically\ncombines the information of the multi-point correlations in the jet.\n", "title": "Spectral Analysis of Jet Substructure with Neural Networks: Boosted Higgs Case" }
null
null
null
null
true
null
6051
null
Default
null
null
null
{ "abstract": " This study presents systems submitted by the University of Texas at Dallas,\nCenter for Robust Speech Systems (UTD-CRSS) to the MGB-3 Arabic Dialect\nIdentification (ADI) subtask. This task is defined to discriminate between five\ndialects of Arabic, including Egyptian, Gulf, Levantine, North African, and\nModern Standard Arabic. We develop multiple single systems with different\nfront-end representations and back-end classifiers. At the front-end level,\nfeature extraction methods such as Mel-frequency cepstral coefficients (MFCCs)\nand two types of bottleneck features (BNF) are studied for an i-Vector\nframework. As for the back-end level, Gaussian back-end (GB), and Generative\nAdversarial Networks (GANs) classifiers are applied alternately. The best\nsubmission (contrastive) is achieved for the ADI subtask with an accuracy of\n76.94% by augmenting the randomly chosen part of the development dataset.\nFurther, with a post evaluation correction in the submitted system, final\naccuracy is increased to 79.76%, which represents the best performance achieved\nso far for the challenge on the test dataset.\n", "title": "UTD-CRSS Submission for MGB-3 Arabic Dialect Identification: Front-end and Back-end Advancements on Broadcast Speech" }
null
null
[ "Computer Science" ]
null
true
null
6052
null
Validated
null
null
null
{ "abstract": " One of the main challenges in the parametrization of geological models is the\nability to capture complex geological structures often observed in subsurface\nfields. In recent years, Generative Adversarial Networks (GAN) were proposed as\nan efficient method for the generation and parametrization of complex data,\nshowing state-of-the-art performances in challenging computer vision tasks such\nas reproducing natural images (handwritten digits, human faces, etc.). In this\nwork, we study the application of Wasserstein GAN for the parametrization of\ngeological models. The effectiveness of the method is assessed for uncertainty\npropagation tasks using several test cases involving different permeability\npatterns and subsurface flow problems. Results show that GANs are able to\ngenerate samples that preserve the multipoint statistical features of the\ngeological models both visually and quantitatively. The generated samples\nreproduce both the geological structures and the flow properties of the\nreference data.\n", "title": "Parametrization and Generation of Geological Models with Generative Adversarial Networks" }
null
null
null
null
true
null
6053
null
Default
null
null
null
{ "abstract": " In this paper we consider finite element approaches to computing the mean\ncurvature vector and normal at the vertices of piecewise linear triangulated\nsurfaces. In particular, we adopt a stabilization technique which allows for\nfirst order $L^2$-convergence of the mean curvature vector and apply this\nstabilization technique also to the computation of continuous, recovered,\nnormals using $L^2$-projections of the piecewise constant face normals.\nFinally, we use our projected normals to define an adaptive mesh refinement\napproach to geometry resolution where we also employ spline techniques to\nreconstruct the surface before refinement. We compare or results to previously\nproposed approaches.\n", "title": "Finite element procedures for computing normals and mean curvature on triangulated surfaces and their use for mesh refinement" }
null
null
null
null
true
null
6054
null
Default
null
null
null
{ "abstract": " For a large class of orthogonal basis functions, there has been a recent\nidentification of expansion methods for computing accurate, stable\napproximations of a quantity of interest. This paper presents, within the\ncontext of uncertainty quantification, a practical implementation using basis\nadaptation, and coherence motivated sampling, which under assumptions has\nsatisfying guarantees. This implementation is referred to as Basis Adaptive\nSample Efficient Polynomial Chaos (BASE-PC). A key component of this is the use\nof anisotropic polynomial order which admits evolving global bases for\napproximation in an efficient manner, leading to consistently stable\napproximation for a practical class of smooth functionals. This fully adaptive,\nnon-intrusive method, requires no a priori information of the solution, and has\nsatisfying theoretical guarantees of recovery. A key contribution to stability\nis the use of a presented correction sampling for coherence-optimal sampling in\norder to improve stability and accuracy within the adaptive basis scheme.\nTheoretically, the method may dramatically reduce the impact of dimensionality\nin function approximation, and numerically the method is demonstrated to\nperform well on problems with dimension up to 1000.\n", "title": "Basis Adaptive Sample Efficient Polynomial Chaos (BASE-PC)" }
null
null
null
null
true
null
6055
null
Default
null
null
null
{ "abstract": " This paper considers a new method for the binary asteroid orbit determination\nproblem. The method is based on the Bayesian approach with a global\noptimisation algorithm. The orbital parameters to be determined are modelled\nthrough an a posteriori distribution made of a priori and likelihood terms. The\nfirst term constrains the parameters space and it allows the introduction of\navailable knowledge about the orbit. The second term is based on given\nobservations and it allows us to use and compare different observational error\nmodels. Once the a posteriori model is built, the estimator of the orbital\nparameters is computed using a global optimisation procedure: the simulated\nannealing algorithm. The maximum a posteriori (MAP) techniques are verified\nusing simulated and real data. The obtained results validate the proposed\nmethod. The new approach guarantees independence of the initial parameters\nestimation and theoretical convergence towards the global optimisation\nsolution. It is particularly useful in these situations, whenever a good\ninitial orbit estimation is difficult to get, whenever observations are not\nwell-sampled, and whenever the statistical behaviour of the observational\nerrors cannot be stated Gaussian like.\n", "title": "Maximum a posteriori estimation through simulated annealing for binary asteroid orbit determination" }
null
null
null
null
true
null
6056
null
Default
null
null
null
{ "abstract": " The difficulty of multi-class classification generally increases with the\nnumber of classes. Using data from a subset of the classes, can we predict how\nwell a classifier will scale with an increased number of classes? Under the\nassumptions that the classes are sampled identically and independently from a\npopulation, and that the classifier is based on independently learned scoring\nfunctions, we show that the expected accuracy when the classifier is trained on\nk classes is the (k-1)st moment of a certain distribution that can be estimated\nfrom data. We present an unbiased estimation method based on the theory, and\ndemonstrate its application on a facial recognition example.\n", "title": "Extrapolating Expected Accuracies for Large Multi-Class Problems" }
null
null
null
null
true
null
6057
null
Default
null
null
null
{ "abstract": " We present a solution to \"Google Cloud and YouTube-8M Video Understanding\nChallenge\" that ranked 5th place. The proposed model is an ensemble of three\nmodel families, two frame level and one video level. The training was performed\non augmented dataset, with cross validation.\n", "title": "Deep Learning Methods for Efficient Large Scale Video Labeling" }
null
null
null
null
true
null
6058
null
Default
null
null
null
{ "abstract": " Remote sensing image classification is a fundamental task in remote sensing\nimage processing. Remote sensing field still lacks of such a large-scale\nbenchmark compared to ImageNet, Place2. We propose a remote sensing image\nclassification benchmark (RSI-CB) based on crowd-source data which is massive,\nscalable, and diversity. Using crowdsource data, we can efficiently annotate\nground objects in remotes sensing image by point of interests, vectors data\nfrom OSM or other crowd-source data. Based on this method, we construct a\nworldwide large-scale benchmark for remote sensing image classification. In\nthis benchmark, there are two sub datasets with 256 * 256 and 128 * 128 size\nrespectively since different convolution neural networks requirement different\nimage size. The former sub dataset contains 6 categories with 35 subclasses\nwith total of more than 24,000 images; the later one contains 6 categories with\n45 subclasses with total of more than 36,000 images. The six categories are\nagricultural land, construction land and facilities, transportation and\nfacilities, water and water conservancy facilities, woodland and other land,\nand each category has several subclasses. This classification system is defined\naccording to the national standard of land use classification in China, and is\ninspired by the hierarchy mechanism of ImageNet. Finally, we have done a large\nnumber of experiments to compare RSI-CB with SAT-4, UC-Merced datasets on\nhandcrafted features, such as such as SIFT, and classical CNN models, such as\nAlexNet, VGG, GoogleNet, and ResNet. We also show CNN models trained by RSI-CB\nhave good performance when transfer to other dataset, i.e. UC-Merced, and good\ngeneralization ability. The experiments show that RSI-CB is more suitable as a\nbenchmark for remote sensing image classification task than other ones in big\ndata era, and can be potentially used in practical applications.\n", "title": "RSI-CB: A Large Scale Remote Sensing Image Classification Benchmark via Crowdsource Data" }
null
null
null
null
true
null
6059
null
Default
null
null
null
{ "abstract": " For any quasi-triangular Hopf algebra, there exists the universal R-matrix,\nwhich satisfies the Yang-Baxter equation. It is known that the adjoint action\nof the universal R-matrix on the elements of the tensor square of the algebra\nconstitutes a quantum Yang-Baxter map, which satisfies the set-theoretic\nYang-Baxter equation. The map has a zero curvature representation among\nL-operators defined as images of the universal R-matrix. We find that the zero\ncurvature representation can be solved by the Gauss decomposition of a product\nof L-operators. Thereby obtained a quasi-determinant expression of the quantum\nYang-Baxter map associated with the quantum algebra $U_{q}(gl(n))$. Moreover,\nthe map is identified with products of quasi-Plücker coordinates over a\nmatrix composed of the L-operators. We also consider the quasi-classical limit,\nwhere the underlying quantum algebra reduces to a Poisson algebra. The\nquasi-determinant expression of the quantum Yang-Baxter map reduces to ratios\nof determinants, which give a new expression of a classical Yang-Baxter map.\n", "title": "Quantum groups, Yang-Baxter maps and quasi-determinants" }
null
null
null
null
true
null
6060
null
Default
null
null
null
{ "abstract": " This paper proposes a low-level visual navigation algorithm to improve visual\nlocalization of a mobile robot. The algorithm, based on artificial potential\nfields, associates each feature in the current image frame with an attractive\nor neutral potential energy, with the objective of generating a control action\nthat drives the vehicle towards the goal, while still favoring feature rich\nareas within a local scope, thus improving the localization performance. One\nkey property of the proposed method is that it does not rely on mapping, and\ntherefore it is a lightweight solution that can be deployed on miniaturized\naerial robots, in which memory and computational power are major constraints.\nSimulations and real experimental results using a mini quadrotor equipped with\na downward looking camera demonstrate that the proposed method can effectively\ndrive the vehicle to a designated goal through a path that prevents\nlocalization failure.\n", "title": "Low-level Active Visual Navigation: Increasing robustness of vision-based localization using potential fields" }
null
null
[ "Computer Science" ]
null
true
null
6061
null
Validated
null
null
null
{ "abstract": " As the title suggests, we will describe (and justify through the presentation\nof some of the relevant mathematics) prediction methodologies for sensor\nmeasurements. This exposition will mainly be concerned with the mathematics\nrelated to modeling the sensor measurements.\n", "title": "A Brownian Motion Model and Extreme Belief Machine for Modeling Sensor Data Measurements" }
null
null
[ "Computer Science" ]
null
true
null
6062
null
Validated
null
null
null
{ "abstract": " We investigate the nature of the magnetic phase transition induced by the\nshort-ranged electron-electron interactions in a Weyl semimetal by using the\nperturbative renormalization-group method. We find that the critical point\nassociated with the quantum phase transition is characterized by a Gaussian\nfixed point perturbed by a dangerously irrelevant operator. Although the\nlow-energy and long-distance physics is governed by a free theory, the\nvelocities of the fermionic quasiparticles and the magnetic excitations suffer\nfrom nontrivial renormalization effects. In particular, their ratio approaches\none, which indicates an emergent Lorentz symmetry at low energies. We further\ninvestigate the stability of the fixed point in the presence of weak disorder.\nWe show that while the fixed point is generally stable against weak disorder,\namong those disorders that are consistent with the emergent chiral symmetry of\nthe clean system, a moderately strong random chemical potential and/or random\nvector potential may induce a quantum phase transition towards a\ndisorder-dominated phase. We propose a global phase diagram of the Weyl\nsemimetal in the presence of both electron-electron interactions and disorder\nbased on our results.\n", "title": "On the nature of the magnetic phase transition in a Weyl semimetal" }
null
null
[ "Physics" ]
null
true
null
6063
null
Validated
null
null
null
{ "abstract": " Additive Manufacturing (AM, or 3D printing) is a novel manufacturing\ntechnology that is being adopted in industrial and consumer settings. However,\nthe reliance of this technology on computerization has raised various security\nconcerns. In this paper we address sabotage via tampering with the 3D printing\nprocess. We present an object verification system using side-channel\nemanations: sound generated by onboard stepper motors. The contributions of\nthis paper are following. We present two algorithms: one which generates a\nmaster audio fingerprint for the unmodified printing process, and one which\ncomputes the similarity between other print recordings and the master audio\nfingerprint. We then evaluate the deviation due to tampering, focusing on the\ndetection of minimal tampering primitives. By detecting the deviation at the\ntime of its occurrence, we can stop the printing process for compromised\nobjects, thus save time and prevent material waste. We discuss impacts on the\nmethod by aspects like background noise, or different audio recorder positions.\nWe further outline our vision with use cases incorporating our approach.\n", "title": "Detecting Cyber-Physical Attacks in Additive Manufacturing using Digital Audio Signing" }
null
null
null
null
true
null
6064
null
Default
null
null
null
{ "abstract": " We present the results of spectroscopic measurements in the extreme\nultraviolet (EUV) regime (7-17 nm) of molten tin microdroplets illuminated by a\nhigh-intensity 3-J, 60-ns Nd:YAG laser pulse. The strong 13.5 nm emission from\nthis laser-produced plasma is of relevance for next-generation nanolithography\nmachines. Here, we focus on the shorter wavelength features between 7 and 12 nm\nwhich have so far remained poorly investigated despite their diagnostic\nrelevance. Using flexible atomic code calculations and local thermodynamic\nequilibrium arguments, we show that the line features in this region of the\nspectrum can be explained by transitions from high-lying configurations within\nthe Sn$^{8+}$-Sn$^{15+}$ ions. The dominant transitions for all ions but\nSn$^{8+}$ are found to be electric-dipole transitions towards the $n$=4 ground\nstate from the core-excited configuration in which a 4$p$ electron is promoted\nto the 5$s$ sub-shell. Our results resolve some long-standing spectroscopic\nissues and provide reliable charge state identification for Sn laser-produced\nplasma, which could be employed as a useful tool for diagnostic purposes.\n", "title": "Short-wavelength out-of-band EUV emission from Sn laser-produced plasma" }
null
null
null
null
true
null
6065
null
Default
null
null
null
{ "abstract": " Assuming three strongly compact cardinals, it is consistent that \\[ \\aleph_1\n< \\mathrm{add}(\\mathrm{null}) < \\mathrm{cov}(\\mathrm{null}) < \\mathfrak{b} <\n\\mathfrak{d} < \\mathrm{non}(\\mathrm{null}) < \\mathrm{cof}(\\mathrm{null}) <\n2^{\\aleph_0}.\\] Under the same assumption, it is consistent that \\[ \\aleph_1 <\n\\mathrm{add}(\\mathrm{null}) < \\mathrm{cov}(\\mathrm{null}) <\n\\mathrm{non}(\\mathrm{meager}) < \\mathrm{cov}(\\mathrm{meager}) <\n\\mathrm{non}(\\mathrm{null}) < \\mathrm{cof}(\\mathrm{null}) < 2^{\\aleph_0}.\\]\n", "title": "Compact Cardinals and Eight Values in Cichoń's Diagram" }
null
null
null
null
true
null
6066
null
Default
null
null
null
{ "abstract": " A simulation model based on parallel systems is established, aiming to\nexplore the relation between the number of submissions and the overall quality\nof academic journals within a similar discipline under peer review. The model\ncan effectively simulate the submission, review and acceptance behaviors of\nacademic journals, in a distributed manner. According to the simulation\nexperiments, it could possibly happen that the overall standard of academic\njournals may deteriorate due to excessive submissions.\n", "title": "Analysis of Peer Review Effectiveness for Academic Journals Based on Distributed Parallel System" }
null
null
[ "Computer Science" ]
null
true
null
6067
null
Validated
null
null
null
{ "abstract": " In recent years an increasing number of observational studies have hinted at\nthe presence of warps in protoplanetary discs, however a general comprehensive\ndescription of observational diagnostics of warped discs was missing. We\nperformed a series of 3D SPH hydrodynamic simulations and combined them with 3D\nradiative transfer calculations to study the observability of warps in\ncircumbinary discs, whose plane is misaligned with respect to the orbital plane\nof the central binary. Our numerical hydrodynamic simulations confirm previous\nanalytical results on the dependence of the warp structure on the viscosity and\nthe initial misalignment between the binary and the disc. To study the\nobservational signatures of warps we calculate images in the continuum at\nnear-infrared and sub-millimetre wavelengths and in the pure rotational\ntransition of CO in the sub-millimetre. Warped circumbinary discs show surface\nbrightness asymmetry in near-infrared scattered light images as well as in\noptically thick gas lines at sub-millimetre wavelengths. The asymmetry is\ncaused by self-shadowing of the disc by the inner warped regions, thus the\nstrength of the asymmetry depends on the strength of the warp. The projected\nvelocity field, derived from line observations, shows characteristic\ndeviations, twists and a change in the slope of the rotation curve, from that\nof an unperturbed disc. In extreme cases even the direction of rotation appears\nto change in the disc inwards of a characteristic radius. The strength of the\nkinematical signatures of warps decreases with increasing inclination. The\nstrength of all warp signatures decreases with decreasing viscosity.\n", "title": "Observational signatures of linear warps in circumbinary discs" }
null
null
null
null
true
null
6068
null
Default
null
null
null
{ "abstract": " Separating an audio scene into isolated sources is a fundamental problem in\ncomputer audition, analogous to image segmentation in visual scene analysis.\nSource separation systems based on deep learning are currently the most\nsuccessful approaches for solving the underdetermined separation problem, where\nthere are more sources than channels. Traditionally, such systems are trained\non sound mixtures where the ground truth decomposition is already known. Since\nmost real-world recordings do not have such a decomposition available, this\nlimits the range of mixtures one can train on, and the range of mixtures the\nlearned models may successfully separate. In this work, we use a simple blind\nspatial source separation algorithm to generate estimated decompositions of\nstereo mixtures. These estimates, together with a weighting scheme in the\ntime-frequency domain, based on confidence in the separation quality, are used\nto train a deep learning model that can be used for single-channel separation,\nwhere no source direction information is available. This demonstrates how a\nsimple cue such as the direction of origin of source can be used to bootstrap a\nmodel for source separation that can be used in situations where that cue is\nnot available.\n", "title": "Bootstrapping single-channel source separation via unsupervised spatial clustering on stereo mixtures" }
null
null
[ "Computer Science" ]
null
true
null
6069
null
Validated
null
null
null
{ "abstract": " Multi-parameter one-sided hypothesis test problems arise naturally in many\napplications. We are particularly interested in effective tests for monitoring\nmultiple quality indices in forestry products. Our search reveals that there\nare many effective statistical methods in the literature for normal data, and\nthat they can easily be adapted for non-normal data. We find that the beautiful\nlikelihood ratio test is unsatisfactory, because in order to control the size,\nit must cope with the least favorable distributions at the cost of power. In\nthis paper, we find a novel way to slightly ease the size control, obtaining a\nmuch more powerful test. Simulation confirms that the new test retains good\ncontrol of the type I error and is markedly more powerful than the likelihood\nratio test as well as many competitors based on normal data. The new method\nperforms well in the context of monitoring multiple quality indices.\n", "title": "Multi-parameter One-Sided Monitoring Test" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
6070
null
Validated
null
null
null
{ "abstract": " Often, large, high dimensional datasets collected across multiple modalities\ncan be organized as a higher order tensor. Low-rank tensor decomposition then\narises as a powerful and widely used tool to discover simple low dimensional\nstructures underlying such data. However, we currently lack a theoretical\nunderstanding of the algorithmic behavior of low-rank tensor decompositions. We\nderive Bayesian approximate message passing (AMP) algorithms for recovering\narbitrarily shaped low-rank tensors buried within noise, and we employ dynamic\nmean field theory to precisely characterize their performance. Our theory\nreveals the existence of phase transitions between easy, hard and impossible\ninference regimes, and displays an excellent match with simulations. Moreover,\nit reveals several qualitative surprises compared to the behavior of symmetric,\ncubic tensor decomposition. Finally, we compare our AMP algorithm to the most\ncommonly used algorithm, alternating least squares (ALS), and demonstrate that\nAMP significantly outperforms ALS in the presence of noise.\n", "title": "Statistical mechanics of low-rank tensor decomposition" }
null
null
null
null
true
null
6071
null
Default
null
null
null
{ "abstract": " We introduce a model for the short-term dynamics of financial assets based on\nan application to finance of quantum gauge theory, developing ideas of Ilinski.\nWe present a numerical algorithm for the computation of the probability\ndistribution of prices and compare the results with APPLE stocks prices and the\nS&P500 index.\n", "title": "A path integral based model for stocks and order dynamics" }
null
null
[ "Quantitative Finance" ]
null
true
null
6072
null
Validated
null
null
null
{ "abstract": " In inductive learning of a broad concept, an algorithm should be able to\ndistinguish concept examples from exceptions and noisy data. An approach\nthrough recursively finding patterns in exceptions turns out to correspond to\nthe problem of learning default theories. Default logic is what humans employ\nin common-sense reasoning. Therefore, learned default theories are better\nunderstood by humans. In this paper, we present new algorithms to learn default\ntheories in the form of non-monotonic logic programs. Experiments reported in\nthis paper show that our algorithms are a significant improvement over\ntraditional approaches based on inductive logic programming.\n", "title": "A New Algorithm to Automate Inductive Learning of Default Theories" }
null
null
null
null
true
null
6073
null
Default
null
null
null
{ "abstract": " Motivated by recent experiments, we investigate the pressure-dependent\nelectronic structure and electron-phonon (\\emph{e-ph}) coupling for simple\ncubic phosphorus by performing first-principle calculations within the full\npotential linearized augmented plane wave method. As a function of increasing\npressure, our calculations show a valley feature in T$_c$, followed by an\neventual decrease for higher pressures. We demonstrate that this T$_c$ valley\nat low pressures is due to two nearby Lifshitz transitions, as we analyze the\nband-resolved contributions to the \\emph{e-ph} coupling. Below the first\nLifshitz transition, the phonon hardening and shrinking of the $\\gamma$ Fermi\nsurface with $s$ orbital character results in a decreased T$_c$ with increasing\npressure. After the second Lifshitz transition, the appearance of $\\delta$\nFermi surfaces with $3d$ orbital character generate strong \\emph{e-ph}\ninter-band couplings in $\\alpha\\delta$ and $\\beta\\delta$ channels, and hence\nlead to an increase of T$_c$. For higher pressures, the phonon hardening\nfinally dominates, and T$_c$ decreases again. Our study reveals that the\nintriguing T$_c$} valley discovered in experiment can be attributed to Lifshitz\ntransitions, while the plateau of T$_c$ detected at intermediate pressures\nappears to be beyond the scope of our analysis. This strongly suggests that\nbesides \\emph{e-ph} coupling, electronic correlations along with plasmonic\ncontributions may be relevant for simple cubic phosphorous. Our findings hint\nat the notion that increasing pressure can shift the low-energy orbital weight\ntowards $d$ character, and as such even trigger an enhanced importance of\norbital-selective electronic correlations despite an increase of the overall\nbandwidth.\n", "title": "Origin of the pressure-dependent T$_c$ valley in superconducting simple cubic phosphorus" }
null
null
[ "Physics" ]
null
true
null
6074
null
Validated
null
null
null
{ "abstract": " In this paper we analyse the convergence properties of V-cycle multigrid\nalgorithms for the numerical solution of the linear system of equations arising\nfrom discontinuous Galerkin discretization of second-order elliptic partial\ndifferential equations on polytopal meshes. Here, the sequence of spaces that\nstands at the basis of the multigrid scheme is possibly non nested and is\nobtained based on employing agglomeration with possible edge/face coarsening.\nWe prove that the method converges uniformly with respect to the granularity of\nthe grid and the polynomial approximation degree p, provided that the number of\nsmoothing steps, which depends on p, is chosen sufficiently large.\n", "title": "V-cycle multigrid algorithms for discontinuous Galerkin methods on non-nested polytopic meshes" }
null
null
[ "Computer Science" ]
null
true
null
6075
null
Validated
null
null
null
{ "abstract": " We shall consider a result of Fel'dman, where a sharp Baker-type lower bound\nis obtained for linear forms in the values of some E-functions. Fel'dman's\nproof is based on an explicit construction of Padé approximations of the\nfirst kind for these functions. In the present paper we introduce Padé\napproximations of the second kind for the same functions and use these to\nobtain a slightly improved version of Fel'dman's result.\n", "title": "On a result of Fel'dman on linear forms in the values of some E-functions" }
null
null
[ "Mathematics" ]
null
true
null
6076
null
Validated
null
null
null
{ "abstract": " In this work, we mainly study the influence of the 2D warping module for\none-shot face recognition.\n", "title": "Learning Low-shot facial representations via 2D warping" }
null
null
null
null
true
null
6077
null
Default
null
null
null
{ "abstract": " We describe a general theory for surface-catalyzed bimolecular reactions in\nresponsive nanoreactors, catalytically active nanoparticles coated by a\nstimuli-responsive 'gating' shell, whose permeability controls the activity of\nthe process. We address two archetypal scenarios encountered in this system:\nThe first, where two species diffusing from a bulk solution react at the\ncatalyst's surface; the second where only one of the reactants diffuses from\nthe bulk while the other one is produced at the nanoparticle surface, e.g., by\nlight conversion. We find that in both scenarios the total catalytic rate has\nthe same mathematical structure, once diffusion rates are properly redefined.\nMoreover, the diffusional fluxes of the different reactants are strongly\ncoupled, providing a richer behavior than that arising in unimolecular\nreactions. We also show that in stark contrast to bulk reactions, the\nidentification of a limiting reactant is not simply determined by the relative\nbulk concentrations but controlled by the nanoreactor shell permeability.\nFinally, we describe an application of our theory by analyzing experimental\ndata on the reaction between hexacyanoferrate (III) and borohydride ions in\nresponsive hydrogel-based core-shell nanoreactors.\n", "title": "Catalyzed bimolecular reactions in responsive nanoreactors" }
null
null
null
null
true
null
6078
null
Default
null
null
null
{ "abstract": " Fast algorithms for optimal multi-robot path planning are sought after in\nmany real-world applications. Known methods, however, generally do not\nsimultaneously guarantee good solution optimality and fast run time (e.g.,\npolynomial). In this work, we develop a low-polynomial running time algorithm,\ncalled SplitAndGroup (SAG),that solves the multi-robot path planning problem on\ngrids and grid-like environments and produces constant factor makespan-optimal\nsolutions in the average case. That is, SAG is an average case\nO(1)-approximation algorithm. SAG computes solutions with sub-linear makespan\nand is capable of handling cases when the density of robots is extremely high -\nin a graph-theoretic setting, the algorithm supports cases where all vertices\nof the underlying graph are occupied by robots. SAG attains its desirable\nproperties through a careful combination of divide-and-conquer technique and\nnetwork flow based methods for routing the robots. Solutions from SAG, in a\nweaker sense, is also a constant factor approximation on total distance\noptimality.\n", "title": "Average Case Constant Factor Time and Distance Optimal Multi-Robot Path Planning in Well-Connected Environments" }
null
null
null
null
true
null
6079
null
Default
null
null
null
{ "abstract": " We argue that hierarchical methods can become the key for modular robots\nachieving reconfigurability. We present a hierarchical approach for modular\nrobots that allows a robot to simultaneously learn multiple tasks. Our\nevaluation results present an environment composed of two different modular\nrobot configurations, namely 3 degrees-of-freedom (DoF) and 4DoF with two\ncorresponding targets. During the training, we switch between configurations\nand targets aiming to evaluate the possibility of training a neural network\nthat is able to select appropriate motor primitives and robot configuration to\nachieve the target. The trained neural network is then transferred and executed\non a real robot with 3DoF and 4DoF configurations. We demonstrate how this\ntechnique generalizes to robots with different configurations and tasks.\n", "title": "Hierarchical Learning for Modular Robots" }
null
null
null
null
true
null
6080
null
Default
null
null
null
{ "abstract": " In this paper, we investigate the impact of diverse user preference on\nlearning under the stochastic multi-armed bandit (MAB) framework. We aim to\nshow that when the user preferences are sufficiently diverse and each arm can\nbe optimal for certain users, the O(log T) regret incurred by exploring the\nsub-optimal arms under the standard stochastic MAB setting can be reduced to a\nconstant. Our intuition is that to achieve sub-linear regret, the number of\ntimes an optimal arm being pulled should scale linearly in time; when all arms\nare optimal for certain users and pulled frequently, the estimated arm\nstatistics can quickly converge to their true values, thus reducing the need of\nexploration dramatically. We cast the problem into a stochastic linear bandits\nmodel, where both the users preferences and the state of arms are modeled as\n{independent and identical distributed (i.i.d)} d-dimensional random vectors.\nAfter receiving the user preference vector at the beginning of each time slot,\nthe learner pulls an arm and receives a reward as the linear product of the\npreference vector and the arm state vector. We also assume that the state of\nthe pulled arm is revealed to the learner once its pulled. We propose a\nWeighted Upper Confidence Bound (W-UCB) algorithm and show that it can achieve\na constant regret when the user preferences are sufficiently diverse. The\nperformance of W-UCB under general setups is also completely characterized and\nvalidated with synthetic data.\n", "title": "Online Learning with Diverse User Preferences" }
null
null
null
null
true
null
6081
null
Default
null
null
null
{ "abstract": " The paper addresses the problem of passivation of a class of nonlinear\nsystems where the dynamics are unknown. For this purpose, we use the highly\nflexible, data-driven Gaussian process regression for the identification of the\nunknown dynamics for feed-forward compensation. The closed loop system of the\nnonlinear system, the Gaussian process model and a feedback control law is\nguaranteed to be semi-passive with a specific probability. The predicted\nvariance of the Gaussian process regression is used to bound the model error\nwhich additionally allows to specify the state space region where the\nclosed-loop system behaves passive. Finally, the theoretical results are\nillustrated by a simulation.\n", "title": "Gaussian Process based Passivation of a Class of Nonlinear Systems with Unknown Dynamics" }
null
null
null
null
true
null
6082
null
Default
null
null
null
{ "abstract": " We present a new model-based integrative method for clustering objects given\nboth vectorial data, which describes the feature of each object, and network\ndata, which indicates the similarity of connected objects. The proposed general\nmodel is able to cluster the two types of data simultaneously within one\nintegrative probabilistic model, while traditional methods can only handle one\ndata type or depend on transforming one data type to another. Bayesian\ninference of the clustering is conducted based on a Markov chain Monte Carlo\nalgorithm. A special case of the general model combining the Gaussian mixture\nmodel and the stochastic block model is extensively studied. We used both\nsynthetic data and real data to evaluate this new method and compare it with\nalternative methods. The results show that our simultaneous clustering method\nperforms much better. This improvement is due to the power of the model-based\nprobabilistic approach for efficiently integrating information.\n", "title": "A Bayesian Method for Joint Clustering of Vectorial Data and Network Data" }
null
null
null
null
true
null
6083
null
Default
null
null
null
{ "abstract": " We study pointwise-generalized-inverses of linear maps between\nC$^*$-algebras. Let $\\Phi$ and $\\Psi$ be linear maps between complex Banach\nalgebras $A$ and $B$. We say that $\\Psi$ is a pointwise-generalized-inverse of\n$\\Phi$ if $\\Phi(aba)=\\Phi(a)\\Psi(b)\\Phi(a),$ for every $a,b\\in A$. The pair\n$(\\Phi,\\Psi)$ is Jordan-triple multiplicative if $\\Phi$ is a\npointwise-generalized-inverse of $\\Psi$ and the latter is a\npointwise-generalized-inverse of $\\Phi$. We study the basic properties of this\nmaps in connection with Jordan homomorphism, triple homomorphisms and strongly\npreservers. We also determine conditions to guarantee the automatic continuity\nof the pointwise-generalized-inverse of continuous operator between\nC$^*$-algebras. An appropriate generalization is introduced in the setting of\nJB$^*$-triples.\n", "title": "Pointwise-generalized-inverses of linear maps between C$^*$-algebras and JB$^*$-triples" }
null
null
null
null
true
null
6084
null
Default
null
null
null
{ "abstract": " We compute cup product pairings in the integral cohomology ring of the moduli\nspace of rank two stable bundles with odd determinant over a Riemann surface\nusing methods of Zagier. The resulting formula is related to a generating\nfunction for certain skew Schur polynomials. As an application, we compute the\nnilpotency degree of a distinguished degree two generator in the mod two\ncohomology ring. We then give descriptions of the mod two cohomology rings in\nlow genus, and describe the subrings invariant under the mapping class group\naction.\n", "title": "The cohomology of rank two stable bundle moduli: mod two nilpotency & skew Schur polynomials" }
null
null
null
null
true
null
6085
null
Default
null
null
null
{ "abstract": " We present a sound and automated approach to synthesize safe digital feedback\ncontrollers for physical plants represented as linear, time invariant models.\nModels are given as dynamical equations with inputs, evolving over a continuous\nstate space and accounting for errors due to the digitalization of signals by\nthe controller. Our approach has two stages, leveraging counterexample guided\ninductive synthesis (CEGIS) and reachability analysis. CEGIS synthesizes a\nstatic feedback controller that stabilizes the system under restrictions given\nby the safety of the reach space. Safety is verified either via BMC or abstract\nacceleration; if the verification step fails, we refine the controller by\ngeneralizing the counterexample. We synthesize stable and safe controllers for\nintricate physical plant models from the digital control literature.\n", "title": "Automated Formal Synthesis of Digital Controllers for State-Space Physical Plants" }
null
null
null
null
true
null
6086
null
Default
null
null
null
{ "abstract": " This paper presents an interconnected control-planning strategy for redundant\nmanipulators, subject to system and environmental constraints. The method\nincorporates low-level control characteristics and high-level planning\ncomponents into a robust strategy for manipulators acting in complex\nenvironments, subject to joint limits. This strategy is formulated using an\nadaptive control rule, the estimated dynamic model of the robotic system and\nthe nullspace of the linearized constraints. A path is generated that takes\ninto account the capabilities of the platform. The proposed method is\ncomputationally efficient, enabling its implementation on a real multi-body\nrobotic system. Through experimental results with a 7 DOF manipulator, we\ndemonstrate the performance of the method in real-world scenarios.\n", "title": "A constrained control-planning strategy for redundant manipulators" }
null
null
[ "Computer Science" ]
null
true
null
6087
null
Validated
null
null
null
{ "abstract": " We study and formulate the Lagrangian for the LC, RC, RL, and RLC circuits by\nusing the analogy concept with the mechanical problem in classical mechanics\nformulations. We found that the Lagrangian for the LC and RLC circuits are\ngoverned by two terms i. e. kinetic energy-like and potential energy-like\nterms. The Lagrangian for the RC circuit is only a contribution from the\npotential energy-like term and the Lagrangian for the RL circuit is only from\nthe kinetic energy-like term.\n", "title": "Lagrangian for RLC circuits using analogy with the classical mechanics concepts" }
null
null
null
null
true
null
6088
null
Default
null
null
null
{ "abstract": " In this note, we provide critical commentary on two articles that cast doubt\non the validity and implications of Birnbaum's theorem: Evans (2013) and Mayo\n(2014). In our view, the proof is correct and the consequences of the theorem\nare alive and well.\n", "title": "A note on recent criticisms to Birnbaum's theorem" }
null
null
null
null
true
null
6089
null
Default
null
null
null
{ "abstract": " In this paper, we focus on the numerical simulation of phase separation about\nmacromolecule microsphere composite (MMC) hydrogel. The model equation is based\non Time-Dependent Ginzburg-Landau (TDGL) equation with reticular free energy.\nWe have put forward two $L^2$ stable schemes to simulate simplified TDGL\nequation. In numerical experiments, we observe that simulating the whole\nprocess of phase separation requires a considerably long time. We also notice\nthat the total free energy changes significantly in initial time and varies\nslightly in the following time. Based on these properties, we introduce an\nadaptive strategy based on one of stable scheme mentioned. It is found that the\nintroduction of the time adaptivity cannot only resolve the dynamical changes\nof the solution accurately but also can significantly save CPU time for the\nlong time simulation.\n", "title": "High efficiently numerical simulation of the TDGL equation with reticular free energy in hydrogel" }
null
null
null
null
true
null
6090
null
Default
null
null
null
{ "abstract": " In distributed systems based on the Quantum Recursive Network Architecture,\nquantum channels and quantum memories are used to establish entangled quantum\nstates between node pairs. Such systems are robust against attackers that\ninteract with the quantum channels. Conversely, weaknesses emerge when an\nattacker takes full control of a node and alters the configuration of the local\nquantum memory, either to make a denial-of-service attack or to reprogram the\nnode. In such a scenario, entanglement verification over quantum memories is a\nmeans for detecting the intruder. Usually, entanglement verification approaches\nfocus either on untrusted sources of entangled qubits (photons, in most cases)\nor on eavesdroppers that interfere with the quantum channel while entangled\nqubits are transmitted. Instead, in this work we assume that the source of\nentanglement is trusted, but parties may be dishonest. Looking for efficient\nentanglement verification protocols that only require classical channels and\nlocal quantum operations to work, we thoroughly analyze the one proposed by\nNagy and Akl, that we denote as NA2010 for simplicity, and we define and\nanalyze two entanglement verification protocols based on teleportation (denoted\nas AC1 and AC2), characterized by increasing efficiency in terms of intrusion\ndetection probability versus sacrificed quantum resources.\n", "title": "Entanglement verification protocols for distributed systems based on the Quantum Recursive Network Architecture" }
null
null
[ "Computer Science" ]
null
true
null
6091
null
Validated
null
null
null
{ "abstract": " Lower bound on the rate of decrease in time of the uniform radius of spatial\nanalyticity of solutions to the quartic generalized KdV equation is derived,\nwhich improves an earlier result by Bona, Grujić and Kalisch.\n", "title": "On the radius of spatial analyticity for the quartic generalized KdV equation" }
null
null
null
null
true
null
6092
null
Default
null
null
null
{ "abstract": " Datasets are often used multiple times and each successive analysis may\ndepend on the outcome of previous analyses. Standard techniques for ensuring\ngeneralization and statistical validity do not account for this adaptive\ndependence. A recent line of work studies the challenges that arise from such\nadaptive data reuse by considering the problem of answering a sequence of\n\"queries\" about the data distribution where each query may depend arbitrarily\non answers to previous queries.\nThe strongest results obtained for this problem rely on differential privacy\n-- a strong notion of algorithmic stability with the important property that it\n\"composes\" well when data is reused. However the notion is rather strict, as it\nrequires stability under replacement of an arbitrary data element. The simplest\nalgorithm is to add Gaussian (or Laplace) noise to distort the empirical\nanswers. However, analysing this technique using differential privacy yields\nsuboptimal accuracy guarantees when the queries have low variance. Here we\npropose a relaxed notion of stability that also composes adaptively. We\ndemonstrate that a simple and natural algorithm based on adding noise scaled to\nthe standard deviation of the query provides our notion of stability. This\nimplies an algorithm that can answer statistical queries about the dataset with\nsubstantially improved accuracy guarantees for low-variance queries. The only\nprevious approach that provides such accuracy guarantees is based on a more\ninvolved differentially private median-of-means algorithm and its analysis\nexploits stronger \"group\" stability of the algorithm.\n", "title": "Calibrating Noise to Variance in Adaptive Data Analysis" }
null
null
null
null
true
null
6093
null
Default
null
null
null
{ "abstract": " We consider strictly stationary stochastic processes of Hilbert space-valued\nrandom variables and focus on tests of the equality of the lag-zero\nautocovariance operators of several independent functional time series. A\nmoving block bootstrap-based testing procedure is proposed which generates\npseudo random elements that satisfy the null hypothesis of interest. It is\nbased on directly bootstrapping the time series of tensor products which\novercomes some common difficulties associated with applications of the\nbootstrap to related testing problems. The suggested methodology can be\npotentially applied to a broad range of test statistics of the hypotheses of\ninterest. As an example, we establish validity for approximating the\ndistribution under the null of a fully functional test statistic based on the\nHilbert-Schmidt distance of the corresponding sample lag-zero autocovariance\noperators, and show consistency under the alternative. As a prerequisite, we\nprove a central limit theorem for the moving block bootstrap procedure applied\nto the sample autocovariance operator which is of interest on its own. The\nfinite sample size and power performance of the suggested moving block\nbootstrap-based testing procedure is illustrated through simulations and an\napplication to a real-life dataset is discussed.\n", "title": "Testing Equality of Autocovariance Operators for Functional Time Series" }
null
null
null
null
true
null
6094
null
Default
null
null
null
{ "abstract": " We present the Lyman-$\\alpha$ flux power spectrum measurements of the XQ-100\nsample of quasar spectra obtained in the context of the European Southern\nObservatory Large Programme \"Quasars and their absorption lines: a legacy\nsurvey of the high redshift universe with VLT/XSHOOTER\". Using $100$ quasar\nspectra with medium resolution and signal-to-noise ratio we measure the power\nspectrum over a range of redshifts $z = 3 - 4.2$ and over a range of scales $k\n= 0.003 - 0.06\\,\\mathrm{s\\,km^{-1}}$. The results agree well with the\nmeasurements of the one-dimensional power spectrum found in the literature. The\ndata analysis used in this paper is based on the Fourier transform and has been\ntested on synthetic data. Systematic and statistical uncertainties of our\nmeasurements are estimated, with a total error (statistical and systematic)\ncomparable to the one of the BOSS data in the overlapping range of scales, and\nsmaller by more than $50\\%$ for higher redshift bins ($z>3.6$) and small scales\n($k > 0.01\\,\\mathrm{s\\,km^{-1}}$). The XQ-100 data set has the unique feature\nof having signal-to-noise ratios and resolution intermediate between the two\ndata sets that are typically used to perform cosmological studies, i.e. BOSS\nand high-resolution spectra (e.g. UVES/VLT or HIRES). More importantly, the\nmeasured flux power spectra span the high redshift regime which is usually more\nconstraining for structure formation models.\n", "title": "The Lyman-alpha forest power spectrum from the XQ-100 Legacy Survey" }
null
null
[ "Physics" ]
null
true
null
6095
null
Validated
null
null
null
{ "abstract": " We explore all warped $AdS_4\\times_w M^{D-4}$ backgrounds with the most\ngeneral allowed fluxes that preserve more than 16 supersymmetries in $D=10$-\nand $11$-dimensional supergravities. After imposing the assumption that either\nthe internal space $M^{D-4}$ is compact without boundary or the isometry\nalgebra of the background decomposes into that of AdS$_4$ and that of\n$M^{D-4}$, we find that there are no such backgrounds in IIB supergravity.\nSimilarly in IIA supergravity, there is a unique such background with 24\nsupersymmetries locally isometric to $AdS_4\\times \\mathbb{CP}^3$, and in $D=11$\nsupergravity all such backgrounds are locally isometric to the maximally\nsupersymmetric $AdS_4\\times S^7$ solution.\n", "title": "AdS4 backgrounds with N>16 supersymmetries in 10 and 11 dimensions" }
null
null
null
null
true
null
6096
null
Default
null
null
null
{ "abstract": " Polariton lasing is the coherent emission arising from a macroscopic\npolariton condensate first proposed in 1996. Over the past two decades,\npolariton lasing has been demonstrated in a few inorganic and organic\nsemiconductors in both low and room temperatures. Polariton lasing in inorganic\nmaterials significantly relies on sophisticated epitaxial growth of crystalline\ngain medium layers sandwiched by two distributed Bragg reflectors in which\ncombating the built-in strain and mismatched thermal properties is nontrivial.\nOn the other hand, organic active media usually suffer from large threshold\ndensity and weak nonlinearity due to the Frenkel exciton nature. Further\ndevelopment of polariton lasing towards technologically significant\napplications demand more accessible materials, ease of device fabrication and\nbroadly tunable emission at room temperature. Herein, we report the\nexperimental realization of room-temperature polariton lasing based on an\nepitaxy-free all-inorganic cesium lead chloride perovskite microcavity.\nPolariton lasing is unambiguously evidenced by a superlinear power dependence,\nmacroscopic ground state occupation, blueshift of ground state emission,\nnarrowing of the linewidth and the build-up of long-range spatial coherence.\nOur work suggests considerable promise of lead halide perovskites towards\nlarge-area, low-cost, high performance room temperature polariton devices and\ncoherent light sources extending from the ultraviolet to near infrared range.\n", "title": "Room Temperature Polariton Lasing in All-Inorganic Perovskites" }
null
null
[ "Physics" ]
null
true
null
6097
null
Validated
null
null
null
{ "abstract": " Investigation of the reversibility of the directional hierarchy in the\ninterdependency among the notions of conditional independence, conditional mean\nindependence, and zero conditional covariance, for two random variables X and Y\ngiven a conditioning element Z which is not constrained by any topological\nrestriction on its range, reveals that if the first moments of X, Y, and XY\nexist, then conditional independence implies conditional mean independence and\nconditional mean independence implies zero conditional covariance, but the\ndirection of the hierarchy is not reversible in general. If the conditional\nexpectation of Y given X and Z is \"affine in X,\" which happens when X is\nBernoulli, then the \"intercept\" and \"slope\" of the conditional expectation\n(that is, the nonparametric regression function) equal the \"intercept\" and\n\"slope\" of the \"least-squares linear regression function\", as a result of which\nzero conditional covariance implies conditional mean independence.\n", "title": "Conditional Independence, Conditional Mean Independence, and Zero Conditional Covariance" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
6098
null
Validated
null
null
null
{ "abstract": " There is a widely-accepted need to revise current forms of health-care\nprovision, with particular interest in sensing systems in the home. Given a\nmultiple-modality sensor platform with heterogeneous network connectivity, as\nis under development in the Sensor Platform for HEalthcare in Residential\nEnvironment (SPHERE) Interdisciplinary Research Collaboration (IRC), we face\nspecific challenges relating to the fusion of the heterogeneous sensor\nmodalities.\nWe introduce Bayesian models for sensor fusion, which aims to address the\nchallenges of fusion of heterogeneous sensor modalities. Using this approach we\nare able to identify the modalities that have most utility for each particular\nactivity, and simultaneously identify which features within that activity are\nmost relevant for a given activity.\nWe further show how the two separate tasks of location prediction and\nactivity recognition can be fused into a single model, which allows for\nsimultaneous learning an prediction for both tasks.\nWe analyse the performance of this model on data collected in the SPHERE\nhouse, and show its utility. We also compare against some benchmark models\nwhich do not have the full structure,and show how the proposed model compares\nfavourably to these methods\n", "title": "Probabilistic Sensor Fusion for Ambient Assisted Living" }
null
null
null
null
true
null
6099
null
Default
null
null
null
{ "abstract": " We extend the framework by Kawamura and Cook for investigating computational\ncomplexity for operators occurring in analysis. This model is based on\nsecond-order complexity theory for functions on the Baire space, which is\nlifted to metric spaces by means of representations. Time is measured in terms\nof the length of the input encodings and the required output precision. We\npropose the notions of a complete representation and of a regular\nrepresentation. We show that complete representations ensure that any\ncomputable function has a time bound. Regular representations generalize\nKawamura and Cook's more restrictive notion of a second-order representation,\nwhile still guaranteeing fast computability of the length of the encodings.\nApplying these notions, we investigate the relationship between purely metric\nproperties of a metric space and the existence of a representation such that\nthe metric is computable within bounded time. We show that a bound on the\nrunning time of the metric can be straightforwardly translated into size bounds\nof compact subsets of the metric space. Conversely, for compact spaces and for\nBanach spaces we construct a family of admissible, complete, regular\nrepresentations that allow for fast computation of the metric and provide short\nencodings. Here it is necessary to trade the time bound off against the length\nof encodings.\n", "title": "Bounded time computation on metric spaces and Banach spaces" }
null
null
[ "Computer Science", "Mathematics" ]
null
true
null
6100
null
Validated
null
null