Dataset schema (one value per field, per record):

- text: null
- inputs: dict — { "abstract": ..., "title": ... }
- prediction: null
- prediction_agent: null
- annotation: list of subject labels, or null
- annotation_agent: null
- multi_label: bool (1 class: true)
- explanation: null
- id: string, length 1–5
- metadata: null
- status: string, 2 classes ("Default", "Validated")
- event_timestamp: null
- metrics: null

In all records shown, text, prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, and metrics are null.
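The schema above can be sketched as a small data structure. This is a minimal illustration, not the dataset's own loader: the `Record` class name and the always-null fields being given defaults are my assumptions.

```python
import json
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Record:
    """One row of the dataset, following the schema listed above.

    Fields that are null in every record shown (text, prediction,
    metadata, etc.) are omitted here for brevity.
    """
    inputs: dict                              # {"abstract": ..., "title": ...}
    id: str                                   # e.g. "2002" (length 1-5)
    status: str                               # "Default" or "Validated"
    multi_label: bool = True                  # single observed class: true
    annotation: Optional[List[str]] = None    # subject labels, e.g. ["Physics"]

# A record's inputs arrive as a JSON object; parse it, then attach metadata.
raw = '{ "abstract": " Sample abstract.\\n", "title": "Sample title" }'
rec = Record(inputs=json.loads(raw), id="2002", status="Validated",
             annotation=["Computer Science", "Mathematics", "Statistics"])
print(rec.id, rec.status, rec.annotation)
```

Records with `status: Default` carry no annotation (`annotation is None`), while `Validated` records carry a non-empty label list — the pattern visible throughout the rows below.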
{ "abstract": " Interactive Music Systems (IMS) have introduced a new world of music-making\nmodalities. But can we really say that they create music, as in true autonomous\ncreation? Here we discuss Video Interactive VST Orchestra (VIVO), an IMS that\nconsiders extra-musical information by adopting a simple salience based model\nof user-system interaction when simulating intentionality in automatic music\ngeneration. Key features of the theoretical framework, a brief overview of\npilot research, and a case study providing validation of the model are\npresented. This research demonstrates that a meaningful user/system interplay\nis established in what we define as reflexive multidominance.\n", "title": "Autonomy in the interactive music system VIVO" }
id: 2001 | annotation: null | multi_label: true | status: Default

{ "abstract": " We study the relationship between information- and estimation-theoretic\nquantities in time-evolving systems. We focus on the Fokker-Planck channel\ndefined by a general stochastic differential equation, and show that the time\nderivatives of entropy, KL divergence, and mutual information are characterized\nby estimation-theoretic quantities involving an appropriate generalization of\nthe Fisher information. Our results vastly extend De Bruijn's identity and the\nclassical I-MMSE relation.\n", "title": "Information and estimation in Fokker-Planck channels" }
id: 2002 | annotation: ["Computer Science", "Mathematics", "Statistics"] | multi_label: true | status: Validated

{ "abstract": " How do regions acquire the knowledge they need to diversify their economic\nactivities? How does the migration of workers among firms and industries\ncontribute to the diffusion of that knowledge? Here we measure the industry,\noccupation, and location-specific knowledge carried by workers from one\nestablishment to the next using a dataset summarizing the individual work\nhistory for an entire country. We study pioneer firms--firms operating in an\nindustry that was not present in a region--because the success of pioneers is\nthe basic unit of regional economic diversification. We find that the growth\nand survival of pioneers increase significantly when their first hires are\nworkers with experience in a related industry, and with work experience in the\nsame location, but not with past experience in a related occupation. We compare\nthese results with new firms that are not pioneers and find that\nindustry-specific knowledge is significantly more important for pioneer than\nnon-pioneer firms. To address endogeneity we use Bartik instruments, which\nleverage national fluctuations in the demand for an activity as shocks for\nlocal labor supply. The instrumental variable estimates support the finding\nthat industry-related knowledge is a predictor of the survival and growth of\npioneer firms. These findings expand our understanding of the micro-mechanisms\nunderlying regional economic diversification events.\n", "title": "The role of industry, occupation, and location specific knowledge in the survival of new firms" }
id: 2003 | annotation: ["Quantitative Finance"] | multi_label: true | status: Validated

{ "abstract": " We analyze the problem of learning a single user's preferences in an active\nlearning setting, sequentially and adaptively querying the user over a finite\ntime horizon. Learning is conducted via choice-based queries, where the user\nselects her preferred option among a small subset of offered alternatives.\nThese queries have been shown to be a robust and efficient way to learn an\nindividual's preferences. We take a parametric approach and model the user's\npreferences through a linear classifier, using a Bayesian prior to encode our\ncurrent knowledge of this classifier. The rate at which we learn depends on the\nalternatives offered at every time epoch. Under certain noise assumptions, we\nshow that the Bayes-optimal policy for maximally reducing entropy of the\nposterior distribution of this linear classifier is a greedy policy, and that\nthis policy achieves a linear lower bound when alternatives can be constructed\nfrom the continuum. Further, we analyze a different metric called\nmisclassification error, proving that the performance of the optimal policy\nthat minimizes misclassification error is bounded below by a linear function of\ndifferential entropy. Lastly, we numerically compare the greedy entropy\nreduction policy with a knowledge gradient policy under a number of scenarios,\nexamining their performance under both differential entropy and\nmisclassification error.\n", "title": "Bayes-Optimal Entropy Pursuit for Active Choice-Based Preference Learning" }
id: 2004 | annotation: null | multi_label: true | status: Default

{ "abstract": " It is widely observed that deep learning models with learned parameters\ngeneralize well, even with much more model parameters than the number of\ntraining samples. We systematically investigate the underlying reasons why deep\nneural networks often generalize well, and reveal the difference between the\nminima (with the same training error) that generalize well and those they\ndon't. We show that it is the characteristics the landscape of the loss\nfunction that explains the good generalization capability. For the landscape of\nloss function for deep networks, the volume of basin of attraction of good\nminima dominates over that of poor minima, which guarantees optimization\nmethods with random initialization to converge to good minima. We theoretically\njustify our findings through analyzing 2-layer neural networks; and show that\nthe low-complexity solutions have a small norm of Hessian matrix with respect\nto model parameters. For deeper networks, extensive numerical evidence helps to\nsupport our arguments.\n", "title": "Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes" }
id: 2005 | annotation: null | multi_label: true | status: Default

{ "abstract": " Artifical Neural Networks are a particular class of learning systems modeled\nafter biological neural functions with an interesting penchant for Hebbian\nlearning, that is \"neurons that wire together, fire together\". However, unlike\ntheir natural counterparts, artificial neural networks have a close and\nstringent coupling between the modules of neurons in the network. This coupling\nor locking imposes upon the network a strict and inflexible structure that\nprevent layers in the network from updating their weights until a full\nfeed-forward and backward pass has occurred. Such a constraint though may have\nsufficed for a while, is now no longer feasible in the era of very-large-scale\nmachine learning, coupled with the increased desire for parallelization of the\nlearning process across multiple computing infrastructures. To solve this\nproblem, synthetic gradients (SG) with decoupled neural interfaces (DNI) are\nintroduced as a viable alternative to the backpropagation algorithm. This paper\nperforms a speed benchmark to compare the speed and accuracy capabilities of\nSG-DNI as opposed to a standard neural interface using multilayer perceptron\nMLP. SG-DNI shows good promise, in that it not only captures the learning\nproblem, it is also over 3-fold faster due to it asynchronous learning\ncapabilities.\n", "title": "Benchmarking Decoupled Neural Interfaces with Synthetic Gradients" }
id: 2006 | annotation: null | multi_label: true | status: Default

{ "abstract": " Macquarie University's contribution to the BioASQ challenge (Task 5b Phase B)\nfocused on the use of query-based extractive summarisation techniques for the\ngeneration of the ideal answers. Four runs were submitted, with approaches\nranging from a trivial system that selected the first $n$ snippets, to the use\nof deep learning approaches under a regression framework. Our experiments and\nthe ROUGE results of the five test batches of BioASQ indicate surprisingly good\nresults for the trivial approach. Overall, most of our runs on the first three\ntest batches achieved the best ROUGE-SU4 results in the challenge.\n", "title": "Macquarie University at BioASQ 5b -- Query-based Summarisation Techniques for Selecting the Ideal Answers" }
id: 2007 | annotation: null | multi_label: true | status: Default

{ "abstract": " Internet-of-things (IoT) architectures connecting a massive number of\nheterogeneous devices need energy efficient, low hardware complexity, low cost,\nsimple and secure mechanisms to realize communication among devices. One of the\nemerging schemes is to realize simultaneous wireless information and power\ntransfer (SWIPT) in an energy harvesting network. Radio frequency (RF)\nsolutions require special hardware and modulation methods for RF to direct\ncurrent (DC) conversion and optimized operation to achieve SWIPT which are\ncurrently in an immature phase. On the other hand, magneto-inductive (MI)\ncommunication transceivers are intrinsically energy harvesting with potential\nfor SWIPT in an efficient manner. In this article, novel modulation and\ndemodulation mechanisms are presented in a combined framework with\nmultiple-access channel (MAC) communication and wireless power transmission.\nThe network topology of power transmitting active coils in a transceiver\ncomposed of a grid of coils is changed as a novel method to transmit\ninformation. Practical demodulation schemes are formulated and numerically\nsimulated for two-user MAC topology of small size coils. The transceivers are\nsuitable to attach to everyday objects to realize reliable local area network\n(LAN) communication performances with tens of meters communication ranges. The\ndesigned scheme is promising for future IoT applications requiring SWIPT with\nenergy efficient, low cost, low power and low hardware complexity solutions.\n", "title": "Network Topology Modulation for Energy and Data Transmission in Internet of Magneto-Inductive Things" }
id: 2008 | annotation: ["Computer Science", "Mathematics"] | multi_label: true | status: Validated

{ "abstract": " We report $T=0$ diffusion Monte Carlo results for the ground-state and vortex\nexcitation of unpolarized spin-1/2 fermions in a two-dimensional disk. We\ninvestigate how vortex core structure properties behave over the BEC-BCS\ncrossover. We calculate the vortex excitation energy, density profiles, and\nvortex core properties related to the current. We find a density suppression at\nthe vortex core on the BCS side of the crossover, and a depleted core on the\nBEC limit. Size-effect dependencies in the disk geometry were carefully\nstudied.\n", "title": "Core structure of two-dimensional Fermi gas vortices in the BEC-BCS crossover region" }
id: 2009 | annotation: null | multi_label: true | status: Default

{ "abstract": " We prove that that the number p of positive eigenvalues of the connection\nLaplacian L of a finite abstract simplicial complex G matches the number b of\neven dimensional simplices in G and that the number n of negative eigenvalues\nmatches the number f of odd-dimensional simplices in G. The Euler\ncharacteristic X(G) of G therefore can be spectrally described as X(G)=p-n.\nThis is in contrast to the more classical Hodge Laplacian H which acts on the\nsame Hilbert space, where X(G) is not yet known to be accessible from the\nspectrum of H. Given an ordering of G coming from a build-up as a CW complex,\nevery simplex x in G is now associated to a unique eigenvector of L and the\ncorrespondence is computable. The Euler characteristic is now not only the\npotential energy summing over all g(x,y) with g=L^{-1} but also agrees with a\nlogarithmic energy tr(log(i L)) 2/(i pi) of the spectrum of L. We also give\nhere examples of L-isospectral but non-isomorphic abstract finite simplicial\ncomplexes. One example shows that we can not hear the cohomology of the\ncomplex.\n", "title": "One can hear the Euler characteristic of a simplicial complex" }
id: 2010 | annotation: null | multi_label: true | status: Default

{ "abstract": " Motivated by the study of collapsing Calabi-Yau threefolds with a Lefschetz\nK3 fibration, we construct a complete Calabi-Yau metric on $\\mathbb{C}^3$ with\nmaximal volume growth, which in the appropriate scale is expected to model the\ncollapsing metric near the nodal point. This new Calabi-Yau metric has singular\ntangent cone at infinity, and its Riemannian geometry has certain non-standard\nfeatures near the singularity of the tangent cone $\\mathbb{C}^2/\\mathbb{Z}_2\n\\times \\mathbb{C}$, which are more typical of adiabatic limit problems. The\nproof uses an existence result in H-J. Hein's PhD thesis to perturb an\nasymptotic approximate solution into an actual solution, and the main\ndifficulty lies in correcting the slowly decaying error terms.\n", "title": "A new complete Calabi-Yau metric on $\\mathbb{C}^3$" }
id: 2011 | annotation: null | multi_label: true | status: Default

{ "abstract": " When it comes to searches for extensions to general relativity, large efforts\nare being dedicated to accurate predictions for the power spectrum of density\nperturbations. While this observable is known to be sensitive to the\ngravitational theory, its efficiency as a diagnostic for gravity is\nsignificantly reduced when Solar System constraints are strictly adhered to. We\nshow that this problem can be overcome by studying weigthed density fields. We\npropose a transformation of the density field for which the impact of modified\ngravity on the power spectrum can be increased by more than a factor of three.\nThe signal is not only amplified, but the modified gravity features are shifted\nto larger scales which are less affected by baryonic physics. Furthermore, the\noverall signal-to-noise increases, which in principle makes identifying\nsignatures of modified gravity with future galaxy surveys more feasible. While\nour analysis is focused on modified gravity, the technique can be applied to\nother problems in cosmology, such as the detection of neutrinos, the effects of\nbaryons or baryon acoustic oscillations.\n", "title": "Weighted density fields as improved probes of modified gravity models" }
id: 2012 | annotation: ["Physics"] | multi_label: true | status: Validated

{ "abstract": " This paper presents the design of a nonlinear control law for a typical\nelectromagnetic actuator system. Electromagnetic actuators are widely\nimplemented in industrial applications, and especially as linear positioning\nsystem. In this work, we aim at taking into account a magnetic phenomenon that\nis usually neglected: flux fringing. This issue is addressed with an uncertain\nmodeling approach. The proposed control law consists of two steps, a\nbackstepping control regulates the mechanical part and a sliding mode approach\ncontrols the coil current and the magnetic force implicitly. An illustrative\nexample shows the effectiveness of the presented approach.\n", "title": "Nonlinear control for an uncertain electromagnetic actuator" }
id: 2013 | annotation: null | multi_label: true | status: Default

{ "abstract": " Probabilistic representations of movement primitives open important new\npossibilities for machine learning in robotics. These representations are able\nto capture the variability of the demonstrations from a teacher as a\nprobability distribution over trajectories, providing a sensible region of\nexploration and the ability to adapt to changes in the robot environment.\nHowever, to be able to capture variability and correlations between different\njoints, a probabilistic movement primitive requires the estimation of a larger\nnumber of parameters compared to their deterministic counterparts, that focus\non modeling only the mean behavior. In this paper, we make use of prior\ndistributions over the parameters of a probabilistic movement primitive to make\nrobust estimates of the parameters with few training instances. In addition, we\nintroduce general purpose operators to adapt movement primitives in joint and\ntask space. The proposed training method and adaptation operators are tested in\na coffee preparation and in robot table tennis task. In the coffee preparation\ntask we evaluate the generalization performance to changes in the location of\nthe coffee grinder and brewing chamber in a target area, achieving the desired\nbehavior after only two demonstrations. In the table tennis task we evaluate\nthe hit and return rates, outperforming previous approaches while using fewer\ntask specific heuristics.\n", "title": "Adaptation and Robust Learning of Probabilistic Movement Primitives" }
id: 2014 | annotation: null | multi_label: true | status: Default

{ "abstract": " Access to the transverse spin of light has unlocked new regimes in\ntopological photonics and optomechanics. To achieve the transverse spin of\nnonzero longitudinal fields, various platforms that derive transversely\nconfined waves based on focusing, interference, or evanescent waves have been\nsuggested. Nonetheless, because of the transverse confinement inherently\naccompanying sign reversal of the field derivative, the resulting transverse\nspin handedness experiences spatial inversion, which leads to a mismatch\nbetween the densities of the wavefunction and its spin component and hinders\nthe global observation of the transverse spin. Here, we reveal a globally pure\ntransverse spin in which the wavefunction density signifies the spin\ndistribution, by employing inverse molding of the eigenmode in the spin basis.\nStarting from the target spin profile, we analytically obtain the potential\nlandscape and then show that the elliptic-hyperbolic transition around the\nepsilon-near-zero permittivity allows for the global conservation of transverse\nspin handedness across the topological interface between anisotropic\nmetamaterials. Extending to the non-Hermitian regime, we also develop\nannihilated transverse spin modes to cover the entire Poincare sphere of the\nmeridional plane. Our results enable the complete transfer of optical energy to\ntransverse spinning motions and realize the classical analogy of 3-dimensional\nquantum spin states.\n", "title": "Transverse spinning of light with globally unique handedness" }
id: 2015 | annotation: null | multi_label: true | status: Default

{ "abstract": " In survival studies, classical inferences for left-truncated data require\nquasi-independence, a property that the joint density of truncation time and\nfailure time is factorizable into their marginal densities in the observable\nregion. The quasi-independence hypothesis is testable; many authors have\ndeveloped tests for left-truncated data with or without right-censoring. In\nthis paper, we propose a class of test statistics for testing the\nquasi-independence which unifies the existing methods and generates new useful\nstatistics such as conditional Spearman's rank correlation coefficient.\nAsymptotic normality of the proposed class of statistics is given. We show that\na new set of tests can be powerful under certain alternatives by theoretical\nand empirical power comparison.\n", "title": "A general class of quasi-independence tests for left-truncated right-censored data" }
id: 2016 | annotation: null | multi_label: true | status: Default

{ "abstract": " We present a strongly interacting quadruple system associated with the K2\ntarget EPIC 220204960. The K2 target itself is a Kp = 12.7 magnitude star at\nTeff ~ 6100 K which we designate as \"B-N\" (blue northerly image). The host of\nthe quadruple system, however, is a Kp = 17 magnitude star with a composite\nM-star spectrum, which we designate as \"R-S\" (red southerly image). With a 3.2\"\nseparation and similar radial velocities and photometric distances, 'B-N' is\nlikely physically associated with 'R-S', making this a quintuple system, but\nthat is incidental to our main claim of a strongly interacting quadruple system\nin 'R-S'. The two binaries in 'R-S' have orbital periods of 13.27 d and 14.41\nd, respectively, and each has an inclination angle of >89 degrees. From our\nanalysis of radial velocity measurements, and of the photometric lightcurve, we\nconclude that all four stars are very similar with masses close to 0.4 Msun.\nBoth of the binaries exhibit significant ETVs where those of the primary and\nsecondary eclipses 'diverge' by 0.05 days over the course of the 80-day\nobservations. Via a systematic set of numerical simulations of quadruple\nsystems consisting of two interacting binaries, we conclude that the outer\norbital period is very likely to be between 300 and 500 days. If sufficient\ntime is devoted to RV studies of this faint target, the outer orbit should be\nmeasurable within a year.\n", "title": "EPIC 220204960: A Quadruple Star System Containing Two Strongly Interacting Eclipsing Binaries" }
id: 2017 | annotation: null | multi_label: true | status: Default

{ "abstract": " We develop the notion of higher Cheeger constants for a measurable set\n$\\Omega \\subset \\mathbb{R}^N$. By the $k$-th Cheeger constant we mean the value\n\\[h_k(\\Omega) = \\inf \\max \\{h_1(E_1), \\dots, h_1(E_k)\\},\\] where the infimum is\ntaken over all $k$-tuples of mutually disjoint subsets of $\\Omega$, and\n$h_1(E_i)$ is the classical Cheeger constant of $E_i$. We prove the existence\nof minimizers satisfying additional \"adjustment\" conditions and study their\nproperties. A relation between $h_k(\\Omega)$ and spectral minimal\n$k$-partitions of $\\Omega$ associated with the first eigenvalues of the\n$p$-Laplacian under homogeneous Dirichlet boundary conditions is stated. The\nresults are applied to determine the second Cheeger constant of some planar\ndomains.\n", "title": "On the higher Cheeger problem" }
id: 2018 | annotation: null | multi_label: true | status: Default

{ "abstract": " Petri nets are an established graphical formalism for modeling and analyzing\nthe behavior of systems. An important consideration of the value of Petri nets\nis their use in describing both the syntax and semantics of modeling\nformalisms. Describing a modeling notation in terms of a formal technique such\nas Petri nets provides a way to minimize ambiguity. Accordingly, it is\nimperative to develop a deep and diverse understanding of Petri nets. This\npaper is directed toward a new, but preliminary, exploration of the semantics\nof such an important tool. Specifically, the concern in this paper is with the\nsemantics of Petri nets interpreted in a modeling language based on the notion\nof machines of things that flow. The semantics of several Petri net diagrams\nare analyzed in terms of flow of things. The results point to the viability of\nthe approach for exploring the underlying assumptions of Petri nets.\n", "title": "Petri Nets and Machines of Things That Flow" }
id: 2019 | annotation: null | multi_label: true | status: Default

{ "abstract": " We consider the problem of low canonical polyadic (CP) rank tensor\ncompletion. A completion is a tensor whose entries agree with the observed\nentries and its rank matches the given CP rank. We analyze the manifold\nstructure corresponding to the tensors with the given rank and define a set of\npolynomials based on the sampling pattern and CP decomposition. Then, we show\nthat finite completability of the sampled tensor is equivalent to having a\ncertain number of algebraically independent polynomials among the defined\npolynomials. Our proposed approach results in characterizing the maximum number\nof algebraically independent polynomials in terms of a simple geometric\nstructure of the sampling pattern, and therefore we obtain the deterministic\nnecessary and sufficient condition on the sampling pattern for finite\ncompletability of the sampled tensor. Moreover, assuming that the entries of\nthe tensor are sampled independently with probability $p$ and using the\nmentioned deterministic analysis, we propose a combinatorial method to derive a\nlower bound on the sampling probability $p$, or equivalently, the number of\nsampled entries that guarantees finite completability with high probability. We\nalso show that the existing result for the matrix completion problem can be\nused to obtain a loose lower bound on the sampling probability $p$. In\naddition, we obtain deterministic and probabilistic conditions for unique\ncompletability. It is seen that the number of samples required for finite or\nunique completability obtained by the proposed analysis on the CP manifold is\norders-of-magnitude lower than that is obtained by the existing analysis on the\nGrassmannian manifold.\n", "title": "Fundamental Conditions for Low-CP-Rank Tensor Completion" }
id: 2020 | annotation: ["Computer Science", "Mathematics", "Statistics"] | multi_label: true | status: Validated

{ "abstract": " We introduce a statistical method to investigate the impact of dyadic\nrelations on complex networks generated from repeated interactions. It is based\non generalised hypergeometric ensembles, a class of statistical network\nensembles developed recently. We represent different types of known relations\nbetween system elements by weighted graphs, separated in the different layers\nof a multiplex network. With our method we can regress the influence of each\nrelational layer, the independent variables, on the interaction counts, the\ndependent variables. Moreover, we can test the statistical significance of the\nrelations as explanatory variables for the observed interactions. To\ndemonstrate the power of our approach and its broad applicability, we will\npresent examples based on synthetic and empirical data.\n", "title": "Multiplex Network Regression: How do relations drive interactions?" }
id: 2021 | annotation: null | multi_label: true | status: Default

{ "abstract": " We study a new model of interactive particle systems which we call the\nrandomly activated cascading exclusion process (RACEP). Particles wake up\naccording to exponential clocks and then take a geometric number of steps. If\nanother particle is encountered during these steps, the first particle goes to\nsleep at that location and the second is activated and proceeds accordingly. We\nconsider a totally asymmetric version of this model which we refer as\nHall-Littlewood-PushTASEP (HL-PushTASEP) on $\\mathbb{Z}_{\\geq 0}$ lattice where\nparticles only move right and where initially particles are distributed\naccording to Bernoulli product measure on $\\mathbb{Z}_{\\geq 0}$. We prove\nKPZ-class limit theorems for the height function fluctuations. Under a\nparticular weak scaling, we also prove convergence to the solution of the KPZ\nequation.\n", "title": "Hall-Littlewood-PushTASEP and its KPZ limit" }
id: 2022 | annotation: null | multi_label: true | status: Default

{ "abstract": " We present pricing mechanisms for several online resource allocation problems\nwhich obtain tight or nearly tight approximations to social welfare. In our\nsettings, buyers arrive online and purchase bundles of items; buyers' values\nfor the bundles are drawn from known distributions. This problem is closely\nrelated to the so-called prophet-inequality of Krengel and Sucheston and its\nextensions in recent literature. Motivated by applications to cloud economics,\nwe consider two kinds of buyer preferences. In the first, items correspond to\ndifferent units of time at which a resource is available; the items are\narranged in a total order and buyers desire intervals of items. The second\ncorresponds to bandwidth allocation over a tree network; the items are edges in\nthe network and buyers desire paths.\nBecause buyers' preferences have complementarities in the settings we\nconsider, recent constant-factor approximations via item prices do not apply,\nand indeed strong negative results are known. We develop static, anonymous\nbundle pricing mechanisms.\nFor the interval preferences setting, we show that static, anonymous bundle\npricings achieve a sublogarithmic competitive ratio, which is optimal (within\nconstant factors) over the class of all online allocation algorithms, truthful\nor not. For the path preferences setting, we obtain a nearly-tight logarithmic\ncompetitive ratio. Both of these results exhibit an exponential improvement\nover item pricings for these settings. Our results extend to settings where the\nseller has multiple copies of each item, with the competitive ratio decreasing\nlinearly with supply. Such a gradual tradeoff between supply and the\ncompetitive ratio for welfare was previously known only for the single item\nprophet inequality.\n", "title": "Pricing for Online Resource Allocation: Intervals and Paths" }
id: 2023 | annotation: null | multi_label: true | status: Default

{ "abstract": " Case-Based Reasoning (CBR) has been widely used to generate good software\neffort estimates. The predictive performance of CBR is a dataset dependent and\nsubject to extremely large space of configuration possibilities. Regardless of\nthe type of adaptation technique, deciding on the optimal number of similar\ncases to be used before applying CBR is a key challenge. In this paper we\npropose a new technique based on Bisecting k-medoids clustering algorithm to\nbetter understanding the structure of a dataset and discovering the the optimal\ncases for each individual project by excluding irrelevant cases. Results\nobtained showed that understanding of the data characteristic prior prediction\nstage can help in automatically finding the best number of cases for each test\nproject. Performance figures of the proposed estimation method are better than\nthose of other regular K-based CBR methods.\n", "title": "Learning best K analogies from data distribution for case-based software effort estimation" }
id: 2024 | annotation: null | multi_label: true | status: Default

{ "abstract": " In this paper we present a new method for determining optimal designs for\nenzyme inhibition kinetic models, which are used to model the influence of the\nconcentration of a substrate and an inhibition on the velocity of a reaction.\nThe approach uses a nonlinear transformation of the vector of predictors such\nthat the model in the new coordinates is given by an incomplete response\nsurface model. Although there exist no explicit solutions of the optimal design\nproblem for incomplete response surface models so far, the corresponding design\nproblem in the new coordinates is substantially more transparent, such that\nexplicit or numerical solutions can be determined more easily. The designs for\nthe original problem can finally be found by an inverse transformation of the\noptimal designs determined for the response surface model. We illustrate the\nmethod determining explicit solutions for the $D$-optimal design and for the\noptimal design problem for estimating the individual coefficients in a\nnon-competitive enzyme inhibition kinetic model.\n", "title": "Optimal designs for enzyme inhibition kinetic models" }
id: 2025 | annotation: null | multi_label: true | status: Default

{ "abstract": " The CALICE collaboration is developing highly granular calorimeters for\nexperiments at a future lepton collider primarily to establish technologies for\nparticle flow event reconstruction. These technologies also find applications\nelsewhere, such as detector upgrades for the LHC. Meanwhile, the large data\nsets collected in an extensive series of beam tests have enabled detailed\nstudies of the properties of hadronic showers in calorimeter systems, resulting\nin improved simulation models and development of sophisticated reconstruction\ntechniques. In this proceeding, highlights are included from studies of the\nstructure of hadronic showers and results on reconstruction techniques for\nimaging calorimetry. In addition, current R&D activities within CALICE are\nsummarized, focusing on technological prototypes that address challenges from\nfull detector system integration and production techniques amenable to mass\nproduction for electromagnetic and hadronic calorimeters based on silicon,\nscintillator, and gas techniques.\n", "title": "Highly Granular Calorimeters: Technologies and Results" }
null
null
[ "Physics" ]
null
true
null
2026
null
Validated
null
null
null
{ "abstract": " Hamiltonian Monte Carlo has emerged as a standard tool for posterior\ncomputation. In this article, we present an extension that can efficiently\nexplore target distributions with discontinuous densities, which in turn\nenables efficient sampling from ordinal parameters though embedding of\nprobability mass functions into continuous spaces. We motivate our approach\nthrough a theory of discontinuous Hamiltonian dynamics and develop a numerical\nsolver of discontinuous dynamics. The proposed numerical solver is the first of\nits kind, with a remarkable ability to exactly preserve the Hamiltonian and\nthus yield a type of rejection-free proposals. We apply our algorithm to\nchallenging posterior inference problems to demonstrate its wide applicability\nand competitive performance.\n", "title": "Discontinuous Hamiltonian Monte Carlo for discrete parameters and discontinuous likelihoods" }
null
null
null
null
true
null
2027
null
Default
null
null
null
{ "abstract": " In this paper, we derive the pointwise upper bounds and lower bounds on the\ngradients of solutions to the Lamé systems with partially infinite\ncoefficients as the surface of discontinuity of the coefficients of the system\nis located very close to the boundary. When the distance tends to zero, the\noptimal blow-up rates of the gradients are established for inclusions with\narbitrary shapes and in all dimensions.\n", "title": "Optimal boundary gradient estimates for Lamé systems with partially infinite coefficients" }
null
null
[ "Mathematics" ]
null
true
null
2028
null
Validated
null
null
null
{ "abstract": " For simulating large networks of neurons Hines proposed a method which uses\nextensively the structure of the arising systems of ordinary differential\nequations in order to obtain an efficient implementation. The original method\nrequires constant step sizes and produces the solution on a staggered grid. In\nthe present paper a one-step modification of this method is introduced and\nanalyzed with respect to their stability properties. The new method allows for\nstep size control. Local error estimators are constructed. The method has been\nimplemented in matlab and tested using simple Hodgkin-Huxley type models.\nComparisons with standard state-of-the-art solvers are provided.\n", "title": "On a variable step size modification of Hines' method in computational neuroscience" }
null
null
null
null
true
null
2029
null
Default
null
null
null
{ "abstract": " This paper describes our submission to the 2017 BioASQ challenge. We\nparticipated in Task B, Phase B which is concerned with biomedical question\nanswering (QA). We focus on factoid and list question, using an extractive QA\nmodel, that is, we restrict our system to output substrings of the provided\ntext snippets. At the core of our system, we use FastQA, a state-of-the-art\nneural QA system. We extended it with biomedical word embeddings and changed\nits answer layer to be able to answer list questions in addition to factoid\nquestions. We pre-trained the model on a large-scale open-domain QA dataset,\nSQuAD, and then fine-tuned the parameters on the BioASQ training set. With our\napproach, we achieve state-of-the-art results on factoid questions and\ncompetitive results on list questions.\n", "title": "Neural Question Answering at BioASQ 5B" }
null
null
null
null
true
null
2030
null
Default
null
null
null
{ "abstract": " Recently, decentralised (on-blockchain) platforms have emerged to complement\ncentralised (off-blockchain) platforms for the implementation of automated,\ndigital (smart) contracts. However, neither alternative can individually\nsatisfy the requirements of a large class of applications. On-blockchain\nplatforms suffer from scalability, performance, transaction costs and other\nlimitations. Off-blockchain platforms are afflicted by drawbacks due to their\ndependence on single trusted third parties. We argue that in several\napplication areas, hybrid platforms composed from the integration of on- and\noff-blockchain platforms are more able to support smart contracts that deliver\nthe desired quality of service (QoS). Hybrid architectures are largely\nunexplored. To help cover the gap, in this paper we discuss the implementation\nof smart contracts on hybrid architectures. As a proof of concept, we show how\na smart contract can be split and executed partially on an off-blockchain\ncontract compliance checker and partially on the Rinkeby Ethereum network. To\ntest the solution, we expose it to sequences of contractual operations\ngenerated mechanically by a contract validator tool.\n", "title": "Implementation of Smart Contracts Using Hybrid Architectures with On- and Off-Blockchain Components" }
null
null
null
null
true
null
2031
null
Default
null
null
null
{ "abstract": " We demonstrate electro-mechanical control of an on-chip GaAs optical beam\nsplitter containing a quantum dot single-photon source. The beam splitter\nconsists of two nanobeam waveguides, which form a directional coupler (DC). The\nsplitting ratio of the DC is controlled by varying the out-of-plane separation\nof the two waveguides using electro-mechanical actuation. We reversibly tune\nthe beam splitter between an initial state, with emission into both output\narms, and a final state with photons emitted into a single output arm. The\ndevice represents a compact and scalable tuning approach for use in III-V\nsemiconductor integrated quantum optical circuits.\n", "title": "Electro-mechanical control of an on-chip optical beam splitter containing an embedded quantum emitter" }
null
null
null
null
true
null
2032
null
Default
null
null
null
{ "abstract": " People speak at different levels of specificity in different situations.\nDepending on their knowledge, interlocutors, mood, etc.} A conversational agent\nshould have this ability and know when to be specific and when to be general.\nWe propose an approach that gives a neural network--based conversational agent\nthis ability. Our approach involves alternating between \\emph{data\ndistillation} and model training : removing training examples that are closest\nto the responses most commonly produced by the model trained from the last\nround and then retrain the model on the remaining dataset. Dialogue generation\nmodels trained with different degrees of data distillation manifest different\nlevels of specificity.\nWe then train a reinforcement learning system for selecting among this pool\nof generation models, to choose the best level of specificity for a given\ninput. Compared to the original generative model trained without distillation,\nthe proposed system is capable of generating more interesting and\nhigher-quality responses, in addition to appropriately adjusting specificity\ndepending on the context.\nOur research constitutes a specific case of a broader approach involving\ntraining multiple subsystems from a single dataset distinguished by differences\nin a specific property one wishes to model. We show that from such a set of\nsubsystems, one can use reinforcement learning to build a system that tailors\nits output to different input contexts at test time.\n", "title": "Data Distillation for Controlling Specificity in Dialogue Generation" }
null
null
null
null
true
null
2033
null
Default
null
null
null
{ "abstract": " We prove that the meet level $m$ of the Trotter-Weil, $\\mathsf{V}_m$ is not\nlocal for all $m \\geq 1$, as conjectured in a paper by Kufleitner and Lauser.\nIn order to show this, we explicitly provide a language whose syntactic\nsemigroup is in $L \\mathsf{V}_m$ and not in $\\mathsf{V}_m*\\mathsf{D}$.\n", "title": "Non-locality of the meet levels of the Trotter-Weil Hierarchy" }
null
null
[ "Computer Science", "Mathematics" ]
null
true
null
2034
null
Validated
null
null
null
{ "abstract": " Aboria is a powerful and flexible C++ library for the implementation of\nparticle-based numerical methods. The particles in such methods can represent\nactual particles (e.g. Molecular Dynamics) or abstract particles used to\ndiscretise a continuous function over a domain (e.g. Radial Basis Functions).\nAboria provides a particle container, compatible with the Standard Template\nLibrary, spatial search data structures, and a Domain Specific Language to\nspecify non-linear operators on the particle set. This paper gives an overview\nof Aboria's design, an example of use, and a performance benchmark.\n", "title": "Particle-based and Meshless Methods with Aboria" }
null
null
null
null
true
null
2035
null
Default
null
null
null
{ "abstract": " In classical mechanics well-known cryptographic algorithms and protocols can\nbe very useful for construction canonical transformations preserving form of\nHamiltonians. We consider application of a standard generic divisor doubling\nfor construction of new auto Bäcklund transformations for the Lagrange top\nand Hénon-Heiles system separable in parabolic coordinates.\n", "title": "Backlund transformations and divisor doubling" }
null
null
null
null
true
null
2036
null
Default
null
null
null
{ "abstract": " Autoencoders have been successful in learning meaningful representations from\nimage datasets. However, their performance on text datasets has not been widely\nstudied. Traditional autoencoders tend to learn possibly trivial\nrepresentations of text documents due to their confounding properties such as\nhigh-dimensionality, sparsity and power-law word distributions. In this paper,\nwe propose a novel k-competitive autoencoder, called KATE, for text documents.\nDue to the competition between the neurons in the hidden layer, each neuron\nbecomes specialized in recognizing specific data patterns, and overall the\nmodel can learn meaningful representations of textual data. A comprehensive set\nof experiments show that KATE can learn better representations than traditional\nautoencoders including denoising, contractive, variational, and k-sparse\nautoencoders. Our model also outperforms deep generative models, probabilistic\ntopic models, and even word representation models (e.g., Word2Vec) in terms of\nseveral downstream tasks such as document classification, regression, and\nretrieval.\n", "title": "KATE: K-Competitive Autoencoder for Text" }
null
null
null
null
true
null
2037
null
Default
null
null
null
{ "abstract": " We consider a large market model of defaultable assets in which the asset\nprice processes are modelled as Heston-type stochastic volatility models with\ndefault upon hitting a lower boundary. We assume that both the asset prices and\ntheir volatilities are correlated through systemic Brownian motions. We are\ninterested in the loss process that arises in this setting and we prove the\nexistence of a large portfolio limit for the empirical measure process of this\nsystem. This limit evolves as a measure valued process and we show that it will\nhave a density given in terms of a solution to a stochastic partial\ndifferential equation of filtering type in the two-dimensional half-space, with\na Dirichlet boundary condition. We employ Malliavin calculus to establish the\nexistence of a regular density for the volatility component, and an\napproximation by models of piecewise constant volatilities combined with a\nkernel smoothing technique to obtain existence and regularity for the full\ntwo-dimensional filtering problem. We are able to establish good regularity\nproperties for solutions, however uniqueness remains an open problem.\n", "title": "Stochastic evolution equations for large portfolios of stochastic volatility models" }
null
null
null
null
true
null
2038
null
Default
null
null
null
{ "abstract": " Life-expectancy is a complex outcome driven by genetic, socio-demographic,\nenvironmental and geographic factors. Increasing socio-economic and health\ndisparities in the United States are propagating the longevity-gap, making it a\ncause for concern. Earlier studies have probed individual factors but an\nintegrated picture to reveal quantifiable actions has been missing. There is a\ngrowing concern about a further widening of healthcare inequality caused by\nArtificial Intelligence (AI) due to differential access to AI-driven services.\nHence, it is imperative to explore and exploit the potential of AI for\nilluminating biases and enabling transparent policy decisions for positive\nsocial and health impact. In this work, we reveal actionable interventions for\ndecreasing the longevity-gap in the United States by analyzing a County-level\ndata resource containing healthcare, socio-economic, behavioral, education and\ndemographic features. We learn an ensemble-averaged structure, draw inferences\nusing the joint probability distribution and extend it to a Bayesian Decision\nNetwork for identifying policy actions. We draw quantitative estimates for the\nimpact of diversity, preventive-care quality and stable-families within the\nunified framework of our decision network. Finally, we make this analysis and\ndashboard available as an interactive web-application for enabling users and\npolicy-makers to validate our reported findings and to explore the impact of\nones beyond reported in this work.\n", "title": "Learning to Address Health Inequality in the United States with a Bayesian Decision Network" }
null
null
[ "Statistics" ]
null
true
null
2039
null
Validated
null
null
null
{ "abstract": " Recent several years have witnessed the surge of asynchronous (async-)\nparallel computing methods due to the extremely big data involved in many\nmodern applications and also the advancement of multi-core machines and\ncomputer clusters. In optimization, most works about async-parallel methods are\non unconstrained problems or those with block separable constraints.\nIn this paper, we propose an async-parallel method based on block coordinate\nupdate (BCU) for solving convex problems with nonseparable linear constraint.\nRunning on a single node, the method becomes a novel randomized primal-dual BCU\nwith adaptive stepsize for multi-block affinely constrained problems. For these\nproblems, Gauss-Seidel cyclic primal-dual BCU needs strong convexity to have\nconvergence. On the contrary, merely assuming convexity, we show that the\nobjective value sequence generated by the proposed algorithm converges in\nprobability to the optimal value and also the constraint residual to zero. In\naddition, we establish an ergodic $O(1/k)$ convergence result, where $k$ is the\nnumber of iterations. Numerical experiments are performed to demonstrate the\nefficiency of the proposed method and significantly better speed-up performance\nthan its sync-parallel counterpart.\n", "title": "Asynchronous parallel primal-dual block update methods" }
null
null
null
null
true
null
2040
null
Default
null
null
null
{ "abstract": " We define some new invariants for 3-manifolds using the space of taut codim-1\nfoliations along with various techniques from noncommutative geometry. These\ninvariants originate from our attempt to generalise Topological Quantum Field\nTheories in the Noncommutative geometry / topology realm.\n", "title": "Towards Noncommutative Topological Quantum Field Theory: New invariants for 3-manifolds" }
null
null
[ "Mathematics" ]
null
true
null
2041
null
Validated
null
null
null
{ "abstract": " In this paper, we propose an image encryption algorithm based on the chaos,\nsubstitution boxes, nonlinear transformation in Galois field and Latin square.\nInitially, the dynamic S boxes are generated using Fisher Yates shuffle method\nand piece wise linear chaotic map. The algorithm utilizes advantages of keyed\nLatin square and transformation to substitute highly correlated digital images\nand yield encrypted image with valued performance. The chaotic behavior is\nachieved using Logistic map which is used to select one of thousand S boxes and\nalso decides the row and column of selected S box. The selected S box value is\ntransformed using nonlinear transformation. Along with the keyed Latin square\ngenerated using a 256 bit external key, used to substitute secretly plain image\npixels in cipher block chaining mode. To further strengthen the security of\nalgorithm, round operation are applied to obtain final ciphered image. The\nexperimental results are performed to evaluate algorithm and the anticipated\nalgorithm is compared with a recent encryption scheme. The analyses demonstrate\nalgorithms effectiveness in providing high security to digital media.\n", "title": "Chaotic Dynamic S Boxes Based Substitution Approach for Digital Images" }
null
null
null
null
true
null
2042
null
Default
null
null
null
{ "abstract": " The recently developed bag-of-paths framework consists in setting a\nGibbs-Boltzmann distribution on all feasible paths of a graph. This probability\ndistribution favors short paths over long ones, with a free parameter (the\ntemperature $T > 0$) controlling the entropic level of the distribution. This\nformalism enables the computation of new distances or dissimilarities,\ninterpolating between the shortest-path and the resistance distance, which have\nbeen shown to perform well in clustering and classification tasks. In this\nwork, the bag-of-paths formalism is extended by adding two independent equality\nconstraints fixing starting and ending nodes distributions of paths. When the\ntemperature is low, this formalism is shown to be equivalent to a relaxation of\nthe optimal transport problem on a network where paths carry a flow between two\ndiscrete distributions on nodes. The randomization is achieved by considering\nfree energy minimization instead of traditional cost minimization. Algorithms\ncomputing the optimal free energy solution are developed for two types of\npaths: hitting (or absorbing) paths and non-hitting, regular paths, and require\nthe inversion of an $n \\times n$ matrix with $n$ being the number of nodes.\nInterestingly, for regular paths, the resulting optimal policy interpolates\nbetween the deterministic optimal transport policy ($T \\rightarrow 0^{+}$) and\nthe solution to the corresponding electrical circuit ($T \\rightarrow \\infty$).\nTwo distance measures between nodes and a dissimilarity between groups of\nnodes, both integrating weights on nodes, are derived from this framework.\n", "title": "Randomized Optimal Transport on a Graph: Framework and New Distance Measures" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
2043
null
Validated
null
null
null
{ "abstract": " Recent advances in analysis of subband amplitude envelopes of natural sounds\nhave resulted in convincing synthesis, showing subband amplitudes to be a\ncrucial component of perception. Probabilistic latent variable analysis is\nparticularly revealing, but existing approaches don't incorporate prior\nknowledge about the physical behaviour of amplitude envelopes, such as\nexponential decay and feedback. We use latent force modelling, a probabilistic\nlearning paradigm that incorporates physical knowledge into Gaussian process\nregression, to model correlation across spectral subband envelopes. We augment\nthe standard latent force model approach by explicitly modelling correlations\nover multiple time steps. Incorporating this prior knowledge strengthens the\ninterpretation of the latent functions as the source that generated the signal.\nWe examine this interpretation via an experiment which shows that sounds\ngenerated by sampling from our probabilistic model are perceived to be more\nrealistic than those generated by similar models based on nonnegative matrix\nfactorisation, even in cases where our model is outperformed from a\nreconstruction error perspective.\n", "title": "A Generative Model for Natural Sounds Based on Latent Force Modelling" }
null
null
null
null
true
null
2044
null
Default
null
null
null
{ "abstract": " We study the Vladimirov fractional differentiation operator $D^\\alpha_N$,\n$\\alpha >0, N\\in \\mathbb Z$, on a $p$-adic ball $B_N=\\{ x\\in \\mathbb Q_p:\\\n|x|_p\\le p^N\\}$. To its known interpretations via restriction from a similar\noperator on $\\mathbb Q_p$ and via a certain stochastic process on $B_N$, we add\nan interpretation as a pseudo-differential operator in terms of the Pontryagin\nduality on the additive group of $B_N$. We investigate the Green function of\n$D^\\alpha_N$ and a nonlinear equation on $B_N$, an analog the classical porous\nmedium equation.\n", "title": "Linear and Nonlinear Heat Equations on a p-Adic Ball" }
null
null
[ "Mathematics" ]
null
true
null
2045
null
Validated
null
null
null
{ "abstract": " We propose position-velocity encoders (PVEs) which learn---without\nsupervision---to encode images to positions and velocities of task-relevant\nobjects. PVEs encode a single image into a low-dimensional position state and\ncompute the velocity state from finite differences in position. In contrast to\nautoencoders, position-velocity encoders are not trained by image\nreconstruction, but by making the position-velocity representation consistent\nwith priors about interacting with the physical world. We applied PVEs to\nseveral simulated control tasks from pixels and achieved promising preliminary\nresults.\n", "title": "PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations" }
null
null
[ "Computer Science" ]
null
true
null
2046
null
Validated
null
null
null
{ "abstract": " Convolutional neural networks (CNNs) are one of the driving forces for the\nadvancement of computer vision. Despite their promising performances on many\ntasks, CNNs still face major obstacles on the road to achieving ideal machine\nintelligence. One is that CNNs are complex and hard to interpret. Another is\nthat standard CNNs require large amounts of annotated data, which is sometimes\nhard to obtain, and it is desirable to learn to recognize objects from few\nexamples. In this work, we address these limitations of CNNs by developing\nnovel, flexible, and interpretable models for few-shot learning. Our models are\nbased on the idea of encoding objects in terms of visual concepts (VCs), which\nare interpretable visual cues represented by the feature vectors within CNNs.\nWe first adapt the learning of VCs to the few-shot setting, and then uncover\ntwo key properties of feature encoding using VCs, which we call category\nsensitivity and spatial pattern. Motivated by these properties, we present two\nintuitive models for the problem of few-shot learning. Experiments show that\nour models achieve competitive performances, while being more flexible and\ninterpretable than alternative state-of-the-art few-shot learning methods. We\nconclude that using VCs helps expose the natural capability of CNNs for\nfew-shot learning.\n", "title": "Few-shot Learning by Exploiting Visual Concepts within CNNs" }
null
null
null
null
true
null
2047
null
Default
null
null
null
{ "abstract": " Linear regression models contaminated by Gaussian noise (inlier) and possibly\nunbounded sparse outliers are common in many signal processing applications.\nSparse recovery inspired robust regression (SRIRR) techniques are shown to\ndeliver high quality estimation performance in such regression models.\nUnfortunately, most SRIRR techniques assume \\textit{a priori} knowledge of\nnoise statistics like inlier noise variance or outlier statistics like number\nof outliers. Both inlier and outlier noise statistics are rarely known\n\\textit{a priori} and this limits the efficient operation of many SRIRR\nalgorithms. This article proposes a novel noise statistics oblivious algorithm\ncalled residual ratio thresholding GARD (RRT-GARD) for robust regression in the\npresence of sparse outliers. RRT-GARD is developed by modifying the recently\nproposed noise statistics dependent greedy algorithm for robust de-noising\n(GARD). Both finite sample and asymptotic analytical results indicate that\nRRT-GARD performs nearly similar to GARD with \\textit{a priori} knowledge of\nnoise statistics. Numerical simulations in real and synthetic data sets also\npoint to the highly competitive performance of RRT-GARD.\n", "title": "Noise Statistics Oblivious GARD For Robust Regression With Sparse Outliers" }
null
null
null
null
true
null
2048
null
Default
null
null
null
{ "abstract": " This paper addresses the problem of depth estimation from a single still\nimage. Inspired by recent works on multi- scale convolutional neural networks\n(CNN), we propose a deep model which fuses complementary information derived\nfrom multiple CNN side outputs. Different from previous methods, the\nintegration is obtained by means of continuous Conditional Random Fields\n(CRFs). In particular, we propose two different variations, one based on a\ncascade of multiple CRFs, the other on a unified graphical model. By designing\na novel CNN implementation of mean-field updates for continuous CRFs, we show\nthat both proposed models can be regarded as sequential deep networks and that\ntraining can be performed end-to-end. Through extensive experimental evaluation\nwe demonstrate the effective- ness of the proposed approach and establish new\nstate of the art results on publicly available datasets.\n", "title": "Multi-Scale Continuous CRFs as Sequential Deep Networks for Monocular Depth Estimation" }
null
null
null
null
true
null
2049
null
Default
null
null
null
{ "abstract": " One of the fundamental results in computability is the existence of\nwell-defined functions that cannot be computed. In this paper we study the\neffects of data representation on computability; we show that, while for each\npossible way of representing data there exist incomputable functions, the\ncomputability of a specific abstract function is never an absolute property,\nbut depends on the representation used for the function domain. We examine the\nscope of this dependency and provide mathematical criteria to favour some\nrepresentations over others. As we shall show, there are strong reasons to\nsuggest that computational enumerability should be an additional axiom for\ncomputation models. We analyze the link between the techniques and effects of\nrepresentation changes and those of oracle machines, showing an important\nconnection between their hierarchies. Finally, these notions enable us to gain\na new insight on the Church-Turing thesis: its interpretation as the underlying\nalgebraic structure to which computation is invariant.\n", "title": "On the relation between representations and computability" }
null
null
null
null
true
null
2050
null
Default
null
null
null
{ "abstract": " Research on mobile collocated interactions has been exploring situations\nwhere collocated users engage in collaborative activities using their personal\nmobile devices (e.g., smartphones and tablets), thus going from\npersonal/individual toward shared/multiuser experiences and interactions. The\nproliferation of ever-smaller computers that can be worn on our wrists (e.g.,\nApple Watch) and other parts of the body (e.g., Google Glass), have expanded\nthe possibilities and increased the complexity of interaction in what we term\nmobile collocated situations. Research on F-formations (or facing formations)\nhas been conducted in traditional settings (e.g., home, office, parties) where\nthe context and the presence of physical elements (e.g., furniture) can\nstrongly influence the way people socially interact with each other. While we\nmay be aware of how people arrange themselves spatially and interact with each\nother at a dinner table, in a classroom, or at a waiting room in a hospital,\nthere are other less-structured, dynamic, and larger-scale spaces that present\ndifferent types of challenges and opportunities for technology to enrich how\npeople experience these (semi-) public spaces. In this article, the authors\nexplore proxemic mobile collocated interactions by looking at F-formations in\nthe wild. They discuss recent efforts to observe how people socially interact\nin dynamic, unstructured, non-traditional settings. The authors also report the\nresults of exploratory F-formation observations conducted in the wild (i.e.,\ntourist attraction).\n", "title": "Towards Proxemic Mobile Collocated Interactions" }
null
null
null
null
true
null
2051
null
Default
null
null
null
{ "abstract": " Instrumental variable (IV) methods are widely used for estimating average\ntreatment effects in the presence of unmeasured confounders. However, the\ncapability of existing IV procedures, and most notably the two-stage residual\ninclusion (2SRI) procedure recommended for use in nonlinear contexts, to\naccount for unmeasured confounders in the Cox proportional hazard model is\nunclear. We show that instrumenting an endogenous treatment induces an\nunmeasured covariate, referred to as an individual frailty in survival analysis\nparlance, which if not accounted for leads to bias. We propose a new procedure\nthat augments 2SRI with an individual frailty and prove that it is consistent\nunder certain conditions. The finite sample-size behavior is studied across a\nbroad set of conditions via Monte Carlo simulations. Finally, the proposed\nmethodology is used to estimate the average effect of carotid endarterectomy\nversus carotid artery stenting on the mortality of patients suffering from\ncarotid artery disease. Results suggest that the 2SRI-frailty estimator\ngenerally reduces the bias of both point and interval estimators compared to\ntraditional 2SRI.\n", "title": "Adjusting for bias introduced by instrumental variable estimation in the Cox Proportional Hazards Model" }
null
null
null
null
true
null
2052
null
Default
null
null
null
{ "abstract": " We prove that the length function for perverse sheaves and algebraic regular\nholonomic D-modules on a smooth complex algebraic variety Y is an absolute\nQ-constructible function. One consequence is: for \"any\" fixed natural (derived)\nfunctor F between constructible complexes or perverse sheaves on two smooth\nvarieties X and Y, the loci of rank one local systems L on X whose image F(L)\nhas prescribed length are Zariski constructible subsets defined over Q,\nobtained from finitely many torsion-translated complex affine algebraic subtori\nof the moduli of rank one local systems via a finite sequence of taking union,\nintersection, and complement.\n", "title": "On the length of perverse sheaves and D-modules" }
null
null
[ "Mathematics" ]
null
true
null
2053
null
Validated
null
null
null
{ "abstract": " Deep neural networks are commonly developed and trained in 32-bit floating\npoint format. Significant gains in performance and energy efficiency could be\nrealized by training and inference in numerical formats optimized for deep\nlearning. Despite advances in limited precision inference in recent years,\ntraining of neural networks in low bit-width remains a challenging problem.\nHere we present the Flexpoint data format, aiming at a complete replacement of\n32-bit floating point format training and inference, designed to support modern\ndeep network topologies without modifications. Flexpoint tensors have a shared\nexponent that is dynamically adjusted to minimize overflows and maximize\navailable dynamic range. We validate Flexpoint by training AlexNet, a deep\nresidual network and a generative adversarial network, using a simulator\nimplemented with the neon deep learning framework. We demonstrate that 16-bit\nFlexpoint closely matches 32-bit floating point in training all three models,\nwithout any need for tuning of model hyperparameters. Our results suggest\nFlexpoint as a promising numerical format for future hardware for training and\ninference.\n", "title": "Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks" }
null
null
null
null
true
null
2054
null
Default
null
null
null
{ "abstract": " We show that the expected size of the maximum agreement subtree of two\n$n$-leaf trees, uniformly random among all trees with the shape, is\n$\\Theta(\\sqrt{n})$. To derive the lower bound, we prove a global structural\nresult on a decomposition of rooted binary trees into subgroups of leaves\ncalled blobs. To obtain the upper bound, we generalize a first moment argument\nfor random tree distributions that are exchangeable and not necessarily\nsampling consistent.\n", "title": "Bounds on the expected size of the maximum agreement subtree for a given tree shape" }
null
null
null
null
true
null
2055
null
Default
null
null
null
{ "abstract": " A systematic first-principles study has been performed to understand the
magnetism of thin film SrRuO$_3$, to which many research efforts have been
devoted but for which no clear consensus has been reached about its ground
state properties. The relative t$_{2g}$ level difference, lattice distortion
as well as the layer thickness act together in determining the spin order. In
particular, it is important to understand the difference between two standard
approximations, namely LDA and GGA, in describing this metallic magnetism.
Landau free energy analysis and the magnetization-energy-ratio plot clearly
show the different tendencies to favor magnetic moment formation, and this
difference is magnified in the thin film limit, where the experimental
information is severely limited. As a result, LDA gives a qualitatively
different prediction from GGA in the experimentally relevant region of strain
whereas both approximations give reasonable results for the bulk phase. We
discuss the origin of this difference and the applicability of standard methods
to the correlated oxide and the metallic magnetic systems.
", "title": "Magnetic ground state of SrRuO$_3$ thin film and applicability of standard first-principles approximations to metallic magnetism" }
null
null
null
null
true
null
2056
null
Default
null
null
null
{ "abstract": " Answering a question of the second listed author we show that there is no\ntall Borel ideal minimal among all tall Borel ideals in the Katětov order.\n", "title": "No minimal tall Borel ideal in the Katětov order" }
null
null
null
null
true
null
2057
null
Default
null
null
null
{ "abstract": " Statistical relational AI (StarAI) aims at reasoning and learning in noisy
domains described in terms of objects and relationships by combining
probability with first-order logic. With huge advances in deep learning in
recent years, combining deep networks with first-order logic has been the
focus of several recent studies. Many of the existing attempts, however, only
focus on relations and ignore object properties. The attempts that do consider
object properties are limited in terms of modelling power or scalability. In
this paper, we develop relational neural networks (RelNNs) by adding hidden
layers to relational logistic regression (the relational counterpart of
logistic regression). We learn latent properties for objects both directly and
through general rules. Back-propagation is used for training these models. A
modular, layer-wise architecture facilitates applying techniques developed
within the deep learning community to our architecture. Initial experiments on
eight tasks over three real-world datasets show that RelNNs are promising
models for relational learning.
", "title": "RelNN: A Deep Neural Model for Relational Learning" }
null
null
null
null
true
null
2058
null
Default
null
null
null
{ "abstract": " Gravitinos are a fundamental prediction of supergravity, their mass ($m_{G}$)
is informative of the value of the SUSY breaking scale, and, if produced during
reheating, their number density is a function of the reheating temperature
($T_{\\text{rh}}$). As a result, constraining their parameter space provides in
turn significant constraints on particle physics and cosmology. We have
previously shown that for gravitinos decaying into photons or charged particles
during the ($\\mu$ and $y$) distortion eras, upcoming CMB spectral distortion
bounds are highly effective in constraining the $T_{\\text{rh}}-m_{G}$ space.
For heavier gravitinos (with lifetimes shorter than a few $\\times10^6$ sec),
distortions are quickly thermalized and energy injections cause a temperature
rise for the CMB bath. If the decay occurs after neutrino decoupling, its
overall effect is a suppression of the effective number of relativistic degrees
of freedom ($N_{\\text{eff}}$). In this paper, we utilize the observational
bounds on $N_{\\text{eff}}$ to constrain gravitino decays, and hence provide new
constraints on gravitinos and reheating. For gravitino masses less than $\\approx
10^5$ GeV, current observations give an upper limit on the reheating scale in
the range of $\\approx 5 \\times 10^{10}- 5 \\times 10^{11}$GeV. For masses
greater than $\\approx 4 \\times 10^3$ GeV they are more stringent than previous
bounds from BBN constraints, coming from photodissociation of deuterium, by
almost 2 orders of magnitude.
", "title": "$ΔN_{\\text{eff}}$ and entropy production from early-decaying gravitinos" }
null
null
null
null
true
null
2059
null
Default
null
null
null
{ "abstract": " We study networks of human decision-makers who independently decide how to
protect themselves against Susceptible-Infected-Susceptible (SIS) epidemics.
Motivated by studies in behavioral economics showing that humans perceive
probabilities in a nonlinear fashion, we examine the impacts of such
misperceptions on the equilibrium protection strategies. In our setting, nodes
choose their curing rates to minimize the infection probability under the
degree-based mean-field approximation of the SIS epidemic plus the cost of
their selected curing rate. We establish the existence of a degree-based
equilibrium under both true and nonlinear perceptions of infection
probabilities (under suitable assumptions). When the per-unit cost of curing
rate is sufficiently high, we show that true expectation minimizers choose the
curing rate to be zero at the equilibrium, while the curing rate is nonzero
under nonlinear probability weighting.
", "title": "Game-Theoretic Choice of Curing Rates Against Networked SIS Epidemics by Human Decision-Makers" }
null
null
null
null
true
null
2060
null
Default
null
null
null
{ "abstract": " Membrane proteins constitute a large portion of the human proteome and
perform a variety of important functions as membrane receptors, transport
proteins, enzymes, signaling proteins, and more. The computational studies of
membrane proteins are usually much more complicated than those of globular
proteins. Here we propose a new continuum model for Poisson-Boltzmann
calculations of membrane channel proteins. Major improvements over the existing
continuum slab model are as follows: 1) The location and thickness of the slab
model are fine-tuned based on explicit-solvent MD simulations. 2) The highly
different accessibility in the membrane and water regions is addressed with a
two-step, two-probe grid labeling procedure, and 3) The water pores/channels
are automatically identified. The new continuum membrane model is optimized (by
adjusting the membrane probe, as well as the slab thickness and center) to best
reproduce the distributions of buried water molecules in the membrane region as
sampled in explicit water simulations. Our optimization also shows that the
widely adopted water probe of 1.4 {\\AA} for globular proteins is a very
reasonable default value for membrane protein simulations. It gives an overall
minimum number of inconsistencies between the continuum and explicit
representations of water distributions in membrane channel proteins, at least
in the water accessible pore/channel regions that we focus on. Finally, we
validate the new membrane model by carrying out binding affinity calculations
for a potassium channel, and we observe a good agreement with experimental
results.
", "title": "A Continuum Poisson-Boltzmann Model for Membrane Channel Proteins" }
null
null
null
null
true
null
2061
null
Default
null
null
null
{ "abstract": " We present spectroscopic redshifts of S(870)>2mJy submillimetre galaxies\n(SMGs) which have been identified from the ALMA follow-up observations of 870um\ndetected sources in the Extended Chandra Deep Field South (the ALMA-LESS\nsurvey). We derive spectroscopic redshifts for 52 SMGs, with a median of\nz=2.4+/-0.1. However, the distribution features a high redshift tail, with ~25%\nof the SMGs at z>3. Spectral diagnostics suggest that the SMGs are young\nstarbursts, and the velocity offsets between the nebular emission and UV ISM\nabsorption lines suggest that many are driving winds, with velocity offsets up\nto 2000km/s. Using the spectroscopic redshifts and the extensive UV-to-radio\nphotometry in this field, we produce optimised spectral energy distributions\n(SEDs) using Magphys, and use the SEDs to infer a median stellar mass of\nM*=(6+/-1)x10^{10}Msol for our SMGs with spectroscopic redshifts. By combining\nthese stellar masses with the star-formation rates (measured from the\nfar-infrared SEDs), we show that SMGs (on average) lie a factor ~5 above the\nmain-sequence at z~2. We provide this library of 52 template fits with robust\nand well-sampled SEDs available as a resource for future studies of SMGs, and\nalso release the spectroscopic catalog of ~2000 (mostly infrared-selected)\ngalaxies targeted as part of the spectroscopic campaign.\n", "title": "An ALMA survey of submillimetre galaxies in the Extended Chandra Deep Field South: Spectroscopic redshifts" }
null
null
null
null
true
null
2062
null
Default
null
null
null
{ "abstract": " The paper treats several aspects of the truncated matricial
$[\\alpha,\\beta]$-Hausdorff type moment problems. It is shown that each
$[\\alpha,\\beta]$-Hausdorff moment sequence has a particular intrinsic
structure. More precisely, each element of this sequence varies within a closed
bounded matricial interval. The case that the corresponding moment coincides
with one of the endpoints of the interval plays a particularly important role.
This leads to distinguished molecular solutions of the truncated matricial
$[\\alpha,\\beta]$-Hausdorff moment problem, which satisfy some extremality
properties. The proofs are mainly of algebraic character. The use of the
parallel sum of matrices is an essential tool in the proofs.
", "title": "On the structure of Hausdorff moment sequences of complex matrices" }
null
null
null
null
true
null
2063
null
Default
null
null
null
{ "abstract": " Let $\\Omega$ be a pseudoconvex domain in $\\mathbb C^n$ satisfying an
$f$-property for some function $f$. We show that the Bergman metric associated
to $\\Omega$ has the lower bound $\\tilde g(\\delta_\\Omega(z)^{-1})$ where
$\\delta_\\Omega(z)$ is the distance from $z$ to the boundary $\\partial\\Omega$
and $\\tilde g$ is a specific function defined by $f$. This refines
Khanh-Zampieri's work in \\cite{KZ12} by reducing the smoothness assumption on
the boundary.
", "title": "Lower bounds on the Bergman metric near points of infinite type" }
null
null
null
null
true
null
2064
null
Default
null
null
null
{ "abstract": " In observational studies, sample surveys, and regression settings,
weighting methods are widely used to adjust for or balance observed covariates.
Recently, a few weighting methods have been proposed that focus on directly
balancing the covariates while minimizing the dispersion of the weights. In
this paper, we call this class of weights minimal approximately balancing
weights (MABW); we study their asymptotic properties and address two
practicalities. We show that, under standard technical conditions, MABW are
consistent estimates of the true inverse probability weights; the resulting
weighting estimator is consistent, asymptotically normal, and
semiparametrically efficient. For applications, we present a finite sample
oracle inequality showing that the loss incurred by balancing too many
functions of the covariates is limited in MABW. We also provide an algorithm
for choosing the degree of approximate balancing in MABW. Finally, we conclude
with numerical results that suggest approximate balancing is preferable to
exact balancing, especially when there is limited overlap in covariate
distributions: the root mean squared error of the weighting estimator can be
reduced by nearly a half.
", "title": "Minimal Approximately Balancing Weights: Asymptotic Properties and Practical Considerations" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
2065
null
Validated
null
null
null
{ "abstract": " Recent studies have shown that frame-level deep speaker features can be
derived from a deep neural network with the training target set to discriminate
speakers by a short speech segment. By pooling the frame-level features,
utterance-level representations, called d-vectors, can be derived and used in
the automatic speaker verification (ASV) task. This simple average pooling,
however, is inherently sensitive to the phonetic content of the utterance. An
interesting idea borrowed from machine translation is the attention-based
mechanism, where the contribution of an input word to the translation at a
particular time is weighted by an attention score. This score reflects the
relevance of the input word and the present translation. We can use the same
idea to align utterances with different phonetic contents. This paper proposes
a phonetic-attention scoring approach for d-vector systems. By this approach,
an attention score is computed for each frame pair. This score reflects the
similarity of the two frames in phonetic content, and is used to weigh the
contribution of this frame pair in the utterance-based scoring. This new
scoring approach emphasizes the frame pairs with similar phonetic contents,
which essentially provides a soft alignment for utterances with any phonetic
contents. Experimental results show that compared with the naive average
pooling, this phonetic-attention scoring approach can deliver consistent
performance improvement in both text-dependent and text-independent ASV tasks.
", "title": "Phonetic-attention scoring for deep speaker features in speaker verification" }
null
null
[ "Computer Science" ]
null
true
null
2066
null
Validated
null
null
null
{ "abstract": " Collective urban mobility embodies the residents' local insights on the city.\nMobility practices of the residents are produced from their spatial choices,\nwhich involve various considerations such as the atmosphere of destinations,\ndistance, past experiences, and preferences. The advances in mobile computing\nand the rise of geo-social platforms have provided the means for capturing the\nmobility practices; however, interpreting the residents' insights is\nchallenging due to the scale and complexity of an urban environment, and its\nunique context. In this paper, we present MobInsight, a framework for making\nlocalized interpretations of urban mobility that reflect various aspects of the\nurbanism. MobInsight extracts a rich set of neighborhood features through\nholistic semantic aggregation, and models the mobility between all-pairs of\nneighborhoods. We evaluate MobInsight with the mobility data of Barcelona and\ndemonstrate diverse localized and semantically-rich interpretations.\n", "title": "MobInsight: A Framework Using Semantic Neighborhood Features for Localized Interpretations of Urban Mobility" }
null
null
[ "Computer Science" ]
null
true
null
2067
null
Validated
null
null
null
{ "abstract": " We investigate the macroeconomic consequences of narrow banking in the\ncontext of stock-flow consistent models. We begin with an extension of the\nGoodwin-Keen model incorporating time deposits, government bills, cash, and\ncentral bank reserves to the base model with loans and demand deposits and use\nit to describe a fractional reserve banking system. We then characterize narrow\nbanking by a full reserve requirement on demand deposits and describe the\nresulting separation between the payment system and lending functions of the\nresulting banking sector. By way of numerical examples, we explore the\nproperties of fractional and full reserve versions of the model and compare\ntheir asymptotic properties. We find that narrow banking does not lead to any\nloss in economic growth when the models converge to a finite equilibrium, while\nallowing for more direct monitoring and prevention of financial breakdowns in\nthe case of explosive asymptotic behaviour.\n", "title": "The Broad Consequences of Narrow Banking" }
null
null
null
null
true
null
2068
null
Default
null
null
null
{ "abstract": " Convolutional Neural Networks (CNNs) are widely used to solve classification\ntasks in computer vision. However, they can be tricked into misclassifying\nspecially crafted `adversarial' samples -- and samples built to trick one model\noften work alarmingly well against other models trained on the same task. In\nthis paper we introduce Sitatapatra, a system designed to block the transfer of\nadversarial samples. It diversifies neural networks using a key, as in\ncryptography, and provides a mechanism for detecting attacks. What's more, when\nadversarial samples are detected they can typically be traced back to the\nindividual device that was used to develop them. The run-time overheads are\nminimal permitting the use of Sitatapatra on constrained systems.\n", "title": "Sitatapatra: Blocking the Transfer of Adversarial Samples" }
null
null
null
null
true
null
2069
null
Default
null
null
null
{ "abstract": " Neurofeedback is a form of brain training in which subjects are fed back\ninformation about some measure of their brain activity which they are\ninstructed to modify in a way thought to be functionally advantageous. Over the\nlast twenty years, NF has been used to treat various neurological and\npsychiatric conditions, and to improve cognitive function in various contexts.\nHowever, despite its growing popularity, each of the main steps in NF comes\nwith its own set of often covert assumptions. Here we critically examine some\nconceptual and methodological issues associated with the way general objectives\nand neural targets of NF are defined, and review the neural mechanisms through\nwhich NF may act, and the way its efficacy is gauged. The NF process is\ncharacterised in terms of functional dynamics, and possible ways in which it\nmay be controlled are discussed. Finally, it is proposed that improving NF will\nrequire better understanding of various fundamental aspects of brain dynamics\nand a more precise definition of functional brain activity and brain-behaviour\nrelationships.\n", "title": "Neurofeedback: principles, appraisal and outstanding issues" }
null
null
null
null
true
null
2070
null
Default
null
null
null
{ "abstract": " Image and video analysis is often a crucial step in the study of animal
behavior and kinematics. Often these analyses require that the positions of one
or more animal landmarks are annotated (marked) in numerous images. The process
of annotating landmarks can require a significant amount of time and tedious
labor, which motivates the need for algorithms that can automatically annotate
landmarks. In the community of scientists that use image and video analysis to
study the 3D flight of animals, there has been a trend of developing more
automated approaches for annotating landmarks, yet they fall short of being
generally applicable. Inspired by the success of Deep Neural Networks (DNNs) on
many problems in the field of computer vision, we investigate how suitable DNNs
are for accurate and automatic annotation of landmarks in video datasets
representative of those collected by scientists studying animals.
Our work shows, through extensive experimentation on videos of hawkmoths,
that DNNs are suitable for automatic and accurate landmark localization. In
particular, we show that one of our proposed DNNs is more accurate than the
current best algorithm for automatic localization of landmarks on hawkmoth
videos. Moreover, we demonstrate how these annotations can be used to
quantitatively analyze the 3D flight of a hawkmoth. To facilitate the use of
DNNs by scientists from many different fields, we provide a self-contained
explanation of what DNNs are, how they work, and how to apply them to other
datasets using the freely available library Caffe and supplemental code that we
provide.
", "title": "Automating Image Analysis by Annotating Landmarks with Deep Neural Networks" }
null
null
null
null
true
null
2071
null
Default
null
null
null
{ "abstract": " We give a survey of recent results on weak-strong uniqueness for compressible\nand incompressible Euler and Navier-Stokes equations, and also make some new\nobservations. The importance of the weak-strong uniqueness principle stems, on\nthe one hand, from the instances of non-uniqueness for the Euler equations\nexhibited in the past years; and on the other hand from the question of\nconvergence of singular limits, for which weak-strong uniqueness represents an\nelegant tool.\n", "title": "Weak-strong uniqueness in fluid dynamics" }
null
null
null
null
true
null
2072
null
Default
null
null
null
{ "abstract": " Accurate diagnosis of Alzheimer's Disease (AD) entails clinical evaluation of
multiple cognition metrics and biomarkers. Metrics such as the Alzheimer's
Disease Assessment Scale - Cognitive test (ADAS-cog) comprise multiple
subscores that quantify different aspects of a patient's cognitive state such
as learning, memory, and language production/comprehension. Although
computer-aided diagnostic techniques for classification of a patient's current
disease state exist, they provide little insight into the relationship between
changes in brain structure and different aspects of a patient's cognitive state
that occur over time in AD. We have developed a Convolutional Neural Network
architecture that can concurrently predict the trajectories of the 13 subscores
that make up a subject's ADAS-cog examination results from a current minimally
preprocessed structural MRI scan up to 36 months from image acquisition time
without resorting to manual feature extraction. Mean performance metrics are
within range of those of existing techniques that require manual feature
selection and are limited to predicting aggregate scores.
", "title": "Cognitive Subscore Trajectory Prediction in Alzheimer's Disease" }
null
null
null
null
true
null
2073
null
Default
null
null
null
{ "abstract": " We introduce a novel approach for training adversarial models by replacing\nthe discriminator score with a bi-modal Gaussian distribution over the\nreal/fake indicator variables. In order to do this, we train the Gaussian\nclassifier to match the target bi-modal distribution implicitly through\nmeta-adversarial training. We hypothesize that this approach ensures a non-zero\ngradient to the generator, even in the limit of a perfect classifier. We test\nour method against standard benchmark image datasets as well as show the\nclassifier output distribution is smooth and has overlap between the real and\nfake modes.\n", "title": "Variance Regularizing Adversarial Learning" }
null
null
null
null
true
null
2074
null
Default
null
null
null
{ "abstract": " The potential for machine learning (ML) systems to amplify social inequities\nand unfairness is receiving increasing popular and academic attention. A surge\nof recent work has focused on the development of algorithmic tools to assess\nand mitigate such unfairness. If these tools are to have a positive impact on\nindustry practice, however, it is crucial that their design be informed by an\nunderstanding of real-world needs. Through 35 semi-structured interviews and an\nanonymous survey of 267 ML practitioners, we conduct the first systematic\ninvestigation of commercial product teams' challenges and needs for support in\ndeveloping fairer ML systems. We identify areas of alignment and disconnect\nbetween the challenges faced by industry practitioners and solutions proposed\nin the fair ML research literature. Based on these findings, we highlight\ndirections for future ML and HCI research that will better address industry\npractitioners' needs.\n", "title": "Improving fairness in machine learning systems: What do industry practitioners need?" }
null
null
[ "Computer Science" ]
null
true
null
2075
null
Validated
null
null
null
{ "abstract": " The prospect of pileup induced backgrounds at the High Luminosity LHC
(HL-LHC) has stimulated intense interest in technology for charged particle
timing at high rates. In contrast to the role of timing for particle
identification, which has driven incremental improvements in timing, the LHC
timing challenge dictates a specific level of timing performance: roughly 20-30
picoseconds. Since the elapsed time for an LHC bunch crossing (with standard
design book parameters) has an rms spread of 170 picoseconds, the $\\sim50-100$
picosecond resolution now commonly achieved in TOF systems would be
insufficient to resolve multiple "in-time" pileup. Here we present a
MicroMegas-based structure which achieves the required time precision (i.e. 24
picoseconds for 150 GeV $\\mu$'s) and could potentially offer an inexpensive
solution covering large areas with $\\sim 1$ cm$^2$ pixel size. We present here
a proof-of-principle which motivates further work in our group toward realizing
a practical design capable of long-term survival in a high rate experiment.
", "title": "PICOSEC: Charged particle Timing to 24 picosecond Precision with MicroPattern Gas Detectors" }
null
null
null
null
true
null
2076
null
Default
null
null
null
{ "abstract": " We report the first result on Ge-76 neutrinoless double beta decay from the
CDEX-1 experiment at the China Jinping Underground Laboratory. A p-type
point-contact high purity germanium detector with a mass of 994 g has been
installed to search for neutrinoless double beta decay events, as well as to
directly detect dark matter particles. An exposure of 304 kg*day has been
analyzed. The wideband spectrum from 500 keV to 3 MeV was obtained and the
average event rate in the 2.039 MeV energy range is about 0.012 count per keV
per kg per day. The half-life of Ge-76 neutrinoless double beta decay has been
derived based on this result as: T_{1/2} > 6.4*10^22 yr (90% C.L.). An upper
limit on the effective Majorana-neutrino mass of 5.0 eV has been achieved. The
possible methods to further decrease the background level have been discussed
and will be pursued in the next stage of the CDEX experiment.
", "title": "The first result on 76Ge neutrinoless double beta decay from CDEX-1 experiment" }
null
null
null
null
true
null
2077
null
Default
null
null
null
{ "abstract": " Keywords are important for information retrieval. They are used to classify\nand sort papers. However, these terms can also be used to study trends within\nand across fields. We want to explore the lifecycle of new keywords. How often\ndo new terms come into existence and how long till they fade out? In this\npaper, we present our preliminary analysis where we measure the burstiness of\nkeywords within the field of AI. We examine 150k keywords in approximately 100k\njournal and conference papers. We find that nearly 80\\% of the keywords die off\nbefore year one for both journals and conferences but that terms last longer in\njournals versus conferences. We also observe time periods of thematic bursts in\nAI -- one where the terms are more neuroscience inspired and one more oriented\nto computational optimization. This work shows promise of using author keywords\nto better understand dynamics of buzz within science.\n", "title": "Measuring scientific buzz" }
null
null
[ "Computer Science" ]
null
true
null
2078
null
Validated
null
null
null
{ "abstract": " Free space optical communication techniques have been the subject of numerous\ninvestigations in recent years, with multiple missions expected to fly in the\nnear future. Existing methods require high pointing accuracies, drastically\ndriving up overall system cost. Recent developments in LED-based visible light\ncommunication (VLC) and past in-orbit experiments have convinced us that the\ntechnology has reached a critical level of maturity. On these premises, we\npropose a new optical communication system utilizing a VLC downlink and a high\nthroughput, omnidirectional photovoltaic cell receiver system. By performing\nerror-correction via deep learning methods and by utilizing phase-delay\ninterference, the system is able to deliver data rates that match those of\ntraditional laser-based solutions. A prototype of the proposed system has been\nconstructed, demonstrating the scheme to be a feasible alternative to\nlaser-based methods. This creates an opportunity for the full scale development\nof optical communication techniques on small spacecraft as a backup telemetry\nbeacon or as a high throughput link.\n", "title": "Fully Optical Spacecraft Communications: Implementing an Omnidirectional PV-Cell Receiver and 8Mb/s LED Visible Light Downlink with Deep Learning Error Correction" }
null
null
null
null
true
null
2079
null
Default
null
null
null
{ "abstract": " This work focuses on the question of how identifiability of a mathematical\nmodel, that is, whether parameters can be recovered from data, is related to\nidentifiability of its submodels. We look specifically at linear compartmental\nmodels and investigate when identifiability is preserved after adding or\nremoving model components. In particular, we examine whether identifiability is\npreserved when an input, output, edge, or leak is added or deleted. Our\napproach, via differential algebra, is to analyze specific input-output\nequations of a model and the Jacobian of the associated coefficient map. We\nclarify a prior determinantal formula for these equations, and then use it to\nprove that, under some hypotheses, a model's input-output equations can be\nunderstood in terms of certain submodels we call \"output-reachable\". Our proofs\nuse algebraic and combinatorial techniques.\n", "title": "Linear compartmental models: input-output equations and operations that preserve identifiability" }
null
null
null
null
true
null
2080
null
Default
null
null
null
{ "abstract": " We develop theory for nonlinear dimensionality reduction (NLDR). A number of\nNLDR methods have been developed, but there is limited understanding of how\nthese methods work and the relationships between them. There is limited basis\nfor using existing NLDR theory for deriving new algorithms. We provide a novel\nframework for analysis of NLDR via a connection to the statistical theory of\nlinear smoothers. This allows us to both understand existing methods and derive\nnew ones. We use this connection to smoothing to show that asymptotically,\nexisting NLDR methods correspond to discrete approximations of the solutions of\nsets of differential equations given a boundary condition. In particular, we\ncan characterize many existing methods in terms of just three limiting\ndifferential operators and boundary conditions. Our theory also provides a way\nto assert that one method is preferable to another; indeed, we show Local\nTangent Space Alignment is superior within a class of methods that assume a\nglobal coordinate chart defines an isometric embedding of the manifold.\n", "title": "On Nonlinear Dimensionality Reduction, Linear Smoothing and Autoencoding" }
null
null
null
null
true
null
2081
null
Default
null
null
null
{ "abstract": " Hashing has been widely used for large-scale approximate nearest neighbor\nsearch because of its storage and search efficiency. Recent work has found that\ndeep supervised hashing can significantly outperform non-deep supervised\nhashing in many applications. However, most existing deep supervised hashing\nmethods adopt a symmetric strategy to learn one deep hash function for both\nquery points and database (retrieval) points. The training of these symmetric\ndeep supervised hashing methods is typically time-consuming, which makes them\nhard to effectively utilize the supervised information for cases with\nlarge-scale database. In this paper, we propose a novel deep supervised hashing\nmethod, called asymmetric deep supervised hashing (ADSH), for large-scale\nnearest neighbor search. ADSH treats the query points and database points in an\nasymmetric way. More specifically, ADSH learns a deep hash function only for\nquery points, while the hash codes for database points are directly learned.\nThe training of ADSH is much more efficient than that of traditional symmetric\ndeep supervised hashing methods. Experiments show that ADSH can achieve\nstate-of-the-art performance in real applications.\n", "title": "Asymmetric Deep Supervised Hashing" }
null
null
null
null
true
null
2082
null
Default
null
null
null
{ "abstract": " Based upon the idea that network functionality is impaired if two nodes in a\nnetwork are sufficiently separated in terms of a given metric, we introduce two\ncombinatorial \\emph{pseudocut} problems generalizing the classical min-cut and\nmulti-cut problems. We expect the pseudocut problems will find broad relevance\nto the study of network reliability. We comprehensively analyze the\ncomputational complexity of the pseudocut problems and provide three\napproximation algorithms for these problems.\nMotivated by applications in communication networks with strict\nQuality-of-Service (QoS) requirements, we demonstrate the utility of the\npseudocut problems by proposing a targeted vulnerability assessment for the\nstructure of communication networks using QoS metrics; we perform experimental\nevaluations of our proposed approximation algorithms in this context.\n", "title": "Pseudo-Separation for Assessment of Structural Vulnerability of a Network" }
null
null
null
null
true
null
2083
null
Default
null
null
null
{ "abstract": " In this paper we use Gaussian Process (GP) regression to propose a novel\napproach for predicting volatility of financial returns by forecasting the\nenvelopes of the time series. We provide a direct comparison of their\nperformance to traditional approaches such as GARCH. We compare the forecasting\npower of three approaches: GP regression on the absolute and squared returns;\nregression on the envelope of the returns and the absolute returns; and\nregression on the envelope of the negative and positive returns separately. We\nuse a maximum a posteriori estimate with a Gaussian prior to determine our\nhyperparameters. We also test the effect of hyperparameter updating at each\nforecasting step. We use our approaches to forecast out-of-sample volatility of\nfour currency pairs over a 2 year period, at half-hourly intervals. From three\nkernels, we select the kernel giving the best performance for our data. We use\ntwo published accuracy measures and four statistical loss functions to evaluate\nthe forecasting ability of GARCH vs GPs. In mean squared error the GP's perform\n20% better than a random walk model, and 50% better than GARCH for the same\ndata.\n", "title": "A Novel Approach to Forecasting Financial Volatility with Gaussian Process Envelopes" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
2084
null
Validated
null
null
null
{ "abstract": " We propose a new method to evaluate GANs, namely EvalGAN. EvalGAN relies on a\ntest set to directly measure the reconstruction quality in the original sample\nspace (no auxiliary networks are necessary), and it also computes the\n(log)likelihood for the reconstructed samples in the test set. Further, EvalGAN\nis agnostic to the GAN algorithm and the dataset. We decided to test it on\nthree state-of-the-art GANs over the well-known CIFAR-10 and CelebA datasets.\n", "title": "Out-of-Sample Testing for GANs" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
2085
null
Validated
null
null
null
{ "abstract": " We use superconducting rings with asymmetric link-up of current leads for\nexperimental investigation of winding number change at magnetic field\ncorresponding to the half of the flux quantum inside the ring. According to the\nconventional theory, the critical current of such rings should change by jump\ndue to this change. Experimental data obtained at measurements of aluminum\nrings agree with theoretical prediction in magnetic flux region close to\ninteger numbers of the flux quantum and disagree in the region close to the\nhalf of the one, where a smooth change is observed instead of the jump. First\nmeasurements of tantalum ring give a hope for the jump. Investigation of this\nproblem may have both fundamental and practical importance.\n", "title": "Quantum periodicity in the critical current of superconducting rings with asymmetric link-up of current leads" }
null
null
null
null
true
null
2086
null
Default
null
null
null
{ "abstract": " Human societies around the world interact with each other by developing and\nmaintaining social norms, and it is critically important to understand how such\nnorms emerge and change. In this work, we define an evolutionary game-theoretic\nmodel to study how norms change in a society, based on the idea that different\nstrength of norms in societies translate to different game-theoretic\ninteraction structures and incentives. We use this model to study, both\nanalytically and with extensive agent-based simulations, the evolutionary\nrelationships of the need for coordination in a society (which is related to\nits norm strength) with two key aspects of norm change: cultural inertia\n(whether or how quickly the population responds when faced with conditions that\nmake a norm change desirable), and exploration rate (the willingness of agents\nto try out new strategies). Our results show that a high need for coordination\nleads to both high cultural inertia and a low exploration rate, while a low\nneed for coordination leads to low cultural inertia and high exploration rate.\nThis is the first work, to our knowledge, on understanding the evolutionary\ncausal relationships among these factors.\n", "title": "Understanding Norm Change: An Evolutionary Game-Theoretic Approach (Extended Version)" }
null
null
null
null
true
null
2087
null
Default
null
null
null
{ "abstract": " This paper presents a proposal (story) of how statically detecting\nunreachable objects (in Java) could be used to improve a particular runtime\nverification approach (for Java), namely parametric trace slicing. Monitoring\nalgorithms for parametric trace slicing depend on garbage collection to (i)\ncleanup data-structures storing monitored objects, ensuring they do not become\nunmanageably large, and (ii) anticipate the violation of (non-safety)\nproperties that cannot be satisfied as a monitored object can no longer appear\nlater in the trace. The proposal is that both usages can be improved by making\nthe unreachability of monitored objects explicit in the parametric property and\nstatically introducing additional instrumentation points generating related\nevents. The ideas presented in this paper are still exploratory and the\nintention is to integrate the described techniques into the MarQ monitoring\ntool for quantified event automata.\n", "title": "A Story of Parametric Trace Slicing, Garbage and Static Analysis" }
null
null
null
null
true
null
2088
null
Default
null
null
null
{ "abstract": " We define a symmetric monoidal (4,3)-category with duals whose objects are\ncertain enriched multi-fusion categories. For every modular tensor category\n$\\mathcal{C}$, there is a self enriched multi-fusion category $\\mathfrak{C}$\ngiving rise to an object of this symmetric monoidal (4,3)-category. We\nconjecture that the extended 3D TQFT given by the fully dualizable object\n$\\mathfrak{C}$ extends the 1-2-3-dimensional Reshetikhin-Turaev TQFT associated\nto the modular tensor category $\\mathcal{C}$ down to dimension zero.\n", "title": "Extended TQFT arising from enriched multi-fusion categories" }
null
null
null
null
true
null
2089
null
Default
null
null
null
{ "abstract": " Permutation codes, in the form of rank modulation, have shown promise for\napplications such as flash memory. One of the metrics recently suggested as\nappropriate for rank modulation is the Ulam metric, which measures the minimum\ntranslocation distance between permutations. Multipermutation codes have also\nbeen proposed as a generalization of permutation codes that would improve code\nsize (and consequently the code rate). In this paper we analyze the Ulam metric\nin the context of multipermutations, noting some similarities and differences\nbetween the Ulam metric in the context of permutations. We also consider sphere\nsizes for multipermutations under the Ulam metric and resulting bounds on code\nsize.\n", "title": "Multipermutation Ulam Sphere Analysis Toward Characterizing Maximal Code Size" }
null
null
[ "Computer Science", "Mathematics" ]
null
true
null
2090
null
Validated
null
null
null
{ "abstract": " (349) Dembowska, a large, bright main-belt asteroid, has a fast rotation and\noblique spin axis. It may have experienced partial melting and differentiation.\nWe constrain Dembowska's thermophysical properties, e.g., thermal inertia,\nroughness fraction, geometric albedo and effective diameter within 3$\\sigma$\nuncertainty of $\\Gamma=20^{+12}_{-7}\\rm~Jm^{-2}s^{-0.5}K^{-1}$, $f_{\\rm\nr}=0.25^{+0.60}_{-0.25}$, $p_{\\rm v}=0.309^{+0.026}_{-0.038}$, and $D_{\\rm\neff}=155.8^{+7.5}_{-6.2}\\rm~km$, by utilizing the Advanced Thermophysical Model\n(ATPM) to analyse four sets of thermal infrared data obtained by IRAS, AKARI,\nWISE and Subaru/COMICS at different epochs. In addition, by modeling the\nthermal lightcurve observed by WISE, we obtain the rotational phases of each\ndataset. These rotationally resolved data do not reveal significant variations\nof thermal inertia and roughness across the surface, indicating the surface of\nDembowska should be covered by a dusty regolith layer with few rocks or\nboulders. Besides, the low thermal inertia of Dembowska show no significant\ndifference with other asteroids larger than 100 km, indicating the dynamical\nlives of these large asteroids are long enough to make the surface to have\nsufficiently low thermal inertia. Furthermore, based on the derived surface\nthermophysical properties, as well as the known orbital and rotational\nparameters, we can simulate Dembowska's surface and subsurface temperature\nthroughout its orbital period. The surface temperature varies from $\\sim40$ K\nto $\\sim220$ K, showing significant seasonal variation, whereas the subsurface\ntemperature achieves equilibrium temperature about $120\\sim160$ K below\n$30\\sim50$ cm depth.\n", "title": "Thermophysical characteristics of the large main-belt asteroid (349) Dembowska" }
null
null
null
null
true
null
2091
null
Default
null
null
null
{ "abstract": " We propose CM3, a new deep reinforcement learning method for cooperative\nmulti-agent problems where agents must coordinate for joint success in\nachieving different individual goals. We restructure multi-agent learning into\na two-stage curriculum, consisting of a single-agent stage for learning to\naccomplish individual tasks, followed by a multi-agent stage for learning to\ncooperate in the presence of other agents. These two stages are bridged by\nmodular augmentation of neural network policy and value functions. We further\nadapt the actor-critic framework to this curriculum by formulating local and\nglobal views of the policy gradient and learning via a double critic,\nconsisting of a decentralized value function and a centralized action-value\nfunction. We evaluated CM3 on a new high-dimensional multi-agent environment\nwith sparse rewards: negotiating lane changes among multiple autonomous\nvehicles in the Simulation of Urban Mobility (SUMO) traffic simulator. Detailed\nablation experiments show the positive contribution of each component in CM3,\nand the overall synthesis converges significantly faster to higher performance\npolicies than existing cooperative multi-agent methods.\n", "title": "CM3: Cooperative Multi-goal Multi-stage Multi-agent Reinforcement Learning" }
null
null
null
null
true
null
2092
null
Default
null
null
null
{ "abstract": " This work addresses the problem of robust attitude control of quadcopters.\nFirst, the mathematical model of the quadcopter is derived considering factors\nsuch as nonlinearity, external disturbances, uncertain dynamics and strong\ncoupling. An adaptive twisting sliding mode control algorithm is then developed\nwith the objective of controlling the quadcopter to track desired attitudes\nunder various conditions. For this, the twisting sliding mode control law is\nmodified with a proposed gain adaptation scheme to improve the control\ntransient and tracking performance. Extensive simulation studies and\ncomparisons with experimental data have been carried out for a Solo quadcopter.\nThe results show that the proposed control scheme can achieve strong robustness\nagainst disturbances while is adaptable to parametric variations.\n", "title": "Adaptive twisting sliding mode control for quadrotor unmanned aerial vehicles" }
null
null
null
null
true
null
2093
null
Default
null
null
null
{ "abstract": " With its origin in sociology, Social Network Analysis (SNA), quickly emerged\nand spread to other areas of research, including anthropology, biology,\ninformation science, organizational studies, political science, and computer\nscience. Being it's objective the investigation of social structures through\nthe use of networks and graph theory, Social Network Analysis is, nowadays, an\nimportant research area in several domains. Social Network Analysis cope with\ndifferent problems namely network metrics, models, visualization and\ninformation spreading, each one with several approaches, methods and\nalgorithms. One of the critical areas of Social Network Analysis involves the\ncalculation of different centrality measures (i.e.: the most important vertices\nwithin a graph). Today, the challenge is how to do this fast and efficiently,\nas many increasingly larger datasets are available. Recently, the need to apply\nsuch centrality algorithms to non static networks (i.e.: networks that evolve\nover time) is also a new challenge. Incremental and dynamic versions of\ncentrality measures are starting to emerge (betweenness, closeness, etc). Our\ncontribution is the proposal of two incremental versions of the Laplacian\nCentrality measure, that can be applied not only to large graphs but also to,\nweighted or unweighted, dynamically changing networks. The experimental\nevaluation was performed with several tests in different types of evolving\nnetworks, incremental or fully dynamic. Results have shown that our incremental\nversions of the algorithm can calculate node centralities in large networks,\nfaster and efficiently than the corresponding batch version in both incremental\nand full dynamic network setups.\n", "title": "Dynamic Laplace: Efficient Centrality Measure for Weighted or Unweighted Evolving Networks" }
null
null
null
null
true
null
2094
null
Default
null
null
null
{ "abstract": " Accounting fraud is a global concern representing a significant threat to the\nfinancial system stability due to the resulting diminishing of the market\nconfidence and trust of regulatory authorities. Several tricks can be used to\ncommit accounting fraud, hence the need for non-static regulatory interventions\nthat take into account different fraudulent patterns. Accordingly, this study\naims to improve the detection of accounting fraud via the implementation of\nseveral machine learning methods to better differentiate between fraud and\nnon-fraud companies, and to further assist the task of examination within the\nriskier firms by evaluating relevant financial indicators. Out-of-sample\nresults suggest there is a great potential in detecting falsified financial\nstatements through statistical modelling and analysis of publicly available\naccounting information. The proposed methodology can be of assistance to public\nauditors and regulatory agencies as it facilitates auditing processes, and\nsupports more targeted and effective examinations of accounting reports.\n", "title": "Fighting Accounting Fraud Through Forensic Data Analytics" }
null
null
null
null
true
null
2095
null
Default
null
null
null
{ "abstract": " Gaussian processes (GPs) are a good choice for function approximation as they\nare flexible, robust to over-fitting, and provide well-calibrated predictive\nuncertainty. Deep Gaussian processes (DGPs) are multi-layer generalisations of\nGPs, but inference in these models has proved challenging. Existing approaches\nto inference in DGP models assume approximate posteriors that force\nindependence between the layers, and do not work well in practice. We present a\ndoubly stochastic variational inference algorithm, which does not force\nindependence between layers. With our method of inference we demonstrate that a\nDGP model can be used effectively on data ranging in size from hundreds to a\nbillion points. We provide strong empirical evidence that our inference scheme\nfor DGPs works well in practice in both classification and regression.\n", "title": "Doubly Stochastic Variational Inference for Deep Gaussian Processes" }
null
null
null
null
true
null
2096
null
Default
null
null
null
{ "abstract": " The complex electric modulus and the ac conductivity of carbon\nnanoonion/polyaniline composites were studied from 1 mHz to 1 MHz at isothermal\nconditions ranging from 15 K to room temperature. The temperature dependence of\nthe electric modulus and the dc conductivity analyses indicate a couple of\nhopping mechanisms. The distinction between thermally activated processes and\nthe determination of cross-over temperature were achieved by exploring the\ntemperature dependence of the fractional exponent of the dispersive ac\nconductivity and the bifurcation of the scaled ac conductivity isotherms. The\nresults are analyzed by combining the granular metal model(inter-grain charge\ntunneling of extended electron states located within mesoscopic highly\nconducting polyaniline grains) and a 3D Mott variable range hopping model\n(phonon assisted tunneling within the carbon nano-onions and clusters).\n", "title": "Electric properties of carbon nano-onion/polyaniline composites: a combined electric modulus and ac conductivity study" }
null
null
null
null
true
null
2097
null
Default
null
null
null
{ "abstract": " Uniform convergence rates are provided for asymptotic representations of\nsample extremes. These bounds which are universal in the sense that they do not\ndepend on the extreme value index are meant to be extended to arbitrary samples\nextremes in coming papers.\n", "title": "Uniform Rates of Convergence of Some Representations of Extremes : a first approach" }
null
null
null
null
true
null
2098
null
Default
null
null
null
{ "abstract": " Predicting Arctic sea ice extent is a notoriously difficult forecasting\nproblem, even for lead times as short as one month. Motivated by Arctic\nintraannual variability phenomena such as reemergence of sea surface\ntemperature and sea ice anomalies, we use a prediction approach for sea ice\nanomalies based on analog forecasting. Traditional analog forecasting relies on\nidentifying a single analog in a historical record, usually by minimizing\nEuclidean distance, and forming a forecast from the analog's historical\ntrajectory. Here an ensemble of analogs are used to make forecasts, where the\nensemble weights are determined by a dynamics-adapted similarity kernel, which\ntakes into account the nonlinear geometry on the underlying data manifold. We\napply this method for forecasting pan-Arctic and regional sea ice area and\nvolume anomalies from multi-century climate model data, and in many cases find\nimprovement over the benchmark damped persistence forecast. Examples of success\ninclude the 3--6 month lead time prediction of pan-Arctic area, the winter sea\nice area prediction of some marginal ice zone seas, and the 3--12 month lead\ntime prediction of sea ice volume anomalies in many central Arctic basins. We\ndiscuss possible connections between KAF success and sea ice reemergence, and\nfind KAF to be successful in regions and seasons exhibiting high interannual\nvariability.\n", "title": "Predicting regional and pan-Arctic sea ice anomalies with kernel analog forecasting" }
null
null
null
null
true
null
2099
null
Default
null
null
null
{ "abstract": " Plasmids are autonomously replicating genetic elements in bacteria. At cell\ndivision plasmids are distributed among the two daughter cells. This gene\ntransfer from one generation to the next is called vertical gene transfer. We\nstudy the dynamics of a bacterial population carrying plasmids and are in\nparticular interested in the long-time distribution of plasmids. Starting with\na model for a bacterial population structured by the discrete number of\nplasmids, we proceed to the continuum limit in order to derive a continuous\nmodel. The model incorporates plasmid reproduction, division and death of\nbacteria, and distribution of plasmids at cell division. It is a hyperbolic\nintegro-differential equation and a so-called growth-fragmentation-death model.\nAs we are interested in the long-time distribution of plasmids we study the\nassociated eigenproblem and show existence of eigensolutions. The stability of\nthis solution is studied by analyzing the spectrum of the integro-differential\noperator given by the eigenproblem. By relating the spectrum with the spectrum\nof an integral operator we find a simple real dominating eigenvalue with a\nnon-negative corresponding eigenfunction. Moreover, we describe an iterative\nmethod for the numerical construction of the eigenfunction.\n", "title": "Eigensolutions and spectral analysis of a model for vertical gene transfer of plasmids" }
null
null
null
null
true
null
2100
null
Default
null
null