text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, 1-5 chars) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null)
---|---|---|---|---|---|---|---|---|---|---|---|---
null |
{
"abstract": " A new form of the variational autoencoder (VAE) is proposed, based on the\nsymmetric Kullback-Leibler divergence. It is demonstrated that learning of the\nresulting symmetric VAE (sVAE) has close connections to previously developed\nadversarial-learning methods. This relationship helps unify the previously\ndistinct techniques of VAE and adversarial learning, and provides insights\nthat allow us to ameliorate shortcomings with some previously developed\nadversarial methods. In addition to an analysis that motivates and explains the\nsVAE, an extensive set of experiments validates the utility of the approach.\n",
"title": "Symmetric Variational Autoencoder and Connections to Adversarial Learning"
}
| null | null | null | null | true | null |
16701
| null |
Default
| null | null |
null |
{
"abstract": " Matrix Product Vectors form the appropriate framework to study and classify\none-dimensional quantum systems. In this work, we develop the structure theory\nof Matrix Product Unitary operators (MPUs) which appear e.g. in the description\nof time evolutions of one-dimensional systems. We prove that all MPUs have a\nstrict causal cone, making them Quantum Cellular Automata (QCAs), and derive a\ncanonical form for MPUs which relates different MPU representations of the same\nunitary through a local gauge. We use this canonical form to prove an Index\nTheorem for MPUs which gives the precise conditions under which two MPUs are\nadiabatically connected, providing an alternative derivation to that of\n[Commun. Math. Phys. 310, 419 (2012), arXiv:0910.3675] for QCAs. We also\ndiscuss the effect of symmetries on the MPU classification. In particular, we\ncharacterize the tensors corresponding to MPU that are invariant under\nconjugation, time reversal, or transposition. In the first case, we give a full\ncharacterization of all equivalence classes. Finally, we give several examples\nof MPU possessing different symmetries.\n",
"title": "Matrix Product Unitaries: Structure, Symmetries, and Topological Invariants"
}
| null | null | null | null | true | null |
16702
| null |
Default
| null | null |
null |
{
"abstract": " Deep Learning has recently become hugely popular in machine learning,\nproviding significant improvements in classification accuracy in the presence\nof highly-structured and large databases.\nResearchers have also considered privacy implications of deep learning.\nModels are typically trained in a centralized manner with all the data being\nprocessed by the same training algorithm. If the data is a collection of users'\nprivate data, including habits, personal pictures, geographical positions,\ninterests, and more, the centralized server will have access to sensitive\ninformation that could potentially be mishandled. To tackle this problem,\ncollaborative deep learning models have recently been proposed where parties\nlocally train their deep learning structures and only share a subset of the\nparameters in the attempt to keep their respective training sets private.\nParameters can also be obfuscated via differential privacy (DP) to make\ninformation extraction even more challenging, as proposed by Shokri and\nShmatikov at CCS'15.\nUnfortunately, we show that any privacy-preserving collaborative deep\nlearning is susceptible to a powerful attack that we devise in this paper. In\nparticular, we show that a distributed, federated, or decentralized deep\nlearning approach is fundamentally broken and does not protect the training\nsets of honest participants. The attack we developed exploits the real-time\nnature of the learning process that allows the adversary to train a Generative\nAdversarial Network (GAN) that generates prototypical samples of the targeted\ntraining set that was meant to be private (the samples generated by the GAN are\nintended to come from the same distribution as the training data).\nInterestingly, we show that record-level DP applied to the shared parameters of\nthe model, as suggested in previous work, is ineffective (i.e., record-level DP\nis not designed to address our attack).\n",
"title": "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning"
}
| null | null | null | null | true | null |
16703
| null |
Default
| null | null |
null |
{
"abstract": " Alastair Graham Walker Cameron was an astrophysicist and planetary scientist\nof broad interests and exceptional originality. A founder of the field of\nnuclear astrophysics, he developed the theoretical understanding of the\nchemical elements' origins and made pioneering connections between the\nabundances of elements in meteorites to advance the theory that the Moon\noriginated from a giant impact with the young Earth by an object at least the\nsize of Mars. Cameron was an early and persistent exploiter of computer\ntechnology in the theoretical study of complex astronomical systems, including\nnuclear reactions in supernovae, the structure of neutron stars, and planetary\ncollisions.\n",
"title": "A. G. W. Cameron 1925-2005, Biographical Memoir, National Academy of Sciences"
}
| null | null |
[
"Physics"
] | null | true | null |
16704
| null |
Validated
| null | null |
null |
{
"abstract": " We demonstrate how non-convex \"time crystal\" Lagrangians arise in the\neffective description of conventional, realizable physical systems. Such\nembeddings allow for the resolution of dynamical singularities that arise in\nthe reduced description. Sisyphus dynamics, featuring intervals of forward\nmotion interrupted by quick resets, is a generic consequence. Near the would-be\nsingularity of the time crystal, we find striking microstructure.\n",
"title": "Realization of \"Time Crystal\" Lagrangians and Emergent Sisyphus Dynamics"
}
| null | null |
[
"Physics"
] | null | true | null |
16705
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper we introduce a novel method of gradient normalization and decay\nwith respect to depth. Our method leverages the simple concept of normalizing\nall gradients in a deep neural network, and then decaying said gradients with\nrespect to their depth in the network. Our proposed normalization and decay\ntechniques can be used in conjunction with most current state of the art\noptimizers and are a very simple addition to any network. This method, although\nsimple, showed improvements in convergence time on state of the art networks\nsuch as DenseNet and ResNet on image classification tasks, as well as on an\nLSTM for natural language processing tasks.\n",
"title": "Gradient Normalization & Depth Based Decay For Deep Learning"
}
| null | null | null | null | true | null |
16706
| null |
Default
| null | null |
null |
{
"abstract": " Dynamic Pushdown Networks (DPNs) are a natural model for multithreaded\nprograms with (recursive) procedure calls and thread creation. On the other\nhand, CARET is a temporal logic that allows one to write linear temporal formulas\nwhile taking into account the matching between calls and returns. We consider\nin this paper the model-checking problem of DPNs against CARET formulas. We\nshow that this problem can be effectively solved by a reduction to the\nemptiness problem of Büchi Dynamic Pushdown Systems. We then show that CARET\nmodel checking is also decidable for DPNs communicating with locks. Our results\ncan, in particular, be used for the detection of concurrent malware.\n",
"title": "CARET analysis of multithreaded programs"
}
| null | null | null | null | true | null |
16707
| null |
Default
| null | null |
null |
{
"abstract": " A new approach of solving the ill-conditioned inverse problem for analytical\ncontinuation is proposed. The root of the problem lies in the fact that even\ntiny noise of imaginary-time input data has a serious impact on the inferred\nreal-frequency spectra. By means of a modern regularization technique, we\neliminate redundant degrees of freedom that essentially carry the noise,\nleaving only relevant information unaffected by the noise. The resultant\nspectrum is represented with minimal bases and thus a stable analytical\ncontinuation is achieved. This framework further provides a tool for analyzing\nto what extent the Monte Carlo data need to be accurate to resolve details of\nan expected spectral function.\n",
"title": "Sparse modeling approach to analytical continuation of imaginary-time quantum Monte Carlo data"
}
| null | null | null | null | true | null |
16708
| null |
Default
| null | null |
null |
{
"abstract": " Most deep reinforcement learning algorithms are data inefficient in complex\nand rich environments, limiting their applicability to many scenarios. One\ndirection for improving data efficiency is multitask learning with shared\nneural network parameters, where efficiency may be improved through transfer\nacross related tasks. In practice, however, this is not usually observed,\nbecause gradients from different tasks can interfere negatively, making\nlearning unstable and sometimes even less data efficient. Another issue is the\ndifferent reward schemes between tasks, which can easily lead to one task\ndominating the learning of a shared model. We propose a new approach for joint\ntraining of multiple tasks, which we refer to as Distral (Distill & transfer\nlearning). Instead of sharing parameters between the different workers, we\npropose to share a \"distilled\" policy that captures common behaviour across\ntasks. Each worker is trained to solve its own task while constrained to stay\nclose to the shared policy, while the shared policy is trained by distillation\nto be the centroid of all task policies. Both aspects of the learning process\nare derived by optimizing a joint objective function. We show that our approach\nsupports efficient transfer on complex 3D environments, outperforming several\nrelated methods. Moreover, the proposed learning process is more robust and\nmore stable---attributes that are critical in deep reinforcement learning.\n",
"title": "Distral: Robust Multitask Reinforcement Learning"
}
| null | null | null | null | true | null |
16709
| null |
Default
| null | null |
null |
{
"abstract": " We demonstrate a technique for obtaining the density of atomic vapor, by\ndoing a fit of the resonant absorption spectrum to a density-matrix model. In\norder to demonstrate the usefulness of the technique, we apply it to absorption\nin the ${\\rm D_2}$ line of a Cs vapor cell at room temperature. The lineshape\nof the spectrum is asymmetric due to the role of open transitions. This\nasymmetry is explained in the model using transit-time relaxation as the atoms\ntraverse the laser beam. We also obtain the latent heat of evaporation by\nstudying the number density as a function of temperature close to room\ntemperature.\n",
"title": "Finding the number density of atomic vapor by studying its absorption profile"
}
| null | null |
[
"Physics"
] | null | true | null |
16710
| null |
Validated
| null | null |
null |
{
"abstract": " We prove the first rigidity and classification theorems for crossed product\nvon Neumann algebras given by actions of non-discrete, locally compact groups.\nWe prove that for arbitrary free probability measure preserving actions of\nconnected simple Lie groups of real rank one, the crossed product has a unique\nCartan subalgebra up to unitary conjugacy. We then deduce a W* strong rigidity\ntheorem for irreducible actions of products of such groups. More generally, our\nresults hold for products of locally compact groups that are nonamenable,\nweakly amenable and that belong to Ozawa's class S.\n",
"title": "Rigidity for von Neumann algebras given by locally compact groups and their crossed products"
}
| null | null | null | null | true | null |
16711
| null |
Default
| null | null |
null |
{
"abstract": " For inhomogeneous interacting electronic systems under a time-dependent\nelectromagnetic perturbation, we derive the linear equation for response\nfunctions in a quantum mechanical manner. It is a natural extension of the\noriginal semi-classical Singwi-Tosi-Land-Sjoelander (STLS) approach for an\nelectron gas. The factorization ansatz for the two-particle distribution is an\nindispensable ingredient in the STLS approaches for determination of the\nresponse function and the pair correlation function. In this study, we choose\nan analytically solvable interacting two-electron system as the target for\nwhich we examine the validity of the approximation. It is demonstrated that the\nSTLS response function reproduces well the exact one for low-energy\nexcitations. The interaction energy contributed from the STLS response function\nis also discussed.\n",
"title": "Quantum Singwi-Tosi-Land-Sjoelander approach for interacting inhomogeneous systems under electromagnetic fields: Comparison with exact results"
}
| null | null | null | null | true | null |
16712
| null |
Default
| null | null |
null |
{
"abstract": " This paper analyzes the use of 3D Convolutional Neural Networks for brain\ntumor segmentation in MR images. We address the problem using three different\narchitectures that combine fine and coarse features to obtain the final\nsegmentation. We compare three different networks that use multi-resolution\nfeatures in terms of both design and performance and we show that they improve\ntheir single-resolution counterparts.\n",
"title": "3D Convolutional Neural Networks for Brain Tumor Segmentation: A Comparison of Multi-resolution Architectures"
}
| null | null | null | null | true | null |
16713
| null |
Default
| null | null |
null |
{
"abstract": " The kernel embedding algorithm is an important component for adapting kernel\nmethods to large datasets. Since the algorithm consumes a major computation\ncost in the testing phase, we propose a novel teacher-learner framework of\nlearning computation-efficient kernel embeddings from specific data. In the\nframework, the high-precision embeddings (teacher) transfer the data\ninformation to the computation-efficient kernel embeddings (learner). We\njointly select informative embedding functions and pursue an orthogonal\ntransformation between two embeddings. We propose a novel approach of\nconstrained variational expectation maximization (CVEM), where the alternating\ndirection method of multipliers (ADMM) is applied over a nonconvex domain in the\nmaximization step. We also propose two specific formulations based on the\nprevalent Random Fourier Feature (RFF), the masked and blocked version of\nComputation-Efficient RFF (CERF), by imposing a random binary mask or a block\nstructure on the transformation matrix. By empirical studies of several\napplications on different real-world datasets, we demonstrate that the CERF\nsignificantly improves the performance of kernel methods upon the RFF, under\ncertain arithmetic operation requirements, and is suitable for structured matrix\nmultiplication in Fastfood type algorithms.\n",
"title": "Learning Random Fourier Features by Hybrid Constrained Optimization"
}
| null | null | null | null | true | null |
16714
| null |
Default
| null | null |
null |
{
"abstract": " This paper is the first attempt to systematically study properties of the\neffective Hamiltonian $\\overline{H}$ arising in the periodic homogenization of\nsome coercive but nonconvex Hamilton-Jacobi equations. Firstly, we introduce a\nnew and robust decomposition method to obtain min-max formulas for a class of\nnonconvex $\\overline{H}$. Secondly, we analytically and numerically investigate\nother related interesting phenomena, such as \"quasi-convexification\" and\nbreakdown of symmetry, of $\\overline{H}$ from other typical nonconvex\nHamiltonians. Finally, in the appendix, we show that our new method and those a\npriori formulas from the periodic setting can be used to obtain stochastic\nhomogenization for the same class of nonconvex Hamilton-Jacobi equations. Some\nconjectures and problems are also proposed.\n",
"title": "Min-max formulas and other properties of certain classes of nonconvex effective Hamiltonians"
}
| null | null | null | null | true | null |
16715
| null |
Default
| null | null |
null |
{
"abstract": " Consider a compact Lie group $G$ and a closed subgroup $H<G$. Suppose\n$\\mathcal M$ is the set of $G$-invariant Riemannian metrics on the homogeneous\nspace $M=G/H$. We obtain a sufficient condition for the existence of\n$g\\in\\mathcal M$ and $c>0$ such that the Ricci curvature of $g$ equals $cT$ for\na given $T\\in\\mathcal M$. This condition is also necessary if the isotropy\nrepresentation of $M$ splits into two inequivalent irreducible summands.\nImmediate and potential applications include new existence results for Ricci\niterations.\n",
"title": "The Prescribed Ricci Curvature Problem on Homogeneous Spaces with Intermediate Subgroups"
}
| null | null | null | null | true | null |
16716
| null |
Default
| null | null |
null |
{
"abstract": " An algorithm for constructing a control function that transfers a wide class\nof stationary nonlinear systems of ordinary differential equations from an\ninitial state to a final state under certain control restrictions is proposed.\nThe algorithm is designed to be convenient for numerical implementation. A\nconstructive criterion of the desired transfer possibility is presented. The\nproblem of an interorbital flight is considered as a test example and it is\nsimulated numerically with the presented method.\n",
"title": "Solving Boundary Value Problem for a Nonlinear Stationary Controllable System with Synthesizing Control"
}
| null | null | null | null | true | null |
16717
| null |
Default
| null | null |
null |
{
"abstract": " Over the last few years there has been a growing interest in using financial\ntrading networks to understand the microstructure of financial markets. Most of\nthe methodologies developed so far for this purpose have been based on the\nstudy of descriptive summaries of the networks such as the average node degree\nand the clustering coefficient. In contrast, this paper develops novel\nstatistical methods for modeling sequences of financial trading networks. Our\napproach uses a stochastic blockmodel to describe the structure of the network\nduring each period, and then links multiple time periods using a hidden Markov\nmodel. This structure allows us to identify events that affect the structure of\nthe market and make accurate short-term prediction of future transactions. The\nmethodology is illustrated using data from the NYMEX natural gas futures market\nfrom January 2005 to December 2008.\n",
"title": "Modelling and prediction of financial trading networks: An application to the NYMEX natural gas futures market"
}
| null | null | null | null | true | null |
16718
| null |
Default
| null | null |
null |
{
"abstract": " Computers are increasingly used to make decisions that have significant\nimpact in people's lives. Often, these predictions can affect different\npopulation subgroups disproportionately. As a result, the issue of fairness has\nreceived much recent interest, and a number of fairness-enhanced classifiers\nand predictors have appeared in the literature. This paper seeks to study the\nfollowing questions: how do these different techniques fundamentally compare to\none another, and what accounts for the differences? Specifically, we seek to\nbring attention to many under-appreciated aspects of such fairness-enhancing\ninterventions. Concretely, we present the results of an open benchmark we have\ndeveloped that lets us compare a number of different algorithms under a variety\nof fairness measures, and a large number of existing datasets. We find that\nalthough different algorithms tend to prefer specific formulations of fairness\npreservations, many of these measures strongly correlate with one another. In\naddition, we find that fairness-preserving algorithms tend to be sensitive to\nfluctuations in dataset composition (simulated in our benchmark by varying\ntraining-test splits), indicating that fairness interventions might be more\nbrittle than previously thought.\n",
"title": "A comparative study of fairness-enhancing interventions in machine learning"
}
| null | null |
[
"Statistics"
] | null | true | null |
16719
| null |
Validated
| null | null |
null |
{
"abstract": " Opioid addiction is a severe public health threat in the U.S., causing massive\ndeaths and many social problems. Accurate relapse prediction is of practical\nimportance for recovering patients since relapse prediction promotes timely\nrelapse preventions that help patients stay clean. In this paper, we introduce\na Generative Adversarial Networks (GAN) model to predict addiction relapses\nbased on sentiment images and social influences. Experimental results on real\nsocial media data from Reddit.com demonstrate that the GAN model delivers a\nbetter performance than comparable alternative techniques. The sentiment images\ngenerated by the model show that relapse is closely connected with two emotions,\n`joy' and `negative'. This work is one of the first attempts to predict\nrelapses using massive social media data and generative adversarial nets. The\nproposed method, combined with knowledge of social media mining, has the\npotential to revolutionize the practice of opioid addiction prevention and\ntreatment.\n",
"title": "Predicting Opioid Relapse Using Social Media Data"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16720
| null |
Validated
| null | null |
null |
{
"abstract": " We examine by a perturbation method how the self-trapping of g-mode\noscillations in geometrically thin relativistic disks is affected by uniform\nvertical magnetic fields. Disks which we consider are isothermal in the\nvertical direction, but are truncated at a certain height by presence of hot\ncoronae. We find that the characteristics of self-trapping of axisymmetric\ng-mode oscillations in non-magnetized disks is kept unchanged in magnetized\ndisks at least till a strength of the fields, depending on vertical thickness\nof disks. These magnetic fields become stronger as the disk becomes thinner.\nThis result suggests that trapped g-mode oscillations still remain as one of\npossible candidates of quasi-periodic oscillations observed in black-hole and\nneutron-star X-ray binaries in the cases where vertical magnetic fields in\ndisks are weak.\n",
"title": "Self-Trapping of G-Mode Oscillations in Relativistic Thin Disks, Revisited"
}
| null | null | null | null | true | null |
16721
| null |
Default
| null | null |
null |
{
"abstract": " We discuss computability and computational complexity of conformal mappings\nand their boundary extensions. As applications, we review the state of the art\nregarding computability and complexity of Julia sets, their invariant measures\nand external rays impressions.\n",
"title": "Computable geometric complex analysis and complex dynamics"
}
| null | null | null | null | true | null |
16722
| null |
Default
| null | null |
null |
{
"abstract": " Estimating the 6D pose of known objects is important for robots to interact\nwith the real world. The problem is challenging due to the variety of objects\nas well as the complexity of a scene caused by clutter and occlusions between\nobjects. In this work, we introduce PoseCNN, a new Convolutional Neural Network\nfor 6D object pose estimation. PoseCNN estimates the 3D translation of an\nobject by localizing its center in the image and predicting its distance from\nthe camera. The 3D rotation of the object is estimated by regressing to a\nquaternion representation. We also introduce a novel loss function that enables\nPoseCNN to handle symmetric objects. In addition, we contribute a large scale\nvideo dataset for 6D object pose estimation named the YCB-Video dataset. Our\ndataset provides accurate 6D poses of 21 objects from the YCB dataset observed\nin 92 videos with 133,827 frames. We conduct extensive experiments on our\nYCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is\nhighly robust to occlusions, can handle symmetric objects, and provide accurate\npose estimation using only color images as input. When using depth data to\nfurther refine the poses, our approach achieves state-of-the-art results on the\nchallenging OccludedLINEMOD dataset. Our code and dataset are available at\nthis https URL.\n",
"title": "PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes"
}
| null | null | null | null | true | null |
16723
| null |
Default
| null | null |
null |
{
"abstract": " The Steiner Forest problem is among the fundamental network design problems.\nFinding tight linear programming bounds for the problem is the key for both\nfast Branch-and-Bound algorithms and good primal-dual approximations. On the\ntheoretical side, the best known bound can be obtained from an integer program\n[KLSv08]. It guarantees a value that is a (2-eps)-approximation of the integer\noptimum. On the practical side, bounds from a mixed integer program by Magnanti\nand Raghavan [MR05] are very close to the integer optimum in computational\nexperiments, but the size of the model limits its practical usefulness. We\ncompare a number of known integer programming formulations for the problem and\npropose three new formulations. We can show that the bounds from our two new\ncut-based formulations for the problem are within a factor of 2 of the integer\noptimum. In our experiments, the formulations prove to be both tractable and\nprovide better bounds than all other tractable formulations. In particular, the\nfactor to the integer optimum is much better than 2 in the experiments.\n",
"title": "MIP Formulations for the Steiner Forest Problem"
}
| null | null | null | null | true | null |
16724
| null |
Default
| null | null |
null |
{
"abstract": " We analyze the origins of the luminescence in germania-silica fibers with\nhigh germanium concentration (about 30 mol. % GeO2) in the region 1-2 {\\mu}m\nwith a laser pump at the wavelength 532 nm. We show that such fibers\ndemonstrate a high level of luminescence, which is unlikely to allow the\nobservation of photon triplets generated in a third-order spontaneous\nparametric down-conversion process in such fibers. The only efficient approach\nto the luminescence reduction is the hydrogen saturation of fiber samples;\nhowever, even in this case the level of residual luminescence is still too high\nfor three-photon registration.\n",
"title": "Luminescence in germania-silica fibers in 1-2 μm region"
}
| null | null | null | null | true | null |
16725
| null |
Default
| null | null |
null |
{
"abstract": " We define a new class of languages of $\\omega$-words, strictly extending\n$\\omega$-regular languages.\nOne way to present this new class is by a type of regular expressions. The\nnew expressions are an extension of $\\omega$-regular expressions where two new\nvariants of the Kleene star $L^*$ are added: $L^B$ and $L^S$. These new\nexponents are used to say that parts of the input word have bounded size, and\nthat parts of the input can have arbitrarily large sizes, respectively. For\ninstance, the expression $(a^Bb)^\\omega$ represents the language of infinite\nwords over the letters $a,b$ where there is a common bound on the number of\nconsecutive letters $a$. The expression $(a^Sb)^\\omega$ represents a similar\nlanguage, but this time the distance between consecutive $b$'s is required to\ntend toward the infinite.\nWe develop a theory for these languages, with a focus on decidability and\nclosure. We define an equivalent automaton model, extending Büchi automata.\nThe main technical result is a complementation lemma that works for languages\nwhere only one type of exponent---either $L^B$ or $L^S$---is used.\nWe use the closure and decidability results to obtain partial decidability\nresults for the logic MSOLB, a logic obtained by extending monadic second-order\nlogic with new quantifiers that speak about the size of sets.\n",
"title": "Boundedness in languages of infinite words"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16726
| null |
Validated
| null | null |
null |
{
"abstract": " Maximum regularized likelihood estimators (MRLEs) are arguably the most\nestablished class of estimators in high-dimensional statistics. In this paper,\nwe derive guarantees for MRLEs in Kullback-Leibler divergence, a general\nmeasure of prediction accuracy. We assume only that the densities have a convex\nparametrization and that the regularization is definite and positive\nhomogenous. The results thus apply to a very large variety of models and\nestimators, such as tensor regression and graphical models with convex and\nnon-convex regularized methods. A main conclusion is that MRLEs are broadly\nconsistent in prediction - regardless of whether restricted eigenvalues or\nsimilar conditions hold.\n",
"title": "Maximum Regularized Likelihood Estimators: A General Prediction Theory and Applications"
}
| null | null | null | null | true | null |
16727
| null |
Default
| null | null |
null |
{
"abstract": " While the enhancement of the spin-space symmetry from the usual\n$\\mathrm{SU}(2)$ to $\\mathrm{SU}(N)$ is promising for finding nontrivial\nquantum spin liquids, its realization in magnetic materials remains\nchallenging. Here we propose a new mechanism by which the $\\mathrm{SU}(4)$\nsymmetry emerges in the strong spin-orbit coupling limit. In $d^1$ transition\nmetal compounds with edge-sharing anion octahedra, the spin-orbit coupling\ngives rise to strongly bond-dependent and apparently $\\mathrm{SU}(4)$-breaking\nhopping between the $J_\\textrm{eff}=3/2$ quartets. However, in the honeycomb\nstructure, a gauge transformation maps the system to an\n$\\mathrm{SU}(4)$-symmetric Hubbard model. In the strong repulsion limit at\nquarter filling, as realized in $\\alpha$-ZrCl$_3,$ the low-energy effective\nmodel is the $\\mathrm{SU}(4)$ Heisenberg model on the honeycomb lattice, which\ncannot have a trivial gapped ground state and is expected to host a gapless\nspin-orbital liquid. By generalizing this model to other three-dimensional\nlattices, we also propose crystalline spin-orbital liquids protected by this\nemergent $\\mathrm{SU}(4)$ symmetry and space group symmetries.\n",
"title": "Emergent $\\mathrm{SU}(4)$ Symmetry in $α$-ZrCl$_3$ and Crystalline Spin-Orbital Liquids"
}
| null | null | null | null | true | null |
16728
| null |
Default
| null | null |
null |
{
"abstract": " From Morita theoretic viewpoint, computing Morita invariants is important. We\nprove that the intersection of the center and the $n$th (right) socle $ZS^n(A)\n:= Z(A) \\cap \\operatorname{Soc}^n(A)$ of a finite-dimensional algebra $A$ is a\nMorita invariant; This is a generalization of important Morita invariants ---\nthe center $Z(A)$ and the Reynolds ideal $ZS^1(A)$. As an example, we also\nstudied $ZS^n(FG)$ for the group algebra $FG$ of a finite $p$-group $G$ over a\nfield $F$ of positive characteristic $p$. Such an algebra has a basis along the\nsocle filtration, known as the Jennings basis. We prove certain elements of the\nJennings basis are central and hence form a linearly independent set of\n$ZS^n(FG)$. In fact, such elements form a basis of $ZS^n(FG)$ for every integer\n$1 \\le n \\le p$ if $G$ is powerful. As a corollary we have\n$\\operatorname{Soc}^p(FG) \\subseteq Z(FG)$ if $G$ is powerful.\n",
"title": "Central elements of the Jennings basis and certain Morita invariants"
}
| null | null | null | null | true | null |
16729
| null |
Default
| null | null |
null |
{
"abstract": " Limited annotated data available for the recognition of facial expression and\naction units hampers the training of deep networks, which can learn\ndisentangled invariant features. However, a linear model with just several\nparameters normally is not demanding in terms of training data. In this paper,\nwe propose an elegant linear model to untangle confounding factors in\nchallenging realistic multichannel signals such as 2D face videos. The simple\nyet powerful model does not rely on huge training data and is natural for\nrecognizing facial actions without explicitly disentangling the identity. Based\non well-understood intuitive linear models such as Sparse Representation based\nClassification (SRC), previous attempts require a preprocessing step of explicit\ndecoupling which is practically inexact. Instead, we exploit the low-rank\nproperty across frames to subtract the underlying neutral faces, which are\nmodeled jointly with sparse representation on the action components with group\nsparsity enforced. On the extended Cohn-Kanade dataset (CK+), our one-shot\nautomatic method on raw face videos performs as competitively as SRC applied on\nmanually prepared action components and performs even better than SRC in terms\nof true positive rate. We apply the model to the even more challenging task of\nfacial action unit recognition, verified on the MPI Face Video Database\n(MPI-VDB), achieving a decent performance. All the programs and data have been\nmade publicly available.\n",
"title": "Linear Disentangled Representation Learning for Facial Actions"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
16730
| null |
Validated
| null | null |
null |
{
"abstract": " Generating molecules with desired chemical properties is important for drug\ndiscovery. The use of generative neural networks is promising for this task.\nHowever, from visual inspection, it often appears that generated samples lack\ndiversity. In this paper, we quantify this internal chemical diversity, and we\nraise the following challenge: can a nontrivial AI model reproduce natural\nchemical diversity for desired molecules? To illustrate this question, we\nconsider two generative models: a Reinforcement Learning model and the recently\nintroduced ORGAN. Both fail at this challenge. We hope this challenge will\nstimulate research in this direction.\n",
"title": "ChemGAN challenge for drug discovery: can AI reproduce natural chemical diversity?"
}
| null | null | null | null | true | null |
16731
| null |
Default
| null | null |
null |
{
"abstract": " This survey is a short version of a chapter written by the first two authors\nin the book [A. Henrot, editor. Shape optimization and spectral theory. Berlin:\nDe Gruyter, 2017] (where more details and references are given) but we have\ndecided here to put more emphasis on the role of the Aharonov-Bohm operators\nwhich appear to be a useful tool coming from physics for understanding a\nproblem motivated either by spectral geometry or dynamics of population.\nSimilar questions appear also in Bose-Einstein theory. Finally some open\nproblems which might be of interest are mentioned.\n",
"title": "Nodal domains, spectral minimal partitions, and their relation to Aharonov-Bohm operators"
}
| null | null | null | null | true | null |
16732
| null |
Default
| null | null |
null |
{
"abstract": " Optimization of the fidelity of control operations is of critical importance\nin the pursuit of fault-tolerant quantum computation. We apply optimal control\ntechniques to demonstrate that a single drive via the cavity in circuit quantum\nelectrodynamics can implement a high-fidelity two-qubit all-microwave gate that\ndirectly entangles the qubits via the mutual qubit-cavity couplings. This is\nperformed by driving at one of the qubits' frequencies which generates a\nconditional two-qubit gate, but will also generate other spurious interactions.\nThese optimal control techniques are used to find pulse shapes that can perform\nthis two-qubit gate with high fidelity, robust against errors in the system\nparameters. The simulations were all performed using experimentally relevant\nparameters and constraints.\n",
"title": "Optimal control of two qubits via a single cavity drive in circuit quantum electrodynamics"
}
| null | null |
[
"Physics"
] | null | true | null |
16733
| null |
Validated
| null | null |
null |
{
"abstract": " Layered neural networks have greatly improved the performance of various\napplications including image processing, speech recognition, natural language\nprocessing, and bioinformatics. However, it is still difficult to discover or\ninterpret knowledge from the inference provided by a layered neural network,\nsince its internal representation has many nonlinear and complex parameters\nembedded in hierarchical layers. Therefore, it becomes important to establish a\nnew methodology by which layered neural networks can be understood.\nIn this paper, we propose a new method for extracting a global and simplified\nstructure from a layered neural network. Based on network analysis, the\nproposed method detects communities or clusters of units with similar\nconnection patterns. We show its effectiveness by applying it to three use\ncases. (1) Network decomposition: it can decompose a trained neural network\ninto multiple small independent networks thus dividing the problem and reducing\nthe computation time. (2) Training assessment: the appropriateness of a trained\nresult with a given hyperparameter or randomly chosen initial parameters can be\nevaluated by using a modularity index. And (3) data analysis: in practical data\nit reveals the community structure in the input, hidden, and output layers,\nwhich serves as a clue for discovering knowledge from a trained neural network.\n",
"title": "Modular Representation of Layered Neural Networks"
}
| null | null | null | null | true | null |
16734
| null |
Default
| null | null |
null |
{
"abstract": " This Chapter, \"A Guide to General-Purpose ABC Software\", is to appear in the\nforthcoming Handbook of Approximate Bayesian Computation (2018). We present\ngeneral-purpose software to perform Approximate Bayesian Computation (ABC) as\nimplemented in the R-packages abc and EasyABC and the c++ program ABCtoolbox.\nWith simple toy models we demonstrate how to perform parameter inference, model\nselection, validation and optimal choice of summary statistics. We demonstrate\nhow to combine ABC with Markov Chain Monte Carlo and describe a realistic\npopulation genetics application.\n",
"title": "A Guide to General-Purpose Approximate Bayesian Computation Software"
}
| null | null | null | null | true | null |
16735
| null |
Default
| null | null |
null |
{
"abstract": " Feature selection can facilitate the learning of mixtures of discrete random\nvariables as they arise, e.g. in crowdsourcing tasks. Intuitively, not all\nworkers are equally reliable but, if the less reliable ones could be\neliminated, then learning should be more robust. By analogy with Gaussian\nmixture models, we seek a low-order statistical approach, and here introduce an\nalgorithm based on the (pairwise) mutual information. This induces an order\nover workers that is well structured for the `one coin' model. More generally,\nit is justified by a goodness-of-fit measure and is validated empirically.\nImprovement in real data sets can be substantial.\n",
"title": "Feature Selection Facilitates Learning Mixtures of Discrete Product Distributions"
}
| null | null | null | null | true | null |
16736
| null |
Default
| null | null |
null |
{
"abstract": " Robots have the potential to assist people in bed, such as in healthcare\nsettings, yet bedding materials like sheets and blankets can make observation\nof the human body difficult for robots. A pressure-sensing mat on a bed can\nprovide pressure images that are relatively insensitive to bedding materials.\nHowever, prior work on estimating human pose from pressure images has been\nrestricted to 2D pose estimates and flat beds. In this work, we present two\nconvolutional neural networks to estimate the 3D joint positions of a person in\na configurable bed from a single pressure image. The first network directly\noutputs 3D joint positions, while the second outputs a kinematic model that\nincludes estimated joint angles and limb lengths. We evaluated our networks on\ndata from 17 human participants with two bed configurations: supine and seated.\nOur networks achieved a mean joint position error of 77 mm when tested with\ndata from people outside the training set, outperforming several baselines. We\nalso present a simple mechanical model that provides insight into ambiguity\nassociated with limbs raised off of the pressure mat, and demonstrate that\nMonte Carlo dropout can be used to estimate pose confidence in these\nsituations. Finally, we provide a demonstration in which a mobile manipulator\nuses our network's estimated kinematic model to reach a location on a person's\nbody in spite of the person being seated in a bed and covered by a blanket.\n",
"title": "3D Human Pose Estimation on a Configurable Bed from a Pressure Image"
}
| null | null | null | null | true | null |
16737
| null |
Default
| null | null |
null |
{
"abstract": " Hierarchical attention networks have recently achieved remarkable performance\nfor document classification in a given language. However, when multilingual\ndocument collections are considered, training such models separately for each\nlanguage entails linear parameter growth and lack of cross-language transfer.\nLearning a single multilingual model with fewer parameters is therefore a\nchallenging but potentially beneficial objective. To this end, we propose\nmultilingual hierarchical attention networks for learning document structures,\nwith shared encoders and/or shared attention mechanisms across languages, using\nmulti-task learning and an aligned semantic space as input. We evaluate the\nproposed models on multilingual document classification with disjoint label\nsets, on a large dataset which we provide, with 600k news documents in 8\nlanguages, and 5k labels. The multilingual models outperform monolingual ones\nin low-resource as well as full-resource settings, and use fewer parameters,\nthus confirming their computational efficiency and the utility of\ncross-language transfer.\n",
"title": "Multilingual Hierarchical Attention Networks for Document Classification"
}
| null | null | null | null | true | null |
16738
| null |
Default
| null | null |
null |
{
"abstract": " Let $t$ be a positive real number. A graph is called $t$-tough, if the\nremoval of any cutset $S$ leaves at most $|S|/t$ components. The toughness of a\ngraph is the largest $t$ for which the graph is $t$-tough. A graph is minimally\n$t$-tough, if the toughness of the graph is $t$ and the deletion of any edge\nfrom the graph decreases the toughness. The complexity class DP is the set of\nall languages that can be expressed as the intersection of a language in NP and\na language in coNP. We prove that recognizing minimally $t$-tough graphs is\nDP-complete for any positive integer $t$ and for any positive rational number\n$t \\leq 1/2$.\n",
"title": "The complexity of recognizing minimally tough graphs"
}
| null | null | null | null | true | null |
16739
| null |
Default
| null | null |
null |
{
"abstract": " In vitro and in vivo spiking activity clearly differ. Whereas networks in\nvitro develop strong bursts separated by periods of very little spiking\nactivity, in vivo cortical networks show continuous activity. This is puzzling\nconsidering that both networks presumably share similar single-neuron dynamics\nand plasticity rules. We propose that the defining difference between in vitro\nand in vivo dynamics is the strength of external input. In vitro, networks are\nvirtually isolated, whereas in vivo every brain area receives continuous input.\nWe analyze a model of spiking neurons in which the input strength, mediated by\nspike rate homeostasis, determines the characteristics of the dynamical state.\nIn more detail, our analytical and numerical results on various network\ntopologies show consistently that under increasing input, homeostatic\nplasticity generates distinct dynamic states, from bursting, to\nclose-to-critical, reverberating and irregular states. This implies that the\ndynamic state of a neural network is not fixed but can readily adapt to the\ninput strengths. Indeed, our results match experimental spike recordings in\nvitro and in vivo: the in vitro bursting behavior is consistent with a state\ngenerated by very low network input (< 0.1%), whereas in vivo activity suggests\nthat on the order of 1% recorded spikes are input-driven, resulting in\nreverberating dynamics. Importantly, this predicts that one can abolish the\nubiquitous bursts of in vitro preparations, and instead impose dynamics\ncomparable to in vivo activity by exposing the system to weak long-term\nstimulation, thereby opening new paths to establish an in vivo-like assay in\nvitro for basic as well as neurological studies.\n",
"title": "Homeostatic plasticity and external input shape neural network dynamics"
}
| null | null |
[
"Quantitative Biology"
] | null | true | null |
16740
| null |
Validated
| null | null |
null |
{
"abstract": " We propose a simple objective evaluation measure for explanations of a\ncomplex black-box machine learning model. While most such model explanations\nhave largely been evaluated via qualitative measures, such as how humans might\nqualitatively perceive the explanations, it is vital to also consider objective\nmeasures such as the one we propose in this paper. Our evaluation measure that\nwe naturally call sensitivity is simple: it characterizes how an explanation\nchanges as we vary the test input, and depending on how we measure these\nchanges, and how we vary the input, we arrive at different notions of\nsensitivity. We also provide a calculus for deriving sensitivity of complex\nexplanations in terms of that for simpler explanations, which thus allows an\neasy computation of sensitivities for yet to be proposed explanations. One\nadvantage of an objective evaluation measure is that we can optimize the\nexplanation with respect to the measure: we show that (1) any given explanation\ncan be simply modified to improve its sensitivity with just a modest deviation\nfrom the original explanation, and (2) gradient based explanations of an\nadversarially trained network are less sensitive. Perhaps surprisingly, our\nexperiments show that explanations optimized to have lower sensitivity can be\nmore faithful to the model predictions.\n",
"title": "How Sensitive are Sensitivity-Based Explanations?"
}
| null | null | null | null | true | null |
16741
| null |
Default
| null | null |
null |
{
"abstract": " We present many new results related to reliable (interactive) communication\nover insertion-deletion channels. Synchronization errors, such as insertions\nand deletions, strictly generalize the usual symbol corruption errors and are\nmuch harder to protect against.\nWe show how to hide the complications of synchronization errors in many\napplications by introducing very general channel simulations which efficiently\ntransform an insertion-deletion channel into a regular symbol corruption\nchannel with an error rate larger by a constant factor and a slightly smaller\nalphabet. We generalize synchronization string based methods which were\nrecently introduced as a tool to design essentially optimal error correcting\ncodes for insertion-deletion channels. Our channel simulations depend on the\nfact that, at the cost of increasing the error rate by a constant factor,\nsynchronization strings can be decoded in a streaming manner that preserves\nlinearity of time. We also provide a lower bound showing that this constant\nfactor cannot be improved to $1+\\epsilon$, in contrast to what is achievable\nfor error correcting codes. Our channel simulations drastically generalize the\napplicability of synchronization strings.\nWe provide new interactive coding schemes which simulate any interactive\ntwo-party protocol over an insertion-deletion channel. Our results improve over\nthe interactive coding schemes of Braverman et al. [TransInf 2017] and Sherstov\nand Wu [FOCS 2017], which achieve a small constant rate and require exponential\ntime computations, with respect to computational and communication\ncomplexities. We provide the first computationally efficient interactive coding\nschemes for synchronization errors, the first coding scheme with a rate\napproaching one for small noise rates, and also the first coding scheme that\nworks over arbitrarily small alphabet sizes.\n",
"title": "Synchronization Strings: Channel Simulations and Interactive Coding for Insertions and Deletions"
}
| null | null | null | null | true | null |
16742
| null |
Default
| null | null |
null |
{
"abstract": " In this work, we investigate a novel training procedure to learn a generative\nmodel as the transition operator of a Markov chain, such that, when applied\nrepeatedly on an unstructured random noise sample, it will denoise it into a\nsample that matches the target distribution from the training set. The novel\ntraining procedure to learn this progressive denoising operation involves\nsampling from a slightly different chain than the model chain used for\ngeneration in the absence of a denoising target. In the training chain we\ninfuse information from the training target example that we would like the\nchains to reach with a high probability. The thus learned transition operator\nis able to produce quality and varied samples in a small number of steps.\nExperiments show competitive results compared to the samples generated with a\nbasic Generative Adversarial Net\n",
"title": "Learning to Generate Samples from Noise through Infusion Training"
}
| null | null | null | null | true | null |
16743
| null |
Default
| null | null |
null |
{
"abstract": " Pairwise comparison data arises in many domains, including tournament\nrankings, web search, and preference elicitation. Given noisy comparisons of a\nfixed subset of pairs of items, we study the problem of estimating the\nunderlying comparison probabilities under the assumption of strong stochastic\ntransitivity (SST). We also consider the noisy sorting subclass of the SST\nmodel. We show that when the assignment of items to the topology is arbitrary,\nthese permutation-based models, unlike their parametric counterparts, do not\nadmit consistent estimation for most comparison topologies used in practice. We\nthen demonstrate that consistent estimation is possible when the assignment of\nitems to the topology is randomized, thus establishing a dichotomy between\nworst-case and average-case designs. We propose two estimators in the\naverage-case setting and analyze their risk, showing that it depends on the\ncomparison topology only through the degree sequence of the topology. The rates\nachieved by these estimators are shown to be optimal for a large class of\ngraphs. Our results are corroborated by simulations on multiple comparison\ntopologies.\n",
"title": "Worst-case vs Average-case Design for Estimation from Fixed Pairwise Comparisons"
}
| null | null | null | null | true | null |
16744
| null |
Default
| null | null |
null |
{
"abstract": " Decoupling multivariate polynomials is useful for obtaining an insight into\nthe workings of a nonlinear mapping, performing parameter reduction, or\napproximating nonlinear functions. Several different tensor-based approaches\nhave been proposed independently for this task, involving different tensor\nrepresentations of the functions, and ultimately leading to a canonical\npolyadic decomposition.\nWe first show that the involved tensors are related by a linear\ntransformation, and that their CP decompositions and uniqueness properties are\nclosely related. This connection provides a way to better assess which of the\nmethods should be favored in certain problem settings, and may be a starting\npoint to unify the two approaches. Second, we show that taking into account the\npreviously ignored intrinsic structure in the tensor decompositions improves\nthe uniqueness properties of the decompositions and thus enlarges the\napplicability range of the methods.\n",
"title": "Decoupling multivariate polynomials: interconnections between tensorizations"
}
| null | null | null | null | true | null |
16745
| null |
Default
| null | null |
null |
{
"abstract": " In a voice-controlled smart-home, a controller must respond not only to\nuser's requests but also according to the interaction context. This paper\ndescribes Arcades, a system which uses deep reinforcement learning to extract\ncontext from a graphical representation of home automation system and to update\ncontinuously its behavior to the user's one. This system is robust to changes\nin the environment (sensor breakdown or addition) through its graphical\nrepresentation (scale well) and the reinforcement mechanism (adapt well). The\nexperiments on realistic data demonstrate that this method promises to reach\nlong life context-aware control of smart-home.\n",
"title": "Arcades: A deep model for adaptive decision making in voice controlled smart-home"
}
| null | null | null | null | true | null |
16746
| null |
Default
| null | null |
null |
{
"abstract": " This paper introduces an evolutionary approach to enhance the process of\nfinding central nodes in mobile networks. This can provide essential\ninformation and important applications in mobile and social networks. This\nevolutionary approach considers the dynamics of the network and takes into\nconsideration the central nodes from previous time slots. We also study the\napplicability of maximal cliques algorithms in mobile social networks and how\nit can be used to find the central nodes based on the discovered maximal\ncliques. The experimental results are promising and show a significant\nenhancement in finding the central nodes.\n",
"title": "Evolutionary Centrality and Maximal Cliques in Mobile Social Networks"
}
| null | null | null | null | true | null |
16747
| null |
Default
| null | null |
null |
{
"abstract": " Plasmonics currently faces the problem of seemingly inevitable optical losses\noccurring in the metallic components that challenges the implementation of\nessentially any application. In this work we show that Ohmic losses are reduced\nin certain layered metals, such as the transition metal dichalcogenide TaS$_2$,\ndue to an extraordinarily small density of states for scattering in the near-IR\noriginating from their special electronic band structure. Based on this\nobservation we propose a new class of band structure engineered van der Waals\nlayered metals composed of hexagonal transition metal chalcogenide-halide\nlayers with greatly suppressed intrinsic losses. Using first-principles\ncalculations we show that the suppression of optical losses lead to improved\nperformance for thin film waveguiding and transformation optics.\n",
"title": "Band structure engineered layered metals for low-loss plasmonics"
}
| null | null | null | null | true | null |
16748
| null |
Default
| null | null |
null |
{
"abstract": " We consider the problem of minimizing a smooth convex function by reducing\nthe optimization to computing the Nash equilibrium of a particular zero-sum\nconvex-concave game. Zero-sum games can be solved using online learning\ndynamics, where a classical technique involves simulating two no-regret\nalgorithms that play against each other and, after $T$ rounds, the average\niterate is guaranteed to solve the original optimization problem with error\ndecaying as $O(\\log T/T)$. In this paper we show that the technique can be\nenhanced to a rate of $O(1/T^2)$ by extending recent work \\cite{RS13,SALS15}\nthat leverages \\textit{optimistic learning} to speed up equilibrium\ncomputation. The resulting optimization algorithm derived from this analysis\ncoincides \\textit{exactly} with the well-known \\NA \\cite{N83a} method, and\nindeed the same story allows us to recover several variants of the Nesterov's\nalgorithm via small tweaks. We are also able to establish the accelerated\nlinear rate for a function which is both strongly-convex and smooth. This\nmethodology unifies a number of different iterative optimization methods: we\nshow that the \\HB algorithm is precisely the non-optimistic variant of \\NA, and\nrecent prior work already established a similar perspective on \\FW\n\\cite{AW17,ALLW18}.\n",
"title": "Acceleration through Optimistic No-Regret Dynamics"
}
| null | null | null | null | true | null |
16749
| null |
Default
| null | null |
null |
{
"abstract": " Economic evaluations from individual-level data are an important component of\nthe process of technology appraisal, with a view to informing resource\nallocation decisions. A critical problem in these analyses is that both\neffectiveness and cost data typically present some complexity (e.g. non\nnormality, spikes and missingness) that should be addressed using appropriate\nmethods. However, in routine analyses, simple standardised approaches are\ntypically used, possibly leading to biased inferences. We present a general\nBayesian framework that can handle the complexity. We show the benefits of\nusing our approach with a motivating example, the MenSS trial, for which there\nare spikes at one in the effectiveness and missingness in both outcomes. We\ncontrast a set of increasingly complex models and perform sensitivity analysis\nto assess the robustness of the conclusions to a range of plausible missingness\nassumptions. This paper highlights the importance of adopting a comprehensive\nmodelling approach to economic evaluations and the strategic advantages of\nbuilding these complex models within a Bayesian framework.\n",
"title": "A Full Bayesian Model to Handle Structural Ones and Missingness in Economic Evaluations from Individual-Level Data"
}
| null | null | null | null | true | null |
16750
| null |
Default
| null | null |
null |
{
"abstract": " In the past 50 years, calorimeters have become the most important detectors\nin many particle physics experiments, especially experiments in colliding-beam\naccelerators at the energy frontier. In this paper, we describe and discuss a\nnumber of common misconceptions about these detectors, as well as the\nconsequences of these misconceptions. We hope that it may serve as a useful\nsource of information for young colleagues who want to familiarize themselves\nwith these tricky instruments.\n",
"title": "Misconceptions about Calorimetry"
}
| null | null | null | null | true | null |
16751
| null |
Default
| null | null |
null |
{
"abstract": " Recent experiments have revealed that the diffusivity of exothermic and fast\nenzymes is enhanced when they are catalytically active, and different physical\nmechanisms have been explored and quantified to account for this observation.\nWe perform measurements on the endothermic and relatively slow enzyme aldolase,\nwhich also shows substrate-induced enhanced diffusion. We propose a new\nphysical paradigm, which reveals that the diffusion coefficient of a model\nenzyme hydrodynamically coupled to its environment increases significantly when\nundergoing changes in conformational fluctuations in a substrate-dependent\nmanner, and is independent of the overall turnover rate of the underlying\nenzymatic reaction. Our results show that substrate-induced enhanced diffusion\nof enzyme molecules can be explained within an equilibrium picture, and that\nthe exothermicity of the catalyzed reaction is not a necessary condition for\nthe observation of this phenomenon.\n",
"title": "Exothermicity is not a necessary condition for enhanced diffusion of enzymes"
}
| null | null | null | null | true | null |
16752
| null |
Default
| null | null |
null |
{
"abstract": " This paper introduces a new approach to automatically quantify the severity\nof knee OA using X-ray images. Automatically quantifying knee OA severity\ninvolves two steps: first, automatically localizing the knee joints; next,\nclassifying the localized knee joint images. We introduce a new approach to\nautomatically detect the knee joints using a fully convolutional neural network\n(FCN). We train convolutional neural networks (CNN) from scratch to\nautomatically quantify the knee OA severity optimizing a weighted ratio of two\nloss functions: categorical cross-entropy and mean-squared loss. This joint\ntraining further improves the overall quantification of knee OA severity, with\nthe added benefit of naturally producing simultaneous multi-class\nclassification and regression outputs. Two public datasets are used to evaluate\nour approach, the Osteoarthritis Initiative (OAI) and the Multicenter\nOsteoarthritis Study (MOST), with extremely promising results that outperform\nexisting approaches.\n",
"title": "Automatic Detection of Knee Joints and Quantification of Knee Osteoarthritis Severity using Convolutional Neural Networks"
}
| null | null | null | null | true | null |
16753
| null |
Default
| null | null |
null |
{
"abstract": " We propose a data-driven filtered reduced order model (DDF-ROM) framework for\nthe numerical simulation of fluid flows. The novel DDF-ROM framework consists\nof two steps: (i) In the first step, we use explicit ROM spatial filtering of\nthe nonlinear PDE to construct a filtered ROM. This filtered ROM is\nlow-dimensional, but is not closed (because of the nonlinearity in the given\nPDE). (ii) In the second step, we use data-driven modeling to close the\nfiltered ROM, i.e., to model the interaction between the resolved and\nunresolved modes. To this end, we use a quadratic ansatz to model this\ninteraction and close the filtered ROM. To find the new coefficients in the\nclosed filtered ROM, we solve an optimization problem that minimizes the\ndifference between the full order model data and our ansatz. We emphasize that\nthe new DDF-ROM is built on general ideas of spatial filtering and optimization\nand is independent of (restrictive) phenomenological arguments.\nWe investigate the DDF-ROM in the numerical simulation of a 2D channel flow\npast a circular cylinder at Reynolds number $Re=100$. The DDF-ROM is\nsignificantly more accurate than the standard projection ROM. Furthermore, the\ncomputational costs of the DDF-ROM and the standard projection ROM are similar,\nboth costs being orders of magnitude lower than the computational cost of the\nfull order model. We also compare the new DDF-ROM with modern ROM closure\nmodels in the numerical simulation of the 1D Burgers equation. The DDF-ROM is\nmore accurate and significantly more efficient than these ROM closure models.\n",
"title": "Data-Driven Filtered Reduced Order Modeling Of Fluid Flows"
}
| null | null | null | null | true | null |
16754
| null |
Default
| null | null |
null |
{
"abstract": " The linear FEAST algorithm is a method for solving linear eigenvalue\nproblems. It uses complex contour integration to calculate the eigenvectors\nwhose eigenvalues that are located inside some user-defined region in the\ncomplex plane. This makes it possible to parallelize the process of solving\neigenvalue problems by simply dividing the complex plane into a collection of\ndisjoint regions and calculating the eigenpairs in each region independently of\nthe eigenpairs in the other regions. In this paper we present a generalization\nof the linear FEAST algorithm that can be used to solve nonlinear eigenvalue\nproblems. Like its linear progenitor, the nonlinear FEAST algorithm can be used\nto solve nonlinear eigenvalue problems for the eigenpairs whose eigenvalues lie\nin a user-defined region in the complex plane, thereby allowing for the\ncalculation of large numbers of eigenpairs in parallel. We describe the\nnonlinear FEAST algorithm, and use several physically-motivated examples to\ndemonstrate its properties.\n",
"title": "FEAST Eigensolver for Nonlinear Eigenvalue Problems"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16755
| null |
Validated
| null | null |
null |
{
"abstract": " A recent heuristic argument based on basic concepts in spectral analysis\nshowed that the twin prime conjecture and a few other related primes counting\nproblems are valid. A rigorous version of the spectral method, and a proof for\nthe existence of infinitely many quadratic twin primes $n^{2}+1$ and $n^{2}+3$,\n$n \\geq 1$, are proposed in this note.\n",
"title": "Twin Primes In Quadratic Arithmetic Progressions"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16756
| null |
Validated
| null | null |
null |
{
"abstract": " We establish upper bounds of bit complexity of computing solution operators\nfor symmetric hyperbolic systems of PDEs. Here we continue the research started\nin in our revious publications where computability, in the rigorous sense of\ncomputable analysis, has been established for solution operators of Cauchy and\ndissipative boundary-value problems for such systems.\n",
"title": "Bit Complexity of Computing Solutions for Symmetric Hyperbolic Systems of PDEs with Guaranteed Precision"
}
| null | null | null | null | true | null |
16757
| null |
Default
| null | null |
null |
{
"abstract": " We recalculate the leading relativistic corrections for the ground electronic\nstate of the hydrogen molecule using variational method with explicitly\ncorrelated functions which satisfy the interelectronic cusp condition. The new\ncomputational approach allowed for the control of the numerical precision which\nreached about 8 significant digits. More importantly, the updated theoretical\nenergies became discrepant with the known experimental values and we conclude\nthat the yet unknown relativistic recoil corrections might be larger than\npreviously anticipated.\n",
"title": "Relativistic corrections for the ground electronic state of molecular hydrogen"
}
| null | null |
[
"Physics"
] | null | true | null |
16758
| null |
Validated
| null | null |
null |
{
"abstract": " We establish interior Lipschitz estimates at the macroscopic scale for\nsolutions to systems of linear elasticity with rapidly oscillating periodic\ncoefficients and mixed boundary conditions in domains periodically perforated\nat a microscopic scale $\\varepsilon$ by establishing $H^1$-convergence rates\nfor such solutions. The interior estimates are derived directly without the use\nof compactness via an argument presented in [3] that was adapted for elliptic\nequations in [2] and [11]. As a consequence, we derive a Liouville type\nestimate for solutions to the systems of linear elasticity in unbounded\nperiodically perforated domains.\n",
"title": "Homogenization in Perforated Domains and Interior Lipschitz Estimates"
}
| null | null | null | null | true | null |
16759
| null |
Default
| null | null |
null |
{
"abstract": " Most people simultaneously belong to several distinct social networks, in\nwhich their relations can be different. They have opinions about certain\ntopics, which they share and spread on these networks, and are influenced by\nthe opinions of other persons. In this paper, we build upon this observation to\npropose a new nodal centrality measure for multiplex networks. Our measure,\ncalled Opinion centrality, is based on a stochastic model representing opinion\npropagation dynamics in such a network. We formulate an optimization problem\nconsisting in maximizing the opinion of the whole network when controlling an\nexternal influence able to affect each node individually. We find a\nmathematical closed form of this problem, and use its solution to derive our\ncentrality measure. According to the opinion centrality, the more a node is\nworth investing external influence in, the more central it is. We perform an\nempirical study of the proposed centrality over a toy network, as well as a\ncollection of real-world networks. Our measure is generally negatively\ncorrelated with existing multiplex centrality measures, and highlights\ndifferent types of nodes, according to its definition.\n",
"title": "Opinion-Based Centrality in Multiplex Networks: A Convex Optimization Approach"
}
| null | null |
[
"Computer Science",
"Physics"
] | null | true | null |
16760
| null |
Validated
| null | null |
null |
{
"abstract": " We present a proof of concept for solving a 1+1D complex-valued, delay\npartial differential equation (PDE) that emerges in the study of waveguide\nquantum electrodynamics (QED) by adapting the finite-difference time-domain\n(FDTD) method. The delay term is spatially non-local, rendering conventional\napproaches such as the method of lines inapplicable. We show that by properly\ndesigning the grid and by supplying the (partial) exact solution as the\nboundary condition, the delay PDE can be numerically solved. In addition, we\ndemonstrate that while the delay imposes strong data dependency, multi-thread\nparallelization can nevertheless be applied to such a problem. Our code\nprovides a numerically exact solution to the time-dependent multi-photon\nscattering problem in waveguide QED.\n",
"title": "FDTD: solving 1+1D delay PDE in parallel"
}
| null | null | null | null | true | null |
16761
| null |
Default
| null | null |
null |
{
"abstract": " Range anxiety, the persistent worry about not having enough battery power to\ncomplete a trip, remains one of the major obstacles to widespread\nelectric-vehicle adoption. As cities look to attract more users to adopt\nelectric vehicles, the emergence of wireless in-motion car charging technology\npresents itself as a solution to range anxiety. For a limited budget, cities\ncould face the decision problem of where to install these wireless charging\nunits. With a heavy price tag, an installation without a careful study can lead\nto inefficient use of limited resources. In this work, we model the\ninstallation of wireless charging units as an integer programming problem. We\nuse our basic formulation as a building block for different realistic\nscenarios, carry out experiments using real geospatial data, and compare our\nresults to different heuristics.\n",
"title": "Optimal Installation for Electric Vehicle Wireless Charging Lanes"
}
| null | null | null | null | true | null |
16762
| null |
Default
| null | null |
null |
{
"abstract": " The Nyström method is a popular technique for computing fixed-rank\napproximations of large kernel matrices using a small number of landmark\npoints. In practice, to ensure high quality approximations, the number of\nlandmark points is chosen to be greater than the target rank. However, the\nstandard Nyström method uses a sub-optimal procedure for rank reduction\nmainly due to its simplicity. In this paper, we highlight the drawbacks of\nstandard Nyström in terms of poor performance and lack of theoretical\nguarantees. To address these issues, we present an efficient method for\ngenerating improved fixed-rank Nyström approximations. Theoretical analysis\nand numerical experiments are provided to demonstrate the advantages of the\nmodified method over the standard Nyström method. Overall, the aim of this\npaper is to convince researchers to use the modified method, as it has nearly\nidentical computational complexity, is easy to code, and has greatly improved\naccuracy in many cases.\n",
"title": "Improved Fixed-Rank Nyström Approximation via QR Decomposition: Practical and Theoretical Aspects"
}
| null | null | null | null | true | null |
16763
| null |
Default
| null | null |
null |
{
"abstract": " We present a quantitative analysis on the response of a dilute active\nsuspension of self-propelled rods (swimmers) in a planar channel subjected to\nan imposed shear flow. To best capture the salient features of shear-induced\neffects, we consider the case of an imposed Couette flow, providing a constant\nshear rate across the channel. We argue that the steady-state behavior of\nswimmers can be understood in the light of a population splitting phenomenon,\noccurring as the shear rate exceeds a certain threshold, initiating the\nreversal of swimming direction for a finite fraction of swimmers from down- to\nupstream or vice versa, depending on swimmer position within the channel.\nSwimmers thus split into two distinct, statistically significant and oppositely\nswimming majority and minority populations. The onset of population splitting\ntranslates into a transition from a self-propulsion-dominated regime to a\nshear-dominated regime, corresponding to a unimodal-to-bimodal change in the\nprobability distribution function of the swimmer orientation. We present a\nphase diagram in terms of the swim and flow Peclet numbers showing the\nseparation of these two regimes by a discontinuous transition line. Our results\nshed further light on the behavior of swimmers in a shear flow and provide an\nexplanation for the previously reported non-monotonic behavior of the mean,\nnear-wall, parallel-to-flow orientation of swimmers with increasing shear\nstrength.\n",
"title": "Population splitting of rodlike swimmers in Couette flow"
}
| null | null | null | null | true | null |
16764
| null |
Default
| null | null |
null |
{
"abstract": " Future multiprocessor chips will integrate many different units, each\ntailored to a specific computation. When designing such a system, the chip\narchitect must decide how to distribute limited system resources such as area,\npower, and energy among the computational units. We extend MultiAmdahl, an\nanalytical optimization technique for resource allocation in heterogeneous\narchitectures, for energy optimality under a variety of constant system power\nscenarios. We conclude that reduction in constant system power should be met by\nreallocating resources from general-purpose computing to heterogeneous\naccelerator-dominated computing, to keep the overall energy consumption at a\nminimum. We extend this conclusion to offer an intuition regarding\nenergy-optimal resource allocation in data center computing.\n",
"title": "MultiAmdahl: Optimal Resource Allocation in Heterogeneous Architectures"
}
| null | null | null | null | true | null |
16765
| null |
Default
| null | null |
null |
{
"abstract": " We give an explicit description for the weight three generator of the coset\nvertex operator algebra $C_{L_{\\widehat{\\sl_{n}}}(l,0)\\otimes\nL_{\\widehat{\\sl_{n}}}(1,0)}(L_{\\widehat{\\sl_{n}}}(l+1,0))$, for $n\\geq 2, l\\geq\n1$. Furthermore, we prove that the commutant\n$C_{L_{\\widehat{\\sl_{3}}}(l,0)\\otimes\nL_{\\widehat{\\sl_{3}}}(1,0)}(L_{\\widehat{\\sl_{3}}}(l+1,0))$ is isomorphic to the\n$\\W$-algebra $\\W_{-3+\\frac{l+3}{l+4}}(\\sl_3)$, which confirms the conjecture\nfor the $\\sl_3$ case that $C_{L_{\\widehat{\\frak g}}(l,0)\\otimes\nL_{\\widehat{\\frak g}}(1,0)}(L_{\\widehat{\\frak g}}(l+1,0))$ is isomorphic to\n$\\W_{-h+\\frac{l+h}{l+h+1}}(\\frak g)$ for simply-laced Lie algebras ${\\frak g}$\nwith its Coxeter number $h$ for a positive integer $l$.\n",
"title": "Coset Vertex Operator Algebras and $\\W$-Algebras"
}
| null | null | null | null | true | null |
16766
| null |
Default
| null | null |
null |
{
"abstract": " We prove the Moore and the Myhill property for strongly irreducible subshifts\nover right amenable and finitely right generated left homogeneous spaces with\nfinite stabilisers. Both properties together mean that the global transition\nfunction of each big-cellular automaton with finite set of states and finite\nneighbourhood over such a subshift is surjective if and only if it is\npre-injective. This statement is known as the Garden of Eden theorem.\nPre-injectivity means that two global configurations that differ at most on a\nfinite subset and have the same image under the global transition function must\nbe identical.\n",
"title": "The Moore and the Myhill Property For Strongly Irreducible Subshifts Of Finite Type Over Group Sets"
}
| null | null | null | null | true | null |
16767
| null |
Default
| null | null |
null |
{
"abstract": " Objective: The Learning Health System (LHS) requires integration of research\ninto routine practice. eSource or embedding clinical trial functionalities into\nroutine electronic health record (EHR) systems has long been put forward as a\nsolution to the rising costs of research. We aimed to create and validate an\neSource solution that would be readily extensible as part of a LHS.\nMaterials and Methods: The EU FP7 TRANSFoRm project's approach is based on\ndual modelling, using the Clinical Research Information Model (CRIM) and the\nClinical Data Integration Model of meaning (CDIM) to bridge the gap between\nclinical and research data structures, using the CDISC Operational Data Model\n(ODM) standard. Validation against GCP requirements was conducted in a clinical\nsite, and a cluster randomised evaluation by site nested into a live clinical\ntrial.\nResults: Using the form definition element of ODM, we linked precisely\nmodelled data queries to data elements, constrained against CDIM concepts, to\nenable automated patient identification for specific protocols and\nprepopulation of electronic case report forms (e-CRF). Both control and eSource\nsites recruited better than expected with no significant difference.\nCompleteness of clinical forms was significantly improved by eSource, but\nPatient Related Outcome Measures (PROMs) were less well completed on\nsmartphones than paper in this population.\nDiscussion: The TRANSFoRm approach provides an ontologically-based approach\nto eSource in a low-resource, heterogeneous, highly distributed environment,\nthat allows precise prospective mapping of data elements in the EHR.\nConclusion: Further studies using this approach to CDISC should optimise the\ndelivery of PROMS, whilst building a sustainable infrastructure for eSource\nwith research networks, trials units and EHR vendors.\n",
"title": "eSource for clinical trials: Implementation and evaluation of a standards-based approach in a real world trial"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16768
| null |
Validated
| null | null |
null |
{
"abstract": " Simplified Molecular Input Line Entry System (SMILES) is a single line text\nrepresentation of a unique molecule. One molecule can however have multiple\nSMILES strings, which is a reason that canonical SMILES have been defined,\nwhich ensures a one to one correspondence between SMILES string and molecule.\nHere the fact that multiple SMILES represent the same molecule is explored as a\ntechnique for data augmentation of a molecular QSAR dataset modeled by a long\nshort term memory (LSTM) cell based neural network. The augmented dataset was\n130 times bigger than the original. The network trained with the augmented\ndataset shows better performance on a test set when compared to a model built\nwith only one canonical SMILES string per molecule. The correlation coefficient\nR2 on the test set was improved from 0.56 to 0.66 when using SMILES\nenumeration, and the root mean square error (RMS) likewise fell from 0.62 to\n0.55. The technique also works in the prediction phase. By taking the average\nper molecule of the predictions for the enumerated SMILES a further improvement\nto a correlation coefficient of 0.68 and a RMS of 0.52 was found.\n",
"title": "SMILES Enumeration as Data Augmentation for Neural Network Modeling of Molecules"
}
| null | null | null | null | true | null |
16769
| null |
Default
| null | null |
null |
{
"abstract": " We approach the tomographic problem in terms of linear system of equations\n$A\\mathbf{x}=\\mathbf{p}$ in an $(M\\times N)$-sized lattice grid $\\mathcal{A}$.\nUsing a finite number of directions always yields the presence of ghosts, so\npreventing uniqueness. Ghosts can be managed by increasing the number of\ndirections, which implies that also the number of collected projections (also\ncalled bins) increases. Therefore, for a best performing outcome, a kind of\ncompromise should be sought among the number of employed directions, the number\nof collected projections, and the percentage of exactly reconstructed image. In\nthis paper we wish to investigate such a problem in the case of binary images.\nWe move from a theoretical result that allows uniqueness in $\\mathcal{A}$ with\njust four suitably selected X-ray directions. This is exploited in studying the\nstructure of the allowed ghosts in the given lattice grid. The knowledge of the\nghost sizes, combined with geometrical information concerning the real valued\nsolution of $A\\mathbf{x}=\\mathbf{p}$ having minimal Euclidean norm, leads to an\nexplicit implementation of the previously obtained uniqueness theorem. This\nprovides an easy binary algorithm (BRA) that, in the grid model, quickly\nreturns perfect noise-free tomographic reconstructions.\nThen we focus on the tomography-side relevant problem of reducing the number\nof collected projections and, in the meantime, preserving a good quality of\nreconstruction. It turns out that, using sets of just four suitable directions,\na high percentage of reconstructed pixels is preserved, even when the size of\nthe projection vector $\\mathbf{p}$ is considerably smaller than the size of the\nimage to be reconstructed.\nResults are commented and discussed, also showing applications of BRA on\nphantoms with different features.\n",
"title": "Binary Tomography Reconstructions With Few Projections"
}
| null | null | null | null | true | null |
16770
| null |
Default
| null | null |
null |
{
"abstract": " We give an elementary combinatorial proof of the following fact: Every real\nor complex analytic complete intersection germ X is equisingular -- in the\nsense of the Hilbert-Samuel function -- with a germ of an algebraic set defined\nby sufficiently long truncations of the defining equations of X.\n",
"title": "On finite determinacy of complete intersection singularities"
}
| null | null | null | null | true | null |
16771
| null |
Default
| null | null |
null |
{
"abstract": " Composite materials comprised of ferroelectric nanoparticles in a dielectric\nmatrix are being actively investigated for a variety of functional properties\nattractive for a wide range of novel electronic and energy harvesting devices.\nHowever, the dependence of these functionalities on shapes, sizes, orientation\nand mutual arrangement of ferroelectric particles is currently not fully\nunderstood. In this study, we utilize a time-dependent Ginzburg-Landau approach\ncombined with coupled-physics finite-element-method based simulations to\nelucidate the behavior of polarization in isolated spherical PbTiO3 or BaTiO3\nnanoparticles embedded in a dielectric medium, including air. The equilibrium\npolarization topology is strongly affected by particle diameter, as well as the\nchoice of inclusion and matrix materials, with monodomain, vortex-like and\nmultidomain patterns emerging for various combinations of size and materials\nparameters. This leads to radically different polarization vs electric field\nresponses, resulting in highly tunable size-dependent dielectric properties\nthat should be possible to observe experimentally. Our calculations show that\nthere is a critical particle size below which ferroelectricity vanishes. For\nthe PbTiO3 particle, this size is 2 and 3.4 nm, respectively, for high- and\nlow-permittivity media. For the BaTiO3 particle, it is ~3.6 nm regardless of\nthe medium dielectric strength.\n",
"title": "Topological phase transformations and intrinsic size effects in ferroelectric nanoparticles"
}
| null | null | null | null | true | null |
16772
| null |
Default
| null | null |
null |
{
"abstract": " We study the kernel of the evaluated Burau representation through the braid\nelement $\\sigma_i \\sigma_{i+1} \\sigma_i$. The element is significant as a part\nof the standard braid relation. We establish the form of this element's image\nraised to the $n^{th}$ power. Interestingly, the cyclotomic polynomials arise\nand can be used to define the expression. The main result of this paper is that\nthe Burau representation of the braid group of $n$ strands for $n \\geq 3$ is\nunfaithful at any primitive root of unity, excepting the first three.\n",
"title": "Honors Thesis: On the faithfulness of the Burau representation at roots of unity"
}
| null | null | null | null | true | null |
16773
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we study possibilities of interpolation and symbol elimination\nin extensions of a theory $\\mathcal{T}_0$ with additional function symbols\nwhose properties are axiomatised using a set of clauses. We analyze situations\nin which we can perform such tasks in a hierarchical way, relying on existing\nmechanisms for symbol elimination in $\\mathcal{T}_0$. This is for instance\npossible if the base theory allows quantifier elimination. We analyze\npossibilities of extending such methods to situations in which the base theory\ndoes not allow quantifier elimination but has a model completion which does. We\nillustrate the method on various examples.\n",
"title": "On Interpolation and Symbol Elimination in Theory Extensions"
}
| null | null | null | null | true | null |
16774
| null |
Default
| null | null |
null |
{
"abstract": " We have constructed the database of stars in the local group using the\nextended version of the SAGA (Stellar Abundances for Galactic Archaeology)\ndatabase that contains stars in 24 dwarf spheroidal galaxies and ultra faint\ndwarfs. The new version of the database includes more than 4500 stars in the\nMilky Way, by removing the previous metallicity criterion of [Fe/H] <= -2.5,\nand more than 6000 stars in the local group galaxies. We examined the validity of\nusing a combined data set for elemental abundances. We also checked the\nconsistency between the derived distances to individual stars and those to\ngalaxies in the literature values. Using the updated database, the\ncharacteristics of stars in dwarf galaxies are discussed. Our statistical\nanalyses of alpha-element abundances show that the change of the slope of the\n[alpha/Fe] relative to [Fe/H] (so-called \"knee\") occurs at [Fe/H] = -1.0+-0.1\nfor the Milky Way. The knee positions for selected galaxies are derived by\napplying the same method. Star formation histories of individual galaxies are\nexplored using the slope of the cumulative metallicity distribution function.\nRadial gradients along the four directions are inspected in six galaxies where\nwe find no direction dependence of metallicity gradients along the major and\nminor axes. The compilation of all the available data shows a lack of CEMP-s\npopulation in dwarf galaxies, while there may be some CEMP-no stars at [Fe/H]\n<~ -3 even in the very small sample. The inspection of the relationship between\nEu and Ba abundances confirms an anomalously Ba-rich population in Fornax,\nwhich indicates a pre-enrichment of interstellar gas with r-process elements.\nWe do not find any evidence of anti-correlations in O-Na and Mg-Al abundances,\nwhich characterises the abundance trends in the Galactic globular clusters.\n",
"title": "Stellar Abundances for Galactic Archaeology Database IV - Compilation of Stars in Dwarf Galaxies"
}
| null | null | null | null | true | null |
16775
| null |
Default
| null | null |
null |
{
"abstract": " General relativistic effects have long been predicted to subtly influence the\nobserved large-scale structure of the universe. The current generation of\ngalaxy redshift surveys have reached a size where detection of such effects is\nbecoming feasible. In this paper, we report the first detection of the redshift\nasymmetry from the cross-correlation function of two galaxy populations which\nis consistent with relativistic effects. The dataset is taken from the Sloan\nDigital Sky Survey DR12 CMASS galaxy sample, and we detect the asymmetry at the\n$2.7\\sigma$ level by applying a shell-averaged estimator to the\ncross-correlation function. Our measurement dominates at scales around $10$\nh$^{-1}$Mpc, larger than those over which the gravitational redshift profile\nhas been recently measured in galaxy clusters, but smaller than scales for\nwhich linear perturbation theory is likely to be accurate. The detection\nsignificance varies by 0.5$\\sigma$ with the details of our measurement and\ntests for systematic effects. We have also devised two null tests to check for\nvarious survey systematics and show that both results are consistent with the\nnull hypothesis. We measure the dipole moment of the cross-correlation\nfunction, and from this the asymmetry is also detected, at the $2.8 \\sigma$\nlevel. The amplitude and scale-dependence of the clustering asymmetries are\napproximately consistent with the expectations of General Relativity and a\nbiased galaxy population, within large uncertainties. We explore theoretical\npredictions using numerical simulations in a companion paper.\n",
"title": "Relativistic distortions in the large-scale clustering of SDSS-III BOSS CMASS galaxies"
}
| null | null | null | null | true | null |
16776
| null |
Default
| null | null |
null |
{
"abstract": " The role of phase separation in the emergence of superconductivity in alkali\nmetal doped iron selenides A$_{x}$Fe$_{2-y}$Se$_{2}$ (A = K, Rb, Cs) is\nrevisited. High energy X-ray diffraction and Monte Carlo simulation were used\nto investigate the crystal structure of quenched superconducting (SC) and\nas-grown non-superconducting (NSC) K$_{x}$Fe$_{2-y}$Se$_{2}$ single crystals.\nThe coexistence of superlattice structures with the in-plane\n$\\sqrt{2}\\times\\sqrt{2}$ K-vacancy ordering and the $\\sqrt{5}\\times\\sqrt{5}$\nFe-vacancy ordering were observed in SC and NSC crystals along side the\n\\textit{I4/mmm} Fe-vacancy free phase. Moreover, in the SC crystal an\nFe-vacancy disordered phase is additionally present. It appears at the boundary\nbetween the \\textit{I4/mmm} vacancy free phase and the \\textit{I4/m} vacancy\nordered phase ($\\sqrt{5}\\times\\sqrt{5}$). The vacancy disordered phase is most\nlikely the host of superconductivity.\n",
"title": "Superconductivity at the vacancy disorder boundary in K$_x$Fe$_{2-y}$Se$_2$"
}
| null | null | null | null | true | null |
16777
| null |
Default
| null | null |
null |
{
"abstract": " Primordial black holes (PBHs) have long been suggested as a candidate for\nmaking up some or all of the dark matter in the Universe. Most of the\ntheoretically possible mass range for PBH dark matter has been ruled out with\nvarious null observations of expected signatures of their interaction with\nstandard astrophysical objects. However, current constraints are significantly\nless robust in the 20 M_sun < M_PBH < 100 M_sun mass window, which has received\nmuch attention recently, following the detection of merging black holes with\nestimated masses of ~30 M_sun by LIGO and the suggestion that these could be\nblack holes formed in the early Universe. We consider the potential of advanced\nLIGO (aLIGO) operating at design sensitivity to probe this mass range by\nlooking for peaks in the mass spectrum of detected events. To quantify the\nbackground, which is due to black holes that are formed from dying stars, we\nmodel the shape of the stellar-black-hole mass function and calibrate its\namplitude to match the O1 results. Adopting very conservative assumptions about\nthe PBH and stellar-black-hole merger rates, we show that ~5 years of aLIGO\ndata can be used to detect a contribution of >20 M_sun PBHs to dark matter down\nto f_PBH<0.5 at >99.9% confidence level. Combined with other probes that\nalready suggest tension with f_PBH=1, the obtainable independent limits from\naLIGO will thus enable a firm test of the scenario that PBHs make up all of\ndark matter.\n",
"title": "Probing Primordial-Black-Hole Dark Matter with Gravitational Waves"
}
| null | null | null | null | true | null |
16778
| null |
Default
| null | null |
null |
{
"abstract": " We consider the high-dimensional inference problem where the signal is a\nlow-rank matrix which is corrupted by an additive Gaussian noise. Given a\nprobabilistic model for the low-rank matrix, we compute the limit in the large\ndimension setting for the mutual information between the signal and the\nobservations, as well as the matrix minimum mean square error, while the rank\nof the signal remains constant. This allows to locate the information-theoretic\nthreshold for this estimation problem, i.e. the critical value of the signal\nintensity below which it is impossible to recover the low-rank matrix.\n",
"title": "Fundamental limits of low-rank matrix estimation: the non-symmetric case"
}
| null | null | null | null | true | null |
16779
| null |
Default
| null | null |
null |
{
"abstract": " Gaussian processes (GPs) offer a flexible class of priors for nonparametric\nBayesian regression, but popular GP posterior inference methods are typically\nprohibitively slow or lack desirable finite-data guarantees on quality. We\ndevelop an approach to scalable approximate GP regression with finite-data\nguarantees on the accuracy of pointwise posterior mean and variance estimates.\nOur main contribution is a novel objective for approximate inference in the\nnonparametric setting: the preconditioned Fisher (pF) divergence. We show that\nunlike the Kullback--Leibler divergence (used in variational inference), the pF\ndivergence bounds the 2-Wasserstein distance, which in turn provides tight\nbounds on the pointwise difference of the mean and variance functions. We\ndemonstrate that, for sparse GP likelihood approximations, we can minimize the\npF divergence efficiently. Our experiments show that optimizing the pF\ndivergence has the same computational requirements as variational sparse GPs\nwhile providing comparable empirical performance--in addition to our novel\nfinite-data quality guarantees.\n",
"title": "Scalable Gaussian Process Inference with Finite-data Mean and Variance Guarantees"
}
| null | null |
[
"Statistics"
] | null | true | null |
16780
| null |
Validated
| null | null |
null |
{
"abstract": " The paper explores various special functions which generalize the\ntwo-parametric Mittag-Leffler type function of two variables. Integral\nrepresentations for these functions in different domains of variation of\narguments for certain values of the parameters are obtained. The asymptotic\nexpansion formulas and asymptotic properties of such functions are also\nestablished for large values of the variables. This provides statements of\ntheorems for these formulas and their corresponding properties.\n",
"title": "Integral representations and asymptotic behaviours of Mittag-Leffler type functions of two variables"
}
| null | null | null | null | true | null |
16781
| null |
Default
| null | null |
null |
{
"abstract": " It is common to model inductive datatypes as least fixed points of functors.\nWe show that within the Cedille type theory we can relax functoriality\nconstraints and generically derive an induction principle for Mendler-style\nlambda-encoded inductive datatypes, which arise as least fixed points of\ncovariant schemes where the morphism lifting is defined only on identities.\nAdditionally, we implement a destructor for these lambda-encodings that runs in\nconstant-time. As a result, we can define lambda-encoded natural numbers with\nan induction principle and a constant-time predecessor function so that the\nnormal form of a numeral requires only linear space. The paper also includes\nseveral more advanced examples.\n",
"title": "Efficient Mendler-Style Lambda-Encodings in Cedille"
}
| null | null | null | null | true | null |
16782
| null |
Default
| null | null |
null |
{
"abstract": " Let $\\mathcal{D}_{n,m}$ be the algebra of the quantum integrals of the\ndeformed Calogero-Moser-Sutherland problem corresponding to the root system of\nthe Lie superalgebra $\\frak{gl}(n,m)$. The algebra $\\mathcal{D}_{n,m}$ acts\nnaturally on the quasi-invariant Laurent polynomials and we investigate the\ncorresponding spectral decomposition. Even for general value of the parameter\n$k$ the spectral decomposition is not simple and we prove that the image of the\nalgebra $\\mathcal{D}_{n,m}$ in the algebra of endomorphisms of the generalised\neigen-space is $k[\\varepsilon]^{\\otimes r}$ where $k[\\varepsilon]$ is the\nalgebra of the dual numbers the corresponding representation is the regular\nrepresentation of the algebra $k[\\varepsilon]^{\\otimes r}$.\n",
"title": "Super Jack-Laurent Polynomials"
}
| null | null | null | null | true | null |
16783
| null |
Default
| null | null |
null |
{
"abstract": " This study here suggests a classification of technologies based on taxonomic\ncharacteristics of interaction between technologies in complex systems, a topic\nnot yet studied in the economics of technical change. The proposed\ntaxonomy here categorizes technologies in four typologies, in a broad analogy\nwith the ecology: 1) technological parasitism is a relationship between two\ntechnologies T1 and T2 in a complex system S where one technology T1 benefits\nfrom the interaction with T2, whereas T2 has a negative side from interaction\nwith T1; 2) technological commensalism is a relationship between two\ntechnologies in S where one technology benefits from the other without\naffecting it; 3) technological mutualism is a relationship in which each\ntechnology benefits from the activity of the other within complex systems; 4)\ntechnological symbiosis is a long-term interaction between two (or more)\ntechnologies that evolve together in complex systems. This taxonomy\nsystematizes the typologies of interactive technologies within complex systems\nand predicts their evolutionary pathways that generate stepwise coevolutionary\nprocesses of complex systems of technology. This study here begins the process\nof generalizing, as far as possible, critical typologies of interactive\ntechnologies that explain the long-run evolution of technology. The theoretical\nframework developed here opens the black box of the interaction between\ntechnologies that affects, with different types of technologies, the\nevolutionary pathways of complex systems of technology over time and space.\nOverall, then, this new theoretical framework may be useful for bringing a new\nperspective to categorize the gradient of benefit to technologies from\ninteraction with other technologies that can be a ground work for development\nof more sophisticated concepts to clarify technological and economic change in\nhuman society.\n",
"title": "A New Classification of Technologies"
}
| null | null | null | null | true | null |
16784
| null |
Default
| null | null |
null |
{
"abstract": " With a triangulation of a planar polygon with $n$ sides, one can associate an\nintegrable system on the Grassmannian of 2-planes in an $n$-space. In this\npaper, we show that the potential functions of Lagrangian torus fibers of the\nintegrable systems associated with different triangulations glue together by\ncluster transformations. We also prove that the cluster transformations\ncoincide with the wall-crossing formula in Lagrangian intersection Floer\ntheory.\n",
"title": "Potential functions on Grassmannians of planes and cluster transformations"
}
| null | null | null | null | true | null |
16785
| null |
Default
| null | null |
null |
{
"abstract": " We present K-band Multi-Object Spectrograph (KMOS) observations of 18 Red\nSupergiant (RSG) stars in the Sculptor Group galaxy NGC 55. Radial velocities\nare calculated and are shown to be in good agreement with previous estimates,\nconfirming the supergiant nature of the targets and providing the first\nspectroscopically confirmed RSGs in NGC 55. Stellar parameters are estimated\nfor 14 targets using the $J$-band analysis technique, making use of\nstate-of-the-art stellar model atmospheres. The metallicities estimated confirm\nthe low-metallicity nature of NGC 55, in good agreement with previous studies.\nThis study provides an independent estimate of the metallicity gradient of NGC\n55, in excellent agreement with recent results published using hot massive\nstars. In addition, we calculate luminosities of our targets and compare their\ndistribution of effective temperatures and luminosities to other RSGs, in\ndifferent environments, estimated using the same technique.\n",
"title": "Physical properties of the first spectroscopically confirmed red supergiant stars in the Sculptor Group galaxy NGC 55"
}
| null | null | null | null | true | null |
16786
| null |
Default
| null | null |
null |
{
"abstract": " For the gas near a solid planar wall, we propose a scaling formula for the\nmean free path of a molecule as a function of the distance from the wall, under\nthe assumption of a uniform distribution of the incident directions of the\nmolecular free flight. We subsequently impose the same scaling onto the\nviscosity of the gas near the wall, and compute the Navier-Stokes solution of\nthe velocity of a shear flow parallel to the wall. This solution exhibits the\nKnudsen velocity boundary layer in agreement with the corresponding Direct\nSimulation Monte Carlo computations for argon and nitrogen. We also find that\nthe proposed mean free path and viscosity scaling sets the second derivative of\nthe velocity to infinity at the wall boundary of the flow domain, which\nsuggests that the gas flow is formally turbulent within the Knudsen boundary\nlayer near the wall.\n",
"title": "Gas near a wall: a shortened mean free path, reduced viscosity, and the manifestation of a turbulent Knudsen layer in the Navier-Stokes solution of a shear flow"
}
| null | null |
[
"Physics"
] | null | true | null |
16787
| null |
Validated
| null | null |
null |
{
"abstract": " Donoho's JCGS (in press) paper is a spirited call to action for\nstatisticians, who he points out are losing ground in the field of data science\nby refusing to accept that data science is its own domain. (Or, at least, a\ndomain that is becoming distinctly defined.) He calls on writings by John\nTukey, Bill Cleveland, and Leo Breiman, among others, to remind us that\nstatisticians have been dealing with data science for years, and encourages\nacceptance of the direction of the field while also ensuring that statistics is\ntightly integrated.\nAs faculty at baccalaureate institutions (where the growth of undergraduate\nstatistics programs has been dramatic), we are keen to ensure statistics has a\nplace in data science and data science education. In his paper, Donoho is\nprimarily focused on graduate education. At our undergraduate institutions, we\nare considering many of the same questions.\n",
"title": "Greater data science at baccalaureate institutions"
}
| null | null | null | null | true | null |
16788
| null |
Default
| null | null |
null |
{
"abstract": " The demographics of dwarf galaxy populations have long been in tension with\npredictions from the Cold Dark Matter (CDM) paradigm. If primordial density\nfluctuations were scale-free as predicted, dwarf galaxies should themselves\nhost dark matter subhaloes, the most massive of which may have undergone star\nformation resulting in dwarf galaxy groups. Ensembles of dwarf galaxies are\nobserved as satellites of more massive galaxies, and there is observational and\ntheoretical evidence to suggest that these satellites at z=0 were captured by\nthe massive host halo as a group. However, the evolution of dwarf galaxies is\nhighly susceptible to environment making these satellite groups imperfect\nprobes of CDM in the low mass regime. We have identified one of the clearest\nexamples to date of hierarchical structure formation at low masses: seven\nisolated, spectroscopically confirmed groups with only dwarf galaxies as\nmembers. Each group hosts 3-5 known members, has a baryonic mass of ~4.4 x 10^9\nto 2 x 10^10 Msun, and requires a mass-to-light ratio of <100 to be\ngravitationally bound. Such groups are predicted to be rare theoretically and\nfound to be rare observationally at the current epoch and thus provide a unique\nwindow into the possible formation mechanism of more massive, isolated\ngalaxies.\n",
"title": "Direct evidence of hierarchical assembly at low masses from isolated dwarf galaxy groups"
}
| null | null | null | null | true | null |
16789
| null |
Default
| null | null |
null |
{
"abstract": " Risk prediction is central to both clinical medicine and public health. While\nmany machine learning models have been developed to predict mortality, they are\nrarely applied in the clinical literature, where classification tasks typically\nrely on logistic regression. One reason for this is that existing machine\nlearning models often seek to optimize predictions by incorporating features\nthat are not present in the databases readily available to providers and policy\nmakers, limiting generalizability and implementation. Here we tested a number\nof machine learning classifiers for prediction of six-month mortality in a\npopulation of elderly Medicare beneficiaries, using an administrative claims\ndatabase of the kind available to the majority of health care payers and\nproviders. We show that machine learning classifiers substantially outperform\ncurrent widely-used methods of risk prediction but only when used with an\nimproved feature set incorporating insights from clinical medicine, developed\nfor this study. Our work has applications to supporting patient and provider\ndecision making at the end of life, as well as population health-oriented\nefforts to identify patients at high risk of poor outcomes.\n",
"title": "Short-term Mortality Prediction for Elderly Patients Using Medicare Claims Data"
}
| null | null | null | null | true | null |
16790
| null |
Default
| null | null |
null |
{
"abstract": " We address the problem of activity detection in continuous, untrimmed video\nstreams. This is a difficult task that requires extracting meaningful\nspatio-temporal features to capture activities, accurately localizing the start\nand end times of each activity. We introduce a new model, Region Convolutional\n3D Network (R-C3D), which encodes the video streams using a three-dimensional\nfully convolutional network, then generates candidate temporal regions\ncontaining activities, and finally classifies selected regions into specific\nactivities. Computation is saved due to the sharing of convolutional features\nbetween the proposal and the classification pipelines. The entire model is\ntrained end-to-end with jointly optimized localization and classification\nlosses. R-C3D is faster than existing methods (569 frames per second on a\nsingle Titan X Maxwell GPU) and achieves state-of-the-art results on THUMOS'14.\nWe further demonstrate that our model is a general activity detection framework\nthat does not rely on assumptions about particular dataset properties by\nevaluating our approach on ActivityNet and Charades. Our code is available at\nthis http URL.\n",
"title": "R-C3D: Region Convolutional 3D Network for Temporal Activity Detection"
}
| null | null | null | null | true | null |
16791
| null |
Default
| null | null |
null |
{
"abstract": " Regularization is one of the crucial ingredients of deep learning, yet the\nterm regularization has various definitions, and regularization methods are\noften studied separately from each other. In our work we present a systematic,\nunifying taxonomy to categorize existing methods. We distinguish methods that\naffect data, network architectures, error terms, regularization terms, and\noptimization procedures. We do not provide all details about the listed\nmethods; instead, we present an overview of how the methods can be sorted into\nmeaningful categories and sub-categories. This helps revealing links and\nfundamental similarities between them. Finally, we include practical\nrecommendations both for users and for developers of new regularization\nmethods.\n",
"title": "Regularization for Deep Learning: A Taxonomy"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
16792
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we examine the statistical soundness of comparative\nassessments within the field of recommender systems in terms of reliability and\nhuman uncertainty. From a controlled experiment, we get the insight that users\nprovide different ratings on same items when repeatedly asked. This volatility\nof user ratings justifies the assumption of using probability densities instead\nof single rating scores. As a consequence, the well-known accuracy metrics\n(e.g. MAE, MSE, RMSE) yield a density themselves that emerges from convolution\nof all rating densities. When two different systems produce different RMSE\ndistributions with significant intersection, then there exists a probability of\nerror for each possible ranking. As an application, we examine possible ranking\nerrors of the Netflix Prize. We are able to show that all top rankings are more\nor less subject to high probabilities of error and that some rankings may be\ndeemed to be caused by mere chance rather than system quality.\n",
"title": "Re-Evaluating the Netflix Prize - Human Uncertainty and its Impact on Reliability"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16793
| null |
Validated
| null | null |
null |
{
"abstract": " N. Hindman, I. Leader and D. Strauss proved that it is consistent that there\nis a finite colouring of $\\mathbb R$ so that no infinite sumset\n$X+X=\\{x+y:x,y\\in X\\}$ is monochromatic. Our aim in this paper is to prove a\nconsistency result in the opposite direction: we show that, under certain\nset-theoretic assumptions, for any $c:\\mathbb R\\to r$ with $r$ finite there is\nan infinite $X\\subseteq \\mathbb R$ so that $c$ is constant on $X+X$.\n",
"title": "Infinite monochromatic sumsets for colourings of the reals"
}
| null | null | null | null | true | null |
16794
| null |
Default
| null | null |
null |
{
"abstract": " Even when confronted with the same data, agents often disagree on a model of\nthe real-world. Here, we address the question of how interacting heterogenous\nagents, who disagree on what model the real-world follows, optimize their\ntrading actions. The market has latent factors that drive prices, and agents\naccount for the permanent impact they have on prices. This leads to a large\nstochastic game, where each agents' performance criteria is computed under a\ndifferent probability measure. We analyse the mean-field game (MFG) limit of\nthe stochastic game and show that the Nash equilibria is given by the solution\nto a non-standard vector-valued forward-backward stochastic differential\nequation. Under some mild assumptions, we construct the solution in terms of\nexpectations of the filtered states. We prove the MFG strategy forms an\n\\epsilon-Nash equilibrium for the finite player game. Lastly, we present a\nleast-squares Monte Carlo based algorithm for computing the optimal control and\nillustrate the results through simulation in market where agents disagree on\nthe model.\n",
"title": "Mean-Field Games with Differing Beliefs for Algorithmic Trading"
}
| null | null | null | null | true | null |
16795
| null |
Default
| null | null |
null |
{
"abstract": " In addition to hardware wall-time restrictions commonly seen in\nhigh-performance computing systems, it is likely that future systems will also\nbe constrained by energy budgets. In the present work, finite difference\nalgorithms of varying computational and memory intensity are evaluated with\nrespect to both energy efficiency and runtime on an Intel Ivy Bridge CPU node,\nan Intel Xeon Phi Knights Landing processor, and an NVIDIA Tesla K40c GPU. The\nconventional way of storing the discretised derivatives to global arrays for\nsolution advancement is found to be inefficient in terms of energy consumption\nand runtime. In contrast, a class of algorithms in which the discretised\nderivatives are evaluated on-the-fly or stored as thread-/process-local\nvariables (yielding high compute intensity) is optimal both with respect to\nenergy consumption and runtime. On all three hardware architectures considered,\na speed-up of ~2 and an energy saving of ~2 are observed for the high compute\nintensive algorithms compared to the memory intensive algorithm. The energy\nconsumption is found to be proportional to runtime, irrespective of the power\nconsumed and the GPU has an energy saving of ~5 compared to the same algorithm\non a CPU node.\n",
"title": "Energy efficiency of finite difference algorithms on multicore CPUs, GPUs, and Intel Xeon Phi processors"
}
| null | null | null | null | true | null |
16796
| null |
Default
| null | null |
null |
{
"abstract": " We recently showed that several Local Group (LG) galaxies have much higher\nradial velocities (RVs) than predicted by a 3D dynamical model of the standard\ncosmological paradigm. Here, we show that 6 of these 7 galaxies define a thin\nplane with root mean square thickness of only 101 kpc despite a widest extent\nof nearly 3 Mpc, much larger than the conventional virial radius of the Milky\nWay (MW) or M31. This plane passes within ${\\sim 70}$ kpc of the MW-M31\nbarycentre and is oriented so the MW-M31 line is inclined by $16^\\circ$ to it.\nWe develop a toy model to constrain the scenario whereby a past MW-M31 flyby\nin Modified Newtonian Dynamics (MOND) forms tidal dwarf galaxies that settle\ninto the recently discovered planes of satellites around the MW and M31. The\nscenario is viable only for a particular MW-M31 orbital plane. This roughly\ncoincides with the plane of LG dwarfs with anomalously high RVs.\nUsing a restricted $N$-body simulation of the LG in MOND, we show how the\nonce fast-moving MW and M31 gravitationally slingshot test particles outwards\nat high speeds. The most distant such particles preferentially lie within the\nMW-M31 orbital plane, probably because the particles ending up with the highest\nRVs are those flung out almost parallel to the motion of the perturber. This\nsuggests a dynamical reason for our finding of a similar trend in the real LG,\nsomething not easily explained as a chance alignment of galaxies with an\nisotropic or mildly flattened distribution (probability $= {0.0015}$).\n",
"title": "A Plane of High Velocity Galaxies Across the Local Group"
}
| null | null |
[
"Physics"
] | null | true | null |
16797
| null |
Validated
| null | null |
null |
{
"abstract": " We present a method for scalable and fully 3D magnetic field simultaneous\nlocalisation and mapping (SLAM) using local anomalies in the magnetic field as\na source of position information. These anomalies are due to the presence of\nferromagnetic material in the structure of buildings and in objects such as\nfurniture. We represent the magnetic field map using a Gaussian process model\nand take well-known physical properties of the magnetic field into account. We\nbuild local maps using three-dimensional hexagonal block tiling. To make our\napproach computationally tractable we use reduced-rank Gaussian process\nregression in combination with a Rao-Blackwellised particle filter. We show\nthat it is possible to obtain accurate position and orientation estimates using\nmeasurements from a smartphone, and that our approach provides a scalable\nmagnetic field SLAM algorithm in terms of both computational complexity and map\nstorage.\n",
"title": "Scalable Magnetic Field SLAM in 3D Using Gaussian Process Maps"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
16798
| null |
Validated
| null | null |
null |
{
"abstract": " We consider a network of binary-valued sensors with a fusion center. The\nfusion center has to perform K-means clustering on the binary data transmitted\nby the sensors. In order to reduce the amount of data transmitted within the\nnetwork, the sensors compress their data with a source coding scheme based on\nbinary sparse matrices. We propose to apply the K-means algorithm directly over\nthe compressed data without reconstructing the original sensors measurements,\nin order to avoid potentially complex decoding operations. We provide\napproximated expressions of the error probabilities of the K-means steps in the\ncompressed domain. From these expressions, we show that applying the K-means\nalgorithm in the compressed domain enables to recover the clusters of the\noriginal domain. Monte Carlo simulations illustrate the accuracy of the\nobtained approximated error probabilities, and show that the coding rate needed\nto perform K-means clustering in the compressed domain is lower than the rate\nneeded to reconstruct all the measurements.\n",
"title": "K-means Algorithm over Compressed Binary Data"
}
| null | null | null | null | true | null |
16799
| null |
Default
| null | null |
null |
{
"abstract": " Large-scale Gaussian process inference has long faced practical challenges\ndue to time and space complexity that is superlinear in dataset size. While\nsparse variational Gaussian process models are capable of learning from\nlarge-scale data, standard strategies for sparsifying the model can prevent the\napproximation of complex functions. In this work, we propose a novel\nvariational Gaussian process model that decouples the representation of mean\nand covariance functions in reproducing kernel Hilbert space. We show that this\nnew parametrization generalizes previous models. Furthermore, it yields a\nvariational inference problem that can be solved by stochastic gradient ascent\nwith time and space complexity that is only linear in the number of mean\nfunction parameters, regardless of the choice of kernels, likelihoods, and\ninducing points. This strategy makes the adoption of large-scale expressive\nGaussian process models possible. We run several experiments on regression\ntasks and show that this decoupled approach greatly outperforms previous sparse\nvariational Gaussian process inference procedures.\n",
"title": "Variational Inference for Gaussian Process Models with Linear Complexity"
}
| null | null | null | null | true | null |
16800
| null |
Default
| null | null |