text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, lengths 1–5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null)
---|---|---|---|---|---|---|---|---|---|---|---|---
null |
{
"abstract": " Direct frequency-comb spectroscopy is used to probe the absolute frequencies\nof $6S_{1/2}$-$8S_{1/2}$ two-photon transitions of atomic cesium in hot vapor\nenvironment. By utilizing the coherent control method of temporally splitting\nthe laser spectrum above and below the two-photon resonance frequency,\nDoppler-free absorption is built in two spatially distinct locations and imaged\nfor high-precision spectroscopy. Theoretical analysis finds that these\ntransition lines are measured with uncertainty below $5\\times10^{-10}$, mainly\ncontributed from laser-induced AC Stark shift.\n",
"title": "Direct frequency-comb spectroscopy of $6S_{1/2}$-$8S_{1/2}$ transitions of atomic cesium"
}
| null | null | null | null | true | null | 17701 | null | Default | null | null |
null |
{
"abstract": " A variety of energy resources has been identified as being flexible in their\nelectric energy consumption or generation. This energetic flexibility can be\nused for various purposes such as minimizing energy procurement costs or\nproviding ancillary services to power grids. To fully leverage the flexibility\navailable from distributed small-scale resources, their flexibility must be\nquantified and aggregated.\nThis paper introduces a generic and scalable approach for flexible energy\nsystems to quantitatively describe and price their flexibility based on\nzonotopic sets. The description proposed allows aggregators to efficiently pool\nthe flexibility of large numbers of systems and to make control and market\ndecisions on the aggregate level. In addition, an algorithm is presented that\ndistributes aggregate-level control decisions among the individual systems of\nthe pool in an economically fair and computationally efficient way. Finally, it\nis shown how the zonotopic description of flexibility enables an efficient\ncomputation of aggregate regulation power bid-curves.\n",
"title": "Aggregation and Disaggregation of Energetic Flexibility from Distributed Energy Resources"
}
| null | null | null | null | true | null | 17702 | null | Default | null | null |
null |
{
"abstract": " Inferring the walls configuration of an indoor environment could help a robot\n\"understand\" the environment better. This allows the robot to execute a task\nthat involves inter-room navigation, such as picking up an object in the kitchen.\nIn this paper, we present a method for inferring the walls configuration from a\nmoving RGB-D sensor. Our goal is to combine a simple wall configuration model\nand a fast wall detection method in order to get a system that works online, is\nreal-time, and does not need a Manhattan World assumption. We tested our\npreliminary work, i.e. wall detection and measurement from a moving RGB-D sensor,\non the MIT Stata Center Dataset. The performance of our method is reported in\nterms of accuracy and speed of execution.\n",
"title": "Mapping Walls of Indoor Environment using RGB-D Sensor"
}
| null | null | ["Computer Science"] | null | true | null | 17703 | null | Validated | null | null |
null |
{
"abstract": " Experience replay is a key technique behind many recent advances in deep\nreinforcement learning. Allowing the agent to learn from earlier memories can\nspeed up learning and break undesirable temporal correlations. Despite its\nwide-spread application, very little is understood about the properties of\nexperience replay. How does the amount of memory kept affect learning dynamics?\nDoes it help to prioritize certain experiences? In this paper, we address these\nquestions by formulating a dynamical systems ODE model of Q-learning with\nexperience replay. We derive analytic solutions of the ODE for a simple\nsetting. We show that even in this very simple setting, the amount of memory\nkept can substantially affect the agent's performance. Too much or too little\nmemory both slow down learning. Moreover, we characterize regimes where\nprioritized replay harms the agent's learning. We show that our analytic\nsolutions have excellent agreement with experiments. Finally, we propose a\nsimple algorithm for adaptively changing the memory buffer size which achieves\nconsistently good empirical performance.\n",
"title": "The Effects of Memory Replay in Reinforcement Learning"
}
| null | null | null | null | true | null | 17704 | null | Default | null | null |
null |
{
"abstract": " We investigate the problem of dynamic portfolio optimization in a\ncontinuous-time, finite-horizon setting for a portfolio of two stocks and one\nrisk-free asset. The stocks follow the Cointelation model. The proposed\noptimization methods are twofold. In what we call a Stochastic Differential\nEquation approach, we compute the optimal weights using a mean-variance criterion\nand power utility maximization. We show that dynamically switching between\nthese two optimal strategies by introducing a triggering function can further\nimprove the portfolio returns. We contrast this with the machine learning\nclustering methodology inspired by the band-wise Gaussian mixture model. The\nfirst benefit of the machine learning over the Stochastic Differential Equation\napproach is that we were able to achieve the same results through a simpler\nchannel. The second advantage is flexibility to regime change.\n",
"title": "Portfolio Optimization for Cointelated Pairs: SDEs vs. Machine Learning"
}
| null | null | null | null | true | null | 17705 | null | Default | null | null |
null |
{
"abstract": " We report a sample of 463 high-mass starless clump (HMSC) candidates within\n$-60°<l<60°$ and $-1°<b<1°$. This sample has been singled out from\n10861 ATLASGAL clumps. All of these sources are not associated with any known\nstar-forming activities collected in SIMBAD and young stellar objects\nidentified using color-based criteria. We also make sure that the HMSC\ncandidates have neither point sources at 24 and 70 \\micron~nor strong extended\nemission at 24 $\\mu$m. Most of the identified HMSCs are infrared ($\\le24$\n$\\mu$m) dark and some are even dark at 70 $\\mu$m. Their distribution shows\ncrowding in Galactic spiral arms and toward the Galactic center and some\nwell-known star-forming complexes. Many HMSCs are associated with large-scale\nfilaments. Some basic parameters were attained from column density and dust\ntemperature maps constructed via fitting far-infrared and submillimeter\ncontinuum data to modified blackbodies. The HMSC candidates have sizes, masses,\nand densities similar to clumps associated with Class II methanol masers and\nHII regions, suggesting they will evolve into star-forming clumps. More than\n90% of the HMSC candidates have densities above some proposed thresholds for\nforming high-mass stars. With dust temperatures and luminosity-to-mass ratios\nsignificantly lower than that for star-forming sources, the HMSC candidates are\nexternally heated and genuinely at very early stages of high-mass star\nformation. Twenty sources with equivalent radius $r_\\mathrm{eq}<0.15$ pc and\nmass surface density $\\Sigma>0.08$ g cm$^{-2}$ could be possible high-mass\nstarless cores. Further investigations toward these HMSCs would undoubtedly\nshed light on comprehensively understanding the birth of high-mass stars.\n",
"title": "High-mass Starless Clumps in the inner Galactic Plane: the Sample and Dust Properties"
}
| null | null | null | null | true | null | 17706 | null | Default | null | null |
null |
{
"abstract": " Artificial neural networks that learn to perform Principal Component Analysis\n(PCA) and related tasks using strictly local learning rules have been\npreviously derived based on the principle of similarity matching: similar pairs\nof inputs should map to similar pairs of outputs. However, the operation of\nthese networks (and of similar networks) requires a fixed-point iteration to\ndetermine the output corresponding to a given input, which means that dynamics\nmust operate on a faster time scale than the variation of the input. Further,\nduring these fast dynamics such networks typically \"disable\" learning, updating\nsynaptic weights only once the fixed-point iteration has been resolved. Here,\nwe derive a network for PCA-based dimensionality reduction that avoids this\nfast fixed-point iteration. The key novelty of our approach is a modification\nof the similarity matching objective to encourage near-diagonality of a\nsynaptic weight matrix. We then approximately invert this matrix using a Taylor\nseries approximation, replacing the previous fast iterations. In the offline\nsetting, our algorithm corresponds to a dynamical system, the stability of\nwhich we rigorously analyze. In the online setting (i.e., with stochastic\ngradients), we map our algorithm to a familiar neural network architecture and\ngive numerical results showing that our method converges at a competitive rate.\nThe computational complexity per iteration of our online algorithm is linear in\nthe total degrees of freedom, which is in some sense optimal.\n",
"title": "Biologically Plausible Online Principal Component Analysis Without Recurrent Neural Dynamics"
}
| null | null | null | null | true | null | 17707 | null | Default | null | null |
null |
{
"abstract": " The problem of object localization and recognition on autonomous mobile\nrobots is still an active topic. In this context, we tackle the problem of\nlearning a model of visual saliency directly on a robot. This model, learned\nand improved on-the-fly during the robot's exploration provides an efficient\ntool for localizing relevant objects within their environment. The proposed\napproach includes two intertwined components. On the one hand, we describe a\nmethod for learning and incrementally updating a model of visual saliency from\na depth-based object detector. This model of saliency can also be exploited to\nproduce bounding box proposals around objects of interest. On the other hand,\nwe investigate an autonomous exploration technique to efficiently learn such a\nsaliency model. The proposed exploration, called Reinforcement\nLearning-Intelligent Adaptive Curiosity (RL-IAC) is able to drive the robot's\nexploration so that samples selected by the robot are likely to improve the\ncurrent model of saliency. We then demonstrate that such a saliency model\nlearned directly on a robot outperforms several state-of-the-art saliency\ntechniques, and that RL-IAC can drastically decrease the required time for\nlearning a reliable saliency model.\n",
"title": "Exploring to learn visual saliency: The RL-IAC approach"
}
| null | null | null | null | true | null | 17708 | null | Default | null | null |
null |
{
"abstract": " We propose a fast, simple and robust algorithm for computing shortest paths\nand distances on Riemannian manifolds learned from data. This amounts to\nsolving a system of ordinary differential equations (ODEs) subject to boundary\nconditions. Here standard solvers perform poorly because they require\nwell-behaved Jacobians of the ODE, and usually, manifolds learned from data\nimply unstable and ill-conditioned Jacobians. Instead, we propose a fixed-point\niteration scheme for solving the ODE that avoids Jacobians. This enhances the\nstability of the solver, while reduces the computational cost. In experiments\ninvolving both Riemannian metric learning and deep generative models we\ndemonstrate significant improvements in speed and stability over both\ngeneral-purpose state-of-the-art solvers as well as over specialized solvers.\n",
"title": "Fast and Robust Shortest Paths on Manifolds Learned from Data"
}
| null | null | null | null | true | null | 17709 | null | Default | null | null |
null |
{
"abstract": " Recently, the deep learning community has given growing attention to neural\narchitectures engineered to learn problems in relational domains. Convolutional\nNeural Networks employ parameter sharing over the image domain, tying the\nweights of neural connections on a grid topology and thus enforcing the\nlearning of a number of convolutional kernels. By instantiating trainable\nneural modules and assembling them in varied configurations (apart from grids),\none can enforce parameter sharing over graphs, yielding models which can\neffectively be fed with relational data. In this context, vertices in a graph\ncan be projected into a hyperdimensional real space and iteratively refined\nover many message-passing iterations in an end-to-end differentiable\narchitecture. Architectures of this family have been referred to with several\ndefinitions in the literature, such as Graph Neural Networks, Message-passing\nNeural Networks, Relational Networks and Graph Networks. In this paper, we\nrevisit the original Graph Neural Network model and show that it generalises\nmany of the recent models, which in turn benefit from the insight of thinking\nabout vertex \\textbf{types}. To illustrate the generality of the original\nmodel, we present a Graph Neural Network formalisation, which partitions the\nvertices of a graph into a number of types. Each type represents an entity in\nthe ontology of the problem one wants to learn. This allows - for instance -\none to assign embeddings to edges, hyperedges, and any number of global\nattributes of the graph. As a companion to this paper we provide a\nPython/Tensorflow library to facilitate the development of such architectures,\nwith which we instantiate the formalisation to reproduce a number of models\nproposed in the current literature.\n",
"title": "Typed Graph Networks"
}
| null | null | null | null | true | null | 17710 | null | Default | null | null |
null |
{
"abstract": " The S=1/2 Heisenberg spin chain compound SrCuO2 doped with different amounts\nof nickel (Ni), palladium (Pd), zinc (Zn) and cobalt (Co) has been studied by\nmeans of Cu nuclear magnetic resonance (NMR). Replacing only a few of the S=1/2\nCu ions with Ni, Pd, Zn or Co has a major impact on the magnetic properties of\nthe spin chain system. In the case of Ni, Pd and Zn an unusual line broadening\nin the low temperature NMR spectra reveals the existence of an impurity-induced\nlocal alternating magnetization (LAM), while exponentially decaying\nspin-lattice relaxation rates $T_1^{-1}$ towards low temperatures indicate the\nopening of spin gaps. A distribution of gap magnitudes is proven by a stretched\nspin-lattice relaxation and a variation of $T_1^{-1}$ within the broad\nresonance lines. These observations depend strongly on the impurity\nconcentration and therefore can be understood using the model of finite\nsegments of the spin 1/2 antiferromagnetic Heisenberg chain, i.e. pure chain\nsegmentation due to S = 0 impurities. This is surprising for Ni as it was\npreviously assumed to be a magnetic impurity with S = 1 which is screened by\nthe neighboring copper spins. In order to confirm the S = 0 state of the Ni, we\nperformed x-ray absorption spectroscopy (XAS) and compared the measurements to\nsimulated XAS spectra based on multiplet ligand-field theory. Furthermore, Zn\ndoping leads to much smaller effects on both the NMR spectra and the\nspin-lattice relaxation rates, indicating that Zn avoids occupying Cu sites.\nFor magnetic Co impurities, $T_1^{-1}$ does not obey the gap like decrease, and\nthe low-temperature spectra get very broad. This could be related to the\nincrease of the Neel temperature which was observed by recent muSR and\nsusceptibility measurements, and is most likely an effect of the impurity spin\n$S\\neq0$.\n",
"title": "The effect of different in-chain impurities on the magnetic properties of the spin chain compound SrCuO$_2$ probed by NMR"
}
| null | null | ["Physics"] | null | true | null | 17711 | null | Validated | null | null |
null |
{
"abstract": " In this joint introduction to an Asterisque volume, we give a short\ndiscussion of the historical developments in the study of nonlinear covering\ngroups, touching on their structure theory, representation theory and the\ntheory of automorphic forms. This serves as a historical motivation and sets\nthe scene for the papers in the volume. Our discussion is necessarily\nsubjective and will undoubtedly leave out the contributions of many authors, to\nwhom we apologize in earnest.\n",
"title": "L-groups and the Langlands program for covering groups: a historical introduction"
}
| null | null | null | null | true | null | 17712 | null | Default | null | null |
null |
{
"abstract": " Many real-world systems are characterized by stochastic dynamical rules where\na complex network of interactions among individual elements probabilistically\ndetermines their state. Even with full knowledge of the network structure and\nof the stochastic rules, the ability to predict system configurations is\ngenerally characterized by a large uncertainty. Selecting a fraction of the\nnodes and observing their state may help to reduce the uncertainty about the\nunobserved nodes. However, choosing these points of observation in an optimal\nway is a highly nontrivial task, depending on the nature of the stochastic\nprocess and on the structure of the underlying interaction pattern. In this\npaper, we introduce a computationally efficient algorithm to determine\nquasioptimal solutions to the problem. The method leverages network sparsity to\nreduce computational complexity from exponential to almost quadratic, thus\nallowing the straightforward application of the method to mid-to-large-size\nsystems. Although the method is exact only for equilibrium stochastic processes\ndefined on trees, it turns out to be effective also for out-of-equilibrium\nprocesses on sparse loopy networks.\n",
"title": "Uncertainty Reduction for Stochastic Processes on Complex Networks"
}
| null | null | null | null | true | null | 17713 | null | Default | null | null |
null |
{
"abstract": " The field of biomedical imaging has undergone rapid growth in recent years,\nmostly due to the implementation of ad-hoc designed experimental setups,\ntheoretical support methods and numerical reconstructions. Especially for\nbiological samples, the high number of scattering events occurring during the\nphoton propagation process limits the penetration depth and the possibility to\nperform direct imaging in thicker and non-transparent samples. In this\nthesis, we will examine theoretically and experimentally the scattering process\nfrom two opposite points of view, focusing also on the continuous stimulus\noffered by the will to tackle some specific challenges in the emerging optical\nimaging science. Firstly, we will discuss the light propagation in diffusive\nbiological tissues considering the particular case of the presence of optically\ntransparent regions enclosed in a highly scattering environment. The correct\ninclusion of this information can ultimately lead to higher resolution\nreconstruction, especially in neuroimaging. On the other hand, we will examine\nthe extreme case of the three-dimensional imaging of a totally hidden sample,\nin which the phase has been scrambled by a random scattering layer. By making\nuse of appropriate numerical methods, we will prove how it is possible to\nperform such a hidden reconstruction in a very efficient way, opening the path\ntoward the unexplored field of three-dimensional hidden imaging. Finally, we\nwill present how the properties noticed while addressing these problems\nled us to the development of a novel alignment-free three-dimensional\ntomographic technique that we refer to as Phase-Retrieved Tomography.\nUltimately, we used this technique for the study of the fluorescence\ndistribution in a three-dimensional spherical tumor model, the cancer cell\nspheroid, one of the most important biological models for the study of such\ndisease.\n",
"title": "Light propagation in Extreme Conditions - The role of optically clear tissues and scattering layers in optical biomedical imaging"
}
| null | null | null | null | true | null | 17714 | null | Default | null | null |
null |
{
"abstract": " A new initiative from the International Swaps and Derivatives Association\n(ISDA) aims to establish a \"Common Domain Model\" (ISDA CDM): a new standard for\ndata and process representation across the full range of derivatives\ninstruments. Design of the ISDA CDM is at an early stage and the draft\ndefinition contains considerable complexity. This paper contributes by offering\ninsight, analysis and discussion relating to key topics in the design space\nsuch as data lineage, timestamps, consistency, operations, events, state and\nstate transitions.\n",
"title": "Design discussion on the ISDA Common Domain Model"
}
| null | null | null | null | true | null | 17715 | null | Default | null | null |
null |
{
"abstract": " Hierarchical models are utilized in a wide variety of problems which are\ncharacterized by task hierarchies, where predictions on smaller subtasks are\nuseful for trying to predict a final task. Typically, neural networks are first\ntrained for the subtasks, and the predictions of these networks are\nsubsequently used as additional features when training a model and doing\ninference for a final task. In this work, we focus on improving learning for\nsuch hierarchical models and demonstrate our method on the task of speaker\ntrait prediction. Speaker trait prediction aims to computationally identify\nwhich personality traits a speaker might be perceived to have, and has been of\ngreat interest to both the Artificial Intelligence and Social Science\ncommunities. Persuasiveness prediction in particular has been of interest, as\npersuasive speakers have a large amount of influence on our thoughts, opinions\nand beliefs. In this work, we examine how leveraging the relationship between\nrelated speaker traits in a hierarchical structure can help improve our ability\nto predict how persuasive a speaker is. We present a novel algorithm that\nallows us to backpropagate through this hierarchy. This hierarchical model\nachieves a 25% relative error reduction in classification accuracy over current\nstate-of-the art methods on the publicly available POM dataset.\n",
"title": "Preserving Intermediate Objectives: One Simple Trick to Improve Learning for Hierarchical Models"
}
| null | null | ["Computer Science"] | null | true | null | 17716 | null | Validated | null | null |
null |
{
"abstract": " Atomistic simulations were carried out to analyze the interaction between $<\na>$ basal dislocations and precipitates in Mg-Al alloys and the associated\nstrengthening mechanisms.\n",
"title": "Atomistic simulations of dislocation/precipitation interactions in Mg-Al alloys and implications for precipitation hardening"
}
| null | null | null | null | true | null | 17717 | null | Default | null | null |
null |
{
"abstract": " Hyperspectral/multispectral imaging (HSI/MSI) contains rich information for\nclinical applications, such as 1) narrow band imaging for vascular\nvisualisation; 2) oxygen saturation for intraoperative perfusion monitoring and\nclinical decision making [1]; 3) tissue classification and identification of\npathology [2]. The current systems which provide pixel-level HSI/MSI signal can\nbe generally divided into two types: spatial scanning and spectral scanning.\nHowever, the trade-off between spatial/spectral resolution, the acquisition\ntime, and the hardware complexity hampers implementation in real-world\napplications, especially intra-operatively. Acquiring high resolution images in\nreal-time is important for HSI/MSI in intra-operative imaging, to alleviate the\nside effect caused by breathing, heartbeat, and other sources of motion.\nTherefore, we developed an algorithm to recover a pixel-level MSI stack using\nonly the captured snapshot RGB images from a normal camera. We refer to this\ntechnique as \"super-spectral-resolution\". The proposed method enables recovery\nof pixel-level-dense MSI signals with 24 spectral bands at ~11 frames per\nsecond (FPS) on a GPU. Multispectral data captured from porcine bowel and\nsheep/rabbit uteri in vivo has been used for training, and the algorithm has\nbeen validated using unseen in vivo animal experiments.\n",
"title": "Recovering Dense Tissue Multispectral Signal from in vivo RGB Images"
}
| null | null | null | null | true | null | 17718 | null | Default | null | null |
null |
{
"abstract": " Among the large number of promising two-dimensional (2D) atomic layer\ncrystals, true metallic layers are rare. Through combined theoretical and\nexperimental approaches, we report on the stability and successful exfoliation\nof atomically thin gallenene sheets, having two distinct atomic arrangements\nalong crystallographic twin directions of the parent alpha-gallium. Utilizing\nthe weak interface between solid and molten phases of gallium, a solid-melt\ninterface exfoliation technique is developed to extract these layers. Phonon\ndispersion calculations show that gallenene can be stabilized with bulk gallium\nlattice parameters. The electronic band structure of gallenene shows a\ncombination of partially filled Dirac cone and the non-linear dispersive band\nnear Fermi level suggesting that gallenene should behave as a metallic layer.\nFurthermore it is observed that strong interaction of gallenene with other 2D\nsemiconductors induces semiconducting to metallic phase transitions in the\nlatter paving the way for using gallenene as interesting metallic contacts in\n2D devices.\n",
"title": "Atomically thin gallium layers from solid-melt exfoliation"
}
| null | null | null | null | true | null | 17719 | null | Default | null | null |
null |
{
"abstract": " Starting from a Langevin formulation of a thermally perturbed nonlinear\nelastic model of the ferroelectric smectic-C$^*$ (SmC${*}$) liquid crystals in\nthe presence of an electric field, this article characterizes the hitherto\nunexplored dynamical phase transition from a thermo-electrically forced\nferroelectric SmC${}^{*}$ phase to a chiral nematic liquid crystalline phase\nand vice versa. The theoretical analysis is based on a combination of dynamic\nrenormalization (DRG) and numerical simulation of the emergent model. While the\nDRG architecture predicts a generic transition to the Kardar-Parisi-Zhang (KPZ)\nuniversality class at dynamic equilibrium, in agreement with recent\nexperiments, the numerical simulations of the model show simultaneous existence\nof two phases, one a \"subdiffusive\" (SD) phase characterized by a dynamical\nexponent value of 1, and the other a KPZ phase, characterized by a dynamical\nexponent value of 1.5. The SD phase flows over to the KPZ phase with increased\nexternal forcing, offering a new universality paradigm, hitherto unexplored in\nthe context of ferroelectric liquid crystals.\n",
"title": "Novel Universality Classes in Ferroelectric Liquid Crystals"
}
| null | null | null | null | true | null | 17720 | null | Default | null | null |
null |
{
"abstract": " We present a feature functional theory - binding predictor (FFT-BP) for the\nprotein-ligand binding affinity prediction. The underpinning assumptions of\nFFT-BP are as follows: i) representability: there exists a microscopic feature\nvector that can uniquely characterize and distinguish one protein-ligand\ncomplex from another; ii) feature-function relationship: the macroscopic\nfeatures, including binding free energy, of a complex is a functional of\nmicroscopic feature vectors; and iii) similarity: molecules with similar\nmicroscopic features have similar macroscopic features, such as binding\naffinity. Physical models, such as implicit solvent models and quantum theory,\nare utilized to extract microscopic features, while machine learning algorithms\nare employed to rank the similarity among protein-ligand complexes. A large\nvariety of numerical validations and tests confirms the accuracy and robustness\nof the proposed FFT-BP model. The root mean square errors (RMSEs) of FFT-BP\nblind predictions of a benchmark set of 100 complexes, the PDBBind v2007 core\nset of 195 complexes and the PDBBind v2015 core set of 195 complexes are 1.99,\n2.02 and 1.92 kcal/mol, respectively. Their corresponding Pearson correlation\ncoefficients are 0.75, 0.80, and 0.78, respectively.\n",
"title": "Feature functional theory - binding predictor (FFT-BP) for the blind prediction of binding free energies"
}
| null | null | null | null | true | null | 17721 | null | Default | null | null |
null |
{
"abstract": " One of the serious issues in communication between people is hiding\ninformation from others, and the best way to do this is to deceive them. Since\nnowadays face images are mostly used in three-dimensional format, in this paper\nwe apply steganography to 3D face images, making their detection by curious people\nimpossible. As only the texture is important for detecting a face, we\nseparate the texture from the shape matrices. To eliminate half of the extra\ninformation, steganography is done only for the face texture, and for\nreconstructing the 3D face, we can use any other shape. Moreover, we will show\nhow, by using two textures, two 3D faces can be combined. For a complete\ndescription of the process, first, 2D faces are used as an input for building\n3D faces, and then 3D textures are hidden within other images.\n",
"title": "Combining and Steganography of 3D Face Textures"
}
| null | null | ["Computer Science"] | null | true | null | 17722 | null | Validated | null | null |
null |
{
"abstract": " Weighted automata (WA) are an important formalism to describe quantitative\nproperties. Obtaining equivalent deterministic machines is a longstanding\nresearch problem. In this paper we consider WA with a set semantics, meaning\nthat the semantics is given by the set of weights of accepting runs. We focus\non multi-sequential WA that are defined as finite unions of sequential WA. The\nproblem we address is to minimize the size of this union. We call this minimum\nthe degree of sequentiality of (the relation realized by) the WA. For a given\npositive integer k, we provide multiple characterizations of relations realized\nby a union of k sequential WA over an infinitary finitely generated group: a\nLipschitz-like machine independent property, a pattern on the automaton (a new\ntwinning property) and a subclass of cost register automata. When possible, we\neffectively translate a WA into an equivalent union of k sequential WA. We also\nprovide a decision procedure for our twinning property for commutative\ncomputable groups thus allowing to compute the degree of sequentiality. Last,\nwe show that these results also hold for word transducers and that the\nassociated decision problem is Pspace-complete.\n",
"title": "Degree of sequentiality of weighted automata"
}
| null | null | ["Computer Science"] | null | true | null | 17723 | null | Validated | null | null |
null |
{
"abstract": " Let $G$ be a finite group and let $p_1,\\dots,p_n$ be distinct primes. If $G$\ncontains an element of order $p_1\\cdots p_n,$ then there is an element in $G$\nwhich is not contained in the Frattini subgroup of $G$ and whose order is\ndivisible by $p_1\\cdots p_n.$\n",
"title": "On the orders of the non-Frattini elements of a finite group"
}
| null | null | null | null | true | null | 17724 | null | Default | null | null |
null |
{
"abstract": " Generalizing Kobayashi's example for the Noether inequality in dimension three,\nwe provide examples of n-folds of general type with small volumes.\n",
"title": "Varieties of general type with small volumes"
}
| null | null | null | null | true | null | 17725 | null | Default | null | null |
null |
{
"abstract": " We study the continuity of space translations on non-parametric exponential\nfamilies based on the exponential Orlicz space with Gaussian reference density.\n",
"title": "Translations in the exponential Orlicz space with Gaussian weight"
}
| null | null | null | null | true | null |
17726
| null |
Default
| null | null |
null |
{
"abstract": " In this study, we propose shrinkage methods based on {\\it generalized ridge\nregression} (GRR) estimation which is suitable for both multicollinearity and\nhigh-dimensional problems with a small number of samples (large $p$, small $n$).\nAlso, theoretical properties of the proposed estimators are obtained for the\nlow- and high-dimensional cases. Furthermore, the performance of the listed\nestimators is demonstrated by both simulation studies and real-data analysis,\nand we compare their performance with existing penalty methods. We show that the\nproposed methods compare well to competing regularization techniques.\n",
"title": "Shrinkage Estimation Strategies in Generalized Ridge Regression Models Under Low/High-Dimension Regime"
}
| null | null | null | null | true | null |
17727
| null |
Default
| null | null |
null |
{
"abstract": " We formulate Bayesian updates in Markov processes by means of path integral\ntechniques and derive the imaginary-time Schrödinger equation, with the\nlikelihood that directs the inference incorporated as a potential for the\nposterior probability distribution.\n",
"title": "A path integral approach to Bayesian inference in Markov processes"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
17728
| null |
Validated
| null | null |
null |
{
"abstract": " Scanning tunnelling microscopy and low energy electron diffraction show a\ndimerization-like reconstruction in the one-dimensional atomic chains on\nBi(114) at low temperatures. While one-dimensional systems are generally\nunstable against such a distortion, its observation is not expected for this\nparticular surface, since there are several factors that should prevent it: One\nis the particular spin texture of the Fermi surface, which resembles a\none-dimensional topological state, and spin protection should hence prevent the\nformation of the reconstruction. The second is the very short nesting vector $2\nk_F$, which is inconsistent with the observed lattice distortion. A\nnesting-driven mechanism of the reconstruction is indeed excluded by the\nabsence of any changes in the electronic structure near the Fermi surface, as\nobserved by angle-resolved photoemission spectroscopy. However, distinct\nchanges in the electronic structure at higher binding energies are found to\naccompany the structural phase transition. This, as well as the observed short\ncorrelation length of the pairing distortion, suggests that the transition is of\nthe strong coupling type and driven by phonon entropy rather than electronic\nentropy.\n",
"title": "Strong-coupling charge density wave in a one-dimensional topological metal"
}
| null | null | null | null | true | null |
17729
| null |
Default
| null | null |
null |
{
"abstract": " A real hypersurface in the complex quadric $Q^m=SO_{m+2}/SO_mSO_2$ is said to\nbe $\\mathfrak A$-principal if its unit normal vector field is singular of type\n$\\mathfrak A$-principal everywhere. In this paper, we show that a $\\mathfrak\nA$-principal Hopf hypersurface in $Q^m$, $m\\geq3$, is an open part of a tube\naround a totally geodesic $Q^{m-1}$ in $Q^m$. We also show that such real\nhypersurfaces are the only contact real hypersurfaces in $Q^m$. The\nclassification of pseudo-Einstein real hypersurfaces in $Q^m$, $m\\geq3$, is\nalso obtained.\n",
"title": "$\\mathfrak A$-principal Hopf hypersurfaces in complex quadrics"
}
| null | null |
[
"Mathematics"
] | null | true | null |
17730
| null |
Validated
| null | null |
null |
{
"abstract": " The quest towards expansion of the MAX design space has been accelerated with\nthe recent discovery of several solid-solution and ordered phases involving at\nleast two MAX end members. Going beyond the nominal MAX compounds enables not\nonly fine tuning of existing properties but also entirely new functionality.\nThis search, however, has mostly been done through painstaking experiments, as\nknowledge of the phase stability of the relevant systems is rather scarce. In\nthis work, we report the first attempt to evaluate the finite-temperature\npseudo-binary phase diagram of the Ti2AlC-Cr2AlC system via a\nfirst-principles-guided Bayesian CALPHAD framework that accounts for\nuncertainties not only in ab initio calculations and thermodynamic models but\nalso in the synthesis conditions of reported experiments. The phase stability\nanalyses are shown to be in good agreement with previous experiments. The work\npoints towards a promising way of investigating phase stability in other MAX\nphase systems, providing the knowledge necessary to elucidate possible\nsynthesis routes for MAX systems with unprecedented properties.\n",
"title": "On the stochastic phase stability of Ti2AlC-Cr2AlC"
}
| null | null | null | null | true | null |
17731
| null |
Default
| null | null |
null |
{
"abstract": " Deep learning has proven to be a powerful tool for computer vision and has\nseen widespread adoption for numerous tasks. However, deep learning algorithms\nare known to be vulnerable to adversarial examples. These adversarial inputs\nare created such that, when provided to a deep learning algorithm, they are\nvery likely to be mislabeled. This can be problematic when deep learning is\nused to assist in safety-critical decisions. Recent research has shown that\nclassifiers can be attacked by physical adversarial examples under various\nphysical conditions. Given the fact that state-of-the-art object detection\nalgorithms are harder to fool with the same set of adversarial examples,\nhere we show that these detectors can also be attacked by physical adversarial\nexamples. In this note, we briefly show both static and dynamic test results.\nWe design an algorithm that produces physical adversarial inputs, which can\nfool the YOLO object detector and can also attack Faster-RCNN with a relatively\nhigh success rate based on transferability. Furthermore, our algorithm can\ncompress the size of the adversarial inputs to stickers that, when attached to\nthe targeted object, result in the detector either mislabeling or not detecting\nthe object a high percentage of the time. This note provides a small set of\nresults. Our upcoming paper will contain a thorough evaluation of other object\ndetectors and will present the algorithm.\n",
"title": "Note on Attacking Object Detectors with Adversarial Stickers"
}
| null | null | null | null | true | null |
17732
| null |
Default
| null | null |
null |
{
"abstract": " We determine the value of some search games where our goal is to find all of\nsome hidden treasures using queries of bounded size. The answer to a query is\neither empty, in which case we lose, or a location, which contains a treasure.\nWe prove that if we need to find $d$ treasures at $n$ possible locations with\nqueries of size at most $k$, then our chance of winning is $\\frac{k^d}{\\binom\nnd}$ if each treasure is at a different location and\n$\\frac{k^d}{\\binom{n+d-1}d}$ if each location might hide several treasures for\nlarge enough $n$. Our work builds on some results by Csóka who has studied a\ncontinuous version of this problem, known as Alpern's Caching Game; we also\nprove that the value of Alpern's Caching Game is $\\frac{k^d}{\\binom{n+d-1}d}$\nfor integer $k$ and large enough $n$.\n",
"title": "All or Nothing Caching Games with Bounded Queries"
}
| null | null | null | null | true | null |
17733
| null |
Default
| null | null |
null |
{
"abstract": " We report the discovery of four short-period extrasolar planets transiting\nmoderately bright stars from photometric measurements of the HATSouth network\ncoupled with additional spectroscopic and photometric follow-up observations.\nWhile the planet masses range from 0.26 to 0.90 M$_J$, the radii are all\napproximately one Jupiter radius, resulting in a wide range of bulk densities. The\norbital periods of the planets range from 2.7d to 4.7d, with HATS-43b having an\norbit that appears to be marginally non-circular (e= 0.173$\\pm$0.089). HATS-44\nis notable for a high metallicity ([Fe/H]= 0.320$\\pm$0.071). The host stars'\nspectral types range from late F to early K, and all of them are moderately\nbright (13.3<V<14.4), allowing the execution of future detailed follow-up\nobservations. HATS-43b and HATS-46b, with expected transmission signals of 2350\nppm and 1500 ppm, respectively, are particularly well-suited targets for\natmospheric characterisation via transmission spectroscopy.\n",
"title": "HATS-43b, HATS-44b, HATS-45b, and HATS-46b: Four Short Period Transiting Giant Planets in the Neptune-Jupiter Mass Range"
}
| null | null | null | null | true | null |
17734
| null |
Default
| null | null |
null |
{
"abstract": " We formulate a general criterion for the exact preservation of the \"lake at\nrest\" solution in general mesh-based and meshless numerical schemes for the\nstrong form of the shallow-water equations with bottom topography. The main\nidea is a careful mimetic design for the spatial derivative operators in the\nmomentum flux equation that is paired with a compatible averaging rule for the\nwater column height arising in the bottom topography source term. We prove\nconsistency of the mimetic difference operators analytically and demonstrate\nthe well-balanced property numerically using finite difference and RBF-FD\nschemes in the one- and two-dimensional cases.\n",
"title": "Well-balanced mesh-based and meshless schemes for the shallow-water equations"
}
| null | null | null | null | true | null |
17735
| null |
Default
| null | null |
null |
{
"abstract": " The use of CVA to cover credit risk is widespread, but has its\nlimitations. Namely, dealers face the problem of the illiquidity of instruments\nused for hedging it, and are hence forced to warehouse credit risk. As a result,\ndealers tend to offer a limited OTC derivatives market to highly risky\ncounterparties. Consequently, those highly risky entities rarely have access to\nhedging services precisely when they need them most. In this paper we propose a\nmethod to overcome this limitation. We propose to extend the CVA risk-neutral\nframework to compute an initial margin (IM) specific to each counterparty,\nwhich depends on the credit quality of the entity at stake, transforming the\neffective credit rating of a given netting set to AAA, regardless of the credit\nrating of the counterparty. By transforming CVA requirements into IM ones, as\nproposed in this paper, an institution could rely on the existing mechanisms\nfor posting and calling of IM, hence ensuring the operational viability of this\nnew form of managing warehoused risk. The main difference from the currently\nstandard framework is the creation of a Specific Initial Margin, which depends\non the credit rating of the counterparty and the characteristics of the netting\nset in question. In this paper we propose a methodology for such a transformation\nin a sound manner, and hence this method overcomes some of the limitations of\nthe CVA framework.\n",
"title": "An Enhanced Initial Margin Methodology to Manage Warehoused Credit Risk"
}
| null | null | null | null | true | null |
17736
| null |
Default
| null | null |
null |
{
"abstract": " Ensemble pruning, selecting a subset of individual learners from an original\nensemble, alleviates the time and space costs of ensemble learning. Accuracy\nand diversity serve as two crucial factors, but they usually conflict with each\nother. To balance both of them, we formalize the ensemble pruning problem as an\nobjection maximization problem based on information entropy. We then propose an\nensemble pruning method with a centralized version and a distributed version,\nin which the latter speeds up the former's execution. Finally, we extract a\ngeneral distributed framework for ensemble pruning, which is widely applicable\nto most existing ensemble pruning methods and achieves less time consumption\nwithout much accuracy decline. Experimental results validate the efficiency of\nour framework and methods, particularly with regard to a remarkable improvement\nin execution speed, accompanied by gratifying accuracy performance.\n",
"title": "Ensemble Pruning based on Objection Maximization with a General Distributed Framework"
}
| null | null | null | null | true | null |
17737
| null |
Default
| null | null |
null |
{
"abstract": " The paper is devoted to the relationship between psychophysics and the physics of\nmind. The basic trends in the development of psychophysics are briefly discussed, with\nspecial attention focused on Teghtsoonian's hypotheses. These hypotheses pose\nthe concept of the universality of inner psychophysics and enable one to speak\nabout psychological space as an individual object with its own properties.\nTurning to the two-component description of human behavior (I. Lubashevsky,\nPhysics of the Human Mind, Springer, 2017), the notion of mental space is\nformulated and human perception of external stimuli is treated as the emergence\nof the corresponding images in the mental space. On the one hand, these images are\ncaused by external stimuli and their magnitude bears information about the\nintensity of the corresponding stimuli. On the other hand, the individual\nstructure of such images as well as their subsistence after emergence is\ndetermined only by the properties of mental space on its own. Finally, the\nmental operations of image comparison and their scaling are defined in a way\nallowing for the bounded capacity of human cognition. As demonstrated, the\ndeveloped theory of stimulus perception is able to explain the basic\nregularities of psychophysics, e.g., (i) the regression and range effects\nleading to the overestimation of weak stimuli and the underestimation of strong\nstimuli, (ii) scalar variability (Weber's and Ekman's laws), and (iii)\nthe sequential (memory) effects. As the final result, a solution to the\nFechner-Stevens dilemma is proposed. This solution posits that Fechner's\nlogarithmic law is not a consequence of Weber's law but stems from the\ninterplay of uncertainty in evaluating stimulus intensities and the multi-step\nscaling required to overcome stimulus incommensurability.\n",
"title": "Psychophysical laws as reflection of mental space properties"
}
| null | null | null | null | true | null |
17738
| null |
Default
| null | null |
null |
{
"abstract": " We study Shimura curves of PEL type in $\\mathsf{A}_g$ generically contained\nin the Prym locus. We study both the unramified Prym locus, obtained using\nétale double covers, and the ramified Prym locus, corresponding to double\ncovers ramified at two points. In both cases we consider the family of all\ndouble covers compatible with a fixed group action on the base curve. We\nrestrict to the case where the family is 1-dimensional and the quotient of the\nbase curve by the group is $\\mathbb{P}^1$. We give a simple criterion for the\nimage of these families under the Prym map to be a Shimura curve. Using\ncomputer algebra we check all the examples obtained in this way up to genus 28.\nWe obtain 43 Shimura curves generically contained in the unramified Prym locus\nand 9 families generically contained in the ramified Prym locus. Most of these\ncurves are not generically contained in the Jacobian locus.\n",
"title": "Shimura curves in the Prym locus"
}
| null | null | null | null | true | null |
17739
| null |
Default
| null | null |
null |
{
"abstract": " Data-driven anomaly detection methods suffer from the drawback of detecting\nall instances that are statistically rare, irrespective of whether the detected\ninstances have real-world significance or not. In this paper, we are interested\nin the problem of specifically detecting anomalous instances that are known to\nhave high real-world utility, while ignoring the low-utility statistically\nanomalous instances. To this end, we propose a novel method called Latent\nLaplacian Maximum Entropy Discrimination (LatLapMED) as a potential solution.\nThis method uses the EM algorithm to simultaneously incorporate the Geometric\nEntropy Minimization principle for identifying statistical anomalies, and the\nMaximum Entropy Discrimination principle to incorporate utility labels, in\norder to detect high-utility anomalies. We apply our method in both simulated\nand real datasets to demonstrate that it has superior performance over existing\nalternatives that independently pre-process with unsupervised anomaly detection\nalgorithms before classifying.\n",
"title": "Latent Laplacian Maximum Entropy Discrimination for Detection of High-Utility Anomalies"
}
| null | null | null | null | true | null |
17740
| null |
Default
| null | null |
null |
{
"abstract": " Self-similarity was recently introduced as a measure of inter-class\ncongruence for classification of actions. Herein, we investigate the dual\nproblem of intra-class dissimilarity for classification of action styles. We\nintroduce self-dissimilarity matrices that discriminate between same actions\nperformed by different subjects regardless of viewing direction and camera\nparameters. We investigate two frameworks using these invariant style\ndissimilarity measures based on Principal Component Analysis (PCA) and Fisher\nDiscriminant Analysis (FDA). Extensive experiments performed on IXMAS dataset\nindicate remarkably good discriminant characteristics for the proposed\ninvariant measures for gender recognition from video data.\n",
"title": "View-Invariant Recognition of Action Style Self-Dissimilarity"
}
| null | null | null | null | true | null |
17741
| null |
Default
| null | null |
null |
{
"abstract": " Highly Principled Data Science insists on methodologies that are: (1)\nscientifically justified, (2) statistically principled, and (3) computationally\nefficient. An astrostatistics collaboration, together with some reminiscences,\nillustrates the increased roles statisticians can and should play to ensure\nthis trio, and to advance the science of data along the way.\n",
"title": "Conducting Highly Principled Data Science: A Statistician's Job and Joy"
}
| null | null | null | null | true | null |
17742
| null |
Default
| null | null |
null |
{
"abstract": " We study a class of focusing nonlinear Schroedinger-type equations derived\nrecently by Dumas, Lannes and Szeftel within the mathematical description of\nhigh intensity laser beams [7]. These equations incorporate the possibility of\na (partial) off-axis variation of the group velocity of such laser beams\nthrough a second order partial differential operator acting in some, but not\nnecessarily all, spatial directions. We study the well-posedness theory for\nsuch models and obtain a regularizing effect, even in the case of only partial\noff-axis dependence. This provides an answer to an open problem posed in [7].\n",
"title": "Regularizing nonlinear Schroedinger equations through partial off-axis variations"
}
| null | null |
[
"Mathematics"
] | null | true | null |
17743
| null |
Validated
| null | null |
null |
{
"abstract": " Following some previous studies on restarting automata, we introduce a\nrefined model - the h-lexicalized restarting automaton (h-RLWW). We argue that\nthis model is useful for expressing lexicalized syntax in computational\nlinguistics. We compare the input languages, which are the languages\ntraditionally considered in automata theory, to the so-called basic and\nh-proper languages, which are (implicitly) used by categorial grammars, the\noriginal tool for the description of lexicalized syntax. The basic and h-proper\nlanguages allow us to stress several nice properties of h-lexicalized\nrestarting automata, and they are suitable for modeling the analysis by\nreduction and, subsequently, for the development of categories of a lexicalized\nsyntax. Based on the fact that a two-way deterministic monotone restarting\nautomaton can be transformed into an equivalent deterministic monotone\nRL-automaton in (Marcus) contextual form, we obtain a transformation from\nmonotone RLWW-automata that recognize the class CFL of context-free languages\nas their input languages to deterministic monotone h-RLWW-automata that\nrecognize CFL through their h-proper languages. Through this transformation we\nobtain automata with the complete correctness preserving property and an\ninfinite hierarchy within CFL, based on the size of the read/write window.\nAdditionally, we consider h-RLWW-automata that are allowed to perform multiple\nrewrite steps per cycle, and we establish another infinite hierarchy above CFL\nthat is based on the number of rewrite steps that may be executed within a\ncycle. The corresponding separation results and their proofs illustrate the\ntransparency of h-RLWW-automata that work with the (complete or cyclic)\ncorrectness preserving property.\n",
"title": "On h-Lexicalized Restarting Automata"
}
| null | null | null | null | true | null |
17744
| null |
Default
| null | null |
null |
{
"abstract": " In this work, nonparametric statistical inference is provided for the\ncontinuous-time M/G/1 queueing model from a Bayesian point of view. The\ninference is based on observations of the inter-arrival and service times.\nBesides other characteristics of the system, particular interest is in the\nwaiting time distribution, which is not accessible in closed form. Thus, we use\nan indirect statistical approach by exploiting the Pollaczek-Khinchine\ntransform formula for the Laplace transform of the waiting time distribution.\nBased on this, an estimator is defined and its frequentist validation in terms of\nposterior consistency and posterior normality is studied. It will turn out that\nwe can hereby make inference for the observables separately and compose the\nresults subsequently by suitable techniques.\n",
"title": "Bayesian Nonparametric Inference for M/G/1 Queueing Systems"
}
| null | null | null | null | true | null |
17745
| null |
Default
| null | null |
null |
{
"abstract": " The symbol is used to describe the Springer correspondence for the classical\ngroups. We propose equivalent definitions of symbols for rigid partitions in\nthe $B_n$, $C_n$, and $D_n$ theories uniformly. Analysing the new definition of\nthe symbol in detail, we give rules to construct the symbol of a partition, which are\neasy to remember and to operate with. We introduce formal operations on a\npartition, which reduce the difficulties in the proof of the construction\nrules. According to these rules, we give a closed formula for the symbols of the\ndifferent theories uniformly. As applications, previous results can be illustrated more\nclearly by the construction rules of the symbol.\n",
"title": "Symbol Invariant of Partition and the Construction"
}
| null | null |
[
"Mathematics"
] | null | true | null |
17746
| null |
Validated
| null | null |
null |
{
"abstract": " While harms of allocation have been increasingly studied as part of the\nsubfield of algorithmic fairness, harms of representation have received\nconsiderably less attention. In this paper, we formalize two notions of\nstereotyping and show how they manifest in later allocative harms within the\nmachine learning pipeline. We also propose mitigation strategies and\ndemonstrate their effectiveness on synthetic datasets.\n",
"title": "Fairness in representation: quantifying stereotyping as a representational harm"
}
| null | null | null | null | true | null |
17747
| null |
Default
| null | null |
null |
{
"abstract": " The X-ray spectra of the neutron stars located in the centers of supernova\nremnants Cas A and HESS J1731-347 are well fit with carbon atmosphere models.\nThese fits yield plausible neutron star sizes for the known or estimated\ndistances to these supernova remnants. The evidence in favor of the presence of\na pure carbon envelope at the neutron star surface is rather indirect and is\nbased on the assumption that the emission is generated uniformly by the entire\nstellar surface. Although this assumption is supported by the absence of\npulsations, the observational upper limit on the pulsed fraction is not very\nstringent. In an attempt to quantify this evidence, we investigate the\npossibility that the observed spectrum of the neutron star in HESS J1731-347 is\na combination of the spectra produced in a hydrogen atmosphere of the hotspots\nand of the cooler remaining part of the neutron star surface. The lack of\npulsations in this case has to be explained either by a sufficiently small\nangle between the neutron star spin axis and the line of sight, or by a\nsufficiently small angular distance between the hotspots and the neutron star\nrotation poles. As the observed flux from a non-uniformly emitting neutron star\ndepends on the angular distribution of the radiation emerging from the\natmosphere, we have computed two new grids of pure carbon and pure hydrogen\natmosphere model spectra accounting for Compton scattering. Using new hydrogen\nmodels, we have evaluated the probability of a geometry that leads to a pulsed\nfraction below the observed upper limit to be about 8.2 %. Such a geometry thus\nseems to be rather improbable but cannot be excluded at this stage.\n",
"title": "Probing the possibility of hotspots on the central neutron star in HESS J1731-347"
}
| null | null |
[
"Physics"
] | null | true | null |
17748
| null |
Validated
| null | null |
null |
{
"abstract": " We prove various inequalities between the number of partitions with the bound\non the largest part and some restrictions on occurrences of parts. We explore\nmany interesting consequences of these partition inequalities. In particular,\nwe show that for $L\\geq 1$, the number of partitions with $l-s \\leq L$ and\n$s=1$ is greater than the number of partitions with $l-s\\leq L$ and $s>1$. Here\n$l$ and $s$ are the largest part and the smallest part of the partition,\nrespectively.\n",
"title": "Some Elementary Partition Inequalities and Their Implications"
}
| null | null | null | null | true | null |
17749
| null |
Default
| null | null |
null |
{
"abstract": " Overabundances in highly siderophile elements (HSEs) of Earth's mantle can be\nexplained by conveyance from a singular, immense (3000 km in diameter) \"Late\nVeneer\" impactor of chondritic composition, subsequent to lunar formation and\nterrestrial core-closure. Such rocky objects of approximately lunar mass (about\n0.01 M_E) ought to be differentiated, such that nearly all of their HSE payload\nis sequestered into iron cores. Here, we analyze the mechanical and chemical\nfate of the core of such a Late Veneer impactor, and trace how its HSEs are\nsuspended in - and thus pollute - the mantle. For the statistically most likely\noblique collision (about 45 degrees), the impactor's core elongates and\nthereafter disintegrates into a metallic hail of small particles (about 10 m).\nSome strike the orbiting Moon as sesquinary impactors, but most re-accrete to\nEarth as secondaries with further fragmentation. We show that a single oblique\nimpactor provides an adequate amount of HSEs to the primordial terrestrial\nsilicate reservoirs via oxidation of (<m-sized) metal particles with a hydrous,\npre-impact, early Hadean Earth.\n",
"title": "The terrestrial late veneer from core disruption of a lunar-sized impactor"
}
| null | null |
[
"Physics"
] | null | true | null |
17750
| null |
Validated
| null | null |
null |
{
"abstract": " We consider a Bayesian model for inversion of observed amplitude variation\nwith offset (AVO) data into lithology/fluid classes, and study in particular\nhow the choice of prior distribution for the lithology/fluid classes influences\nthe inversion results. Two distinct prior distributions are considered, a\nsimple manually specified Markov random field prior with a first order\nneighborhood and a Markov mesh model with a much larger neighborhood estimated\nfrom a training image. They are chosen to model both horizontal connectivity\nand vertical thickness distribution of the lithology/fluid classes, and are\ncompared on an offshore clastic oil reservoir in the North Sea. We combine both\npriors with the same linearised Gaussian likelihood function based on a\nconvolved linearised Zoeppritz relation and estimate properties of the\nresulting two posterior distributions by simulating from these distributions\nwith the Metropolis-Hastings algorithm.\nThe influence of the prior on the marginal posterior probabilities for the\nlithology/fluid classes is clearly observable, but modest. The importance of\nthe prior on the connectivity properties in the posterior realisations,\nhowever, is much stronger. The larger neighborhood of the Markov mesh prior\nenables it to identify and model connectivity and curvature much better than\nwhat can be done by the first order neighborhood Markov random field prior. As\na result, we conclude that the posterior realisations based on the Markov mesh\nprior appear with much higher lateral connectivity, which is geologically\nplausible.\n",
"title": "A Bayesian model for lithology/fluid class prediction using a Markov mesh prior fitted from a training image"
}
| null | null | null | null | true | null |
17751
| null |
Default
| null | null |
null |
{
"abstract": " A promising route to the realization of Majorana fermions is in\nnon-centrosymmetric superconductors, in which spin-orbit coupling lifts the\nspin degeneracy of both bulk and surface bands. A detailed assessment of the\nelectronic structure is critical to evaluate their suitability for this purpose\nby establishing the topological properties of the electronic structure. This\nrequires correct identification of the time-reversal-invariant momenta. One\nsuch material is BiPd, a recently rediscovered non-centrosymmetric\nsuperconductor which can be grown in large, high-quality single crystals and\nhas been studied by several groups using angle-resolved photoemission to\nestablish its surface electronic structure. Many of the published electronic\nstructure studies on this material are based on a reciprocal unit cell which is\nnot the actual Brillouin zone of the material. We show here the consequences of\nthis for the electronic structure and show how the inferred topological nature\nof the material is affected.\n",
"title": "Correct Brillouin zone and electronic structure of BiPd"
}
| null | null | null | null | true | null |
17752
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we introduce RADULS2, the fastest parallel sorter based on the\nradix algorithm. It is optimized to process huge amounts of data, making use of\nmodern multicore CPUs. The main novelties include: an extremely optimized\nalgorithm for handling tiny arrays (up to about a hundred records) that\ncan appear billions of times as subproblems, and improved\nprocessing of larger subarrays with better use of non-temporal memory stores.\n",
"title": "Even faster sorting of (not only) integers"
}
| null | null | null | null | true | null |
17753
| null |
Default
| null | null |
null |
{
"abstract": " Deterministic neural nets have been shown to learn effective predictors on a\nwide range of machine learning problems. However, as the standard approach is\nto train the network to minimize a prediction loss, the resultant model remains\nignorant to its prediction confidence. Orthogonally to Bayesian neural nets\nthat indirectly infer prediction uncertainty through weight uncertainties, we\npropose explicit modeling of the same using the theory of subjective logic. By\nplacing a Dirichlet distribution on the class probabilities, we treat\npredictions of a neural net as subjective opinions and learn the function that\ncollects the evidence leading to these opinions by a deterministic neural net\nfrom data. The resultant predictor for a multi-class classification problem is\nanother Dirichlet distribution whose parameters are set by the continuous\noutput of a neural net. We provide a preliminary analysis on how the\npeculiarities of our new loss function drive improved uncertainty estimation.\nWe observe that our method achieves unprecedented success on detection of\nout-of-distribution queries and endurance against adversarial perturbations.\n",
"title": "Evidential Deep Learning to Quantify Classification Uncertainty"
}
| null | null | null | null | true | null |
17754
| null |
Default
| null | null |
null |
{
"abstract": " This work relates to the famous experiments, performed in 1975 and 1979 by\nWerner et al., measuring neutron interference and neutron Sagnac effects in the\nearth's gravitational field. Employing the method of Stodolsky in its weak\nfield approximation, explicit expressions are derived for the two phase shifts,\nwhich turn out to be in agreement with the experiments and with the previously\nobtained expressions derived from semi-classical arguments: these expressions\nare simply modified by relativistic correction factors.\n",
"title": "Neutron interference in the Earth's gravitational field"
}
| null | null | null | null | true | null |
17755
| null |
Default
| null | null |
null |
{
"abstract": " The Shortest Paths Problem (SPP) is no longer unresolved. Just for a large\nscalar of instance on this problem, even we cannot know if an algorithm\nachieves the computing. Those cutting-edge methods are still in the low\nperformance. If we go to a strategy the best-first-search to deal with\ncomputing, it is awkward that the technical barrier from another field: the\ndatabase, which with the capable of Online Oriented. In this paper, we will\nintroduce such a synthesis to solve for SPP which comprises various modules\ntherein including such database leads to finish the task in a logarithm\nruntime.\nThrough experiments taken on three typical instances on mega-scalar data for\ntransaction in a common laptop, we show off a totally robust, tractable and\npractical applicability for other projects.\n",
"title": "Solve For Shortest Paths Problem Within Logarithm Runtime"
}
| null | null |
[
"Computer Science"
] | null | true | null |
17756
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper we present a short and elementary proof for the error in\nSimpson's rule.\n",
"title": "A short proof of the error term in Simpson's rule"
}
| null | null |
[
"Mathematics"
] | null | true | null |
17757
| null |
Validated
| null | null |
null |
{
"abstract": " Cosmology in the near future promises a measurement of the sum of neutrino\nmasses, a fundamental Standard Model parameter, as well as\nsubstantially-improved constraints on the dark energy. We use the shape of the\nBOSS redshift-space galaxy power spectrum, in combination with CMB and\nsupernova data, to constrain the neutrino masses and the dark energy. Essential\nto this calculation are several recent advances in non-linear cosmological\nperturbation theory, including FFT methods, redshift space distortions, and\nscale-dependent growth. Our 95% confidence upper bound of 200 meV on the sum of\nmasses degrades substantially to 770 meV when the dark energy equation of state\nand its first derivative are also allowed to vary, representing a significant\nchallenge to current constraints. We also study the impact of additional galaxy\nbias parameters, finding that a velocity bias or a more complicated\nscale-dependent density bias shift the preferred neutrino mass values 20%-30%\nlower while minimally impacting the other cosmological parameters.\n",
"title": "Neutrino mass and dark energy constraints from redshift-space distortions"
}
| null | null | null | null | true | null |
17758
| null |
Default
| null | null |
null |
{
"abstract": " Liquid-phase-exfoliation is a technique capable of producing large quantities\nof two-dimensional material in suspension. Despite many efforts in the\noptimization of the exfoliation process itself not much has been done towards\nthe integration of liquid-phase-exfoliated materials in working solid-state\ndevices. In this article, we use dielectrophoresis to direct the assembly of\nliquid-phase-exfoliated TiS3 nanoribbons between two gold electrodes to produce\nphotodetectors working in the visible. Through electrical and optical\nmeasurements we characterize the responsivity of the device and we find values\nas large as 3.8 mA/W, which improve of more than one order of magnitude on the\nstate-of-the-art for devices based on liquid-phase-exfoliated two-dimensional\nmaterials assembled by drop-casting or ink-jet methods.\n",
"title": "Dielectrophoretic assembly of liquid-phase-exfoliated TiS3 nanoribbons for photodetecting applications"
}
| null | null | null | null | true | null |
17759
| null |
Default
| null | null |
null |
{
"abstract": " Designing an exoskeleton to reduce the risk of low-back injury during lifting\nis challenging. Computational models of the human-robot system coupled with\npredictive movement simulations can help to simplify this design process. Here,\nwe present a study that models the interaction between a human model actuated\nby muscles and a lower-back exoskeleton. We provide a computational framework\nfor identifying the spring parameters of the exoskeleton using an optimal\ncontrol approach and forward-dynamics simulations. This is applied to generate\ndynamically consistent bending and lifting movements in the sagittal plane. Our\ncomputations are able to predict motions and forces of the human and\nexoskeleton that are within the torque limits of a subject. The identified\nexoskeleton could also yield a considerable reduction of the peak lower-back\ntorques as well as the cumulative lower-back load during the movements. This\nwork is relevant to the research communities working on human-robot\ninteraction, and can be used as a basis for a better human-centered design\nprocess.\n",
"title": "Motion optimization and parameter identification for a human and lower-back exoskeleton model"
}
| null | null | null | null | true | null |
17760
| null |
Default
| null | null |
null |
{
"abstract": " Todays, researchers in the field of Pulmonary Embolism (PE) analysis need to\nuse a publicly available dataset to assess and compare their methods. Different\nsystems have been designed for the detection of pulmonary embolism (PE), but\nnone of them have used any public datasets. All papers have used their own\nprivate dataset. In order to fill this gap, we have collected 5160 slices of\ncomputed tomography angiography (CTA) images acquired from 20 patients, and\nafter labeling the image by experts in this field, we provided a reliable\ndataset which is now publicly available. In some situation, PE detection can be\ndifficult, for example when it occurs in the peripheral branches or when\npatients have pulmonary diseases (such as parenchymal disease). Therefore, the\nefficiency of CAD systems highly depends on the dataset. In the given dataset,\n66% of PE are located in peripheral branches, and different pulmonary diseases\nare also included.\n",
"title": "A dataset for Computer-Aided Detection of Pulmonary Embolism in CTA images"
}
| null | null | null | null | true | null |
17761
| null |
Default
| null | null |
null |
{
"abstract": " We point out that most of the classical thermodynamics results in the paper\nhave been known in the literature, see Kestin and Woods, for quite some time\nand are not new, contrary to what the authors imply. As shown by Kestin, these\nresults are valid for quasistatic irreversible processes only and not for\narbitrary irreversible processes as suggested in the paper. Thus, the\napplication to the Jarzynski process is limited.\n",
"title": "Comment on Ben-Amotz and Honig, \"Average entropy dissipation in irreversible mesoscopic processes,\" Phys. Rev. Lett. 96, 020602 (2006)"
}
| null | null |
[
"Physics"
] | null | true | null |
17762
| null |
Validated
| null | null |
null |
{
"abstract": " High-transmissivity all-dielectric metasurfaces have recently attracted\nattention towards the realization of ultra-compact optical devices and systems.\nSilicon based metasurfaces, in particular, are highly promising considering the\npossibility of monolithic integration with VLSI circuits. Realization of\nsilicon based metasurfaces operational in the visible wavelengths remains a\nchallenge. A numerical study of silicon metasurfaces based on stepped truncated\ncone shaped nanoantenna elements is presented. Metasurfaces based on the\nstepped conical geometry can be designed for operation in the 700nm to 800nm\nwavelength window and achieve full cycle phase response (0 to pi with an\nimproved transmittance in comparison with previously reported cylindrical\ngeometry [1]. A systematic parameter study of the influence of various\ngeometrical parameters on the achievable amplitude and phase coverage is\nreported.\n",
"title": "High-transmissivity Silicon Visible-wavelength Metasurface Designs based on Truncated-cone Nanoantennae"
}
| null | null |
[
"Physics"
] | null | true | null |
17763
| null |
Validated
| null | null |
null |
{
"abstract": " We proposed a probabilistic approach to joint modeling of participants'\nreliability and humans' regularity in crowdsourced affective studies.\nReliability measures how likely a subject will respond to a question seriously;\nand regularity measures how often a human will agree with other\nseriously-entered responses coming from a targeted population.\nCrowdsourcing-based studies or experiments, which rely on human self-reported\naffect, pose additional challenges as compared with typical crowdsourcing\nstudies that attempt to acquire concrete non-affective labels of objects. The\nreliability of participants has been massively pursued for typical\nnon-affective crowdsourcing studies, whereas the regularity of humans in an\naffective experiment in its own right has not been thoroughly considered. It\nhas been often observed that different individuals exhibit different feelings\non the same test question, which does not have a sole correct response in the\nfirst place. High reliability of responses from one individual thus cannot\nconclusively result in high consensus across individuals. Instead, globally\ntesting consensus of a population is of interest to investigators. Built upon\nthe agreement multigraph among tasks and workers, our probabilistic model\ndifferentiates subject regularity from population reliability. We demonstrate\nthe method's effectiveness for in-depth robust analysis of large-scale\ncrowdsourced affective data, including emotion and aesthetic assessments\ncollected by presenting visual stimuli to human subjects.\n",
"title": "Probabilistic Multigraph Modeling for Improving the Quality of Crowdsourced Affective Data"
}
| null | null | null | null | true | null |
17764
| null |
Default
| null | null |
null |
{
"abstract": " Quantum key distribution (QKD) offers a way for establishing\ninformation-theoretically secure communications. An important part of QKD\ntechnology is a high-quality random number generator (RNG) for quantum states\npreparation and for post-processing procedures. In the present work, we\nconsider a novel class of prepare-and-measure QKD protocols, utilizing\nadditional pseudorandomness in the preparation of quantum states. We study one\nof such protocols and analyze its security against the intercept-resend attack.\nWe demonstrate that, for single-photon sources, the considered protocol gives\nbetter secret key rates than the BB84 and the asymmetric BB84 protocol.\nHowever, the protocol strongly requires single-photon sources.\n",
"title": "Quantum key distribution protocol with pseudorandom bases"
}
| null | null | null | null | true | null |
17765
| null |
Default
| null | null |
null |
{
"abstract": " Deep neural networks are a family of computational models that have led to a\ndramatical improvement of the state of the art in several domains such as\nimage, voice or text analysis. These methods provide a framework to model\ncomplex, non-linear interactions in large datasets, and are naturally suited to\nthe analysis of hierarchical data such as, for instance, longitudinal data with\nthe use of recurrent neural networks. In the other hand, cohort studies have\nbecome a tool of importance in the research field of epidemiology. In such\nstudies, variables are measured repeatedly over time, to allow the practitioner\nto study their temporal evolution as trajectories, and, as such, as\nlongitudinal data. This paper investigates the application of the advanced\nmodelling techniques provided by the deep learning framework in the analysis of\nthe longitudinal data provided by cohort studies. Methods: A method for\nvisualizing and clustering longitudinal dataset is proposed, and compared to\nother widely used approaches to the problem on both real and simulated\ndatasets. Results: The proposed method is shown to be coherent with the\npreexisting procedures on simple tasks, and to outperform them on more complex\ntasks such as the partitioning of longitudinal datasets into non-spherical\nclusters. Conclusion: Deep artificial neural networks can be used to visualize\nlongitudinal data in a low dimensional manifold that is much simpler to\ninterpret than traditional longitudinal plots are. Consequently, practitioners\nshould start considering the use of deep artificial neural networks for the\nanalysis of their longitudinal data in studies to come.\n",
"title": "Deep clustering of longitudinal data"
}
| null | null | null | null | true | null |
17766
| null |
Default
| null | null |
null |
{
"abstract": " In an economy with asymmetric information, the smart contract in the\nblockchain protocol mitigates uncertainty. Since, as a new trading platform,\nthe blockchain triggers segmentation of market and differentiation of agents in\nboth the sell and buy sides of the market, it recomposes the asymmetric\ninformation and generates spreads in asset price and quality between itself and\na traditional platform. We show that marginal innovation and sophistication of\nthe smart contract have non-monotonic effects on the trading value in the\nblockchain platform, its fundamental value, the price of cryptocurrency, and\nconsumers' welfare. Moreover, a blockchain manager who controls the level of\nthe innovation of the smart contract has an incentive to keep it lower than the\nfirst best when the underlying information asymmetry is not severe, leading to\nwelfare loss for consumers.\n",
"title": "Economic Implications of Blockchain Platforms"
}
| null | null | null | null | true | null |
17767
| null |
Default
| null | null |
null |
{
"abstract": " Ordering theorems, characterizing when partial orders of a group extend to\ntotal orders, are used to generate hypersequent calculi for varieties of\nlattice-ordered groups (l-groups). These calculi are then used to provide new\nproofs of theorems arising in the theory of ordered groups. More precisely: an\nanalytic calculus for abelian l-groups is generated using an ordering theorem\nfor abelian groups; a calculus is generated for l-groups and new decidability\nproofs are obtained for the equational theory of this variety and extending\nfinite subsets of free groups to right orders; and a calculus for representable\nl-groups is generated and a new proof is obtained that free groups are\norderable.\n",
"title": "Proof Theory and Ordered Groups"
}
| null | null | null | null | true | null |
17768
| null |
Default
| null | null |
null |
{
"abstract": " As the foundation of driverless vehicle and intelligent robots, Simultaneous\nLocalization and Mapping(SLAM) has attracted much attention these days.\nHowever, non-geometric modules of traditional SLAM algorithms are limited by\ndata association tasks and have become a bottleneck preventing the development\nof SLAM. To deal with such problems, many researchers seek to Deep Learning for\nhelp. But most of these studies are limited to virtual datasets or specific\nenvironments, and even sacrifice efficiency for accuracy. Thus, they are not\npractical enough.\nWe propose DF-SLAM system that uses deep local feature descriptors obtained\nby the neural network as a substitute for traditional hand-made features.\nExperimental results demonstrate its improvements in efficiency and stability.\nDF-SLAM outperforms popular traditional SLAM systems in various scenes,\nincluding challenging scenes with intense illumination changes. Its versatility\nand mobility fit well into the need for exploring new environments. Since we\nadopt a shallow network to extract local descriptors and remain others the same\nas original SLAM systems, our DF-SLAM can still run in real-time on GPU.\n",
"title": "DF-SLAM: A Deep-Learning Enhanced Visual SLAM System based on Deep Local Features"
}
| null | null | null | null | true | null |
17769
| null |
Default
| null | null |
null |
{
"abstract": " Existing Markov Chain Monte Carlo (MCMC) methods are either based on\ngeneral-purpose and domain-agnostic schemes which can lead to slow convergence,\nor hand-crafting of problem-specific proposals by an expert. We propose\nA-NICE-MC, a novel method to train flexible parametric Markov chain kernels to\nproduce samples with desired properties. First, we propose an efficient\nlikelihood-free adversarial training method to train a Markov chain and mimic a\ngiven data distribution. Then, we leverage flexible volume preserving flows to\nobtain parametric kernels for MCMC. Using a bootstrap approach, we show how to\ntrain efficient Markov chains to sample from a prescribed posterior\ndistribution by iteratively improving the quality of both the model and the\nsamples. A-NICE-MC provides the first framework to automatically design\nefficient domain-specific MCMC proposals. Empirical results demonstrate that\nA-NICE-MC combines the strong guarantees of MCMC with the expressiveness of\ndeep neural networks, and is able to significantly outperform competing methods\nsuch as Hamiltonian Monte Carlo.\n",
"title": "A-NICE-MC: Adversarial Training for MCMC"
}
| null | null | null | null | true | null |
17770
| null |
Default
| null | null |
null |
{
"abstract": " We embed a flipped ${\\rm SU}(5) \\times {\\rm U}(1)$ GUT model in a no-scale\nsupergravity framework, and discuss its predictions for cosmic microwave\nbackground observables, which are similar to those of the Starobinsky model of\ninflation. Measurements of the tilt in the spectrum of scalar perturbations in\nthe cosmic microwave background, $n_s$, constrain significantly the model\nparameters. We also discuss the model's predictions for neutrino masses, and\npay particular attention to the behaviours of scalar fields during and after\ninflation, reheating and the GUT phase transition. We argue in favor of strong\nreheating in order to avoid excessive entropy production which could dilute the\ngenerated baryon asymmetry.\n",
"title": "Starobinsky-like Inflation, Supercosmology and Neutrino Masses in No-Scale Flipped SU(5)"
}
| null | null | null | null | true | null |
17771
| null |
Default
| null | null |
null |
{
"abstract": " The interactive computation paradigm is reviewed and a particular example is\nextended to form the stochastic analog of a computational process via a\ntranscription of a minimal Turing Machine into an equivalent asynchronous\nCellular Automaton with an exponential waiting times distribution of effective\ntransitions. Furthermore, a special toolbox for analytic derivation of\nrecursive relations of important statistical and other quantities is introduced\nin the form of an Inductive Combinatorial Hierarchy.\n",
"title": "'Viral' Turing Machines, Computation from Noise and Combinatorial Hierarchies"
}
| null | null | null | null | true | null |
17772
| null |
Default
| null | null |
null |
{
"abstract": " IIn recent years, there has been a growing interest in applying data\nassimilation (DA) methods, originally designed for state estimation, to the\nmodel selection problem. In this setting, Carrassi et al. (2017) introduced the\ncontextual formulation of model evidence (CME) and showed that CME can be\nefficiently computed using a hierarchy of ensemble-based DA procedures.\nAlthough Carrassi et al. (2017) analyzed the DA methods most commonly used for\noperational atmospheric and oceanic prediction worldwide, they did not study\nthese methods in conjunction with localization to a specific domain. Yet any\napplication of ensemble DA methods to realistic geophysical models requires the\nimplementation of some form of localization. The present study extends the\ntheory for estimating CME to ensemble DA methods with domain localization. The\ndomain-localized CME (DL-CME) developed herein is tested for model selection\nwith two models: (i) the Lorenz 40-variable mid-latitude atmospheric dynamics\nmodel (L95); and (ii) the simplified global atmospheric SPEEDY model. The CME\nis compared to the root-mean-square-error (RMSE) as a metric for model\nselection. The experiments show that CME improves systematically over the RMSE,\nand that this skill improvement is further enhanced by applying localization in\nthe estimate of the CME, using the DL-CME. The potential use and range of\napplications of the CME and DL-CME as a model selection metric are also\ndiscussed.\n",
"title": "Estimating model evidence using ensemble-based data assimilation with localization - The model selection problem"
}
| null | null | null | null | true | null |
17773
| null |
Default
| null | null |
null |
{
"abstract": " We propose and analyze an efficient spectral-Galerkin approximation for the\nMaxwell transmission eigenvalue problem in spherical geometry. Using a vector\nspherical harmonic expansion, we reduce the problem to a sequence of equivalent\none-dimensional TE and TM modes that can be solved individually in parallel.\nFor the TE mode, we derive associated generalized eigenvalue problems and\ncorresponding pole conditions. Then we introduce weighted Sobolev spaces based\non the pole condition and prove error estimates for the generalized eigenvalue\nproblem. The TM mode is a coupled system with four unknown functions, which is\nchallenging for numerical calculation. To handle it, we design an effective\nalgorithm using Legendre-type vector basis functions. Finally, we provide some\nnumerical experiments to validate our theoretical results and demonstrate the\nefficiency of the algorithms.\n",
"title": "An efficient spectral-Galerkin approximation and error analysis for Maxwell transmission eigenvalue problems in spherical geometries"
}
| null | null | null | null | true | null |
17774
| null |
Default
| null | null |
null |
{
"abstract": " Mendelian randomization uses genetic variants to make causal inferences about\nthe effect of a risk factor on an outcome. With fine-mapped genetic data, there\nmay be hundreds of genetic variants in a single gene region any of which could\nbe used to assess this causal relationship. However, using too many genetic\nvariants in the analysis can lead to spurious estimates and inflated Type 1\nerror rates. But if only a few genetic variants are used, then the majority of\nthe data is ignored and estimates are highly sensitive to the particular choice\nof variants. We propose an approach based on summarized data only (genetic\nassociation and correlation estimates) that uses principal components analysis\nto form instruments. This approach has desirable theoretical properties: it\ntakes the totality of data into account and does not suffer from numerical\ninstabilities. It also has good properties in simulation studies: it is not\nparticularly sensitive to varying the genetic variants included in the analysis\nor the genetic correlation matrix, and it does not have greatly inflated Type 1\nerror rates. Overall, the method gives estimates that are not so precise as\nthose from variable selection approaches (such as using a conditional analysis\nor pruning approach to select variants), but are more robust to seemingly\narbitrary choices in the variable selection step. Methods are illustrated by an\nexample using genetic associations with testosterone for 320 genetic variants\nto assess the effect of sex hormone-related pathways on coronary artery disease\nrisk, in which variable selection approaches give inconsistent inferences.\n",
"title": "Mendelian randomization with fine-mapped genetic data: choosing from large numbers of correlated instrumental variables"
}
| null | null | null | null | true | null |
17775
| null |
Default
| null | null |
null |
{
"abstract": " Direct acoustics-to-word (A2W) models in the end-to-end paradigm have\nreceived increasing attention compared to conventional sub-word based automatic\nspeech recognition models using phones, characters, or context-dependent hidden\nMarkov model states. This is because A2W models recognize words from speech\nwithout any decoder, pronunciation lexicon, or externally-trained language\nmodel, making training and decoding with such models simple. Prior work has\nshown that A2W models require orders of magnitude more training data in order\nto perform comparably to conventional models. Our work also showed this\naccuracy gap when using the English Switchboard-Fisher data set. This paper\ndescribes a recipe to train an A2W model that closes this gap and is at-par\nwith state-of-the-art sub-word based models. We achieve a word error rate of\n8.8%/13.9% on the Hub5-2000 Switchboard/CallHome test sets without any decoder\nor language model. We find that model initialization, training data order, and\nregularization have the most impact on the A2W model performance. Next, we\npresent a joint word-character A2W model that learns to first spell the word\nand then recognize it. This model provides a rich output to the user instead of\nsimple word hypotheses, making it especially useful in the case of words unseen\nor rarely-seen during training.\n",
"title": "Building competitive direct acoustics-to-word models for English conversational speech recognition"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
17776
| null |
Validated
| null | null |
null |
{
"abstract": " Convolutional dictionary learning (CDL or sparsifying CDL) has many\napplications in image processing and computer vision. There has been growing\ninterest in developing efficient algorithms for CDL, mostly relying on the\naugmented Lagrangian (AL) method or the variant alternating direction method of\nmultipliers (ADMM). When their parameters are properly tuned, AL methods have\nshown fast convergence in CDL. However, the parameter tuning process is not\ntrivial due to its data dependence and, in practice, the convergence of AL\nmethods depends on the AL parameters for nonconvex CDL problems. To moderate\nthese problems, this paper proposes a new practically feasible and convergent\nBlock Proximal Gradient method using a Majorizer (BPG-M) for CDL. The\nBPG-M-based CDL is investigated with different block updating schemes and\nmajorization matrix designs, and further accelerated by incorporating some\nmomentum coefficient formulas and restarting techniques. All of the methods\ninvestigated incorporate a boundary artifacts removal (or, more generally,\nsampling) operator in the learning model. Numerical experiments show that,\nwithout needing any parameter tuning process, the proposed BPG-M approach\nconverges more stably to desirable solutions of lower objective values than the\nexisting state-of-the-art ADMM algorithm and its memory-efficient variant do.\nCompared to the ADMM approaches, the BPG-M method using a multi-block updating\nscheme is particularly useful in single-threaded CDL algorithm handling large\ndatasets, due to its lower memory requirement and no polynomial computational\ncomplexity. Image denoising experiments show that, for relatively strong\nadditive white Gaussian noise, the filters learned by BPG-M-based CDL\noutperform those trained by the ADMM approach.\n",
"title": "Convolutional Dictionary Learning: Acceleration and Convergence"
}
| null | null | null | null | true | null |
17777
| null |
Default
| null | null |
null |
{
"abstract": " We present late-time optical $R$-band imaging data from the Palomar Transient\nFactory (PTF) for the nearby type Ia supernova SN 2011fe. The stacked PTF light\ncurve provides densely sampled coverage down to $R\\simeq22$ mag over 200 to 620\ndays past explosion. Combining with literature data, we estimate the\npseudo-bolometric light curve for this event from 200 to 1600 days after\nexplosion, and constrain the likely near-infrared contribution. This light\ncurve shows a smooth decline consistent with radioactive decay, except over\n~450 to ~600 days where the light curve appears to decrease faster than\nexpected based on the radioactive isotopes presumed to be present, before\nflattening at around 600 days. We model the 200-1600d pseudo-bolometric light\ncurve with the luminosity generated by the radioactive decay chains of\n$^{56}$Ni, $^{57}$Ni and $^{55}$Co, and find it is not consistent with models\nthat have full positron trapping and no infrared catastrophe (IRC); some\nadditional energy escape other than optical/near-IR photons is required.\nHowever, the light curve is consistent with models that allow for positron\nescape (reaching 75% by day 500) and/or an IRC (with 85% of the flux emerging\nin non-optical wavelengths by day 600). The presence of the $^{57}$Ni decay\nchain is robustly detected, but the $^{55}$Co decay chain is not formally\nrequired, with an upper mass limit estimated at 0.014 M$_{\\odot}$. The\nmeasurement of the $^{57}$Ni/$^{56}$Ni mass ratio is subject to significant\nsystematic uncertainties, but all of our fits require a high ratio >0.031 (>1.3\nin solar abundances).\n",
"title": "The late-time light curve of the type Ia supernova SN 2011fe"
}
| null | null | null | null | true | null |
17778
| null |
Default
| null | null |
null |
{
"abstract": " In this article, we consider Markov chain Monte Carlo(MCMC) algorithms for\nexploring the intractable posterior density associated with Bayesian probit\nlinear mixed models under improper priors on the regression coefficients and\nvariance components. In particular, we construct the two-block Gibbs sampler\nusing the data augmentation (DA) techniques. Furthermore, we prove geometric\nergodicity of the Gibbs sampler, which is the foundation for building central\nlimit theorems for MCMC based estimators and subsequent inferences. The\nconditions for geometric convergence are similar to those guaranteeing\nposterior propriety. We also provide conditions for posterior propriety when\nthe design matrices take commonly observed forms. In general, the Haar\nparameter expansion for DA (PX- DA) algorithm is an improvement of the DA\nalgorithm and it has been shown that it is theoretically at least as good as\nthe DA algorithm. Here we construct a Haar PX-DA algorithm, which has\nessentially the same computational cost as the two-block Gibbs sampler.\n",
"title": "Convergence analysis of the block Gibbs sampler for Bayesian probit linear mixed models with improper priors"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
17779
| null |
Validated
| null | null |
null |
{
"abstract": " We generalise a multiple string pattern matching algorithm, recently proposed\nby Fredriksson and Grabowski [J. Discr. Alg. 7, 2009], to deal with arbitrary\ndictionaries on an alphabet of size $s$. If $r_m$ is the number of words of\nlength $m$ in the dictionary, and $\\phi(r) = \\max_m \\ln(s\\, m\\, r_m)/m$, the\ncomplexity rate for the string characters to be read by this algorithm is at\nmost $\\kappa_{{}_\\textrm{UB}}\\, \\phi(r)$ for some constant\n$\\kappa_{{}_\\textrm{UB}}$. On the other side, we generalise the classical lower\nbound of Yao [SIAM J. Comput. 8, 1979], for the problem with a single pattern,\nto deal with arbitrary dictionaries, and determine it to be at least\n$\\kappa_{{}_\\textrm{LB}}\\, \\phi(r)$. This proves the optimality of the\nalgorithm, improving and correcting previous claims.\n",
"title": "The complexity of the Multiple Pattern Matching Problem for random strings"
}
| null | null |
[
"Computer Science"
] | null | true | null |
17780
| null |
Validated
| null | null |
null |
{
"abstract": " The correspondence between definable connected groupoids in a theory $T$ and\ninternal generalised imaginary sorts of $T$, established by Hrushovski in\n[\"Groupoids, imaginaries and internal covers,\" Turkish Journal of Mathematics,\n2012], is here extended in two ways: First, it is shown that the correspondence\nis in fact an equivalence of categories, with respect to appropriate notions of\nmorphism. Secondly, the equivalence of categories is shown to vary uniformly in\ndefinable families, with respect to an appropriate relativisation of these\ncategories. Some elaboration on Hrushovki's original constructions are also\nincluded.\n",
"title": "Functoriality and uniformity in Hrushovski's groupoid-cover correspondence"
}
| null | null | null | null | true | null |
17781
| null |
Default
| null | null |
null |
{
"abstract": " A family $\\{Q_{\\beta}\\}_{\\beta \\geq 0}$ of Markov chains is said to exhibit\n$\\textit{metastable mixing}$ with $\\textit{modes}$\n$S_{\\beta}^{(1)},\\ldots,S_{\\beta}^{(k)}$ if its spectral gap (or some other\nmixing property) is very close to the worst conductance\n$\\min(\\Phi_{\\beta}(S_{\\beta}^{(1)}), \\ldots, \\Phi_{\\beta}(S_{\\beta}^{(k)}))$ of\nits modes. We give simple sufficient conditions for a family of Markov chains\nto exhibit metastability in this sense, and verify that these conditions hold\nfor a prototypical Metropolis-Hastings chain targeting a mixture distribution.\nOur work differs from existing work on metastability in that, for the class of\nexamples we are interested in, it gives an asymptotically exact formula for the\nspectral gap (rather than a bound that can be very far from sharp) while at the\nsame time giving technical conditions that are easier to verify for many\nstatistical examples. Our bounds from this paper are used in a companion paper\nto compare the mixing times of the Hamiltonian Monte Carlo algorithm and a\nrandom walk algorithm for multimodal target distributions.\n",
"title": "Simple Conditions for Metastability of Continuous Markov Chains"
}
| null | null | null | null | true | null |
17782
| null |
Default
| null | null |
null |
{
"abstract": " We present a scheme to deterministically prepare non-classical quantum states\nof a massive mirror including highly non-Gaussian states exhibiting sizeable\nnegativity of the Wigner function. This is achieved by exploiting the\nnon-linear light-matter interaction in an optomechanical cavity by driving the\nsystem with optimally designed frequency patterns. Our scheme reveals to be\nresilient against mechanical and optical damping, as well as mechanical thermal\nnoise and imperfections in the driving scheme. Our proposal thus opens a\npromising route for table-top experiments to explore and exploit macroscopic\nquantum phenomena.\n",
"title": "Deterministic preparation of highly non-classical macroscopic quantum states"
}
| null | null |
[
"Physics"
] | null | true | null |
17783
| null |
Validated
| null | null |
null |
{
"abstract": " Bayesian optimization is a sample-efficient method for finding a global\noptimum of an expensive-to-evaluate black-box function. A global solution is\nfound by accumulating a pair of query point and corresponding function value,\nrepeating these two procedures: (i) learning a surrogate model for the\nobjective function using the data observed so far; (ii) the maximization of an\nacquisition function to determine where next to query the objective function.\nConvergence guarantees are only valid when the global optimizer of the\nacquisition function is found and selected as the next query point. In\npractice, however, local optimizers of acquisition functions are also used,\nsince searching the exact optimizer of the acquisition function is often a\nnon-trivial or time-consuming task. In this paper we present an analysis on the\nbehavior of local optimizers of acquisition functions, in terms of\ninstantaneous regrets over global optimizers. We also present the performance\nanalysis when multi-started local optimizers are used to find the maximum of\nthe acquisition function. Numerical experiments confirm the validity of our\ntheoretical analysis.\n",
"title": "On Local Optimizers of Acquisition Functions in Bayesian Optimization"
}
| null | null | null | null | true | null |
17784
| null |
Default
| null | null |
null |
{
"abstract": " In recent years, the attack which leverages register information (e.g.\naccounts and passwords) leaked from 3rd party applications to try other\napplications is popular and serious. We call this attack \"database collision\".\nTraditionally, people have to keep dozens of accounts and passwords for\ndifferent applications to prevent this attack. In this paper, we propose a\nnovel encryption scheme for hiding users' register information and preventing\nthis attack. Specifically, we first hash the register information using\nexisting safe hash function. Then the hash string is hidden, instead a\ncoefficient vector is stored for verification. Coefficient vectors of the same\nregister information are generated randomly for different applications. Hence,\nthe original information is hardly cracked by dictionary based attack or\ndatabase collision in practice. Using our encryption scheme, each user only\nneeds to keep one password for dozens of applications.\n",
"title": "One Password: An Encryption Scheme for Hiding Users' Register Information"
}
| null | null | null | null | true | null |
17785
| null |
Default
| null | null |
null |
{
"abstract": " We report the discovery of a system of two super-Earths orbiting the\nmoderately active K-dwarf HD 176986. This work is part of the RoPES RV program\nof G- and K-type stars, which combines radial velocities (RVs) from the HARPS\nand HARPS-N spectrographs to search for short-period terrestrial planets. HD\n176986 b and c are super-Earth planets with masses of 5.74 and 9.18\nM$_{\\oplus}$, orbital periods of 6.49 and 16.82 days, and distances of 0.063\nand 0.119 AU in orbits that are consistent with circular. The host star is a\nK2.5 dwarf, and despite its modest level of chromospheric activity (log(R'hk) =\n- 4.90 +- 0.04), it shows a complex activity pattern. Along with the discovery\nof the planets, we study the magnetic cycle and rotation of the star. HD 176986\nproves to be suitable for testing the available RV analysis technique and\nfurther our understanding of stellar activity.\n",
"title": "The RoPES project with HARPS and HARPS-N. I. A system of super-Earths orbiting the moderately active K-dwarf HD 176986"
}
| null | null |
[
"Physics"
] | null | true | null |
17786
| null |
Validated
| null | null |
null |
{
"abstract": " A new electron beam-optical procedure is proposed for quasi-cw pumping of\nhigh-pressure large-volume He-Ar laser on 4p[1/2]1 - 4s[3/2]2 argon atom\ntransition at the wavelength of 912.5 nm. It consists of creation and\nmaintenance of a necessary density of 4s[3/2]2 metastable state in the gain\nmedium by a fast electron beam and subsequent optically pumping of the upper\nlaser level via the classical three-level scheme using a laser diode.\nAbsorption probing is used to study collisional quenching of Ar* metastable in\nelectron-beam-excited high-pressure He-Ar mixtures with a low content of argon.\nThe rate constants for plasma-chemical reactions Ar*+He+Ar-Ar2*+He (3.6 +-\n0.4)x10-33 cm6/s, Ar+2He-HeAr*+He (4.4 +- 0.9)x10-36 cm6/s and\nAr*+He-Products+He (2.4 +- 0.3)x10-15 cm3/s are for the first time measured.\n",
"title": "On the possibility of developing quasi-cw high-power high-pressure laser on 4p-4s transition of ArI with electron beam - optical pumping: quenching of 4s (3P2) lower laser level"
}
| null | null | null | null | true | null |
17787
| null |
Default
| null | null |
null |
{
"abstract": " Due to its excellent shock-capturing capability and high resolution, the WENO\nscheme family has been widely used in varieties of compressive flow simulation.\nHowever, for problems containing strong shocks and contact discontinuities,\nsuch as the Lax shock tube problem, the WENO scheme still produces numerical\noscillations. To avoid such numerical oscillations, the characteristic-wise\nconstruction method should be applied. Compared to component-wise\nreconstruction, characteristic-wise reconstruction leads to much more\ncomputational cost and thus is not suite for large scale simulation such as\ndirect numeric simulation of turbulence. In this paper, an adaptive\ncharacteristic-wise reconstruction WENO scheme, i.e. the AdaWENO scheme, is\nproposed to improve the computational efficiency of the characteristic-wise\nreconstruction method. The new scheme performs characteristic-wise\nreconstruction near discontinuities while switching to component-wise\nreconstruction for smooth regions. Meanwhile, a new calculation strategy for\nthe WENO smoothness indicators is implemented to reduce over-all computational\ncost. Several one dimensional and two dimensional numerical tests are performed\nto validate and evaluate the AdaWENO scheme. Numerical results show that\nAdaWENO maintains essentially non-oscillatory flow field near discontinuities\nas the characteristic-wise reconstruction method. Besieds, compared to the\ncomponent-wise reconstruction, AdaWENO is about 40\\% faster which indicates its\nexcellent efficiency.\n",
"title": "An Adaptive Characteristic-wise Reconstruction WENOZ scheme for Gas Dynamic Euler Equations"
}
| null | null | null | null | true | null |
17788
| null |
Default
| null | null |
null |
{
"abstract": " This paper presents an educational code written using FEniCS, based on the\nlevel set method, to perform compliance minimization in structural\noptimization. We use the concept of distributed shape derivative to compute a\ndescent direction for the compliance, which is defined as a shape functional.\nThe use of the distributed shape derivative is facilitated by FEniCS, which\nallows to handle complicated partial differential equations with a simple\nimplementation. The code is written for compliance minimization in the\nframework of linearized elasticity, and can be easily adapted to tackle other\nfunctionals and partial differential equations. We also provide an extension of\nthe code for compliant mechanisms. We start by explaining how to compute shape\nderivatives, and discuss the differences between the distributed and boundary\nexpressions of the shape derivative. Then we describe the implementation in\ndetails, and show the application of this code to some classical benchmarks of\ntopology optimization. The code is available at\nthis http URL, and the main file is also given in\nthe appendix.\n",
"title": "A level set-based structural optimization code using FEniCS"
}
| null | null | null | null | true | null |
17789
| null |
Default
| null | null |
null |
{
"abstract": " This document outlines the approach to supporting cross-node transactions\nover a Redis cluster.\n",
"title": "Transaction Support over Redis: An Overview"
}
| null | null |
[
"Computer Science"
] | null | true | null |
17790
| null |
Validated
| null | null |
null |
{
"abstract": " How does the small-scale topological structure of an airline network behave\nas the network evolves? To address this question, we study the dynamic and\nspatial properties of small undirected subgraphs using 15 years of data on\nSouthwest Airlines' domestic route service. We find that this real-world\nnetwork has much in common with random graphs, and describe a possible\npower-law scaling between subgraph counts and the number of edges in the\nnetwork, that appears to be quite robust to changes in network density and\nsize. We use analytic formulae to identify statistically over- and\nunder-represented subgraphs, known as motifs and anti-motifs, and discover the\nexistence of substantial topology transitions. We propose a simple\nsubgraph-based node ranking measure, that is not always highly correlated with\nstandard node centrality, and can identify important nodes relative to specific\ntopologies, and investigate the spatial \"distribution\" of the triangle subgraph\nusing graphical tools. Our results have implications for the way in which\nsubgraphs can be used to analyze real-world networks.\n",
"title": "Subgraphs and motifs in a dynamic airline network"
}
| null | null |
[
"Computer Science"
] | null | true | null |
17791
| null |
Validated
| null | null |
null |
{
"abstract": " The Principle of the Glitch states that for any device which makes a discrete\ndecision based upon a continuous range of possible inputs, there are inputs for\nwhich it will take arbitrarily long to reach a decision. The appropriate\nmathematical setting for studying this principle is described. This involves\ndefining the concept of continuity for mappings on sets of functions. It can\nthen be shown that the glitch principle follows from the continuous behavior of\nthe device.\n",
"title": "On the Glitch Phenomenon"
}
| null | null |
[
"Computer Science",
"Mathematics"
] | null | true | null |
17792
| null |
Validated
| null | null |
null |
{
"abstract": " In this article we perform an asymptotic analysis of Bayesian parallel\ndensity estimators which are based on logspline density estimation. The\nparallel estimator we introduce is in the spirit of a kernel density estimator\nintroduced in recent studies. We provide a numerical procedure that produces\nthe density estimator itself in place of the sampling algorithm. We then derive\nan error bound for the mean integrated squared error for the full data\nposterior density estimator. We also investigate the parameters that arise from\nlogspline density estimation and the numerical approximation procedure. Our\ninvestigation identifies specific choices of parameters for logspline density\nestimation that result in the error bound scaling appropriately in relation to\nthese choices.\n",
"title": "Asymptotic properties and approximation of Bayesian logspline density estimators for communication-free parallel computing methods"
}
| null | null | null | null | true | null |
17793
| null |
Default
| null | null |
null |
{
"abstract": " Recent interest in topological semimetals has lead to the proposal of many\nnew topological phases that can be realized in real materials. Next to Dirac\nand Weyl systems, these include more exotic phases based on manifold band\ndegeneracies in the bulk electronic structure. The exotic states in topological\nsemimetals are usually protected by some sort of crystal symmetry and the\nintroduction of magnetic order can influence these states by breaking time\nreversal symmetry. Here we show that we can realize a rich variety of different\ntopological semimetal states in a single material, $\\rm CeSbTe$. This compound\ncan exhibit different types of magnetic order that can be accessed easily by\napplying a small field. It allows, therefore, for tuning the electronic\nstructure and can drive it through a manifold of topologically distinct phases,\nsuch as the first nonsymmorphic magnetic topological material with an\neight-fold band crossing at a high symmetry point. Our experimental results are\nbacked by a full magnetic group theory analysis and ab initio calculations.\nThis discovery introduces a realistic and promising platform for studying the\ninterplay of magnetism and topology.\n",
"title": "Tunable Weyl and Dirac states in the nonsymmorphic compound $\\rm\\mathbf{CeSbTe}$"
}
| null | null | null | null | true | null |
17794
| null |
Default
| null | null |
null |
{
"abstract": " Recent research has shown the usefulness of using collective user interaction\ndata (e.g., query logs) to recommend query modification suggestions for\nIntranet search. However, most of the query suggestion approaches for Intranet\nsearch follow an \"one size fits all\" strategy, whereby different users who\nsubmit an identical query would get the same query suggestion list. This is\nproblematic, as even with the same query, different users may have different\ntopics of interest, which may change over time in response to the user's\ninteraction with the system. We address the problem by proposing a personalised\nquery suggestion framework for Intranet search. For each search session, we\nconstruct two temporal user profiles: a click user profile using the user's\nclicked documents and a query user profile using the user's submitted queries.\nWe then use the two profiles to re-rank the non-personalised query suggestion\nlist returned by a state-of-the-art query suggestion method for Intranet\nsearch. Experimental results on a large-scale query logs collection show that\nour personalised framework significantly improves the quality of suggested\nqueries.\n",
"title": "Personalised Query Suggestion for Intranet Search with Temporal User Profiling"
}
| null | null | null | null | true | null |
17795
| null |
Default
| null | null |
null |
{
"abstract": " Random geometric graphs in hyperbolic spaces explain many common structural\nand dynamical properties of real networks, yet they fail to predict the correct\nvalues of the exponents of power-law degree distributions observed in real\nnetworks. In that respect, random geometric graphs in asymptotically de Sitter\nspacetimes, such as the Lorentzian spacetime of our accelerating universe, are\nmore attractive as their predictions are more consistent with observations in\nreal networks. Yet another important property of hyperbolic graphs is their\nnavigability, and it remains unclear if de Sitter graphs are as navigable as\nhyperbolic ones. Here we study the navigability of random geometric graphs in\nthree Lorentzian manifolds corresponding to universes filled only with dark\nenergy (de Sitter spacetime), only with matter, and with a mixture of dark\nenergy and matter as in our universe. We find that these graphs are navigable\nonly in the manifolds with dark energy. This result implies that, in terms of\nnavigability, random geometric graphs in asymptotically de Sitter spacetimes\nare as good as random hyperbolic graphs. It also establishes a connection\nbetween the presence of dark energy and navigability of the discretized causal\nstructure of spacetime, which provides a basis for a different approach to the\ndark energy problem in cosmology.\n",
"title": "Navigability of Random Geometric Graphs in the Universe and Other Spacetimes"
}
| null | null |
[
"Computer Science",
"Physics"
] | null | true | null |
17796
| null |
Validated
| null | null |
null |
{
"abstract": " We study the efficient learnability of geometric concept classes -\nspecifically, low-degree polynomial threshold functions (PTFs) and\nintersections of halfspaces - when a fraction of the data is adversarially\ncorrupted. We give the first polynomial-time PAC learning algorithms for these\nconcept classes with dimension-independent error guarantees in the presence of\nnasty noise under the Gaussian distribution. In the nasty noise model, an\nomniscient adversary can arbitrarily corrupt a small fraction of both the\nunlabeled data points and their labels. This model generalizes well-studied\nnoise models, including the malicious noise model and the agnostic (adversarial\nlabel noise) model. Prior to our work, the only concept class for which\nefficient malicious learning algorithms were known was the class of\norigin-centered halfspaces.\nSpecifically, our robust learning algorithm for low-degree PTFs succeeds\nunder a number of tame distributions -- including the Gaussian distribution\nand, more generally, any log-concave distribution with (approximately) known\nlow-degree moments. For LTFs under the Gaussian distribution, we give a\npolynomial-time algorithm that achieves error $O(\\epsilon)$, where $\\epsilon$\nis the noise rate. At the core of our PAC learning results is an efficient\nalgorithm to approximate the low-degree Chow-parameters of any bounded function\nin the presence of nasty noise. To achieve this, we employ an iterative\nspectral method for outlier detection and removal, inspired by recent work in\nrobust unsupervised learning. Our aforementioned algorithm succeeds for a range\nof distributions satisfying mild concentration bounds and moment assumptions.\nThe correctness of our robust learning algorithm for intersections of\nhalfspaces makes essential use of a novel robust inverse independence lemma\nthat may be of broader interest.\n",
"title": "Learning Geometric Concepts with Nasty Noise"
}
| null | null | null | null | true | null |
17797
| null |
Default
| null | null |
null |
{
"abstract": " Editorial board members, who are considered the gatekeepers of scientific\njournals, play an important role in academia, and may directly or indirectly\naffect the scientific output of a university. In this article, we used the\nquantile regression method among a sample of 1,387 university in chemistry to\ncharacterize the correlation between the number of editorial board members and\nthe scientific output of their universities. Furthermore, we used time-series\ndata and the Granger causality test to explore the causal relationship between\nthe number of editorial board members and the number of articles of some top\nuniversities. Our results suggest that the number of editorial board members is\npositively and significantly related to the scientific output (as measured by\nthe number of articles, total number of citations, citations per paper, and h\nindex) of their universities. However, the Granger causality test results\nsuggest that the causal relationship between the number of editorial board\nmembers and the number of articles of some top universities is not obvious.\nCombining these findings with the results of qualitative interviews with\neditorial board members, we discuss the causal relationship between the number\nof editorial board members and the scientific output of their universities.\n",
"title": "The relationship between the number of editorial board members and the scientific output of universities in the chemistry field"
}
| null | null | null | null | true | null |
17798
| null |
Default
| null | null |
null |
{
"abstract": " This paper examines the Standard Model under the strong-electroweak gauge\ngroup $SU_S(3)\\times U_{EW}(2)$ subject to the condition $u_{EW}(2)\\not\\cong\nsu_I(2)\\oplus u_Y(1)$. Physically, the condition ensures that all electroweak\ngauge bosons interact with each other prior to symmetry breaking --- as one\nmight expect from $U(2)$ invariance. This represents a crucial shift in the\nnotion of physical gauge bosons: Unlike the Standard Model which posits a\nchange of Lie algebra basis induced by spontaneous symmetry breaking, here the\nbasis is unaltered and $A,\\,Z^0,\\,W^\\pm$ represent (modulo $U_{EW}(2)$ gauge\ntransformations) the physical bosons both \\emph{before} and after spontaneous\nsymmetry breaking.\nOur choice of $u_{EW}(2)$ basis requires some modification of the matter\nfield sector of the Standard Model. Careful attention to the product group\nstructure calls for strong-electroweak degrees of freedom in the\n$(\\mathbf{3},\\mathbf{2})$ and the $(\\mathbf{3},\\overline{\\mathbf{2}})$ of\n$SU_S(3)\\times U_{EW}(2)$ that possess integer electric charge just like\nleptons. These degrees of freedom play the role of quarks, and they lead to a\nmodified Lagrangian that nevertheless reproduces transition rates and cross\nsections equivalent to the Standard Model.\nThe close resemblance between quark and lepton electroweak doublets in this\npicture suggests a mechanism for a phase transition between quarks and leptons\nthat stems from the product structure of the gauge group. Our hypothesis is\nthat the strong and electroweak bosons see each other as a source of\ndecoherence. In effect, leptons get identified with the $SU_S(3)$-trace of\nquark representations. This mechanism allows for possible extensions of the\nStandard Model that don't require large inclusive multiplets of matter fields.\nAs an example, we propose and investigate a model that turns out to have some\npromising cosmological implications.\n",
"title": "A Non-standard Standard Model"
}
| null | null | null | null | true | null |
17799
| null |
Default
| null | null |
null |
{
"abstract": " We introduce flexible robust functional regression models, using various\nheavy-tailed processes, including a Student $t$-process. We propose efficient\nalgorithms in estimating parameters for the marginal mean inferences and in\npredicting conditional means as well interpolation and extrapolation for the\nsubject-specific inferences. We develop bootstrap prediction intervals for\nconditional mean curves. Numerical studies show that the proposed model\nprovides robust analysis against data contamination or distribution\nmisspecification, and the proposed prediction intervals maintain the nominal\nconfidence levels. A real data application is presented as an illustrative\nexample.\n",
"title": "Robust functional regression model for marginal mean and subject-specific inferences"
}
| null | null | null | null | true | null |
17800
| null |
Default
| null | null |