text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, lengths 1-5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " In earlier work, Helen Wong and the author discovered certain \"miraculous\ncancellations\" for the quantum trace map connecting the Kauffman bracket skein\nalgebra of a surface to its quantum Teichmueller space, occurring when the\nquantum parameter $q$ is a root of unity. The current paper is devoted to\ngiving a more representation theoretic interpretation of this phenomenon, in\nterms of the quantum group $U_q(sl_2)$ and its dual Hopf algebra $SL_2^q$.\n",
"title": "Miraculous cancellations for quantum $SL_2$"
}
| null | null | null | null | true | null |
17001
| null |
Default
| null | null |
null |
{
"abstract": " This note is a short summary of the workshop on \"Energy and time measurements\nwith high-granular silicon devices\" that took place on the 13/6/16 and the\n14/6/16 at DESY/Hamburg in the frame of the first AIDA-2020 Annual Meeting.\nThis note tries to put forward trends that could be spotted and to emphasise in\nparticular open issues that were addressed by the speakers.\n",
"title": "Energy and time measurements with high-granular silicon devices"
}
| null | null | null | null | true | null |
17002
| null |
Default
| null | null |
null |
{
"abstract": " Current state-of-the-art approaches for spatio-temporal action localization\nrely on detections at the frame level that are then linked or tracked across\ntime. In this paper, we leverage the temporal continuity of videos instead of\noperating at the frame level. We propose the ACtion Tubelet detector\n(ACT-detector) that takes as input a sequence of frames and outputs tubelets,\ni.e., sequences of bounding boxes with associated scores. The same way\nstate-of-the-art object detectors rely on anchor boxes, our ACT-detector is\nbased on anchor cuboids. We build upon the SSD framework. Convolutional\nfeatures are extracted for each frame, while scores and regressions are based\non the temporal stacking of these features, thus exploiting information from a\nsequence. Our experimental results show that leveraging sequences of frames\nsignificantly improves detection performance over using individual frames. The\ngain of our tubelet detector can be explained by both more accurate scores and\nmore precise localization. Our ACT-detector outperforms the state-of-the-art\nmethods for frame-mAP and video-mAP on the J-HMDB and UCF-101 datasets, in\nparticular at high overlap thresholds.\n",
"title": "Action Tubelet Detector for Spatio-Temporal Action Localization"
}
| null | null |
[
"Computer Science"
] | null | true | null |
17003
| null |
Validated
| null | null |
null |
{
"abstract": " Percolation based graph matching algorithms rely on the availability of seed\nvertex pairs as side information to efficiently match users across networks.\nAlthough such algorithms work well in practice, there are other types of side\ninformation available which are potentially useful to an attacker. In this\npaper, we consider the problem of matching two correlated graphs when an\nattacker has access to side information, either in the form of community labels\nor an imperfect initial matching. In the former case, we propose a naive graph\nmatching algorithm by introducing the community degree vectors which harness\nthe information from community labels in an efficient manner. Furthermore, we\nanalyze a variant of the basic percolation algorithm proposed in literature for\ngraphs with community structure. In the latter case, we propose a novel\npercolation algorithm with two thresholds which uses an imperfect matching as\ninput to match correlated graphs.\nWe evaluate the proposed algorithms on synthetic as well as real world\ndatasets using various experiments. The experimental results demonstrate the\nimportance of communities as side information especially when the number of\nseeds is small and the networks are weakly correlated.\n",
"title": "Significance of Side Information in the Graph Matching Problem"
}
| null | null | null | null | true | null |
17004
| null |
Default
| null | null |
null |
{
"abstract": " We establish the rate region of an extended Gray-Wyner system for 2-DMS\n$(X,Y)$ with two additional decoders having complementary causal side\ninformation. This extension is interesting because in addition to the\noperationally significant extreme points of the Gray-Wyner rate region, which\ninclude Wyner's common information, G{á}cs-K{ö}rner common information and\ninformation bottleneck, the rate region for the extended system also includes\nthe K{ö}rner graph entropy, the privacy funnel and excess functional\ninformation, as well as three new quantities of potential interest, as extreme\npoints. To simplify the investigation of the 5-dimensional rate region of the\nextended Gray-Wyner system, we establish an equivalence of this region to a\n3-dimensional mutual information region that consists of the set of all triples\nof the form $(I(X;U),\\,I(Y;U),\\,I(X,Y;U))$ for some $p_{U|X,Y}$. We further\nshow that projections of this mutual information region yield the rate regions\nfor many settings involving a 2-DMS, including lossless source coding with\ncausal side information, distributed channel synthesis, and lossless source\ncoding with a helper.\n",
"title": "Extended Gray-Wyner System with Complementary Causal Side Information"
}
| null | null | null | null | true | null |
17005
| null |
Default
| null | null |
null |
{
"abstract": " We introduce the problem of simultaneously learning all powers of a Poisson\nBinomial Distribution (PBD). A PBD of order $n$ is the distribution of a sum of\n$n$ mutually independent Bernoulli random variables $X_i$, where\n$\\mathbb{E}[X_i] = p_i$. The $k$'th power of this distribution, for $k$ in a\nrange $[m]$, is the distribution of $P_k = \\sum_{i=1}^n X_i^{(k)}$, where each\nBernoulli random variable $X_i^{(k)}$ has $\\mathbb{E}[X_i^{(k)}] = (p_i)^k$.\nThe learning algorithm can query any power $P_k$ several times and succeeds in\nlearning all powers in the range, if with probability at least $1- \\delta$:\ngiven any $k \\in [m]$, it returns a probability distribution $Q_k$ with total\nvariation distance from $P_k$ at most $\\epsilon$. We provide almost matching\nlower and upper bounds on query complexity for this problem. We first show a\nlower bound on the query complexity on PBD powers instances with many distinct\nparameters $p_i$ which are separated, and we almost match this lower bound by\nexamining the query complexity of simultaneously learning all the powers of a\nspecial class of PBD's resembling the PBD's of our lower bound. We study the\nfundamental setting of a Binomial distribution, and provide an optimal\nalgorithm which uses $O(1/\\epsilon^2)$ samples. Diakonikolas, Kane and Stewart\n[COLT'16] showed a lower bound of $\\Omega(2^{1/\\epsilon})$ samples to learn the\n$p_i$'s within error $\\epsilon$. The question whether sampling from powers of\nPBDs can reduce this sampling complexity, has a negative answer since we show\nthat the exponential number of samples is inevitable. Having sampling access to\nthe powers of a PBD we then give a nearly optimal algorithm that learns its\n$p_i$'s. To prove our two last lower bounds we extend the classical minimax\nrisk definition from statistics to estimating functions of sequences of\ndistributions.\n",
"title": "Learning Powers of Poisson Binomial Distributions"
}
| null | null |
[
"Computer Science",
"Mathematics",
"Statistics"
] | null | true | null |
17006
| null |
Validated
| null | null |
null |
{
"abstract": " There are many problems and configurations in Euclidean geometry that were\nnever extended to the framework of (normed or) finite dimensional real Banach\nspaces, although their original versions are inspiring for this type of\ngeneralization, and the analogous definitions for normed spaces represent a\npromising topic. An example is the geometry of simplices in non-Euclidean\nnormed spaces. We present new generalizations of well known properties of\nEuclidean simplices. These results refer to analogues of circumcenters, Euler\nlines, and Feuerbach spheres of simplices in normed spaces. Using duality, we\nalso get natural theorems on angular bisectors as well as in- and exspheres of\n(dual) simplices.\n",
"title": "Geometry of simplices in Minkowski spaces"
}
| null | null | null | null | true | null |
17007
| null |
Default
| null | null |
null |
{
"abstract": " In the use of deep neural networks, it is crucial to provide appropriate\ninput representations for the network to learn from. In this paper, we propose\nan approach to learn a representation that focus on rhythmic representation\nwhich is named as DLR (Deep Learning Rhythmic representation). The proposed\napproach aims to learn DLR from the raw audio signal and use it for other music\ninformatics tasks. A 1-dimensional convolutional network is utilised in the\nlearning of DLR. In the experiment, we present the results from the source task\nand the target task as well as visualisations of DLRs. The results reveals that\nDLR provides compact rhythmic information which can be used on multi-tagging\ntask.\n",
"title": "DLR : Toward a deep learned rhythmic representation for music content analysis"
}
| null | null | null | null | true | null |
17008
| null |
Default
| null | null |
null |
{
"abstract": " Tumor cells acquire different genetic alterations during the course of\nevolution in cancer patients. As a result of competition and selection, only a\nfew subgroups of cells with distinct genotypes survive. These subgroups of\ncells are often referred to as subclones. In recent years, many statistical and\ncomputational methods have been developed to identify tumor subclones, leading\nto biologically significant discoveries and shedding light on tumor\nprogression, metastasis, drug resistance and other processes. However, most\nexisting methods are either not able to infer the phylogenetic structure among\nsubclones, or not able to incorporate copy number variations (CNV). In this\narticle, we propose SIFA (tumor Subclone Identification by Feature Allocation),\na Bayesian model which takes into account both CNV and tumor phylogeny\nstructure to infer tumor subclones. We compare the performance of SIFA with two\nother commonly used methods using simulation studies with varying sequencing\ndepth, evolutionary tree size, and tree complexity. SIFA consistently yields\nbetter results in terms of Rand Index and cellularity estimation accuracy. The\nusefulness of SIFA is also demonstrated through its application to whole genome\nsequencing (WGS) samples from four patients in a breast cancer study.\n",
"title": "Phylogeny-based tumor subclone identification using a Bayesian feature allocation model"
}
| null | null | null | null | true | null |
17009
| null |
Default
| null | null |
null |
{
"abstract": " Predicting properties of nodes in a graph is an important problem with\napplications in a variety of domains. Graph-based Semi-Supervised Learning\n(SSL) methods aim to address this problem by labeling a small subset of the\nnodes as seeds and then utilizing the graph structure to predict label scores\nfor the rest of the nodes in the graph. Recently, Graph Convolutional Networks\n(GCNs) have achieved impressive performance on the graph-based SSL task. In\naddition to label scores, it is also desirable to have confidence scores\nassociated with them. Unfortunately, confidence estimation in the context of\nGCN has not been previously explored. We fill this important gap in this paper\nand propose ConfGCN, which estimates labels scores along with their confidences\njointly in GCN-based setting. ConfGCN uses these estimated confidences to\ndetermine the influence of one node on another during neighborhood aggregation,\nthereby acquiring anisotropic capabilities. Through extensive analysis and\nexperiments on standard benchmarks, we find that ConfGCN is able to outperform\nstate-of-the-art baselines. We have made ConfGCN's source code available to\nencourage reproducible research.\n",
"title": "Confidence-based Graph Convolutional Networks for Semi-Supervised Learning"
}
| null | null | null | null | true | null |
17010
| null |
Default
| null | null |
null |
{
"abstract": " This paper studies the daily connectivity time series of a wind\nspeed-monitoring network using multifractal detrended fluctuation analysis. It\ninvestigates the long-range fluctuation and multifractality in the residuals of\nthe connectivity time series. Our findings reveal that the daily connectivity\nof the correlation-based network is persistent for any correlation threshold.\nFurther, the multifractality degree is higher for larger absolute values of the\ncorrelation threshold\n",
"title": "Long-range fluctuations and multifractality in connectivity density time series of a wind speed monitoring network"
}
| null | null | null | null | true | null |
17011
| null |
Default
| null | null |
null |
{
"abstract": " What happens when a new social convention replaces an old one? While the\npossible forces favoring norm change - such as institutions or committed\nactivists - have been identified since a long time, little is known about how a\npopulation adopts a new convention, due to the difficulties of finding\nrepresentative data. Here we address this issue by looking at changes occurred\nto 2,541 orthographic and lexical norms in English and Spanish through the\nanalysis of a large corpora of books published between the years 1800 and 2008.\nWe detect three markedly distinct patterns in the data, depending on whether\nthe behavioral change results from the action of a formal institution, an\ninformal authority or a spontaneous process of unregulated evolution. We\npropose a simple evolutionary model able to capture all the observed behaviors\nand we show that it reproduces quantitatively the empirical data. This work\nidentifies general mechanisms of norm change and we anticipate that it will be\nof interest to researchers investigating the cultural evolution of language\nand, more broadly, human collective behavior.\n",
"title": "The Dynamics of Norm Change in the Cultural Evolution of Language"
}
| null | null | null | null | true | null |
17012
| null |
Default
| null | null |
null |
{
"abstract": " In this article, we propose a new class of priors for Bayesian inference with\nmultiple Gaussian graphical models. We introduce fully Bayesian treatments of\ntwo popular procedures, the group graphical lasso and the fused graphical\nlasso, and extend them to a continuous spike-and-slab framework to allow\nself-adaptive shrinkage and model selection simultaneously. We develop an EM\nalgorithm that performs fast and dynamic explorations of posterior modes. Our\napproach selects sparse models efficiently with substantially smaller bias than\nwould be induced by alternative regularization procedures. The performance of\nthe proposed methods are demonstrated through simulation and two real data\nexamples.\n",
"title": "Bayesian Joint Spike-and-Slab Graphical Lasso"
}
| null | null | null | null | true | null |
17013
| null |
Default
| null | null |
null |
{
"abstract": " The uniform boundary condition in a normed chain complex asks for a uniform\nlinear bound on fillings of null-homologous cycles. For the $\\ell^1$-norm on\nthe singular chain complex, Matsumoto and Morita established a characterisation\nof the uniform boundary condition in terms of bounded cohomology. In\nparticular, spaces with amenable fundamental group satisfy the uniform boundary\ncondition in every degree. We will give an alternative proof of statements of\nthis type, using geometric F{\\o}lner arguments on the chain level instead of\npassing to the dual cochain complex. These geometric methods have the advantage\nthat they also lead to integral refinements. In particular, we obtain\napplications in the context of integral foliated simplicial volume.\n",
"title": "Variations on the theme of the uniform boundary condition"
}
| null | null | null | null | true | null |
17014
| null |
Default
| null | null |
null |
{
"abstract": " In recent years there has been widespread concern in the scientific community\nover a reproducibility crisis. Among the major causes that have been identified\nis statistical: In many scientific research the statistical analysis (including\ndata preparation) suffers from a lack of transparency and methodological\nproblems, major obstructions to reproducibility. The revisit package aims\ntoward remedying this problem, by generating a \"software paper trail\" of the\nstatistical operations applied to a dataset. This record can be \"replayed\" for\nverification purposes, as well as be modified to enable alternative analyses.\nThe software also issues warnings of certain kinds of potential errors in\nstatistical methodology, again related to the reproducibility issue.\n",
"title": "revisit: a Workflow Tool for Data Science"
}
| null | null | null | null | true | null |
17015
| null |
Default
| null | null |
null |
{
"abstract": " We present a reinforcement learning framework, called Programmatically\nInterpretable Reinforcement Learning (PIRL), that is designed to generate\ninterpretable and verifiable agent policies. Unlike the popular Deep\nReinforcement Learning (DRL) paradigm, which represents policies by neural\nnetworks, PIRL represents policies using a high-level, domain-specific\nprogramming language. Such programmatic policies have the benefits of being\nmore easily interpreted than neural networks, and being amenable to\nverification by symbolic methods. We propose a new method, called Neurally\nDirected Program Search (NDPS), for solving the challenging nonsmooth\noptimization problem of finding a programmatic policy with maximal reward. NDPS\nworks by first learning a neural policy network using DRL, and then performing\na local search over programmatic policies that seeks to minimize a distance\nfrom this neural \"oracle\". We evaluate NDPS on the task of learning to drive a\nsimulated car in the TORCS car-racing environment. We demonstrate that NDPS is\nable to discover human-readable policies that pass some significant performance\nbars. We also show that PIRL policies can have smoother trajectories, and can\nbe more easily transferred to environments not encountered during training,\nthan corresponding policies discovered by DRL.\n",
"title": "Programmatically Interpretable Reinforcement Learning"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
17016
| null |
Validated
| null | null |
null |
{
"abstract": " Plasmas with varying collisionalities occur in many applications, such as\ntokamak edge regions, where the flows are characterized by significant\nvariations in density and temperature. While a kinetic model is necessary for\nweakly-collisional high-temperature plasmas, high collisionality in colder\nregions render the equations numerically stiff due to disparate time scales. In\nthis paper, we propose an implicit-explicit algorithm for such cases, where the\ncollisional term is integrated implicitly in time, while the advective term is\nintegrated explicitly in time, thus allowing time step sizes that are\ncomparable to the advective time scales. This partitioning results in a more\nefficient algorithm than those using explicit time integrators, where the time\nstep sizes are constrained by the stiff collisional time scales. We implement\nsemi-implicit additive Runge-Kutta methods in COGENT, a finite-volume\ngyrokinetic code for mapped, multiblock grids and test the accuracy,\nconvergence, and computational cost of these semi-implicit methods for test\ncases with highly-collisional plasmas.\n",
"title": "Kinetic Simulation of Collisional Magnetized Plasmas with Semi-Implicit Time Integration"
}
| null | null | null | null | true | null |
17017
| null |
Default
| null | null |
null |
{
"abstract": " We study VC-dimension of short formulas in Presburger Arithmetic, defined to\nhave a bounded number of variables, quantifiers and atoms. We give both lower\nand upper bounds, which are tight up to a polynomial factor in the bit length\nof the formula.\n",
"title": "VC-dimension of short Presburger formulas"
}
| null | null | null | null | true | null |
17018
| null |
Default
| null | null |
null |
{
"abstract": " Traffic accident data are usually noisy, contain missing values, and\nheterogeneous. How to select the most important variables to improve real-time\ntraffic accident risk prediction has become a concern of many recent studies.\nThis paper proposes a novel variable selection method based on the Frequent\nPattern tree (FP tree) algorithm. First, all the frequent patterns in the\ntraffic accident dataset are discovered. Then for each frequent pattern, a new\ncriterion, called the Relative Object Purity Ratio (ROPR) which we proposed, is\ncalculated. This ROPR is added to the importance score of the variables that\ndifferentiate one frequent pattern from the others. To test the proposed\nmethod, a dataset was compiled from the traffic accidents records detected by\nonly one detector on interstate highway I-64 in Virginia in 2005. This dataset\nwas then linked to other variables such as real-time traffic information and\nweather conditions. Both the proposed method based on the FP tree algorithm, as\nwell as the widely utilized, random forest method, were then used to identify\nthe important variables or the Virginia dataset. The results indicate that\nthere are some differences between the variables deemed important by the FP\ntree and those selected by the random forest method. Following this, two\nbaseline models (i.e. a nearest neighbor (k-NN) method and a Bayesian network)\nwere developed to predict accident risk based on the variables identified by\nboth the FP tree method and the random forest method. The results show that the\nmodels based on the variable selection using the FP tree performed better than\nthose based on the random forest method for several versions of the k-NN and\nBayesian network models.The best results were derived from a Bayesian network\nmodel using variables from FP tree. That model could predict 61.11% of\naccidents accurately while having a false alarm rate of 38.16%.\n",
"title": "Real-time Traffic Accident Risk Prediction based on Frequent Pattern Tree"
}
| null | null | null | null | true | null |
17019
| null |
Default
| null | null |
null |
{
"abstract": " Third-party library reuse has become common practice in contemporary software\ndevelopment, as it includes several benefits for developers. Library\ndependencies are constantly evolving, with newly added features and patches\nthat fix bugs in older versions. To take full advantage of third-party reuse,\ndevelopers should always keep up to date with the latest versions of their\nlibrary dependencies. In this paper, we investigate the extent of which\ndevelopers update their library dependencies. Specifically, we conducted an\nempirical study on library migration that covers over 4,600 GitHub software\nprojects and 2,700 library dependencies. Results show that although many of\nthese systems rely heavily on dependencies, 81.5% of the studied systems still\nkeep their outdated dependencies. In the case of updating a vulnerable\ndependency, the study reveals that affected developers are not likely to\nrespond to a security advisory. Surveying these developers, we find that 69% of\nthe interviewees claim that they were unaware of their vulnerable dependencies.\nFurthermore, developers are not likely to prioritize library updates, citing it\nas extra effort and added responsibility. This study concludes that even though\nthird-party reuse is commonplace, the practice of updating a dependency is not\nas common for many developers.\n",
"title": "Do Developers Update Their Library Dependencies? An Empirical Study on the Impact of Security Advisories on Library Migration"
}
| null | null | null | null | true | null |
17020
| null |
Default
| null | null |
null |
{
"abstract": " Bacteria are easily characterizable model organisms with an impressively\ncomplicated set of capabilities. Among their capabilities is quorum sensing, a\ndetailed cell-cell signaling system that may have a common origin with\neukaryotic cell-cell signaling. Not only are the two phenomena similar, but\nquorum sensing, as is the case with any bacterial phenomenon when compared to\neukaryotes, is also easier to study in depth than eukaryotic cell-cell\nsignaling. This ease of study is a contrast to the only partially understood\ncellular dynamics of neurons. Here we review the literature on the strikingly\nneuron-like qualities of bacterial colonies and biofilms, including ion-based\nand hormonal signaling, and action potential-like behavior. This allows them to\nfeasibly act as an analog for neurons that could produce more detailed and more\naccurate biologically-based computational models. Using bacteria as the basis\nfor biologically feasible computational models may allow models to better\nharness the tremendous ability of biological organisms to make decisions and\nprocess information. Additionally, principles gleaned from bacterial function\nhave the potential to influence computational efforts divorced from biology,\njust as neuronal function has in the abstract influenced countless machine\nlearning efforts.\n",
"title": "Is Smaller Better: A Proposal To Consider Bacteria For Biologically Inspired Modeling"
}
| null | null | null | null | true | null |
17021
| null |
Default
| null | null |
null |
{
"abstract": " Data augmentation is an essential part of the training process applied to\ndeep learning models. The motivation is that a robust training process for deep\nlearning models depends on large annotated datasets, which are expensive to be\nacquired, stored and processed. Therefore a reasonable alternative is to be\nable to automatically generate new annotated training samples using a process\nknown as data augmentation. The dominant data augmentation approach in the\nfield assumes that new training samples can be obtained via random geometric or\nappearance transformations applied to annotated training samples, but this is a\nstrong assumption because it is unclear if this is a reliable generative model\nfor producing new training samples. In this paper, we provide a novel Bayesian\nformulation to data augmentation, where new annotated training points are\ntreated as missing variables and generated based on the distribution learned\nfrom the training set. For learning, we introduce a theoretically sound\nalgorithm --- generalised Monte Carlo expectation maximisation, and demonstrate\none possible implementation via an extension of the Generative Adversarial\nNetwork (GAN). Classification results on MNIST, CIFAR-10 and CIFAR-100 show the\nbetter performance of our proposed method compared to the current dominant data\naugmentation approach mentioned above --- the results also show that our\napproach produces better classification results than similar GAN models.\n",
"title": "A Bayesian Data Augmentation Approach for Learning Deep Models"
}
| null | null | null | null | true | null |
17022
| null |
Default
| null | null |
null |
{
"abstract": " Knowledge bases are employed in a variety of applications from natural\nlanguage processing to semantic web search; alas, in practice their usefulness\nis hurt by their incompleteness. Embedding models attain state-of-the-art\naccuracy in knowledge base completion, but their predictions are notoriously\nhard to interpret. In this paper, we adapt \"pedagogical approaches\" (from the\nliterature on neural networks) so as to interpret embedding models by\nextracting weighted Horn rules from them. We show how pedagogical approaches\nhave to be adapted to take upon the large-scale relational aspects of knowledge\nbases and show experimentally their strengths and weaknesses.\n",
"title": "Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach"
}
| null | null | null | null | true | null |
17023
| null |
Default
| null | null |
null |
{
"abstract": " Machine scheduling problems are a long-time key domain of algorithms and\ncomplexity research. A novel approach to machine scheduling problems are\nfixed-parameter algorithms. To stimulate this thriving research direction, we\npropose 15 open questions in this area whose resolution we expect to lead to\nthe discovery of new approaches and techniques both in scheduling and\nparameterized complexity theory.\n",
"title": "Parameterized complexity of machine scheduling: 15 open problems"
}
| null | null | null | null | true | null |
17024
| null |
Default
| null | null |
null |
{
"abstract": " The conditional mutual information I(X;Y|Z) measures the average information\nthat X and Y contain about each other given Z. This is an important primitive\nin many learning problems including conditional independence testing, graphical\nmodel inference, causal strength estimation and time-series problems. In\nseveral applications, it is desirable to have a functional purely of the\nconditional distribution p_{Y|X,Z} rather than of the joint distribution\np_{X,Y,Z}. We define the potential conditional mutual information as the\nconditional mutual information calculated with a modified joint distribution\np_{Y|X,Z} q_{X,Z}, where q_{X,Z} is a potential distribution, fixed airport. We\ndevelop K nearest neighbor based estimators for this functional, employing\nimportance sampling, and a coupling trick, and prove the finite k consistency\nof such an estimator. We demonstrate that the estimator has excellent practical\nperformance and show an application in dynamical system inference.\n",
"title": "Potential Conditional Mutual Information: Estimators, Properties and Applications"
}
| null | null | null | null | true | null |
17025
| null |
Default
| null | null |
null |
{
"abstract": " An interesting attempt for solving infrared divergence problems via the\ntheory of generalized wave operators was made by P. Kulish and L. Faddeev. Our\nmethod of using the ideas from the theory of generalized wave operators is\nessentially different. We assume that the unperturbed operator $A_0$ is known\nand that the scattering operator $S$ and the unperturbed operator $A_0$ are\npermutable. (In the Kulish-Faddeev theory this basic property is not\nfulfilled.) The permutability of $S$ and $A_0$ gives us an important\ninformation about the structure of the scattering operator. We show that the\ndivergences appeared because the deviations of the initial and final waves from\nthe free waves were not taken into account. The approach is demonstrated on\nimportant examples.\n",
"title": "A new approach to divergences in quantum electrodynamics, concrete examples"
}
| null | null | null | null | true | null |
17026
| null |
Default
| null | null |
null |
{
"abstract": " We consider the spectral structure of indefinite second order boundary-value\nproblems on graphs. A variational formulation for such boundary-value problems\non graphs is given and we obtain both full and half-range completeness results.\nThis leads to a max-min principle and as a consequence we can formulate an\nanalogue of Dirichlet-Neumann bracketing and this in turn gives rise to\nasymptotic approximations for the eigenvalues.\n",
"title": "Indefinite boundary value problems on graphs"
}
| null | null | null | null | true | null |
17027
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we study the integral curvatures of Finsler manifolds. Some\nBishop-Gromov relative volume comparisons and several Myers type theorems are\nobtained. We also establish a Gromov type precompactness theorem and a\nYamaguchi type finiteness theorem. Furthermore, the isoperimetric and Sobolev\nconstants of a closed Finsler manifold are estimated by integral curvature\nbounds.\n",
"title": "Integral curvatures of Finsler manifolds and applications"
}
| null | null | null | null | true | null |
17028
| null |
Default
| null | null |
null |
{
"abstract": " For convex co-compact subgroups of SL2(Z) we consider the \"congruence\nsubgroups\" for p prime. We prove a factorization formula for the Selberg zeta\nfunction in term of L-functions related to irreducible representations of the\nGalois group SL2(Fp) of the covering, together with a priori bounds and\nanalytic continuation. We use this factorization property combined with an\naveraging technique over representations to prove a new existence result of\nnon-trivial resonances in an effective low frequency strip.\n",
"title": "L-functions and sharp resonances of infinite index congruence subgroups of $SL_2(\\mathbb{Z})$"
}
| null | null | null | null | true | null |
17029
| null |
Default
| null | null |
null |
{
"abstract": " The massive parallel approach of neuromorphic circuits leads to effective\nmethods for solving complex problems. It has turned out that resistive\nswitching devices with a continuous resistance range are potential candidates\nfor such applications. These devices are memristive systems - nonlinear\nresistors with memory. They are fabricated in nanotechnology and hence\nparameter spread during fabrication may aggravate reproducible analyses. This\nissue makes simulation models of memristive devices worthwhile.\nKinetic Monte-Carlo simulations based on a distributed model of the device\ncan be used to understand the underlying physical and chemical phenomena.\nHowever, such simulations are very time-consuming and neither convenient for\ninvestigations of whole circuits nor for real-time applications, e.g. emulation\npurposes. Instead, a concentrated model of the device can be used for both fast\nsimulations and real-time applications, respectively. We introduce an enhanced\nelectrical model of a valence change mechanism (VCM) based double barrier\nmemristive device (DBMD) with a continuous resistance range. This device\nconsists of an ultra-thin memristive layer sandwiched between a tunnel barrier\nand a Schottky-contact. The introduced model leads to very fast simulations by\nusing usual circuit simulation tools while maintaining physically meaningful\nparameters.\nKinetic Monte-Carlo simulations based on a distributed model and experimental\ndata have been utilized as references to verify the concentrated model.\n",
"title": "An Enhanced Lumped Element Electrical Model of a Double Barrier Memristive Device"
}
| null | null | null | null | true | null |
17030
| null |
Default
| null | null |
null |
{
"abstract": " We first study the discrete Schrödinger equations with analytic potentials\ngiven by a class of transformations. It is shown that if the coupling number is\nlarge, then its logarithm equals approximately to the Lyapunov exponents. When\nthe transformation becomes the skew-shift, we prove that the Lyapunov exponent\nis week Hölder continuous, and the spectrum satisfies Anderson Localization\nand contains large intervals. Moreover, all of these conclusions are\nnon-perturbative.\n",
"title": "Non-perturbative positive Lyapunov exponent of Schrödinger equations and its applications to skew-shift"
}
| null | null | null | null | true | null |
17031
| null |
Default
| null | null |
null |
{
"abstract": " Greedy optimization methods such as Matching Pursuit (MP) and Frank-Wolfe\n(FW) algorithms regained popularity in recent years due to their simplicity,\neffectiveness and theoretical guarantees. MP and FW address optimization over\nthe linear span and the convex hull of a set of atoms, respectively. In this\npaper, we consider the intermediate case of optimization over the convex cone,\nparametrized as the conic hull of a generic atom set, leading to the first\nprincipled definitions of non-negative MP algorithms for which we give explicit\nconvergence rates and demonstrate excellent empirical performance. In\nparticular, we derive sublinear ($\\mathcal{O}(1/t)$) convergence on general\nsmooth and convex objectives, and linear convergence ($\\mathcal{O}(e^{-t})$) on\nstrongly convex objectives, in both cases for general sets of atoms.\nFurthermore, we establish a clear correspondence of our algorithms to known\nalgorithms from the MP and FW literature. Our novel algorithms and analyses\ntarget general atom sets and general objective functions, and hence are\ndirectly applicable to a large variety of learning settings.\n",
"title": "Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees"
}
| null | null | null | null | true | null |
17032
| null |
Default
| null | null |
null |
{
"abstract": " We analyze the relation between the emission radii of twin kilohertz\nquasi-periodic oscillations (kHz QPOs) and the co-rotation radii of the 12\nneutron star low mass X-ray binaries (NS-LMXBs) which are simultaneously\ndetected with the twin kHz QPOs and NS spins. We find that the average\nco-rotation radius of these sources is r_co about 32 km, and all the emission\npositions of twin kHz QPOs lie inside the corotation radii, indicating that the\ntwin kHz QPOs are formed in the spin-up process. It is noticed that the upper\nfrequency of twin kHz QPOs is higher than NS spin frequency by > 10%, which may\naccount for a critical velocity difference between the Keplerian motion of\naccretion matter and NS spin that is corresponding to the production of twin\nkHz QPOs. In addition, we also find that about 83% of twin kHz QPOs cluster\naround the radius range of 15-20 km, which may be affected by the hard surface\nor the local strong magnetic field of NS. As a special case, SAX J1808.4-3658\nshows the larger emission radii of twin kHz QPOs of r about 21-24 km, which may\nbe due to its low accretion rate or small measured NS mass (< 1.4 solar mass).\n",
"title": "Probing the accretion disc structure by the twin kHz QPOs and spins of neutron stars in LMXBs"
}
| null | null | null | null | true | null |
17033
| null |
Default
| null | null |
null |
{
"abstract": " This article offers a personal perspective on the current state of academic\npublishing, and posits that the scientific community is beset with journals\nthat contribute little valuable knowledge, overload the community's capacity\nfor high-quality peer review, and waste resources. Open access publishing can\noffer solutions that benefit researchers and other information users, as well\nas institutions and funders, but commercial journal publishers have influenced\nopen access policies and practices in ways that favor their economic interests\nover those of other stakeholders in knowledge creation and sharing. One way to\nfree research from constraints on access is the diamond route of open access\npublishing, in which institutions and funders that produce new knowledge\nreclaim responsibility for publication via institutional journals or other open\nplatforms. I argue that research journals (especially those published for\nprofit) may no longer be fit for purpose, and hope that readers will consider\nwhether the time has come to put responsibility for publishing back into the\nhands of researchers and their institutions. The potential advantages and\nchallenges involved in a shift away from for-profit journals in favor of\ninstitutional open access publishing are explored.\n",
"title": "Can scientists and their institutions become their own open access publishers?"
}
| null | null |
[
"Computer Science"
] | null | true | null |
17034
| null |
Validated
| null | null |
null |
{
"abstract": " If $E$ is an elliptic curve over $\\mathbb{Q}$, then it follows from work of\nSerre and Hooley that, under the assumption of the Generalized Riemann\nHypothesis, the density of primes $p$ such that the group of\n$\\mathbb{F}_p$-rational points of the reduced curve $\\tilde{E}(\\mathbb{F}_p)$\nis cyclic can be written as an infinite product $\\prod \\delta_\\ell$ of local\nfactors $\\delta_\\ell$ reflecting the degree of the $\\ell$-torsion fields,\nmultiplied by a factor that corrects for the entanglements between the various\ntorsion fields. We show that this correction factor can be interpreted as a\ncharacter sum, and the resulting description allows us to easily determine\nnon-vanishing criteria for it. We apply this method in a variety of other\nsettings. Among these, we consider the aforementioned problem with the\nadditional condition that the primes $p$ lie in a given arithmetic progression.\nWe also study the conjectural constants appearing in Koblitz's conjecture, a\nconjecture which relates to the density of primes $p$ for which the cardinality\nof the group of $\\mathbb{F}_p$-points of $E$ is prime.\n",
"title": "Character sums for elliptic curve densities"
}
| null | null | null | null | true | null |
17035
| null |
Default
| null | null |
null |
{
"abstract": " A unified fluid-structure interaction (FSI) formulation is presented for\nsolid, liquid and mixed membranes. Nonlinear finite elements (FE) and the\ngeneralized-alpha scheme are used for the spatial and temporal discretization.\nThe membrane discretization is based on curvilinear surface elements that can\ndescribe large deformations and rotations, and also provide a straightforward\ndescription for contact. The fluid is described by the incompressible\nNavier-Stokes equations, and its discretization is based on stabilized\nPetrov-Galerkin FE. The coupling between fluid and structure uses a conforming\nsharp interface discretization, and the resulting non-linear FE equations are\nsolved monolithically within the Newton-Raphson scheme. An arbitrary\nLagrangian-Eulerian formulation is used for the fluid in order to account for\nthe mesh motion around the structure. The formulation is very general and\nadmits diverse applications that include contact at free surfaces. This is\ndemonstrated by two analytical and three numerical examples exhibiting strong\ncoupling between fluid and structure. The examples include balloon inflation,\ndroplet rolling and flapping flags. They span a Reynolds-number range from\n0.001 to 2000. One of the examples considers the extension to rotation-free\nshells using isogeometric FE.\n",
"title": "A monolithic fluid-structure interaction formulation for solid and liquid membranes including free-surface contact"
}
| null | null | null | null | true | null |
17036
| null |
Default
| null | null |
null |
{
"abstract": " The transverse momentum ($p_T$) spectra from heavy-ion collisions at\nintermediate momenta are described by non-extensive statistical models.\nAssuming a fixed relative variance of the temperature fluctuating event by\nevent or alternatively a fixed mean multiplicity in a negative binomial\ndistribution (NBD), two different linear relations emerge between the\ntemperature, $T$, and the Tsallis parameter $q-1$. Our results qualitatively\nagree with that of G.~Wilk. Furthermore we revisit the \"Soft+Hard\" model,\nproposed recently by G.~G.~Barnaföldi \\textit{et.al.}, by a $T$-independent\naverage $p_T^2$ assumption. Finally we compare results with those predicted by\nanother deformed distribution, using Kaniadakis' $\\kappa$ parametrization.\n",
"title": "Different Non-extensive Models for heavy-ion collisions"
}
| null | null |
[
"Physics"
] | null | true | null |
17037
| null |
Validated
| null | null |
null |
{
"abstract": " Toxicity prediction of chemical compounds is a grand challenge. Lately, it\nachieved significant progress in accuracy but using a huge set of features,\nimplementing a complex blackbox technique such as a deep neural network, and\nexploiting enormous computational resources. In this paper, we strongly argue\nfor the models and methods that are simple in machine learning characteristics,\nefficient in computing resource usage, and powerful to achieve very high\naccuracy levels. To demonstrate this, we develop a single task-based chemical\ntoxicity prediction framework using only 2D features that are less compute\nintensive. We effectively use a decision tree to obtain an optimum number of\nfeatures from a collection of thousands of them. We use a shallow neural\nnetwork and jointly optimize it with decision tree taking both network\nparameters and input features into account. Our model needs only a minute on a\nsingle CPU for its training while existing methods using deep neural networks\nneed about 10 min on NVidia Tesla K40 GPU. However, we obtain similar or better\nperformance on several toxicity benchmark tasks. We also develop a cumulative\nfeature ranking method which enables us to identify features that can help\nchemists perform prescreening of toxic compounds effectively.\n",
"title": "Efficient Toxicity Prediction via Simple Features Using Shallow Neural Networks and Decision Trees"
}
| null | null | null | null | true | null |
17038
| null |
Default
| null | null |
null |
{
"abstract": " We introduce a general scheme that permits to generate successive min-max\nproblems for producing critical points of higher and higher indices to\nPalais-Smale Functionals in Banach manifolds equipped with Finsler structures.\nWe call the resulting tree of minmax problems a minmax hierarchy. Using the\nviscosity approach to the minmax theory of minimal surfaces introduced by the\nauthor in a series of recent works, we explain how this scheme can be deformed\nfor producing smooth minimal surfaces of strictly increasing area in arbitrary\ncodimension. We implement this scheme to the case of the $3-$dimensional\nsphere. In particular we are giving a min-max characterization of the Clifford\nTorus and conjecture what are the next minimal surfaces to come in the $S^3$\nhierarchy. Among other results we prove here the lower semi continuity of the\nMorse Index in the viscosity method below an area level.\n",
"title": "Minmax Hierarchies and Minimal Surfaces in Manifolds"
}
| null | null |
[
"Mathematics"
] | null | true | null |
17039
| null |
Validated
| null | null |
null |
{
"abstract": " Multinomial choice models are fundamental for empirical modeling of economic\nchoices among discrete alternatives. We analyze identification of binary and\nmultinomial choice models when the choice utilities are nonseparable in\nobserved attributes and multidimensional unobserved heterogeneity with\ncross-section and panel data. We show that derivatives of choice probabilities\nwith respect to continuous attributes are weighted averages of utility\nderivatives in cross-section models with exogenous heterogeneity. In the\nspecial case of random coefficient models with an independent additive effect,\nwe further characterize that the probability derivative at zero is proportional\nto the population mean of the coefficients. We extend the identification\nresults to models with endogenous heterogeneity using either a control function\nor panel data. In time stationary panel models with two periods, we find that\ndifferences over time of derivatives of choice probabilities identify utility\nderivatives \"on the diagonal,\" i.e. when the observed attributes take the same\nvalues in the two periods. We also show that time stationarity does not\nidentify structural derivatives \"off the diagonal\" both in continuous and\nmultinomial choice panel models.\n",
"title": "Nonseparable Multinomial Choice Models in Cross-Section and Panel Data"
}
| null | null |
[
"Statistics"
] | null | true | null |
17040
| null |
Validated
| null | null |
null |
{
"abstract": " We study the limit shape of successive coronas of a tiling, which models the\ngrowth of crystals. We define basic terminologies and discuss the existence and\nuniqueness of corona limits, and then prove that corona limits are completely\ncharacterized by directional speeds. As an application, we give another proof\nthat the corona limit of a periodic tiling is a centrally symmetric convex\npolyhedron (see [Zhuravlev 2001], [Maleev-Shutov 2011]).\n",
"title": "Corona limits of tilings : Periodic case"
}
| null | null | null | null | true | null |
17041
| null |
Default
| null | null |
null |
{
"abstract": " Properties of galaxies like their absolute magnitude and their stellar mass\ncontent are correlated. These correlations are tighter for close pairs of\ngalaxies, which is called galactic conformity. In hierarchical structure\nformation scenarios, galaxies form within dark matter halos. To explain the\namplitude and the spatial range of galactic conformity two--halo terms or\nassembly bias become important. With the scale dependent correlation\ncoefficients the amplitude and the spatial range of conformity are determined\nfrom galaxy and halo samples. The scale dependent correlation coefficients are\nintroduced as a new descriptive statistic to quantify the correlations between\nproperties of galaxies or halos, depending on the distances to other galaxies\nor halos. These scale dependent correlation coefficients can be applied to the\ngalaxy distribution directly. Neither a splitting of the sample into\nsubsamples, nor an a priori clustering is needed. This new descriptive\nstatistic is applied to galaxy catalogues derived from the Sloan Digital Sky\nSurvey III and to halo catalogues from the MultiDark simulations. In the galaxy\nsample the correlations between absolute Magnitude, velocity dispersion,\nellipticity, and stellar mass content are investigated. The correlations of\nmass, spin, and ellipticity are explored in the halo samples. Both for galaxies\nand halos a scale dependent conformity is confirmed. Moreover the scale\ndependent correlation coefficients reveal a signal of conformity out to 40Mpc\nand beyond. The halo and galaxy samples show a differing amplitude and range of\nconformity.\n",
"title": "The Spatial Range of Conformity"
}
| null | null | null | null | true | null |
17042
| null |
Default
| null | null |
null |
{
"abstract": " Stochastic Gradient Langevin Dynamics (SGLD) is a popular variant of\nStochastic Gradient Descent, where properly scaled isotropic Gaussian noise is\nadded to an unbiased estimate of the gradient at each iteration. This modest\nchange allows SGLD to escape local minima and suffices to guarantee asymptotic\nconvergence to global minimizers for sufficiently regular non-convex objectives\n(Gelfand and Mitter, 1991). The present work provides a nonasymptotic analysis\nin the context of non-convex learning problems, giving finite-time guarantees\nfor SGLD to find approximate minimizers of both empirical and population risks.\nAs in the asymptotic setting, our analysis relates the discrete-time SGLD\nMarkov chain to a continuous-time diffusion process. A new tool that drives the\nresults is the use of weighted transportation cost inequalities to quantify the\nrate of convergence of SGLD to a stationary distribution in the Euclidean\n$2$-Wasserstein distance.\n",
"title": "Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis"
}
| null | null | null | null | true | null |
17043
| null |
Default
| null | null |
null |
{
"abstract": " More than 23 million people are suffered by Heart failure worldwide. Despite\nthe modern transplant operation is well established, the lack of heart\ndonations becomes a big restriction on transplantation frequency. With respect\nto this matter, ventricular assist devices (VADs) can play an important role in\nsupporting patients during waiting period and after the surgery. Moreover, it\nhas been shown that VADs by means of blood pump have advantages for working\nunder different conditions. While a lot of work has been done on modeling the\nfunctionality of the blood pump, but quantifying uncertainties in a numerical\nmodel is a challenging task. We consider the Polynomial Chaos (PC) method,\nwhich is introduced by Wiener for modeling stochastic process with Gaussian\ndistribution. The Galerkin projection, the intrusive version of the generalized\nPolynomial Chaos (gPC), has been densely studied and applied for various\nproblems. The intrusive Galerkin approach could represent stochastic process\ndirectly at once with Polynomial Chaos series expansions, it would therefore\noptimize the total computing effort comparing with classical non-intrusive\nmethods. We compared different preconditioning techniques for a steady state\nsimulation of a blood pump configuration in our previous work, the comparison\nshows that an inexact multilevel preconditioner has a promising performance. In\nthis work, we show an instationary blood flow through a FDA blood pump\nconfiguration with Galerkin Projection method, which is implemented in our open\nsource Finite Element library Hiflow3. Three uncertainty sources are\nconsidered: inflow boundary condition, rotor angular speed and dynamic\nviscosity, the numerical results are demonstrated with more than 30 Million\ndegrees of freedom by using supercomputer.\n",
"title": "Multilevel preconditioner of Polynomial Chaos Method for quantifying uncertainties in a blood pump"
}
| null | null | null | null | true | null |
17044
| null |
Default
| null | null |
null |
{
"abstract": " The combination of strong correlation and emergent lattice can be achieved\nwhen quantum gases are confined in a superradiant Fabry-Perot cavity. In\naddition to the discoveries of exotic phases, such as density wave ordered Mott\ninsulator and superfluid, a surprising kink structure is found in the slope of\nthe cavity strength as a function of the pumping strength. In this Letter, we\nshow that the appearance of such a kink is a manifestation of a liquid-gas like\ntransition between two superfluids with different densities. The slopes in the\nimmediate neighborhood of the kink become divergent at the liquid-gas critical\npoints and display a critical scaling law with a critical exponent 1 in the\nquantum critical region. Our predictions could be tested in current\nexperimental set-up.\n",
"title": "Superradiant Mott Transition"
}
| null | null | null | null | true | null |
17045
| null |
Default
| null | null |
null |
{
"abstract": " A practical, biologically motivated case of protein complexes (immunoglobulin\nG and FcRII receptors) moving on the surface of mastcells, that are common\nparts of an immunological system, is investigated. Proteins are considered as\nnanomachines creating a nanonetwork. Accurate molecular models of the proteins\nand the fluorophores which act as their nanoantennas are used to simulate the\ncommunication between the nanomachines when they are close to each other. The\ntheory of diffusion-based Brownian motion is applied to model movements of the\nproteins. It is assumed that fluorophore molecules send and receive signals\nusing the Forster Resonance Energy Transfer. The probability of the efficient\nsignal transfer and the respective bit error rate are calculated and discussed.\n",
"title": "Communication via FRET in Nanonetworks of Mobile Proteins"
}
| null | null | null | null | true | null |
17046
| null |
Default
| null | null |
null |
{
"abstract": " Multivariate generalized Pareto distributions arise as the limit\ndistributions of exceedances over multivariate thresholds of random vectors in\nthe domain of attraction of a max-stable distribution. These distributions can\nbe parametrized and represented in a number of different ways. Moreover,\ngeneralized Pareto distributions enjoy a number of interesting stability\nproperties. An overview of the main features of such distributions are given,\nexpressed compactly in several parametrizations, giving the potential user of\nthese distributions a convenient catalogue of ways to handle and work with\ngeneralized Pareto distributions.\n",
"title": "Multivariate generalized Pareto distributions: parametrizations, representations, and properties"
}
| null | null | null | null | true | null |
17047
| null |
Default
| null | null |
null |
{
"abstract": " In the Alvarez-Macovski method, the line integrals of the x-ray basis set\ncoefficients are computed from measurements with multiple spectra. An important\nquestion is whether the transformation from measurements to line integrals is\ninvertible. This paper presents a proof that for a system with two spectra and\na photon counting detector, pileup does not affect the invertibility of the\nsystem. If the system is invertible with no pileup, it will remain invertible\nwith pileup although the reduced Jacobian may lead to increased noise.\n",
"title": "Invertibility of spectral x-ray data with pileup--two dimension-two spectrum case"
}
| null | null | null | null | true | null |
17048
| null |
Default
| null | null |
null |
{
"abstract": " Let $G$ be an adjoint quasi-simple group defined and split over a\nnon-archimedean local field $K$. We prove that the dual of the Steinberg\nrepresentation of $G$ is isomorphic to a certain space of harmonic cochains on\nthe Bruhat-Tits building of $G$. The Steinberg representation is considered\nwith coefficients in any commutative ring.\n",
"title": "Steinberg representations and harmonic cochains for split adjoint quasi-simple groups"
}
| null | null | null | null | true | null |
17049
| null |
Default
| null | null |
null |
{
"abstract": " The b-boundary is a mathematical tool used to attach a topological boundary\nto incomplete Lorentzian manifolds using a Riemaniann metric called the Schmidt\nmetric on the frame bundle. In this paper, we give the general form of the\nSchmidt metric in the case of Lorentzian surfaces. Furthermore, we write the\nRicci scalar of the Schmidt metric in terms of the Ricci scalar of the\nLorentzian manifold and give some examples. Finally, we discuss some\napplications to general relativity.\n",
"title": "Lorentzian surfaces and the curvature of the Schmidt metric"
}
| null | null | null | null | true | null |
17050
| null |
Default
| null | null |
null |
{
"abstract": " Lattice Quantum Chromodynamics (Lattice QCD) is a quantum field theory on a\nfinite discretized space-time box so as to numerically compute the dynamics of\nquarks and gluons to explore the nature of subatomic world. Solving the\nequation of motion of quarks (quark solver) is the most compute-intensive part\nof the lattice QCD simulations and is one of the legacy HPC applications. We\nhave developed a mixed-precision quark solver for a large Intel Xeon Phi (KNL)\nsystem named \"Oakforest-PACS\", employing the $O(a)$-improved Wilson quarks as\nthe discretized equation of motion. The nested-BiCGSTab algorithm for the\nsolver was implemented and optimized using mixed-precision,\ncommunication-computation overlapping with MPI-offloading, SIMD vectorization,\nand thread stealing techniques. The solver achieved 2.6 PFLOPS in the\nsingle-precision part on a $400^3\\times 800$ lattice using 16000 MPI processes\non 8000 nodes on the system.\n",
"title": "Mixed Precision Solver Scalable to 16000 MPI Processes for Lattice Quantum Chromodynamics Simulations on the Oakforest-PACS System"
}
| null | null |
[
"Physics"
] | null | true | null |
17051
| null |
Validated
| null | null |
null |
{
"abstract": " A new MHD solver, based on the Nektar++ spectral/hp element framework, is\npresented in this paper. The velocity and electric potential quasi-static MHD\nmodel is used. The Hartmann flow in plane channel and its stability, the\nHartmann flow in rectangular duct, and the stability of Hunt's flow are\nexplored as examples. Exponential convergence is achieved and the resulting\nnumerical values were found to have an accuracy up to $10^{-12}$ for the state\nflows compared to an exact solution, and $10^{-5}$ for the stability\neigenvalues compared to independent numerical results.\n",
"title": "A spectral/hp element MHD solver"
}
| null | null | null | null | true | null |
17052
| null |
Default
| null | null |
null |
{
"abstract": " We describe the results of a qualitative study on journalists' information\nseeking behavior on social media. Based on interviews with eleven journalists\nalong with a study of a set of university level journalism modules, we\ndetermined the categories of information need types that lead journalists to\nsocial media. We also determined the ways that social media is exploited as a\ntool to satisfy information needs and to define influential factors, which\nimpacted on journalists' information seeking behavior. We find that not only is\nsocial media used as an information source, but it can also be a supplier of\nstories found serendipitously. We find seven information need types that expand\nthe types found in previous work. We also find five categories of influential\nfactors that affect the way journalists seek information.\n",
"title": "Journalists' information needs, seeking behavior, and its determinants on social media"
}
| null | null | null | null | true | null |
17053
| null |
Default
| null | null |
null |
{
"abstract": " The problem of low rank matrix completion is considered in this paper. To\nexploit the underlying low-rank structure of the data matrix, we propose a\nhierarchical Gaussian prior model, where columns of the low-rank matrix are\nassumed to follow a Gaussian distribution with zero mean and a common precision\nmatrix, and a Wishart distribution is specified as a hyperprior over the\nprecision matrix. We show that such a hierarchical Gaussian prior has the\npotential to encourage a low-rank solution. Based on the proposed hierarchical\nprior model, a variational Bayesian method is developed for matrix completion,\nwhere the generalized approximate massage passing (GAMP) technique is embedded\ninto the variational Bayesian inference in order to circumvent cumbersome\nmatrix inverse operations. Simulation results show that our proposed method\ndemonstrates superiority over existing state-of-the-art matrix completion\nmethods.\n",
"title": "Fast Low-Rank Bayesian Matrix Completion with Hierarchical Gaussian Prior Models"
}
| null | null | null | null | true | null |
17054
| null |
Default
| null | null |
null |
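For readers who want a concrete baseline to compare against the variational Bayesian GAMP method summarized in record 17054, the sketch below shows plain alternating least squares for low-rank completion. It is explicitly not the paper's algorithm (it fixes the rank and uses a simple ridge penalty rather than a learned hierarchical Gaussian prior); `rank`, `reg`, and `n_iter` are illustrative assumptions.

```python
import numpy as np

def als_matrix_completion(M, mask, rank=5, reg=0.1, n_iter=50):
    """Plain alternating-least-squares baseline for low-rank completion of the
    observed entries mask*M (illustrative only; NOT the paper's variational
    Bayesian / GAMP scheme, which also learns the rank and precision matrix)."""
    m, n = M.shape
    U = np.random.randn(m, rank) * 0.01
    V = np.random.randn(n, rank) * 0.01
    I = reg * np.eye(rank)
    for _ in range(n_iter):
        for i in range(m):                      # update each row factor
            obs = mask[i] > 0
            if obs.any():
                Vo = V[obs]
                U[i] = np.linalg.solve(Vo.T @ Vo + I, Vo.T @ M[i, obs])
        for j in range(n):                      # update each column factor
            obs = mask[:, j] > 0
            if obs.any():
                Uo = U[obs]
                V[j] = np.linalg.solve(Uo.T @ Uo + I, Uo.T @ M[obs, j])
    return U @ V.T                              # completed matrix estimate
```

Each inner update is a small ridge regression over only the observed entries of a row or column, which is why no full matrix inversion of the data matrix is ever needed.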
{
"abstract": " We report magnetic and calorimetric measurements down to T = 1 mK on the\ncanonical heavy-electron metal YbRh2Si2. The data reveal the development of\nnuclear antiferromagnetic order slightly above 2 mK. The latter weakens the\nprimary electronic antiferromagnetism, thereby paving the way for\nheavy-electron superconductivity below Tc = 2 mK. Our results demonstrate that\nsuperconductivity driven by quantum criticality is a general phenomenon.\n",
"title": "Emergence of superconductivity in the canonical heavy-electron metal YbRh2Si2"
}
| null | null | null | null | true | null |
17055
| null |
Default
| null | null |
null |
{
"abstract": " We consider the following control problem on fair allocation of indivisible\ngoods. Given a set $I$ of items and a set of agents, each having strict linear\npreference over the items, we ask for a minimum subset of the items whose\ndeletion guarantees the existence of a proportional allocation in the remaining\ninstance; we call this problem Proportionality by Item Deletion (PID). Our main\nresult is a polynomial-time algorithm that solves PID for three agents. By\ncontrast, we prove that PID is computationally intractable when the number of\nagents is unbounded, even if the number $k$ of item deletions allowed is small,\nsince the problem turns out to be W[3]-hard with respect to the parameter $k$.\nAdditionally, we provide some tight lower and upper bounds on the complexity of\nPID when regarded as a function of $|I|$ and $k$.\n",
"title": "Obtaining a Proportional Allocation by Deleting Items"
}
| null | null | null | null | true | null |
17056
| null |
Default
| null | null |
null |
{
"abstract": " We use CNNs to build a system that both classifies images of faces based on a\nvariety of different facial attributes and generates new faces given a set of\ndesired facial characteristics. After introducing the problem and providing\ncontext in the first section, we discuss recent work related to image\ngeneration in Section 2. In Section 3, we describe the methods used to\nfine-tune our CNN and generate new images using a novel approach inspired by a\nGaussian mixture model. In Section 4, we discuss our working dataset and\ndescribe our preprocessing steps and handling of facial attributes. Finally, in\nSections 5, 6 and 7, we explain our experiments and results and conclude in the\nfollowing section. Our classification system has 82\\% test accuracy.\nFurthermore, our generation pipeline successfully creates well-formed faces.\n",
"title": "DeepFace: Face Generation using Deep Learning"
}
| null | null |
[
"Computer Science"
] | null | true | null |
17057
| null |
Validated
| null | null |
null |
{
"abstract": " This paper presents a method to generate high quality triangular or\nquadrilateral meshes that uses direction fields and a frontal point insertion\nstrategy. Two types of direction fields are considered: asterisk fields and\ncross fields. With asterisk fields we generate high quality triangulations,\nwhile with cross fields we generate right-angled triangulations that are\noptimal for transformation to quadrilateral meshes. The input of our algorithm\nis an initial triangular mesh and a direction field calculated on it. The goal\nis to compute the vertices of the final mesh by an advancing front strategy\nalong the direction field. We present an algorithm that enables to efficiently\ngenerate the points using solely information from the base mesh. A\nmulti-threaded implementation of our algorithm is presented, allowing us to\nachieve significant speedup of the point generation. Regarding the\nquadrangulation process, we develop a quality criterion for right-angled\ntriangles with respect to the local cross field and an optimization process\nbased on it. Thus we are able to further improve the quality of the output\nquadrilaterals. The algorithm is demonstrated on the sphere and examples of\nhigh quality triangular and quadrilateral meshes of coastal domains are\npresented.\n",
"title": "High quality mesh generation using cross and asterisk fields: Application on coastal domains"
}
| null | null | null | null | true | null |
17058
| null |
Default
| null | null |
null |
{
"abstract": " Engine for Likelihood-Free Inference (ELFI) is a Python software library for\nperforming likelihood-free inference (LFI). ELFI provides a convenient syntax\nfor arranging components in LFI, such as priors, simulators, summaries or\ndistances, to a network called ELFI graph. The components can be implemented in\na wide variety of languages. The stand-alone ELFI graph can be used with any of\nthe available inference methods without modifications. A central method\nimplemented in ELFI is Bayesian Optimization for Likelihood-Free Inference\n(BOLFI), which has recently been shown to accelerate likelihood-free inference\nup to several orders of magnitude by surrogate-modelling the distance. ELFI\nalso has an inbuilt support for output data storing for reuse and analysis, and\nsupports parallelization of computation from multiple cores up to a cluster\nenvironment. ELFI is designed to be extensible and provides interfaces for\nwidening its functionality. This makes the adding of new inference methods to\nELFI straightforward and automatically compatible with the inbuilt features.\n",
"title": "ELFI: Engine for Likelihood-Free Inference"
}
| null | null | null | null | true | null |
17059
| null |
Default
| null | null |
null |
{
"abstract": " Deep neural networks are vulnerable to adversarial examples, which poses\nsecurity concerns on these algorithms due to the potentially severe\nconsequences. Adversarial attacks serve as an important surrogate to evaluate\nthe robustness of deep learning models before they are deployed. However, most\nof existing adversarial attacks can only fool a black-box model with a low\nsuccess rate. To address this issue, we propose a broad class of momentum-based\niterative algorithms to boost adversarial attacks. By integrating the momentum\nterm into the iterative process for attacks, our methods can stabilize update\ndirections and escape from poor local maxima during the iterations, resulting\nin more transferable adversarial examples. To further improve the success rates\nfor black-box attacks, we apply momentum iterative algorithms to an ensemble of\nmodels, and show that the adversarially trained models with a strong defense\nability are also vulnerable to our black-box attacks. We hope that the proposed\nmethods will serve as a benchmark for evaluating the robustness of various deep\nmodels and defense methods. With this method, we won the first places in NIPS\n2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack\ncompetitions.\n",
"title": "Boosting Adversarial Attacks with Momentum"
}
| null | null | null | null | true | null |
17060
| null |
Default
| null | null |
null |
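The momentum-based iterative attack summarized in record 17060 admits a compact sketch. The following is a minimal NumPy illustration of the momentum update, not the authors' released code; the loss-gradient callback `grad_fn`, the step budget `eps`, the number of `steps`, and the decay factor `mu` are all assumed placeholders.

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=0.03, steps=10, mu=1.0):
    """Momentum iterative FGSM sketch (illustrative): accumulate an
    L1-normalized loss gradient into a velocity term, then take signed steps
    inside an L-infinity ball of radius eps around the input x, which is
    assumed to lie in [0, 1]."""
    alpha = eps / steps                 # per-step size so the total budget is eps
    x_adv = x.astype(float).copy()
    g = np.zeros_like(x_adv)            # momentum accumulator
    for _ in range(steps):
        grad = grad_fn(x_adv)           # user-supplied dLoss/dx at the current point
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum update
        x_adv = x_adv + alpha * np.sign(g)                # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)          # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # keep a valid pixel range
    return x_adv
```

The design choice highlighted by the abstract is the normalization of each raw gradient before it is added to the velocity, which stabilizes the update direction across iterations.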
{
"abstract": " The most critical time for information to spread is in the aftermath of a\nserious emergency, crisis, or disaster. Individuals affected by such situations\ncan now turn to an array of communication channels, from mobile phone calls and\ntext messages to social media posts, when alerting social ties. These channels\ndrastically improve the speed of information in a time-sensitive event, and\nprovide extant records of human dynamics during and afterward the event.\nRetrospective analysis of such anomalous events provides researchers with a\nclass of \"found experiments\" that may be used to better understand social\nspreading. In this chapter, we study information spreading due to a number of\nemergency events, including the Boston Marathon Bombing and a plane crash at a\nwestern European airport. We also contrast the different information which may\nbe gleaned by social media data compared with mobile phone data and we estimate\nthe rate of anomalous events in a mobile phone dataset using a proposed anomaly\ndetection method.\n",
"title": "Information spreading during emergencies and anomalous events"
}
| null | null | null | null | true | null |
17061
| null |
Default
| null | null |
null |
{
"abstract": " This paper discusses the potential of applying deep learning techniques for\nplant classification and its usage for citizen science in large-scale\nbiodiversity monitoring. We show that plant classification using near\nstate-of-the-art convolutional network architectures like ResNet50 achieves\nsignificant improvements in accuracy compared to the most widespread plant\nclassification application in test sets composed of thousands of different\nspecies labels. We find that the predictions can be confidently used as a\nbaseline classification in citizen science communities like iNaturalist (or its\nSpanish fork, Natusfera) which in turn can share their data with biodiversity\nportals like GBIF.\n",
"title": "Large-Scale Plant Classification with Deep Neural Networks"
}
| null | null | null | null | true | null |
17062
| null |
Default
| null | null |
null |
{
"abstract": " The General Video Game AI (GVGAI) competition and its associated software\nframework provides a way of benchmarking AI algorithms on a large number of\ngames written in a domain-specific description language. While the competition\nhas seen plenty of interest, it has so far focused on online planning,\nproviding a forward model that allows the use of algorithms such as Monte Carlo\nTree Search.\nIn this paper, we describe how we interface GVGAI to the OpenAI Gym\nenvironment, a widely used way of connecting agents to reinforcement learning\nproblems. Using this interface, we characterize how widely used implementations\nof several deep reinforcement learning algorithms fare on a number of GVGAI\ngames. We further analyze the results to provide a first indication of the\nrelative difficulty of these games relative to each other, and relative to\nthose in the Arcade Learning Environment under similar conditions.\n",
"title": "Deep Reinforcement Learning for General Video Game AI"
}
| null | null | null | null | true | null |
17063
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we consider pure infiniteness of generalized Cuntz-Krieger\nalgebras associated to labeled spaces $(E,\\mathcal{L},\\mathcal{E})$. It is\nshown that a $C^*$-algebra $C^*(E,\\mathcal{L},\\mathcal{E})$ is purely infinite\nin the sense that every nonzero hereditary subalgebra contains an infinite\nprojection (we call this property (IH)) if $(E, \\mathcal{L},\\mathcal{E})$ is\ndisagreeable and every vertex connects to a loop. We also prove that under the\ncondition analogous to (K) for usual graphs,\n$C^*(E,\\mathcal{L},\\mathcal{E})=C^*(p_A, s_a)$ is purely infinite in the sense\nof Kirchberg and R{\\o}rdam if and only if every generating projection $p_A$,\n$A\\in \\mathcal{E}$, is properly infinite, and also if and only if every\nquotient of $C^*(E,\\mathcal{L},\\mathcal{E})$ has the property (IH).\n",
"title": "Purely infinite labeled graph $C^*$-algebras"
}
| null | null | null | null | true | null |
17064
| null |
Default
| null | null |
null |
{
"abstract": " Convex sparsity-promoting regularizations are ubiquitous in modern\nstatistical learning. By construction, they yield solutions with few non-zero\ncoefficients, which correspond to saturated constraints in the dual\noptimization formulation. Working set (WS) strategies are generic optimization\ntechniques that consist in solving simpler problems that only consider a subset\nof constraints, whose indices form the WS. Working set methods therefore\ninvolve two nested iterations: the outer loop corresponds to the definition of\nthe WS and the inner loop calls a solver for the subproblems. For the Lasso\nestimator a WS is a set of features, while for a Group Lasso it refers to a set\nof groups. In practice, WS are generally small in this context so the\nassociated feature Gram matrix can fit in memory. Here we show that the\nGauss-Southwell rule (a greedy strategy for block coordinate descent\ntechniques) leads to fast solvers in this case. Combined with a working set\nstrategy based on an aggressive use of so-called Gap Safe screening rules, we\npropose a solver achieving state-of-the-art performance on sparse learning\nproblems. Results are presented on Lasso and multi-task Lasso estimators.\n",
"title": "From safe screening rules to working sets for faster Lasso-type solvers"
}
| null | null |
[
"Computer Science",
"Mathematics",
"Statistics"
] | null | true | null |
17065
| null |
Validated
| null | null |
null |
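Record 17065 combines working sets with the Gauss-Southwell rule. The toy sketch below shows only the greedy coordinate selection on a full Lasso problem, without screening rules or working sets; `lam` and `n_iter` are illustrative, and the update is the standard soft-thresholding step.

```python
import numpy as np

def lasso_gauss_southwell(X, y, lam, n_iter=200):
    """Coordinate descent for 0.5*||y - Xw||^2 + lam*||w||_1 where, at each
    step, the Gauss-Southwell rule picks the coordinate with the largest
    absolute gradient of the smooth part (a toy sketch of the greedy rule;
    assumes no all-zero columns in X)."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)         # per-feature squared norms
    residual = y - X @ w
    for _ in range(n_iter):
        grad = -X.T @ residual             # gradient of the smooth part
        j = int(np.argmax(np.abs(grad)))   # Gauss-Southwell: steepest coordinate
        z = w[j] - grad[j] / col_sq[j]     # unregularized coordinate minimizer
        w_new = np.sign(z) * max(abs(z) - lam / col_sq[j], 0.0)  # soft-threshold
        residual += X[:, j] * (w[j] - w_new)   # update residual incrementally
        w[j] = w_new
    return w
```

Keeping the residual up to date incrementally is what makes greedy coordinate selection cheap once the candidate set (the working set in the paper) is small.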
{
"abstract": " Exoplanets smaller than Neptune are numerous, but the nature of the planet\npopulations in the 1-4 Earth radii range remains a mystery. The complete Kepler\nsample of Q1-Q17 exoplanet candidates shows a radius gap at ~ 2 Earth radii, as\nreported by us in January 2017 in LPSC conference abstract #1576 (Zeng et al.\n2017). A careful analysis of Kepler host stars spectroscopy by the CKS survey\nallowed Fulton et al. (2017) in March 2017 to unambiguously show this radius\ngap. The cause of this gap is still under discussion (Ginzburg et al. 2017;\nLehmer & Catling 2017; Owen & Wu 2017). Here we add to our original analysis\nthe dependence of the radius gap on host star type.\n",
"title": "Exoplanet Radius Gap Dependence on Host Star Type"
}
| null | null |
[
"Physics"
] | null | true | null |
17066
| null |
Validated
| null | null |
null |
{
"abstract": " The surge in political information, discourse, and interaction has been one\nof the most important developments in social media over the past several years.\nThere is rich structure in the interaction among different viewpoints on the\nideological spectrum. However, we still have only a limited analytical\nvocabulary for expressing the ways in which these viewpoints interact.\nIn this paper, we develop network-based methods that operate on the ways in\nwhich users share content; we construct \\emph{invocation graphs} on Web domains\nshowing the extent to which pages from one domain are invoked by users to reply\nto posts containing pages from other domains. When we locate the domains on a\npolitical spectrum induced from the data, we obtain an embedded graph showing\nhow these interaction links span different distances on the spectrum. The\nstructure of this embedded network, and its evolution over time, helps us\nderive macro-level insights about how political interaction unfolded through\n2016, leading up to the US Presidential election. In particular, we find that\nthe domains invoked in replies spanned increasing distances on the spectrum\nover the months approaching the election, and that there was clear asymmetry\nbetween the left-to-right and right-to-left patterns of linkage.\n",
"title": "Mapping the Invocation Structure of Online Political Interaction"
}
| null | null |
[
"Computer Science"
] | null | true | null |
17067
| null |
Validated
| null | null |
null |
{
"abstract": " In open set recognition (OSR), almost all existing methods are designed\nspecially for recognizing individual instances, even these instances are\ncollectively coming in batch. Recognizers in decision either reject or\ncategorize them to some known class using empirically-set threshold. Thus the\nthreshold plays a key role, however, the selection for it usually depends on\nthe knowledge of known classes, inevitably incurring risks due to lacking\navailable information from unknown classes. On the other hand, a more realistic\nOSR system should NOT just rest on a reject decision but should go further,\nespecially for discovering the hidden unknown classes among the reject\ninstances, whereas existing OSR methods do not pay special attention. In this\npaper, we introduce a novel collective/batch decision strategy with an aim to\nextend existing OSR for new class discovery while considering correlations\namong the testing instances. Specifically, a collective decision-based OSR\nframework (CD-OSR) is proposed by slightly modifying the Hierarchical Dirichlet\nprocess (HDP). Thanks to the HDP, our CD-OSR does not need to define the\nspecific threshold and can automatically reserve space for unknown classes in\ntesting, naturally resulting in a new class discovery function. Finally,\nextensive experiments on benchmark datasets indicate the validity of CD-OSR.\n",
"title": "Collective decision for open set recognition"
}
| null | null | null | null | true | null |
17068
| null |
Default
| null | null |
null |
{
"abstract": " The projection factor p is the key quantity used in the Baade-Wesselink (BW)\nmethod for distance determination; it converts radial velocities into pulsation\nvelocities. Several methods are used to determine p, such as geometrical and\nhydrodynamical models or the inverse BW approach when the distance is known. We\nanalyze new HARPS-N spectra of delta Cep to measure its cycle-averaged\natmospheric velocity gradient in order to better constrain the projection\nfactor. We first apply the inverse BW method to derive p directly from\nobservations. The projection factor can be divided into three subconcepts: (1)\na geometrical effect (p0); (2) the velocity gradient within the atmosphere\n(fgrad); and (3) the relative motion of the optical pulsating photosphere with\nrespect to the corresponding mass elements (fo-g). We then measure the fgrad\nvalue of delta Cep for the first time. When the HARPS-N mean cross-correlated\nline-profiles are fitted with a Gaussian profile, the projection factor is\npcc-g = 1.239 +/- 0.034(stat) +/- 0.023(syst). When we consider the different\namplitudes of the radial velocity curves that are associated with 17 selected\nspectral lines, we measure projection factors ranging from 1.273 to 1.329. We\nfind a relation between fgrad and the line depth measured when the Cepheid is\nat minimum radius. This relation is consistent with that obtained from our best\nhydrodynamical model of delta Cep and with our projection factor decomposition.\nUsing the observational values of p and fgrad found for the 17 spectral lines,\nwe derive a semi-theoretical value of fo-g. We alternatively obtain fo-g =\n0.975+/-0.002 or 1.006+/-0.002 assuming models using radiative transfer in\nplane-parallel or spherically symmetric geometries, respectively. The new\nHARPS-N observations of delta Cep are consistent with our decomposition of the\nprojection factor.\n",
"title": "HARPS-N high spectral resolution observations of Cepheids I. The Baade-Wesselink projection factor of δ Cep revisited"
}
| null | null | null | null | true | null |
17069
| null |
Default
| null | null |
null |
{
"abstract": " The United States spends more than $1B each year on initiatives such as the\nAmerican Community Survey (ACS), a labor-intensive door-to-door study that\nmeasures statistics relating to race, gender, education, occupation,\nunemployment, and other demographic factors. Although a comprehensive source of\ndata, the lag between demographic changes and their appearance in the ACS can\nexceed half a decade. As digital imagery becomes ubiquitous and machine vision\ntechniques improve, automated data analysis may provide a cheaper and faster\nalternative. Here, we present a method that determines socioeconomic trends\nfrom 50 million images of street scenes, gathered in 200 American cities by\nGoogle Street View cars. Using deep learning-based computer vision techniques,\nwe determined the make, model, and year of all motor vehicles encountered in\nparticular neighborhoods. Data from this census of motor vehicles, which\nenumerated 22M automobiles in total (8% of all automobiles in the US), was used\nto accurately estimate income, race, education, and voting patterns, with\nsingle-precinct resolution. (The average US precinct contains approximately\n1000 people.) The resulting associations are surprisingly simple and powerful.\nFor instance, if the number of sedans encountered during a 15-minute drive\nthrough a city is higher than the number of pickup trucks, the city is likely\nto vote for a Democrat during the next Presidential election (88% chance);\notherwise, it is likely to vote Republican (82%). Our results suggest that\nautomated systems for monitoring demographic trends may effectively complement\nlabor-intensive approaches, with the potential to detect trends with fine\nspatial resolution, in close to real time.\n",
"title": "Using Deep Learning and Google Street View to Estimate the Demographic Makeup of the US"
}
| null | null | null | null | true | null |
17070
| null |
Default
| null | null |
null |
{
"abstract": " We propose stochastic, non-parametric activation functions that are fully\nlearnable and individual to each neuron. Complexity and the risk of overfitting\nare controlled by placing a Gaussian process prior over these functions. The\nresult is the Gaussian process neuron, a probabilistic unit that can be used as\nthe basic building block for probabilistic graphical models that resemble the\nstructure of neural networks. The proposed model can intrinsically handle\nuncertainties in its inputs and self-estimate the confidence of its\npredictions. Using variational Bayesian inference and the central limit\ntheorem, a fully deterministic loss function is derived, allowing it to be\ntrained as efficiently as a conventional neural network using mini-batch\ngradient descent. The posterior distribution of activation functions is\ninferred from the training data alongside the weights of the network.\nThe proposed model favorably compares to deep Gaussian processes, both in\nmodel complexity and efficiency of inference. It can be directly applied to\nrecurrent or convolutional network structures, allowing its use in audio and\nimage processing tasks.\nAs an preliminary empirical evaluation we present experiments on regression\nand classification tasks, in which our model achieves performance comparable to\nor better than a Dropout regularized neural network with a fixed activation\nfunction. Experiments are ongoing and results will be added as they become\navailable.\n",
"title": "Gaussian Process Neurons Learn Stochastic Activation Functions"
}
| null | null | null | null | true | null |
17071
| null |
Default
| null | null |
null |
{
"abstract": " We analyze a proprietary dataset of trades by a single asset manager,\ncomparing their price impact with that of the trades of the rest of the market.\nIn the context of a linear propagator model we find no significant difference\nbetween the two, suggesting that both the magnitude and time dependence of\nimpact are universal in anonymous, electronic markets. This result is important\nas optimal execution policies often rely on propagators calibrated on anonymous\ndata. We also find evidence that in the wake of a trade the order flow of other\nmarket participants first adds further copy-cat trades enhancing price impact\non very short time scales. The induced order flow then quickly inverts, thereby\ncontributing to impact decay.\n",
"title": "The short-term price impact of trades is universal"
}
| null | null | null | null | true | null |
17072
| null |
Default
| null | null |
null |
{
"abstract": " This is a list of questions raised by our joint work arXiv:1412.0737 and its\nsequels.\n",
"title": "Questions on mod p representations of reductive p-adic groups"
}
| null | null | null | null | true | null |
17073
| null |
Default
| null | null |
null |
{
"abstract": " ZrSe2 is a band semiconductor studied long time ago. It has interesting\nelectronic properties, and because its layers structure can be intercalated\nwith different atoms to change some of the physical properties. In this\ninvestigation we found that Zr deficiencies alter the semiconducting behavior\nand the compound can be turned into a superconductor. In this paper we report\nour studies related to this discovery. The decreasing of the number of Zr atoms\nin small proportion according to the formula ZrxSe2, where x is varied from\nabout 8.1 to 8.6 K, changing the semiconducting behavior to a superconductor\nwith transition temperatures ranging between 7.8 to 8.5 K, it depending of the\ndeficiencies. Outside of those ranges the compound behaves as semiconducting\nwith the properties already known. In our experiments we found that this new\nsuperconductor has only a very small fraction of superconducting material\ndetermined by magnetic measurements with applied magnetic field of 10 Oe. Our\nconclusions is that superconductivity is filamentary. However, in one studied\nsample the fraction was about 10.2 %, whereas in others is only about 1 % or\nless. We determined the superconducting characteristics; the critical fields\nthat indicate a type two superonductor with Ginzburg-Landau ? parameter of the\norder about 2.7. The synthesis procedure is quite normal fol- lowing the\nconventional solid state reaction. In this paper are included, the electronic\ncharacteristics, transition temperature, and evolution with temperature of the\ncritical fields.\n",
"title": "Filamentary superconductivity in semiconducting policrystalline ZrSe2 compound with Zr vacancies"
}
| null | null | null | null | true | null |
17074
| null |
Default
| null | null |
null |
{
"abstract": " In this study we map out the large-scale structure of citation networks of\nscience journals and follow their evolution in time by using stochastic block\nmodels (SBMs). The SBM fitting procedures are principled methods that can be\nused to find hierarchical grouping of journals into blocks that show similar\nincoming and outgoing citations patterns. These methods work directly on the\ncitation network without the need to construct auxiliary networks based on\nsimilarity of nodes. We fit the SBMs to the networks of journals we have\nconstructed from the data set of around 630 million citations and find a\nvariety of different types of blocks, such as clusters, bridges, sources, and\nsinks. In addition we use a recent generalization of SBMs to determine how much\na manually curated classification of journals into subfields of science is\nrelated to the block structure of the journal network and how this relationship\nchanges in time. The SBM method tries to find a network of blocks that is the\nbest high-level representation of the network of journals, and we illustrate\nhow these block networks (at various levels of resolution) can be used as maps\nof science.\n",
"title": "Stochastic Block Model Reveals the Map of Citation Patterns and Their Evolution in Time"
}
| null | null | null | null | true | null |
17075
| null |
Default
| null | null |
null |
{
"abstract": " A large class of modified theories of gravity used as models for dark energy\npredict a propagation speed for gravitational waves which can differ from the\nspeed of light. This difference of propagations speeds for photons and\ngravitons has an impact in the emission of gravitational waves by binary\nsystems. Thus, we revisit the usual quadrupolar emission of binary system for\nan arbitrary propagation speed of gravitational waves and obtain the\ncorresponding period decay formula. We then use timing data from the\nHulse-Taylor binary pulsar and obtain that the speed of gravitational waves can\nonly differ from the speed of light at the percentage level. This bound places\ntight constraints on dark energy models featuring an anomalous propagations\nspeed for the gravitational waves.\n",
"title": "Limits on the anomalous speed of gravitational waves from binary pulsars"
}
| null | null | null | null | true | null |
17076
| null |
Default
| null | null |
null |
{
"abstract": " The notion of entropy-regularized optimal transport, also known as Sinkhorn\ndivergence, has recently gained popularity in machine learning and statistics,\nas it makes feasible the use of smoothed optimal transportation distances for\ndata analysis. The Sinkhorn divergence allows the fast computation of an\nentropically regularized Wasserstein distance between two probability\ndistributions supported on a finite metric space of (possibly) high-dimension.\nFor data sampled from one or two unknown probability distributions, we derive\nthe distributional limits of the empirical Sinkhorn divergence and its centered\nversion (Sinkhorn loss). We also propose a bootstrap procedure which allows to\nobtain new test statistics for measuring the discrepancies between multivariate\nprobability distributions. Our work is inspired by the results of Sommerfeld\nand Munk (2016) on the asymptotic distribution of empirical Wasserstein\ndistance on finite space using unregularized transportation costs. Incidentally\nwe also analyze the asymptotic distribution of entropy-regularized Wasserstein\ndistances when the regularization parameter tends to zero. Simulated and real\ndatasets are used to illustrate our approach.\n",
"title": "Central limit theorems for entropy-regularized optimal transport on finite spaces and statistical applications"
}
| null | null | null | null | true | null |
17077
| null |
Default
| null | null |
null |
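The empirical Sinkhorn divergence analyzed in record 17077 can be computed with a few fixed-point iterations. The sketch below returns the entropically regularized transport cost between two histograms; the regularization strength and iteration count are illustrative, and the centered "Sinkhorn loss" mentioned in the abstract would additionally subtract the two self-transport terms.

```python
import numpy as np

def sinkhorn_cost(a, b, C, reg=0.1, n_iter=500):
    """Entropy-regularized optimal transport cost between histograms a and b
    with ground-cost matrix C, via Sinkhorn fixed-point iterations
    (illustrative values for reg and n_iter)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    K = np.exp(-C / reg)              # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):           # alternate the two marginal constraints
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]   # regularized transport plan
    return float(np.sum(P * C))       # cost of the plan under C

# Tiny usage example with uniform histograms on 3 points:
# a = b = np.ones(3) / 3
# C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :])
# sinkhorn_cost(a, b, C)  # close to 0 since a == b
```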
{
"abstract": " In this paper, we present a methodology to estimate the parameters of\nstochastically contaminated models under two contamination regimes. In both\nregimes, we assume that the original process is a variable length Markov chain\nthat is contaminated by a random noise. In the first regime we consider that\nthe random noise is added to the original source and in the second regime, the\nrandom noise is multiplied by the original source. Given a contaminated sample\nof these models, the original process is hidden. Then we propose a two steps\nestimator for the parameters of these models, that is, the probability\ntransitions and the noise parameter, and prove its consistency. The first step\nis an adaptation of the Baum-Welch algorithm for Hidden Markov Models. This\nstep provides an estimate of a complete order $k$ Markov chain, where $k$ is\nbigger than the order of the variable length Markov chain if it has finite\norder and is a constant depending on the sample size if the hidden process has\ninfinite order. In the second estimation step, we propose a bootstrap Bayesian\nInformation Criterion, given a sample of the Markov chain estimated in the\nfirst step, to obtain the variable length time dependence structure associated\nwith the hidden process. We present a simulation study showing that our\nmethodology is able to accurately recover the parameters of the models for a\nreasonable interval of random noises.\n",
"title": "Inference for Stochastically Contaminated Variable Length Markov Chains"
}
| null | null | null | null | true | null |
17078
| null |
Default
| null | null |
null |
{
"abstract": " We introduce the problem of variable-length source resolvability, where a\ngiven target probability distribution is approximated by encoding a\nvariable-length uniform random number, and the asymptotically minimum average\nlength rate of the uniform random numbers, called the (variable-length)\nresolvability, is investigated. We first analyze the variable-length\nresolvability with the variational distance as an approximation measure. Next,\nwe investigate the case under the divergence as an approximation measure. When\nthe asymptotically exact approximation is required, it is shown that the\nresolvability under the two kinds of approximation measures coincides. We then\nextend the analysis to the case of channel resolvability, where the target\ndistribution is the output distribution via a general channel due to the fixed\ngeneral source as an input. The obtained characterization of the channel\nresolvability is fully general in the sense that when the channel is just the\nidentity mapping, the characterization reduces to the general formula for the\nsource resolvability. We also analyze the second-order variable-length\nresolvability.\n",
"title": "Variable-Length Resolvability for General Sources and Channels"
}
| null | null | null | null | true | null |
17079
| null |
Default
| null | null |
null |
{
"abstract": " 3D-Polarized Light Imaging (3D-PLI) reconstructs nerve fibers in histological\nbrain sections by measuring their birefringence. This study investigates\nanother effect caused by the optical anisotropy of brain tissue -\ndiattenuation. Based on numerical and experimental studies and a complete\nanalytical description of the optical system, the diattenuation was determined\nto be below 4 % in rat brain tissue. It was demonstrated that the diattenuation\neffect has negligible impact on the fiber orientations derived by 3D-PLI. The\ndiattenuation signal, however, was found to highlight different anatomical\nstructures that cannot be distinguished with current imaging techniques, which\nmakes Diattenuation Imaging a promising extension to 3D-PLI.\n",
"title": "Diattenuation of Brain Tissue and its Impact on 3D Polarized Light Imaging"
}
| null | null | null | null | true | null |
17080
| null |
Default
| null | null |
null |
{
"abstract": " The pair density wave (PDW) superconducting state has been proposed to\nexplain the layer- decoupling effect observed in the compound\nLa$_{2-x}$Ba$_x$CuO$_4$ at $x=1/8$ (Phys. Rev. Lett. 99, 127003). In this state\nthe superconducting order parameter is spatially modulated, in contrast with\nthe usual superconducting (SC) state where the order parameter is uniform. In\nthis work, we study the properties of the amplitude (Higgs) modes in a\nunidirectional PDW state. To this end we consider a phenomenological model of\nPDW type states coupled to a Fermi surface of fermionic quasiparticles. In\ncontrast to conventional superconductors that have a single Higgs mode,\nunidirectional PDW superconductors have two Higgs modes. While in the PDW state\nthe Fermi surface largely remains gapless, we find that the damping of the PDW\nHiggs modes into fermionic quasiparticles requires exceeding an energy\nthreshold. We show that this suppression of damping in the PDW state is due to\nkinematics. As a result, only one of the two Higgs modes is significantly\ndamped. In addition, motivated by the experimental phase diagram, we discuss\nthe mixing of Higgs modes in the coexistence regime of the PDW and uniform SC\nstates. These results should be observable directly in a Raman spectroscopy, in\nmomentum resolved electron energy loss spectroscopy, and in resonant inelastic\nX-ray scattering, thus providing evidence of the PDW states.\n",
"title": "Higgs Modes in the Pair Density Wave Superconducting State"
}
| null | null | null | null | true | null |
17081
| null |
Default
| null | null |
null |
{
"abstract": " Neuroscience has been carried into the domain of big data and high\nperformance computing (HPC) on the backs of initiatives in data collection and\nan increasingly compute-intensive tools. While managing HPC experiments\nrequires considerable technical acumen, platforms and standards have been\ndeveloped to ease this burden on scientists. While web-portals make resources\nwidely accessible, data organizations such as the Brain Imaging Data Structure\nand tool description languages such as Boutiques provide researchers with a\nfoothold to tackle these problems using their own datasets, pipelines, and\nenvironments. While these standards lower the barrier to adoption of HPC and\ncloud systems for neuroscience applications, they still require the\nconsolidation of disparate domain-specific knowledge. We present Clowdr, a\nlightweight tool to launch experiments on HPC systems and clouds, record rich\nexecution records, and enable the accessible sharing of experimental summaries\nand results. Clowdr uniquely sits between web platforms and bare-metal\napplications for experiment management by preserving the flexibility of\ndo-it-yourself solutions while providing a low barrier for developing,\ndeploying and disseminating neuroscientific analysis.\n",
"title": "A Serverless Tool for Platform Agnostic Computational Experiment Management"
}
| null | null | null | null | true | null |
17082
| null |
Default
| null | null |
null |
{
"abstract": " We have developed a recently proposed Josephson traveling-wave parametric\namplifier with three-wave mixing [A. B. Zorin, Phys. Rev. Applied 6, 034006,\n2016]. The amplifier consists of a microwave transmission line formed by a\nserial array of nonhysteretic one-junction SQUIDs. These SQUIDs are flux-biased\nin a way that the phase drops across the Josephson junctions are equal to 90\ndegrees and the persistent currents in the SQUID loops are equal to the\nJosephson critical current values. Such a one-dimensional metamaterial\npossesses a maximal quadratic nonlinearity and zero cubic (Kerr) nonlinearity.\nThis property allows phase matching and exponential power gain of traveling\nmicrowaves to take place over a wide frequency range. We report the\nproof-of-principle experiment performed at a temperature of T = 4.2 K on Nb\ntrilayer samples, which has demonstrated that our concept of a practical\nbroadband Josephson parametric amplifier is valid and very promising for\nachieving quantum-limited operation.\n",
"title": "Traveling-wave parametric amplifier based on three-wave mixing in a Josephson metamaterial"
}
| null | null |
[
"Physics"
] | null | true | null |
17083
| null |
Validated
| null | null |
null |
{
"abstract": " Background: Unstructured and textual data is increasing rapidly and Latent\nDirichlet Allocation (LDA) topic modeling is a popular data analysis methods\nfor it. Past work suggests that instability of LDA topics may lead to\nsystematic errors. Aim: We propose a method that relies on replicated LDA runs,\nclustering, and providing a stability metric for the topics. Method: We\ngenerate k LDA topics and replicate this process n times resulting in n*k\ntopics. Then we use K-medioids to cluster the n*k topics to k clusters. The k\nclusters now represent the original LDA topics and we present them like normal\nLDA topics showing the ten most probable words. For the clusters, we try\nmultiple stability metrics, out of which we recommend Rank-Biased Overlap,\nshowing the stability of the topics inside the clusters. Results: We provide an\ninitial validation where our method is used for 270,000 Mozilla Firefox commit\nmessages with k=20 and n=20. We show how our topic stability metrics are\nrelated to the contents of the topics. Conclusions: Advances in text mining\nenable us to analyze large masses of text in software engineering but\nnon-deterministic algorithms, such as LDA, may lead to unreplicable\nconclusions. Our approach makes LDA stability transparent and is also\ncomplementary rather than alternative to many prior works that focus on LDA\nparameter tuning.\n",
"title": "Measuring LDA Topic Stability from Clusters of Replicated Runs"
}
| null | null | null | null | true | null |
17084
| null |
Default
| null | null |
null |
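Record 17084 recommends Rank-Biased Overlap (RBO) as a topic-stability metric. A minimal, truncated form of RBO between two ranked word lists (for example, the top words of two replicated LDA topics) is sketched below; `p` controls how strongly the top of the lists is weighted, and the extrapolated RBO variant of Webber et al. is omitted for brevity.

```python
def rank_biased_overlap(list1, list2, p=0.9):
    """Truncated Rank-Biased Overlap between two ranked lists
    (e.g., top words of two LDA topics); p controls top-weightedness."""
    depth = min(len(list1), len(list2))
    seen1, seen2 = set(), set()
    overlap_sum = 0.0
    for d in range(1, depth + 1):
        seen1.add(list1[d - 1])
        seen2.add(list2[d - 1])
        agreement = len(seen1 & seen2) / d       # overlap at depth d
        overlap_sum += (p ** (d - 1)) * agreement
    return (1 - p) * overlap_sum                 # truncated RBO estimate

# Usage example: identical top-word lists give the maximum truncated score,
# while disjoint lists give 0.
# rank_biased_overlap(["bug", "fix", "crash"], ["bug", "fix", "crash"])
```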
{
"abstract": " We present a study of the continuum polarization over the 400--600 nm range\nof 19 Type Ia SNe obtained with FORS at the VLT. We separate them in those that\nshow Na I D lines at the velocity of their hosts and those that do not.\nContinuum polarization of the sodium sample near maximum light displays a broad\nrange of values, from extremely polarized cases like SN 2006X to almost\nunpolarized ones like SN 2011ae. The non--sodium sample shows, typically,\nsmaller polarization values. The continuum polarization of the sodium sample in\nthe 400--600 nm range is linear with wavelength and can be characterized by the\nmean polarization (P$_{\\rm{mean}}$). Its values span a wide range and show a\nlinear correlation with color, color excess, and extinction in the visual band.\nLarger dispersion correlations were found with the equivalent width of the Na I\nD and Ca II H & K lines, and also a noisy relation between P$_{\\rm{mean}}$ and\n$R_{V}$, the ratio of total to selective extinction. Redder SNe show stronger\ncontinuum polarization, with larger color excesses and extinctions. We also\nconfirm that high continuum polarization is associated with small values of\n$R_{V}$.\nThe correlation between extinction and polarization -- and polarization\nangles -- suggest that the dominant fraction of dust polarization is imprinted\nin interstellar regions of the host galaxies.\nWe show that Na I D lines from foreground matter in the SN host are usually\nassociated with non-galactic ISM, challenging the typical assumptions in\nforeground interstellar polarization models.\n",
"title": "Continuum Foreground Polarization and Na~I Absorption in Type Ia SNe"
}
| null | null | null | null | true | null |
17085
| null |
Default
| null | null |
null |
{
"abstract": " This study deals with content-based musical playlists generation focused on\nSongs and Instrumentals. Automatic playlist generation relies on collaborative\nfiltering and autotagging algorithms. Autotagging can solve the cold start\nissue and popularity bias that are critical in music recommender systems.\nHowever, autotagging remains to be improved and cannot generate satisfying\nmusic playlists. In this paper, we suggest improvements toward better\nautotagging-generated playlists compared to state-of-the-art. To assess our\nmethod, we focus on the Song and Instrumental tags. Song and Instrumental are\ntwo objective and opposite tags that are under-studied compared to genres or\nmoods, which are subjective and multi-modal tags. In this paper, we consider an\nindustrial real-world musical database that is unevenly distributed between\nSongs and Instrumentals and bigger than databases used in previous studies. We\nset up three incremental experiments to enhance automatic playlist generation.\nOur suggested approach generates an Instrumental playlist with up to three\ntimes less false positives than cutting edge methods. Moreover, we provide a\ndesign of experiment framework to foster research on Songs and Instrumentals.\nWe give insight on how to improve further the quality of generated playlists\nand to extend our methods to other musical tags. Furthermore, we provide the\nsource code to guarantee reproducible research.\n",
"title": "Toward Faultless Content-Based Playlists Generation for Instrumentals"
}
| null | null | null | null | true | null |
17086
| null |
Default
| null | null |
null |
{
"abstract": " ReS$_2$ is considered as a promising candidate for novel electronic and\nsensor applications. The low crystal symmetry of the van der Waals compound\nReS$_2$ leads to a highly anisotropic optical, vibrational, and transport\nbehavior. However, the details of the electronic band structure of this\nfascinating material are still largely unexplored. We present a\nmomentum-resolved study of the electronic structure of monolayer, bilayer, and\nbulk ReS$_2$ using k-space photoemission microscopy in combination with\nfirst-principles calculations. We demonstrate that the valence electrons in\nbulk ReS$_2$ are - contrary to assumptions in recent literature - significantly\ndelocalized across the van der Waals gap. Furthermore, we directly observe the\nevolution of the valence band dispersion as a function of the number of layers,\nrevealing a significantly increased effective electron mass in single-layer\ncrystals. We also find that only bilayer ReS$_2$ has a direct band gap. Our\nresults establish bilayer ReS$_2$ as a advantageous building block for\ntwo-dimensional devices and van der Waals heterostructures.\n",
"title": "Direct observation of the band gap transition in atomically thin ReS$_2$"
}
| null | null | null | null | true | null |
17087
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we deal with the problem of extending Zadeh's operators on\nfuzzy sets (FSs) to interval-valued (IVFSs), set-valued (SVFSs) and type-2\n(T2FSs) fuzzy sets. Namely, it is known that seeing FSs as SVFSs, or T2FSs,\nwhose membership degrees are singletons is not order-preserving. We then\ndescribe a family of lattice embeddings from FSs to SVFSs. Alternatively, if\nthe former singleton viewpoint is required, we reformulate the intersection on\nhesitant fuzzy sets and introduce what we have called closed-valued fuzzy sets.\nThis new type of fuzzy sets extends standard union and intersection on FSs. In\naddition, it allows handling together membership degrees of different nature\nas, for instance, closed intervals and finite sets. Finally, all these\nconstructions are viewed as T2FSs forming a chain of lattices.\n",
"title": "Lattice embeddings between types of fuzzy sets. Closed-valued fuzzy sets"
}
| null | null | null | null | true | null |
17088
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we present an algorithm for the coupling of magneto-thermal and\nmechanical finite element models representing superconducting accelerator\nmagnets. The mechanical models are used during the design of the mechanical\nstructure as well as the optimization of the magnetic field quality under\nnominal conditions. The magneto-thermal models allow for the analysis of\ntransient phenomena occurring during quench initiation, propagation, and\nprotection. Mechanical analysis of quenching magnets is of high importance\nconsidering the design of new protection systems and the study of new\nsuperconductor types. We use field/circuit coupling to determine temperature\nand electromagnetic force evolution during the magnet discharge. These\nquantities are provided as a load to existing mechanical models. The models are\ndiscretized with different meshes and, therefore, we employ a mesh-based\ninterpolation method to exchange coupled quantities. The coupling algorithm is\nillustrated with a simulation of a mechanical response of a standalone\nhigh-field dipole magnet protected with CLIQ (Coupling-Loss Induced Quench)\ntechnology.\n",
"title": "Coupling of Magneto-Thermal and Mechanical Superconducting Magnet Models by Means of Mesh-Based Interpolation"
}
| null | null |
[
"Computer Science",
"Physics"
] | null | true | null |
17089
| null |
Validated
| null | null |
null |
{
"abstract": " In contrast with the well-known methods of matching asymptotics and\nmultiscale (or compound) asymptotics, the \" functional analytic approach \" of\nLanza de Cristoforis (Analysis 28, 2008) allows to prove convergence of\nexpansions around interior small holes of size $\\epsilon$ for solutions of\nelliptic boundary value problems. Using the method of layer potentials, the\nasymptotic behavior of the solution as $\\epsilon$ tends to zero is described\nnot only by asymptotic series in powers of $\\epsilon$, but by convergent power\nseries. Here we use this method to investigate the Dirichlet problem for the\nLaplace operator where holes are collapsing at a polygonal corner of opening\n$\\omega$. Then in addition to the scale $\\epsilon$ there appears the scale\n$\\eta = \\epsilon^{\\pi/\\omega}$. We prove that when $\\pi/\\omega$ is irrational,\nthe solution of the Dirichlet problem is given by convergent series in powers\nof these two small parameters. Due to interference of the two scales, this\nconvergence is obtained, in full generality, by grouping together integer\npowers of the two scales that are very close to each other. Nevertheless, there\nexists a dense subset of openings $\\omega$ (characterized by Diophantine\napproximation properties), for which real analyticity in the two variables\n$\\epsilon$ and $\\eta$ holds and the power series converge unconditionally. When\n$\\pi/\\omega$ is rational, the series are unconditionally convergent, but\ncontain terms in log $\\epsilon$.\n",
"title": "Converging expansions for Lipschitz self-similar perforations of a plane sector"
}
| null | null | null | null | true | null |
17090
| null |
Default
| null | null |
null |
{
"abstract": " Bio-inspired paradigms are proving to be useful in analyzing propagation and\ndissemination of information in networks. In this paper we explore the use of\nmulti-type branching processes to analyse viral properties of content in a\nsocial network, with and without competition from other sources. We derive and\ncompute various virality measures, e.g., probability of virality, expected\nnumber of shares, or the rate of growth of expected number of shares etc. They\nallow one to predict the emergence of global macro properties (e.g., viral\nspread of a post in the entire network) from the laws and parameters that\ndetermine local interactions. The local interactions, greatly depend upon the\nstructure of the timelines holding the content and the number of friends (i.e.,\nconnections) of users of the network. We then formulate a non-cooperative game\nproblem and study the Nash equilibria as a function of the parameters. The\nbranching processes modelling the social network under competition turn out to\nbe decomposable, multi-type and continuous time variants. For such processes\ntypes belonging to different sub-classes evolve at different rates and have\ndifferent probabilities of extinction etc. We compute content provider wise\nextinction probability, rate of growth etc. We also conjecture the\ncontent-provider wise growth rate of expected shares.\n",
"title": "A Viral Timeline Branching Process to study a Social Network"
}
| null | null | null | null | true | null |
17091
| null |
Default
| null | null |
null |
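For record 17091, the probability of virality in the simplest single-type Galton-Watson setting is one minus the extinction probability, i.e., one minus the smallest fixed point of the offspring generating function. The sketch below iterates that fixed point numerically; it is a toy reduction of the decomposable, multi-type, continuous-time processes actually used in the paper, and the offspring distribution is an assumed input.

```python
import numpy as np

def virality_probability(offspring_pmf, n_iter=1000):
    """For a single-type Galton-Watson branching process with
    P(k further shares) = offspring_pmf[k], the extinction probability q is the
    smallest fixed point of G(s) = sum_k p_k s^k, found by iterating q <- G(q)
    starting from 0. The probability of virality is 1 - q.
    (Single-type toy; the paper uses multi-type, continuous-time variants.)"""
    p = np.asarray(offspring_pmf, dtype=float)
    p = p / p.sum()                      # ensure a proper distribution
    q = 0.0
    for _ in range(n_iter):
        q = np.polyval(p[::-1], q)       # evaluate G(q) = sum_k p_k q^k
    return 1.0 - q
```

For example, with offspring probabilities [0.2, 0.3, 0.5] (mean 1.3 shares per share), the smallest fixed point is q = 0.4, so the virality probability returned is 0.6; any subcritical or critical distribution returns 0.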
{
"abstract": " Viral zoonoses have emerged as the key drivers of recent pandemics. Human\ninfection by zoonotic viruses are either spillover events -- isolated\ninfections that fail to cause a widespread contagion -- or species jumps, where\nsuccessful adaptation to the new host leads to a pandemic. Despite expensive\nbio-surveillance efforts, historically emergence response has been reactive,\nand post-hoc. Here we use machine inference to demonstrate a high accuracy\npredictive bio-surveillance capability, designed to pro-actively localize an\nimpending species jump via automated interrogation of massive sequence\ndatabases of viral proteins. Our results suggest that a jump might not purely\nbe the result of an isolated unfortunate cross-infection localized in space and\ntime; there are subtle yet detectable patterns of genotypic changes\naccumulating in the global viral population leading up to emergence. Using tens\nof thousands of protein sequences simultaneously, we train models that track\nmaximum achievable accuracy for disambiguating host tropism from the primary\nstructure of surface proteins, and show that the inverse classification\naccuracy is a quantitative indicator of jump risk. We validate our claim in the\ncontext of the 2009 swine flu outbreak, and the 2004 emergence of H5N1\nsubspecies of Influenza A from avian reservoirs; illustrating that\ninterrogation of the global viral population can unambiguously track a near\nmonotonic risk elevation over several preceding years leading to eventual\nemergence.\n",
"title": "Algorithmic Bio-surveillance For Precise Spatio-temporal Prediction of Zoonotic Emergence"
}
| null | null | null | null | true | null |
17092
| null |
Default
| null | null |
null |
{
"abstract": " Operationalizing machine learning based security detections is extremely\nchallenging, especially in a continuously evolving cloud environment.\nConventional anomaly detection does not produce satisfactory results for\nanalysts that are investigating security incidents in the cloud. Model\nevaluation alone presents its own set of problems due to a lack of benchmark\ndatasets. When deploying these detections, we must deal with model compliance,\nlocalization, and data silo issues, among many others. We pose the problem of\n\"attack disruption\" as a way forward in the security data science space. In\nthis paper, we describe the framework, challenges, and open questions\nsurrounding the successful operationalization of machine learning based\nsecurity detections in a cloud environment and provide some insights on how we\nhave addressed them.\n",
"title": "Practical Machine Learning for Cloud Intrusion Detection: Challenges and the Way Forward"
}
| null | null | null | null | true | null |
17093
| null |
Default
| null | null |
null |
{
"abstract": " The objective of this research was to design a 0-5 GHz RF SOI switch, with\n0.18um power Jazz SOI technology by using Cadence software, for health care\napplications. This paper introduces the design of a RF switch implemented in\nshunt-series topology. An insertion loss of 0.906 dB and an isolation of 30.95\ndB were obtained at 5 GHz. The switch also achieved a third order distortion of\n53.05 dBm and 1 dB compression point reached 50.06dBm. The RF switch\nperformance meets the desired specification requirements.\n",
"title": "SOI RF Switch for Wireless Sensor Network"
}
| null | null | null | null | true | null |
17094
| null |
Default
| null | null |
null |
{
"abstract": " Given a positive linear combination of five (respectively seven) cosines,\nwhere the angles are positive and sum to pi, the aim of this article is to\nexpress the sharp bound of the combination as a Positive Real Fraction in the\ncoefficients (hence cosine-free). The method uses algebraic and arithmetic\nmanipulations with judicious transformations.\n",
"title": "The Pentagonal Inequality"
}
| null | null | null | null | true | null |
17095
| null |
Default
| null | null |
null |
{
"abstract": " This paper studies the landscape of empirical risk of deep neural networks by\ntheoretically analyzing its convergence behavior to the population risk as well\nas its stationary points and properties. For an $l$-layer linear neural\nnetwork, we prove its empirical risk uniformly converges to its population risk\nat the rate of $\\mathcal{O}(r^{2l}\\sqrt{d\\log(l)}/\\sqrt{n})$ with training\nsample size of $n$, the total weight dimension of $d$ and the magnitude bound\n$r$ of weight of each layer. We then derive the stability and generalization\nbounds for the empirical risk based on this result. Besides, we establish the\nuniform convergence of gradient of the empirical risk to its population\ncounterpart. We prove the one-to-one correspondence of the non-degenerate\nstationary points between the empirical and population risks with convergence\nguarantees, which describes the landscape of deep neural networks. In addition,\nwe analyze these properties for deep nonlinear neural networks with sigmoid\nactivation functions. We prove similar results for convergence behavior of\ntheir empirical risks as well as the gradients and analyze properties of their\nnon-degenerate stationary points.\nTo our best knowledge, this work is the first one theoretically\ncharacterizing landscapes of deep learning algorithms. Besides, our results\nprovide the sample complexity of training a good deep neural network. We also\nprovide theoretical understanding on how the neural network depth $l$, the\nlayer width, the network size $d$ and parameter magnitude determine the neural\nnetwork landscapes.\n",
"title": "The Landscape of Deep Learning Algorithms"
}
| null | null |
[
"Computer Science",
"Mathematics",
"Statistics"
] | null | true | null |
17096
| null |
Validated
| null | null |
null |
{
"abstract": " With the aim of understanding the effect of the environment on the star\nformation history and morphological transformation of galaxies, we present a\ndetailed analysis of the colour, morphology and internal structure of cluster\nand field galaxies at $0.4 \\le z \\le 0.8$. We use {\\em HST} data for over 500\ngalaxies from the ESO Distant Cluster Survey (EDisCS) to quantify how the\ngalaxies' light distribution deviate from symmetric smooth profiles. We\nvisually inspect the galaxies' images to identify the likely causes for such\ndeviations. We find that the residual flux fraction ($RFF$), which measures the\nfractional contribution to the galaxy light of the residuals left after\nsubtracting a symmetric and smooth model, is very sensitive to the degree of\nstructural disturbance but not the causes of such disturbance. On the other\nhand, the asymmetry of these residuals ($A_{\\rm res}$) is more sensitive to the\ncauses of the disturbance, with merging galaxies having the highest values of\n$A_{\\rm res}$. Using these quantitative parameters we find that, at a fixed\nmorphology, cluster and field galaxies show statistically similar degrees of\ndisturbance. However, there is a higher fraction of symmetric and passive\nspirals in the cluster than in the field. These galaxies have smoother light\ndistributions than their star-forming counterparts. We also find that while\nalmost all field and cluster S0s appear undisturbed, there is a relatively\nsmall population of star-forming S0s in clusters but not in the field. These\nfindings are consistent with relatively gentle environmental processes acting\non galaxies infalling onto clusters.\n",
"title": "The effect of the environment on the structure, morphology and star-formation history of intermediate-redshift galaxies"
}
| null | null | null | null | true | null |
17097
| null |
Default
| null | null |
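The residual-based statistics named in the galaxy-structure abstract above ($RFF$ and the residual asymmetry) can be illustrated with a schematic computation on an image and a smooth, symmetric model fit. The definitions used in the paper may include additional correction terms (e.g., for sky background and noise) that are omitted here, so this is only a rough sketch on a toy image.

```python
import numpy as np

def residual_flux_fraction(image, model):
    """Schematic RFF: fraction of the total flux left in the residuals after
    subtracting a smooth, symmetric model (no sky or noise correction)."""
    residual = image - model
    return np.abs(residual).sum() / image.sum()

def residual_asymmetry(residual):
    """Schematic asymmetry of the residual map under a 180-degree rotation."""
    rotated = np.rot90(residual, 2)
    return np.abs(residual - rotated).sum() / (2.0 * np.abs(residual).sum())

# Toy example: a symmetric Gaussian "galaxy" plus an off-centre clump,
# compared against the purely symmetric model.
yy, xx = np.mgrid[-32:32, -32:32]
model = np.exp(-(xx**2 + yy**2) / (2 * 8.0**2))
image = model.copy()
image[40:44, 40:44] += 0.5            # asymmetric disturbance, e.g. a merger remnant
print(residual_flux_fraction(image, model))
print(residual_asymmetry(image - model))
```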
null |
{
"abstract": " Recommender systems play a crucial role in mitigating the problem of\ninformation overload by suggesting users' personalized items or services. The\nvast majority of traditional recommender systems consider the recommendation\nprocedure as a static process and make recommendations following a fixed\nstrategy. In this paper, we propose a novel recommender system with the\ncapability of continuously improving its strategies during the interactions\nwith users. We model the sequential interactions between users and a\nrecommender system as a Markov Decision Process (MDP) and leverage\nReinforcement Learning (RL) to automatically learn the optimal strategies via\nrecommending trial-and-error items and receiving reinforcements of these items\nfrom users' feedback. Users' feedback can be positive and negative and both\ntypes of feedback have great potentials to boost recommendations. However, the\nnumber of negative feedback is much larger than that of positive one; thus\nincorporating them simultaneously is challenging since positive feedback could\nbe buried by negative one. In this paper, we develop a novel approach to\nincorporate them into the proposed deep recommender system (DEERS) framework.\nThe experimental results based on real-world e-commerce data demonstrate the\neffectiveness of the proposed framework. Further experiments have been\nconducted to understand the importance of both positive and negative feedback\nin recommendations.\n",
"title": "Recommendations with Negative Feedback via Pairwise Deep Reinforcement Learning"
}
| null | null | null | null | true | null |
17098
| null |
Default
| null | null |
null |
{
"abstract": " This work addresses the instability in asynchronous data parallel\noptimization. It does so by introducing a novel distributed optimizer which is\nable to efficiently optimize a centralized model under communication\nconstraints. The optimizer achieves this by pushing a normalized sequence of\nfirst-order gradients to a parameter server. This implies that the magnitude of\na worker delta is smaller compared to an accumulated gradient, and provides a\nbetter direction towards a minimum compared to first-order gradients, which in\nturn also forces possible implicit momentum fluctuations to be more aligned\nsince we make the assumption that all workers contribute towards a single\nminima. As a result, our approach mitigates the parameter staleness problem\nmore effectively since staleness in asynchrony induces (implicit) momentum, and\nachieves a better convergence rate compared to other optimizers such as\nasynchronous EASGD and DynSGD, which we show empirically.\n",
"title": "Accumulated Gradient Normalization"
}
| null | null | null | null | true | null |
17099
| null |
Default
| null | null |
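The worker-side update in the accumulated-gradient-normalization abstract above can be sketched as: take several local first-order steps, then push a normalized delta (rather than raw gradients) to the parameter server. The snippet below is a simplified, single-process sketch; normalizing by the number of accumulated steps and the toy quadratic objective are assumptions made for illustration, not the authors' exact scheme.

```python
import numpy as np

def worker_delta(params, grad_fn, lr=0.01, accumulation_steps=8):
    """Run a few local first-order steps, then return a normalized delta."""
    local = params.copy()
    for _ in range(accumulation_steps):
        local -= lr * grad_fn(local)      # plain SGD step on the worker
    delta = local - params                # accumulated local update
    return delta / accumulation_steps     # normalization keeps the delta small

# Toy quadratic objective f(w) = ||w||^2 / 2, whose gradient is w itself.
params = np.array([4.0, -2.0])
for _ in range(5):                        # the "parameter server" applies each delta
    params += worker_delta(params, grad_fn=lambda w: w)
print(params)
```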
null |
{
"abstract": " This work presents an algorithm for changing from latitudinal to longitudinal\nformation of autonomous aircraft squadrons. The maneuvers are defined\ndynamically by using a predefined set of 3D basic maneuvers. This formation\nchanging is necessary when the squadron has to perform tasks which demand both\nformations, such as lift off, georeferencing, obstacle avoidance and landing.\nSimulations show that the formation changing is made without collision. The\ntime complexity analysis of the transformation algorithm reveals that its\nefficiency is optimal, and the proof of correction ensures its longitudinal\nformation features.\n",
"title": "An Optimal Algorithm for Changing from Latitudinal to Longitudinal Formation of Autonomous Aircraft Squadrons"
}
| null | null | null | null | true | null |
17100
| null |
Default
| null | null |