text | inputs | prediction | prediction_agent | annotation | annotation_agent | multi_label | explanation | id | metadata | status | event_timestamp | metrics
---|---|---|---|---|---|---|---|---|---|---|---|---|
null | {
"abstract": " The formaldehyde MegaMaser emission has been mapped for the three host\ngalaxies IC\\,860. IRAS\\,15107$+$0724, and Arp\\,220. Elongated emission\ncomponents are found at the nuclear centres of all galaxies with an extent\nranging between 30 to 100 pc. These components are superposed on the peaks of\nthe nuclear continuum. Additional isolated emission components are found\nsuperposed in the outskirts of the radio continuum structure. The brightness\ntemperatures of the detected features ranges from 0.6 to 13.4 $\\times 10^{4}$\nK, which confirms their masering nature. The masering scenario is interpreted\nas amplification of the radio continuum by foreground molecular gas that is\npumped by far-infrared radiation fields in these starburst environments of the\nhost galaxies.\n",
"title": "The Emission Structure of Formaldehyde MegaMasers"
} | null | null | [
"Physics"
]
| null | true | null | 2701 | null | Validated | null | null |
null | {
"abstract": " Inference, prediction and control of complex dynamical systems from time\nseries is important in many areas, including financial markets, power grid\nmanagement, climate and weather modeling, or molecular dynamics. The analysis\nof such highly nonlinear dynamical systems is facilitated by the fact that we\ncan often find a (generally nonlinear) transformation of the system coordinates\nto features in which the dynamics can be excellently approximated by a linear\nMarkovian model. Moreover, the large number of system variables often change\ncollectively on large time- and length-scales, facilitating a low-dimensional\nanalysis in feature space. In this paper, we introduce a variational approach\nfor Markov processes (VAMP) that allows us to find optimal feature mappings and\noptimal Markovian models of the dynamics from given time series data. The key\ninsight is that the best linear model can be obtained from the top singular\ncomponents of the Koopman operator. This leads to the definition of a family of\nscore functions called VAMP-r which can be calculated from data, and can be\nemployed to optimize a Markovian model. In addition, based on the relationship\nbetween the variational scores and approximation errors of Koopman operators,\nwe propose a new VAMP-E score, which can be applied to cross-validation for\nhyper-parameter optimization and model selection in VAMP. VAMP is valid for\nboth reversible and nonreversible processes and for stationary and\nnon-stationary processes or realizations.\n",
"title": "Variational approach for learning Markov processes from time series data"
} | null | null | [
"Statistics"
]
| null | true | null | 2702 | null | Validated | null | null |
null | {
"abstract": " This paper presents a new framework for analysing forensic DNA samples using\nprobabilistic genotyping. Specifically it presents a mathematical framework for\nspecifying and combining the steps in producing forensic casework\nelectropherograms of short tandem repeat loci from DNA samples. It is\napplicable to both high and low template DNA samples, that is, samples\ncontaining either high or low amounts DNA. A specific model is developed within\nthe framework, by way of particular modelling assumptions and approximations,\nand its interpretive power presented on examples using simulated data and data\nfrom a publicly available dataset. The framework relies heavily on the use of\nunivariate and multivariate probability generating functions. It is shown that\nthese provide a succinct and elegant mathematical scaffolding to model the key\nsteps in the process. A significant development in this paper is that of new\nnumerical methods for accurately and efficiently evaluating the probability\ndistribution of amplicons arising from the polymerase chain reaction process,\nwhich is modelled as a discrete multi-type branching process. Source code in\nthe scripting languages Python, R and Julia is provided for illustration of\nthese methods. These new developments will be of general interest to persons\nworking outside the province of forensic DNA interpretation that this paper\nfocuses on.\n",
"title": "A unifying framework for the modelling and analysis of STR DNA samples arising in forensic casework"
} | null | null | [
"Statistics",
"Quantitative Biology"
]
| null | true | null | 2703 | null | Validated | null | null |
null | {
"abstract": " Ferromagnetic semiconductors (FMSs), which have the properties and\nfunctionalities of both semiconductors and ferromagnets, provide fascinating\nopportunities for basic research in condensed matter physics and device\napplications. Over the past two decades, however, intensive studies on various\nFMS materials, inspired by the influential mean-field Zener (MFZ) model have\nfailed to realise reliable FMSs that have a high Curie temperature (Tc > 300\nK), good compatibility with semiconductor electronics, and characteristics\nsuperior to those of their non-magnetic host semiconductors. Here, we\ndemonstrate a new n type Fe-doped narrow-gap III-V FMS, (In,Fe)Sb, in which\nferromagnetic order is induced by electron carriers, and its Tc is unexpectedly\nhigh, reaching ~335 K at a modest Fe concentration of 16%. Furthermore, we show\nthat by utilizing the large anomalous Hall effect of (In,Fe)Sb at room\ntemperature, it is possible to obtain a Hall sensor with a very high\nsensitivity that surpasses that of the best commercially available InSb Hall\nsensor devices. Our results reveal a new design rule of FMSs that is not\nexpected from the conventional MFZ model. (This work was presented at the JSAP\nSpring meeting, presentation No. E15a-501-2:\nthis https URL)\n",
"title": "A new class of ferromagnetic semiconductors with high Curie temperatures"
} | null | null | null | null | true | null | 2704 | null | Default | null | null |
null | {
"abstract": " Efficient extraction of useful knowledge from these data is still a\nchallenge, mainly when the data is distributed, heterogeneous and of different\nquality depending on its corresponding local infrastructure. To reduce the\noverhead cost, most of the existing distributed clustering approaches generate\nglobal models by aggregating local results obtained on each individual node.\nThe complexity and quality of solutions depend highly on the quality of the\naggregation. In this respect, we proposed for distributed density-based\nclustering that both reduces the communication overheads due to the data\nexchange and improves the quality of the global models by considering the\nshapes of local clusters. From preliminary results we show that this algorithm\nis very promising.\n",
"title": "On a Distributed Approach for Density-based Clustering"
} | null | null | null | null | true | null | 2705 | null | Default | null | null |
null | {
"abstract": " Accretion of planetary material onto host stars may occur throughout a star's\nlife. Especially prone to accretion, extrasolar planets in short-period orbits,\nwhile relatively rare, constitute a significant fraction of the known\npopulation, and these planets are subject to dynamical and atmospheric\ninfluences that can drive significant mass loss. Theoretical models frame\nexpectations regarding the rates and extent of this planetary accretion. For\ninstance, tidal interactions between planets and stars may drive complete\norbital decay during the main sequence. Many planets that survive their stars'\nmain sequence lifetime will still be engulfed when the host stars become red\ngiant stars. There is some observational evidence supporting these predictions,\nsuch as a dearth of close-in planets around fast stellar rotators, which is\nconsistent with tidal spin-up and planet accretion. There remains no clear\nchemical evidence for pollution of the atmospheres of main sequence or red\ngiant stars by planetary materials, but a wealth of evidence points to active\naccretion by white dwarfs. In this article, we review the current understanding\nof accretion of planetary material, from the pre- to the post-main sequence and\nbeyond. The review begins with the astrophysical framework for that process and\nthen considers accretion during various phases of a host star's life, during\nwhich the details of accretion vary, and the observational evidence for\naccretion during these phases.\n",
"title": "Accretion of Planetary Material onto Host Stars"
} | null | null | null | null | true | null | 2706 | null | Default | null | null |
null | {
"abstract": " We use a co-trapped ion ($^{88}\\mathrm{Sr}^{+}$) to sympathetically cool and\nmeasure the quantum state populations of a memory-qubit ion of a different\natomic species ($^{40}\\mathrm{Ca}^{+}$) in a cryogenic, surface-electrode ion\ntrap. Due in part to the low motional heating rate demonstrated here, the state\npopulations of the memory ion can be transferred to the auxiliary ion by using\nthe shared motion as a quantum state bus and measured with an average accuracy\nof 96(1)%. This scheme can be used in quantum information processors to reduce\nphoton-scattering-induced error in unmeasured memory qubits.\n",
"title": "High-Fidelity, Single-Shot, Quantum-Logic-Assisted Readout in a Mixed-Species Ion Chain"
} | null | null | null | null | true | null | 2707 | null | Default | null | null |
null | {
"abstract": " Infra-Red(IR) astronomical databases, namely, IRAS, 2MASS, WISE, and Spitzer,\nare used to analyze photometric data of 126 carbon stars whose spectra are\nvisible in the First Byurakan Survey low-resolution spectral plates. Among\nthese, six new objects, recently confirmed on the digitized FBS plates, are\nincluded. For three of them, moderate-resolution CCD optical spectra are also\npresented. In this work several IR color-color diagrams are studied. Early and\nlate-type C stars are separated in the JHK Near-Infra-Red(NIR) color-color\nplots, as well as in the WISE W3-W4 versus W1-W2 diagram. Late N-type\nAsymptotic Giant Branch stars are redder in W1-W2, while early-types(CH and R\ngiants) are redder in W3-W4 as expected. Objects with W2-W3 > 1.0 mag. show\ndouble-peaked spectral energy distribution, indicating the existence of the\ncircumstellar envelopes around them. 26 N-type stars have IRAS Point Source\nCatalog(PSC) associations. For FBS 1812+455 IRAS Low-Resolution Spectra in the\nwavelength range 7.7 - 22.6micron and Spitzer Space Telescope Spectra in the\nrange 5 - 38micro are presented clearly showing absorption features of\nC2H2(acetylene) molecule at 7.5 and 13.7micron , and the SiC(silicone carbide)\nemission at 11.3micron. The mass-loss rates for eight Mira-type variables are\nderived from the K-[12] color and from the pulsation periods. The reddest\nobject among the targets is N-type C star FBS 2213+421, which belong to the\ngroup of the cold post-AGB R Coronae Borealis(R CrB) variables.\n",
"title": "Investigation of faint galactic carbon stars from the first Byurakan spectral survey. III. Infrared characteristics"
} | null | null | null | null | true | null | 2708 | null | Default | null | null |
null | {
"abstract": " Decisions by Machine Learning (ML) models have become ubiquitous. Trusting\nthese decisions requires understanding how algorithms take them. Hence\ninterpretability methods for ML are an active focus of research. A central\nproblem in this context is that both the quality of interpretability methods as\nwell as trust in ML predictions are difficult to measure. Yet evaluations,\ncomparisons and improvements of trust and interpretability require quantifiable\nmeasures. Here we propose a quantitative measure for the quality of\ninterpretability methods. Based on that we derive a quantitative measure of\ntrust in ML decisions. Building on previous work we propose to measure\nintuitive understanding of algorithmic decisions using the information transfer\nrate at which humans replicate ML model predictions. We provide empirical\nevidence from crowdsourcing experiments that the proposed metric robustly\ndifferentiates interpretability methods. The proposed metric also demonstrates\nthe value of interpretability for ML assisted human decision making: in our\nexperiments providing explanations more than doubled productivity in annotation\ntasks. However unbiased human judgement is critical for doctors, judges, policy\nmakers and others. Here we derive a trust metric that identifies when human\ndecisions are overly biased towards ML predictions. Our results complement\nexisting qualitative work on trust and interpretability by quantifiable\nmeasures that can serve as objectives for further improving methods in this\nfield of research.\n",
"title": "Quantifying Interpretability and Trust in Machine Learning Systems"
} | null | null | null | null | true | null | 2709 | null | Default | null | null |
null | {
"abstract": " This activity has been developed as a resource for the \"EU Space Awareness\"\neducational programme. As part of the suite \"Our Fragile Planet\" together with\nthe \"Climate Box\" it addresses aspects of weather phenomena, the Earth's\nclimate and climate change as well as Earth observation efforts like in the\nEuropean \"Copernicus\" programme. This resource consists of three parts that\nillustrate the power of the Sun driving a global air circulation system that is\nalso responsible for tropical and subtropical climate zones. Through\nexperiments, students learn how heated air rises above cool air and how a\ncontinuous heat source produces air convection streams that can even drive a\npropeller. Students then apply what they have learnt to complete a worksheet\nthat presents the big picture of the global air circulation system of the\nequator region by transferring the knowledge from the previous activities in to\na larger scale.\n",
"title": "The Intertropical Convergence Zone"
} | null | null | [
"Physics"
]
| null | true | null | 2710 | null | Validated | null | null |
null | {
"abstract": " In this work we propose to fit a sparse logistic regression model by a weakly\nconvex regularized nonconvex optimization problem. The idea is based on the\nfinding that a weakly convex function as an approximation of the $\\ell_0$\npseudo norm is able to better induce sparsity than the commonly used $\\ell_1$\nnorm. For a class of weakly convex sparsity inducing functions, we prove the\nnonconvexity of the corresponding sparse logistic regression problem, and study\nits local optimality conditions and the choice of the regularization parameter\nto exclude trivial solutions. Despite the nonconvexity, a method based on\nproximal gradient descent is used to solve the general weakly convex sparse\nlogistic regression, and its convergence behavior is studied theoretically.\nThen the general framework is applied to a specific weakly convex function, and\na necessary and sufficient local optimality condition is provided. The solution\nmethod is instantiated in this case as an iterative firm-shrinkage algorithm,\nand its effectiveness is demonstrated in numerical experiments by both randomly\ngenerated and real datasets.\n",
"title": "Nonconvex Sparse Logistic Regression with Weakly Convex Regularization"
} | null | null | null | null | true | null | 2711 | null | Default | null | null |
null | {
"abstract": " We study the complexity of approximating the independent set polynomial\n$Z_G(\\lambda)$ of a graph $G$ with maximum degree $\\Delta$ when the activity\n$\\lambda$ is a complex number.\nThis problem is already well understood when $\\lambda$ is real using\nconnections to the $\\Delta$-regular tree $T$. The key concept in that case is\nthe \"occupation ratio\" of the tree $T$. This ratio is the contribution to\n$Z_T(\\lambda)$ from independent sets containing the root of the tree, divided\nby $Z_T(\\lambda)$ itself. If $\\lambda$ is such that the occupation ratio\nconverges to a limit, as the height of $T$ grows, then there is an FPTAS for\napproximating $Z_G(\\lambda)$ on a graph $G$ with maximum degree $\\Delta$.\nOtherwise, the approximation problem is NP-hard.\nUnsurprisingly, the case where $\\lambda$ is complex is more challenging.\nPeters and Regts identified the complex values of $\\lambda$ for which the\noccupation ratio of the $\\Delta$-regular tree converges. These values carve a\ncardioid-shaped region $\\Lambda_\\Delta$ in the complex plane. Motivated by the\npicture in the real case, they asked whether $\\Lambda_\\Delta$ marks the true\napproximability threshold for general complex values $\\lambda$.\nOur main result shows that for every $\\lambda$ outside of $\\Lambda_\\Delta$,\nthe problem of approximating $Z_G(\\lambda)$ on graphs $G$ with maximum degree\nat most $\\Delta$ is indeed NP-hard. In fact, when $\\lambda$ is outside of\n$\\Lambda_\\Delta$ and is not a positive real number, we give the stronger result\nthat approximating $Z_G(\\lambda)$ is actually #P-hard. If $\\lambda$ is a\nnegative real number outside of $\\Lambda_\\Delta$, we show that it is #P-hard to\neven decide whether $Z_G(\\lambda)>0$, resolving in the affirmative a conjecture\nof Harvey, Srivastava and Vondrak.\nOur proof techniques are based around tools from complex analysis -\nspecifically the study of iterative multivariate rational maps.\n",
"title": "Inapproximability of the independent set polynomial in the complex plane"
} | null | null | [
"Computer Science"
]
| null | true | null | 2712 | null | Validated | null | null |
null | {
"abstract": " This paper considers the use of Machine Learning (ML) in medicine by focusing\non the main problem that this computational approach has been aimed at solving\nor at least minimizing: uncertainty. To this aim, we point out how uncertainty\nis so ingrained in medicine that it biases also the representation of clinical\nphenomena, that is the very input of ML models, thus undermining the clinical\nsignificance of their output. Recognizing this can motivate both medical\ndoctors, in taking more responsibility in the development and use of these\ndecision aids, and the researchers, in pursuing different ways to assess the\nvalue of these systems. In so doing, both designers and users could take this\nintrinsic characteristic of medicine more seriously and consider alternative\napproaches that do not \"sweep uncertainty under the rug\" within an objectivist\nfiction, which everyone can come up by believing as true.\n",
"title": "A giant with feet of clay: on the validity of the data that feed machine learning in medicine"
} | null | null | null | null | true | null | 2713 | null | Default | null | null |
null | {
"abstract": " We present a near-infrared direct imaging search for accretion signatures of\npossible protoplanets around the young stellar object (YSO) TW Hya, a\nmulti-ring disk exhibiting evidence of planet formation. The Pa$\\beta$ line\n(1.282 $\\mu$m) is an indication of accretion onto a protoplanet, and its\nintensity is much higher than that of blackbody radiation from the protoplanet.\nWe focused on the Pa$\\beta$ line and performed Keck/OSIRIS spectroscopic\nobservations. Although spectral differential imaging (SDI) reduction detected\nno accretion signatures, the results of the present study allowed us to set\n5$\\sigma$ detection limits for Pa$\\beta$ emission of $5.8\\times10^{-18}$ and\n$1.5\\times10^{-18}$ erg/s/cm$^2$ at 0\\farcs4 and 1\\farcs6, respectively. We\nconsidered the mass of potential planets using theoretical simulations of\ncircumplanetary disks and hydrogen emission. The resulting masses were $1.45\\pm\n0.04$ M$_{\\rm J}$ and $2.29 ^{+0.03}_{-0.04}$ M$_{\\rm J}$ at 25 and 95 AU,\nrespectively, which agree with the detection limits obtained from previous\nbroadband imaging. The detection limits should allow the identification of\nprotoplanets as small as $\\sim$1 M$_{\\rm J}$, which may assist in direct\nimaging searches around faint YSOs for which extreme adaptive optics\ninstruments are unavailable.\n",
"title": "Constraining accretion signatures of exoplanets in the TW Hya transitional disk"
} | null | null | [
"Physics"
]
| null | true | null | 2714 | null | Validated | null | null |
null | {
"abstract": " We prove that if a contact 3-manifold admits an open book decomposition of\ngenus 0, a certain intersection pattern cannot appear in the homology of any of\nits symplectic fillings, and morever, fillings cannot contain certain\nsymplectic surfaces. Applying these obstructions to canonical contact\nstructures on links of normal surface singularities, we show that links of\nisolated singularities of surfaces in the complex 3-space are planar only in\nthe case of $A_n$-singularities, and in general characterize completely planar\nlinks of normal surface singularities (in terms of their resolution graphs). We\nalso establish non-planarity of tight contact structures on certain small\nSeifert fibered L-spaces and of contact structures compatible with open books\ngiven by a boundary multi-twist on a page of positive genus. Additionally, we\nprove that every finitely presented group is the fundamental group of a\nLeschetz fibration with planar fibers.\n",
"title": "Obstructions to planarity of contact 3-manifolds"
} | null | null | null | null | true | null | 2715 | null | Default | null | null |
null | {
"abstract": " Under the usual condition that the volume of a geodesic ball is close to the\nEuclidean one or the injectivity radii is bounded from below, we prove a lower\nbound of the $C^{\\alpha} W^{1, q}$ harmonic radius for manifolds with bounded\nBakry-Émery Ricci curvature when the gradient of the potential is bounded.\nUnder these conditions, the regularity that can be imposed on the metrics under\nharmonic coordinates is only $C^\\alpha W^{1,q}$, where $q>2n$ and $n$ is the\ndimension of the manifolds. This is almost 1 order lower than that in the\nclassical $C^{1,\\alpha} W^{2, p}$ harmonic coordinates under bounded Ricci\ncurvature condition [And]. The loss of regularity induces some difference in\nthe method of proof, which can also be used to address the detail of $W^{2, p}$\nconvergence in the classical case.\nBased on this lower bound and the techniques in [ChNa2] and [WZ], we extend\nCheeger-Naber's Codimension 4 Theorem in [ChNa2] to the case where the\nmanifolds have bounded Bakry-Émery Ricci curvature when the gradient of the\npotential is bounded. This result covers Ricci solitons when the gradient of\nthe potential is bounded.\nDuring the proof, we will use a Green's function argument and adopt a linear\nalgebra argument in [Bam]. A new ingradient is to show that the diagonal\nentries of the matrices in the Transformation Theorem are bounded away from 0.\nTogether these seem to simplify the proof of the Codimension 4 Theorem, even in\nthe case where Ricci curvature is bounded.\n",
"title": "Bounds on harmonic radius and limits of manifolds with bounded Bakry-Émery Ricci curvature"
} | null | null | null | null | true | null | 2716 | null | Default | null | null |
null | {
"abstract": " Uncertainty computation in deep learning is essential to design robust and\nreliable systems. Variational inference (VI) is a promising approach for such\ncomputation, but requires more effort to implement and execute compared to\nmaximum-likelihood methods. In this paper, we propose new natural-gradient\nalgorithms to reduce such efforts for Gaussian mean-field VI. Our algorithms\ncan be implemented within the Adam optimizer by perturbing the network weights\nduring gradient evaluations, and uncertainty estimates can be cheaply obtained\nby using the vector that adapts the learning rate. This requires lower memory,\ncomputation, and implementation effort than existing VI methods, while\nobtaining uncertainty estimates of comparable quality. Our empirical results\nconfirm this and further suggest that the weight-perturbation in our algorithm\ncould be useful for exploration in reinforcement learning and stochastic\noptimization.\n",
"title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam"
} | null | null | null | null | true | null | 2717 | null | Default | null | null |
null | {
"abstract": " Availability of research datasets is keystone for health and life science\nstudy reproducibility and scientific progress. Due to the heterogeneity and\ncomplexity of these data, a main challenge to be overcome by research data\nmanagement systems is to provide users with the best answers for their search\nqueries. In the context of the 2016 bioCADDIE Dataset Retrieval Challenge, we\ninvestigate a novel ranking pipeline to improve the search of datasets used in\nbiomedical experiments. Our system comprises a query expansion model based on\nword embeddings, a similarity measure algorithm that takes into consideration\nthe relevance of the query terms, and a dataset categorisation method that\nboosts the rank of datasets matching query constraints. The system was\nevaluated using a corpus with 800k datasets and 21 annotated user queries. Our\nsystem provides competitive results when compared to the other challenge\nparticipants. In the official run, it achieved the highest infAP among the\nparticipants, being +22.3% higher than the median infAP of the participant's\nbest submissions. Overall, it is ranked at top 2 if an aggregated metric using\nthe best official measures per participant is considered. The query expansion\nmethod showed positive impact on the system's performance increasing our\nbaseline up to +5.0% and +3.4% for the infAP and infNDCG metrics, respectively.\nOur similarity measure algorithm seems to be robust, in particular compared to\nDivergence From Randomness framework, having smaller performance variations\nunder different training conditions. Finally, the result categorization did not\nhave significant impact on the system's performance. We believe that our\nsolution could be used to enhance biomedical dataset management systems. In\nparticular, the use of data driven query expansion methods could be an\nalternative to the complexity of biomedical terminologies.\n",
"title": "Improving average ranking precision in user searches for biomedical research datasets"
} | null | null | null | null | true | null | 2718 | null | Default | null | null |
null | {
"abstract": " Deep learning models for graphs have achieved strong performance for the task\nof node classification. Despite their proliferation, currently there is no\nstudy of their robustness to adversarial attacks. Yet, in domains where they\nare likely to be used, e.g. the web, adversaries are common. Can deep learning\nmodels for graphs be easily fooled? In this work, we introduce the first study\nof adversarial attacks on attributed graphs, specifically focusing on models\nexploiting ideas of graph convolutions. In addition to attacks at test time, we\ntackle the more challenging class of poisoning/causative attacks, which focus\non the training phase of a machine learning model. We generate adversarial\nperturbations targeting the node's features and the graph structure, thus,\ntaking the dependencies between instances in account. Moreover, we ensure that\nthe perturbations remain unnoticeable by preserving important data\ncharacteristics. To cope with the underlying discrete domain we propose an\nefficient algorithm Nettack exploiting incremental computations. Our\nexperimental study shows that accuracy of node classification significantly\ndrops even when performing only few perturbations. Even more, our attacks are\ntransferable: the learned attacks generalize to other state-of-the-art node\nclassification models and unsupervised approaches, and likewise are successful\neven when only limited knowledge about the graph is given.\n",
"title": "Adversarial Attacks on Neural Networks for Graph Data"
} | null | null | null | null | true | null | 2719 | null | Default | null | null |
null | {
"abstract": " From the energy-momentum tensors of the electromagnetic field and the\nmechanical energy-momentum, the equations of energy conservation and balance of\nelectromagnetic and mechanical forces are obtained. The equation for the\nAbraham force in a dielectric medium with losses is obtained\n",
"title": "Electromagnetic energy, momentum and forces in a dielectric medium with losses"
} | null | null | null | null | true | null | 2720 | null | Default | null | null |
null | {
"abstract": " The control of electric currents in solids is at the origin of the modern\nelectronics revolution which has driven our daily life since the second half of\n20th century. Surprisingly, to date, there is no thermal analog for a control\nof heat flux. Here, we summarize the very last developments carried out in this\ndirection to control heat exchanges by radiation both in near and far-field in\ncomplex architecture networks.\n",
"title": "Thermotronics: toward nanocircuits to manage radiative heat flux"
} | null | null | null | null | true | null | 2721 | null | Default | null | null |
null | {
"abstract": " We present the results from the first measurements of the Time-Correlated\nPulse-Height (TCPH) distributions from 4.5 kg sphere of $\\alpha$-phase\nweapons-grade plutonium metal in five configurations: bare, reflected by 1.27\ncm and 2.54 cm of tungsten, and 2.54 cm and 7.62 cm of polyethylene. A new\nmethod for characterizing source multiplication and shielding configuration is\nalso demonstrated. The method relies on solving for the underlying fission\nchain timing distribution that drives the spreading of the measured TCPH\ndistribution. We found that a gamma distribution fits the fission chain timing\ndistribution well and that the fit parameters correlate with both\nmultiplication (rate parameter) and shielding material types (shape parameter).\nThe source-to-detector distance was another free parameter that we were able to\noptimize, and proved to be the most well constrained parameter. MCNPX-PoliMi\nsimulations were used to complement the measurements and help illustrate trends\nin these parameters and their relation to multiplication and the amount and\ntype of material coupled to the subcritical assembly.\n",
"title": "Multiplication and Presence of Shielding Material from Time-Correlated Pulse-Height Measurements of Subcritical Plutonium Assemblies"
} | null | null | null | null | true | null | 2722 | null | Default | null | null |
null | {
"abstract": " This paper provides a mathematical approach to study metasurfaces in non flat\ngeometries. Analytical conditions between the curvature of the surface and the\nset of refracted directions are introduced to guarantee the existence of phase\ndiscontinuities. The approach contains both the near and far field cases. A\nstarting point is the formulation of a vector Snell law in presence of abrupt\ndiscontinuities on the interfaces.\n",
"title": "General Refraction Problems with Phase Discontinuity"
} | null | null | null | null | true | null | 2723 | null | Default | null | null |
null | {
"abstract": " At CCS 2015 Naveed et al. presented first attacks on efficiently searchable\nencryption, such as deterministic and order-preserving encryption. These\nplaintext guessing attacks have been further improved in subsequent work, e.g.\nby Grubbs et al. in 2016. Such cryptanalysis is crucially important to sharpen\nour understanding of the implications of security models. In this paper we\npresent an efficiently searchable, encrypted data structure that is provably\nsecure against these and even more powerful chosen plaintext attacks. Our data\nstructure supports logarithmic-time search with linear space complexity. The\nindices of our data structure can be used to search by standard comparisons and\nhence allow easy retrofitting to existing database management systems. We\nimplemented our scheme and show that its search time overhead is only 10\nmilliseconds compared to non-secure search.\n",
"title": "An Efficiently Searchable Encrypted Data Structure for Range Queries"
} | null | null | [
"Computer Science"
]
| null | true | null | 2724 | null | Validated | null | null |
null | {
"abstract": " A condensate of spin-1 atoms frozen in a unique spatial mode may possess large\ninternal degrees of freedom. The scattering amplitudes of polarized cold atoms\nscattered by the condensate are obtained with the method of fractional\nparentage coefficients that treats the spin degrees of freedom rigorously.\nChannels with scattering cross sections enhanced by the square of the atom\nnumber of the condensate are found. Entanglement between the condensate and the\npropagating atom can be established by the scattering. The entanglement entropy\nis analytically obtained for arbitrary initial states. Our results also give a\nhint for the establishment of quantum thermal ensembles in the hyperfine space.\n",
"title": "Hyperfine state entanglement of spinor BEC and scattering atom"
} | null | null | [
"Physics"
]
| null | true | null | 2725 | null | Validated | null | null |
null | {
"abstract": " Empirically, neural networks that attempt to learn programs from data have\nexhibited poor generalizability. Moreover, it has traditionally been difficult\nto reason about the behavior of these models beyond a certain level of input\ncomplexity. In order to address these issues, we propose augmenting neural\narchitectures with a key abstraction: recursion. As an application, we\nimplement recursion in the Neural Programmer-Interpreter framework on four\ntasks: grade-school addition, bubble sort, topological sort, and quicksort. We\ndemonstrate superior generalizability and interpretability with small amounts\nof training data. Recursion divides the problem into smaller pieces and\ndrastically reduces the domain of each neural network component, making it\ntractable to prove guarantees about the overall system's behavior. Our\nexperience suggests that in order for neural architectures to robustly learn\nprogram semantics, it is necessary to incorporate a concept like recursion.\n",
"title": "Making Neural Programming Architectures Generalize via Recursion"
} | null | null | null | null | true | null | 2726 | null | Default | null | null |
null | {
"abstract": " When conducting large scale inference, such as genome-wide association\nstudies or image analysis, nominal $p$-values are often adjusted to improve\ncontrol over the family-wise error rate (FWER). When the majority of tests are\nnull, procedures controlling the false discovery rate (FDR) can be improved by\nreplacing the theoretical global null with its empirical estimate. However,\nthese other adjustment procedures remain sensitive to the working model\nassumption. Here we propose two key ideas to improve inference in this space.\nFirst, we propose $p$-values that are standardized to the empirical null\ndistribution (instead of the theoretical null). Second, we propose model\naveraging $p$-values by bootstrap aggregation (Bagging) to account for model\nuncertainty and selection procedures. The combination of these two key ideas\nyields bagged empirical null $p$-values (BEN $p$-values) that often\ndramatically alter the rank ordering of significant findings. Moreover, we find\nthat a multidimensional selection criterion based on BEN $p$-values and bagged\nmodel fit statistics is more likely to yield reproducible findings. A\nre-analysis of the famous Golub Leukemia data is presented to illustrate these\nideas. We uncovered new findings in these data, not detected previously, that\nare backed by published bench work pre-dating the Golub experiment. A\npseudo-simulation using the leukemia data is also presented to explore the\nstability of this approach under broader conditions, and illustrates the\nsuperiority of the BEN $p$-values compared to the other approaches.\n",
"title": "Bagged Empirical Null p-values: A Method to Account for Model Uncertainty in Large Scale Inference"
} | null | null | [
"Statistics"
]
| null | true | null | 2727 | null | Validated | null | null |
null | {
"abstract": " Objects moving in fluids experience patterns of stress on their surfaces\ndetermined by the geometry of nearby boundaries. Flows at low Reynolds number,\nas occur in microscopic vessels such as capillaries in biological tissues, have\nrelatively simple relations between stresses and nearby vessel geometry. Using\nthese relations, this paper shows how a microscopic robot moving with such\nflows can use changes in stress on its surface to identify when it encounters\nvessel branches.\n",
"title": "Identifying Vessel Branching from Fluid Stresses on Microscopic Robots"
} | null | null | null | null | true | null | 2728 | null | Default | null | null |
null | {
"abstract": " Recent studies show that the fast growing expansion of wind power generation\nmay lead to extremely high levels of price volatility in wholesale electricity\nmarkets. Storage technologies, regardless of their specific forms e.g.\npump-storage hydro, large-scale or distributed batteries, are capable of\nalleviating the extreme price volatility levels due to their energy usage time\nshifting, fast-ramping and price arbitrage capabilities. In this paper, we\npropose a stochastic bi-level optimization model to find the optimal nodal\nstorage capacities required to achieve a certain price volatility level in a\nhighly volatile electricity market. The decision on storage capacities is made\nin the upper level problem and the operation of strategic/regulated generation,\nstorage and transmission players is modeled at the lower level problem using an\nextended Cournot-based stochastic game. The South Australia (SA) electricity\nmarket, which has recently experienced high levels of price volatility, is\nconsidered as the case study for the proposed storage allocation framework. Our\nnumerical results indicate that 80% price volatility reduction in SA\nelectricity market can be achieved by installing either 340 MWh regulated\nstorage or 420 MWh strategic storage. In other words, regulated storage firms\nare more efficient in reducing the price volatility than strategic storage\nfirms.\n",
"title": "Impact of Optimal Storage Allocation on Price Volatility in Electricity Markets"
} | null | null | null | null | true | null | 2729 | null | Default | null | null |
null | {
"abstract": " Short-circuit evaluation denotes the semantics of propositional connectives\nin which the second argument is evaluated only if the first argument does not\nsuffice to determine the value of the expression. Free short-circuit logic is\nthe equational logic in which compound statements are evaluated from left to\nright, while atomic evaluations are not memorised throughout the evaluation,\ni.e., evaluations of distinct occurrences of an atom in a compound statement\nmay yield different truth values. We provide a simple semantics for free SCL\nand an independent axiomatisation. Finally, we discuss evaluation strategies,\nsome other SCLs, and side effects.\n",
"title": "An independent axiomatisation for free short-circuit logic"
} | null | null | [
"Computer Science",
"Mathematics"
]
| null | true | null | 2730 | null | Validated | null | null |
null | {
"abstract": " We describe a communication game, and a conjecture about this game, whose\nproof would imply the well-known Sensitivity Conjecture asserting a polynomial\nrelation between sensitivity and block sensitivity for Boolean functions. The\nauthor defined this game and observed the connection in Dec. 2013 - Jan. 2014.\nThe game and connection were independently discovered by Gilmer, Koucký, and\nSaks, who also established further results about the game (not proved by us)\nand published their results in ITCS '15 [GKS15].\nThis note records our independent work, including some observations that did\nnot appear in [GKS15]. Namely, the main conjecture about this communication\ngame would imply not only the Sensitivity Conjecture, but also a stronger\nhypothesis raised by Chung, Füredi, Graham, and Seymour [CFGS88]; and,\nanother related conjecture we pose about a \"query-bounded\" variant of our\ncommunication game would suffice to answer a question of Aaronson, Ambainis,\nBalodis, and Bavarian [AABB14] about the query complexity of the \"Weak Parity\"\nproblem---a question whose resolution was previously shown by [AABB14] to\nfollow from a proof of the Chung et al. hypothesis.\n",
"title": "A Note on a Communication Game"
} | null | null | null | null | true | null | 2731 | null | Default | null | null |
null | {
"abstract": " For a finite field of odd cardinality $q$, we show that the sequence of\niterates of $aX^2+c$, starting at $0$, always recurs after $O(q/\\log\\log q)$\nsteps. For $X^2+1$ the same is true for any starting value. We suggest that the\ntraditional \"Birthday Paradox\" model is inappropriate for iterates of $X^3+c$,\nwhen $q$ is 2 mod 3.\n",
"title": "Iteration of Quadratic Polynomials Over Finite Fields"
} | null | null | null | null | true | null | 2732 | null | Default | null | null |
null | {
"abstract": " We present 1-second cadence observations of M32 (NGC221) with the CHIMERA\ninstrument at the Hale 200-inch telescope of the Palomar Observatory. Using\nfield stars as a baseline for relative photometry, we are able to construct a\nlight curve of the nucleus in the g-prime and r-prime band with 1sigma=36\nmilli-mag photometric stability. We derive a temporal power spectrum for the\nnucleus and find no evidence for a time-variable signal above the noise as\nwould be expected if the nuclear black hole were accreting gas. Thus, we are\nunable to constrain the spin of the black hole although future work will use\nthis powerful instrument to target more actively accreting black holes. Given\nthe black hole mass of (2.5+/-0.5)*10^6 Msun inferred from stellar kinematics,\nthe absence of a contribution from a nuclear time-variable signal places an\nupper limit on the accretion rate which is 4.6*10^{-8} of the Eddington rate, a\nfactor of two more stringent than past upper limits from HST. The low mass of\nthe black hole despite the high stellar density suggests that the gas liberated\nby stellar interactions was primarily at early cosmic times when the low-mass\nblack hole had a small Eddington luminosity. This is at least partly driven by\na top-heavy stellar initial mass function at early cosmic times which is an\nefficient producer of stellar mass black holes. The implication is that\nsupermassive black holes likely arise from seeds formed through the coalescence\nof 3-100 Msun mass black holes that then accrete gas produced through stellar\ninteraction processes.\n",
"title": "Constraints on the Growth and Spin of the Supermassive Black Hole in M32 From High Cadence Visible Light Observations"
} | null | null | null | null | true | null | 2733 | null | Default | null | null |
null | {
"abstract": " Bipartite networks manifest as a stream of edges that represent transactions,\ne.g., purchases by retail customers. Many machine learning applications employ\nneighborhood-based measures to characterize the similarity among the nodes,\nsuch as the pairwise number of common neighbors (CN) and related metrics. While\nthe number of node pairs that share neighbors is potentially enormous, only a\nrelatively small proportion of them have many common neighbors. This motivates\nfinding a weighted sampling approach to preferentially sample these node pairs.\nThis paper presents a new sampling algorithm that provides a fixed size\nunbiased estimate of the similarity matrix resulting from a bipartite graph\nstream projection. The algorithm has two components. First, it maintains a\nreservoir of sampled bipartite edges with sampling weights that favor selection\nof high similarity nodes. Second, arriving edges generate a stream of\n\\textsl{similarity updates} based on their adjacency with the current sample.\nThese updates are aggregated in a second reservoir sample-based stream\naggregator to yield the final unbiased estimate. Experiments on real world\ngraphs show that a 10% sample at each stage yields estimates of high similarity\nedges with weighted relative errors of about 1%.\n",
"title": "Sampling for Approximate Bipartite Network Projection"
} | null | null | null | null | true | null | 2734 | null | Default | null | null |
null | {
"abstract": " Background: Performance bugs can lead to severe issues regarding computation\nefficiency, power consumption, and user experience. Locating these bugs is a\ndifficult task because developers have to judge for every costly operation\nwhether runtime is consumed necessarily or unnecessarily. Objective: We wanted\nto investigate how developers, when locating performance bugs, navigate through\nthe code, understand the program, and communicate the detected issues. Method:\nWe performed a qualitative user study observing twelve developers trying to fix\ndocumented performance bugs in two open source projects. The developers worked\nwith a profiling and analysis tool that visually depicts runtime information in\na list representation and embedded into the source code view. Results: We\nidentified typical navigation strategies developers used for pinpointing the\nbug, for instance, following method calls based on runtime consumption. The\nintegration of visualization and code helped developers to understand the bug.\nSketches visualizing data structures and algorithms turned out to be valuable\nfor externalizing and communicating the comprehension process for complex bugs.\nConclusion: Fixing a performance bug is a code comprehension and navigation\nproblem. Flexible navigation features based on executed methods and a close\nintegration of source code and performance information support the process.\n",
"title": "Navigate, Understand, Communicate: How Developers Locate Performance Bugs"
} | null | null | null | null | true | null | 2735 | null | Default | null | null |
null | {
"abstract": " Classical coupling constructions arrange for copies of the \emph{same} Markov\nprocess started at two \emph{different} initial states to become equal as soon\nas possible. In this paper, we consider an alternative coupling framework in\nwhich one seeks to arrange for two \emph{different} Markov (or other\nstochastic) processes to remain equal for as long as possible, when started in\nthe \emph{same} state. We refer to this \"un-coupling\" or \"maximal agreement\"\nconstruction as \emph{MEXIT}, standing for \"maximal exit\". After highlighting\nthe importance of un-coupling arguments in a few key statistical and\nprobabilistic settings, we develop an explicit MEXIT construction for\nstochastic processes in discrete time with countable state-space. This\nconstruction is generalized to random processes on general state-space running\nin continuous time, and then exemplified by discussion of MEXIT for Brownian\nmotions with two different constant drifts.\n",
"title": "MEXIT: Maximal un-coupling times for stochastic processes"
} | null | null | [
"Mathematics"
]
| null | true | null | 2736 | null | Validated | null | null |
null | {
"abstract": " The primary motivation of much of software analytics is decision making. How\nto make these decisions? Should one make decisions based on lessons that arise\nfrom within a particular project? Or should one generate these decisions from\nacross multiple projects? This work is an attempt to answer these questions.\nOur work was motivated by a realization that much of the current generation of\nsoftware analytics tools focuses primarily on prediction. Indeed prediction is a\nuseful task, but it is usually followed by \"planning\" about what actions need\nto be taken. This research seeks to address the planning task by seeking\nmethods that support actionable analytics that offer clear guidance on what to\ndo. Specifically, we propose the XTREE and BELLTREE algorithms for generating a set\nof actionable plans within and across projects. Each of these plans, if\nfollowed, will improve the quality of the software project.\n",
"title": "Learning Effective Changes for Software Projects"
} | null | null | null | null | true | null | 2737 | null | Default | null | null |
null | {
"abstract": " The increasing richness in volume, and especially in types of data in the\nfinancial domain provides unprecedented opportunities to understand the stock\nmarket more comprehensively and makes price prediction more accurate than\nbefore. However, these data also bring challenges to classic statistical approaches\nsince those models might be constrained to a certain type of data. Aiming at\naggregating differently sourced information and offering type-free capability\nto existing models, a framework for predicting the stock market in scenarios with\nmixed data, including scalar data, compositional data (pie-like) and functional\ndata (curve-like), is established. The presented framework is\nmodel-independent, as it serves as an interface to multiple types of data and\ncan be combined with various prediction models, and it is proved to be\neffective through numerical simulations. Regarding price prediction, we\nincorporate the trading volume (scalar data), intraday return series\n(functional data), and investors' emotions from social media (compositional\ndata) through the framework to competently forecast whether the market goes up\nor down at opening on the next day. The strong explanatory power of the\nframework is further demonstrated. Specifically, it is found that the intraday\nreturns impact the following opening prices differently between bearish and\nbullish markets. And it is not at the beginning of the bearish market but in\nthe subsequent period that the investors' \"fear\" comes to be indicative.\nThe framework would help extend existing prediction models easily to scenarios\nwith multiple types of data and shed light on a more systemic understanding of\nthe stock market.\n",
"title": "Aggregating multiple types of complex data in stock market prediction: A model-independent framework"
} | null | null | null | null | true | null | 2738 | null | Default | null | null |
null | {
"abstract": " Taxi demand prediction is an important building block to enabling intelligent\ntransportation systems in a smart city. An accurate prediction model can help\nthe city pre-allocate resources to meet travel demand and to reduce empty taxis\non streets which waste energy and worsen the traffic congestion. With the\nincreasing popularity of taxi requesting services such as Uber and Didi Chuxing\n(in China), we are able to collect large-scale taxi demand data continuously.\nHow to utilize such big data to improve the demand prediction is an interesting\nand critical real-world problem. Traditional demand prediction methods mostly\nrely on time series forecasting techniques, which fail to model the complex\nnon-linear spatial and temporal relations. Recent advances in deep learning\nhave shown superior performance on traditionally challenging tasks such as\nimage classification by learning the complex features and correlations from\nlarge-scale data. This breakthrough has inspired researchers to explore deep\nlearning techniques on traffic prediction problems. However, existing methods\non traffic prediction have only considered spatial relation (e.g., using CNN)\nor temporal relation (e.g., using LSTM) independently. We propose a Deep\nMulti-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial\nand temporal relations. Specifically, our proposed model consists of three\nviews: temporal view (modeling correlations between future demand values with\nnear time points via LSTM), spatial view (modeling local spatial correlation\nvia local CNN), and semantic view (modeling correlations among regions sharing\nsimilar temporal patterns). Experiments on large-scale real taxi demand data\ndemonstrate effectiveness of our approach over state-of-the-art methods.\n",
"title": "Deep Multi-View Spatial-Temporal Network for Taxi Demand Prediction"
} | null | null | null | null | true | null | 2739 | null | Default | null | null |
null | {
"abstract": " We have explored the optimal frequency of interstellar photon communications\nand benchmarked other particles as information carriers in previous papers of\nthis series. We now compare the latency and bandwidth of sending probes with\ninscribed matter. Durability requirements such as shields against dust and\nradiation, as well as data duplication, add negligible weight overhead at\nvelocities <0.2c. Probes may arrive in full, while most of a photon beam is\nlost to diffraction. Probes can be more energy efficient per bit, and can have\nhigher bandwidth, compared to classical communication, unless a photon receiver\nis placed in a stellar gravitational lens. The probe's advantage dominates by\norder of magnitude for long distances (kpc) and low velocities (<0.1c) at the\ncost of higher latency.\n",
"title": "Interstellar communication. VII. Benchmarking inscribed matter probes"
} | null | null | [
"Physics"
]
| null | true | null | 2740 | null | Validated | null | null |
null | {
"abstract": " We present an asymptotic criterion to determine the optimal number of\nclusters in k-means. We consider k-means as data compression, and propose to\nadopt the number of clusters that minimizes the estimated description length\nafter compression. Here we report two types of compression ratio based on two\nways to quantify the description length of data after compression. This\napproach further offers a way to evaluate whether clusters obtained with\nk-means have a hierarchical structure by examining whether multi-stage\ncompression can further reduce the description length. We applied our criteria\nfor determining the number of clusters to synthetic data and empirical\nneuroimaging data to observe the behavior of the criteria across different\ntypes of data sets and the suitability of the two types of criteria for\ndifferent datasets. We found that our method can offer reasonable clustering\nresults that are useful for dimension reduction. While our numerical results\nrevealed a dependency of our criteria on various aspects of the dataset, such as\nthe dimensionality, the description length approach proposed here provides useful\nguidance for determining the number of clusters in a principled manner when the\nunderlying properties of the data are unknown and only inferred from\nobservation of the data.\n",
"title": "A description length approach to determining the number of k-means clusters"
} | null | null | null | null | true | null | 2741 | null | Default | null | null |
null | {
"abstract": " In this note we study the Seifert rational homology spheres with two\ncomplementary legs, i.e. with a pair of invariants whose fractions add up to\none. We give a complete classification of the Seifert manifolds with 3\nexceptional fibers and two complementary legs which bound rational homology\nballs. The result translates in a statement on the sliceness of some Montesinos\nknots.\n",
"title": "Complementary legs and rational balls"
} | null | null | null | null | true | null | 2742 | null | Default | null | null |
null | {
"abstract": " We investigate the impact of resonant gravitational waves on quadrupole\nacoustic modes of Sun-like stars located near stellar black hole binary\nsystems (such as GW150914 and GW151226). We find that the stimulation of the\nlow-overtone modes by gravitational radiation can lead to sizeable photometric\namplitude variations, much larger than the predictions for amplitudes driven by\nturbulent convection, which in turn are consistent with the photometric\namplitudes observed in most Sun-like stars. Using accurate stellar evolution\nmodels with up-to-date stellar physics, we predict photometric amplitude\nvariations of $1$ -- $10^3$ ppm for a solar mass star located at a distance\nbetween 1 au and 10 au from the black hole binary, and belonging to the same\nmulti-star system. The observation of such a phenomenon will be within the\nreach of the Plato mission because the telescope will observe several portions of\nthe Milky Way, many of which are regions of high stellar density with a\nsubstantial mixed population of Sun-like stars and black hole binaries.\n",
"title": "Gravitational Waves from Stellar Black Hole Binaries and the Impact on Nearby Sun-like Stars"
} | null | null | [
"Physics"
]
| null | true | null | 2743 | null | Validated | null | null |
null | {
"abstract": " We present the detection of long-period RV variations in HD 36384, HD 52030,\nand HD 208742 by using the high-resolution, fiber-fed Bohyunsan Observatory\nEchelle Spectrograph (BOES) for the precise radial velocity (RV) survey of\nabout 200 northern circumpolar stars. Analyses of RV data, chromospheric\nactivity indicators, and bisector variations spanning about five years suggest\nthat the RV variations are compatible with planet or brown dwarf companions in\nKeplerian motion. However, HD 36384 shows photometric variations with a period\nvery close to that of the RV variations as well as amplitude variations in the\nweighted wavelet Z-transform (WWZ) analysis, which argues that the RV\nvariations in HD 36384 are from stellar pulsations. Assuming that the\ncompanion hypothesis is correct, HD 52030 hosts a companion with a minimum mass\nof 13.3 M_Jup orbiting in 484 days at a distance of 1.2 AU. HD 208742 hosts a\ncompanion of 14.0 M_Jup at 1.5 AU with a period of 602 days. All stars are\nlocated at the asymptotic giant branch (AGB) stage on the H-R diagram, having\nundergone the helium flash and left the giant clump. With stellar radii of 53.0\nR_Sun and 57.2 R_Sun for HD 52030 and HD 208742, respectively, these stars may\nbe the largest yet, in terms of stellar radius, found to host sub-stellar\ncompanions. However, given possible RV amplitude variations and the fact that\nthese are highly evolved stars, the planet hypothesis is not yet certain.\n",
"title": "Search for Exoplanets around Northern Circumpolar Stars- II. The Detection of Radial Velocity Variations in M Giant Stars HD 36384, HD 52030, and HD 208742"
} | null | null | null | null | true | null | 2744 | null | Default | null | null |
null | {
"abstract": " A personalized learning system needs a large pool of items for learners to\nsolve. When working with a large pool of items, it is useful to measure the\nsimilarity of items. We outline a general approach to measuring the similarity\nof items and discuss specific measures for items used in introductory\nprogramming. Evaluation of quality of similarity measures is difficult. To this\nend, we propose an evaluation approach utilizing three levels of abstraction.\nWe illustrate our approach to measuring similarity and provide evaluation using\nitems from three diverse programming environments.\n",
"title": "Measuring Item Similarity in Introductory Programming: Python and Robot Programming Case Studies"
} | null | null | null | null | true | null | 2745 | null | Default | null | null |
null | {
"abstract": " This paper is a contribution to the study of the universal Horn fragment of\npredicate fuzzy logics, focusing on the proof of the existence of free models\nof theories of Horn clauses over Rational Pavelka predicate logic. We define\nthe notion of a term structure associated to every consistent theory T over\nRational Pavelka predicate logic and we prove that the term models of T are\nfree on the class of all models of T. Finally, it is shown that if T is a set\nof Horn clauses, the term structure associated to T is a model of T.\n",
"title": "Term Models of Horn Clauses over Rational Pavelka Predicate Logic"
} | null | null | null | null | true | null | 2746 | null | Default | null | null |
null | {
"abstract": " Supermassive black hole (SMBH) binaries residing at the core of merging\ngalaxies are recently found to be strongly affected by the rotation of their\nhost galaxies. The highly eccentric orbits that form when the host is\ncounterrotating emit strong bursts of gravitational waves that propel rapid\nSMBH binary coalescence. Most prior work, however, focused on planar orbits and\na uniform rotation profile, an unlikely interaction configuration. However, the\ncoupling between rotation and SMBH binary evolution appears to be such a strong\ndynamical process that it warrants further investigation. This study uses\ndirect N-body simulations to isolate the effect of galaxy rotation in more\nrealistic interactions. In particular, we systematically vary the SMBH orbital\nplane with respect to the galaxy rotation axis, the radial extent of the\nrotating component, and the initial eccentricity of the SMBH binary orbit. We\nfind that the initial orbital plane orientation and eccentricity alone can\nchange the inspiral time by an order of magnitude. Because SMBH binary inspiral\nand merger is such a loud gravitational wave source, these studies are critical\nfor the future gravitational wave detector, LISA, an ESA/NASA mission currently\nset to launch by 2034.\n",
"title": "Galaxy Rotation and Supermassive Black Hole Binary Evolution"
} | null | null | null | null | true | null | 2747 | null | Default | null | null |
null | {
"abstract": " Web applications require access to the file-system for many different tasks.\nWhen analyzing the security of a web application, security analysts should\nthus consider the impact that file-system operations have on the security of\nthe whole application. Moreover, the analysis should take into consideration\nhow file-system vulnerabilities might interact with other vulnerabilities,\nleading an attacker to breach into the web application. In this paper, we first\npropose a classification of file-system vulnerabilities, and then, based on\nthis classification, we present a formal approach that allows one to exploit\nfile-system vulnerabilities. We give a formal representation of web\napplications, databases and file-systems, and show how to reason about\nfile-system vulnerabilities. We also show how to combine file-system\nvulnerabilities and SQL-Injection vulnerabilities for the identification of\ncomplex, multi-stage attacks. We have developed an automatic tool that\nimplements our approach and we show its efficiency by discussing several\nreal-world case studies, which are witness to the fact that our tool can\ngenerate, and exploit, complex attacks that, to the best of our knowledge, no\nother state-of-the-art tool for the security of web applications can find.\n",
"title": "A Formal Approach to Exploiting Multi-Stage Attacks based on File-System Vulnerabilities of Web Applications (Extended Version)"
} | null | null | [
"Computer Science"
]
| null | true | null | 2748 | null | Validated | null | null |
null | {
"abstract": " Fully realizing the potential of acceleration for Deep Neural Networks (DNNs)\nrequires understanding and leveraging algorithmic properties. This paper builds\nupon the algorithmic insight that the bitwidth of operations in DNNs can be reduced\nwithout compromising their classification accuracy. However, to prevent\naccuracy loss, the bitwidth varies significantly across DNNs and it may even be\nadjusted for each layer. Thus, a fixed-bitwidth accelerator would either offer\nlimited benefits to accommodate the worst-case bitwidth requirements, or lead\nto a degradation in final accuracy. To alleviate these deficiencies, this work\nintroduces dynamic bit-level fusion/decomposition as a new dimension in the\ndesign of DNN accelerators. We explore this dimension by designing Bit Fusion,\na bit-flexible accelerator that constitutes an array of bit-level processing\nelements that dynamically fuse to match the bitwidth of individual DNN layers.\nThis flexibility in the architecture enables minimizing the computation and the\ncommunication at the finest granularity possible with no loss in accuracy. We\nevaluate the benefits of Bit Fusion using eight real-world feed-forward and\nrecurrent DNNs. The proposed microarchitecture is implemented in Verilog and\nsynthesized in 45 nm technology. Using the synthesis results and cycle-accurate\nsimulation, we compare the benefits of Bit Fusion to two state-of-the-art DNN\naccelerators, Eyeriss and Stripes. In the same area, frequency, and process\ntechnology, Bit Fusion offers 3.9x speedup and 5.1x energy savings over Eyeriss.\nCompared to Stripes, Bit Fusion provides 2.6x speedup and 3.9x energy reduction\nat the 45 nm node when the Bit Fusion area and frequency are set to those of Stripes.\nScaling to the GPU technology node of 16 nm, Bit Fusion almost matches the\nperformance of a 250-Watt Titan Xp, which uses 8-bit vector instructions, while\nBit Fusion merely consumes 895 milliwatts of power.\n",
"title": "Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks"
} | null | null | null | null | true | null | 2749 | null | Default | null | null |
null | {
"abstract": " The physical layer security in the up-link of the wireless communication\nsystems is often modeled as the multiple access wiretap channel (MAC-WT), and\nrecently it has received a lot of attention. In this paper, the MAC-WT has been\nre-visited by considering the situation that the legitimate receiver feeds his\nreceived channel output back to the transmitters via two noiseless channels,\nrespectively. This model is called the MAC-WT with noiseless feedback. Inner\nand outer bounds on the secrecy capacity region of this feedback model are\nprovided. To be specific, we first present a decode-and-forward (DF) inner\nbound on the secrecy capacity region of this feedback model, and this bound is\nconstructed by allowing each transmitter to decode the other one's transmitted\nmessage from the feedback, and then each transmitter uses the decoded message\nto re-encode his own messages, i.e., this DF inner bound allows the independent\ntransmitters to co-operate with each other. Then, we provide a hybrid inner\nbound which is strictly larger than the DF inner bound, and it is constructed\nby using the feedback as a tool not only to allow the independent transmitters\nto co-operate with each other, but also to generate two secret keys\nrespectively shared between the legitimate receiver and the two transmitters.\nFinally, we give a Sato-type outer bound on the secrecy capacity region of this\nfeedback model. The results of this paper are further explained via a Gaussian\nexample.\n",
"title": "Multiple Access Wiretap Channel with Noiseless Feedback"
} | null | null | null | null | true | null | 2750 | null | Default | null | null |
null | {
"abstract": " We develop a new modeling framework for Inter-Subject Analysis (ISA). The\ngoal of ISA is to explore the dependency structure between different subjects\nwith the intra-subject dependency as nuisance. It has important applications in\nneuroscience to explore the functional connectivity between brain regions under\nnatural stimuli. Our framework is based on the Gaussian graphical models, under\nwhich ISA can be converted to the problem of estimation and inference of the\ninter-subject precision matrix. The main statistical challenge is that we do\nnot impose sparsity constraint on the whole precision matrix and we only assume\nthe inter-subject part is sparse. For estimation, we propose to estimate an\nalternative parameter to get around the non-sparse issue and it can achieve\nasymptotic consistency even if the intra-subject dependency is dense. For\ninference, we propose an \"untangle and chord\" procedure to de-bias our\nestimator. It is valid without the sparsity assumption on the inverse Hessian\nof the log-likelihood function. This inferential method is general and can be\napplied to many other statistical problems, thus it is of independent\ntheoretical interest. Numerical experiments on both simulated and brain imaging\ndata validate our methods and theory.\n",
"title": "Inter-Subject Analysis: Inferring Sparse Interactions with Dense Intra-Graphs"
} | null | null | [
"Mathematics",
"Statistics"
]
| null | true | null | 2751 | null | Validated | null | null |
null | {
"abstract": " Nanoscale quantum probes such as the nitrogen-vacancy centre in diamond have\ndemonstrated remarkable sensing capabilities over the past decade as control\nover the fabrication and manipulation of these systems has evolved. However, as\nthe size of these nanoscale quantum probes is reduced, the surface termination\nof the host material begins to play a prominent role as a source of magnetic\nand electric field noise. In this work, we show that borane-reduced nanodiamond\nsurfaces can on average double the spin relaxation time of individual\nnitrogen-vacancy centres in nanodiamonds when compared to the thermally\noxidised surfaces. Using a combination of infra-red and x-ray absorption\nspectroscopy techniques, we correlate the changes in quantum relaxation rates\nwith the conversion of sp2 carbon to C-O and C-H bonds on the diamond surface.\nThese findings implicate double-bonded carbon species as a dominant source of\nspin noise for near surface NV centres and show that through tailored\nengineering of the surface, we can improve the quantum properties and magnetic\nsensitivity of these nanoscale probes.\n",
"title": "Impact of surface functionalisation on the quantum coherence of nitrogen vacancy centres in nanodiamond"
} | null | null | [
"Physics"
]
| null | true | null | 2752 | null | Validated | null | null |
null | {
"abstract": " Optimization plays a key role in machine learning. Recently, stochastic\nsecond-order methods have attracted much attention due to their low\ncomputational cost in each iteration. However, these algorithms might perform\npoorly especially if it is hard to approximate the Hessian well and\nefficiently. As far as we know, there is no effective way to handle this\nproblem. In this paper, we resort to Nesterov's acceleration technique to\nimprove the convergence performance of a class of second-order methods called\napproximate Newton. We give a theoretical analysis that Nesterov's acceleration\ntechnique can improve the convergence performance for approximate Newton just\nlike for first-order methods. We accordingly propose an accelerated regularized\nsub-sampled Newton. Our accelerated algorithm performs much better than the\noriginal regularized sub-sampled Newton in experiments, which validates our\ntheory empirically. Besides, the accelerated regularized sub-sampled Newton has\ngood performance comparable to or even better than classical algorithms.\n",
"title": "Nesterov's Acceleration For Approximate Newton"
} | null | null | [
"Computer Science"
]
| null | true | null | 2753 | null | Validated | null | null |
null | {
"abstract": " Centrality metrics are among the main tools in social network analysis. Being\ncentral for a user of a network leads to several benefits to the user: central\nusers are highly influential and play key roles within the network. Therefore,\nthe optimization problem of increasing the centrality of a network user\nrecently received considerable attention. Given a network and a target user\n$v$, the centrality maximization problem consists in creating $k$ new links\nincident to $v$ in such a way that the centrality of $v$ is maximized,\naccording to some centrality metric. Most of the algorithms proposed in the\nliterature are based on showing that a given centrality metric is monotone and\nsubmodular with respect to link addition. However, this property does not hold\nfor several shortest-path based centrality metrics if the links are undirected.\nIn this paper we study the centrality maximization problem in undirected\nnetworks for one of the most important shortest-path based centrality measures,\nthe coverage centrality. We provide several hardness and approximation results.\nWe first show that the problem cannot be approximated within a factor greater\nthan $1-1/e$, unless $P=NP$, and, under the stronger gap-ETH hypothesis, the\nproblem cannot be approximated within a factor better than $1/n^{o(1)}$, where\n$n$ is the number of users. We then propose two greedy approximation\nalgorithms, and show that, by suitably combining them, we can guarantee an\napproximation factor of $\\Omega(1/\\sqrt{n})$. We experimentally compare the\nsolutions provided by our approximation algorithm with optimal solutions\ncomputed by means of an exact IP formulation. We show that our algorithm\nproduces solutions that are very close to the optimum.\n",
"title": "Coverage Centrality Maximization in Undirected Networks"
} | null | null | [
"Computer Science"
]
| null | true | null | 2754 | null | Validated | null | null |
null | {
"abstract": " The removal of noise typically correlated in time and wavelength is one of\nthe main challenges for using the radial velocity method to detect Earth\nanalogues. We analyze radial velocity data of tau Ceti and find robust evidence\nfor wavelength dependent noise. We find this noise can be modeled by a\ncombination of moving average models and \"differential radial velocities\". We\napply this noise model to various radial velocity data sets for tau Ceti, and\nfind four periodic signals at 20.0, 49.3, 160 and 642 d which we interpret as\nplanets. We identify two new signals with orbital periods of 20.0 and 49.3 d\nwhile the other two previously suspected signals around 160 and 600 d are\nquantified to a higher precision. The 20.0 d candidate is independently\ndetected in KECK data. All planets detected in this work have minimum masses\nless than 4$M_\\oplus$ with the two long period ones located around the inner\nand outer edges of the habitable zone, respectively. We find that the\ninstrumental noise gives rise to a precision limit of the HARPS around 0.2 m/s.\nWe also find correlation between the HARPS data and the central moments of the\nspectral line profile at around 0.5 m/s level, although these central moments\nmay contain both noise and signals. The signals detected in this work have\nsemi-amplitudes as low as 0.3 m/s, demonstrating the ability of the radial\nvelocity technique to detect relatively weak signals.\n",
"title": "Color difference makes a difference: four planet candidates around tau Ceti"
} | null | null | null | null | true | null | 2755 | null | Default | null | null |
null | {
"abstract": " The understanding of variations in genome sequences assists us in identifying\npeople who are predisposed to common diseases, solving rare diseases, and\nfinding the corresponding population group of the individuals from a larger\npopulation group. Although classical machine learning techniques allow\nresearchers to identify groups (i.e. clusters) of related variables, the\naccuracy, and effectiveness of these methods diminish for large and\nhigh-dimensional datasets such as the whole human genome. On the other hand,\ndeep neural network architectures (the core of deep learning) can better\nexploit large-scale datasets to build complex models. In this paper, we use the\nK-means clustering approach for scalable genomic data analysis aiming towards\nclustering genotypic variants at the population scale. Finally, we train a deep\nbelief network (DBN) for predicting the geographic ethnicity. We used the\ngenotype data from the 1000 Genomes Project, which covers the result of genome\nsequencing for 2504 individuals from 26 different ethnic origins and comprises\n84 million variants. Our experimental results, with a focus on accuracy and\nscalability, show the effectiveness and superiority compared to the\nstate-of-the-art.\n",
"title": "Recurrent Deep Embedding Networks for Genotype Clustering and Ethnicity Prediction"
} | null | null | null | null | true | null | 2756 | null | Default | null | null |
null | {
"abstract": " How can we design reinforcement learning agents that avoid causing\nunnecessary disruptions to their environment? We argue that current approaches\nto penalizing side effects can introduce bad incentives in tasks that require\nirreversible actions, and in environments that contain sources of change other\nthan the agent. For example, some approaches give the agent an incentive to\nprevent any irreversible changes in the environment, including the actions of\nother agents. We introduce a general definition of side effects, based on\nrelative reachability of states compared to a default state, that avoids these\nundesirable incentives. Using a set of gridworld experiments illustrating\nrelevant scenarios, we empirically compare relative reachability to penalties\nbased on existing definitions and show that it is the only penalty among those\ntested that produces the desired behavior in all the scenarios.\n",
"title": "Measuring and avoiding side effects using relative reachability"
} | null | null | null | null | true | null | 2757 | null | Default | null | null |
null | {
"abstract": " We introduce the State Classification Problem (SCP) for hybrid systems, and\npresent Neural State Classification (NSC) as an efficient solution technique.\nSCP generalizes the model checking problem as it entails classifying each state\n$s$ of a hybrid automaton as either positive or negative, depending on whether\nor not $s$ satisfies a given time-bounded reachability specification. This is\nan interesting problem in its own right, which NSC solves using\nmachine-learning techniques, Deep Neural Networks in particular. State\nclassifiers produced by NSC tend to be very efficient (run in constant time and\nspace), but may be subject to classification errors. To quantify and mitigate\nsuch errors, our approach comprises: i) techniques for certifying, with\nstatistical guarantees, that an NSC classifier meets given accuracy levels; ii)\ntuning techniques, including a novel technique based on adversarial sampling,\nthat can virtually eliminate false negatives (positive states classified as\nnegative), thereby making the classifier more conservative. We have applied NSC\nto six nonlinear hybrid system benchmarks, achieving an accuracy of 99.25% to\n99.98%, and a false-negative rate of 0.0033 to 0, which we further reduced to\n0.0015 to 0 after tuning the classifier. We believe that this level of accuracy\nis acceptable in many practical applications, and that these results\ndemonstrate the promise of the NSC approach.\n",
"title": "Neural State Classification for Hybrid Systems"
} | null | null | null | null | true | null | 2758 | null | Default | null | null |
null | {
"abstract": " Tree ensemble models such as random forests and boosted trees are among the\nmost widely used and practically successful predictive models in applied\nmachine learning and business analytics. Although such models have been used to\nmake predictions based on exogenous, uncontrollable independent variables, they\nare increasingly being used to make predictions where the independent variables\nare controllable and are also decision variables. In this paper, we study the\nproblem of tree ensemble optimization: given a tree ensemble that predicts some\ndependent variable using controllable independent variables, how should we set\nthese variables so as to maximize the predicted value? We formulate the problem\nas a mixed-integer optimization problem. We theoretically examine the strength\nof our formulation, provide a hierarchy of approximate formulations with bounds\non approximation quality and exploit the structure of the problem to develop\ntwo large-scale solution methods, one based on Benders decomposition and one\nbased on iteratively generating tree split constraints. We test our methodology\non real data sets, including two case studies in drug design and customized\npricing, and show that our methodology can efficiently solve large-scale\ninstances to near or full optimality, and outperforms solutions obtained by\nheuristic approaches. In our drug design case, we show how our approach can\nidentify compounds that efficiently trade-off predicted performance and novelty\nwith respect to existing, known compounds. In our customized pricing case, we\nshow how our approach can efficiently determine optimal store-level prices\nunder a random forest model that delivers excellent predictive accuracy.\n",
"title": "Optimization of Tree Ensembles"
} | null | null | null | null | true | null | 2759 | null | Default | null | null |
null | {
"abstract": " The formalism to augment the classical models of equation of state for real\ngases with the quantum statistical effects is presented. It allows an arbitrary\nexcluded volume procedure to model repulsive interactions, and an arbitrary\ndensity-dependent mean field to model attractive interactions. Variations on\nthe excluded volume mechanism include van der Waals (VDW) and Carnahan-Starling\nmodels, while the mean fields are based on VDW, Redlich-Kwong-Soave,\nPeng-Robinson, and Clausius equations of state. The VDW parameters of the\nnucleon-nucleon interaction are fitted in each model to the properties of the\nground state of nuclear matter, and the following range of values is obtained:\n$a = 330 - 430$ MeV fm$^3$ and $b = 2.5 - 4.4$ fm$^3$. In the context of the\nexcluded-volume approach, the fits to the nuclear ground state disfavor the\nvalues of the effective hard-core radius of a nucleon significantly smaller\nthan $0.5$ fm, at least for the nuclear matter region of the phase diagram.\nModifications to the standard VDW repulsion and attraction terms allow to\nimprove significantly the value of the nuclear incompressibility factor $K_0$,\nbringing it closer to empirical estimates. The generalization to include the\nbaryon-baryon interactions into the hadron resonance gas model is performed.\nThe behavior of the baryon-related lattice QCD observables at zero chemical\npotential is shown to be strongly correlated to the nuclear matter properties:\nan improved description of the nuclear incompressibility also yields an\nimproved description of the lattice data at $\\mu = 0$.\n",
"title": "Equations of state for real gases on the nuclear scale"
} | null | null | [
"Physics"
]
| null | true | null | 2760 | null | Validated | null | null |
null | {
"abstract": " Knowledge distillation (KD) consists of transferring knowledge from one\nmachine learning model (the teacher) to another (the student). Commonly, the\nteacher is a high-capacity model with formidable performance, while the student\nis more compact. By transferring knowledge, one hopes to benefit from the\nstudent's compactness. We study KD from a new perspective: rather than\ncompressing models, we train students parameterized identically to their\nteachers. Surprisingly, these Born-Again Networks (BANs) outperform their\nteachers significantly, both on computer vision and language modeling tasks.\nOur experiments with BANs based on DenseNets demonstrate state-of-the-art\nperformance on the CIFAR-10 (3.5%) and CIFAR-100 (15.5%) datasets, by\nvalidation error. Additional experiments explore two distillation objectives:\n(i) Confidence-Weighted by Teacher Max (CWTM) and (ii) Dark Knowledge with\nPermuted Predictions (DKPP). Both methods elucidate the essential components of\nKD, demonstrating a role of the teacher outputs on both predicted and\nnon-predicted classes. We present experiments with students of various\ncapacities, focusing on the under-explored case where students overpower\nteachers. Our experiments show significant advantages from transferring\nknowledge between DenseNets and ResNets in either direction.\n",
"title": "Born Again Neural Networks"
} | null | null | null | null | true | null | 2761 | null | Default | null | null |
null | {
"abstract": " The annual cost of Cybercrime to the global economy is estimated to be around\n400 billion dollars, in support of which Exploit Kits have been providing\nenabling technology. This paper reviews the recent developments in Exploit Kit\ncapability and how these are being applied in practice. In doing so it paves\nthe way for a better understanding of the Exploit Kit economy that may better\nhelp in combatting them and considers industry preparedness to respond.\n",
"title": "Exploit Kits: The production line of the Cybercrime Economy"
} | null | null | null | null | true | null | 2762 | null | Default | null | null |
null | {
"abstract": " Surface plasmon waves carry an intrinsic transverse spin, which is locked to\nits propagation direction. Apparently, when a singular plasmonic mode is guided\non a conic surface this spin-locking may lead to a strong circular polarization\nof the far-field emission. Specifically, an adiabatically tapered gold nanocone\nguides an a priori excited plasmonic vortex upwards where the mode accelerates\nand finally beams out from the tip apex. The helicity of this beam is shown to\nbe single-handed and stems solely from the transverse spin-locking of the\nhelical plasmonic wave-front. We present a simple geometric model that fully\npredicts the emerging light spin in our system. Finally we experimentally\ndemonstrate the helicity-locking phenomenon by using accurately fabricated\nnanostructures and confirm the results with the model and numerical data.\n",
"title": "Helicity locking in light emitted from a plasmonic nanotaper"
} | null | null | null | null | true | null | 2763 | null | Default | null | null |
null | {
"abstract": " In this work we introduce declarative statistics, a suite of declarative\nmodelling tools for statistical analysis. Statistical constraints represent the\nkey building block of declarative statistics. First, we introduce a range of\nrelevant counting and matrix constraints and associated decompositions, some of\nwhich are novel, that are instrumental in the design of statistical\nconstraints. Second, we introduce a selection of novel statistical constraints\nand associated decompositions, which constitute a self-contained toolbox that\ncan be used to tackle a wide range of problems typically encountered by\nstatisticians. Finally, we deploy these statistical constraints to a wide range\nof application areas drawn from classical statistics and we contrast our\nframework against established practices.\n",
"title": "Declarative Statistics"
} | null | null | null | null | true | null | 2764 | null | Default | null | null |
null | {
"abstract": " In Phase 2 of CRESST-II 18 detector modules were operated for about two years\n(July 2013 - August 2015). Together with this document we are publishing data\nfrom two detector modules which have been used for direct dark-matter searches.\nWith these data-sets we were able to set world-leading limits on the cross\nsection for spin-independent elastic scattering of dark matter particles off\nnuclei. We publish the energies of all events within the acceptance regions for\ndark-matter searches. In addition, we also publish the energies of the events\nwithin the electron-recoil band. This data set can be used to study\ninteractions with electrons of CaWO$_4$. In this document we describe how to\nuse these data sets. In particular, we explain the cut-survival probabilities\nrequired for comparisons of models with the data sets.\n",
"title": "Description of CRESST-II data"
} | null | null | null | null | true | null | 2765 | null | Default | null | null |
null | {
"abstract": " The problem of construction of ladder operators for rationally extended\nquantum harmonic oscillator (REQHO) systems of a general form is investigated\nin the light of existence of different schemes of the Darboux-Crum-Krein-Adler\ntransformations by which such systems can be generated from the quantum\nharmonic oscillator. Any REQHO system is characterized by the number of\nseparated states in its spectrum, the number of `valence bands' in which the\nseparated states are organized, and by the total number of the missing energy\nlevels and their position. All these peculiarities of a REQHO system are shown\nto be detected and reflected by a trinity $(\\mathcal{A}^\\pm$,\n$\\mathcal{B}^\\pm$, $\\mathcal{C}^\\pm$) of the basic (primary) lowering and\nraising ladder operators related between themselves by certain algebraic\nidentities with coefficients polynomially-dependent on the Hamiltonian. We show\nthat all the secondary, higher-order ladder operators are obtainable by a\ncomposition of the basic ladder operators of the trinity which form the set of\nthe spectrum-generating operators. Each trinity, in turn, can be constructed\nfrom the intertwining operators of the two complementary minimal schemes of the\nDarboux-Crum-Krein-Adler transformations.\n",
"title": "ABC of ladder operators for rationally extended quantum harmonic oscillator systems"
} | null | null | [
"Physics",
"Mathematics"
]
| null | true | null | 2766 | null | Validated | null | null |
null | {
"abstract": " By a classical principle of probability theory, sufficiently thin\nsubsequences of general sequences of random variables behave like i.i.d.\\\nsequences. This observation not only explains the remarkable properties of\nlacunary trigonometric series, but also provides a powerful tool in many areas\nof analysis, such as the theory of orthogonal series and Banach space theory.\nIn contrast to i.i.d.\\ sequences, however, the probabilistic structure of\nlacunary sequences is not permutation-invariant and the analytic properties of\nsuch sequences can change after rearrangement. In a previous paper we showed\nthat permutation-invariance of subsequences of the trigonometric system and\nrelated function systems is connected with Diophantine properties of the index\nsequence. In this paper we will study permutation-invariance of subsequences of\ngeneral r.v.\\ sequences.\n",
"title": "On permutation-invariance of limit theorems"
} | null | null | [
"Mathematics"
]
| null | true | null | 2767 | null | Validated | null | null |
null | {
"abstract": " We have synthesized 10 new iron oxyarsenides, K$Ln_2$Fe$_4$As$_4$O$_2$ ($Ln$\n= Gd, Tb, Dy, and Ho) and Cs$Ln_2$Fe$_4$As$_4$O$_2$ ($Ln$ = Nd, Sm, Gd, Tb, Dy,\nand Ho), with the aid of lattice-match [between $A$Fe$_2$As$_2$ ($A$ = K and\nCs) and $Ln$FeAsO] approach. The resultant compounds possess hole-doped\nconducting double FeAs layers, [$A$Fe$_4$As$_4$]$^{2-}$, that are separated by\nthe insulating [$Ln_2$O$_2$]$^{2+}$ slabs. Measurements of electrical\nresistivity and dc magnetic susceptibility demonstrate bulk superconductivity\nat $T_\\mathrm{c}$ = 33 - 37 K. We find that $T_\\mathrm{c}$ correlates with the\naxis ratio $c/a$ for all 12442-type superconductors discovered. Also,\n$T_\\mathrm{c}$ tends to increase with the lattice mismatch, implying a role of\nlattice instability for the enhancement of superconductivity.\n",
"title": "Superconductivity at 33 - 37 K in $ALn_2$Fe$_4$As$_4$O$_2$ ($A$ = K and Cs; $Ln$ = Lanthanides)"
} | null | null | null | null | true | null | 2768 | null | Default | null | null |
null | {
"abstract": " The present paper introduces the initial implementation of a software\nexploration tool targeting graphical user interface (GUI) driven applications.\nGUITracer facilitates the comprehension of GUI-driven applications by starting\nfrom their most conspicuous artefact - the user interface itself. The current\nimplementation of the tool can be used with any Java-based target application\nthat employs one of the AWT, Swing or SWT toolkits. The tool transparently\ninstruments the target application and provides real time information about the\nGUI events fired. For each event, call relations within the application are\ndisplayed at method, class or package level, together with detailed coverage\ninformation. The tool facilitates feature location, program comprehension as\nwell as GUI test creation by revealing the link between the application's GUI\nand its underlying code. As such, GUITracer is intended for software\npractitioners developing or maintaining GUI-driven applications. We believe our\ntool to be especially useful for entry-level practitioners as well as students\nseeking to understand complex GUI-driven software systems. The present paper\ndetails the rationale as well as the technical implementation of the tool. As a\nproof-of-concept implementation, we also discuss further development that can\nlead to our tool's integration into a software development workflow.\n",
"title": "Live Visualization of GUI Application Code Coverage with GUITracer"
} | null | null | null | null | true | null | 2769 | null | Default | null | null |
null | {
"abstract": " Time Projection Chamber (TPC) has been chosen as the main tracking system in\nseveral high-flux and high repetition rate experiments. These include on-going\nexperiments such as ALICE and future experiments such as PANDA at FAIR and ILC.\nDifferent $\\mathrm{R}\\&\\mathrm{D}$ activities were carried out on the adoption\nof Gas Electron Multiplier (GEM) as the gas amplification stage of the\nALICE-TPC upgrade version. The requirement of low ion feedback has been\nestablished through these activities. Low ion feedback minimizes distortions\ndue to space charge and maintains the necessary values of detector gain and\nenergy resolution. In the present work, Garfield simulation framework has been\nused to study the related physical processes occurring within single, triple\nand quadruple GEM detectors. Ion backflow and electron transmission of\nquadruple GEMs, made up of foils with different hole pitch under different\nelectromagnetic field configurations (the projected solutions for the ALICE\nTPC) have been studied. Finally a new triple GEM detector configuration with\nlow ion backflow fraction and good electron transmission properties has been\nproposed as a simpler GEM-based alternative suitable for TPCs for future\ncollider experiments.\n",
"title": "3D Simulation of Electron and Ion Transmission of GEM-based Detectors"
} | null | null | null | null | true | null | 2770 | null | Default | null | null |
null | {
"abstract": " We classify all invariants of the functor $I^n$ (powers of the fundamental\nideal of the Witt ring) with values in $A$, that is to say functions\n$I^n(K)\\rightarrow A(K)$ compatible with field extensions, in the cases where\n$A(K)=W(K)$ is the Witt ring and $A(K)=H^*(K,\\mu_2)$ is mod 2 Galois\ncohomology. This is done in terms of some invariants $f_n^d$ that behave like\ndivided powers with respect to sums of Pfister forms, and we show that any\ninvariant of $I^n$ can be written uniquely as a (possibly infinite) combination\nof those $f_n^d$. This in particular allows one to lift operations defined on\nmod 2 Milnor K-theory (or equivalently mod 2 Galois cohomology) to the level of\n$I^n$. We also study various properties of these invariants, including\nbehaviour under products, similitudes, residues for discrete valuations, and\nrestriction from $I^n$ to $I^{n+1}$. The goal is to use this to study\ninvariants of algebras with involutions in future articles.\n",
"title": "Witt and Cohomological Invariants of Witt Classes"
} | null | null | [
"Mathematics"
]
| null | true | null | 2771 | null | Validated | null | null |
null | {
"abstract": " In this paper, we first present an adaptive distributed observer for a\ndiscrete-time leader system. This adaptive distributed observer will provide,\nto each follower, not only the estimation of the leader's signal, but also the\nestimation of the leader's system matrix. Then, based on the estimation of the\nleader's system matrix S, we devise a discrete adaptive algorithm to calculate\nthe solution to the regulator equations associated with each follower, and\nobtain an estimated feedforward control gain. Finally, we solve the cooperative\noutput regulation problem for discrete-time linear multi-agent systems by both\nstate feedback and output feedback adaptive distributed control laws utilizing\nthe adaptive distributed observer.\n",
"title": "The Cooperative Output Regulation Problem of Discrete-Time Linear Multi-Agent Systems by the Adaptive Distributed Observer"
} | null | null | null | null | true | null | 2772 | null | Default | null | null |
null | {
"abstract": " Recent results of Laca, Raeburn, Ramagge and Whittaker show that any\nself-similar action of a groupoid on a graph determines a 1-parameter family of\nself-mappings of the trace space of the groupoid C*-algebra. We investigate the\nfixed points for these self-mappings, under the same hypotheses that Laca et\nal. used to prove that the C*-algebra of the self-similar action admits a\nunique KMS state. We prove that for any value of the parameter, the associated\nself-mapping admits a unique fixed point, which is in fact a universal\nattractor. This fixed point is precisely the trace that extends to a KMS state\non the C*-algebra of the self-similar action.\n",
"title": "Preferred traces on C*-algebras of self-similar groupoids arising as fixed points"
} | null | null | null | null | true | null | 2773 | null | Default | null | null |
null | {
"abstract": " In manufacture, steel and other metals are mainly cut and shaped during the\nfabrication process by computer numerical control (CNC) machines. To keep high\nproductivity and efficiency of the fabrication process, engineers need to\nmonitor the real-time process of CNC machines, and the lifetime management of\nmachine tools. In a real manufacturing process, breakage of machine tools\nusually happens without any indication, this problem seriously affects the\nfabrication process for many years. Previous studies suggested many different\napproaches for monitoring and detecting the breakage of machine tools. However,\nthere still exists a big gap between academic experiments and the complex real\nfabrication processes such as the high demands of real-time detections, the\ndifficulty in data acquisition and transmission. In this work, we use the\nspindle current approach to detect the breakage of machine tools, which has the\nhigh performance of real-time monitoring, low cost, and easy to install. We\nanalyze the features of the current of a milling machine spindle through tools\nwearing processes, and then we predict the status of tool breakage by a\nconvolutional neural network(CNN). In addition, we use a BP neural network to\nunderstand the reliability of the CNN. The results show that our CNN approach\ncan detect tool breakage with an accuracy of 93%, while the best performance of\nBP is 80%.\n",
"title": "Tool Breakage Detection using Deep Learning"
} | null | null | null | null | true | null | 2774 | null | Default | null | null |
null | {
"abstract": " It was recently shown that architectural, regularization and rehearsal\nstrategies can be used to train deep models sequentially on a number of\ndisjoint tasks without forgetting previously acquired knowledge. However, these\nstrategies are still unsatisfactory if the tasks are not disjoint but\nconstitute a single incremental task (e.g., class-incremental learning). In\nthis paper we point out the differences between multi-task and\nsingle-incremental-task scenarios and show that well-known approaches such as\nLWF, EWC and SI are not ideal for incremental task scenarios. A new approach,\ndenoted as AR1, combining architectural and regularization strategies is then\nspecifically proposed. AR1 overhead (in term of memory and computation) is very\nsmall thus making it suitable for online learning. When tested on CORe50 and\niCIFAR-100, AR1 outperformed existing regularization strategies by a good\nmargin.\n",
"title": "Continuous Learning in Single-Incremental-Task Scenarios"
} | null | null | null | null | true | null | 2775 | null | Default | null | null |
null | {
"abstract": " We define a second-order neural network stochastic gradient training\nalgorithm whose block-diagonal structure effectively amounts to normalizing the\nunit activations. Investigating why this algorithm lacks in robustness then\nreveals two interesting insights. The first insight suggests a new way to scale\nthe stepsizes, clarifying popular algorithms such as RMSProp as well as old\nneural network tricks such as fanin stepsize scaling. The second insight\nstresses the practical importance of dealing with fast changes of the curvature\nof the cost.\n",
"title": "Diagonal Rescaling For Neural Networks"
} | null | null | null | null | true | null | 2776 | null | Default | null | null |
null | {
"abstract": " Word embeddings are a powerful approach for unsupervised analysis of\nlanguage. Recently, Rudolph et al. (2016) developed exponential family\nembeddings, which cast word embeddings in a probabilistic framework. Here, we\ndevelop dynamic embeddings, building on exponential family embeddings to\ncapture how the meanings of words change over time. We use dynamic embeddings\nto analyze three large collections of historical texts: the U.S. Senate\nspeeches from 1858 to 2009, the history of computer science ACM abstracts from\n1951 to 2014, and machine learning papers on the Arxiv from 2007 to 2015. We\nfind dynamic embeddings provide better fits than classical embeddings and\ncapture interesting patterns about how language changes.\n",
"title": "Dynamic Bernoulli Embeddings for Language Evolution"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 2777 | null | Validated | null | null |
null | {
"abstract": " We analyse the homotopy types of gauge groups of principal U(n)-bundles\nassociated to pseudo Real vector bundles in the sense of Atiyah. We provide\nsatisfactory homotopy decompositions of these gauge groups into factors in\nwhich the homotopy groups are well known. Therefore, we substantially build\nupon the low dimensional homotopy groups as provided in a paper by I. Biswas,\nJ. Huisman, and J. Hurtubise.\n",
"title": "Homotopy Decompositions of Gauge Groups over Real Surfaces"
} | null | null | null | null | true | null | 2778 | null | Default | null | null |
null | {
"abstract": " We define an integral form of the deformed W-algebra of type gl_r, and\nconstruct its action on the K-theory groups of moduli spaces of rank r stable\nsheaves on a smooth projective surface S, under certain assumptions. Our\nconstruction generalizes the action studied by Nakajima, Grojnowski and\nBaranovsky in cohomology, although the appearance of deformed W-algebras by\ngenerators and relations is a new feature. Physically, this action encodes the\nAGT correspondence for 5d supersymmetric gauge theory on S x circle.\n",
"title": "W-algebras associated to surfaces"
} | null | null | null | null | true | null | 2779 | null | Default | null | null |
null | {
"abstract": " The aim of this paper is to present a new logic-based understanding of the\nconnection between classical kinematics and relativistic kinematics. We show\nthat the axioms of special relativity can be interpreted in the language of\nclassical kinematics. This means that there is a logical translation function\nfrom the language of special relativity to the language of classical kinematics\nwhich translates the axioms of special relativity into consequences of\nclassical kinematics. We will also show that if we distinguish a class of\nobservers (representing observers stationary with respect to the \"Ether\") in\nspecial relativity and exclude the non-slower-than light observers from\nclassical kinematics by an extra axiom, then the two theories become\ndefinitionally equivalent (i.e., they become equivalent theories in the sense\nas the theory of lattices as algebraic structures is the same as the theory of\nlattices as partially ordered sets). Furthermore, we show that classical\nkinematics is definitionally equivalent to classical kinematics with only\nslower-than-light inertial observers, and hence by transitivity of definitional\nequivalence that special relativity theory extended with \"Ether\" is\ndefinitionally equivalent to classical kinematics. So within an axiomatic\nframework of mathematical logic, we explicitly show that the transition from\nclassical kinematics to relativistic kinematics is the knowledge acquisition\nthat there is no \"Ether\", accompanied by a redefinition of the concepts of time\nand space.\n",
"title": "Comparing Classical and Relativistic Kinematics in First-Order Logic"
} | null | null | null | null | true | null | 2780 | null | Default | null | null |
null | {
"abstract": " Plasmons, the collective excitations of electrons in the bulk or at the\nsurface, play an important role in the properties of materials, and have\ngenerated the field of Plasmonics. We report the observation of a highly\nunusual acoustic plasmon mode on the surface of a three-dimensional topological\ninsulator (TI), Bi2Se3, using momentum resolved inelastic electron scattering.\nIn sharp contrast to ordinary plasmon modes, this mode exhibits almost linear\ndispersion into the second Brillouin zone and remains prominent with remarkably\nweak damping not seen in any other systems. This behavior must be associated\nwith the inherent robustness of the electrons in the TI surface state, so that\nnot only the surface Dirac states but also their collective excitations are\ntopologically protected. On the other hand, this mode has much smaller energy\ndispersion than expected from a continuous media excitation picture, which can\nbe attributed to the strong coupling with surface phonons.\n",
"title": "Anomalous Acoustic Plasmon Mode from Topologically Protected States"
} | null | null | [
"Physics"
]
| null | true | null | 2781 | null | Validated | null | null |
null | {
"abstract": " We describe a procedure naturally associating relativistic Klein-Gordon\nequations in static curved spacetimes to non-relativistic quantum motion on\ncurved spaces in the presence of a potential. Our procedure is particularly\nattractive in application to (typically, superintegrable) problems whose energy\nspectrum is given by a quadratic function of the energy level number, since for\nsuch systems the spacetimes one obtains possess evenly spaced, resonant spectra\nof frequencies for scalar fields of a certain mass. This construction emerges\nas a generalization of the previously studied correspondence between the Higgs\noscillator and Anti-de Sitter spacetime, which has been useful for both\nunderstanding weakly nonlinear dynamics in Anti-de Sitter spacetime and\nalgebras of conserved quantities of the Higgs oscillator. Our conversion\nprocedure (\"Klein-Gordonization\") reduces to a nonlinear elliptic equation\nclosely reminiscent of the one emerging in relation to the celebrated Yamabe\nproblem of differential geometry. As an illustration, we explicitly demonstrate\nhow to apply this procedure to superintegrable Rosochatius systems, resulting\nin a large family of spacetimes with resonant spectra for massless wave\nequations.\n",
"title": "Klein-Gordonization: mapping superintegrable quantum mechanics to resonant spacetimes"
} | null | null | [
"Physics",
"Mathematics"
]
| null | true | null | 2782 | null | Validated | null | null |
null | {
"abstract": " We study the challenges of applying deep learning to gene expression data. We\nfind experimentally that there exists non-linear signal in the data, however is\nit not discovered automatically given the noise and low numbers of samples used\nin most research. We discuss how gene interaction graphs (same pathway,\nprotein-protein, co-expression, or research paper text association) can be used\nto impose a bias on a deep model similar to the spatial bias imposed by\nconvolutions on an image. We explore the usage of Graph Convolutional Neural\nNetworks coupled with dropout and gene embeddings to utilize the graph\ninformation. We find this approach provides an advantage for particular tasks\nin a low data regime but is very dependent on the quality of the graph used. We\nconclude that more work should be done in this direction. We design experiments\nthat show why existing methods fail to capture signal that is present in the\ndata when features are added which clearly isolates the problem that needs to\nbe addressed.\n",
"title": "Towards Gene Expression Convolutions using Gene Interaction Graphs"
} | null | null | [
"Statistics",
"Quantitative Biology"
]
| null | true | null | 2783 | null | Validated | null | null |
null | {
"abstract": " Novel low-band-gap copolymer oligomers are proposed on the basis of density\nfunctional theory (DFT) quantum chemical calculations of photophysical\nproperties. These molecules have an electron donor-accepter (D-A) architecture\ninvolving poly(3-hexylthiophene-2,5-diyl) (P3HT) as D units and furan, aniline,\nor hydroquinone as A units. Structural parameters, electronic properties,\nhighest occupied molecular orbital (HOMO)-lowest unoccupied molecular orbital\n(LUMO) gaps and molecular orbital densities are predicted. The charge transfer\nprocess between the D unit and the A unit one is supported by analyzing the\noptical absorption spectra of the compounds and the localization of the HOMO\nand LUMO.\n",
"title": "Density-Functional Theory Study of the Optoelectronic Properties of π-Conjugated Copolymers for Organic Light-Emitting Diodes"
} | null | null | null | null | true | null | 2784 | null | Default | null | null |
null | {
"abstract": " We propose an efficient and accurate measure for ranking spreaders and\nidentifying the influential ones in spreading processes in networks. While the\nedges determine the connections among the nodes, their specific role in\nspreading should be considered explicitly. An edge connecting nodes i and j may\ndiffer in its importance for spreading from i to j and from j to i. The key\nissue is whether node j, after infected by i through the edge, would reach out\nto other nodes that i itself could not reach directly. It becomes necessary to\ninvoke two unequal weights wij and wji characterizing the importance of an edge\naccording to the neighborhoods of nodes i and j. The total asymmetric\ndirectional weights originating from a node leads to a novel measure si which\nquantifies the impact of the node in spreading processes. A s-shell\ndecomposition scheme further assigns a s-shell index or weighted coreness to\nthe nodes. The effectiveness and accuracy of rankings based on si and the\nweighted coreness are demonstrated by applying them to nine real-world\nnetworks. Results show that they generally outperform rankings based on the\nnodes' degree and k-shell index, while maintaining a low computational\ncomplexity. Our work represents a crucial step towards understanding and\ncontrolling the spread of diseases, rumors, information, trends, and\ninnovations in networks.\n",
"title": "Accurate ranking of influential spreaders in networks based on dynamically asymmetric link-impact"
} | null | null | null | null | true | null | 2785 | null | Default | null | null |
null | {
"abstract": " In this paper we study how to learn stochastic, multimodal transition\ndynamics in reinforcement learning (RL) tasks. We focus on evaluating\ntransition function estimation, while we defer planning over this model to\nfuture work. Stochasticity is a fundamental property of many task environments.\nHowever, discriminative function approximators have difficulty estimating\nmultimodal stochasticity. In contrast, deep generative models do capture\ncomplex high-dimensional outcome distributions. First we discuss why, amongst\nsuch models, conditional variational inference (VI) is theoretically most\nappealing for model-based RL. Subsequently, we compare different VI models on\ntheir ability to learn complex stochasticity on simulated functions, as well as\non a typical RL gridworld with multimodal dynamics. Results show VI\nsuccessfully predicts multimodal outcomes, but also robustly ignores these for\ndeterministic parts of the transition dynamics. In summary, we show a robust\nmethod to learn multimodal transitions using function approximation, which is a\nkey preliminary for model-based RL in stochastic domains.\n",
"title": "Learning Multimodal Transition Dynamics for Model-Based Reinforcement Learning"
} | null | null | null | null | true | null | 2786 | null | Default | null | null |
null | {
"abstract": " A publication trend in Physics Education by employing bibliometric analysis\nleads the researchers to describe current scientific movement. This paper tries\nto answer \"What do Physics education scientists concentrate in their\npublications?\" by analyzing the productivity and development of publications on\nthe subject category of Physics Education in the period 1980--2013. The Web of\nScience databases in the research areas of \"EDUCATION - EDUCATIONAL RESEARCH\"\nwas used to extract the publication trends. The study involves 1360\npublications, including 840 articles, 503 proceedings paper, 22 reviews, 7\neditorial material, 6 Book review, and one Biographical item. Number of\npublications with \"Physical Education\" in topic increased from 0.14 % (n = 2)\nin 1980 to 16.54 % (n = 225) in 2011. Total number of receiving citations is\n8071, with approximately citations per papers of 5.93. The results show the\npublication and citations in Physic Education has increased dramatically while\nthe Malaysian share is well ranked.\n",
"title": "Publication Trends in Physics Education: A Bibliometric study"
} | null | null | null | null | true | null | 2787 | null | Default | null | null |
null | {
"abstract": " Self-organization is a natural phenomenon that emerges in systems with a\nlarge number of interacting components. Self-organized systems show robustness,\nscalability, and flexibility, which are essential properties when handling\nreal-world problems. Swarm intelligence seeks to design nature-inspired\nalgorithms with a high degree of self-organization. Yet, we do not know why\nswarm-based algorithms work well and neither we can compare the different\napproaches in the literature. The lack of a common framework capable of\ncharacterizing these several swarm-based algorithms, transcending their\nparticularities, has led to a stream of publications inspired by different\naspects of nature without much regard as to whether they are similar to already\nexisting approaches. We address this gap by introducing a network-based\nframework$-$the interaction network$-$to examine computational swarm-based\nsystems via the optics of social dynamics. We discuss the social dimension of\nseveral swarm classes and provide a case study of the Particle Swarm\nOptimization. The interaction network enables a better understanding of the\nplethora of approaches currently available by looking at them from a general\nperspective focusing on the structure of the social interactions.\n",
"title": "Unveiling Swarm Intelligence with Network Science$-$the Metaphor Explained"
} | null | null | null | null | true | null | 2788 | null | Default | null | null |
null | {
"abstract": " A finite abstract simplicial complex G defines two finite simple graphs: the\nBarycentric refinement G1, connecting two simplices if one is a subset of the\nother and the connection graph G', connecting two simplices if they intersect.\nWe prove that the Poincare-Hopf value i(x)=1-X(S(x)), where X is Euler\ncharacteristics and S(x) is the unit sphere of a vertex x in G1, agrees with\nthe Green function value g(x,x),the diagonal element of the inverse of (1+A'),\nwhere A' is the adjacency matrix of G'. By unimodularity, det(1+A') is the\nproduct of parities (-1)^dim(x) of simplices in G, the Fredholm matrix 1+A' is\nin GL(n,Z), where n is the number of simplices in G. We show that the set of\npossible unit sphere topologies in G1 is a combinatorial invariant of the\ncomplex G. So, also the Green function range of G is a combinatorial invariant.\nTo prove the invariance of the unit sphere topology we use that all unit\nspheres in G1 decompose as a join of a stable and unstable part. The join\noperation + renders the category X of simplicial complexes into a monoid, where\nthe empty complex is the 0 element and the cone construction adds 1. The\naugmented Grothendieck group (X,+,0) contains the graph and sphere monoids\n(Graphs, +,0) and (Spheres,+,0). The Poincare-Hopf functionals i(G) as well as\nthe volume are multiplicative functions on (X,+). For the sphere group, both\ni(G) as well as Fredholm characteristic are characters. The join + can be\naugmented with a product * so that we have a commutative ring (X,+,0,*,1)for\nwhich there are both additive and multiplicative primes and which contains as a\nsubring of signed complete complexes isomorphic to the integers (Z,+,0,*,1). We\nalso look at the spectrum of the Laplacian of the join of two graphs. Both for\naddition + and multiplication *, one can ask whether unique prime factorization\nholds.\n",
"title": "Sphere geometry and invariants"
} | null | null | [
"Computer Science",
"Mathematics"
]
| null | true | null | 2789 | null | Validated | null | null |
null | {
"abstract": " Chaos and ergodicity are the cornerstones of statistical physics and\nthermodynamics. While classically even small systems like a particle in a\ntwo-dimensional cavity, can exhibit chaotic behavior and thereby relax to a\nmicrocanonical ensemble, quantum systems formally can not. Recent theoretical\nbreakthroughs and, in particular, the eigenstate thermalization hypothesis\n(ETH) however indicate that quantum systems can also thermalize. In fact ETH\nprovided us with a framework connecting microscopic models and macroscopic\nphenomena, based on the notion of highly entangled quantum states. Such\nthermalization was beautifully demonstrated experimentally by A. Kaufman et.\nal. who studied relaxation dynamics of a small lattice system of interacting\nbosonic particles. By directly measuring the entanglement entropy of\nsubsystems, as well as other observables, they showed that after the initial\ntransient time the system locally relaxes to a thermal ensemble while globally\nmaintaining a zero-entropy pure state.\n",
"title": "Chaos and thermalization in small quantum systems"
} | null | null | null | null | true | null | 2790 | null | Default | null | null |
null | {
"abstract": " Over the years, many different indexing techniques and search algorithms have\nbeen proposed, including CSS-trees, CSB+ trees, k-ary binary search, and fast\narchitecture sensitive tree search. There have also been papers on how best to\nset the many different parameters of these index structures, such as the node\nsize of CSB+ trees.\nThese indices have been proposed because CPU speeds have been increasing at a\ndramatically higher rate than memory speeds, giving rise to the Von Neumann\nCPU--Memory bottleneck. To hide the long latencies caused by memory access, it\nhas become very important to well-utilize the features of modern CPUs. In order\nto drive down the average number of CPU clock cycles required to execute CPU\ninstructions, and thus increase throughput, it has become important to achieve\na good utilization of CPU resources. Some of these are the data and instruction\ncaches, and the translation lookaside buffers. But it also has become important\nto avoid branch misprediction penalties, and utilize vectorization provided by\nCPUs in the form of SIMD instructions.\nWhile the layout of index structures has been heavily optimized for the data\ncache of modern CPUs, the instruction cache has been neglected so far. In this\npaper, we present NitroGen, a framework for utilizing code generation for\nspeeding up index traversal in main memory database systems. By bringing\ntogether data and code, we make index structures use the dormant resource of\nthe instruction cache. We show how to combine index compilation with previous\napproaches, such as binary tree search, cache-sensitive tree search, and the\narchitecture-sensitive tree search presented by Kim et al.\n",
"title": "Index Search Algorithms for Databases and Modern CPUs"
} | null | null | null | null | true | null | 2791 | null | Default | null | null |
null | {
"abstract": " Contour integration is a crucial technique in many numeric methods of\ninterest in physics ranging from differentiation to evaluating functions of\nmatrices. It is often important to determine whether a given contour contains\nany poles or branch cuts, either to make use of these features or to avoid\nthem. A special case of this problem is that of determining or bounding the\nradius of convergence of a function, as this provides a known circle around a\npoint in which a function remains analytic. We describe a method for\ndetermining whether or not a circular contour of a complex-analytic function\ncontains any poles. We then build on this to produce a robust method for\nbounding the radius of convergence of a complex-analytic function.\n",
"title": "Bounding the Radius of Convergence of Analytic Functions"
} | null | null | null | null | true | null | 2792 | null | Default | null | null |
null | {
"abstract": " The goal of this study is to develop an efficient numerical algorithm\napplicable to a wide range of compressible multicomponent flows. Although many\nhighly efficient algorithms have been proposed for simulating each type of the\nflows, the construction of a universal solver is known to be challenging.\nExtreme cases, such as incompressible and highly compressible flows, or\ninviscid and highly viscous flows, require different numerical treatments in\norder to maintain the efficiency, stability, and accuracy of the method.\nLinearized block implicit (LBI) factored schemes are known to provide an\nefficient way of solving the compressible Navier-Stokes equations implicitly,\nallowing us to avoid stability restrictions at low Mach number and high\nviscosity. However, the methods' splitting error has been shown to grow and\ndominate physical fluxes as the Mach number goes to zero. In this paper, a\nsplitting error reduction technique is proposed to solve the issue. A novel\nfinite element shock-capturing algorithm, proposed by Guermond and Popov, is\nreformulated in terms of finite differences, extended to the stiffened gas\nequation of state (SG EOS) and combined with the LBI factored scheme to\nstabilize the method around flow discontinuities at high Mach numbers. A novel\nstabilization term is proposed for low Mach number applications. The resulting\nalgorithm is shown to be efficient in both low and high Mach number regimes.\nThe algorithm is extended to the multicomponent case using an interface\ncapturing strategy with surface tension as a continuous surface force.\nNumerical tests are presented to verify the performance and stability\nproperties for a wide range of flows.\n",
"title": "An Efficient Algorithm for the Multicomponent Compressible Navier-Stokes Equations in Low- and High-Mach Number Regimes"
} | null | null | null | null | true | null | 2793 | null | Default | null | null |
null | {
"abstract": " Goldstone modes are massless particles resulting from spontaneous symmetry\nbreaking. Although such modes are found in elementary particle physics as well\nas in condensed matter systems like superfluid helium, superconductors and\nmagnons - structural Goldstone modes are rare. Epitaxial strain in thin films\ncan induce structures and properties not accessible in bulk and has been\nintensively studied for (001)-oriented perovskite oxides. Here we predict\nGoldstone-like phonon modes in (111)-strained SrMnO3 by first-principles\ncalculations. Under compressive strain the coupling between two in-plane\nrotational instabilities give rise to a Mexican hat shaped energy surface\ncharacteristic of a Goldstone mode. Conversely, large tensile strain induces\nin-plane polar instabilities with no directional preference, giving rise to a\ncontinuous polar ground state. Such phonon modes with U(1) symmetry could\nemulate structural condensed matter Higgs modes. The mass of this Higgs boson,\ngiven by the shape of the Mexican hat energy surface, can be tuned by strain\nthrough proper choice of substrate.\n",
"title": "Goldstone-like phonon modes in a (111)-strained perovskite"
} | null | null | [
"Physics"
]
| null | true | null | 2794 | null | Validated | null | null |
null | {
"abstract": " This paper provides a link between causal inference and machine learning\ntechniques - specifically, Classification and Regression Trees (CART) - in\nobservational studies where the receipt of the treatment is not randomized, but\nthe assignment to the treatment can be assumed to be randomized (irregular\nassignment mechanism). The paper contributes to the growing applied machine\nlearning literature on causal inference, by proposing a modified version of the\nCausal Tree (CT) algorithm to draw causal inference from an irregular\nassignment mechanism. The proposed method is developed by merging the CT\napproach with the instrumental variable framework to causal inference, hence\nthe name Causal Tree with Instrumental Variable (CT-IV). As compared to CT, the\nmain strength of CT-IV is that it can deal more efficiently with the\nheterogeneity of causal effects, as demonstrated by a series of numerical\nresults obtained on synthetic data. Then, the proposed algorithm is used to\nevaluate a public policy implemented by the Tuscan Regional Administration\n(Italy), which aimed at easing the access to credit for small firms. In this\ncontext, CT-IV breaks fresh ground for target-based policies, identifying\ninteresting heterogeneous causal effects.\n",
"title": "Estimating Heterogeneous Causal Effects in the Presence of Irregular Assignment Mechanisms"
} | null | null | null | null | true | null | 2795 | null | Default | null | null |
null | {
"abstract": " Traditional medicine typically applies one-size-fits-all treatment for the\nentire patient population whereas precision medicine develops tailored\ntreatment schemes for different patient subgroups. The fact that some factors\nmay be more significant for a specific patient subgroup motivates clinicians\nand medical researchers to develop new approaches to subgroup detection and\nanalysis, which is an effective strategy to personalize treatment. In this\nstudy, we propose a novel patient subgroup detection method, called Supervised\nBiclustring (SUBIC) using convex optimization and apply our approach to detect\npatient subgroups and prioritize risk factors for hypertension (HTN) in a\nvulnerable demographic subgroup (African-American). Our approach not only finds\npatient subgroups with guidance of a clinically relevant target variable but\nalso identifies and prioritizes risk factors by pursuing sparsity of the input\nvariables and encouraging similarity among the input variables and between the\ninput and target variables\n",
"title": "SUBIC: A Supervised Bi-Clustering Approach for Precision Medicine"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 2796 | null | Validated | null | null |
null | {
"abstract": " Leakage of polarized Galactic diffuse emission into total intensity can\npotentially mimic the 21-cm signal coming from the epoch of reionization (EoR),\nas both of them might have fluctuating spectral structure. Although we are\nsensitive to the EoR signal only in small fields of view, chromatic sidelobes\nfrom further away can contaminate the inner region. Here, we explore the\neffects of leakage into the 'EoR window' of the cylindrically averaged power\nspectra (PS) within wide fields of view using both observation and simulation\nof the 3C196 and NCP fields, two observing fields of the LOFAR-EoR project. We\npresent the polarization PS of two one-night observations of the two fields and\nfind that the NCP field has higher fluctuations along frequency, and\nconsequently exhibits more power at high-$k_\\parallel$ that could potentially\nleak to Stokes $I$. Subsequently, we simulate LOFAR observations of Galactic\ndiffuse polarized emission based on a model to assess what fraction of\npolarized power leaks into Stokes $I$ because of the primary beam. We find that\nthe rms fractional leakage over the instrumental $k$-space is $0.35\\%$ in the\n3C196 field and $0.27\\%$ in the NCP field, and it does not change significantly\nwithin the diameters of $15^\\circ$, $9^\\circ$ and $4^\\circ$. Based on the\nobserved PS and simulated fractional leakage, we show that a similar level of\nleakage into Stokes $I$ is expected in the 3C196 and NCP fields, and the\nleakage can be considered to be a bias in the PS.\n",
"title": "Polarization leakage in epoch of reionization windows: III. Wide-field effects of narrow-field arrays"
} | null | null | [
"Physics"
]
| null | true | null | 2797 | null | Validated | null | null |
null | {
"abstract": " This paper proposes a joint framework wherein lifting-based, separable,\nimage-matched wavelets are estimated from compressively sensed (CS) images and\nused for the reconstruction of the same. Matched wavelet can be easily designed\nif full image is available. Also matched wavelet may provide better\nreconstruction results in CS application compared to standard wavelet\nsparsifying basis. Since in CS application, we have compressively sensed image\ninstead of full image, existing methods of designing matched wavelet cannot be\nused. Thus, we propose a joint framework that estimates matched wavelet from\nthe compressively sensed images and also reconstructs full images. This paper\nhas three significant contributions. First, lifting-based, image-matched\nseparable wavelet is designed from compressively sensed images and is also used\nto reconstruct the same. Second, a simple sensing matrix is employed to sample\ndata at sub-Nyquist rate such that sensing and reconstruction time is reduced\nconsiderably without any noticeable degradation in the reconstruction\nperformance. Third, a new multi-level L-Pyramid wavelet decomposition strategy\nis provided for separable wavelet implementation on images that leads to\nimproved reconstruction performance. Compared to CS-based reconstruction using\nstandard wavelets with Gaussian sensing matrix and with existing wavelet\ndecomposition strategy, the proposed methodology provides faster and better\nimage reconstruction in compressive sensing application.\n",
"title": "Image Reconstruction using Matched Wavelet Estimated from Data Sensed Compressively using Partial Canonical Identity Matrix"
} | null | null | null | null | true | null | 2798 | null | Default | null | null |
null | {
"abstract": " Here we present a working framework to establish finite abelian groups in\npython. The primary aim is to allow new A-level students to work with examples\nof finite abelian groups using open source software. We include the code used\nin the implementation of the framework. We also prove some useful results\nregarding finite abelian groups which are used to establish the functions and\nhelp show how number theoretic results can blend with computational power when\nstudying algebra. The groups established are based modular multiplication and\naddition. We include direct products of cyclic groups meaning the user has\naccess to all finite abelian groups.\n",
"title": "Python Implementation and Construction of Finite Abelian Groups"
} | null | null | null | null | true | null | 2799 | null | Default | null | null |
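The abstract above describes representing finite abelian groups in Python via modular addition/multiplication and direct products of cyclic groups. A minimal illustrative sketch (the class and method names here are assumptions, not the paper's actual code) might look like this, using the structure theorem that every finite abelian group is a direct product of cyclic groups:

```python
from itertools import product
from math import gcd

# Hypothetical sketch of a cyclic group Z_n under addition mod n.
class CyclicGroup:
    def __init__(self, n):
        self.n = n
        self.elements = list(range(n))

    def op(self, a, b):
        # Group operation: addition modulo n.
        return (a + b) % self.n

    def inverse(self, a):
        return (-a) % self.n

    def order(self, a):
        # The order of a in Z_n is n / gcd(a, n).
        return self.n // gcd(a, self.n)

# Direct product of cyclic groups, applied componentwise; by the
# structure theorem this reaches every finite abelian group.
class DirectProduct:
    def __init__(self, *groups):
        self.groups = groups
        self.elements = list(product(*(g.elements for g in groups)))

    def op(self, x, y):
        return tuple(g.op(a, b) for g, a, b in zip(self.groups, x, y))
```

For example, `DirectProduct(CyclicGroup(2), CyclicGroup(3))` gives a group of six elements isomorphic to Z_6.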
null | {
"abstract": " We investigate the problem of testing the equivalence between two discrete\nhistograms. A {\\em $k$-histogram} over $[n]$ is a probability distribution that\nis piecewise constant over some set of $k$ intervals over $[n]$. Histograms\nhave been extensively studied in computer science and statistics. Given a set\nof samples from two $k$-histogram distributions $p, q$ over $[n]$, we want to\ndistinguish (with high probability) between the cases that $p = q$ and\n$\\|p-q\\|_1 \\geq \\epsilon$. The main contribution of this paper is a new\nalgorithm for this testing problem and a nearly matching information-theoretic\nlower bound. Specifically, the sample complexity of our algorithm matches our\nlower bound up to a logarithmic factor, improving on previous work by\npolynomial factors in the relevant parameters. Our algorithmic approach applies\nin a more general setting and yields improved sample upper bounds for testing\ncloseness of other structured distributions as well.\n",
"title": "Near-Optimal Closeness Testing of Discrete Histogram Distributions"
} | null | null | null | null | true | null | 2800 | null | Default | null | null |
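The closeness-testing problem in the abstract above asks whether two distributions over [n] satisfy p = q or ||p - q||_1 >= eps, given samples from each. The paper's near-optimal algorithm is not reproduced here; as a baseline only, a naive plug-in tester (all names and the eps/2 threshold are illustrative assumptions) can be sketched as:

```python
from collections import Counter

def empirical_l1(samples_p, samples_q, n):
    # Plug-in estimate of ||p - q||_1 from two sample sets over [n],
    # comparing the empirical frequencies symbol by symbol.
    cp, cq = Counter(samples_p), Counter(samples_q)
    mp, mq = len(samples_p), len(samples_q)
    return sum(abs(cp[i] / mp - cq[i] / mq) for i in range(n))

def naive_closeness_test(samples_p, samples_q, n, eps):
    # Accept "p = q" when the plug-in distance falls below eps / 2.
    # This naive test needs far more samples than the paper's
    # near-optimal algorithm, which exploits k-histogram structure.
    return empirical_l1(samples_p, samples_q, n) < eps / 2
```

The point of the paper is precisely that for k-histogram distributions one can do much better than this plug-in baseline, with sample complexity matching an information-theoretic lower bound up to a logarithmic factor.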