text: null
inputs: dict
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 class)
explanation: null
id: stringlengths 1-5
metadata: null
status: stringclasses (2 values)
event_timestamp: null
metrics: null
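The column list above describes the thirteen fields of each row in the preview that follows: an arXiv title and abstract under inputs, an optional list of subject labels under annotation, a boolean multi_label flag, a string id, and a review status that takes one of two values (Default or Validated); the remaining fields are null in every row shown. As a minimal sketch of that record structure, assuming the rows are available as plain JSON objects with exactly these keys (the Record dataclass and load_record helper are illustrative, not part of the original dump):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Record:
        """One row of the preview: an arXiv title/abstract plus its labelling state."""
        inputs: dict                      # {"abstract": "...", "title": "..."}
        annotation: Optional[list]        # e.g. ["Mathematics", "Statistics"], or None
        multi_label: bool                 # True in every row shown (the "1 class" above)
        id: str                           # e.g. "5201"
        status: str                       # "Default" or "Validated"
        text: Optional[str] = None        # the remaining columns are null in this preview
        prediction: Optional[list] = None
        prediction_agent: Optional[str] = None
        annotation_agent: Optional[str] = None
        explanation: Optional[str] = None
        metadata: Optional[dict] = None
        event_timestamp: Optional[str] = None
        metrics: Optional[dict] = None

    def load_record(row: dict) -> Record:
        # Map one raw JSON row onto the column list above; fields that are
        # missing or null simply keep their None defaults.
        return Record(
            inputs=row["inputs"],
            annotation=row.get("annotation"),
            multi_label=bool(row.get("multi_label", True)),
            id=str(row["id"]),
            status=row.get("status", "Default"),
        )

Keeping the always-null columns as optional defaults keeps the sketch faithful to the column list without inventing values for them.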
text: null
inputs:
{ "abstract": " In this paper we present a loss-based approach to change point analysis. In\nparticular, we look at the problem from two perspectives. The first focuses on\nthe definition of a prior when the number of change points is known a priori.\nThe second contribution aims to estimate the number of change points by using a\nloss-based approach recently introduced in the literature. The latter considers\nchange point estimation as a model selection exercise. We show the performance\nof the proposed approach on simulated data and real data sets.\n", "title": "Objective Bayesian Analysis for Change Point Problems" }
prediction: null
prediction_agent: null
annotation: [ "Mathematics", "Statistics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 5201
metadata: null
status: Validated
event_timestamp: null
metrics: null
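Every row below repeats the same thirteen fields; in the rows shown, only annotation, id, and status vary, and an annotation list appears exactly when status is Validated. As a small sketch of how those two fields could be tallied, assuming the rows have been parsed into dictionaries like the labelled record above (the summarise helper and the two-row example are illustrative, not part of the dump):

    from collections import Counter

    def summarise(records):
        # Count rows per review status and occurrences of each annotated label.
        by_status = Counter(r["status"] for r in records)
        labels = Counter(lab for r in records for lab in (r.get("annotation") or []))
        return by_status, labels

    # Example with the first two rows of the preview:
    rows = [
        {"id": "5201", "status": "Validated", "annotation": ["Mathematics", "Statistics"]},
        {"id": "5202", "status": "Default", "annotation": None},
    ]
    print(summarise(rows))
    # (Counter({'Validated': 1, 'Default': 1}), Counter({'Mathematics': 1, 'Statistics': 1}))

Applied to the full preview, the same tally gives the Default/Validated split and the per-label counts directly.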
text: null
inputs:
{ "abstract": " An elastic foil interacting with a uniform flow with its trailing edge\nclamped, also known as the inverted foil, exhibits a wide range of complex\nself-induced flapping regimes such as large amplitude flapping (LAF), deformed\nand flipped flapping. Here, we perform three-dimensional numerical experiments\nto examine the role of vortex shedding and the vortex-vortex interaction on the\nLAF response at Reynolds number Re=30,000. Here we investigate the dynamics of\nthe inverted foil for a novel configuration wherein we introduce a fixed\nsplitter plate at the trailing edge to suppress the vortex shedding from\ntrailing edge and inhibit the interaction between the counter-rotating\nvortices. We find that the inhibition of the interaction has an insignificant\neffect on the transverse flapping amplitudes, due to a relatively weaker\ncoupling between the counter-rotating vortices emanating from the leading edge\nand trailing edge. However, the inhibition of the trailing edge vortex reduces\nthe streamwise flapping amplitude, the flapping frequency and the net strain\nenergy of foil. To further generalize our understanding of the LAF, we next\nperform low-Reynolds number (Re$\\in[0.1,50]$) simulations for the identical\nfoil properties to realize the impact of vortex shedding on the large amplitude\nflapping. Due to the absence of vortex shedding process in the low-$Re$ regime,\nthe inverted foil no longer exhibits the periodic flapping. However, the\nflexible foil still loses its stability through divergence instability to\nundergo a large static deformation. Finally, we introduce an analogous\nanalytical model for the LAF based on the dynamics of an elastically mounted\nflat plate undergoing flow-induced pitching oscillations in a uniform stream.\n", "title": "On the Mechanism of Large Amplitude Flapping of Inverted Foil in a Uniform Flow" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5202
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " A foundation of the modern technology that uses single-crystal silicon has\nbeen the growth of high-quality single-crystal Si ingots with diameters up to\n12 inches or larger. For many applications of graphene, large-area high-quality\n(ideally of single-crystal) material will be enabling. Since the first growth\non copper foil a decade ago, inch-sized single-crystal graphene has been\nachieved. We present here the growth, in 20 minutes, of a graphene film of 5 x\n50 cm2 dimension with > 99% ultra-highly oriented grains. This growth was\nachieved by: (i) synthesis of sub-metre-sized single-crystal Cu(111) foil as\nsubstrate; (ii) epitaxial growth of graphene islands on the Cu(111) surface;\n(iii) seamless merging of such graphene islands into a graphene film with high\nsingle crystallinity and (iv) the ultrafast growth of graphene film. These\nachievements were realized by a temperature-driven annealing technique to\nproduce single-crystal Cu(111) from industrial polycrystalline Cu foil and the\nmarvellous effects of a continuous oxygen supply from an adjacent oxide. The\nas-synthesized graphene film, with very few misoriented grains (if any), has a\nmobility up to ~ 23,000 cm2V-1s-1 at 4 K and room temperature sheet resistance\nof ~ 230 ohm/square. It is very likely that this approach can be scaled up to\nachieve exceptionally large and high-quality graphene films with single\ncrystallinity, and thus realize various industrial-level applications at a low\ncost.\n", "title": "Ultrafast Epitaxial Growth of Metre-Sized Single-Crystal Graphene on Industrial Cu Foil" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5203
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " A new lower bound on the average reconstruction error variance of\nmultidimensional sampling and reconstruction is presented. It applies to\nsampling on arbitrary lattices in arbitrary dimensions, assuming a stochastic\nprocess with constant, isotropically bandlimited spectrum and reconstruction by\nthe best linear interpolator. The lower bound is exact for any lattice at\nsufficiently high and low sampling rates. The two threshold rates where the\nerror variance deviates from the lower bound gives two optimality criteria for\nsampling lattices. It is proved that at low rates, near the first threshold,\nthe optimal lattice is the dual of the best sphere-covering lattice, which for\nthe first time establishes a rigorous relation between optimal sampling and\noptimal sphere covering. A previously known result is confirmed at high rates,\nnear the second threshold, namely, that the optimal lattice is the dual of the\nbest sphere-packing lattice. Numerical results quantify the performance of\nvarious lattices for sampling and support the theoretical optimality criteria.\n", "title": "Multidimensional Sampling of Isotropically Bandlimited Signals" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5204
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We use a weighted variant of the frequency functions introduced by Almgren to\nprove sharp asymptotic estimates for almost eigenfunctions of the drift\nLaplacian associated to the Gaussian weight on an asymptotically conical end.\nAs a consequence, we obtain a purely elliptic proof of a result of L. Wang on\nthe uniqueness of self-shrinkers of the mean curvature flow asymptotic to a\ngiven cone. Another consequence is a unique continuation property for\nself-expanders of the mean curvature flow that flow from a cone.\n", "title": "Asymptotic structure of almost eigenfunctions of drift Laplacians on conical ends" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5205
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We present optical spectroscopy of the recently discovered hyperbolic\nnear-Earth object A/2017 U1, taken on 25 Oct 2017 at Palomar Observatory.\nAlthough our data are at a very low signal-to-noise, they indicate a very red\nsurface at optical wavelengths without significant absorption features.\n", "title": "Palomar Optical Spectrum of Hyperbolic Near-Earth Object A/2017 U1" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5206
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " A single quantum dot deterministically coupled to a photonic crystal\nenvironment constitutes an indispensable elementary unit to both generate and\nmanipulate single-photons in next-generation quantum photonic circuits. To\ndate, the scaling of the number of these quantum nodes on a fully-integrated\nchip has been prevented by the use of optical pumping strategies that require a\nbulky off-chip laser along with the lack of methods to control the energies of\nnano-cavities and emitters. Here, we concurrently overcome these limitations by\ndemonstrating electrical injection of single excitonic lines within a\nnano-electro-mechanically tuneable photonic crystal cavity. When an\nelectrically-driven dot line is brought into resonance with a photonic crystal\nmode, its emission rate is enhanced. Anti-bunching experiments reveal the\nquantum nature of these on-demand sources emitting in the telecom range. These\nresults represent an important step forward in the realization of integrated\nquantum optics experiments featuring multiple electrically-triggered\nPurcell-enhanced single-photon sources embedded in a reconfigurable\nsemiconductor architecture.\n", "title": "Electrically driven quantum light emission in electromechanically-tuneable photonic crystal cavities" }
prediction: null
prediction_agent: null
annotation: [ "Physics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 5207
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Answering queries over a federation of SPARQL endpoints requires combining\ndata from more than one data source. Optimizing queries in such scenarios is\nparticularly challenging not only because of (i) the large variety of possible\nquery execution plans that correctly answer the query but also because (ii)\nthere is only limited access to statistics about schema and instance data of\nremote sources. To overcome these challenges, most federated query engines rely\non heuristics to reduce the space of possible query execution plans or on\ndynamic programming strategies to produce optimal plans. Nevertheless, these\nplans may still exhibit a high number of intermediate results or high execution\ntimes because of heuristics and inaccurate cost estimations. In this paper, we\npresent Odyssey, an approach that uses statistics that allow for a more\naccurate cost estimation for federated queries and therefore enables Odyssey to\nproduce better query execution plans. Our experimental results show that\nOdyssey produces query execution plans that are better in terms of data\ntransfer and execution time than state-of-the-art optimizers. Our experiments\nusing the FedBench benchmark show execution time gains of at least 25 times on\naverage.\n", "title": "The Odyssey Approach for Optimizing Federated SPARQL Queries" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5208
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " It is unknown if there exists a locally $\\alpha$-Hölder homeomorphism\n$f:\\mathbb{R}^3\\to \\mathbb{H}^1$ for any $\\frac{1}{2}< \\alpha\\le \\frac{2}{3}$,\nalthough the identity map $\\mathbb{R}^3\\to \\mathbb{H}^1$ is locally\n$\\frac{1}{2}$-Hölder. More generally, Gromov asked: Given $k$ and a Carnot\ngroup $G$, for which $\\alpha$ does there exist a locally $\\alpha$-Hölder\nhomeomorphism $f:\\mathbb{R}^k\\to G$? Here, we equip a Carnot group $G$ with the\nCarnot-Carathéodory metric. In 2014, Balogh, Hajlasz, and Wildrick considered\na variant of this problem. These authors proved that if $k>n$, there does not\nexist an injective, $(\\frac{1}{2}+)$-Hölder mapping $f:\\mathbb{R}^k\\to\n\\mathbb{H}^n$ that is also locally Lipschitz as a mapping into\n$\\mathbb{R}^{2n+1}$. For their proof, they use the fact that $\\mathbb{H}^n$ is\npurely $k$-unrectifiable for $k>n$. In this paper, we will extend their result\nfrom the Heisenberg group to model filiform groups and Carnot groups of step at\nmost three. We will now require that the Carnot group is purely\n$k$-unrectifiable. The main key to our proof will be showing that\n$(\\frac{1}{2}+)$-Hölder maps $f:\\mathbb{R}^k\\to G$ that are locally Lipschitz\ninto Euclidean space, are weakly contact. Proving weak contactness in these two\nsettings requires understanding the relationship between the algebraic and\nmetric structures of the Carnot group. We will use coordinates of the first and\nsecond kind for Carnot groups.\n", "title": "A variant of Gromov's problem on Hölder equivalence of Carnot groups" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5209
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The game of the Towers of Hanoi is generalized to binary trees. First, a\nstraightforward solution of the game is discussed. Second, a shorter solution\nis presented, which is then shown to be optimal.\n", "title": "The Trees of Hanoi" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5210
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We consider the problem of estimating the mean of a noisy vector. When the\nmean lies in a convex constraint set, the least squares projection of the\nrandom vector onto the set is a natural estimator. Properties of the risk of\nthis estimator, such as its asymptotic behavior as the noise tends to zero,\nhave been well studied. We instead study the behavior of this estimator under\nmisspecification, that is, without the assumption that the mean lies in the\nconstraint set. For appropriately defined notions of risk in the misspecified\nsetting, we prove a generalization of a low noise characterization of the risk\ndue to Oymak and Hassibi in the case of a polyhedral constraint set. An\ninteresting consequence of our results is that the risk can be much smaller in\nthe misspecified setting than in the well-specified setting. We also discuss\nconsequences of our result for isotonic regression.\n", "title": "On the risk of convex-constrained least squares estimators under misspecification" }
prediction: null
prediction_agent: null
annotation: [ "Mathematics", "Statistics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 5211
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " A good classification method should yield more accurate results than simple\nheuristics. But there are classification problems, especially high-dimensional\nones like the ones based on image/video data, for which simple heuristics can\nwork quite accurately; the structure of the data in such problems is easy to\nuncover without any sophisticated or computationally expensive method. On the\nother hand, some problems have a structure that can only be found with\nsophisticated pattern recognition methods. We are interested in quantifying the\ndifficulty of a given high-dimensional pattern recognition problem. We consider\nthe case where the patterns come from two pre-determined classes and where the\nobjects are represented by points in a high-dimensional vector space. However,\nthe framework we propose is extendable to an arbitrarily large number of\nclasses. We propose classification benchmarks based on simple random projection\nheuristics. Our benchmarks are 2D curves parameterized by the classification\nerror and computational cost of these simple heuristics. Each curve divides the\nplane into a \"positive- gain\" and a \"negative-gain\" region. The latter contains\nmethods that are ill-suited for the given classification problem. The former is\ndivided into two by the curve asymptote; methods that lie in the small region\nunder the curve but right of the asymptote merely provide a computational gain\nbut no structural advantage over the random heuristics. We prove that the curve\nasymptotes are optimal (i.e. at Bayes error) in some cases, and thus no\nsophisticated method can provide a structural advantage over the random\nheuristics. Such classification problems, an example of which we present in our\nnumerical experiments, provide poor ground for testing new pattern\nclassification methods.\n", "title": "Benchmarks for Image Classification and Other High-dimensional Pattern Recognition Problems" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5212
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Although the rate region for the lossless many-help-one problem with\nindependently degraded helpers is already \"solved\", its solution is given in\nterms of a convex closure over a set of auxiliary random variables. Thus, for\nany such a problem in particular, an optimization over the set of auxiliary\nrandom variables is required to truly solve the rate region. Providing the\nsolution is surprisingly difficult even for an example as basic as binary\nsources. In this work, we derive a simple and tight inner bound on the rate\nregion's lower boundary for the lossless many-help-one problem with\nindependently degraded helpers when specialized to sources that are binary,\nuniformly distributed, and interrelated through symmetric channels. This\nscenario finds important applications in emerging cooperative communication\nschemes in which the direct-link transmission is assisted via multiple lossy\nrelaying links. Numerical results indicate that the derived inner bound proves\nincreasingly tight as the helpers become more degraded.\n", "title": "On the Binary Lossless Many-Help-One Problem with Independently Degraded Helpers" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5213
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Correlated topic modeling has been limited to small model and problem sizes\ndue to their high computational cost and poor scaling. In this paper, we\npropose a new model which learns compact topic embeddings and captures topic\ncorrelations through the closeness between the topic vectors. Our method\nenables efficient inference in the low-dimensional embedding space, reducing\nprevious cubic or quadratic time complexity to linear w.r.t the topic size. We\nfurther speedup variational inference with a fast sampler to exploit sparsity\nof topic occurrence. Extensive experiments show that our approach is capable of\nhandling model and data scales which are several orders of magnitude larger\nthan existing correlation results, without sacrificing modeling quality by\nproviding competitive or superior performance in document classification and\nretrieval.\n", "title": "Efficient Correlated Topic Modeling with Topic Embedding" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5214
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We prove risk bounds for binary classification in high-dimensional settings\nwhen the sample size is allowed to be smaller than the dimensionality of the\ntraining set observations. In particular, we prove upper bounds for both\n'compressive learning' by empirical risk minimization (ERM) (that is when the\nERM classifier is learned from data that have been projected from\nhigh-dimensions onto a randomly selected low-dimensional subspace) as well as\nuniform upper bounds in the full high-dimensional space. A novel tool we employ\nin both settings is the 'flipping probability' of Durrant and Kaban (ICML 2013)\nwhich we use to capture benign geometric structures that make a classification\nproblem 'easy' in the sense of demanding a relatively low sample size for\nguarantees of good generalization. Furthermore our bounds also enable us to\nexplain or draw connections between several existing successful classification\nalgorithms. Finally we show empirically that our bounds are informative enough\nin practice to serve as the objective function for learning a classifier (by\nusing them to do so).\n", "title": "Structure-aware error bounds for linear classification with the zero-one loss" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5215
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We propose a dimensional reduction procedure in the Stolz--Teichner framework\nof supersymmetric Euclidean field theories (EFTs) that is well-suited in the\npresence of a finite gauge group or, more generally, for field theories over an\norbifold. As an illustration, we give a geometric interpretation of the Chern\ncharacter for manifolds with an action by a finite group.\n", "title": "Dimensional reduction and the equivariant Chern character" }
prediction: null
prediction_agent: null
annotation: [ "Mathematics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 5216
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We present a practical approach for processing mobile sensor time series data\nfor continual deep learning predictions. The approach comprises data cleaning,\nnormalization, capping, time-based compression, and finally classification with\na recurrent neural network. We demonstrate the effectiveness of the approach in\na case study with 279 participants. On the basis of sparse sensor events, the\nnetwork continually predicts whether the participants would attend to a\nnotification within 10 minutes. Compared to a random baseline, the classifier\nachieves a 40% performance increase (AUC of 0.702) on a withheld test set. This\napproach allows to forgo resource-intensive, domain-specific, error-prone\nfeature engineering, which may drastically increase the applicability of\nmachine learning to mobile phone sensor data.\n", "title": "Practical Processing of Mobile Sensor Data for Continual Deep Learning Predictions" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science" ]
annotation_agent: null
multi_label: true
explanation: null
id: 5217
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Motivated by the rapid rise in statistical tools in Functional Data Analysis,\nwe consider the Gaussian mechanism for achieving differential privacy with\nparameter estimates taking values in a, potentially infinite-dimensional,\nseparable Banach space. Using classic results from probability theory, we show\nhow densities over function spaces can be utilized to achieve the desired\ndifferential privacy bounds. This extends prior results of Hall et al (2013) to\na much broader class of statistical estimates and summaries, including \"path\nlevel\" summaries, nonlinear functionals, and full function releases. By\nfocusing on Banach spaces, we provide a deeper picture of the challenges for\nprivacy with complex data, especially the role regularization plays in\nbalancing utility and privacy. Using an application to penalized smoothing, we\nexplicitly highlight this balance in the context of mean function estimation.\nSimulations and an application to diffusion tensor imaging are briefly\npresented, with extensive additions included in a supplement.\n", "title": "Formal Privacy for Functional Data with Gaussian Perturbations" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5218
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " A well-known result in the study of convex polyhedra, due to Minkowski, is\nthat a convex polyhedron is uniquely determined (up to translation) by the\ndirections and areas of its faces. The theorem guarantees existence of the\npolyhedron associated to given face normals and areas, but does not provide a\nconstructive way to find it explicitly. This article provides an algorithm to\nreconstruct 3D convex polyhedra from their face normals and areas, based on an\nmethod by Lasserre to compute the volume of a convex polyhedron in\n$\\mathbb{R}^n$. A Python implementation of the algorithm is available at\nthis https URL.\n", "title": "An algorithm to reconstruct convex polyhedra from their face normals and areas" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5219
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In this work, we assess the accuracy of dielectric-dependent hybrid density\nfunctionals and many-body perturbation theory methods for the calculation of\nelectron affinities of small water clusters, including hydrogen-bonded water\ndimer and water hexamer isomers. We show that many-body perturbation theory in\nthe G$_0$W$_0$ approximation starting with the dielectric-dependent hybrid\nfunctionals predicts electron affinities of clusters within 0.1 eV of the\ncoupled-cluster results with single, double, and perturbative triple\nexcitations.\n", "title": "Electron affinities of water clusters from density-functional and many-body-perturbation theory" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5220
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The flow in a shock tube is extremely complex with dynamic multi-scale\nstructures of sharp fronts, flow separation, and vortices due to the\ninteraction of the shock wave, the contact surface, and the boundary layer over\nthe side wall of the tube. Prediction and understanding of the complex fluid\ndynamics is of theoretical and practical importance. It is also an extremely\nchallenging problem for numerical simulation, especially at relatively high\nReynolds numbers. Daru & Tenaud (Daru, V. & Tenaud, C. 2001 Evaluation of TVD\nhigh resolution schemes for unsteady viscous shocked flows. Computers & Fluids\n30, 89-113) proposed a two-dimensional model problem as a numerical test case\nfor high-resolution schemes to simulate the flow field in a square closed shock\ntube. Though many researchers have tried this problem using a variety of\ncomputational methods, there is not yet an agreed-upon grid-converged solution\nof the problem at the Reynolds number of 1000. This paper presents a rigorous\ngrid-convergence study and the resulting grid-converged solutions for this\nproblem by using a newly-developed, efficient, and high-order gas-kinetic\nscheme. Critical data extracted from the converged solutions are documented as\nbenchmark data. The complex fluid dynamics of the flow at Re = 1000 are\ndiscussed and analysed in detail. Major phenomena revealed by the numerical\ncomputations include the downward concentration of the fluid through the curved\nshock, the formation of the vortices, the mechanism of the shock wave\nbifurcation, the structure of the jet along the bottom wall, and the\nKelvin-Helmholtz instability near the contact surface.\n", "title": "Grid-converged Solution and Analysis of the Unsteady Viscous Flow in a Two-dimensional Shock Tube" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5221
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We study the Kitaev chain under generalized twisted boundary conditions, for\nwhich both the amplitudes and the phases of the boundary couplings can be tuned\nat will. We explicitly show the presence of exact zero modes for large chains\nbelonging to the topological phase in the most general case, in spite of the\nabsence of \"edges\" in the system. For specific values of the phase parameters,\nwe rigorously obtain the condition for the presence of the exact zero modes in\nfinite chains, and show that the zero modes obtained are indeed localized. The\nfull spectrum of the twisted chains with zero chemical potential is\nanalytically presented. Finally, we demonstrate the persistence of zero modes\n(level crossing) even in the presence of disorder or interactions.\n", "title": "Exact zero modes in twisted Kitaev chains" }
prediction: null
prediction_agent: null
annotation: [ "Physics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 5222
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We consider the general problem of modeling temporal data with long-range\ndependencies, wherein new observations are fully or partially predictable based\non temporally-distant, past observations. A sufficiently powerful temporal\nmodel should separate predictable elements of the sequence from unpredictable\nelements, express uncertainty about those unpredictable elements, and rapidly\nidentify novel elements that may help to predict the future. To create such\nmodels, we introduce Generative Temporal Models augmented with external memory\nsystems. They are developed within the variational inference framework, which\nprovides both a practical training methodology and methods to gain insight into\nthe models' operation. We show, on a range of problems with sparse, long-term\ntemporal dependencies, that these models store information from early in a\nsequence, and reuse this stored information efficiently. This allows them to\nperform substantially better than existing models based on well-known recurrent\nneural networks, like LSTMs.\n", "title": "Generative Temporal Models with Memory" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5223
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " This is an empirical paper that addresses the role of bilateral and\nmultilateral international co-authorships in the six leading science systems\namong the ASEAN group of countries (ASEAN6). The paper highlights the different\nways that bilateral and multilateral co-authorships structure global networks\nand the collaborations of the ASEAN6. The paper looks at the influence of the\ncollaboration styles of major collaborating countries of the ASEAN6,\nparticularly the USA and Japan. It also highlights the role of bilateral and\nmultilateral co-authorships in the production of knowledge in the leading\nspecialisations of the ASEAN6. The discussion section offers some tentative\nexplanations for major dynamics evident in the results and summarises the next\nsteps in this research.\n", "title": "Global research collaboration: Networks and partners in South East Asia" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5224
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " How the information microscopically processed by individual neurons is\nintegrated and used in organising the macroscopic behaviour of an animal is a\ncentral question in neuroscience. Coherence of dynamics over different scales\nhas been suggested as a clue to the mechanisms underlying this integration.\nBalanced excitation and inhibition amplify microscopic fluctuations to a\nmacroscopic level and may provide a mechanism for generating coherent dynamics\nover the two scales. Previous theories of brain dynamics, however, have been\nrestricted to cases in which population-averaged activities have been\nconstrained to constant values, that is, to cases with no macroscopic degrees\nof freedom. In the present study, we investigate balanced neuronal networks\nwith a nonzero number of macroscopic degrees of freedom coupled to microscopic\ndegrees of freedom. In these networks, amplified microscopic fluctuations drive\nthe macroscopic dynamics, while the macroscopic dynamics determine the\nstatistics of the microscopic fluctuations. We develop a novel type of\nmean-field theory applicable to this class of interscale interactions, for\nwhich an analytical approach has previously been unknown. Irregular macroscopic\nrhythms similar to those observed in the brain emerge spontaneously as a result\nof such interactions. Microscopic inputs to a small number of neurons\neffectively entrain the whole network through the amplification mechanism.\nNeuronal responses become coherent as the magnitude of either the balanced\nexcitation and inhibition or the external inputs is increased. Our mean-field\ntheory successfully predicts the behaviour of the model. Our numerical results\nfurther suggest that the coherent dynamics can be used for selective read-out\nof information. In conclusion, our results show a novel form of neuronal\ninformation processing that bridges different scales, and advance our\nunderstanding of the brain.\n", "title": "Spontaneous and stimulus-induced coherent states of dynamically balanced neuronal networks" }
prediction: null
prediction_agent: null
annotation: [ "Physics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 5225
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Pairwise association measure is an important operation in data analytics.\nKendall's tau coefficient is one widely used correlation coefficient\nidentifying non-linear relationships between ordinal variables. In this paper,\nwe investigated a parallel algorithm accelerating all-pairs Kendall's tau\ncoefficient computation via single instruction multiple data (SIMD) vectorized\nsorting on Intel Xeon Phis by taking advantage of many processing cores and\n512-bit SIMD vector instructions. To facilitate workload balancing and overcome\non-chip memory limitation, we proposed a generic framework for symmetric\nall-pairs computation by building provable bijective functions between job\nidentifier and coordinate space. Performance evaluation demonstrated that our\nalgorithm on one 5110P Phi achieves two orders-of-magnitude speedups over\n16-threaded MATLAB and three orders-of-magnitude speedups over sequential R,\nboth running on high-end CPUs. Besides, our algorithm exhibited rather good\ndistributed computing scalability with respect to number of Phis. Source code\nand datasets are publicly available at this http URL.\n", "title": "Parallelized Kendall's Tau Coefficient Computation via SIMD Vectorized Sorting On Many-Integrated-Core Processors" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5226
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Repeated exposure to low-level blast may initiate a range of adverse health\nproblem such as traumatic brain injury (TBI). Although many studies\nsuccessfully identified genes associated with TBI, yet the cellular mechanisms\nunderpinning TBI are not fully elucidated. In this study, we investigated\nunderlying relationship among genes through constructing transcript Bayesian\nnetworks using RNA-seq data. The data for pre- and post-blast transcripts,\nwhich were collected on 33 individuals in Army training program, combined with\nour system approach provide unique opportunity to investigate the effect of\nblast-wave exposure on gene-gene interactions. Digging into the networks, we\nidentified four subnetworks related to immune system and inflammatory process\nthat are disrupted due to the exposure. Among genes with relatively high fold\nchange in their transcript expression level, ATP6V1G1, B2M, BCL2A1, PELI,\nS100A8, TRIM58 and ZNF654 showed major impact on the dysregulation of the\ngene-gene interactions. This study reveals how repeated exposures to traumatic\nconditions increase the level of fold change of transcript expression and\nhypothesizes new targets for further experimental studies.\n", "title": "Effect of Blast Exposure on Gene-Gene Interactions" }
prediction: null
prediction_agent: null
annotation: [ "Quantitative Biology" ]
annotation_agent: null
multi_label: true
explanation: null
id: 5227
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Gaussian Markov random fields are used in a large number of disciplines in\nmachine vision and spatial statistics. The models take advantage of sparsity in\nmatrices introduced through the Markov assumptions, and all operations in\ninference and prediction use sparse linear algebra operations that scale well\nwith dimensionality. Yet, for very high-dimensional models, exact computation\nof predictive variances of linear combinations of variables is generally\ncomputationally prohibitive, and approximate methods (generally interpolation\nor conditional simulation) are typically used instead. A set of conditions are\nestablished under which the variances of linear combinations of random\nvariables can be computed exactly using the Takahashi recursions. The ensuing\ncomputational simplification has wide applicability and may be used to enhance\nseveral software packages where model fitting is seated in a maximum-likelihood\nframework. The resulting algorithm is ideal for use in a variety of spatial\nstatistical applications, including \\emph{LatticeKrig} modelling, statistical\ndownscaling, and fixed rank kriging. It can compute hundreds of thousands exact\npredictive variances of linear combinations on a standard desktop with ease,\neven when large spatial GMRF models are used.\n", "title": "A sparse linear algebra algorithm for fast computation of prediction variances with Gaussian Markov random fields" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5228
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We initiate a study of path spaces in the nascent context of \"motivic dga's\",\nunder development in doctoral work by Gabriella Guzman. This enables us to\nreconstruct the unipotent fundamental group of a pointed scheme from the\nassociated augmented motivic dga, and provides us with a factorization of Kim's\nrelative unipotent section conjecture into several smaller conjectures with a\nhomotopical flavor. Based on a conversation with Joseph Ayoub, we prove that\nthe path spaces of the punctured projective line over a number field are\nconcentrated in degree zero with respect to Levine's t-structure for mixed Tate\nmotives. This constitutes a step in the direction of Kim's conjecture.\n", "title": "Rational motivic path spaces and Kim's relative unipotent section conjecture" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5229
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In this paper we study the frequentist convergence rate for the Latent\nDirichlet Allocation (Blei et al., 2003) topic models. We show that the maximum\nlikelihood estimator converges to one of the finitely many equivalent\nparameters in Wasserstein's distance metric at a rate of $n^{-1/4}$ without\nassuming separability or non-degeneracy of the underlying topics and/or the\nexistence of more than three words per document, thus generalizing the previous\nworks of Anandkumar et al. (2012, 2014) from an information-theoretical\nperspective. We also show that the $n^{-1/4}$ convergence rate is optimal in\nthe worst case.\n", "title": "Convergence Rates of Latent Topic Models Under Relaxed Identifiability Conditions" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science", "Statistics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 5230
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In this paper we consider the problem of clustering collections of very short\ntexts using subspace clustering. This problem arises in many applications such\nas product categorisation, fraud detection, and sentiment analysis. The main\nchallenge lies in the fact that the vectorial representation of short texts is\nboth high-dimensional, due to the large number of unique terms in the corpus,\nand extremely sparse, as each text contains a very small number of words with\nno repetition. We propose a new, simple subspace clustering algorithm that\nrelies on linear algebra to cluster such datasets. Experimental results on\nidentifying product categories from product names obtained from the US Amazon\nwebsite indicate that the algorithm can be competitive against state-of-the-art\nclustering algorithms.\n", "title": "Subspace Clustering of Very Sparse High-Dimensional Data" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5231
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " An important problem in phylogenetics is the construction of phylogenetic\ntrees. One way to approach this problem, known as the supertree method,\ninvolves inferring a phylogenetic tree with leaves consisting of a set $X$ of\nspecies from a collection of trees, each having leaf-set some subset of $X$. In\nthe 1980's characterizations, certain inference rules were given for when a\ncollection of 4-leaved trees, one for each 4-element subset of $X$, can all be\nsimultaneously displayed by a single supertree with leaf-set $X$. Recently, it\nhas become of interest to extend such results to phylogenetic networks. These\nare a generalization of phylogenetic trees which can be used to represent\nreticulate evolution (where species can come together to form a new species).\nIt has been shown that a certain type of phylogenetic network, called a level-1\nnetwork, can essentially be constructed from 4-leaved trees. However, the\nproblem of providing appropriate inference rules for such networks remains\nunresolved. Here we show that by considering 4-leaved networks, called\nquarnets, as opposed to 4-leaved trees, it is possible to provide such rules.\nIn particular, we show that these rules can be used to characterize when a\ncollection of quarnets, one for each 4-element subset of $X$, can all be\nsimultaneously displayed by a level-1 network with leaf-set $X$. The rules are\nan intriguing mixture of tree inference rules, and an inference rule for\nbuilding up a cyclic ordering of $X$ from orderings on subsets of $X$ of size\n4. This opens up several new directions of research for inferring phylogenetic\nnetworks from smaller ones, which could yield new algorithms for solving the\nsupernetwork problem in phylogenetics.\n", "title": "Quarnet inference rules for level-1 networks" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5232
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Measurements of 21 cm line fluctuations from minihalos have been discussed as\na powerful probe of a wide range of cosmological models. However, previous\nstudies have taken into account only the pixel variance, where contributions\nfrom different scales are integrated. In order to sort out information from\ndifferent scales, we formulate the angular power spectrum of 21 cm line\nfluctuations from minihalos at different redshifts, which can enhance the\nconstraining power enormously. By adopting this formalism, we investigate\nexpected constraints on parameters characterizing the primordial power\nspectrum, particularly focusing on the spectral index $n_s$ and its runnings\n$\\alpha_s$ and $\\beta_s$. We show that future observations of 21 cm line\nfluctuations from minihalos, in combination with cosmic microwave background,\ncan potentially probe these runnings as $\\alpha_s \\sim {\\cal O}(10^{-3})$ and\n$\\beta_s \\sim {\\cal O}(10^{-4})$. Its implications to the test of inflationary\nmodels are also discussed.\n", "title": "21 cm Angular Power Spectrum from Minihalos as a Probe of Primordial Spectral Runnings" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5233
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We utilise a series of high-resolution cosmological zoom simulations of\ngalaxy formation to investigate the relationship between the ultraviolet (UV)\nslope, beta, and the ratio of the infrared luminosity to UV luminosity (IRX) in\nthe spectral energy distributions (SEDs) of galaxies. We employ dust radiative\ntransfer calculations in which the SEDs of the stars in galaxies propagate\nthrough the dusty interstellar medium. Our main goals are to understand the\norigin of, and scatter in the IRX-beta relation; to assess the efficacy of\nsimplified stellar population synthesis screen models in capturing the\nessential physics in the IRX-beta relation; and to understand systematic\ndeviations from the canonical local IRX-beta relations in particular\npopulations of high-redshift galaxies. Our main results follow. Galaxies that\nhave young stellar populations with relatively cospatial UV and IR emitting\nregions and a Milky Way-like extinction curve fall on or near the standard\nMeurer relation. This behaviour is well captured by simplified screen models.\nScatter in the IRX-beta relation is dominated by three major effects: (i) older\nstellar populations drive galaxies below the relations defined for local\nstarbursts due to a reddening of their intrinsic UV SEDs; (ii) complex\ngeometries in high-z heavily star forming galaxies drive galaxies toward blue\nUV slopes owing to optically thin UV sightlines; (iii) shallow extinction\ncurves drive galaxies downward in the IRX-beta plane due to lowered NUV/FUV\nextinction ratios. We use these features of the UV slopes of galaxies to derive\na fitting relation that reasonably collapses the scatter back toward the\ncanonical local relation. Finally, we use these results to develop an\nunderstanding for the location of two particularly enigmatic populations of\ngalaxies in the IRX-beta plane: z~2-4 dusty star forming galaxies, and z>5 star\nforming galaxies.\n", "title": "The IRX-Beta Dust Attenuation Relation in Cosmological Galaxy Formation Simulations" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5234
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " There are two parts of this paper. First, we discovered an explicit formula\nfor the complex Hessian of the weighted log-Bergman kernel on a parallelogram\ndomain, and utilised this formula to give a new proof about the strict\nconvexity of the Mabuchi functional along a smooth geodesic. Second, when a\nC^{1,1}-geodesic connects two non-degenerate energy minimizers, we also proved\nthis strict convexity, by showing that such a geodesic must be non-degenerate\nand smooth.\n", "title": "Strict convexity of the Mabuchi functional for energy minimizers" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5235
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In this paper, we bring anonymous variables into imperative languages.\nAnonymous variables represent don't-care values and have proven useful in logic\nprogramming. To bring the same level of benefits into imperative languages, we\ndescribe an extension to C wth anonymous variables.\n", "title": "Anonymous Variables in Imperative Languages" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science" ]
annotation_agent: null
multi_label: true
explanation: null
id: 5236
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The cosmic 21 cm signal is set to revolutionise our understanding of the\nearly Universe, allowing us to probe the 3D temperature and ionisation\nstructure of the intergalactic medium (IGM). It will open a window onto the\nunseen first galaxies, showing us how their UV and X-ray photons drove the\ncosmic milestones of the epoch of reionisation (EoR) and epoch of heating\n(EoH). To facilitate parameter inference from the 21 cm signal, we previously\ndeveloped 21CMMC: a Monte Carlo Markov Chain sampler of 3D EoR simulations.\nHere we extend 21CMMC to include simultaneous modelling of the EoH, resulting\nin a complete Bayesian inference framework for the astrophysics dominating the\nobservable epochs of the cosmic 21 cm signal. We demonstrate that second\ngeneration interferometers, the Hydrogen Epoch of Reionisation Array (HERA) and\nSquare Kilometre Array (SKA) will be able to constrain ionising and X-ray\nsource properties of the first galaxies with a fractional precision of order\n$\\sim1$-10 per cent (1$\\sigma$). The ionisation history of the Universe can be\nconstrained to within a few percent. Using our extended framework, we quantify\nthe bias in EoR parameter recovery incurred by the common simplification of a\nsaturated spin temperature in the IGM. Depending on the extent of overlap\nbetween the EoR and EoH, the recovered astrophysical parameters can be biased\nby $\\sim3-10\\sigma$.\n", "title": "Simultaneously constraining the astrophysics of reionisation and the epoch of heating with 21CMMC" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5237
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Standard penalized methods of variable selection and parameter estimation\nrely on the magnitude of coefficient estimates to decide which variables to\ninclude in the final model. However, coefficient estimates are unreliable when\nthe design matrix is collinear. To overcome this challenge an entirely new\nperspective on variable selection is presented within a generalized fiducial\ninference framework. This new procedure is able to effectively account for\nlinear dependencies among subsets of covariates in a high-dimensional setting\nwhere $p$ can grow almost exponentially in $n$, as well as in the classical\nsetting where $p \\le n$. It is shown that the procedure very naturally assigns\nsmall probabilities to subsets of covariates which include redundancies by way\nof explicit $L_{0}$ minimization. Furthermore, with a typical sparsity\nassumption, it is shown that the proposed method is consistent in the sense\nthat the probability of the true sparse subset of covariates converges in\nprobability to 1 as $n \\to \\infty$, or as $n \\to \\infty$ and $p \\to \\infty$.\nVery reasonable conditions are needed, and little restriction is placed on the\nclass of possible subsets of covariates to achieve this consistency result.\n", "title": "Non-penalized variable selection in high-dimensional linear model settings via generalized fiducial inference" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5238
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " With the development of speech synthesis techniques, automatic speaker\nverification systems face the serious challenge of spoofing attack. In order to\nimprove the reliability of speaker verification systems, we develop a new\nfilter bank based cepstral feature, deep neural network filter bank cepstral\ncoefficients (DNN-FBCC), to distinguish between natural and spoofed speech. The\ndeep neural network filter bank is automatically generated by training a filter\nbank neural network (FBNN) using natural and synthetic speech. By adding\nrestrictions on the training rules, the learned weight matrix of FBNN is\nband-limited and sorted by frequency, similar to the normal filter bank. Unlike\nthe manually designed filter bank, the learned filter bank has different filter\nshapes in different channels, which can capture the differences between natural\nand synthetic speech more effectively. The experimental results on the ASVspoof\n{2015} database show that the Gaussian mixture model maximum-likelihood\n(GMM-ML) classifier trained by the new feature performs better than the\nstate-of-the-art linear frequency cepstral coefficients (LFCC) based\nclassifier, especially on detecting unknown attacks.\n", "title": "DNN Filter Bank Cepstral Coefficients for Spoofing Detection" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5239
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The present study is concerned with the following Schrödinger-Poisson\nsystem involving critical nonlocal term with general nonlinearity: $$ \\left\\{\n\\begin{array}{ll} -\\Delta u+V(x)u- \\phi |u|^3u= f(u), & x\\in\\mathbb{R}^3,\n-\\Delta \\phi= |u|^5, & x\\in\\mathbb{R}^3,\\\\ \\end{array} \\right. $$ Under certain\nassumptions on non-constant $V(x)$, the existence of a positive least energy\nsolution is obtained by using some new analytical skills and Pohožaev type\nmanifold. In particular, the Ambrosetti-Rabinowitz type condition or\nmonotonicity assumption on the nonlinearity is not necessary.\n", "title": "The existence of positive least energy solutions for a class of Schrodinger-Poisson systems involving critical nonlocal term with general nonlinearity" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5240
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We introduce an up-down coloring of a virtual-link diagram. The\ncolorabilities give a lower bound of the minimum number of Reidemeister moves\nof type II which are needed between two 2-component virtual-link diagrams. By\nusing the notion of a quandle cocycle invariant, we determine the necessity of\nReidemeister moves of type II for a pair of diagrams of the trivial\nvirtual-knot. This implies that for any virtual-knot diagram $D$, there exists\na diagram $D'$ representing the same virtual-knot such that any sequence of\ngeneralized Reidemeister moves between them includes at least one Reidemeister\nmove of type II.\n", "title": "Up-down colorings of virtual-link diagrams and the necessity of Reidemeister moves of type II" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5241
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We present in this paper our work on comparison between Statistical Machine\nTranslation (SMT) and Rule-based machine translation for translation from\nMarathi to Hindi. Rule Based systems although robust take lots of time to\nbuild. On the other hand statistical machine translation systems are easier to\ncreate, maintain and improve upon. We describe the development of a basic\nMarathi-Hindi SMT system and evaluate its performance. Through a detailed error\nanalysis, we, point out the relative strengths and weaknesses of both systems.\nEffectively, we shall see that even with a small amount of training corpus a\nstatistical machine translation system has many advantages for high quality\ndomain specific machine translation over that of a rule-based counterpart.\n", "title": "Comparison of SMT and RBMT; The Requirement of Hybridization for Marathi-Hindi MT" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5242
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The emerging field at the intersection of quantitative biology, network\nmodeling, and control theory has enjoyed significant progress in recent years.\nThis Special Issue brings together a selection of papers on complementary\napproaches to observe, identify, and control biological and biologically\ninspired networks. These approaches advance the state of the art in the field\nby addressing challenges common to many such networks, including high\ndimensionality, strong nonlinearity, uncertainty, and limited opportunities for\nobservation and intervention. Because these challenges are not unique to\nbiological systems, it is expected that many of the results presented in these\ncontributions will also find applications in other domains, including physical,\nsocial, and technological networks.\n", "title": "Introduction to the Special Issue on Approaches to Control Biological and Biologically Inspired Networks" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5243
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The new era of the Web is known as the semantic Web or the Web of data. The\nsemantic Web depends on ontologies that are seen as one of its pillars. The\nbigger these ontologies, the greater their exploitation. However, when these\nontologies become too big other problems may appear, such as the complexity to\ncharge big files in memory, the time it needs to download such files and\nespecially the time it needs to make reasoning on them. We discuss in this\npaper approaches for segmenting such big Web ontologies as well as its\nusefulness. The segmentation method extracts from an existing ontology a\nsegment that represents a layer or a generation in the existing ontology; i.e.\na horizontally extraction. The extracted segment should be itself an ontology.\n", "title": "Towards Classification of Web ontologies using the Horizontal and Vertical Segmentation" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science" ]
annotation_agent: null
multi_label: true
explanation: null
id: 5244
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In their celebrated paper \"On Siegel's Lemma\", Bombieri and Vaaler found an\nupper bound on the height of integer solutions of systems of linear Diophantine\nequations. Calculating the bound directly, however, requires exponential time.\nIn this paper, we present the bound in a different form that can be computed in\npolynomial time. We also give an elementary (and arguably simpler) proof for\nthe bound.\n", "title": "An Efficient Version of the Bombieri-Vaaler Lemma" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science", "Mathematics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 5245
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The rise of connected personal devices together with privacy concerns call\nfor machine learning algorithms capable of leveraging the data of a large\nnumber of agents to learn personalized models under strong privacy\nrequirements. In this paper, we introduce an efficient algorithm to address the\nabove problem in a fully decentralized (peer-to-peer) and asynchronous fashion,\nwith provable convergence rate. We show how to make the algorithm\ndifferentially private to protect against the disclosure of information about\nthe personal datasets, and formally analyze the trade-off between utility and\nprivacy. Our experiments show that our approach dramatically outperforms\nprevious work in the non-private case, and that under privacy constraints, we\ncan significantly improve over models learned in isolation.\n", "title": "Personalized and Private Peer-to-Peer Machine Learning" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5246
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The implications of considering interaction between Chaplygin gas and a\nbarotropic fluid with constant equation of state have been explored. The unique\nfeature of this work is that assuming an interaction $Q \\propto H\\rho_d$,\nanalytic expressions for the energy density and pressure have been derived in\nterms of the Hypergeometric $_2\\text{F}_1$ function. It is worthwhile to\nmention that an interacting Chaplygin gas model was considered in 2006 by Zhang\nand Zhu, nevertheless, analytic solutions for the continuity equations could\nnot be determined assuming an interaction proportional to $H$ times the sum of\nthe energy densities of Chaplygin gas and dust. Our model can successfully\nexplain the transition from the early decelerating phase to the present phase\nof cosmic acceleration. Arbitrary choice of the free parameters of our model\nthrough trial and error show at recent observational data strongly favors\n$w_m=0$ and $w_m=-\\frac{1}{3}$ over the $w_m=\\frac{1}{3}$ case. Interestingly,\nthe present model also incorporates the transition of dark energy into the\nphantom domain, however, future deceleration is forbidden.\n", "title": "Interacting Chaplygin gas revisited" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5247
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We present GALARIO, a computational library that exploits the power of modern\ngraphical processing units (GPUs) to accelerate the analysis of observations\nfrom radio interferometers like ALMA or the VLA. GALARIO speeds up the\ncomputation of synthetic visibilities from a generic 2D model image or a radial\nbrightness profile (for axisymmetric sources). On a GPU, GALARIO is 150 faster\nthan standard Python and 10 times faster than serial C++ code on a CPU. Highly\nmodular, easy to use and to adopt in existing code, GALARIO comes as two\ncompiled libraries, one for Nvidia GPUs and one for multicore CPUs, where both\nhave the same functions with identical interfaces. GALARIO comes with Python\nbindings but can also be directly used in C or C++. The versatility and the\nspeed of GALARIO open new analysis pathways that otherwise would be\nprohibitively time consuming, e.g. fitting high resolution observations of\nlarge number of objects, or entire spectral cubes of molecular gas emission. It\nis a general tool that can be applied to any field that uses radio\ninterferometer observations. The source code is available online at\nthis https URL under the open source GNU Lesser General\nPublic License v3.\n", "title": "GALARIO: a GPU Accelerated Library for Analysing Radio Interferometer Observations" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 5248
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Geoelectrical techniques are widely used to monitor groundwater processes,\nwhile surprisingly few studies have considered audio (AMT) and radio (RMT)\nmagnetotellurics for such purposes. In this numerical investigation, we analyze\nto what extent inversion results based on AMT and RMT monitoring data can be\nimproved by (1) time-lapse difference inversion; (2) incorporation of\nstatistical information about the expected model update (i.e., the model\nregularization is based on a geostatistical model); (3) using alternative model\nnorms to quantify temporal changes (i.e., approximations of l1 and Cauchy norms\nusing iteratively reweighted least-squares), (4) constraining model updates to\npredefined ranges (i.e., using Lagrange Multipliers to only allow either\nincreases or decreases of electrical resistivity with respect to background\nconditions). To do so, we consider a simple illustrative model and a more\nrealistic test case related to seawater intrusion. The results are encouraging\nand show significant improvements when using time-lapse difference inversion\nwith non l2 model norms. Artifacts that may arise when imposing compactness of\nregions with temporal changes can be suppressed through inequality constraints\nto yield models without oscillations outside the true region of temporal\nchanges. Based on these results, we recommend approximate l1-norm solutions as\nthey can resolve both sharp and smooth interfaces within the same model.\n", "title": "Focused time-lapse inversion of radio and audio magnetotelluric data" }
null
null
null
null
true
null
5249
null
Default
null
null
null
{ "abstract": " We study the use of randomized value functions to guide deep exploration in\nreinforcement learning. This offers an elegant means for synthesizing\nstatistically and computationally efficient exploration with common practical\napproaches to value function learning. We present several reinforcement\nlearning algorithms that leverage randomized value functions and demonstrate\ntheir efficacy through computational studies. We also prove a regret bound that\nestablishes statistical efficiency with a tabular representation.\n", "title": "Deep Exploration via Randomized Value Functions" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
5250
null
Validated
null
null
null
{ "abstract": " We investigate the generalizability of deep learning based on the sensitivity\nto input perturbation. We hypothesize that the high sensitivity to the\nperturbation of data degrades the performance on it. To reduce the sensitivity\nto perturbation, we propose a simple and effective regularization method,\nreferred to as spectral norm regularization, which penalizes the high spectral\nnorm of weight matrices in neural networks. We provide supportive evidence for\nthe abovementioned hypothesis by experimentally confirming that the models\ntrained using spectral norm regularization exhibit better generalizability than\nother baseline methods.\n", "title": "Spectral Norm Regularization for Improving the Generalizability of Deep Learning" }
null
null
null
null
true
null
5251
null
Default
null
null
null
{ "abstract": " In this work we use the semi-empirical atmospheric modeling method to obtain\nthe chro-mospheric temperature, pressure, density and magnetic field\ndistribution versus height in the K2 primary component of the RS CVn binary\nsystem HR 7428. While temperature, pressure, density are the standard output of\nthe semi-empirical modeling technique, the chromospheric magnetic field\nestimation versus height comes from considering the possibility of not\nim-posing hydrostatic equilibrium in the semi-empirical computation. The\nstability of the best non-hydrostatic equilibrium model, implies the presence\nof an additive (toward the center of the star) pressure, that decreases in\nstrength from the base of the chromosphere toward the outer layers.\nInterpreting the additive pressure as magnetic pressure we estimated a magnetic\nfield intensity of about 500 gauss at the base of the chromosphere.\n", "title": "Estimating the chromospheric magnetic field from a revised NLTE modeling: the case of HR7428" }
null
null
[ "Physics" ]
null
true
null
5252
null
Validated
null
null
null
{ "abstract": " In this paper, we present a very accurate approximation for gamma function:\n\\begin{equation*} \\Gamma \\left( x+1\\right) \\thicksim \\sqrt{2\\pi x}\\left(\n\\dfrac{x}{e}\\right) ^{x}\\left( x\\sinh \\frac{1}{x}\\right) ^{x/2}\\exp \\left(\n\\frac{7}{324}\\frac{1}{ x^{3}\\left( 35x^{2}+33\\right) }\\right) =W_{2}\\left(\nx\\right) \\end{equation*} as $x\\rightarrow \\infty $, and prove that the function\n$x\\mapsto \\ln \\Gamma \\left( x+1\\right) -\\ln W_{2}\\left( x\\right) $ is strictly\ndecreasing and convex from $\\left( 1,\\infty \\right) $ onto $\\left( 0,\\beta\n\\right) $, where \\begin{equation*} \\beta =\\frac{22\\,025}{22\\,032}-\\ln\n\\sqrt{2\\pi \\sinh 1}\\approx 0.00002407. \\end{equation*}\n", "title": "An accurate approximation formula for gamma function" }
null
null
null
null
true
null
5253
null
Default
null
null
null
{ "abstract": " A central theme in classical algorithms for the reconstruction of\ndiscontinuous functions from observational data is perimeter regularization. On\nthe other hand, sparse or noisy data often demands a probabilistic approach to\nthe reconstruction of images, to enable uncertainty quantification; the\nBayesian approach to inversion is a natural framework in which to carry this\nout. The link between Bayesian inversion methods and perimeter regularization,\nhowever, is not fully understood. In this paper two links are studied: (i) the\nMAP objective function of a suitably chosen phase-field Bayesian approach is\nshown to be closely related to a least squares plus perimeter regularization\nobjective; (ii) sample paths of a suitably chosen Bayesian level set\nformulation are shown to possess finite perimeter and to have the ability to\nlearn about the true perimeter. Furthermore, the level set approach is shown to\nlead to faster algorithms for uncertainty quantification than the phase field\napproach.\n", "title": "Reconciling Bayesian and Total Variation Methods for Binary Inversion" }
null
null
null
null
true
null
5254
null
Default
null
null
null
{ "abstract": " This paper investigates a novel task of generating texture images from\nperceptual descriptions. Previous work on texture generation focused on either\nsynthesis from examples or generation from procedural models. Generating\ntextures from perceptual attributes have not been well studied yet. Meanwhile,\nperceptual attributes, such as directionality, regularity and roughness are\nimportant factors for human observers to describe a texture. In this paper, we\npropose a joint deep network model that combines adversarial training and\nperceptual feature regression for texture generation, while only random noise\nand user-defined perceptual attributes are required as input. In this model, a\npreliminary trained convolutional neural network is essentially integrated with\nthe adversarial framework, which can drive the generated textures to possess\ngiven perceptual attributes. An important aspect of the proposed model is that,\nif we change one of the input perceptual features, the corresponding appearance\nof the generated textures will also be changed. We design several experiments\nto validate the effectiveness of the proposed method. The results show that the\nproposed method can produce high quality texture images with desired perceptual\nproperties.\n", "title": "Perception Driven Texture Generation" }
null
null
null
null
true
null
5255
null
Default
null
null
null
{ "abstract": " Follicle-stimulating hormone (FSH) and luteinizing hormone (LH) play\nessential roles in animal reproduction. They exert their function through\nbinding to their cognate receptors, which belong to the large family of G\nprotein-coupled receptors (GPCRs). This recognition at the plasma membrane\ntriggers a plethora of cellular events, whose processing and integration\nultimately lead to an adapted biological response. Understanding the nature and\nthe kinetics of these events is essential for innovative approaches in drug\ndiscovery. The study and manipulation of such complex systems requires the use\nof computational modeling approaches combined with robust in vitro functional\nassays for calibration and validation. Modeling brings a detailed understanding\nof the system and can also be used to understand why existing drugs do not work\nas well as expected, and how to design more efficient ones.\n", "title": "Computational modeling approaches in gonadotropin signaling" }
null
null
null
null
true
null
5256
null
Default
null
null
null
{ "abstract": " While going deeper has been witnessed to improve the performance of\nconvolutional neural networks (CNN), going smaller for CNN has received\nincreasing attention recently due to its attractiveness for mobile/embedded\napplications. It remains an active and important topic how to design a small\nnetwork while retaining the performance of large and deep CNNs (e.g., Inception\nNets, ResNets). Albeit there are already intensive studies on compressing the\nsize of CNNs, the considerable drop of performance is still a key concern in\nmany designs. This paper addresses this concern with several new contributions.\nFirst, we propose a simple yet powerful method for compressing the size of deep\nCNNs based on parameter binarization. The striking difference from most\nprevious work on parameter binarization/quantization lies at different\ntreatments of $1\\times 1$ convolutions and $k\\times k$ convolutions ($k>1$),\nwhere we only binarize $k\\times k$ convolutions into binary patterns. The\nresulting networks are referred to as pattern networks. By doing this, we show\nthat previous deep CNNs such as GoogLeNet and Inception-type Nets can be\ncompressed dramatically with marginal drop in performance. Second, in light of\nthe different functionalities of $1\\times 1$ (data projection/transformation)\nand $k\\times k$ convolutions (pattern extraction), we propose a new block\nstructure codenamed the pattern residual block that adds transformed feature\nmaps generated by $1\\times 1$ convolutions to the pattern feature maps\ngenerated by $k\\times k$ convolutions, based on which we design a small network\nwith $\\sim 1$ million parameters. Combining with our parameter binarization, we\nachieve better performance on ImageNet than using similar sized networks\nincluding recently released Google MobileNets.\n", "title": "SEP-Nets: Small and Effective Pattern Networks" }
null
null
null
null
true
null
5257
null
Default
null
null
null
{ "abstract": " Atom interferometers employing optical cavities to enhance the beam splitter\npulses promise significant advances in science and technology, notably for\nfuture gravitational wave detectors. Long cavities, on the scale of hundreds of\nmeters, have been proposed in experiments aiming to observe gravitational waves\nwith frequencies below 1 Hz, where laser interferometers, such as LIGO, have\npoor sensitivity. Alternatively, short cavities have also been proposed for\nenhancing the sensitivity of more portable atom interferometers. We explore the\nfundamental limitations of two-mirror cavities for atomic beam splitting, and\nestablish upper bounds on the temperature of the atomic ensemble as a function\nof cavity length and three design parameters: the cavity g-factor, the\nbandwidth, and the optical suppression factor of the first and second order\nspatial modes. A lower bound to the cavity bandwidth is found which avoids\nelongation of the interaction time and maximizes power enhancement. An upper\nlimit to cavity length is found for symmetric two-mirror cavities, restricting\nthe practicality of long baseline detectors. For shorter cavities, an upper\nlimit on the beam size was derived from the geometrical stability of the\ncavity. These findings aim to aid the design of current and future\ncavity-assisted atom interferometers.\n", "title": "Fundamental Limitations of Cavity-assisted Atom Interferometry" }
null
null
null
null
true
null
5258
null
Default
null
null
null
{ "abstract": " Rule-based modelling allows to represent molecular interactions in a compact\nand natural way. The underlying molecular dynamics, by the laws of stochastic\nchemical kinetics, behaves as a continuous-time Markov chain. However, this\nMarkov chain enumerates all possible reaction mixtures, rendering the analysis\nof the chain computationally demanding and often prohibitive in practice. We\nhere describe how it is possible to efficiently find a smaller, aggregate\nchain, which preserves certain properties of the original one. Formal methods\nand lumpability notions are used to define algorithms for automated and\nefficient construction of such smaller chains (without ever constructing the\noriginal ones). We here illustrate the method on an example and we discuss the\napplicability of the method in the context of modelling large signalling\npathways.\n", "title": "Markov chain aggregation and its application to rule-based modelling" }
null
null
[ "Computer Science", "Quantitative Biology" ]
null
true
null
5259
null
Validated
null
null
null
{ "abstract": " In previous work, we introduced a method for modeling a configuration of\nobjects in 2D and 3D images using a mathematical \"medial/skeletal linking\nstructure.\" In this paper, we show how these structures allow us to capture\npositional properties of a multi-object configuration in addition to the shape\nproperties of the individual objects. In particular, we introduce numerical\ninvariants for positional properties which measure the closeness of neighboring\nobjects, including identifying the parts of the objects which are close, and\nthe \"relative significance\" of objects compared with the other objects in the\nconfiguration. Using these numerical measures, we introduce a hierarchical\nordering and relations between the individual objects, and quantitative\ncriteria for identifying subconfigurations. In addition, the invariants provide\na \"proximity matrix\" which yields a unique set of weightings measuring overall\nproximity of objects in the configuration. Furthermore, we show that these\ninvariants, which are volumetrically defined and involve external regions, may\nbe computed via integral formulas in terms of \"skeletal linking integrals\"\ndefined on the internal skeletal structures of the objects.\n", "title": "Shape and Positional Geometry of Multi-Object Configurations" }
null
null
null
null
true
null
5260
null
Default
null
null
null
{ "abstract": " Through the combination of transmission electron microscopy analysis of the\ndeformed microstructure and molecular dynamics computer simulations of the\ndeformation processes, the mechanisms of plastic strain recovery in bulk AgCu\neutectic with either incoherent twin or cube-on-cube interfaces between the Ag\nand Cu layers and a bilayer thickness of 500 nm have been revealed. The\ncharacter of the incoherent twin interfaces changed uniquely after dynamic\ncompressive loading for samples that exhibited plastic strain recovery and was\nfound to drive the recovery, which is due to dislocation retraction and\nrearrangement of the interfaces. The magnitude of the recovery decreased with\nincreasing strain as dislocation tangles and dislocation cell structures\nformed. No change in the orientation relationship was found at cube-on-cube\ninterfaces and these exhibited a lesser amount of plastic strain recovery in\nthe simulations and none experimentally in samples with larger layer\nthicknesses with predominantly cube-on-cube interfaces. Molecular dynamics\ncomputer simulations verified the importance of the change in the incoherent\ntwin interface structure as the driving force for dislocation annihilation at\nthe interfaces and the plastic strain recovery.\n", "title": "Interface mediated mechanisms of plastic strain recovery in AgCu alloy" }
null
null
null
null
true
null
5261
null
Default
null
null
null
{ "abstract": " The fusion of humans and technology takes us into an unknown world described\nby some authors as populated by quasi living species that would relegate us -\nordinary humans - to the rank of alienated agents emptied of our identity and\nconsciousness. I argue instead that our world is woven of simple though\ninvisible perspectives which - if we become aware of them - may renew our\nability for making judgments and enhance our autonomy. I became aware of these\ninvisible perspectives by observing and practicing a real time collective net\nart experiment called the Poietic Generator. As the perspectives unveiled by\nthis experiment are invisible I have called them anoptical perspectives i.e.\nnon-optical by analogy with the optical perspective of the Renaissance. Later I\nhave come to realize that these perspectives obtain their cognitive structure\nfrom the political origins of our language. Accordingly it is possible to\ndefine certain cognitive criteria for assessing the legitimacy of the anoptical\nperspectives just like some artists and architects of the Renaissance defined\nthe geometrical criteria that established the legitimacy of the optical one.\n", "title": "Refounding legitimacy towards Aethogenesis" }
null
null
null
null
true
null
5262
null
Default
null
null
null
{ "abstract": " The time-dependent generator coordinate method (TDGCM) is a powerful method\nto study the large amplitude collective motion of quantum many-body systems\nsuch as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the\nTDGCM leads to a local, time-dependent Schrödinger equation in a\nmulti-dimensional collective space. In this paper, we present the version 2.0\nof the code FELIX that solves the collective Schrödinger equation in a finite\nelement basis. This new version features: (i) the ability to solve a\ngeneralized TDGCM+GOA equation with a metric term in the collective\nHamiltonian, (ii) support for new kinds of finite elements and different types\nof quadrature to compute the discretized Hamiltonian and overlap matrices,\n(iii) the possibility to leverage the spectral element scheme, (iv) an explicit\nKrylov approximation of the time propagator for time integration instead of the\nimplicit Crank-Nicolson method implemented in the first version, (v) an\nentirely redesigned workflow. We benchmark this release on an analytic problem\nas well as on realistic two-dimensional calculations of the low-energy fission\nof Pu240 and Fm256. Low to moderate numerical precision calculations are most\nefficiently performed with simplex elements with a degree 2 polynomial basis.\nHigher precision calculations should instead use the spectral element method\nwith a degree 4 polynomial basis. We emphasize that in a realistic calculation\nof fission mass distributions of Pu240, FELIX-2.0 is about 20 times faster than\nits previous release (within a numerical precision of a few percents).\n", "title": "FELIX-2.0: New version of the finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation" }
null
null
null
null
true
null
5263
null
Default
null
null
null
{ "abstract": " In this note we show that all small solutions in the energy space of the\ngeneralized 1D Boussinesq equation must decay to zero as time tends to\ninfinity, strongly on slightly proper subsets of the space-time light cone. Our\nresult does not require any assumption on the power of the nonlinearity,\nworking even for the supercritical range of scattering. No parity assumption on\nthe initial data is needed.\n", "title": "Scattering in the energy space for Boussinesq equations" }
null
null
null
null
true
null
5264
null
Default
null
null
null
{ "abstract": " The digital economy is a highly relevant item on the European Union's policy\nagenda. Cross-border internet purchases are part of the digital economy, but\ntheir total value can currently not be accurately measured or estimated.\nTraditional approaches based on consumer surveys or business surveys are shown\nto be inadequate for this purpose, due to language bias and sampling issues,\nrespectively. We address both problems by proposing a novel approach based on\nsupply-side data, namely tax returns. The proposed data-driven record-linkage\ntechniques and machine learning algorithms utilize two additional open data\nsources: European business registers and internet data. Our main finding is\nthat the value of total cross-border internet purchases within the European\nUnion by Dutch consumers was over EUR 1.3 billion in 2016. This is more than 6\ntimes as high as current estimates. Our finding motivates the implementation of\nthe proposed methodology in other EU member states. Ultimately, it could lead\nto more accurate estimates of cross-border internet purchases within the entire\nEuropean Union.\n", "title": "A Data-Driven Supply-Side Approach for Measuring Cross-Border Internet Purchases" }
null
null
null
null
true
null
5265
null
Default
null
null
null
{ "abstract": " Purpose: Magnetic Resonance Fingerprinting (MRF) is a relatively new approach\nthat provides quantitative MRI measures using randomized acquisition.\nExtraction of physical quantitative tissue parameters is performed off-line,\nwithout the need of patient presence, based on acquisition with varying\nparameters and a dictionary generated according to the Bloch equation\nsimulations. MRF uses hundreds of radio frequency (RF) excitation pulses for\nacquisition, and therefore a high undersampling ratio in the sampling domain\n(k-space) is required for reasonable scanning time. This undersampling causes\nspatial artifacts that hamper the ability to accurately estimate the tissue's\nquantitative values. In this work, we introduce a new approach for quantitative\nMRI using MRF, called magnetic resonance Fingerprinting with LOw Rank (FLOR).\nMethods: We exploit the low rank property of the concatenated temporal\nimaging contrasts, on top of the fact that the MRF signal is sparsely\nrepresented in the generated dictionary domain. We present an iterative scheme\nthat consists of a gradient step followed by a low rank projection using the\nsingular value decomposition.\nResults: Experimental results consist of retrospective sampling, that allows\ncomparison to a well defined reference, and prospective sampling that shows the\nperformance of FLOR for a real-data sampling scenario. Both experiments\ndemonstrate improved parameter accuracy compared to other compressed-sensing\nand low-rank based methods for MRF at 5% and 9% sampling ratios, for the\nretrospective and prospective experiments, respectively.\nConclusions: We have shown through retrospective and prospective experiments\nthat by exploiting the low rank nature of the MRF signal, FLOR recovers the MRF\ntemporal undersampled images and provides more accurate parameter maps compared\nto previous iterative methods.\n", "title": "Low Rank Magnetic Resonance Fingerprinting" }
null
null
null
null
true
null
5266
null
Default
null
null
null
{ "abstract": " In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH),\nas well as its practical variant SARAH+, as a novel approach to the finite-sum\nminimization problems. Different from the vanilla SGD and other modern\nstochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple\nrecursive framework for updating stochastic gradient estimates; when comparing\nto SAG/SAGA, SARAH does not require a storage of past gradients. The linear\nconvergence rate of SARAH is proven under strong convexity assumption. We also\nprove a linear convergence rate (in the strongly convex case) for an inner loop\nof SARAH, the property that SVRG does not possess. Numerical experiments\ndemonstrate the efficiency of our algorithm.\n", "title": "SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient" }
null
null
null
null
true
null
5267
null
Default
null
null
null
{ "abstract": " A crucial role in the Nyman-Beurling-Báez-Duarte approach to the Riemann\nHypothesis is played by the distance \\[\nd_N^2:=\\inf_{A_N}\\frac{1}{2\\pi}\\int_{-\\infty}^\\infty\\left|1-\\zeta\nA_N\\left(\\frac{1}{2}+it\\right)\\right|^2\\frac{dt}{\\frac{1}{4}+t^2}\\:, \\] where\nthe infimum is over all Dirichlet polynomials\n$$A_N(s)=\\sum_{n=1}^{N}\\frac{a_n}{n^s}$$ of length $N$. In this paper we\ninvestigate $d_N^2$ under the assumption that the Riemann zeta function has\nfour non-trivial zeros off the critical line. Thus we obtain a criterion for\nthe non validity of the Riemann Hypothesis.\n", "title": "A criterion related to the Riemann Hypothesis" }
null
null
null
null
true
null
5268
null
Default
null
null
null
{ "abstract": " We propose non-stationary spectral kernels for Gaussian process regression.\nWe propose to model the spectral density of a non-stationary kernel function as\na mixture of input-dependent Gaussian process frequency density surfaces. We\nsolve the generalised Fourier transform with such a model, and present a family\nof non-stationary and non-monotonic kernels that can learn input-dependent and\npotentially long-range, non-monotonic covariances between inputs. We derive\nefficient inference using model whitening and marginalized posterior, and show\nwith case studies that these kernels are necessary when modelling even rather\nsimple time series, image or geospatial data with non-stationary\ncharacteristics.\n", "title": "Non-Stationary Spectral Kernels" }
null
null
null
null
true
null
5269
null
Default
null
null
null
{ "abstract": " One of key 5G scenarios is that device-to-device (D2D) and massive\nmultiple-input multiple-output (MIMO) will be co-existed. However, interference\nin the uplink D2D underlaid massive MIMO cellular networks needs to be\ncoordinated, due to the vast cellular and D2D transmissions. To this end, this\npaper introduces a spatially dynamic power control solution for mitigating the\ncellular-to-D2D and D2D-to-cellular interference. In particular, the proposed\nD2D power control policy is rather flexible including the special cases of no\nD2D links or using maximum transmit power. Under the considered power control,\nan analytical approach is developed to evaluate the spectral efficiency (SE)\nand energy efficiency (EE) in such networks. Thus, the exact expressions of SE\nfor a cellular user or D2D transmitter are derived, which quantify the impacts\nof key system parameters such as massive MIMO antennas and D2D density.\nMoreover, the D2D scale properties are obtained, which provide the sufficient\nconditions for achieving the anticipated SE. Numerical results corroborate our\nanalysis and show that the proposed power control solution can efficiently\nmitigate interference between the cellular and D2D tier. The results\ndemonstrate that there exists the optimal D2D density for maximizing the area\nSE of D2D tier. In addition, the achievable EE of a cellular user can be\ncomparable to that of a D2D user.\n", "title": "Spectral and Energy Efficiency of Uplink D2D Underlaid Massive MIMO Cellular Networks" }
null
null
null
null
true
null
5270
null
Default
null
null
null
{ "abstract": " Neural networks are among the most accurate supervised learning methods in\nuse today, but their opacity makes them difficult to trust in critical\napplications, especially when conditions in training differ from those in test.\nRecent work on explanations for black-box models has produced tools (e.g. LIME)\nto show the implicit rules behind predictions, which can help us identify when\nmodels are right for the wrong reasons. However, these methods do not scale to\nexplaining entire datasets and cannot correct the problems they reveal. We\nintroduce a method for efficiently explaining and regularizing differentiable\nmodels by examining and selectively penalizing their input gradients, which\nprovide a normal to the decision boundary. We apply these penalties both based\non expert annotation and in an unsupervised fashion that encourages diverse\nmodels with qualitatively different decision boundaries for the same\nclassification problem. On multiple datasets, we show our approach generates\nfaithful explanations and models that generalize much better when conditions\ndiffer between training and test.\n", "title": "Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations" }
null
null
null
null
true
null
5271
null
Default
null
null
null
{ "abstract": " Papers on the ANTARES multi-messenger program, prepared for the 35th\nInternational Cosmic Ray Conference (ICRC 2017, Busan, South Korea) by the\nANTARES Collaboration\n", "title": "The ANTARES Collaboration: Contributions to ICRC 2017 Part II: The multi-messenger program" }
null
null
null
null
true
null
5272
null
Default
null
null
null
{ "abstract": " The modified Camassa-Holm (mCH) equation is a bi-Hamiltonian system\npossessing $N$-peakon weak solutions, for all $N\\geq 1$, in the setting of an\nintegral formulation which is used in analysis for studying local\nwell-posedness, global existence, and wave breaking for non-peakon solutions.\nUnlike the original Camassa-Holm equation, the two Hamiltonians of the mCH\nequation do not reduce to conserved integrals (constants of motion) for\n$2$-peakon weak solutions. This perplexing situation is addressed here by\nfinding an explicit conserved integral for $N$-peakon weak solutions for all\n$N\\geq 2$. When $N$ is even, the conserved integral is shown to provide a\nHamiltonian structure with the use of a natural Poisson bracket that arises\nfrom reduction of one of the Hamiltonian structures of the mCH equation. But\nwhen $N$ is odd, the Hamiltonian equations of motion arising from the conserved\nintegral using this Poisson bracket are found to differ from the dynamical\nequations for the mCH $N$-peakon weak solutions. Moreover, the lack of\nconservation of the two Hamiltonians of the mCH equation when they are reduced\nto $2$-peakon weak solutions is shown to extend to $N$-peakon weak solutions\nfor all $N\\geq 2$. The connection between this loss of integrability structure\nand related work by Chang and Szmigielski on the Lax pair for the mCH equation\nis discussed.\n", "title": "Hamiltonian structure of peakons as weak solutions for the modified Camassa-Holm equation" }
null
null
null
null
true
null
5273
null
Default
null
null
null
{ "abstract": " We investigate the normal state of the superconducting compound PuCoGa$_5$\nusing the combination of density functional theory (DFT) and dynamical mean\nfield theory (DMFT), with the continuous time quantum Monte Carlo (CTQMC) and\nthe vertex-corrected one-crossing approximation (OCA) as the impurity solvers.\nOur DFT+DMFT(CTQMC) calculations suggest a strong tendency of Pu-5$f$ orbitals\nto differentiate at low temperatures. The renormalized 5$f_{5/2}$ states\nexhibit a Fermi-liquid behavior whereas one electron in the 5$f_{7/2}$ states\nis at the edge of a Mott localization. We find that the orbital differentiation\nis manifested as the removing of 5$f_{7/2}$ spectral weight from the Fermi\nlevel relative to DFT. We corroborate these conclusions with DFT+DMFT(OCA)\ncalculations which demonstrate that 5$f_{5/2}$ electrons have a much larger\nKondo scale than the 5$f_{7/2}$.\n", "title": "Orbital-dependent correlations in PuCoGa$_5$" }
null
null
null
null
true
null
5274
null
Default
null
null
null
{ "abstract": " Irreversible processes play a major role in the description and prediction of\natmospheric dynamics. In this paper, we present a variational derivation of the\nevolution equations for a moist atmosphere with rain process and subject to the\nirreversible processes of viscosity, heat conduction, diffusion, and phase\ntransition. This derivation is based on a general variational formalism for\nnonequilibrium thermodynamics which extends Hamilton's principle to\nincorporates irreversible processes. It is valid for any state equation and\nthus also covers the case of the atmosphere of other planets. In this approach,\nthe second law of thermodynamics is understood as a nonlinear constraint\nformulated with the help of new variables, called thermodynamic displacements,\nwhose time derivative coincides with the thermodynamic force of the\nirreversible process. The formulation is written both in the Lagrangian and\nEulerian descriptions and can be directly adapted to oceanic dynamics. We\nillustrate the efficiency of our variational formulation as a modeling tool in\natmospheric thermodynamics, by deriving a pseudoincompressible model for moist\natmospheric thermodynamics with general equations of state and subject to the\nirreversible processes of viscosity, heat conduction, diffusion, and phase\ntransition.\n", "title": "A variational derivation of the nonequilibrium thermodynamics of a moist atmosphere with rain process and its pseudoincompressible approximation" }
null
null
null
null
true
null
5275
null
Default
null
null
null
{ "abstract": " Inspired by Andrews' 2-colored generalized Frobenius partitions, we consider\ncertain weighted 7-colored partition functions and establish some interesting\nRamanujan-type identities and congruences. Moreover, we provide combinatorial\ninterpretations of some congruences modulo 5 and 7. Finally, we study the\nproperties of weighted 7-colored partitions weighted by the parity of certain\npartition statistics.\n", "title": "On certain weighted 7-colored partitions" }
null
null
null
null
true
null
5276
null
Default
null
null
null
{ "abstract": " This paper illustrates the similarities between the problems of customer\nchurn and employee turnover. An example of employee turnover prediction model\nleveraging classical machine learning techniques is developed. Model outputs\nare then discussed to design \\& test employee retention policies. This type of\nretention discussion is, to our knowledge, innovative and constitutes the main\nvalue of this paper.\n", "title": "Employee turnover prediction and retention policies design: a case study" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
5277
null
Validated
null
null
null
{ "abstract": " With progress in enabling autonomous cars to drive safely on the road, it is\ntime to start asking how they should be driving. A common answer is that they\nshould be adopting their users' driving style. This makes the assumption that\nusers want their autonomous cars to drive like they drive - aggressive drivers\nwant aggressive cars, defensive drivers want defensive cars. In this paper, we\nput that assumption to the test. We find that users tend to prefer a\nsignificantly more defensive driving style than their own. Interestingly, they\nprefer the style they think is their own, even though their actual driving\nstyle tends to be more aggressive. We also find that preferences do depend on\nthe specific driving scenario, opening the door for new ways of learning\ndriving style preference.\n", "title": "Do You Want Your Autonomous Car To Drive Like You?" }
null
null
null
null
true
null
5278
null
Default
null
null
null
{ "abstract": " Existing music recognition applications require a connection to a server that\nperforms the actual recognition. In this paper we present a low-power music\nrecognizer that runs entirely on a mobile device and automatically recognizes\nmusic without user interaction. To reduce battery consumption, a small music\ndetector runs continuously on the mobile device's DSP chip and wakes up the\nmain application processor only when it is confident that music is present.\nOnce woken, the recognizer on the application processor is provided with a few\nseconds of audio which is fingerprinted and compared to the stored fingerprints\nin the on-device fingerprint database of tens of thousands of songs. Our\npresented system, Now Playing, has a daily battery usage of less than 1% on\naverage, respects user privacy by running entirely on-device and can passively\nrecognize a wide range of music.\n", "title": "Now Playing: Continuous low-power music recognition" }
null
null
null
null
true
null
5279
null
Default
null
null
null
{ "abstract": " Decentralized machine learning is a promising emerging paradigm in view of\nglobal challenges of data ownership and privacy. We consider learning of linear\nclassification and regression models, in the setting where the training data is\ndecentralized over many user devices, and the learning algorithm must run\non-device, on an arbitrary communication network, without a central\ncoordinator. We propose COLA, a new decentralized training algorithm with\nstrong theoretical guarantees and superior practical performance. Our framework\novercomes many limitations of existing methods, and achieves communication\nefficiency, scalability, elasticity as well as resilience to changes in data\nand participating devices.\n", "title": "COLA: Decentralized Linear Learning" }
null
null
null
null
true
null
5280
null
Default
null
null
null
{ "abstract": " The expected improvement (EI) algorithm is a popular strategy for information\ncollection in optimization under uncertainty. The algorithm is widely known to\nbe too greedy, but nevertheless enjoys wide use due to its simplicity and\nability to handle uncertainty and noise in a coherent decision theoretic\nframework. To provide rigorous insight into EI, we study its properties in a\nsimple setting of Bayesian optimization where the domain consists of a finite\ngrid of points. This is the so-called best-arm identification problem, where\nthe goal is to allocate measurement effort wisely to confidently identify the\nbest arm using a small number of measurements. In this framework, one can show\nformally that EI is far from optimal. To overcome this shortcoming, we\nintroduce a simple modification of the expected improvement algorithm.\nSurprisingly, this simple change results in an algorithm that is asymptotically\noptimal for Gaussian best-arm identification problems, and provably outperforms\nstandard EI by an order of magnitude.\n", "title": "Improving the Expected Improvement Algorithm" }
null
null
null
null
true
null
5281
null
Default
null
null
null
{ "abstract": " We study performance limits of solutions to utility maximization problems\n(e.g., max-min problems) in wireless networks as a function of the power budget\n$\\bar{p}$ available to transmitters. Special focus is devoted to the utility\nand the transmit energy efficiency (i.e., utility over transmit power) of the\nsolution. Briefly, we show tight bounds for the general class of network\nutility optimization problems that can be solved by computing conditional\neigenvalues of standard interference mappings. The proposed bounds, which are\nbased on the concept of asymptotic functions, are simple to compute, provide us\nwith good estimates of the performance of networks for any value of $\\bar{p}$\nin many real-world applications, and enable us to determine points in which\nnetworks move from a noise limited regime to an interference limited regime.\nFurthermore, they also show that the utility and the transmit energy efficiency\nscales as $\\Theta(1)$ and $\\Theta(1/\\bar{p})$, respectively, as\n$\\bar{p}\\to\\infty$.\n", "title": "Performance Limits of Solutions to Network Utility Maximization Problems" }
null
null
[ "Computer Science" ]
null
true
null
5282
null
Validated
null
null
null
{ "abstract": " A new approach to problems of the Uncertainty Principle in Harmonic Analysis,\nbased on the use of Toeplitz operators, has brought progress to some of the\nclassical problems in the area. The goal of this paper is to develop and\nsystematize the function theoretic component of the Toeplitz approach by\nintroducing a partial order on the set of inner functions induced by the action\nof Toeplitz operators. We study connections of the new order with some of the\nclassical problems and known results. We discuss remaining problems and\npossible directions for further research.\n", "title": "Toeplitz Order" }
null
null
null
null
true
null
5283
null
Default
null
null
null
{ "abstract": " We study definably compact definably connected groups definable in a\nsufficiently saturated real closed field $R$. We introduce the notion of\ngroup-generic point for $\\bigvee$-definable groups and show the existence of\ngroup-generic points for definably compact groups definable in a sufficiently\nsaturated o-minimal expansion of a real closed field. We use this notion along\nwith some properties of generic sets to prove that for every definably compact\ndefinably connected group $G$ definable in $R$ there are a connected\n$R$-algebraic group $H$, a definable injective map $\\phi$ from a generic\ndefinable neighborhood of the identity of $G$ into the group $H\\left(R\\right)$\nof $R$-points of $H$ such that $\\phi$ acts as a group homomorphism inside its\ndomain. This result is used in [2] to prove that the o-minimal universal\ncovering group of an abelian connected definably compact group definable in a\nsufficiently saturated real closed field $R$ is, up to locally definable\nisomorphisms, an open connected locally definable subgroup of the o-minimal\nuniversal covering group of the $R$-points of some $R$-algebraic group.\n", "title": "Definably compact groups definable in real closed fields. I" }
null
null
null
null
true
null
5284
null
Default
null
null
null
{ "abstract": " Despite the widely-spread consensus on the brain complexity, sprouts of the\nsingle neuron revolution emerged in neuroscience in the 1970s. They brought\nmany unexpected discoveries, including grandmother or concept cells and sparse\ncoding of information in the brain.\nIn machine learning for a long time, the famous curse of dimensionality\nseemed to be an unsolvable problem. Nevertheless, the idea of the blessing of\ndimensionality becomes gradually more and more popular. Ensembles of\nnon-interacting or weakly interacting simple units prove to be an effective\ntool for solving essentially multidimensional problems. This approach is\nespecially useful for one-shot (non-iterative) correction of errors in large\nlegacy artificial intelligence systems.\nThese simplicity revolutions in the era of complexity have deep fundamental\nreasons grounded in geometry of multidimensional data spaces. To explore and\nunderstand these reasons we revisit the background ideas of statistical\nphysics. In the course of the 20th century they were developed into the\nconcentration of measure theory. New stochastic separation theorems reveal the\nfine structure of the data clouds.\nWe review and analyse biological, physical, and mathematical problems at the\ncore of the fundamental question: how can high-dimensional brain organise\nreliable and fast learning in high-dimensional world of data by simple tools?\nTwo critical applications are reviewed to exemplify the approach: one-shot\ncorrection of errors in intellectual systems and emergence of static and\nassociative memories in ensembles of single neurons.\n", "title": "The unreasonable effectiveness of small neural ensembles in high-dimensional brain" }
null
null
null
null
true
null
5285
null
Default
null
null
null
{ "abstract": " We provide explicit and unified formulas for the cocycles of all degrees on\nthe normalized bar resolutions of finite abelian groups. This is achieved by\nconstructing a chain map from the normalized bar resolution to a Koszul-like\nresolution for any given finite abelian group. With a help of the obtained\ncocycle formulas, we determine all the braided linear Gr-categories and compute\nthe Dijkgraaf-Witten Invariants of the $n$-torus for all $n$.\n", "title": "Explicit cocycle formulas on finite abelian groups with applications to braided linear Gr-categories and Dijkgraaf-Witten invariants" }
null
null
[ "Mathematics" ]
null
true
null
5286
null
Validated
null
null
null
{ "abstract": " Regular variation is often used as the starting point for modeling\nmultivariate heavy-tailed data. A random vector is regularly varying if and\nonly if its radial part $R$ is regularly varying and is asymptotically\nindependent of the angular part $\\Theta$ as $R$ goes to infinity. The\nconditional limiting distribution of $\\Theta$ given $R$ is large characterizes\nthe tail dependence of the random vector and hence its estimation is the\nprimary goal of applications. A typical strategy is to look at the angular\ncomponents of the data for which the radial parts exceed some threshold. While\na large class of methods has been proposed to model the angular distribution\nfrom these exceedances, the choice of threshold has been scarcely discussed in\nthe literature. In this paper, we describe a procedure for choosing the\nthreshold by formally testing the independence of $R$ and $\\Theta$ using a\nmeasure of dependence called distance covariance. We generalize the limit\ntheorem for distance covariance to our unique setting and propose an algorithm\nwhich selects the threshold for $R$. This algorithm incorporates a subsampling\nscheme that is also applicable to weakly dependent data. Moreover, it avoids\nthe heavy computation in the calculation of the distance covariance, a typical\nlimitation for this measure. The performance of our method is illustrated on\nboth simulated and real data.\n", "title": "Threshold Selection for Multivariate Heavy-Tailed Data" }
null
null
null
null
true
null
5287
null
Default
null
null
null
{ "abstract": " Deep learning (DL) has recently achieved tremendous success in a variety of\ncutting-edge applications, e.g., image recognition, speech and natural language\nprocessing, and autonomous driving. Besides the available big data and hardware\nevolution, DL frameworks and platforms play a key role to catalyze the\nresearch, development, and deployment of DL intelligent solutions. However, the\ndifference in computation paradigm, architecture design and implementation of\nexisting DL frameworks and platforms brings challenges for DL software\ndevelopment, deployment, maintenance, and migration. Up to the present, it\nstill lacks a comprehensive study on how current diverse DL frameworks and\nplatforms influence the DL software development process.\nIn this paper, we initiate the first step towards the investigation on how\nexisting state-of-the-art DL frameworks (i.e., TensorFlow, Theano, and Torch)\nand platforms (i.e., server/desktop, web, and mobile) support the DL software\ndevelopment activities. We perform an in-depth and comparative evaluation on\nmetrics such as learning accuracy, DL model size, robustness, and performance,\non state-of-the-art DL frameworks across platforms using two popular datasets\nMNIST and CIFAR-10. Our study reveals that existing DL frameworks still suffer\nfrom compatibility issues, which becomes even more severe when it comes to\ndifferent platforms. We pinpoint the current challenges and opportunities\ntowards developing high quality and compatible DL systems. To ignite further\ninvestigation along this direction to address urgent industrial demands of\nintelligent solutions, we make all of our assembled feasible toolchain and\ndataset publicly available.\n", "title": "An Orchestrated Empirical Study on Deep Learning Frameworks and Platforms" }
null
null
null
null
true
null
5288
null
Default
null
null
null
{ "abstract": " In this note, we analyze the classification problem for compact metrizable\n$G$-ambits for a countable discrete group $G$ from the point of view of\ndescriptive set theory. More precisely, we prove that the topological conjugacy\nrelation on the standard Borel space of compact metrizable $G$-ambits is Borel\nfor every countable discrete group $G$.\n", "title": "On the complexity of topological conjugacy of compact metrizable $G$-ambits" }
null
null
null
null
true
null
5289
null
Default
null
null
null
{ "abstract": " The purpose of this article is to investigate relations between\nW-superalgebras and integrable super-Hamiltonian systems. To this end, we\nintroduce the generalized Drinfel'd-Sokolov (D-S) reduction associated to a Lie\nsuperalgebra $g$ and its even nilpotent element $f$, and we find a new\ndefinition of the classical affine W-superalgebra $W(g,f,k)$ via the D-S\nreduction. This new construction allows us to find free generators of\n$W(g,f,k)$, as a differential superalgebra, and two independent Lie brackets on\n$W(g,f,k)/\\partial W(g,f,k).$ Moreover, we describe super-Hamiltonian systems\nwith the Poisson vertex algebras theory. A W-superalgebra with certain\nproperties can be understood as an underlying differential superalgebra of a\nseries of integrable super-Hamiltonian systems.\n", "title": "Classical affine W-superalgebras via generalized Drinfeld-Sokolov reductions and related integrable systems" }
null
null
null
null
true
null
5290
null
Default
null
null
null
{ "abstract": " Temporal object detection has attracted significant attention, but most\npopular detection methods can not leverage the rich temporal information in\nvideos. Very recently, many different algorithms have been developed for video\ndetection task, but real-time online approaches are frequently deficient. In\nthis paper, based on attention mechanism and convolutional long short-term\nmemory (ConvLSTM), we propose a temporal signal-shot detector (TSSD) for\nreal-world detection. Distinct from previous methods, we take aim at temporally\nintegrating pyramidal feature hierarchy using ConvLSTM, and design a novel\nstructure including a low-level temporal unit as well as a high-level one\n(HL-TU) for multi-scale feature maps. Moreover, we develop a creative temporal\nanalysis unit, namely, attentional ConvLSTM (AC-LSTM), in which a temporal\nattention module is specially tailored for background suppression and scale\nsuppression while a ConvLSTM integrates attention-aware features through time.\nAn association loss is designed for temporal coherence. Besides, online tubelet\nanalysis (OTA) is exploited for identification. Finally, our method is\nevaluated on ImageNet VID dataset and 2DMOT15 dataset. Extensive comparisons on\nthe detection and tracking capability validate the superiority of the proposed\napproach. Consequently, the developed TSSD-OTA is fairly faster and achieves an\noverall competitive performance in terms of detection and tracking. The source\ncode will be made available.\n", "title": "Temporally Identity-Aware SSD with Attentional LSTM" }
null
null
null
null
true
null
5291
null
Default
null
null
null
{ "abstract": " This paper develops a Carleman type estimate for immersed surface in\nEuclidean space at infinity. With this estimate, we obtain an unique\ncontinuation property for harmonic functions on immersed surfaces vanishing at\ninfinity, which leads to rigidity results in geometry.\n", "title": "Carleman Estimate for Surface in Euclidean Space at Infinity" }
null
null
null
null
true
null
5292
null
Default
null
null
null
{ "abstract": " Several temporal logics have been proposed to formalise timing diagram\nrequirements over hardware and embedded controllers. These include LTL,\ndiscrete time MTL and the recent industry standard PSL. However, succintness\nand visual structure of a timing diagram are not adequately captured by their\nformulae. Interval temporal logic QDDC is a highly succint and visual notation\nfor specifying patterns of behaviours.\nIn this paper, we propose a practically useful notation called SeCeCntnl\nwhich enhances negation free fragment of QDDC with features of nominals and\nlimited liveness. We show that timing diagrams can be naturally\n(compositionally) and succintly formalized in SeCeCntnl as compared with PSL\nand MTL. We give a linear time translation from timing diagrams to SeCeCntnl.\nAs our second main result, we propose a linear time translation of SeCeCntnl\ninto QDDC. This allows QDDC tools such as DCVALID and DCSynth to be used for\nchecking consistency of timing diagram requirements as well as for automatic\nsynthesis of property monitors and controllers. We give examples of a minepump\ncontroller and a bus arbiter to illustrate our tools. Giving a theoretical\nanalysis, we show that for the proposed SeCeCntnl, the satisfiability and model\nchecking have elementary complexity as compared to the non-elementary\ncomplexity for the full logic QDDC.\n", "title": "Formalizing Timing Diagram Requirements in Discrete Duration Calulus" }
null
null
[ "Computer Science" ]
null
true
null
5293
null
Validated
null
null
null
{ "abstract": " Wild sets in $\\mathbb{R}^n$ can be tamed through the use of various\nrepresentations though sometimes this taming removes features considered\nimportant. Finding the wildest sets for which it is still true that the\nrepresentations faithfully inform us about the original set is the focus of\nthis rather playful, expository paper that we hope will stimulate interest in\ncubical coverings as well as the other two ideas we explore briefly: Jones'\n$\\beta$ numbers and varifolds from geometric measure theory.\n", "title": "Cubical Covers of Sets in $\\mathbb{R}^n$" }
null
null
null
null
true
null
5294
null
Default
null
null
null
{ "abstract": " Mitochondrial DNA (mtDNA) mutations cause severe congenital diseases but may\nalso be associated with healthy aging. MtDNA is stochastically replicated and\ndegraded, and exists within organelles which undergo dynamic fusion and\nfission. The role of the resulting mitochondrial networks in determining the\ntime evolution of the cellular proportion of mutated mtDNA molecules\n(heteroplasmy), and cell-to-cell variability in heteroplasmy (heteroplasmy\nvariance), remains incompletely understood. Heteroplasmy variance is\nparticularly important since it modulates the number of pathological cells in a\ntissue. Here, we provide the first wide-reaching mathematical treatment which\nbridges mitochondrial network and genetic states. We show that, for a range of\nmodels, the rate of increase in heteroplasmy variance, and the rate of\n\\textit{de novo} mutation, is proportionately modulated by the fraction of\nunfused mitochondria, independently of the absolute fission-fusion rate. In the\ncontext of selective fusion, we show that intermediate fusion/fission ratios\nare optimal for the clearance of mtDNA mutants. Our findings imply that\nmodulating network state, mitophagy rate and copy number to slow down\nheteroplasmy dynamics when mean heteroplasmy is low, could have therapeutic\nadvantages for mitochondrial disease and healthy aging.\n", "title": "Mitochondrial network fragmentation modulates mutant mtDNA accumulation independently of absolute fission-fusion rates" }
null
null
null
null
true
null
5295
null
Default
null
null
null
{ "abstract": " Let $f:\\mathbb{S}^{d-1}\\times \\mathbb{S}^{d-1}\\to\\mathbb{S}$ be a function of\nthe form $f(\\mathbf{x},\\mathbf{x}') = g(\\langle\\mathbf{x},\\mathbf{x}'\\rangle)$\nfor $g:[-1,1]\\to \\mathbb{R}$. We give a simple proof that shows that poly-size\ndepth two neural networks with (exponentially) bounded weights cannot\napproximate $f$ whenever $g$ cannot be approximated by a low degree polynomial.\nMoreover, for many $g$'s, such as $g(x)=\\sin(\\pi d^3x)$, the number of neurons\nmust be $2^{\\Omega\\left(d\\log(d)\\right)}$. Furthermore, the result holds\nw.r.t.\\ the uniform distribution on $\\mathbb{S}^{d-1}\\times \\mathbb{S}^{d-1}$.\nAs many functions of the above form can be well approximated by poly-size depth\nthree networks with poly-bounded weights, this establishes a separation between\ndepth two and depth three networks w.r.t.\\ the uniform distribution on\n$\\mathbb{S}^{d-1}\\times \\mathbb{S}^{d-1}$.\n", "title": "Depth Separation for Neural Networks" }
null
null
null
null
true
null
5296
null
Default
null
null
null
{ "abstract": " For nanotechnology nodes, the feature size is shrunk rapidly, the wire\nbecomes narrow and thin, it leads to high RC parasitic, especially for\nresistance. The overall system performance are dominated by interconnect rather\nthan device. As such, it is imperative to accurately measure and model\ninterconnect parasitic in order to predict interconnect performance on silicon.\nDespite many test structures developed in the past to characterize device\nmodels and layout effects, only few of them are available for interconnects.\nNevertheless, they are either not suitable for real chip implementation or too\ncomplicated to be embedded. A compact yet comprehensive test structure to\ncapture all interconnect parasitic in a real chip is needed. To address this\nproblem, this paper describes a set of test structures that can be used to\nstudy the timing performance (i.e. propagation delay and crosstalk) of various\ninterconnect configurations. Moreover, an empirical model is developed to\nestimate the actual RC parasitic. Compared with the state-of-the-art\ninterconnect test structures, the new structure is compact in size and can be\neasily embedded on die as a parasitic variation monitor. We have validated the\nproposed structure on a test chip in TSMC 28nm HPM process. Recently, the test\nstructure is further modified to identify the serious interconnect process\nissues for critical path design using TSMC 7nm FF process.\n", "title": "An Accurate Interconnect Test Structure for Parasitic Validation in On-Chip Machine Learning Accelerators" }
null
null
null
null
true
null
5297
null
Default
null
null
null
{ "abstract": " NGC 1448 is one of the nearest luminous galaxies ($L_{8-1000\\mu m} >$ 10$^{9}\nL_{\\odot}$) to ours ($z$ $=$ 0.00390), and yet the active galactic nucleus\n(AGN) it hosts was only recently discovered, in 2009. In this paper, we present\nan analysis of the nuclear source across three wavebands: mid-infrared (MIR)\ncontinuum, optical, and X-rays. We observed the source with the Nuclear\nSpectroscopic Telescope Array (NuSTAR), and combined this data with archival\nChandra data to perform broadband X-ray spectral fitting ($\\approx$0.5-40 keV)\nof the AGN for the first time. Our X-ray spectral analysis reveals that the AGN\nis buried under a Compton-thick (CT) column of obscuring gas along our\nline-of-sight, with a column density of $N_{\\rm H}$(los) $\\gtrsim$ 2.5 $\\times$\n10$^{24}$ cm$^{-2}$. The best-fitting torus models measured an intrinsic 2-10\nkeV luminosity of $L_{2-10\\rm{,int}}$ $=$ (3.5-7.6) $\\times$ 10$^{40}$ erg\ns$^{-1}$, making NGC 1448 one of the lowest luminosity CTAGNs known. In\naddition to the NuSTAR observation, we also performed optical spectroscopy for\nthe nucleus in this edge-on galaxy using the European Southern Observatory New\nTechnology Telescope. We re-classify the optical nuclear spectrum as a Seyfert\non the basis of the Baldwin-Philips-Terlevich diagnostic diagrams, thus\nidentifying the AGN at optical wavelengths for the first time. We also present\nhigh spatial resolution MIR observations of NGC 1448 with Gemini/T-ReCS, in\nwhich a compact nucleus is clearly detected. The absorption-corrected 2-10 keV\nluminosity measured from our X-ray spectral analysis agrees with that predicted\nfrom the optical [OIII]$\\lambda$5007\\AA\\ emission line and the MIR 12$\\mu$m\ncontinuum, further supporting the CT nature of the AGN.\n", "title": "A New Compton-thick AGN in our Cosmic Backyard: Unveiling the Buried Nucleus in NGC 1448 with NuSTAR" }
null
null
null
null
true
null
5298
null
Default
null
null
null
{ "abstract": " We present natural and general ways of building Lie groupoids, by using the\nclassical procedures of blowups and of deformations to the normal cone. Our\nconstructions are seen to recover many known ones involved in index theory. The\ndeformation and blowup groupoids obtained give rise to several extensions of\n$C^*$-algebras and to full index problems. We compute the corresponding\nK-theory maps. Finally, the blowup of a manifold sitting in a transverse way in\nthe space of objects of a Lie groupoid leads to a calculus, quite similar to\nthe Boutet de Monvel calculus for manifolds with boundary.\n", "title": "Blowup constructions for Lie groupoids and a Boutet de Monvel type calculus" }
null
null
null
null
true
null
5299
null
Default
null
null
null
{ "abstract": " Spin-gapless semiconductors with their unique band structures have recently\nattracted much attention due to their interesting transport properties that can\nbe utilized in spintronics applications. We have successfully deposited the\nthin films of quaternary spin-gapless semiconductor CoFeMnSi Heusler alloy on\nMgO (001) substrates using a pulsed laser deposition system. These films show\nepitaxial growth along (001) direction and display uniform and smooth\ncrystalline surface. The magnetic properties reveal that the film is\nferromagnetically soft along the in-plane direction and its Curie temperature\nis well above 400 K. The electrical conductivity of the film is low and\nexhibits a nearly temperature independent semiconducting behaviour. The\nestimated temperature coefficient of resistivity for the film is -7x10^-10\nOhm.m/K, which is comparable to the values reported for spin-gapless\nsemiconductors.\n", "title": "Possible spin gapless semiconductor type behaviour in CoFeMnSi epitaxial thin films" }
null
null
[ "Physics" ]
null
true
null
5300
null
Validated
null
null