| text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, lengths 1-5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " We study phase transitions in a two dimensional weakly interacting Bose gas\nin a random potential at finite temperatures. We identify superfluid, normal\nfluid, and insulator phases and construct the phase diagram. At T=0 one has a\ntricritical point where the three phases coexist. The truncation of the energy\ndistribution at the trap barrier, which is a generic phenomenon in cold atom\nsystems, limits the growth of the localization length and in contrast to the\nthermodynamic limit the insulator phase is present at any temperature.\n",
"title": "Finite temperature disordered bosons in two dimensions"
}
| null | null | null | null | true | null |
15801
| null |
Default
| null | null |
null |
{
"abstract": " In this work, the study of thermal conductivity before and after in-situ\nring-opening polymerization of cyclic butylene terephthalate into poly\n(butylene terephthalate) in presence of graphene-related materials (GRM) is\naddressed, to gain insight in the modification of nanocomposites morphology\nupon polymerization. Five types of GRM were used: one type of graphite\nnanoplatelets, two different grades of reduced graphene oxide (rGO) and the\nsame rGO grades after thermal annealing for 1 hour at 1700°C under vacuum\nto reduce their defectiveness. Polymerization of CBT into pCBT, morphology and\nnanoparticle organization were investigated by means of differential scanning\ncalorimetry, electron microscopy and rheology. Electrical and thermal\nproperties were investigated by means of volumetric resistivity and bulk\nthermal conductivity measurement. In particular, the reduction of nanoflake\naspect ratio during ring-opening polymerization was found to have a detrimental\neffect on both electrical and thermal conductivities in nanocomposites.\n",
"title": "Morphology and properties evolution upon ring-opening polymerization during extrusion of cyclic butylene terephthalate and graphene-related-materials into thermally conductive nanocomposites"
}
| null | null | null | null | true | null |
15802
| null |
Default
| null | null |
null |
{
"abstract": " We investigate the forecasting ability of the most commonly used benchmarks\nin financial economics. We approach the usual caveats of probabilistic\nforecasts studies -small samples, limited models and non-holistic validations-\nby performing a comprehensive comparison of 15 predictive schemes during a time\nperiod of over 21 years. All densities are evaluated in terms of their\nstatistical consistency, local accuracy and forecasting errors. Using a new\ncomposite indicator, the Integrated Forecast Score (IFS), we show that\nrisk-neutral densities outperform historical-based predictions in terms of\ninformation content. We find that the Variance Gamma model generates the\nhighest out-of-sample likelihood of observed prices and the lowest predictive\nerrors, whereas the ARCH-based GJR-FHS delivers the most consistent forecasts\nacross the entire density range. In contrast, lognormal densities, the Heston\nmodel or the Breeden-Litzenberger formula yield biased predictions and are\nrejected in statistical tests.\n",
"title": "Financial density forecasts: A comprehensive comparison of risk-neutral and historical schemes"
}
| null | null | null | null | true | null |
15803
| null |
Default
| null | null |
null |
{
"abstract": " Recent years have seen the increasing need of location awareness by mobile\napplications. This paper presents a room-level indoor localization approach\nbased on the measured room's echos in response to a two-millisecond single-tone\ninaudible chirp emitted by a smartphone's loudspeaker. Different from other\nacoustics-based room recognition systems that record full-spectrum audio for up\nto ten seconds, our approach records audio in a narrow inaudible band for 0.1\nseconds only to preserve the user's privacy. However, the short-time and\nnarrowband audio signal carries limited information about the room's\ncharacteristics, presenting challenges to accurate room recognition. This paper\napplies deep learning to effectively capture the subtle fingerprints in the\nrooms' acoustic responses. Our extensive experiments show that a two-layer\nconvolutional neural network fed with the spectrogram of the inaudible echos\nachieve the best performance, compared with alternative designs using other raw\ndata formats and deep models. Based on this result, we design a RoomRecognize\ncloud service and its mobile client library that enable the mobile application\ndevelopers to readily implement the room recognition functionality without\nresorting to any existing infrastructures and add-on hardware.\nExtensive evaluation shows that RoomRecognize achieves 99.7%, 97.7%, 99%, and\n89% accuracy in differentiating 22 and 50 residential/office rooms, 19 spots in\na quiet museum, and 15 spots in a crowded museum, respectively. Compared with\nthe state-of-the-art approaches based on support vector machine, RoomRecognize\nsignificantly improves the Pareto frontier of recognition accuracy versus\nrobustness against interfering sounds (e.g., ambient music).\n",
"title": "Deep Room Recognition Using Inaudible Echos"
}
| null | null | null | null | true | null |
15804
| null |
Default
| null | null |
null |
{
"abstract": " The massive popularity of online social media provides a unique opportunity\nfor researchers to study the linguistic characteristics and patterns of user's\ninteractions. In this paper, we provide an in-depth characterization of\nlanguage usage across demographic groups in Twitter. In particular, we extract\nthe gender and race of Twitter users located in the U.S. using advanced image\nprocessing algorithms from Face++. Then, we investigate how demographic groups\n(i.e. male/female, Asian/Black/White) differ in terms of linguistic styles and\nalso their interests. We extract linguistic features from 6 categories\n(affective attributes, cognitive attributes, lexical density and awareness,\ntemporal references, social and personal concerns, and interpersonal focus), in\norder to identify the similarities and differences in particular writing set of\nattributes. In addition, we extract the absolute ranking difference of top\nphrases between demographic groups. As a dimension of diversity, we also use\nthe topics of interest that we retrieve from each user. Our analysis unveils\nclear differences in the writing styles (and the topics of interest) of\ndifferent demographic groups, with variation seen across both gender and race\nlines. We hope our effort can stimulate the development of new studies related\nto demographic information in the online space.\n",
"title": "Linguistic Diversities of Demographic Groups in Twitter"
}
| null | null | null | null | true | null |
15805
| null |
Default
| null | null |
null |
{
"abstract": " We investigate the characteristics of factual and emotional argumentation\nstyles observed in online debates. Using an annotated set of \"factual\" and\n\"feeling\" debate forum posts, we extract patterns that are highly correlated\nwith factual and emotional arguments, and then apply a bootstrapping\nmethodology to find new patterns in a larger pool of unannotated forum posts.\nThis process automatically produces a large set of patterns representing\nlinguistic expressions that are highly correlated with factual and emotional\nlanguage. Finally, we analyze the most discriminating patterns to better\nunderstand the defining characteristics of factual and emotional arguments.\n",
"title": "And That's A Fact: Distinguishing Factual and Emotional Argumentation in Online Dialogue"
}
| null | null | null | null | true | null |
15806
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we develop a conservative sharp-interface method dedicated to\nsimulating multiple compressible fluids. Numerical treatments for a cut cell\nshared by more than two materials are proposed. First, we simplify the\ninterface interaction inside such a cell with a reduced model to avoid explicit\ninterface reconstruction and complex flux calculation. Second, conservation is\nstrictly preserved by an efficient conservation correction procedure for the\ncut cell. To improve the robustness, a multi-material scale separation model is\ndeveloped to consistently remove non-resolved interface scales. In addition,\nthe multi-resolution method and local time-stepping scheme are incorporated\ninto the proposed multi-material method to speed up the high-resolution\nsimulations. Various numerical test cases, including the multi-material shock\ntube problem, inertial confinement fusion implosion, triple-point shock\ninteraction and shock interaction with multi-material bubbles, show that the\nmethod is suitable for a wide range of complex compressible multi-material\nflows.\n",
"title": "A conservative sharp-interface method for compressible multi-material flows"
}
| null | null | null | null | true | null |
15807
| null |
Default
| null | null |
null |
{
"abstract": " Among several developments, the field of Economic Complexity (EC) has notably\nseen the introduction of two new techniques. One is the Bootstrapped Selective\nPredictability Scheme (SPSb), which can provide quantitative forecasts of the\nGross Domestic Product of countries. The other, Hidden Markov Model (HMM)\nregularisation, denoises the datasets typically employed in the literature. We\ncontribute to EC along three different directions. First, we prove the\nconvergence of the SPSb algorithm to a well-known statistical learning\ntechnique known as Nadaraya-Watson Kernel regression. The latter has\nsignificantly lower time complexity, produces deterministic results, and it is\ninterchangeable with SPSb for the purpose of making predictions. Second, we\nstudy the effects of HMM regularization on the Product Complexity and logPRODY\nmetrics, for which a model of time evolution has been recently proposed. We\nfind confirmation for the original interpretation of the logPRODY model as\ndescribing the change in the global market structure of products with new\ninsights allowing a new interpretation of the Complexity measure, for which we\npropose a modification. Third, we explore new effects of regularisation on the\ndata. We find that it reduces noise, and observe for the first time that it\nincreases nestedness in the export network adjacency matrix.\n",
"title": "Complexity of products: the effect of data regularisation"
}
| null | null | null | null | true | null |
15808
| null |
Default
| null | null |
null |
{
"abstract": " Two-dimensional (2D) materials, such as graphene and MoS2, have been\nattracting wide interest in surface enhancement Raman spectroscopy. This\nperspective gives an overview of recent developments in 2D materials'\napplication in surface enhanced Raman spectroscopy. This review focuses on the\napplications of using bare 2D materials and metal/2D material hybrid substrate\nfor Raman enhancement. The Raman enhancing mechanism of 2D materials will also\nbe discussed. The progress covered herein shows great promise for widespread\nadoption of 2D materials in SERS application.\n",
"title": "A review on applications of two-dimensional materials in surface enhanced Raman spectroscopy"
}
| null | null | null | null | true | null |
15809
| null |
Default
| null | null |
null |
{
"abstract": " We perform ultrasound velocity measurements on a single crystal of\nnearly-metallic spinel Co$_{1.21}$V$_{1.79}$O$_4$ which exhibits a\nferrimagnetic phase transition at $T_C \\sim$ 165 K. The experiments reveal a\nvariety of elastic anomalies in not only the paramagnetic phase above $T_C$ but\nalso the ferrimagnetic phase below $T_C$, which should be driven by the\nnearly-itinerant character of the orbitally-degenerate V 3$d$ electrons. In the\nparamagnetic phase above $T_C$, the elastic moduli exhibit\nelastic-mode-dependent unusual temperature variations, suggesting the existence\nof a dynamic spin-cluster state. Furthermore, above $T_C$, the sensitive\nmagnetic-field response of the elastic moduli suggests that, with the negative\nmagnetoresistance, the magnetic-field-enhanced nearly-itinerant character of\nthe V 3$d$ electrons emerges from the spin-cluster state. This should be\ntriggered by the inter-V-site interactions acting on the orbitally-degenerate\n3$d$ electrons. In the ferrimagnetic phase below $T_C$, the elastic moduli\nexhibit distinct anomalies at $T_1\\sim$ 95 K and $T_2\\sim$ 50 K, with a sign\nchange of the magnetoresistance at $T_1$ (positive below $T_1$) and an\nenhancement of the positive magnetoresistance below $T_2$, respectively. These\nobservations below $T_C$ suggest the successive occurrence of an orbital glassy\norder at $T_1$ and a structural phase transition at $T_2$, where the rather\nlocalized character of the V 3$d$ electrons evolves below $T_1$ and is further\nenhanced below $T_2$.\n",
"title": "A variety of elastic anomalies in orbital-active nearly-itinerant cobalt vanadate spinel"
}
| null | null | null | null | true | null |
15810
| null |
Default
| null | null |
null |
{
"abstract": " We propose a one-class neural network (OC-NN) model to detect anomalies in\ncomplex data sets. OC-NN combines the ability of deep networks to extract a\nprogressively rich representation of data with the one-class objective of\ncreating a tight envelope around normal data. The OC-NN approach breaks new\nground for the following crucial reason: data representation in the hidden\nlayer is driven by the OC-NN objective and is thus customized for anomaly\ndetection. This is a departure from other approaches which use a hybrid\napproach of learning deep features using an autoencoder and then feeding the\nfeatures into a separate anomaly detection method like one-class SVM (OC-SVM).\nThe hybrid OC-SVM approach is sub-optimal because it is unable to influence\nrepresentational learning in the hidden layers. A comprehensive set of\nexperiments demonstrate that on complex data sets (like CIFAR and GTSRB), OC-NN\nperforms on par with state-of-the-art methods and outperformed conventional\nshallow methods in some scenarios.\n",
"title": "Anomaly Detection using One-Class Neural Networks"
}
| null | null | null | null | true | null |
15811
| null |
Default
| null | null |
null |
{
"abstract": " An open-source vehicle testbed to enable the exploration of automation\ntechnologies for road vehicles is presented. The platform hardware and\nsoftware, based on the Robot Operating System (ROS), are detailed. Two methods\nare discussed for enabling the remote control of a vehicle (in this case, an\nelectric 2013 Ford Focus). The first approach used digital filtering of\nController Area Network (CAN) messages. In the case of the test vehicle, this\napproach allowed for the control of acceleration from a tap-point on the CAN\nbus and the OBD-II port. The second approach, based on the emulation of the\nanalog output(s) of a vehicle's accelerator pedal, brake pedal, and steering\ntorque sensors, is more generally applicable and, in the test vehicle, allowed\nfor the full control vehicle acceleration, braking, and steering. To\ndemonstrate the utility of the testbed for vehicle automation research, system\nidentification was performed on the test vehicle and speed and steering\ncontrollers were designed to allow the vehicle to follow a predetermined path.\nThe resulting system was shown to be differentially flat, and a high level path\nfollowing algorithm was developed using the differentially flat properties and\nstate feedback. The path following algorithm is experimentally validated on the\nautomation testbed developed in the paper.\n",
"title": "Low Cost, Open-Source Testbed to Enable Full-Sized Automated Vehicle Research"
}
| null | null | null | null | true | null |
15812
| null |
Default
| null | null |
null |
{
"abstract": " We consider the minimization of non-convex functions that typically arise in\nmachine learning. Specifically, we focus our attention on a variant of trust\nregion methods known as cubic regularization. This approach is particularly\nattractive because it escapes strict saddle points and it provides stronger\nconvergence guarantees than first- and second-order as well as classical trust\nregion methods. However, it suffers from a high computational complexity that\nmakes it impractical for large-scale learning. Here, we propose a novel method\nthat uses sub-sampling to lower this computational cost. By the use of\nconcentration inequalities we provide a sampling scheme that gives sufficiently\naccurate gradient and Hessian approximations to retain the strong global and\nlocal convergence guarantees of cubically regularized methods. To the best of\nour knowledge this is the first work that gives global convergence guarantees\nfor a sub-sampled variant of cubic regularization on non-convex functions.\nFurthermore, we provide experimental results supporting our theory.\n",
"title": "Sub-sampled Cubic Regularization for Non-convex Optimization"
}
| null | null |
[
"Computer Science",
"Mathematics",
"Statistics"
] | null | true | null |
15813
| null |
Validated
| null | null |
null |
{
"abstract": " Following the advent of electromagnetic metamaterials at the turn of the\ncentury, researchers working in other areas of wave physics have translated\nconcepts of electromagnetic metamaterials to acoustics, elastodynamics, as well\nas to heat, mass and light diffusion processes. In elastodynamics, seismic\nmetamaterials have emerged in the last decade for soft soils structured at the\nmeter scale, and have been tested thanks to full-scale experiments on holey\nsoils five years ago. Born in the soil, seismic metamaterials grow\nsimultaneously on the field of tuned-resonators buried in the soil, around\nbuilding's foundations or near the soil-structure's interface, and on the field\nof above-surface resonators. In this perspective article, we quickly recall\nsome research advances made in all these types of seismic metamaterials and we\nfurther dress an inventory of which material parameters can be achieved and\nwhich cannot, notably from the effective medium theory perspective. We finally\nenvision perspectives on future developments of large scale auxetic\nmetamaterials for building's foundations, forests of trees for seismic\nprotection and metamaterial-like transformed urbanism at the city scale.\n",
"title": "Emergence of Seismic Metamaterials: Current State and Future Perspectives"
}
| null | null | null | null | true | null |
15814
| null |
Default
| null | null |
null |
{
"abstract": " Community structure describes the organization of a network into subgraphs\nthat contain a prevalence of edges within each subgraph and relatively few\nedges across boundaries between subgraphs. The development of\ncommunity-detection methods has occurred across disciplines, with numerous and\nvaried algorithms proposed to find communities. As we present in this Chapter\nvia several case studies, community detection is not just an \"end game\" unto\nitself, but rather a step in the analysis of network data which is then useful\nfor furthering research in the disciplinary domain of interest. These\ncase-study examples arise from diverse applications, ranging from social and\npolitical science to neuroscience and genetics, and we have chosen them to\ndemonstrate key aspects of community detection and to highlight that community\ndetection, in practice, should be directed by the application at hand.\n",
"title": "Case studies in network community detection"
}
| null | null |
[
"Computer Science",
"Physics"
] | null | true | null |
15815
| null |
Validated
| null | null |
null |
{
"abstract": " Weak attractive interactions in a spin-imbalanced Fermi gas induce a\nmulti-particle instability, binding multiple fermions together. The maximum\nbinding energy per particle is achieved when the ratio of the number of up- and\ndown-spin particles in the instability is equal to the ratio of the up- and\ndown-spin densities of states in momentum at the Fermi surfaces, to utilize the\nvariational freedom of all available momentum states. We derive this result\nusing an analytical approach, and verify it using exact diagonalization. The\nmulti-particle instability extends the Cooper pairing instability of balanced\nFermi gases to the imbalanced case, and could form the basis of a many-body\nstate, analogously to the construction of the Bardeen-Cooper-Schrieffer theory\nof superconductivity out of Cooper pairs.\n",
"title": "Multi-particle instability in a spin-imbalanced Fermi gas"
}
| null | null | null | null | true | null |
15816
| null |
Default
| null | null |
null |
{
"abstract": " The aim of this paper is to study two-weight norm inequalities for fractional\nmaximal functions and fractional Bergman operator defined on the upper-half\nspace. Namely, we characterize those pairs of weights for which these maximal\noperators satisfy strong and weak type inequalities. Our characterizations are\nin terms of Sawyer and Békollé-Bonami type conditions. We also obtain a\n$\\Phi$-bump characterization for these maximal functions, where $\\Phi$ is a\nOrlicz function. As a consequence, we obtain two-weight norm inequalities for\nfractional Bergman operators. Finally, we provide some sharp weighted\ninequalities for the fractional maximal functions.\n",
"title": "Weighted boundedness of maximal functions and fractional Bergman operators"
}
| null | null | null | null | true | null |
15817
| null |
Default
| null | null |
null |
{
"abstract": " We say that an algorithm is stable if small changes in the input result in\nsmall changes in the output. This kind of algorithm stability is particularly\nrelevant when analyzing and visualizing time-varying data. Stability in general\nplays an important role in a wide variety of areas, such as numerical analysis,\nmachine learning, and topology, but is poorly understood in the context of\n(combinatorial) algorithms. In this paper we present a framework for analyzing\nthe stability of algorithms. We focus in particular on the tradeoff between the\nstability of an algorithm and the quality of the solution it computes. Our\nframework allows for three types of stability analysis with increasing degrees\nof complexity: event stability, topological stability, and Lipschitz stability.\nWe demonstrate the use of our stability framework by applying it to kinetic\nEuclidean minimum spanning trees.\n",
"title": "A Framework for Algorithm Stability"
}
| null | null | null | null | true | null |
15818
| null |
Default
| null | null |
null |
{
"abstract": " Turing test was long considered the measure for artificial intelligence. But\nwith the advances in AI, it has proved to be insufficient measure. We can now\naim to mea- sure machine intelligence like we measure human intelligence. One\nof the widely accepted measure of intelligence is standardized math and science\ntest. In this paper, we explore the progress we have made towards the goal of\nmaking a machine smart enough to pass the standardized test. We see the\nchallenges and opportunities posed by the domain, and note that we are quite\nsome ways from actually making a system as smart as a even a middle school\nscholar.\n",
"title": "A Survey of Question Answering for Math and Science Problem"
}
| null | null | null | null | true | null |
15819
| null |
Default
| null | null |
null |
{
"abstract": " We investigate the basic thermal, mechanical and structural properties of\nbody centred cubic iron ($\\alpha$-Fe) at several temperatures and positive\nloading by means of Molecular Dynamics simulations in conjunction with the\nembedded-atom method potential and its modified counterpart one. Computations\nof its thermal properties like average energy and density of atoms, transport\nsound velocities at finite temperatures and pressures are detailed studied as\nwell. Moreover, there are suggestions to obtain hexagonal close- packed\nstructure ($\\varepsilon$-phase) of this metal under positive loading. To\ndemonstrate that, one can increase sufficiently the pressure of simulated\nsystem at several temperature's ranges; these structural changes depend only on\npotential type used. The ensuring structures are studied via the pair radial\ndistribution functions (PRDF) and precise common- neighbour analysis method\n(CNA) as well.\n",
"title": "Thermal and structural properties of iron at high pressure by molecular dynamics"
}
| null | null | null | null | true | null |
15820
| null |
Default
| null | null |
null |
{
"abstract": " We present Magnetohydrodynamic (MHD) simulations of the magnetic interactions\nbetween a solar type star and short period hot Jupiter exoplanets, using the\npublicly available MHD code PLUTO. It has been predicted that emission due to\nmagnetic interactions such as the electron cyclotron maser instability (ECMI)\nwill be observable. In our simulations, a planetary outflow, due to UV\nevaporation of the exoplanets atmosphere, results in the build-up of\ncircumplanetary material. We predict the ECMI emission and determine that the\nemission is prevented from escaping from the system. This is due to the\nevaporated material leading to a high plasma frequency in the vicinity of the\nplanet, which inhibits the ECMI process.\n",
"title": "Interacting Fields and Flows: Magnetic Hot Jupiters"
}
| null | null | null | null | true | null |
15821
| null |
Default
| null | null |
null |
{
"abstract": " Discovering statistical structure from links is a fundamental problem in the\nanalysis of social networks. Choosing a misspecified model, or equivalently, an\nincorrect inference algorithm will result in an invalid analysis or even\nfalsely uncover patterns that are in fact artifacts of the model. This work\nfocuses on unifying two of the most widely used link-formation models: the\nstochastic blockmodel (SBM) and the small world (or latent space) model (SWM).\nIntegrating techniques from kernel learning, spectral graph theory, and\nnonlinear dimensionality reduction, we develop the first statistically sound\npolynomial-time algorithm to discover latent patterns in sparse graphs for both\nmodels. When the network comes from an SBM, the algorithm outputs a block\nstructure. When it is from an SWM, the algorithm outputs estimates of each\nnode's latent position.\n",
"title": "From which world is your graph?"
}
| null | null | null | null | true | null |
15822
| null |
Default
| null | null |
null |
{
"abstract": " The signature of closed oriented manifolds is well-known to be multiplicative\nunder finite covers. This fails for Poincaré complexes as examples of C. T.\nC. Wall show. We establish the multiplicativity of the signature, and more\ngenerally, the topological L-class, for closed oriented stratified\npseudomanifolds that can be equipped with a middle-perverse Verdier self-dual\ncomplex of sheaves, determined by Lagrangian sheaves along strata of odd\ncodimension (so-called L-pseudomanifolds). This class of spaces contains all\nWitt spaces and thus all pure-dimensional complex algebraic varieties. We apply\nthis result in proving the Brasselet-Schürmann-Yokura conjecture for normal\ncomplex projective 3-folds with at most canonical singularities, trivial\ncanonical class and positive irregularity. The conjecture asserts the equality\nof topological and Hodge L-class for compact complex algebraic rational\nhomology manifolds.\n",
"title": "Topological and Hodge L-Classes of Singular Covering Spaces and Varieties with Trivial Canonical Class"
}
| null | null | null | null | true | null |
15823
| null |
Default
| null | null |
null |
{
"abstract": " We develop new closed form representations of sums of (n + {\\alpha})th\nshifted harmonic numbers and reciprocal binomial coefficients in terms of\n{\\alpha}th shifted harmonic numbers. Some interesting new consequences and\nillustrative examples are considered.\n",
"title": "Identities for the shifted harmonic numbers and binomial coefficients"
}
| null | null |
[
"Mathematics"
] | null | true | null |
15824
| null |
Validated
| null | null |
null |
{
"abstract": " We studied intermediate filaments (IFs) in the retina of the Pied flycatcher\n(Ficedula hypoleuca) in the foveolar zone. Single IFs span Müller cells (MC)\nlengthwise; cylindrical bundles of IFs (IFBs) appear inside the cone inner\nsegment (CIS) at the outer limiting membrane (OLM) level. IFBs adjoin the cone\ncytoplasmatic membrane, following lengthwise regularly spaced, forming a\nskeleton of the CIS, located above the OLM. IFBs follow along the cone outer\nsegment (COS), with single IFs separating from the IFB, touching and entering\nin-between the light-sensitive disks of the cone membrane. We propose a\nmechanism of exciton transfer from the inner retinal surface to the visual\npigments in the photoreceptor cells. This includes excitation transfer in\ndonor-acceptor systems, from the IF donors to the rhodopsin acceptors, with\ntheoretic efficiency over 80%. This explains high image contrast in fovea and\nfoveola in daylight, while the classical mechanism that describes Müller\ncells as optical lightguides operates in night vision, with loss of resolution\ntraded for sensitivity. Our theory receives strong confirmation in morphology\nand function of the cones and pigment cells. In daylight the lateral surface of\nthe photosensor disks is blocked from the (scattered or oblique) light by the\npigment cells. Thus the light energy can only get to the cone via intermediate\nfilaments that absorb photons in the Müller cell endfeet and conduct excitons\nto the cone. Thus, the disks are consumed at their lateral surfaces, moving to\nthe apex of the cone, with new disks produced below. An alternative hypothesis\nof direct light passing through the cone with its organelles and hitting the\nlowest disk contradicts morphological evidence, as thus all of the other disks\nwould have no useful function in daylight vision.\n",
"title": "Mechanism of light energy transport in the avian retina"
}
| null | null | null | null | true | null |
15825
| null |
Default
| null | null |
null |
{
"abstract": " Using the Purple Mountain Observatory Delingha (PMODLH) 13.7 m telescope, we\nreport a 96-square-degree 12CO/13CO/C18O mapping observation toward the\nGalactic region of l = [139.75, 149.75]$^\\circ$, b = [-5.25, 5.25]$^\\circ$. The\nmolecular structure of the Local Arm and Perseus Arm are presented. Combining\nHI data and part of the Outer Arm results, we obtain that the warp structure of\nboth atomic and molecular gas is obvious, while the flare structure only exists\nin atomic gas in this observing region. In addition, five filamentary giant\nmolecular clouds on the Perseus Arm are identified. Among them, four are newly\nidentified. Their relations with the Milky Way large-scale structure are\ndiscussed.\n",
"title": "The Molecular Structures of Local Arm and Perseus Arm in the Galactic Region of l=[139.75,149.75]$^\\circ$, b=[-5.25,5.25]$^\\circ$"
}
| null | null | null | null | true | null |
15826
| null |
Default
| null | null |
null |
{
"abstract": " When we test a theory using data, it is common to focus on correctness: do\nthe predictions of the theory match what we see in the data? But we also care\nabout completeness: how much of the predictable variation in the data is\ncaptured by the theory? This question is difficult to answer, because in\ngeneral we do not know how much \"predictable variation\" there is in the\nproblem. In this paper, we consider approaches motivated by machine learning\nalgorithms as a means of constructing a benchmark for the best attainable level\nof prediction.\nWe illustrate our methods on the task of predicting human-generated random\nsequences. Relative to an atheoretical machine learning algorithm benchmark, we\nfind that existing behavioral models explain roughly 15 percent of the\npredictable variation in this problem. This fraction is robust across several\nvariations on the problem. We also consider a version of this approach for\nanalyzing field data from domains in which human perception and generation of\nrandomness has been used as a conceptual framework; these include sequential\ndecision-making and repeated zero-sum games. In these domains, our framework\nfor testing the completeness of theories provides a way of assessing their\neffectiveness over different contexts; we find that despite some differences,\nthe existing theories are fairly stable across our field domains in their\nperformance relative to the benchmark. Overall, our results indicate that (i)\nthere is a significant amount of structure in this problem that existing models\nhave yet to capture and (ii) there are rich domains in which machine learning\nmay provide a viable approach to testing completeness.\n",
"title": "The Theory is Predictive, but is it Complete? An Application to Human Perception of Randomness"
}
| null | null | null | null | true | null |
15827
| null |
Default
| null | null |
null |
{
"abstract": " This paper presents a clustering approach that allows for rigorous\nstatistical error control similar to a statistical test. We develop estimators\nfor both the unknown number of clusters and the clusters themselves. The\nestimators depend on a tuning parameter alpha which is similar to the\nsignificance level of a statistical hypothesis test. By choosing alpha, one can\ncontrol the probability of overestimating the true number of clusters, while\nthe probability of underestimation is asymptotically negligible. In addition,\nthe probability that the estimated clusters differ from the true ones is\ncontrolled. In the theoretical part of the paper, formal versions of these\nstatements on statistical error control are derived in a standard model setting\nwith convex clusters. A simulation study and two applications to temperature\nand gene expression microarray data complement the theoretical analysis.\n",
"title": "Clustering with Statistical Error Control"
}
| null | null | null | null | true | null |
15828
| null |
Default
| null | null |
null |
{
"abstract": " Double-stranded DNA may contain mismatched base pairs beyond the Watson-Crick\npairs guanine-cytosine and adenine-thymine. Such mismatches bear adverse\nconsequences for human health. We utilize molecular dynamics and metadynamics\ncomputer simulations to study the equilibrium structure and dynamics for both\nmatched and mismatched base pairs. We discover significant differences between\nmatched and mismatched pairs in structure, hydrogen bonding, and base flip work\nprofiles. Mismatched pairs shift further in the plane normal to the DNA strand\nand are more likely to exhibit non-canonical structures, including the e-motif.\nWe discuss potential implications on mismatch repair enzymes' detection of DNA\nmismatches.\n",
"title": "DNA Base Pair Mismatches Induce Structural Changes and Alter the Free Energy Landscape of Base Flip"
}
| null | null | null | null | true | null |
15829
| null |
Default
| null | null |
null |
{
"abstract": " In the setting of a weighted combinatorial finite or infinite countable graph\n$G$ we introduce functional Paley-Wiener spaces $PW_{\\omega}(L),\\>\\omega>0,$\ndefined in terms of the spectral resolution of the combinatorial Laplace\noperator $L$ in the space $L_{2}(G)$. It is shown that functions in certain\n$PW_{\\omega}(L),\\>\\omega>0,$ are uniquely defined by their averages over some\nfamilies of \"small\" subgraphs which form a cover of $G$. Reconstruction methods\nfor reconstruction of an $f\\in PW_{\\omega}(L)$ from appropriate set of its\naverages are introduced. One method is using language of Hilbert frames.\nAnother one is using average variational interpolating splines which are\nconstructed in the setting of combinatorial graphs.\n",
"title": "Average sampling and average splines on combinatorial graphs"
}
| null | null | null | null | true | null |
15830
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we characterize the surjective linear variation norm isometries\non JB-algebras. Variation norm isometries are precisely the maps that preserve\nthe maximal deviation, the quantum analogue of the standard deviation, which\nplays an important role in quantum statistics. Consequently, we characterize\nthe Hilbert's metric isometries on cones in JB-algebras.\n",
"title": "Hilbert isometries and maximal deviation preserving maps on JB-algebras"
}
| null | null | null | null | true | null |
15831
| null |
Default
| null | null |
null |
{
"abstract": " Existing visual reasoning datasets such as Visual Question Answering (VQA),\noften suffer from biases conditioned on the question, image or answer\ndistributions. The recently proposed CLEVR dataset addresses these limitations\nand requires fine-grained reasoning but the dataset is synthetic and consists\nof similar objects and sentence structures across the dataset.\nIn this paper, we introduce a new inference task, Visual Entailment (VE) -\nconsisting of image-sentence pairs whereby a premise is defined by an image,\nrather than a natural language sentence as in traditional Textual Entailment\ntasks. The goal of a trained VE model is to predict whether the image\nsemantically entails the text. To realize this task, we build a dataset SNLI-VE\nbased on the Stanford Natural Language Inference corpus and Flickr30k dataset.\nWe evaluate various existing VQA baselines and build a model called Explainable\nVisual Entailment (EVE) system to address the VE task. EVE achieves up to 71%\naccuracy and outperforms several other state-of-the-art VQA based models.\nFinally, we demonstrate the explainability of EVE through cross-modal attention\nvisualizations. The SNLI-VE dataset is publicly available at\nthis https URL necla-ml/SNLI-VE.\n",
"title": "Visual Entailment: A Novel Task for Fine-Grained Image Understanding"
}
| null | null | null | null | true | null |
15832
| null |
Default
| null | null |
null |
{
"abstract": " In this article we characterize all possible cases that may occur in the\nrelations between the sets of $p$ for which weak type $(p,p)$ and strong type\n$(p,p)$ inequalities for the Hardy--Littlewood maximal operators, both centered\nand non-centered, hold in the context of general metric measure spaces.\n",
"title": "On relations between weak and strong type inequalities for maximal operators on non-doubling metric measure spaces"
}
| null | null | null | null | true | null |
15833
| null |
Default
| null | null |
null |
{
"abstract": " Base station cooperation in heterogeneous wireless networks (HetNets) is a\npromising approach to improve the network performance, but it also imposes a\nsignificant challenge on backhaul. On the other hand, caching at small base\nstations (SBSs) is considered as an efficient way to reduce backhaul load in\nHetNets. In this paper, we jointly consider SBS caching and cooperation in a\ndownlink largescale HetNet. We propose two SBS cooperative transmission schemes\nunder random caching at SBSs with the caching distribution as a design\nparameter. Using tools from stochastic geometry and adopting appropriate\nintegral transformations, we first derive a tractable expression for the\nsuccessful transmission probability under each scheme. Then, under each scheme,\nwe consider the successful transmission probability maximization by optimizing\nthe caching distribution, which is a challenging optimization problem with a\nnon-convex objective function. By exploring optimality properties and using\noptimization techniques, under each scheme, we obtain a local optimal solution\nin the general case and global optimal solutions in some special cases.\nCompared with some existing caching designs in the literature, e.g., the most\npopular caching, the i.i.d. caching and the uniform caching, the optimal random\ncaching under each scheme achieves better successful transmission probability\nperformance. The analysis and optimization results provide valuable design\ninsights for practical HetNets.\n",
"title": "Random Caching Based Cooperative Transmission in Heterogeneous Wireless Networks"
}
| null | null | null | null | true | null |
15834
| null |
Default
| null | null |
null |
{
"abstract": " SrRuO$_3$ (SRO) films are known to exhibit insulating behavior as their\nthickness approaches four unit cells. We employ electron energy$-$loss (EEL)\nspectroscopy to probe the spatially resolved electronic structures of both\ninsulating and conducting SRO to correlate them with the metal$-$insulator\ntransition (MIT). Importantly, the central layer of the ultrathin insulating\nfilm exhibits distinct features from the metallic SRO. Moreover, EEL near edge\nspectra adjacent to the SrTiO$_3$ (STO) substrate or to the capping layer are\nremarkably similar to those of STO. The site$-$projected density of states\nbased on density functional theory (DFT) partially reflects the characteristics\nof the spectra of these layers. These results may provide important information\non the possible influence of STO on the electronic states of ultrathin SRO.\n",
"title": "Electronic characteristics of ultrathin SrRuO$_3$ films and their relationship with the metal$-$insulator transition"
}
| null | null | null | null | true | null |
15835
| null |
Default
| null | null |
null |
{
"abstract": " We prove local well-posedness in regular spaces and a Beale-Kato-Majda\nblow-up criterion for a recently derived stochastic model of the 3D Euler fluid\nequation for incompressible flow. This model describes incompressible fluid\nmotions whose Lagrangian particle paths follow a stochastic process with\ncylindrical noise and also satisfy Newton's 2nd Law in every Lagrangian domain.\n",
"title": "Solution properties of a 3D stochastic Euler fluid equation"
}
| null | null | null | null | true | null |
15836
| null |
Default
| null | null |
null |
{
"abstract": " We study a question which has natural interpretations in both quantum\nmechanics and in geometry. Let $V_1,..., V_n$ be complex vector spaces of\ndimension $d_1,...,d_n$ and let $G= SL_{d_1} \\times \\dots \\times SL_{d_n}$.\nGeometrically, we ask given $(d_1,...,d_n)$, when is the geometric invariant\ntheory quotient $\\mathbb{P}(V_1 \\otimes \\dots \\otimes V_n)// G$ non-empty? This\nis equivalent to the quantum mechanical question of whether the multipart\nquantum system with Hilbert space $V_1\\otimes \\dots \\otimes V_n$ has a locally\nmaximally entangled state, i.e. a state such that the density matrix for each\nelementary subsystem is a multiple of the identity. We show that the answer to\nthis question is yes if and only if $R(d_1,...,d_n)\\geqslant 0$ where \\[\nR(d_1,...,d_n) = \\prod_i d_i +\\sum_{k=1}^n (-1)^k \\sum_{1\\leq i_1<\\dotsb\n<i_k\\leq n} (\\gcd(d_{i_1},\\dotsc ,d_{i_k}) )^{2}. \\] We also provide a simple\nrecursive algorithm which determines the answer to the question, and we compute\nthe dimension of the resulting quotient in the non-empty cases.\n",
"title": "Existence of locally maximally entangled quantum states via geometric invariant theory"
}
| null | null | null | null | true | null |
15837
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we propose a dynamical systems perspective of the\nExpectation-Maximization (EM) algorithm. More precisely, we can analyze the EM\nalgorithm as a nonlinear state-space dynamical system. The EM algorithm is\nwidely adopted for data clustering and density estimation in statistics,\ncontrol systems, and machine learning. This algorithm belongs to a large class\nof iterative algorithms known as proximal point methods. In particular, we\nre-interpret limit points of the EM algorithm and other local maximizers of the\nlikelihood function it seeks to optimize as equilibria in its dynamical system\nrepresentation. Furthermore, we propose to assess its convergence as asymptotic\nstability in the sense of Lyapunov. As a consequence, we proceed by leveraging\nrecent results regarding discrete-time Lyapunov stability theory in order to\nestablish asymptotic stability (and thus, convergence) in the dynamical system\nrepresentation of the EM algorithm.\n",
"title": "Convergence of the Expectation-Maximization Algorithm Through Discrete-Time Lyapunov Stability Theory"
}
| null | null | null | null | true | null |
15838
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we consider a general matrix factorization model which covers a\nlarge class of existing models with many applications in areas such as machine\nlearning and imaging sciences. To solve this possibly nonconvex, nonsmooth and\nnon-Lipschitz problem, we develop a non-monotone alternating updating method\nbased on a potential function. Our method essentially updates two blocks of\nvariables in turn by inexactly minimizing this potential function, and updates\nanother auxiliary block of variables using an explicit formula. The special\nstructure of our potential function allows us to take advantage of efficient\ncomputational strategies for non-negative matrix factorization to perform the\nalternating minimization over the two blocks of variables. A suitable line\nsearch criterion is also incorporated to improve the numerical performance.\nUnder some mild conditions, we show that the line search criterion is well\ndefined, and establish that the sequence generated is bounded and any cluster\npoint of the sequence is a stationary point. Finally, we conduct some numerical\nexperiments using real datasets to compare our method with some existing\nefficient methods for non-negative matrix factorization and matrix completion.\nThe numerical results show that our method can outperform these methods for\nthese specific applications.\n",
"title": "A Non-monotone Alternating Updating Method for A Class of Matrix Factorization Problems"
}
| null | null | null | null | true | null |
15839
| null |
Default
| null | null |
null |
{
"abstract": " Error bound conditions (EBC) are properties that characterize the growth of\nan objective function when a point is moved away from the optimal set. They\nhave recently received increasing attention in the field of optimization for\ndeveloping optimization algorithms with fast convergence. However, the studies\nof EBC in statistical learning are hitherto still limited. The main\ncontributions of this paper are two-fold. First, we develop fast and\nintermediate rates of empirical risk minimization (ERM) under EBC for risk\nminimization with Lipschitz continuous, and smooth convex random functions.\nSecond, we establish fast and intermediate rates of an efficient stochastic\napproximation (SA) algorithm for risk minimization with Lipschitz continuous\nrandom functions, which requires only one pass of $n$ samples and adapts to\nEBC. For both approaches, the convergence rates span a full spectrum between\n$\\widetilde O(1/\\sqrt{n})$ and $\\widetilde O(1/n)$ depending on the power\nconstant in EBC, and could be even faster than $O(1/n)$ in special cases for\nERM. Moreover, these convergence rates are automatically adaptive without using\nany knowledge of EBC. Overall, this work not only strengthens the understanding\nof ERM for statistical learning but also brings new fast stochastic algorithms\nfor solving a broad range of statistical learning problems.\n",
"title": "Fast Rates of ERM and Stochastic Approximation: Adaptive to Error Bound Conditions"
}
| null | null |
[
"Statistics"
] | null | true | null |
15840
| null |
Validated
| null | null |
null |
{
"abstract": " The runtime performance of modern SAT solvers on random $k$-CNF formulas is\ndeeply connected with the 'phase-transition' phenomenon seen empirically in the\nsatisfiability of random $k$-CNF formulas. Recent universal hashing-based\napproaches to sampling and counting crucially depend on the runtime performance\nof SAT solvers on formulas expressed as the conjunction of both $k$-CNF and XOR\nconstraints (known as $k$-CNF-XOR formulas), but the behavior of random\n$k$-CNF-XOR formulas is unexplored in prior work. In this paper, we present the\nfirst study of the satisfiability of random $k$-CNF-XOR formulas. We show\nempirical evidence of a surprising phase-transition that follows a linear\ntrade-off between $k$-CNF and XOR constraints. Furthermore, we prove that a\nphase-transition for $k$-CNF-XOR formulas exists for $k = 2$ and (when the\nnumber of $k$-CNF constraints is small) for $k > 2$.\n",
"title": "Combining the $k$-CNF and XOR Phase-Transitions"
}
| null | null |
[
"Computer Science"
] | null | true | null |
15841
| null |
Validated
| null | null |
null |
{
"abstract": " Little is known about how different types of advertising affect brand\nattitudes. We investigate the relationships between three brand attitude\nvariables (perceived quality, perceived value and recent satisfaction) and\nthree types of advertising (national traditional, local traditional and\ndigital). The data represent ten million brand attitude surveys and $264\nbillion spent on ads by 575 regular advertisers over a five-year period,\napproximately 37% of all ad spend measured between 2008 and 2012. Inclusion of\nbrand/quarter fixed effects and industry/week fixed effects brings parameter\nestimates closer to expectations without major reductions in estimation\nprecision. The findings indicate that (i) national traditional ads increase\nperceived quality, perceived value, and recent satisfaction; (ii) local\ntraditional ads increase perceived quality and perceived value; (iii) digital\nads increase perceived value; and (iv) competitor ad effects are generally\nnegative.\n",
"title": "Advertising and Brand Attitudes: Evidence from 575 Brands over Five Years"
}
| null | null | null | null | true | null |
15842
| null |
Default
| null | null |
null |
{
"abstract": " Te NMR studies were carried out for the bismuth telluride topological\ninsulator in a wide range from room temperature down to 12.5 K. The\nmeasurements were made on a Bruker Avance 400 pulse spectrometer. The NMR\nspectra were collected for the mortar and pestle powder sample and for single\ncrystalline stacks with orientations c parallel and perpendicular to field. The\nactivation energy responsible for thermal activation. The spectra for the stack\nwith c parallel to field showed some particular behavior below 91 K.\n",
"title": "NMR studies of the topological insulator Bi2Te3"
}
| null | null | null | null | true | null |
15843
| null |
Default
| null | null |
null |
{
"abstract": " We discuss the Ricci-flat `model metrics' on $\\mathbb{C}^2$ with cone\nsingularities along the conic $\\{zw=1\\}$ constructed by Donaldson using the\nGibbons-Hawking ansatz over wedges in $\\mathbb{R}^3$. In particular we describe\ntheir asymptotic behavior at infinity and compute their energies.\n",
"title": "The Gibbons-Hawking ansatz over a wedge"
}
| null | null | null | null | true | null |
15844
| null |
Default
| null | null |
null |
{
"abstract": " Motivated by station-keeping applications in various unmanned settings, this\npaper introduces a steering control law for a pair of agents operating in the\nvicinity of a fixed beacon in a three-dimensional environment. This feedback\nlaw is a modification of the previously studied three-dimensional constant\nbearing (CB) pursuit law, in the sense that it incorporates an additional term\nto allocate attention to the beacon. We investigate the behavior of the\nclosed-loop dynamics for a two agent mutual pursuit system in which each agent\nemploys the beacon-referenced CB pursuit law with regards to the other agent\nand a stationary beacon. Under certain assumptions on the associated control\nparameters, we demonstrate that this problem admits circling equilibria wherein\nthe agents move on circular orbits with a common radius, in planes\nperpendicular to a common axis passing through the beacon. As the common radius\nand distances from the beacon are determined by choice of parameters in the\nfeedback law, this approach provides a means to engineer desired formations in\na three-dimensional setting.\n",
"title": "Beacon-referenced Mutual Pursuit in Three Dimensions"
}
| null | null | null | null | true | null |
15845
| null |
Default
| null | null |
null |
{
"abstract": " We carried out molecular dynamics simulations (MD) using realistic empirical\npotentials for the vapor deposition (VD) of CuZrAl glasses. VD glasses have\nhigher densities and lower potential and inherent structure energies than the\nmelt-quenched glasses for the same alloys. The optimal substrate temperature\nfor the deposition process is 0.625$\\times T_\\mathrm{g}$. In VD metallic\nglasses (MGs), the total number of icosahedral like clusters is higher than in\nthe melt-quenched MGs. Surprisingly, the VD glasses have a lower degree of\nchemical mixing than the melt-quenched glasses. The reason for it is that the\nmelt-quenched MGs can be viewed as frozen liquids, which means that their\nchemical order is the same as in the liquid state. In contrast, during the\nformation of the VD MGs, the absence of the liquid state results in the\ncreation of a different chemical order with more Zr-Zr homonuclear bonds\ncompared with the melt-quenched MGs. In order to obtain MGs from melt-quench\ntechnique with similarly low energies as in the VD process, the cooling rate\nduring quenching would have to be many orders of magnitude lower than currently\naccessible to MD simulations. The method proposed in this manuscript is a more\nefficient way to create MGs by using MD simulations.\n",
"title": "Increased stability of CuZrAl metallic glasses prepared by physical vapor deposition"
}
| null | null | null | null | true | null |
15846
| null |
Default
| null | null |
null |
{
"abstract": " We show that a subcategory of the $m$-cluster category of type $\\tilde{D_n}$\nis isomorphic to a category consisting of arcs in an $(n-2)m$-gon with two\ncentral $(m-1)$-gons inside of it. We show that the mutation of colored quivers\nand $m$-cluster-tilting objects is compatible with the flip of an\n$(m+2)$-angulation. In the final part of this paper, we detail an example of a\nquiver of type $\\tilde{D_7}$.\n",
"title": "A geometric realization of the $m$-cluster categories of type $\\tilde{D_n}$"
}
| null | null |
[
"Mathematics"
] | null | true | null |
15847
| null |
Validated
| null | null |
null |
{
"abstract": " The SoLid collaboration have developed an intelligent readout system to\nreduce their 3200 silicon photomultiplier detector's data rate by a factor of\n10000 whilst maintaining high efficiency for storing data from anti-neutrino\ninteractions. The system employs an FPGA-level waveform characterisation to\ntrigger on neutron signals. Following a trigger, data from a space time region\nof interest around the neutron will be read out using the IPbus protocol. In\nthese proceedings the design of the readout system is explained and results\nshowing the performance of a prototype version of the system are presented.\n",
"title": "The SoLid anti-neutrino detector's readout system"
}
| null | null | null | null | true | null |
15848
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we propose a novel scheme for data hiding in the fingerprint\nminutiae template, which is the most popular in fingerprint recognition\nsystems. Various strategies are proposed in data embedding in order to maintain\nthe accuracy of fingerprint recognition as well as the undetectability of data\nhiding. In bits replacement based data embedding, we replace the last few bits\nof each element of the original minutiae template with the data to be hidden.\nThis strategy can be further improved using an optimized bits replacement based\ndata embedding, which is able to minimize the impact of data hiding on the\nperformance of fingerprint recognition. The third strategy is an order\npreserving mechanism which is proposed to reduce the detectability of data\nhiding. By using such a mechanism, it would be difficult for the attacker to\ndifferentiate the minutiae template with hidden data from the original minutiae\ntemplates. The experimental results show that the proposed data hiding scheme\nachieves sufficient capacity for hiding common personal data, where the\naccuracy of fingerprint recognition is acceptable after the data hiding.\n",
"title": "Data hiding in Fingerprint Minutiae Template for Privacy Protection"
}
| null | null | null | null | true | null |
15849
| null |
Default
| null | null |
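The bits-replacement embedding described in the abstract above (id 15849) can be sketched in a few lines: the payload bits overwrite the least significant bits of the minutiae coordinates. The template layout, field widths and payload below are hypothetical placeholders, not the paper's actual template format.

```python
# Hedged sketch: hide payload bits in the low-order bits of minutiae coordinates.
# Field widths, payload and template layout are illustrative assumptions.

def embed_bits(minutiae, payload_bits, n_bits=2):
    """Replace the n_bits least significant bits of each x and y coordinate with payload bits."""
    mask = ~((1 << n_bits) - 1)
    stego, i = [], 0
    for x, y, theta in minutiae:
        bx = payload_bits[i:i + n_bits].ljust(n_bits, "0"); i += n_bits   # missing bits are zero-padded
        by = payload_bits[i:i + n_bits].ljust(n_bits, "0"); i += n_bits
        stego.append(((x & mask) | int(bx, 2), (y & mask) | int(by, 2), theta))
    return stego

template = [(120, 87, 45), (203, 150, 210), (88, 190, 30)]   # hypothetical (x, y, angle) minutiae
print(embed_bits(template, "10110101"))
```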
null |
{
"abstract": " We consider asymptotic normality of linear rank statistics under various\nrandomization rules met in clinical trials and designed for patients'\nallocation into treatment and placebo arms. Exposition relies on some general\nlimit theorem due to McLeish (1974) which appears to be well suited for the\nproblem considered and may be employed for other similar rules undis- cussed in\nthe paper. Examples of applications include well known results as well as\nseveral new ones.\n",
"title": "On asymptotic normality of certain linear rank statistics"
}
| null | null | null | null | true | null |
15850
| null |
Default
| null | null |
null |
{
"abstract": " This document contains the notes of a lecture I gave at the \"Journées\nNationales du Calcul Formel\" (JNCF) on January 2017. The aim of the lecture was\nto discuss low-level algorithmics for p-adic numbers. It is divided into two\nmain parts: first, we present various implementations of p-adic numbers and\ncompare them and second, we introduce a general framework for studying\nprecision issues and apply it in several concrete situations.\n",
"title": "Computations with p-adic numbers"
}
| null | null | null | null | true | null |
15851
| null |
Default
| null | null |
null |
{
"abstract": " We demonstrate the presence of chaos in stochastic simulations that are\nwidely used to study biodiversity in nature. The investigation deals with a set\nof three distinct species that evolve according to the standard rules of\nmobility, reproduction and predation, with predation following the cyclic rules\nof the popular rock, paper and scissors game. The study uncovers the\npossibility to distinguish between time evolutions that start from slightly\ndifferent initial states, guided by the Hamming distance which heuristically\nunveils the chaotic behavior. The finding opens up a quantitative approach that\nrelates the correlation length to the average density of maxima of a typical\nspecies, and an ensemble of stochastic simulations is implemented to support\nthe procedure. The main result of the work shows how a single and simple\nexperimental realization that counts the density of maxima associated with the\nchaotic evolution of the species serves to infer its correlation length. We use\nthe result to investigate others distinct complex systems, one dealing with a\nset of differential equations that can be used to model a diversity of natural\nand artificial chaotic systems, and another one, focusing on the ocean water\nlevel.\n",
"title": "A novel procedure for the identification of chaos in complex biological systems"
}
| null | null | null | null | true | null |
15852
| null |
Default
| null | null |
null |
{
"abstract": " Question-answering (QA) on video contents is a significant challenge for\nachieving human-level intelligence as it involves both vision and language in\nreal-world settings. Here we demonstrate the possibility of an AI agent\nperforming video story QA by learning from a large amount of cartoon videos. We\ndevelop a video-story learning model, i.e. Deep Embedded Memory Networks\n(DEMN), to reconstruct stories from a joint scene-dialogue video stream using a\nlatent embedding space of observed data. The video stories are stored in a\nlong-term memory component. For a given question, an LSTM-based attention model\nuses the long-term memory to recall the best question-story-answer triplet by\nfocusing on specific words containing key information. We trained the DEMN on a\nnovel QA dataset of children's cartoon video series, Pororo. The dataset\ncontains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained\nsentences for scene description, and 8,913 story-related QA pairs. Our\nexperimental results show that the DEMN outperforms other QA models. This is\nmainly due to 1) the reconstruction of video stories in a scene-dialogue\ncombined form that utilize the latent embedding and 2) attention. DEMN also\nachieved state-of-the-art results on the MovieQA benchmark.\n",
"title": "DeepStory: Video Story QA by Deep Embedded Memory Networks"
}
| null | null | null | null | true | null |
15853
| null |
Default
| null | null |
null |
{
"abstract": " This work proposes a study of quality of service (QoS) in cognitive radio\nnetworks. This study is based on a stochastic optimization method called\nshuffled frog leaping algorithm (SFLA). The interest of the SFLA algorithm is\nto guarantee a better solution in a multi-carrier context in order to satisfy\nthe requirements of the secondary user (SU).\n",
"title": "Optimisation de la QoS dans un r{é}seau de radio cognitive en utilisant la m{é}taheuristique SFLA (Shuffled Frog Leaping Algorithm)"
}
| null | null | null | null | true | null |
15854
| null |
Default
| null | null |
null |
{
"abstract": " We introduce the notion of $K$-ideals associated with Kuratowski partitions\nand we prove that each $\\kappa$-complete ideal on a measurable cardinal\n$\\kappa$ can be represented as a $K$-ideal. Moreover, we show some results\nconcerning precipitous and Fréchet ideals.\n",
"title": "Some remarks on Kuratowski partitions"
}
| null | null | null | null | true | null |
15855
| null |
Default
| null | null |
null |
{
"abstract": " Given functional data samples from a survival process with time dependent\ncovariates, we propose a practical boosting procedure for estimating its hazard\nfunction nonparametrically. The estimator is consistent if the model is\ncorrectly specified; alternatively an oracle inequality can be demonstrated for\ntree-based models. To avoid overfitting, boosting employs several\nregularization devices. One of them is step-size restriction, but the rationale\nfor this is somewhat mysterious from the viewpoint of consistency. Our\nconvergence bounds bring some clarity to this issue by revealing that step-size\nrestriction is a mechanism for preventing the curvature of the risk from\nderailing convergence. We use our boosting procedure to shed new light on a\nquestion from the operations literature concerning the effect of workload on\nservice rates in an emergency department.\n",
"title": "Boosted nonparametric hazards with time-dependent covariates"
}
| null | null | null | null | true | null |
15856
| null |
Default
| null | null |
null |
{
"abstract": " We determine three invariants: Arnold's $J^+$-invariant as well as\n$\\mathcal{J}_1$ and $\\mathcal{J}_2$ invariants, which were introduced by\nCieliebak-Frauenfelder-van Koert, of periodic orbits of the second kind near\nthe heavier primary in the restricted three-body problem, provided that the\nmass ratio is sufficiently small.\n",
"title": "$J^+$-like invariants of periodic orbits of the second kind in the restricted three body problem"
}
| null | null | null | null | true | null |
15857
| null |
Default
| null | null |
null |
{
"abstract": " Being motivated by the problem of deducing $L^p$-bounds on the second\nfundamental form of an isometric immersion from $L^p$-bounds on its mean\ncurvature vector field, we prove a (nonlinear) Calderón-Zygmund inequality\nfor maps between complete (possibly noncompact) Riemannian manifolds.\n",
"title": "Nonlinear Calderón-Zygmund inequalities for maps"
}
| null | null |
[
"Mathematics"
] | null | true | null |
15858
| null |
Validated
| null | null |
null |
{
"abstract": " Biomedical sciences are increasingly recognising the relevance of gene\nco-expression-networks for analysing complex-systems, phenotypes or diseases.\nWhen the goal is investigating complex-phenotypes under varying conditions, it\ncomes naturally to employ comparative network methods. While approaches for\ncomparing two networks exist, this is not the case for multiple networks. Here\nwe present a method for the systematic comparison of an unlimited number of\nnetworks: Co-expression Differential Network Analysis (CoDiNA) for detecting\nlinks and nodes that are common, specific or different to the networks.\nApplying CoDiNA to a neurogenesis study identified genes for neuron\ndifferentiation. Experimentally overexpressing one candidate resulted in\nsignificant disturbance in the underlying neurogenesis' gene regulatory\nnetwork. We compared data from adults and children with active tuberculosis to\ntest for signatures of HIV. We also identified common and distinct network\nfeatures for particular cancer types with CoDiNA. These studies show that\nCoDiNA successfully detects genes associated with the diseases.\n",
"title": "Comparing multiple networks using the Co-expression Differential Network Analysis (CoDiNA)"
}
| null | null | null | null | true | null |
15859
| null |
Default
| null | null |
null |
{
"abstract": " Given a classical channel, a stochastic map from inputs to outputs, can we\nreplace the input with a simple intermediate variable that still yields the\ncorrect conditional output distribution? We examine two cases: first, when the\nintermediate variable is classical; second, when the intermediate variable is\nquantum. We show that the quantum variable's size is generically smaller than\nthe classical, according to two different measures---cardinality and entropy.\nWe demonstrate optimality conditions for a special case. We end with several\nrelated results: a proposal for extending the special case, a demonstration of\nthe impact of quantum phases, and a case study concerning pure versus mixed\nstates.\n",
"title": "Classical and Quantum Factors of Channels"
}
| null | null | null | null | true | null |
15860
| null |
Default
| null | null |
null |
{
"abstract": " Model compression is significant for the wide adoption of Recurrent Neural\nNetworks (RNNs) in both user devices possessing limited resources and business\nclusters requiring quick responses to large-scale service requests. This work\naims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the\nsizes of basic structures within LSTM units, including input updates, gates,\nhidden states, cell states and outputs. Independently reducing the sizes of\nbasic structures can result in inconsistent dimensions among them, and\nconsequently, end up with invalid LSTM units. To overcome the problem, we\npropose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS\nwill simultaneously decrease the sizes of all basic structures by one and\nthereby always maintain the dimension consistency. By learning ISS within LSTM\nunits, the obtained LSTMs remain regular while having much smaller basic\nstructures. Based on group Lasso regularization, our method achieves 10.59x\nspeedup without losing any perplexity of a language modeling of Penn TreeBank\ndataset. It is also successfully evaluated through a compact model with only\n2.69M weights for machine Question Answering of SQuAD dataset. Our approach is\nsuccessfully extended to non- LSTM RNNs, like Recurrent Highway Networks\n(RHNs). Our source code is publicly available at\nthis https URL\n",
"title": "Learning Intrinsic Sparse Structures within Long Short-Term Memory"
}
| null | null | null | null | true | null |
15861
| null |
Default
| null | null |
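A minimal numpy sketch of the group regularization idea in the ISS abstract above (id 15861): all weights attached to one hidden unit across the four LSTM gate matrices are collected into a single group, and a group-Lasso penalty sums the l2 norms of these groups so whole hidden dimensions can be driven to zero together. The shapes and the exact grouping here are illustrative simplifications, not the paper's precise ISS definition.

```python
import numpy as np

hidden, inputs = 8, 5
rng = np.random.default_rng(0)
# One weight matrix per LSTM gate, each mapping [input; previous hidden state] to the gate.
W = {g: rng.normal(size=(hidden, inputs + hidden)) for g in ("i", "f", "o", "c")}

def iss_group_lasso(W, lam=1e-2):
    """Sum over hidden units of the l2 norm of all weights that unit touches in every gate."""
    penalty = 0.0
    for h in range(hidden):
        group = np.concatenate(
            [W[g][h, :] for g in W] +              # row h of each gate: weights producing unit h
            [W[g][:, inputs + h] for g in W]       # recurrent column h: weights consuming unit h
        )
        penalty += np.linalg.norm(group)
    return lam * penalty

print(iss_group_lasso(W))
```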
null |
{
"abstract": " Thunderstorms produce strong electric fields over regions on the order of\nkilometer. The corresponding electric potential differences are on the order of\n100 MV. Secondary cosmic rays reaching these regions may be significantly\naccelerated and even amplified in relativistic runaway avalanche processes.\nThese phenomena lead to enhancements of the high-energy background radiation\nobserved by detectors on the ground and on board aircraft. Moreover, intense\nsubmillisecond gamma-ray bursts named terrestrial gamma-ray flashes (TGFs)\nproduced in thunderstorms are detected from low Earth orbit satellites. When\npassing through the atmosphere, these gamma-rays are recognized to produce\nsecondary relativistic electrons and positrons rapidly trapped in the\ngeomagnetic field and injected into the near-Earth space environment. In the\npresent work, we attempt to give an overview of the current state of research\non high-energy phenomena associated with thunderstorms.\n",
"title": "Electron Acceleration Mechanisms in Thunderstorms"
}
| null | null | null | null | true | null |
15862
| null |
Default
| null | null |
null |
{
"abstract": " Cox proportional hazards model with measurement error is investigated. In\nKukush et al. (2011) [Journal of Statistical Research 45, 77-94] and Chimisov\nand Kukush (2014) [Modern Stochastics: Theory and Applications 1, 13-32]\nasymptotic properties of simultaneous estimator $\\lambda_n(\\cdot)$, $\\beta_n$\nwere studied for baseline hazard rate $\\lambda(\\cdot)$ and regression parameter\n$\\beta$, at that the parameter set $\\Theta=\\Theta_{\\lambda}\\times\n\\Theta_{\\beta}$ was assumed bounded. In the present paper, the set\n$\\Theta_{\\lambda}$ is unbounded from above and not separated away from $0$. We\nconstruct the estimator in two steps: first we derive a strongly consistent\nestimator and then modify it to provide its asymptotic normality.\n",
"title": "Consistent estimation in Cox proportional hazards model with measurement errors and unbounded parameter set"
}
| null | null | null | null | true | null |
15863
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we study the efficiency of egoistic and altruistic strategies\nwithin the model of social dynamics determined by voting in a stochastic\nenvironment (the ViSE model) using two criteria: maximizing the average capital\nincrement and minimizing the number of bankrupt participants. The proposals are\ngenerated stochastically; three families of the corresponding distributions are\nconsidered: normal distributions, symmetrized Pareto distributions, and\nStudent's $t$-distributions. It is found that the \"pit of losses\" paradox\ndescribed earlier does not occur in the case of heavy-tailed distributions. The\negoistic strategy better protects agents from extinction in aggressive\nenvironments than the altruistic ones, however, the efficiency of altruism is\nhigher in more favorable environments. A comparison of altruistic strategies\nwith each other shows that in aggressive environments, everyone should be\nsupported to minimize extinction, while under more favorable conditions, it is\nmore efficient to support the weakest participants. Studying the dynamics of\nparticipants' capitals we identify situations where the two considered criteria\ncontradict each other. At the next stage of the study, combined voting\nstrategies and societies involving participants with selfish and altruistic\nstrategies will be explored.\n",
"title": "Comparative Efficiency of Altruism and Egoism as Voting Strategies in Stochastic Environment"
}
| null | null | null | null | true | null |
15864
| null |
Default
| null | null |
null |
{
"abstract": " Learning to drive faithfully in highly stochastic urban settings remains an\nopen problem. To that end, we propose a Multi-task Learning from Demonstration\n(MT-LfD) framework which uses supervised auxiliary task prediction to guide the\nmain task of predicting the driving commands. Our framework involves an\nend-to-end trainable network for imitating the expert demonstrator's driving\ncommands. The network intermediately predicts visual affordances and action\nprimitives through direct supervision which provide the aforementioned\nauxiliary supervised guidance. We demonstrate that such joint learning and\nsupervised guidance facilitates hierarchical task decomposition, assisting the\nagent to learn faster, achieve better driving performance and increases\ntransparency of the otherwise black-box end-to-end network. We run our\nexperiments to validate the MT-LfD framework in CARLA, an open-source urban\ndriving simulator. We introduce multiple non-player agents in CARLA and induce\ntemporal noise in them for realistic stochasticity.\n",
"title": "Learning End-to-end Autonomous Driving using Guided Auxiliary Supervision"
}
| null | null | null | null | true | null |
15865
| null |
Default
| null | null |
null |
{
"abstract": " Anomaly detection (AD) task corresponds to identifying the true anomalies\nfrom a given set of data instances. AD algorithms score the data instances and\nproduce a ranked list of candidate anomalies, which are then analyzed by a\nhuman to discover the true anomalies. However, this process can be laborious\nfor the human analyst when the number of false-positives is very high.\nTherefore, in many real-world AD applications including computer security and\nfraud prevention, the anomaly detector must be configurable by the human\nanalyst to minimize the effort on false positives.\nIn this paper, we study the problem of active learning to automatically tune\nensemble of anomaly detectors to maximize the number of true anomalies\ndiscovered. We make four main contributions towards this goal. First, we\npresent an important insight that explains the practical successes of AD\nensembles and how ensembles are naturally suited for active learning. Second,\nwe present several algorithms for active learning with tree-based AD ensembles.\nThese algorithms help us to improve the diversity of discovered anomalies,\ngenerate rule sets for improved interpretability of anomalous instances, and\nadapt to streaming data settings in a principled manner. Third, we present a\nnovel algorithm called GLocalized Anomaly Detection (GLAD) for active learning\nwith generic AD ensembles. GLAD allows end-users to retain the use of simple\nand understandable global anomaly detectors by automatically learning their\nlocal relevance to specific data instances using label feedback. Fourth, we\npresent extensive experiments to evaluate our insights and algorithms. Our\nresults show that in addition to discovering significantly more anomalies than\nstate-of-the-art unsupervised baselines, our active learning algorithms under\nthe streaming-data setup are competitive with the batch setup.\n",
"title": "Active Anomaly Detection via Ensembles: Insights, Algorithms, and Interpretability"
}
| null | null | null | null | true | null |
15866
| null |
Default
| null | null |
null |
{
"abstract": " An accurate calculation of proton ranges in phantoms or detector geometries\nis crucial for decision making in proton therapy and proton imaging. To this\nend, several parameterizations of the range-energy relationship exist, with\ndifferent levels of complexity and accuracy. In this study we compare the\naccuracy four different parameterizations models: Two analytical models derived\nfrom the Bethe equation, and two different interpolation schemes applied to\nrange-energy tables. In conclusion, a spline interpolation scheme yields the\nhighest reproduction accuracy, while the shape of the energy loss-curve is best\nreproduced with the differentiated Bragg-Kleeman equation.\n",
"title": "Accuracy of parameterized proton range models; a comparison"
}
| null | null | null | null | true | null |
15867
| null |
Default
| null | null |
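One of the interpolation schemes mentioned in the abstract above (id 15867), spline interpolation of a range-energy table, can be sketched with SciPy. The (energy, range) pairs below are made-up placeholders, not values from any reference stopping-power table.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical range-energy table: proton energy in MeV versus range in cm of water.
energy_MeV = np.array([10.0, 50.0, 100.0, 150.0, 200.0, 250.0])
range_cm   = np.array([0.12, 2.2, 7.7, 15.8, 26.0, 37.9])

range_of_energy = CubicSpline(energy_MeV, range_cm)
print(range_of_energy(175.0))      # interpolated range at 175 MeV
print(range_of_energy(175.0, 1))   # derivative dR/dE, i.e. the inverse of the stopping power
```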
null |
{
"abstract": " We show that the level sets of automorphisms of free groups with respect to\nthe Lipschitz metric are connected as subsets of Culler-Vogtmann space. In fact\nwe prove our result in a more general setting of deformation spaces. As\napplications, we give metric solutions of the conjugacy problem for irreducible\nautomorphisms and the detection of reducibility. We additionally prove\ntechnical results that may be of independent interest --- such as the fact that\nthe set of displacements is well ordered.\n",
"title": "On the connectivity of level sets of automorphisms of free groups, with applications to decision problems"
}
| null | null | null | null | true | null |
15868
| null |
Default
| null | null |
null |
{
"abstract": " Machine learning algorithms are typically run on large scale, distributed\ncompute infrastructure that routinely face a number of unavailabilities such as\nfailures and temporary slowdowns. Adding redundant computations using\ncoding-theoretic tools called \"codes\" is an emerging technique to alleviate the\nadverse effects of such unavailabilities. A code consists of an encoding\nfunction that proactively introduces redundant computation and a decoding\nfunction that reconstructs unavailable outputs using the available ones. Past\nwork focuses on using codes to provide resilience for linear computations and\nspecific iterative optimization algorithms. However, computations performed for\na variety of applications including inference on state-of-the-art machine\nlearning algorithms, such as neural networks, typically fall outside this\nrealm. In this paper, we propose taking a learning-based approach to designing\ncodes that can handle non-linear computations. We present carefully designed\nneural network architectures and a training methodology for learning encoding\nand decoding functions that produce approximate reconstructions of unavailable\ncomputation results. We present extensive experimental results demonstrating\nthe effectiveness of the proposed approach: we show that the our learned codes\ncan accurately reconstruct $64 - 98\\%$ of the unavailable predictions from\nneural-network based image classifiers on the MNIST, Fashion-MNIST, and\nCIFAR-10 datasets. To the best of our knowledge, this work proposes the first\nlearning-based approach for designing codes, and also presents the first\ncoding-theoretic solution that can provide resilience for any non-linear\n(differentiable) computation. Our results show that learning can be an\neffective technique for designing codes, and that learned codes are a highly\npromising approach for bringing the benefits of coding to non-linear\ncomputations.\n",
"title": "Learning a Code: Machine Learning for Approximate Non-Linear Coded Computation"
}
| null | null | null | null | true | null |
15869
| null |
Default
| null | null |
null |
{
"abstract": " In this work, we consider the detection of manoeuvring small objects with\nradars. Such objects induce low signal to noise ratio (SNR) reflections in the\nreceived signal. We consider both co-located and separated transmitter/receiver\npairs, i.e., mono-static and bi-static configurations, respectively, as well as\nmulti-static settings involving both types. We propose a detection approach\nwhich is capable of coherently integrating these reflections within a coherent\nprocessing interval (CPI) in all these configurations and continuing\nintegration for an arbitrarily long time across consecutive CPIs. We estimate\nthe complex value of the reflection coefficients for integration while\nsimultaneously estimating the object trajectory. Compounded with this is the\nestimation of the unknown time reference shift of the separated transmitters\nnecessary for coherent processing. Detection is made by using the resulting\nintegration value in a Neyman-Pearson test against a constant false alarm rate\nthreshold. We demonstrate the efficacy of our approach in a simulation example\nwith a very low SNR object which cannot be detected with conventional\ntechniques.\n",
"title": "Detection via simultaneous trajectory estimation and long time integration"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
15870
| null |
Validated
| null | null |
null |
{
"abstract": " The recent rapid progress in observations of circumstellar disks and\nextrasolar planets has reinforced the importance of understanding an intimate\ncoupling between star and planet formation. Under such a circumstance, it may\nbe invaluable to attempt to specify when and how planet formation begins in\nstar-forming regions and to identify what physical processes/quantities are the\nmost significant to make a link between star and planet formation. To this end,\nwe have recently developed a couple of projects. These include an observational\nproject about dust growth in Class 0 YSOs and a theoretical modeling project of\nthe HL Tauri disk. For the first project, we utilize the archive data of radio\ninterferometric observations, and examine whether dust growth, a first step of\nplanet formation, occurs in Class 0 YSOs. We find that while our observational\nresults can be reproduced by the presence of large ($\\sim$ mm) dust grains for\nsome of YSOs under the single-component modified blackbody formalism, an\ninterpretation of no dust growth would be possible when a more detailed model\nis used. For the second project, we consider an origin of the disk\nconfiguration around HL Tauri, focusing on magnetic fields. We find that\nmagnetically induced disk winds may play an important role in the HL Tauri\ndisk. The combination of these attempts may enable us to move towards a\ncomprehensive understanding of how star and planet formation are intimately\ncoupled with each other.\n",
"title": "Dust Growth and Magnetic Fields: from Cores to Disks (even down to Planets)"
}
| null | null |
[
"Physics"
] | null | true | null |
15871
| null |
Validated
| null | null |
null |
{
"abstract": " The memory-type control charts, such as EWMA and CUSUM, are powerful tools\nfor detecting small quality changes in univariate and multivariate processes.\nMany papers on economic design of these control charts use the formula proposed\nby Lorenzen and Vance (1986) [Lorenzen, T. J., & Vance, L. C. (1986). The\neconomic design of control charts: A unified approach. Technometrics, 28(1),\n3-10, DOI: 10.2307/1269598]. This paper shows that this formula is not correct\nfor memory-type control charts and its values can significantly deviate from\nthe original values even if the ARL values used in this formula are accurately\ncomputed. Consequently, the use of this formula can result in charts that are\nnot economically optimal. The formula is corrected for memory-type control\ncharts, but unfortunately the modified formula is not a helpful tool from a\ncomputational perspective. We show that simulation-based optimization is a\npossible alternative method.\n",
"title": "Economic Design of Memory-Type Control Charts: The Fallacy of the Formula Proposed by Lorenzen and Vance (1986)"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
15872
| null |
Validated
| null | null |
null |
{
"abstract": " In this short note, we obtain error estimates for Riemann sums of some\nsingular functions.\n",
"title": "Error estimates for Riemann sums of some singular functions"
}
| null | null | null | null | true | null |
15873
| null |
Default
| null | null |
null |
{
"abstract": " The observed constraints on the variability of the proton to electron mass\nratio $\\mu$ and the fine structure constant $\\alpha$ are used to establish\nconstraints on the variability of the Quantum Chromodynamic Scale and a\ncombination of the Higgs Vacuum Expectation Value and the Yukawa couplings.\nFurther model dependent assumptions provide constraints on the Higgs VEV and\nthe Yukawa couplings separately. A primary conclusion is that limits on the\nvariability of dimensionless fundamental constants such as $\\mu$ and $\\alpha$\nprovide important constraints on the parameter space of new physics and\ncosmologies.\n",
"title": "The Relation Between Fundamental Constants and Particle Physics Parameters"
}
| null | null | null | null | true | null |
15874
| null |
Default
| null | null |
null |
{
"abstract": " In this work, we introduce pose interpreter networks for 6-DoF object pose\nestimation. In contrast to other CNN-based approaches to pose estimation that\nrequire expensively annotated object pose data, our pose interpreter network is\ntrained entirely on synthetic pose data. We use object masks as an intermediate\nrepresentation to bridge real and synthetic. We show that when combined with a\nsegmentation model trained on RGB images, our synthetically trained pose\ninterpreter network is able to generalize to real data. Our end-to-end system\nfor object pose estimation runs in real-time (20 Hz) on live RGB data, without\nusing depth information or ICP refinement.\n",
"title": "Real-Time Object Pose Estimation with Pose Interpreter Networks"
}
| null | null | null | null | true | null |
15875
| null |
Default
| null | null |
null |
{
"abstract": " A regularized risk minimization procedure for regression function estimation\nis introduced that achieves near optimal accuracy and confidence under general\nconditions, including heavy-tailed predictor and response variables. The\nprocedure is based on median-of-means tournaments, introduced by the authors in\n[8]. It is shown that the new procedure outperforms standard regularized\nempirical risk minimization procedures such as lasso or slope in heavy-tailed\nproblems.\n",
"title": "Regularization, sparse recovery, and median-of-means tournaments"
}
| null | null | null | null | true | null |
15876
| null |
Default
| null | null |
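The median-of-means idea underlying the tournament procedure in the abstract above (id 15876) can be sketched for the simplest case of estimating a mean. This is only the basic building block with an assumed block count, not the paper's regularized tournament estimator.

```python
import numpy as np

def median_of_means(x, n_blocks=8, seed=0):
    """Split the sample into blocks, average each block, return the median of the block means."""
    rng = np.random.default_rng(seed)
    x = rng.permutation(np.asarray(x, dtype=float))
    blocks = np.array_split(x, n_blocks)
    return float(np.median([b.mean() for b in blocks]))

# Heavy-tailed sample: the plain mean is dragged around by outliers, the MOM estimate is more stable.
rng = np.random.default_rng(1)
sample = rng.standard_t(df=1.5, size=2000)
print(sample.mean(), median_of_means(sample))
```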
null |
{
"abstract": " One of the primary objectives of human brain mapping is the division of the\ncortical surface into functionally distinct regions, i.e. parcellation. While\nit is generally agreed that at macro-scale different regions of the cortex have\ndifferent functions, the exact number and configuration of these regions is not\nknown. Methods for the discovery of these regions are thus important,\nparticularly as the volume of available information grows. Towards this end, we\npresent a parcellation method based on a Bayesian non-parametric mixture model\nof cortical connectivity.\n",
"title": "A Restaurant Process Mixture Model for Connectivity Based Parcellation of the Cortex"
}
| null | null | null | null | true | null |
15877
| null |
Default
| null | null |
null |
{
"abstract": " Deep generative models have achieved impressive success in recent years.\nGenerative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as\nemerging families for generative model learning, have largely been considered\nas two distinct paradigms and received extensive independent studies\nrespectively. This paper aims to establish formal connections between GANs and\nVAEs through a new formulation of them. We interpret sample generation in GANs\nas performing posterior inference, and show that GANs and VAEs involve\nminimizing KL divergences of respective posterior and inference distributions\nwith opposite directions, extending the two learning phases of classic\nwake-sleep algorithm, respectively. The unified view provides a powerful tool\nto analyze a diverse set of existing model variants, and enables to transfer\ntechniques across research lines in a principled way. For example, we apply the\nimportance weighting method in VAE literatures for improved GAN learning, and\nenhance VAEs with an adversarial mechanism that leverages generated samples.\nExperiments show generality and effectiveness of the transferred techniques.\n",
"title": "On Unifying Deep Generative Models"
}
| null | null | null | null | true | null |
15878
| null |
Default
| null | null |
null |
{
"abstract": " Face recognition (FR) methods report significant performance by adopting the\nconvolutional neural network (CNN) based learning methods. Although CNNs are\nmostly trained by optimizing the softmax loss, the recent trend shows an\nimprovement of accuracy with different strategies, such as task-specific CNN\nlearning with different loss functions, fine-tuning on target dataset, metric\nlearning and concatenating features from multiple CNNs. Incorporating these\ntasks obviously requires additional efforts. Moreover, it demotivates the\ndiscovery of efficient CNN models for FR which are trained only with identity\nlabels. We focus on this fact and propose an easily trainable and single CNN\nbased FR method. Our CNN model exploits the residual learning framework.\nAdditionally, it uses normalized features to compute the loss. Our extensive\nexperiments show excellent generalization on different datasets. We obtain very\ncompetitive and state-of-the-art results on the LFW, IJB-A, YouTube faces and\nCACD datasets.\n",
"title": "DeepVisage: Making face recognition simple yet with powerful generalization skills"
}
| null | null | null | null | true | null |
15879
| null |
Default
| null | null |
null |
{
"abstract": " Ellenberg and Gijswijt gave recently a new exponential upper bound for the\nsize of three-term arithmetic progression free sets in $({\\mathbb Z_p})^n$,\nwhere $p$ is a prime. Petrov summarized their method and generalized their\nresult to linear forms.\nIn this short note we use Petrov's result to give new exponential upper\nbounds for the Erdős-Ginzburg-Ziv constant of finite Abelian groups of high\nrank. Our main results depend on a conjecture about Property D.\n",
"title": "The Erdős-Ginzburg-Ziv constant and progression-free subsets"
}
| null | null | null | null | true | null |
15880
| null |
Default
| null | null |
null |
{
"abstract": " We study the role of environment in the evolution of central and satellite\ngalaxies with the Sloan Digital Sky Survey. We begin by studying the size-mass\nrelation, replicating previous studies, which showed no difference between the\nsizes of centrals and satellites at fixed stellar mass, before turning our\nattention to the size-core velocity dispersion ($\\sigma_0$) and mass-$\\sigma_0$\nrelations. By comparing the median size and mass of the galaxies at fixed\nvelocity dispersion we find that the central galaxies are consistently larger\nand more massive than their satellite counterparts in the quiescent population.\nIn the star forming population we find there is no difference in size and only\na small difference in mass. To analyse why these difference may be present we\ninvestigate the radial mass profiles and stellar metallicity of the galaxies.\nWe find that in the cores of the galaxies there is no difference in mass\nsurface density between centrals and satellites, but there is a large\ndifference at larger radii. We also find almost no difference between the\nstellar metallicity of centrals and satellites when they are separated into\nstar forming and quiescent groups. Under the assumption that $\\sigma_0$ is\ninvariant to environmental processes, our results imply that central galaxies\nare likely being increased in mass and size by processes such as minor mergers,\nparticularly at high $\\sigma_0$, while satellites are being slightly reduced in\nmass and size by tidal stripping and harassment, particularly at low\n$\\sigma_0$, all of which predominantly affect the outer regions of the\ngalaxies.\n",
"title": "The Differing Relationships Between Size, Mass, Metallicity and Core Velocity Dispersion of Central and Satellite Galaxies"
}
| null | null | null | null | true | null |
15881
| null |
Default
| null | null |
null |
{
"abstract": " We investigate the configuration space of the Delta-Manipulator, identify 24\npoints in the configuration space, where the Jacobian of the Constraint\nEquations looses rank and show, that these are not manifold points of the Real\nAlgebraic Set, which is defined by the Constraint Equations.\n",
"title": "Configuration Space Singularities of The Delta Manipulator"
}
| null | null | null | null | true | null |
15882
| null |
Default
| null | null |
null |
{
"abstract": " Earlier this decade, the so-called FEAST algorithm was released for computing\nthe eigenvalues of a matrix in a given interval. Previously, rational filter\nfunctions have been examined as a parameter of FEAST. In this thesis, we expand\non existing work with the following contributions: (i) Obtaining\nwell-performing rational filter functions via standard minimisation algorithms,\n(ii) Obtaining constrained rational filter functions efficiently, and (iii)\nImproving existing rational filter functions algorithmically. Using our new\nrational filter functions, FEAST requires up to one quarter fewer iterations on\naverage compared to state-of-art rational filter functions.\n",
"title": "Constrained Optimisation of Rational Functions for Accelerating Subspace Iteration"
}
| null | null | null | null | true | null |
15883
| null |
Default
| null | null |
null |
{
"abstract": " For an element $a$ of a monoid $H$, its set of lengths $\\mathsf L (a) \\subset\n\\mathbb N$ is the set of all positive integers $k$ for which there is a\nfactorization $a=u_1 \\cdot \\ldots \\cdot u_k$ into $k$ atoms. We study the\nsystem $\\mathcal L (H) = \\{\\mathsf L (a) \\mid a \\in H \\}$ with a focus on the\nunions $\\mathcal U_k (H) \\subset \\mathbb N$ which are the unions of all sets of\nlengths containing a given $k \\in \\mathbb N$. The Structure Theorem for Unions\n-- stating that for all sufficiently large $k$, the sets $\\mathcal U_k (H)$ are\nalmost arithmetical progressions with the same difference and global bound --\nhas found much attention for commutative monoids and domains. We show that it\nholds true for the not necessarily commutative monoids in the title satisfying\nsuitable algebraic finiteness conditions. Furthermore, we give an explicit\ndescription of the system of sets of lengths of monoids $B_{n} = \\langle a,b\n\\mid ba=b^{n} \\rangle$ for $n \\in \\N_{\\ge 2}$. Based on this description, we\nshow that the monoids $B_n$ are not transfer Krull, which implies that their\nsystems $\\mathcal L (B_n)$ are distinct from systems of sets of lengths of\ncommutative Krull monoids and others.\n",
"title": "Sets of lengths in atomic unit-cancellative finitely presented monoids"
}
| null | null | null | null | true | null |
15884
| null |
Default
| null | null |
null |
{
"abstract": " In the increasing interests on spin-orbit torque (SOT) with various magnetic\nmaterials, we investigated SOT in rare earth-transition metal ferrimagnetic\nalloys. The harmonic Hall measurements were performed in Pt/GdFeCo bilayers to\nquantify the effective fields resulting from the SOT. It is found that the\ndamping-like torque rapidly increases near the magnetization compensation\ntemperature TM of the GdFeCo, which is attributed to the reduction of the net\nmagnetic moment.\n",
"title": "Spin-orbit effective fields in Pt/GdFeCo bilayers"
}
| null | null | null | null | true | null |
15885
| null |
Default
| null | null |
null |
{
"abstract": " Static program analysis is used to summarize properties over all dynamic\nexecutions. In a unifying approach based on 3-valued logic properties are\neither assigned a definite value or unknown. But in summarizing a set of\nexecutions, a property is more accurately represented as being biased towards\ntrue, or towards false. Compilers use program analysis to determine benefit of\nan optimization. Since benefit (e.g., performance) is justified based on the\ncommon case understanding bias is essential in guiding the compiler.\nFurthermore, successful optimization also relies on understanding the quality\nof the information, i.e. the plausibility of the bias. If the quality of the\nstatic information is too low to form a decision we would like a mechanism that\nimproves dynamically.\nWe consider the problem of building such a reasoning framework and present\nthe fuzzy data-flow analysis. Our approach generalize previous work that use\n3-valued logic. We derive fuzzy extensions of data-flow analyses used by the\nlazy code motion optimization and unveil opportunities previous work would not\ndetect due to limited expressiveness. Furthermore we show how the results of\nour analysis can be used in an adaptive classifier that improve as the\napplication executes.\n",
"title": "Bridging Static and Dynamic Program Analysis using Fuzzy Logic"
}
| null | null | null | null | true | null |
15886
| null |
Default
| null | null |
null |
{
"abstract": " A major challenge in solar and heliospheric physics is understanding how\nhighly localized regions, far smaller than 1 degree at the Sun, are the source\nof solar-wind structures spanning more than 20 degrees near Earth. The Sun's\natmosphere is divided into magnetically open regions, coronal holes, where\nsolar-wind plasma streams out freely and fills the solar system, and closed\nregions, where the plasma is confined to coronal loops. The boundary between\nthese regions extends outward as the heliospheric current sheet (HCS).\nMeasurements of plasma composition imply that the solar wind near the HCS, the\nso-called slow solar wind, originates in closed regions, presumably by the\nprocesses of field-line opening or interchange reconnection. Mysteriously,\nhowever, slow wind is also often seen far from the HCS. We use high-resolution,\nthree-dimensional magnetohydrodynamic simulations to calculate the dynamics of\na coronal hole whose geometry includes a narrow corridor flanked by closed\nfield and which is driven by supergranule-like flows at the coronal-hole\nboundary. We find that these dynamics result in the formation of giant arcs of\nclosed-field plasma that extend far from the HCS and span tens of degrees in\nlatitude and longitude at Earth, accounting for the slow solar wind\nobservations.\n",
"title": "The Formation of Heliospheric Arcs of Slow Solar Wind"
}
| null | null | null | null | true | null |
15887
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we construct explicit smooth solutions to the Strominger system\non generalized Calabi-Gray manifolds, which are compact non-Kähler Calabi-Yau\n3-folds with infinitely many distinct topological types and sets of Hodge\nnumbers.\n",
"title": "A Construction of Infinitely Many Solutions to the Strominger System"
}
| null | null | null | null | true | null |
15888
| null |
Default
| null | null |
null |
{
"abstract": " Common clustering algorithms require multiple scans of all the data to\nachieve convergence, and this is prohibitive when large databases, with data\narriving in streams, must be processed. Some algorithms to extend the popular\nK-means method to the analysis of streaming data are present in literature\nsince 1998 (Bradley et al. in Scaling clustering algorithms to large databases.\nIn: KDD. p. 9-15, 1998; O'Callaghan et al. in Streaming-data algorithms for\nhigh-quality clustering. In: Proceedings of IEEE international conference on\ndata engineering. p. 685, 2001), based on the memorization and recursive update\nof a small number of summary statistics, but they either don't take into\naccount the specific variability of the clusters, or assume that the random\nvectors which are processed and grouped have uncorrelated components.\nUnfortunately this is not the case in many practical situations. We here\npropose a new algorithm to process data streams, with data having correlated\ncomponents and coming from clusters with different covariance matrices. Such\ncovariance matrices are estimated via an optimal double shrinkage method, which\nprovides positive definite estimates even in presence of a few data points, or\nof data having components with small variance. This is needed to invert the\nmatrices and compute the Mahalanobis distances that we use for the data\nassignment to the clusters. We also estimate the total number of clusters from\nthe data.\n",
"title": "A clustering algorithm for multivariate data streams with correlated components"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
15889
| null |
Validated
| null | null |
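A rough sketch of the assignment step described in the abstract above (id 15889): each cluster covariance is shrunk toward a scaled identity so it stays invertible even with few points, and an incoming observation is assigned to the cluster with the smallest Mahalanobis distance. The fixed shrinkage weight and the two-cluster setup are assumptions for illustration, not the paper's optimal double-shrinkage estimator or its streaming summary statistics.

```python
import numpy as np

def shrunk_cov(S, alpha=0.2):
    """Shrink a sample covariance toward a scaled identity so it is safely invertible."""
    p = S.shape[0]
    target = np.trace(S) / p * np.eye(p)
    return (1 - alpha) * S + alpha * target

def assign(x, means, covs):
    """Return the index of the cluster with the smallest squared Mahalanobis distance to x."""
    dists = []
    for mu, S in zip(means, covs):
        diff = x - mu
        dists.append(diff @ np.linalg.solve(shrunk_cov(S), diff))
    return int(np.argmin(dists))

means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.array([[1.0, 0.8], [0.8, 1.0]]), np.array([[2.0, -0.5], [-0.5, 1.0]])]
print(assign(np.array([4.2, 4.7]), means, covs))
```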
null |
{
"abstract": " In implicit models, one often interpolates between sampled points in latent\nspace. As we show in this paper, care needs to be taken to match-up the\ndistributional assumptions on code vectors with the geometry of the\ninterpolating paths. Otherwise, typical assumptions about the quality and\nsemantics of in-between points may not be justified. Based on our analysis we\npropose to modify the prior code distribution to put significantly more\nprobability mass closer to the origin. As a result, linear interpolation paths\nare not only shortest paths, but they are also guaranteed to pass through\nhigh-density regions, irrespective of the dimensionality of the latent space.\nExperiments on standard benchmark image datasets demonstrate clear visual\nimprovements in the quality of the generated samples and exhibit more\nmeaningful interpolation paths.\n",
"title": "Semantic Interpolation in Implicit Models"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
15890
| null |
Validated
| null | null |
null |
{
"abstract": " In [1] we consider an optimal control problem subject to a semilinear\nelliptic PDE together with its variational discretization, where we provide a\ncondition which allows to decide whether a solution of the necessary first\norder conditions is a global minimum. This condition can be explicitly\nevaluated at the discrete level. Furthermore, we prove that if the above\ncondition holds uniformly with respect to the discretization parameter the\nsequence of discrete solutions converges to a global solution of the\ncorresponding limit problem. With the present work we complement our\ninvestigations of [1] in that we prove an error estimate for those discrete\nglobal solutions. Numerical experiments confirm our analytical findings.\n",
"title": "Error analysis for global minima of semilinear optimal control problems"
}
| null | null | null | null | true | null |
15891
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we consider a stochastic model of incompressible non-Newtonian\nfluids of second grade on a bounded domain of $\\mathbb{R}^2$ with\nmultiplicative noise. We first show that the solutions to the stochastic\nequations of second grade fluids generate a continuous random dynamical system.\nSecond, we investigate the Fréchet differentiability of the random\ndynamical system. Finally, we establish the asymptotic compactness of the\nrandom dynamical system, and the existence of random attractors for the random\ndynamical system, we also obtain the upper semi-continuity of the perturbed\nrandom attractors when the noise intensity approaches zero.\n",
"title": "Random dynamics of two-dimensional stochastic second grade fluids"
}
| null | null | null | null | true | null |
15892
| null |
Default
| null | null |
null |
{
"abstract": " We describe a set of tools, services and strategies of the Latin American\nGiant Observatory (LAGO) data repository network, to implement Data\nAccessibility, Reproducibility and Trustworthiness.\n",
"title": "Lago Distributed Network Of Data Repositories"
}
| null | null | null | null | true | null |
15893
| null |
Default
| null | null |
null |
{
"abstract": " In this article we investigate the Duistermaat-Heckman theorem using the\ntheory of hyperfunctions. In applications involving Hamiltonian torus actions\non infinite dimensional manifolds, this more general theory seems to be\nnecessary in order to accomodate the existence of the infinite order\ndifferential operators which arise from the isotropy representations on the\ntangent spaces to fixed points. We will quickly review of the theory of\nhyperfunctions and their Fourier transforms. We will then apply this theory to\nconstruct a hyperfunction analogue of the Duistermaat-Heckman distribution. Our\nmain goal will be to study the Duistermaat-Heckman hyperfunction of $\\Omega\nSU(2)$, but in getting to this goal we will also characterize the singular\nlocus of the moment map for the Hamiltonian action of $T\\times S^1$ on $\\Omega\nG$. The main goal of this paper is to present a Duistermaat-Heckman\nhyperfunction arising from a Hamiltonian action on an infinite dimensional\nmanifold.\n",
"title": "Hyperfunctions, the Duistermaat-Heckman theorem, and Loop Groups"
}
| null | null | null | null | true | null |
15894
| null |
Default
| null | null |
null |
{
"abstract": " This paper examines the behavior of the price of anarchy as a function of the\ntraffic inflow in nonatomic congestion games with multiple origin-destination\n(O/D) pairs. Empirical studies in real-world networks show that the price of\nanarchy is close to 1 in both light and heavy traffic, thus raising the\nquestion: can these observations be justified theoretically? We first show that\nthis is not always the case: the price of anarchy may remain a positive\ndistance away from 1 for all values of the traffic inflow, even in simple\nthree-link networks with a single O/D pair and smooth, convex costs. On the\nother hand, for a large class of cost functions (including all polynomials),\nthe price of anarchy does converge to 1 in both heavy and light traffic,\nirrespective of the network topology and the number of O/D pairs in the\nnetwork. We also examine the rate of convergence of the price of anarchy, and\nwe show that it follows a power law whose degree can be computed explicitly\nwhen the network's cost functions are polynomials.\n",
"title": "When is selfish routing bad? The price of anarchy in light and heavy traffic"
}
| null | null | null | null | true | null |
15895
| null |
Default
| null | null |
null |
{
"abstract": " In line with its terms of reference the ICFA Neutrino Panel has developed a\nroadmapfor the international, accelerator-based neutrino programme. A \"roadmap\ndiscussion document\" was presented in May 2016 taking into account the\npeer-group-consultation described in the Panel's initial report. The \"roadmap\ndiscussion document\" was used to solicit feedback from the neutrino\ncommunity---and more broadly, the particle- and astroparticle-physics\ncommunities---and the various stakeholders in the programme. The roadmap, the\nconclusions and recommendations presented in this document take into account\nthe comments received following the publication of the roadmap discussion\ndocument.\nWith its roadmap the Panel documents the approved objectives and milestones\nof the experiments that are presently in operation or under construction.\nApproval, construction and exploitation milestones are presented for\nexperiments that are being considered for approval. The timetable proposed by\nthe proponents is presented for experiments that are not yet being considered\nformally for approval. Based on this information, the evolution of the\nprecision with which the critical parameters governinger the neutrino are known\nhas been evaluated. Branch or decision points have been identified based on the\nanticipated evolution in precision. The branch or decision points have in turn\nbeen used to identify desirable timelines for the neutrino-nucleus cross\nsection and hadro-production measurements that are required to maximise the\nintegrated scientific output of the programme. The branch points have also been\nused to identify the timeline for the R&D required to take the programme beyond\nthe horizon of the next generation of experiments. The theory and phenomenology\nprogramme, including nuclear theory, required to ensure that maximum benefit is\nderived from the experimental programme is also discussed.\n",
"title": "Roadmap for the international, accelerator-based neutrino programme"
}
| null | null | null | null | true | null |
15896
| null |
Default
| null | null |
null |
{
"abstract": " Many scientific data sets contain temporal dimensions. These are the data\nstoring information at the same spatial location but different time stamps.\nSome of the biggest temporal datasets are produced by parallel computing\napplications such as simulations of climate change and fluid dynamics. Temporal\ndatasets can be very large and cost a huge amount of time to transfer among\nstorage locations. Using data compression techniques, files can be transferred\nfaster and save storage space. NUMARCK is a lossy data compression algorithm\nfor temporal data sets that can learn emerging distributions of element-wise\nchange ratios along the temporal dimension and encodes them into an index table\nto be concisely represented. This paper presents a parallel implementation of\nNUMARCK. Evaluated with six data sets obtained from climate and astrophysics\nsimulations, parallel NUMARCK achieved scalable speedups of up to 8788 when\nrunning 12800 MPI processes on a parallel computer. We also compare the\ncompression ratios against two lossy data compression algorithms, ISABELA and\nZFP. The results show that NUMARCK achieved higher compression ratio than\nISABELA and ZFP.\n",
"title": "Parallel Implementation of Lossy Data Compression for Temporal Data Sets"
}
| null | null | null | null | true | null |
15897
| null |
Default
| null | null |
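The core idea behind the NUMARCK-style compression in the abstract above (id 15897) can be sketched serially: compute element-wise change ratios between consecutive time steps, quantise them into a small codebook, and store only the codebook indices. The uniform binning below is a simplified stand-in for the learned ratio distribution, and the MPI parallelisation is omitted.

```python
import numpy as np

def compress_step(prev, curr, n_bins=32):
    """Encode one time step as an index table over binned element-wise change ratios."""
    ratio = (curr - prev) / np.where(prev == 0, 1.0, prev)        # element-wise change ratio
    edges = np.linspace(ratio.min(), ratio.max(), n_bins + 1)
    idx = np.clip(np.digitize(ratio, edges) - 1, 0, n_bins - 1)   # one small index per element
    centers = 0.5 * (edges[:-1] + edges[1:])                      # codebook of representative ratios
    return idx.astype(np.uint8), centers

def decompress_step(prev, idx, centers):
    """Approximately reconstruct the current step from the previous step and the index table."""
    return prev * (1.0 + centers[idx])

prev = np.random.default_rng(0).random(1000)
curr = prev * (1.0 + 0.01 * np.random.default_rng(1).standard_normal(1000))
idx, centers = compress_step(prev, curr)
print(np.max(np.abs(decompress_step(prev, idx, centers) - curr)))  # worst-case reconstruction error
```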
null |
{
"abstract": " As the first step to model emotional state of a person, we build sentiment\nanalysis models with existing deep neural network algorithms and compare the\nmodels with psychological measurements to enlighten the relationship. In the\nexperiments, we first examined psychological state of 64 participants and asked\nthem to summarize the story of a book, Chronicle of a Death Foretold (Marquez,\n1981). Secondly, we trained models using crawled 365,802 movie review data;\nthen we evaluated participants' summaries using the pretrained model as a\nconcept of transfer learning. With the background that emotion affects on\nmemories, we investigated the relationship between the evaluation score of the\nsummaries from computational models and the examined psychological\nmeasurements. The result shows that although CNN performed the best among other\ndeep neural network algorithms (LSTM, GRU), its results are not related to the\npsychological state. Rather, GRU shows more explainable results depending on\nthe psychological state. The contribution of this paper can be summarized as\nfollows: (1) we enlighten the relationship between computational models and\npsychological measurements. (2) we suggest this framework as objective methods\nto evaluate the emotion; the real sentiment analysis of a person.\n",
"title": "What we really want to find by Sentiment Analysis: The Relationship between Computational Models and Psychological State"
}
| null | null | null | null | true | null |
15898
| null |
Default
| null | null |
null |
{
"abstract": " In many situations across computational science and engineering, multiple\ncomputational models are available that describe a system of interest. These\ndifferent models have varying evaluation costs and varying fidelities.\nTypically, a computationally expensive high-fidelity model describes the system\nwith the accuracy required by the current application at hand, while\nlower-fidelity models are less accurate but computationally cheaper than the\nhigh-fidelity model. Outer-loop applications, such as optimization, inference,\nand uncertainty quantification, require multiple model evaluations at many\ndifferent inputs, which often leads to computational demands that exceed\navailable resources if only the high-fidelity model is used. This work surveys\nmultifidelity methods that accelerate the solution of outer-loop applications\nby combining high-fidelity and low-fidelity model evaluations, where the\nlow-fidelity evaluations arise from an explicit low-fidelity model (e.g., a\nsimplified physics approximation, a reduced model, a data-fit surrogate, etc.)\nthat approximates the same output quantity as the high-fidelity model. The\noverall premise of these multifidelity methods is that low-fidelity models are\nleveraged for speedup while the high-fidelity model is kept in the loop to\nestablish accuracy and/or convergence guarantees. We categorize multifidelity\nmethods according to three classes of strategies: adaptation, fusion, and\nfiltering. The paper reviews multifidelity methods in the outer-loop contexts\nof uncertainty propagation, inference, and optimization.\n",
"title": "Survey of multifidelity methods in uncertainty propagation, inference, and optimization"
}
| null | null | null | null | true | null |
15899
| null |
Default
| null | null |
null |
{
"abstract": " We present a tutorial on the determination of the physical conditions and\nchemical abundances in gaseous nebulae. We also include a brief review of\nrecent results on the study of gaseous nebulae, their relevance for the study\nof stellar evolution, galactic chemical evolution, and the evolution of the\nuniverse. One of the most important problems in abundance determinations is the\nexistence of a discrepancy between the abundances determined with collisionally\nexcited lines and those determined by recombination lines, this is called the\nADF (abundance discrepancy factor) problem; we review results related to this\nproblem. Finally, we discuss possible reasons for the large t$^2$ values\nobserved in gaseous nebulae.\n",
"title": "Nebular spectroscopy: A guide on H II regions and planetary nebulae"
}
| null | null | null | null | true | null |
15900
| null |
Default
| null | null |