Dataset columns (per-column types as summarized by the viewer):

- text: null
- inputs: dict ({"abstract", "title"})
- prediction: null
- prediction_agent: null
- annotation: list or null
- annotation_agent: null
- multi_label: bool (1 class: true)
- explanation: null
- id: string (length 1-5)
- metadata: null
- status: string (2 classes: Default, Validated)
- event_timestamp: null
- metrics: null

{
"abstract": " We present the Vortex Image Processing (VIP) library, a python package\ndedicated to astronomical high-contrast imaging. Our package relies on the\nextensive python stack of scientific libraries and aims to provide a flexible\nframework for high-contrast data and image processing. In this paper, we\ndescribe the capabilities of VIP related to processing image sequences acquired\nusing the angular differential imaging (ADI) observing technique. VIP\nimplements functionalities for building high-contrast data processing\npipelines, encompassing pre- and post-processing algorithms, potential\nsources position and flux estimation, and sensitivity curves generation. Among\nthe reference point-spread function subtraction techniques for ADI\npost-processing, VIP includes several flavors of principal component analysis\n(PCA) based algorithms, such as annular PCA and incremental PCA algorithm\ncapable of processing big datacubes (of several gigabytes) on a computer with\nlimited memory. Also, we present a novel ADI algorithm based on non-negative\nmatrix factorization (NMF), which comes from the same family of low-rank matrix\napproximations as PCA and provides fairly similar results. We showcase the ADI\ncapabilities of the VIP library using a deep sequence on HR8799 taken with the\nLBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP\nwe investigated the presence of additional companions around HR8799 and did not\nfind any significant additional point source beyond the four known planets. VIP\nis available at this http URL and is accompanied with\nJupyter notebook tutorials illustrating the main functionalities of the\nlibrary.\n",
"title": "VIP: Vortex Image Processing package for high-contrast direct imaging"
}
id: 12101 | status: Default | annotation: null | multi_label: true
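The VIP abstract above presents NMF as a low-rank alternative to PCA for ADI post-processing. A minimal plain-NumPy sketch of multiplicative-update NMF (a generic Lee-Seung scheme, not VIP's actual implementation; rank, sizes, and data are illustrative):

```python
import numpy as np

def nmf(X, rank, n_iter=500, eps=1e-9):
    """Factor a nonnegative matrix X (m x n) as W @ H with W, H >= 0,
    using Lee-Seung multiplicative updates for the Frobenius objective."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H

# An exactly rank-2 nonnegative matrix is recovered to a small residual.
rng = np.random.default_rng(1)
X = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(X, rank=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In an ADI context, the columns of X would be vectorized frames of the datacube and W @ H the low-rank reference PSF model subtracted from each frame.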
{
"abstract": " HTTP/2 (h2) is a new standard for Web communications that already delivers a\nlarge share of Web traffic. Unlike HTTP/1, h2 uses only one underlying TCP\nconnection. In a cellular network with high loss and sudden spikes in latency,\nwhich the TCP stack might interpret as loss, using a single TCP connection can\nnegatively impact Web performance. In this paper, we perform an extensive\nanalysis of real world cellular network traffic and design a testbed to emulate\nloss characteristics in cellular networks. We use the emulated cellular network\nto measure h2 performance in comparison to HTTP/1.1, for webpages synthesized\nfrom HTTP Archive repository data.\nOur results show that, in lossy conditions, h2 achieves faster page load\ntimes (PLTs) for webpages with small objects. For webpages with large objects,\nh2 degrades the PLT. We devise a new domain-sharding technique that isolates\nlarge and small object downloads on separate connections. Using sharding, we\nshow that under lossy cellular conditions, h2 over multiple connections\nimproves the PLT compared to h2 with one connection and HTTP/1.1 with six\nconnections. Finally, we recommend content providers and content delivery\nnetworks to apply h2-aware domain-sharding on webpages currently served over h2\nfor improved mobile Web performance.\n",
"title": "Domain-Sharding for Faster HTTP/2 in Lossy Cellular Networks"
}
id: 12102 | status: Default | annotation: null | multi_label: true
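The domain-sharding idea in the abstract above, isolating large and small object downloads on separate connections, can be sketched as a size-threshold assignment. The hostnames and the 100 kB threshold are hypothetical; the paper's actual sharding policy may differ:

```python
# Map each object to a shard hostname by size, so large downloads get their
# own h2 connection. Threshold and hostnames are illustrative only.
def shard(objects, threshold=100_000):
    """objects: iterable of (path, size_bytes) pairs -> {shard_host: [paths]}"""
    shards = {"small.example.com": [], "large.example.com": []}
    for path, size in objects:
        host = "large.example.com" if size > threshold else "small.example.com"
        shards[host].append(path)
    return shards

objs = [("/app.js", 12_000), ("/hero.jpg", 480_000), ("/style.css", 8_000)]
shards = shard(objs)
```

Each shard hostname then resolves to the same server, so the browser opens a separate h2 connection per shard.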
{
"abstract": " Let $A \\in {\\cal C}^n$ be an extremal copositive matrix with unit diagonal.\nThen the minimal zeros of $A$ all have supports of cardinality two if and only\nif the elements of $A$ are all from the set $\\{-1,0,1\\}$. Thus the extremal\ncopositive matrices with minimal zero supports of cardinality two are exactly\nthose matrices which can be obtained by diagonal scaling from the extremal\n$\\{-1,0,1\\}$ unit diagonal matrices characterized by Hoffman and Pereira in\n1973.\n",
"title": "Extremal copositive matrices with minimal zero supports of cardinality two"
}
id: 12103 | status: Default | annotation: null | multi_label: true
{
"abstract": " We propose a method for dual-arm manipulation of rigid objects, subject to\nexternal disturbance. The problem is formulated as a Cartesian impedance\ncontroller within a projected inverse dynamics framework. We use the\nconstrained component of the controller to enforce contact and the\nunconstrained controller to accomplish the task with a desired 6-DOF impedance\nbehaviour. Furthermore, the proposed method optimises the torque required to\nmaintain contact, subject to unknown disturbances, and can do so without direct\nmeasurement of external force. The techniques are evaluated on a single-arm\nwiping a table and a dual-arm platform manipulating a rigid object of unknown\nmass and with human interaction.\n",
"title": "A Projected Inverse Dynamics Approach for Dual-arm Cartesian Impedance Control"
}
id: 12104 | status: Validated | annotation: ["Computer Science"] | multi_label: true
{
"abstract": " We develop an ac-biased shift register introduced in our previous work (V.K.\nSemenov et al., IEEE Trans. Appl. Supercond., vol. 25, no. 3, 1301507, June\n2015) into a benchmark circuit for evaluation of superconductor electronics\nfabrication technology. The developed testing technique allows for extracting\nmargins of all individual cells in the shift register, which in turn makes it\npossible to estimate statistical distribution of Josephson junctions in the\ncircuit. We applied this approach to successfully test registers having 8, 16,\n36, and 202 thousand cells and, respectively, about 33000, 65000, 144000, and\n809000 Josephson junctions. The circuits were fabricated at MIT Lincoln\nLaboratory, using a fully planarized process, 0.4 μm inductor linewidth,\nand 1.33x10^6 cm^-2 junction density. They are presently the largest\noperational superconducting SFQ circuits ever made. The developed technique\ndistinguishes between hard defects (fabrication-related) and soft defects\n(measurement-related) and locates them in the circuit. The soft defects are\nspecific to superconducting circuits and caused by magnetic flux trapping\neither inside the active cells or in the dedicated flux-trapping moats near the\ncells. The number and distribution of soft defects depend on the ambient\nmagnetic field and vary with thermal cycling even if done in the same magnetic\nenvironment.\n",
"title": "AC-Biased Shift Registers as Fabrication Process Benchmark Circuits and Flux Trapping Diagnostic Tool"
}
id: 12105 | status: Validated | annotation: ["Physics"] | multi_label: true
{
"abstract": " The emerging era of personalized medicine relies on medical decisions,\npractices, and products being tailored to the individual patient. Point-of-care\nsystems, at the heart of this model, play two important roles. First, they are\nrequired for identifying subjects for optimal therapies based on their genetic\nmake-up and epigenetic profile. Second, they will be used for assessing the\nprogression of such therapies. Central to this vision is designing systems\nthat, with minimal user-intervention, can transduce complex signals from\nbiosystems in complement with clinical information to inform medical decision\nwithin point-of-care settings. To reach our ultimate goal of developing\npoint-of-care systems and realizing personalized medicine, we are taking a\nmultistep systems-level approach towards understanding cellular processes and\nbiomolecular profiles, to quantify disease states and external interventions.\n",
"title": "Designing diagnostic platforms for analysis of disease patterns and probing disease emergence"
}
id: 12106 | status: Default | annotation: null | multi_label: true
{
"abstract": " While the costs of human violence have attracted a great deal of attention\nfrom the research community, the effects of the network-on-network (NoN)\nviolence popularised by Generative Adversarial Networks have yet to be\naddressed. In this work, we quantify the financial, social, spiritual,\ncultural, grammatical and dermatological impact of this aggression and address\nthe issue by proposing a more peaceful approach which we term Generative\nUnadversarial Networks (GUNs). Under this framework, we simultaneously train\ntwo models: a generator G that does its best to capture whichever data\ndistribution it feels it can manage, and a motivator M that helps G to achieve\nits dream. Fighting is strictly verboten and both models evolve by learning to\nrespect their differences. The framework is both theoretically and electrically\ngrounded in game theory, and can be viewed as a winner-shares-all two-player\ngame in which both players work as a team to achieve the best score.\nExperiments show that by working in harmony, the proposed model is able to\nclaim both the moral and log-likelihood high ground. Our work builds on a rich\nhistory of carefully argued position-papers, published as anonymous YouTube\ncomments, which prove that the optimal solution to NoN violence is more GUNs.\n",
"title": "Stopping GAN Violence: Generative Unadversarial Networks"
}
id: 12107 | status: Default | annotation: null | multi_label: true
{
"abstract": " We give some examples of the existence of solutions of geometric PDEs (Yamabe\nequation, Prescribed Scalar Curvature Equation, Gaussian curvature). We also\ngive some remarks on second order PDE and Green functions and on the maximum\nprinciples.\n",
"title": "Cas d'existence de solutions d'EDP"
}
id: 12108 | status: Default | annotation: null | multi_label: true
{
"abstract": " We present accurate mass and thermodynamic profiles for a sample of 56 galaxy\nclusters observed with the Chandra X-ray Observatory. We investigate the\neffects of local gravitational acceleration in central cluster galaxies, and we\nexplore the role of the local free-fall time (t$_{\\rm ff}$) in thermally\nunstable cooling. We find that the local cooling time (t$_{\\rm cool}$) is as\neffective an indicator of cold gas, traced through its nebular emission, as the\nratio of t$_{\\rm cool}$/t$_{\\rm ff}$. Therefore, t$_{\\rm cool}$ alone\napparently governs the onset of thermally unstable cooling in hot atmospheres.\nThe location of the minimum t$_{\\rm cool}$/t$_{\\rm ff}$, a thermodynamic\nparameter that simulations suggest may be key in driving thermal instability,\nis unresolved in most systems. As a consequence, selection effects bias the\nvalue and reduce the observed range in measured t$_{\\rm cool}$/t$_{\\rm ff}$\nminima. The entropy profiles of cool-core clusters are characterized by broken\npower-laws down to our resolution limit, with no indication of isentropic\ncores. We show, for the first time, that mass isothermality and the $K \\propto\nr^{2/3}$ entropy profile slope imply a floor in t$_{\\rm cool}$/t$_{\\rm ff}$\nprofiles within central galaxies. No significant departures of t$_{\\rm\ncool}$/t$_{\\rm ff}$ below 10 are found, which is inconsistent with many recent\nfeedback models. The inner densities and cooling times of cluster atmospheres\nare resilient to change in response to powerful AGN activity, suggesting that\nthe energy coupling between AGN heating and atmospheric gas is gentler than\nmost models predict.\n",
"title": "The Onset of Thermally Unstable Cooling from the Hot Atmospheres of Giant Galaxies in Clusters - Constraints on Feedback Models"
}
id: 12109 | status: Default | annotation: null | multi_label: true
{
"abstract": " In this article we study the linearized anisotropic Calderon problem. In a\ncompact manifold with boundary, this problem amounts to showing that products\nof harmonic functions form a complete set. Assuming that the manifold is\ntransversally anisotropic, we show that the boundary measurements determine an\nFBI type transform at certain points in the transversal manifold. This leads to\nrecovery of transversal singularities in the linearized problem. The method\nrequires a geometric condition on the transversal manifold related to pairs of\nintersecting geodesics, but it does not involve the geodesic X-ray transform\nwhich has limited earlier results on this problem.\n",
"title": "The linearized Calderon problem in transversally anisotropic geometries"
}
id: 12110 | status: Default | annotation: null | multi_label: true
{
"abstract": " We prove that the Teichmüller space of surfaces with given boundary lengths\nequipped with the arc metric (resp. the Teichmüller metric) is almost\nisometric to the Teichmüller space of punctured surfaces equipped with the\nThurston metric (resp. the Teichmüller metric).\n",
"title": "Almost isometries between Teichmüller spaces"
}
id: 12111 | status: Default | annotation: null | multi_label: true
{
"abstract": " Decision making in multi-agent systems (MAS) is a great challenge due to\nenormous state and joint action spaces as well as uncertainty, making\ncentralized control generally infeasible. Decentralized control offers better\nscalability and robustness but requires mechanisms to coordinate on joint tasks\nand to avoid conflicts. Common approaches to learn decentralized policies for\ncooperative MAS suffer from non-stationarity and lacking credit assignment,\nwhich can lead to unstable and uncoordinated behavior in complex environments.\nIn this paper, we propose Strong Emergent Policy approximation (STEP), a\nscalable approach to learn strong decentralized policies for cooperative MAS\nwith a distributed variant of policy iteration. For that, we use function\napproximation to learn from action recommendations of a decentralized\nmulti-agent planning algorithm. STEP combines decentralized multi-agent\nplanning with centralized learning, only requiring a generative model for\ndistributed black box optimization. We experimentally evaluate STEP in two\nchallenging and stochastic domains with large state and joint action spaces and\nshow that STEP is able to learn stronger policies than standard multi-agent\nreinforcement learning algorithms, when combining multi-agent open-loop\nplanning with centralized function approximation. The learned policies can be\nreintegrated into the multi-agent planning process to further improve\nperformance.\n",
"title": "Distributed Policy Iteration for Scalable Approximation of Cooperative Multi-Agent Policies"
}
id: 12112 | status: Default | annotation: null | multi_label: true
{
"abstract": " We discuss several classes of integrable Floquet systems, i.e. systems which\ndo not exhibit chaotic behavior even under a time dependent perturbation. The\nfirst class is associated with finite-dimensional Lie groups and\ninfinite-dimensional generalization thereof. The second class is related to the\nrow transfer matrices of the 2D statistical mechanics models. The third class\nof models, called here \"boost models\", is constructed as a periodic interchange\nof two Hamiltonians - one is the integrable lattice model Hamiltonian, while\nthe second is the boost operator. The latter for known cases coincides with the\nentanglement Hamiltonian and is closely related to the corner transfer matrix\nof the corresponding 2D statistical models. We present several explicit\nexamples. As an interesting application of the boost models we discuss a\npossibility of generating periodically oscillating states with the period\ndifferent from that of the driving field. In particular, one can realize an\noscillating state by performing a static quench to a boost operator. We term\nthis state a \"Quantum Boost Clock\". All analyzed setups can be readily realized\nexperimentally, for example in cold atoms.\n",
"title": "Integrable Floquet dynamics"
}
id: 12113 | status: Default | annotation: null | multi_label: true
{
"abstract": " It is a simple fact that a subgroup generated by a subset $A$ of an abelian\ngroup is the direct sum of the cyclic groups $\\langle a\\rangle$, $a\\in A$ if\nand only if the set $A$ is independent. In [5] the concept of an $independent$\nset in an abelian group was generalized to a $topologically$ $independent$\n$set$ in a topological abelian group (these two notions coincide in discrete\nabelian groups). It was proved that a topological subgroup generated by a\nsubset $A$ of an abelian topological group is the Tychonoff direct sum of the\ncyclic topological groups $\\langle a\\rangle$, $a\\in A$ if and only if the set\n$A$ is topologically independent and absolutely Cauchy summable. Further, it\nwas shown that the assumption of absolute Cauchy summability of $A$ cannot be\nremoved in general in this result. In our paper we show that it can be removed\nin precompact groups.\nIn other words, we prove that if $A$ is a subset of a {\\em precompact}\nabelian group, then the topological subgroup generated by $A$ is the Tychonoff\ndirect sum of the topological cyclic subgroups $\\langle a\\rangle$, $a\\in A$ if\nand only if $A$ is topologically independent. We show that precompactness\ncannot be replaced by local compactness in this result.\n",
"title": "Topologically independent sets in precompact groups"
}
id: 12114 | status: Validated | annotation: ["Mathematics"] | multi_label: true
{
"abstract": " Transmission lines are vital components in power systems. Tripping of\ntransmission lines caused by over-temperature is a major threat to the security\nof system operations, so it is necessary to efficiently simulate line\ntemperature under both normal operation conditions and foreseen fault\nconditions. Existing methods based on thermal-steady-state analyses cannot\nreflect transient temperature evolution, and thus cannot provide timing\ninformation needed for taking remedial actions. Moreover, conventional\nnumerical method requires huge computational efforts and barricades system-wide\nanalysis. In this regard, this paper derives an approximate analytical solution\nof transmission-line temperature evolution enabling efficient analysis on\nmultiple operation states. Considering the uncertainties in environmental\nparameters, the region of over-temperature is constructed in the environmental\nparameter space to realize the over-temperature risk assessment in both the\nplanning stage and real-time operations. A test on a typical conductor model\nverifies the accuracy of the approximate analytical solution. Based on the\nanalytical solution and numerical weather prediction (NWP) data, an efficient\nsimulation method for temperature evolution of transmission systems under\nmultiple operation states is proposed. As demonstrated on an NPCC 140-bus\nsystem, it achieves over 1000 times of efficiency enhancement, verifying its\npotentials in online risk assessment and decision support.\n",
"title": "Efficient Simulation of Temperature Evolution of Overhead Transmission Lines Based on Analytical Solution and NWP"
}
id: 12115 | status: Default | annotation: null | multi_label: true
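The temperature-evolution abstract above rests on an approximate analytical solution of the conductor heat balance. As a much simplified stand-in (a first-order lumped model with constant coefficients, not the paper's derivation; all parameter values are hypothetical), the closed-form solution can be checked against forward-Euler integration:

```python
import math

# Lumped heat balance m*c*dT/dt = I^2*R - h*(T - Ta): Joule heating versus
# linearized cooling. Its closed-form solution is a single exponential.
def T_analytic(t, T0=25.0, Ta=25.0, I=500.0, R=1e-4, h=15.0, mc=1200.0):
    T_inf = Ta + I * I * R / h                 # steady-state temperature
    return T_inf + (T0 - T_inf) * math.exp(-h * t / mc)

def T_euler(t_end, dt=0.1, T0=25.0, Ta=25.0, I=500.0, R=1e-4, h=15.0, mc=1200.0):
    T, t = T0, 0.0
    while t < t_end - 1e-12:                   # explicit Euler reference
        T += dt * (I * I * R - h * (T - Ta)) / mc
        t += dt
    return T
```

The analytical form evaluates in constant time per operation state, which is what enables the system-wide, multi-state screening the abstract describes.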
{
"abstract": " The eigenstructure of the discrete Fourier transform (DFT) is examined and\nnew systematic procedures to generate eigenvectors of the unitary DFT are\nproposed. DFT eigenvectors are suggested as user signatures for data\ncommunication over the real adder channel (RAC). The proposed multiuser\ncommunication system over the 2-user RAC is detailed.\n",
"title": "Multiuser Communication Based on the DFT Eigenstructure"
}
id: 12116 | status: Validated | annotation: ["Computer Science", "Statistics"] | multi_label: true
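The DFT-eigenstructure abstract above relies on eigenvectors of the unitary DFT matrix. One classical way to generate them (a textbook projection trick based on F^4 = I, not necessarily the systematic procedure proposed in the paper) is:

```python
import numpy as np

N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # unitary DFT matrix

# Since F^4 = I, eigenvalues lie in {1, -1, 1j, -1j}; projecting any vector
# onto the lam-eigenspace with P = (1/4) * sum_j conj(lam)^j F^j yields an
# eigenvector whenever the projection is nonzero.
lam = -1j
v = np.random.default_rng(0).standard_normal(N)
Fv = F @ v
e = (v + np.conj(lam) * Fv
       + np.conj(lam) ** 2 * (F @ Fv)
       + np.conj(lam) ** 3 * (F @ (F @ Fv))) / 4
```

For a user-signature scheme, one would pick mutually orthogonal eigenvectors from the different eigenspaces as the signatures.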
{
"abstract": " This paper introduces a new Urban Point Cloud Dataset for Automatic\nSegmentation and Classification acquired by Mobile Laser Scanning (MLS). We\ndescribe how the dataset is obtained from acquisition to post-processing and\nlabeling. This dataset can be used to learn classification algorithm, however,\ngiven that a great attention has been paid to the split between the different\nobjects, this dataset can also be used to learn the segmentation. The dataset\nconsists of around 2km of MLS point cloud acquired in two cities. The number of\npoints and range of classes make us consider that it can be used to train\nDeep-Learning methods. Besides we show some results of automatic segmentation\nand classification. The dataset is available at:\nthis http URL\n",
"title": "Paris-Lille-3D: a large and high-quality ground truth urban point cloud dataset for automatic segmentation and classification"
}
id: 12117 | status: Default | annotation: null | multi_label: true
{
"abstract": " Various defense schemes --- which determine the presence of an attack on the\nin-vehicle network --- have recently been proposed. However, they fail to\nidentify which Electronic Control Unit (ECU) actually mounted the attack.\nClearly, pinpointing the attacker ECU is essential for fast/efficient forensic,\nisolation, security patch, etc. To meet this need, we propose a novel scheme,\ncalled Viden (Voltage-based attacker identification), which can identify the\nattacker ECU by measuring and utilizing voltages on the in-vehicle network. The\nfirst phase of Viden, called ACK learning, determines whether or not the\nmeasured voltage signals really originate from the genuine message transmitter.\nViden then exploits the voltage measurements to construct and update the\ntransmitter ECUs' voltage profiles as their fingerprints. It finally uses the\nvoltage profiles to identify the attacker ECU. Since Viden adapts its profiles\nto changes inside/outside of the vehicle, it can pinpoint the attacker ECU\nunder various conditions. Moreover, its efficiency and design-compliance with\nmodern in-vehicle network implementations make Viden practical and easily\ndeployable. Our extensive experimental evaluations on both a CAN bus prototype\nand two real vehicles have shown that Viden can accurately fingerprint ECUs\nbased solely on voltage measurements and thus identify the attacker ECU with a\nlow false identification rate of 0.2%.\n",
"title": "Viden: Attacker Identification on In-Vehicle Networks"
}
id: 12118 | status: Default | annotation: null | multi_label: true
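Viden's core idea in the abstract above, per-ECU voltage profiles used as fingerprints, can be illustrated with a toy nearest-profile classifier. The feature (sample mean of measured voltages) and all voltage levels are hypothetical simplifications of the actual scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a voltage profile (mean, std) per ECU from training measurements;
# the nominal voltage level of each ECU is made up for illustration.
profiles = {}
for ecu, nominal_v in {"ECU_A": 3.48, "ECU_B": 3.55, "ECU_C": 3.61}.items():
    samples = rng.normal(nominal_v, 0.005, size=200)
    profiles[ecu] = (samples.mean(), samples.std())

def identify(voltage_samples):
    """Attribute a burst of frame voltages to the ECU with the nearest profile mean."""
    m = np.mean(voltage_samples)
    return min(profiles, key=lambda e: abs(m - profiles[e][0]))

attack_frames = rng.normal(3.55, 0.005, size=50)  # frames actually sent by ECU_B
```

The real system additionally updates the profiles online to track temperature and aging drift, which this static sketch omits.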
{
"abstract": " Black hole X-ray transients show a variety of state transitions during their\noutburst phases, characterized by changes in their spectral and timing\nproperties. In particular, power density spectra (PDS) show quasi periodic\noscillations (QPOs) that can be related to the accretion regime of the source.\nWe looked for type-C QPOs in the disc-dominated state (i.e. the high soft\nstate) and in the ultra-luminous state in the RXTE archival data of 12\ntransient black hole X-ray binaries known to show QPOs during their outbursts.\nWe detected 6 significant QPOs in the soft state that can be classified as\ntype-C QPOs. Under the assumption that the accretion disc in disc-dominated\nstates extends down or close to the innermost stable circular orbit (ISCO) and\nthat type-C QPOs would arise at the inner edge of the accretion flow, we use\nthe relativistic precession model (RPM) to place constraints on the black hole\nspin. We were able to place lower limits on the spin value for all the 12\nsources of our sample while we could place also an upper limit on the spin for\n5 sources.\n",
"title": "Constraining black hole spins with low-frequency quasi-periodic oscillations in soft states"
}
id: 12119 | status: Default | annotation: null | multi_label: true
{
"abstract": " We define a generalization of the free Lie algebra based on an $n$-ary\ncommutator and call it the free LAnKe. We show that the action of the symmetric\ngroup $S_{2n-1}$ on the multilinear component with $2n-1$ generators is given\nby the representation $S^{2^{n-1}1}$, whose dimension is the $n$th Catalan\nnumber. An application involving Specht modules of staircase shape is\npresented. We also introduce a conjecture that extends the relation between the\nWhitehouse representation and Lie($k$).\n",
"title": "On a generalization of Lie($k$): a CataLAnKe theorem"
}
id: 12120 | status: Validated | annotation: ["Mathematics"] | multi_label: true
{
"abstract": " In this paper we develop a generalized likelihood ratio scan method (GLRSM)\nfor multiple change-points inference in piecewise stationary time series, which\nestimates the number and positions of change-points and provides a confidence\ninterval for each change-point. The computational complexity of using GLRSM for\nmultiple change-points detection is as low as $O(n(\\log n)^3)$ for a series of\nlength $n$. Consistency of the estimated numbers and positions of the\nchange-points is established. Extensive simulation studies are provided to\ndemonstrate the effectiveness of the proposed methodology under different\nscenarios.\n",
"title": "Inference for Multiple Change-points in Linear and Non-linear Time Series Models"
}
id: 12121 | status: Default | annotation: null | multi_label: true
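The GLRSM abstract above scans generalized likelihood-ratio statistics over candidate change-points. A stripped-down version for a single Gaussian mean shift with known variance (the paper handles multiple change-points, confidence intervals, and a faster O(n (log n)^3) scheme; this naive scan is quadratic) looks like:

```python
import numpy as np

def glr_scan(x):
    """Return (k_hat, stat): the split maximizing the mean-shift GLR statistic
    k*(n-k)/n * (mean_left - mean_right)^2 over candidate change-points k."""
    n = len(x)
    best_k, best_stat = None, -np.inf
    for k in range(2, n - 1):
        m1, m2 = x[:k].mean(), x[k:].mean()
        stat = k * (n - k) / n * (m1 - m2) ** 2
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(2.0, 1.0, 200)])
k_hat, stat = glr_scan(x)  # k_hat lands near the true change-point at 300
```

In practice the maximized statistic is compared against a critical value before declaring a change-point; here a large shift makes the detection unambiguous.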
{
"abstract": " In this paper, we deal with the acceleration of the convergence of infinite\nseries $\\sum^\\infty_{n=1}a_n$, when the terms $a_n$ are in general complex and\nhave asymptotic expansions that can be expressed in the form $$\na_n\\sim[\\Gamma(n)]^{s/m}\\exp\\left[Q(n)\\right]\\sum^\\infty_{i=0}w_i\nn^{\\gamma-i/m}\\quad\\text{as $n\\to\\infty$},$$ where $\\Gamma(z)$ is the gamma\nfunction, $m\\geq1$ is an arbitrary integer, $Q(n)=\\sum^{m}_{i=0}q_in^{i/m}$ is\na polynomial of degree at most $m$ in $n^{1/m}$, $s$ is an arbitrary integer,\nand $\\gamma$ is an arbitrary complex number. This can be achieved effectively\nby applying the $\\tilde{d}^{(m)}$ transformation of the author to the sequence\n$\\{A_n\\}$ of the partial sums $A_n=\\sum^n_{k=1}a_k$, $n=1,2,\\dots\\ .$\nWe give a detailed review of the properties of such series and of the\n$\\tilde{d}^{(m)}$ transformation and the recursive W-algorithm that implements\nit. We illustrate with several numerical examples of varying nature the\nremarkable performance of this transformation on both convergent and divergent\nseries. We also show that the $\\tilde{d}^{(m)}$ transformation can be used\nefficiently to accelerate the convergence of some infinite products of the form\n$\\prod^\\infty_{n=1}(1+v_n)$, where $$v_n\\sim\n\\sum^\\infty_{i=0}e_in^{-t/m-i/m}\\quad \\text{as $n\\to\\infty$,\\ \\ $t\\geq m+1$ an\ninteger,}$$ and illustrate this with numerical examples. We put special\nemphasis on the issue of numerical stability, we show how to monitor stability,\nor lack thereof, numerically, and discuss how it can be achieved/improved in\nsuitable ways.\n",
"title": "Acceleration of Convergence of Some Infinite Sequences $\\boldsymbol{\\{A_n\\}}$ Whose Asymptotic Expansions Involve Fractional Powers of $\\boldsymbol{n}$"
}
id: 12122 | status: Default | annotation: null | multi_label: true
{
"abstract": " Diffusion magnetic resonance imaging (dMRI) is currently the only tool for\nnoninvasively imaging the brain's white matter tracts. The fiber orientation\n(FO) is a key feature computed from dMRI for fiber tract reconstruction.\nBecause the number of FOs in a voxel is usually small, dictionary-based sparse\nreconstruction has been used to estimate FOs with a relatively small number of\ndiffusion gradients. However, accurate FO estimation in regions with complex FO\nconfigurations in the presence of noise can still be challenging. In this work\nwe explore the use of a deep network for FO estimation in a dictionary-based\nframework and propose an algorithm named Fiber Orientation Reconstruction\nguided by a Deep Network (FORDN). FORDN consists of two steps. First, we use a\nsmaller dictionary encoding coarse basis FOs to represent the diffusion\nsignals. To estimate the mixture fractions of the dictionary atoms (and thus\ncoarse FOs), a deep network is designed specifically for solving the sparse\nreconstruction problem. Here, the smaller dictionary is used to reduce the\ncomputational cost of training. Second, the coarse FOs inform the final FO\nestimation, where a larger dictionary encoding dense basis FOs is used and a\nweighted l1-norm regularized least squares problem is solved to encourage FOs\nthat are consistent with the network output. FORDN was evaluated and compared\nwith state-of-the-art algorithms that estimate FOs using sparse reconstruction\non simulated and real dMRI data, and the results demonstrate the benefit of\nusing a deep network for FO estimation.\n",
"title": "Fiber Orientation Estimation Guided by a Deep Network"
}
id: 12123 | status: Default | annotation: null | multi_label: true
{
"abstract": " The apelinergic system is an important player in the regulation of both\nvascular tone and cardiovascular function, making this physiological system an\nattractive target for drug development for hypertension, heart failure and\nischemic heart disease. Indeed, apelin exerts a positive inotropic effect in\nhumans whilst reducing peripheral vascular resistance. In this study, we\ninvestigated the signaling pathways through which apelin exerts its hypotensive\naction. We synthesized a series of apelin-13 analogs whereby the C-terminal\nPhe13 residue was replaced by natural or unnatural amino acids. In HEK293 cells\nexpressing APJ, we evaluated the relative efficacy of these compounds to\nactivate Gαi1 and GαoA G-proteins, recruit β-arrestins 1\nand 2 (βarrs), and inhibit cAMP production. Calculating the transduction\nratio for each pathway allowed us to identify several analogs with distinct\nsignaling profiles. Furthermore, we found that these analogs delivered i.v. to\nSprague-Dawley rats exerted a wide range of hypotensive responses. Indeed, two\ncompounds lost their ability to lower blood pressure, while other analogs\nsignificantly reduced blood pressure as apelin-13. Interestingly, analogs that\ndid not lower blood pressure were less effective at recruiting βarrs.\nFinally, using Spearman correlations, we established that the hypotensive\nresponse was significantly correlated with βarr recruitment but not with\nG protein-dependent signaling. In conclusion, our results demonstrated that\nthe βarr recruitment potency is involved in the hypotensive efficacy of\nactivated APJ.\n",
"title": "The hypotensive effect of activated apelin receptor is correlated with β-arrestin recruitment"
}
id: 12124 | status: Default | annotation: null | multi_label: true
{
"abstract": " What happens to the most general closed oscillating universes in general\nrelativity? We sketch the development of interest in cyclic universes from the\nearly work of Friedmann and Tolman to modern variations introduced by the\npresence of a cosmological constant. Then we show what happens in the cyclic\nevolution of the most general closed anisotropic universes provided by the\nMixmaster universe. We show that in the presence of entropy increase its cycles\ngrow in size and age, increasingly approaching flatness. But these cycles also\ngrow increasingly anisotropic at their expansion maxima. If there is a positive\ncosmological constant, or dark energy, present then these oscillations always\nend and the last cycle evolves from an anisotropic inflexion point towards a de\nSitter future of everlasting expansion.\n",
"title": "The Shape of Bouncing Universes"
}
id: 12125 | status: Validated | annotation: ["Physics"] | multi_label: true
{
"abstract": " The widespread use of smartphones gives rise to new security and privacy\nconcerns. Smartphone thefts account for the largest percentage of thefts in\nrecent crime statistics. Using a victim's smartphone, the attacker can launch\nimpersonation attacks, which threaten the security of the victim and other\nusers in the network. Our threat model includes the attacker taking over the\nphone after the user has logged on with his password or pin. Our goal is to\ndesign a mechanism for smartphones to better authenticate the current user,\ncontinuously and implicitly, and raise alerts when necessary. In this paper, we\npropose a multi-sensors-based system to achieve continuous and implicit\nauthentication for smartphone users. The system continuously learns the owner's\nbehavior patterns and environment characteristics, and then authenticates the\ncurrent user without interrupting user-smartphone interactions. Our method can\nadaptively update a user's model considering the temporal change of user's\npatterns. Experimental results show that our method is efficient, requiring\nless than 10 seconds to train the model and 20 seconds to detect the abnormal\nuser, while achieving high accuracy (more than 90%). Also, the combination of\nmore sensors provides better accuracy. Furthermore, our method enables adjusting\nthe security level by changing the sampling rate.\n",
"title": "Multi-sensor authentication to improve smartphone security"
}
| null | null | null | null | true | null |
12126
| null |
Default
| null | null |
null |
{
"abstract": " The Algorithms for Lattice Fermions package provides a general code for the\nfinite temperature auxiliary field quantum Monte Carlo algorithm. The code is\nengineered to be able to simulate any model that can be written in terms of\nsums of single-body operators, of squares of single-body operators and\nsingle-body operators coupled to an Ising field with given dynamics. We provide\npredefined types that allow the user to specify the model, the Bravais lattice\nas well as equal time and time displaced observables. The code supports an MPI\nimplementation. Examples such as the Hubbard model on the honeycomb lattice and\nthe Hubbard model on the square lattice coupled to a transverse Ising field are\nprovided and discussed in the documentation. We furthermore discuss how to use\nthe package to implement the Kondo lattice model and the\n$SU(N)$-Hubbard-Heisenberg model. One can download the code from our Git\ninstance at this https URL and sign in to file issues.\n",
"title": "The ALF (Algorithms for Lattice Fermions) project release 1.0. Documentation for the auxiliary field quantum Monte Carlo code"
}
| null | null | null | null | true | null |
12127
| null |
Default
| null | null |
null |
{
"abstract": " As a disruptive technology, blockchain, particularly its original form of
bitcoin as a type of digital currency, has attracted great attention. Its
innovative distributed decision-making and security mechanisms lay the technical
foundation for its success, motivating us to bring the power of
blockchain technology to distributed control and cooperative robotics, where
distributed and secure mechanisms are also in high demand. In fact,
security and distributed communication have long been unsolved problems in the
field of distributed control and cooperative robotics; network failures and
intruder attacks on distributed control and multi-robotic systems have been
reported. Blockchain technology promises to remedy this situation thoroughly.
This work is intended to create a global picture of
blockchain technology, covering its working principles and key elements in the language
of control and robotics, and to provide a shortcut for beginners entering this
research field.\n",
"title": "A Survey on Blockchain Technology and Its Potential Applications in Distributed Control and Cooperative Robots"
}
| null | null | null | null | true | null |
12128
| null |
Default
| null | null |
null |
{
"abstract": " We analytically construct an infinite number of trapped toroids in\nspherically symmetric Cauchy hypersurfaces of the Einstein equations. We focus\non initial data which represent \"constant density stars\" momentarily at rest.\nThere exists an infinite number of constant mean curvature tori, but we also\ndeal with more general configurations. The marginally trapped toroids have been\nfound analytically and numerically; they are unstable. The topologically\ntoroidal trapped surfaces appear in a finite region surrounded by the\nSchwarzschild horizon.\n",
"title": "Toroidal trapped surfaces and isoperimetric inequalities"
}
| null | null | null | null | true | null |
12129
| null |
Default
| null | null |
null |
{
"abstract": " Tensor factorization models offer an effective approach to convert massive\nelectronic health records into meaningful clinical concepts (phenotypes) for\ndata analysis. These models need a large amount of diverse samples to avoid\npopulation bias. An open challenge is how to derive phenotypes jointly across\nmultiple hospitals, in which direct patient-level data sharing is not possible\n(e.g., due to institutional policies). In this paper, we developed a novel\nsolution to enable federated tensor factorization for computational phenotyping\nwithout sharing patient-level data. We developed secure data harmonization and\nfederated computation procedures based on alternating direction method of\nmultipliers (ADMM). Using this method, the multiple hospitals iteratively\nupdate tensors and transfer secure summarized information to a central server,\nand the server aggregates the information to generate phenotypes. We\ndemonstrated with real medical datasets that our method resembles the\ncentralized training model (based on combined datasets) in terms of accuracy\nand phenotypes discovery while respecting privacy.\n",
"title": "Federated Tensor Factorization for Computational Phenotyping"
}
| null | null | null | null | true | null |
12130
| null |
Default
| null | null |
null |
{
"abstract": " In this work we first examine transverse and longitudinal fluxes in a $\\cal\nPT$-symmetric photonic dimer using a coupled-mode theory. Several surprising\nunderstandings are obtained from this perspective: The longitudinal flux shows\nthat the $\\cal PT$ transition in a dimer can be regarded as a classical effect,\ndespite its analogy to $\\cal PT$-symmetric quantum mechanics. The longitudinal\nflux also indicates that the so-called giant amplification in the $\\cal\nPT$-symmetric phase is a sub-exponential behavior and does not outperform a\nsingle gain waveguide. The transverse flux, on the other hand, reveals that the\napparent power oscillations between the gain and loss waveguides in the $\\cal\nPT$-symmetric phase can be deceiving in certain cases, where the transverse\npower transfer is in fact unidirectional. We also show that this power transfer\ncannot be arbitrarily fast even when the exceptional point is approached.\nFinally, we go beyond the coupled-mode theory by using the paraxial wave\nequation and also extend our discussions to a $\\cal PT$ diamond and a\none-dimensional periodic lattice.\n",
"title": "Optical fluxes in coupled $\\cal PT$-symmetric photonic structures"
}
| null | null | null | null | true | null |
12131
| null |
Default
| null | null |
null |
{
"abstract": " This paper describes QCRI's machine translation systems for the IWSLT 2016
evaluation campaign. We participated in the Arabic->English and English->Arabic
tracks. We built both phrase-based and neural machine translation models, in an
effort to probe whether the newly emerged NMT framework surpasses the
traditional phrase-based systems for Arabic-English language pairs. We trained a
very strong phrase-based system including a big language model, the Operation
Sequence Model, the Neural Network Joint Model (NNJM), and class-based models, along with
different domain adaptation techniques such as MML filtering, mixture modeling,
and fine-tuning of the NNJM model. However, a neural MT system, trained by
stacking data from different genres through fine-tuning and applying an ensemble
over 8 models, beat our very strong phrase-based system by a significant margin
of 2 BLEU points in the Arabic->English direction. We did not obtain similar gains in
the other direction but were still able to outperform the phrase-based system.
We also applied system combination to the phrase-based and NMT outputs.\n",
"title": "QCRI Machine Translation Systems for IWSLT 16"
}
| null | null | null | null | true | null |
12132
| null |
Default
| null | null |
null |
{
"abstract": " We consider one of the most important problems in directional statistics,\nnamely the problem of testing the null hypothesis that the spike direction\n${\\pmb \\theta}$ of a Fisher-von Mises-Langevin distribution on the\n$p$-dimensional unit hypersphere is equal to a given direction ${\\pmb\n\\theta}_0$. After a reduction through invariance arguments, we derive local\nasymptotic normality (LAN) results in a general high-dimensional framework\nwhere the dimension $p_n$ goes to infinity at an arbitrary rate with the sample\nsize $n$, and where the concentration $\\kappa_n$ behaves in a completely free\nway with $n$, which offers a spectrum of problems ranging from arbitrarily easy\nto arbitrarily challenging ones. We identify seven asymptotic regimes,\ndepending on the convergence/divergence properties of $(\\kappa_n)$, that yield\ndifferent contiguity rates and different limiting experiments. In each regime,\nwe derive Le Cam optimal tests under specified $\\kappa_n$ and we compute, from\nthe Le Cam third lemma, asymptotic powers of the classical Watson test under\ncontiguous alternatives. We further establish LAN results with respect to both\nspike direction and concentration, which allows us to discuss optimality also\nunder unspecified $\\kappa_n$. To obtain a full understanding of the non-null\nbehavior of the Watson test, we use martingale CLTs to derive its local\nasymptotic powers in the broader, semiparametric, model of rotationally\nsymmetric distributions. A Monte Carlo study shows that the finite-sample\nbehaviors of the various tests remarkably agree with our asymptotic results.\n",
"title": "Detecting the direction of a signal on high-dimensional spheres: Non-null and Le Cam optimality results"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
12133
| null |
Validated
| null | null |
null |
{
"abstract": " In 2007, Arkin et al. initiated a systematic study of the complexity of the\nHamiltonian cycle problem on square, triangular, or hexagonal grid graphs,\nrestricted to polygonal, thin, superthin, degree-bounded, or solid grid graphs.\nThey solved many combinations of these problems, proving them either\npolynomially solvable or NP-complete, but left three combinations open. In this\npaper, we prove two of these unsolved combinations to be NP-complete:\nHamiltonicity of Square Polygonal Grid Graphs and Hamiltonicity of Hexagonal\nThin Grid Graphs. We also consider a new restriction, where the grid graph is\nboth thin and polygonal, and prove that Hamiltonicity then becomes polynomially\nsolvable for square, triangular, and hexagonal grid graphs.\n",
"title": "Hamiltonicity is Hard in Thin or Polygonal Grid Graphs, but Easy in Thin Polygonal Grid Graphs"
}
| null | null | null | null | true | null |
12134
| null |
Default
| null | null |
null |
{
"abstract": " For two Banach algebras $A$ and $B$, the $T$-Lau product $A\times_T B$ was
recently introduced and studied for some bounded homomorphism $T:B\to A$ with
$\|T\|\leq 1$. Here, we give general necessary and sufficient conditions for
$A\times_T B$ to be (approximately) cyclically amenable. In particular, we extend
some recent results on (approximate) cyclic amenability of the direct product
$A\oplus B$ and the $T$-Lau product $A\times_T B$, and answer a question on cyclic
amenability of $A\times_T B$.\n",
"title": "More on cyclic amenability of the Lau product of Banach algebras defined by a Banach algebra morphism"
}
| null | null |
[
"Mathematics"
] | null | true | null |
12135
| null |
Validated
| null | null |
null |
{
"abstract": " Accurately modeling contact behaviors for real-world, near-rigid materials\nremains a grand challenge for existing rigid-body physics simulators. This\npaper introduces a data-augmented contact model that incorporates analytical\nsolutions with observed data to predict the 3D contact impulse which could\nresult in rigid bodies bouncing, sliding or spinning in all directions. Our\nmethod enhances the expressiveness of the standard Coulomb contact model by\nlearning the contact behaviors from the observed data, while preserving the\nfundamental contact constraints whenever possible. For example, a classifier is\ntrained to approximate the transitions between static and dynamic frictions,\nwhile non-penetration constraint during collision is enforced analytically. Our\nmethod computes the aggregated effect of contact for the entire rigid body,\ninstead of predicting the contact force for each contact point individually,\nremoving the exponential decline in accuracy as the number of contact points\nincreases.\n",
"title": "Data-Augmented Contact Model for Rigid Body Simulation"
}
| null | null | null | null | true | null |
12136
| null |
Default
| null | null |
null |
{
"abstract": " In retailer management, the Newsvendor problem has attracted wide attention
as one of the basic inventory models. The traditional approach to solving this
problem relies on the probability distribution of the demand. In theory, if
the probability distribution is known, the problem can be considered fully
solved. However, in any real-world scenario, it is almost impossible to even
approximate or estimate the probability distribution of the demand well. In
recent years, researchers have started adopting machine learning approaches to
learn a demand prediction model using other feature information. In this paper, we
propose a supervised learning method that optimizes the demand quantities for products
based on feature information. We demonstrate that using the original Newsvendor loss
function as the training objective outperforms the recently suggested quadratic
loss function. The new algorithm has been assessed on both synthetic data
and real-world data, demonstrating better performance.\n",
"title": "Assessing the Performance of Deep Learning Algorithms for Newsvendor Problem"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
12137
| null |
Validated
| null | null |
null |
{
"abstract": " Analyzing the temporal behavior of nodes in time-varying graphs is useful for\nmany applications such as targeted advertising, community evolution and outlier\ndetection. In this paper, we present a novel approach, STWalk, for learning\ntrajectory representations of nodes in temporal graphs. The proposed framework\nmakes use of structural properties of graphs at current and previous time-steps\nto learn effective node trajectory representations. STWalk performs random\nwalks on a graph at a given time step (called space-walk) as well as on graphs\nfrom past time-steps (called time-walk) to capture the spatio-temporal behavior\nof nodes. We propose two variants of STWalk to learn trajectory\nrepresentations. In one algorithm, we perform space-walk and time-walk as part\nof a single step. In the other variant, we perform space-walk and time-walk\nseparately and combine the learned representations to get the final trajectory\nembedding. Extensive experiments on three real-world temporal graph datasets\nvalidate the effectiveness of the learned representations when compared to\nthree baseline methods. We also show the goodness of the learned trajectory\nembeddings for change point detection, as well as demonstrate that arithmetic\noperations on these trajectory representations yield interesting and\ninterpretable results.\n",
"title": "STWalk: Learning Trajectory Representations in Temporal Graphs"
}
| null | null | null | null | true | null |
12138
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we present a novel cache design based on Multi-Level Cell
Spin-Transfer Torque RAM (MLC STTRAM) that can dynamically adapt the set
capacity and associativity to efficiently exploit the full potential of MLC STTRAM.
We exploit the asymmetric nature of the MLC storage scheme to build cache lines
with heterogeneous performance characteristics: half of the cache lines are
read-friendly, while the other half is write-friendly. Furthermore, we propose to
opportunistically deactivate ways in underutilized sets to convert MLC to
Single-Level Cell (SLC) mode, which features overall better performance and
lifetime. Our ultimate goal is to build a cache architecture that combines the
capacity advantages of MLC with the performance/energy advantages of SLC. Our
experiments show an improvement of 43% in the total number of conflict misses, 27%
in memory access latency, 12% in system performance, and 26% in LLC access
energy, with a slight degradation in cache lifetime (about 7%) compared to an
SLC cache.\n",
"title": "A Study on Performance and Power Efficiency of Dense Non-Volatile Caches in Multi-Core Systems"
}
| null | null |
[
"Computer Science"
] | null | true | null |
12139
| null |
Validated
| null | null |
null |
{
"abstract": " Let $X$ be a locally compact Abelian group, $Y$ be its character group.\nFollowing A. Kagan and G. Székely we introduce a notion of $Q$-independence\nfor random variables with values in $X$. We prove group analogues of the\nCramér, Kac-Bernstein, Skitovich-Darmois and Heyde theorems for\n$Q$-independent random variables with values in $X$. The proofs of these\ntheorems are reduced to solving some functional equations on the group $Y$.\n",
"title": "Characterization theorems for $Q$-independent random variables with values in a locally compact Abelian group"
}
| null | null | null | null | true | null |
12140
| null |
Default
| null | null |
null |
{
"abstract": " Ultrafast perturbations offer a unique tool to manipulate correlated systems\ndue to their ability to promote transient behaviors with no equilibrium\ncounterpart. A widely employed strategy is the excitation of coherent optical\nphonons, as they can cause significant changes in the electronic structure and\ninteractions on short time scales. Here, we explore a promising alternative\nroute: the non-equilibrium excitation of acoustic phonons. We demonstrate that\nit leads to the remarkable phenomenon of a momentum-dependent temperature, by\nwhich electronic states at different regions of the Fermi surface are subject\nto distinct local temperatures. Such an anisotropic electronic temperature can\nhave a profound effect on the delicate balance between competing ordered states\nin unconventional superconductors, opening a novel avenue to control correlated\nphases.\n",
"title": "Controlling competing orders via non-equilibrium acoustic phonons: emergence of anisotropic electronic temperature"
}
| null | null | null | null | true | null |
12141
| null |
Default
| null | null |
null |
{
"abstract": " In recent years, the phenomenon of online misinformation and junk news
circulating on social media has come to constitute an important and widespread
problem affecting public life online across the globe, particularly around
important political events such as elections. At the same time, there have been
calls for more transparency around misinformation on social media platforms, as
many of the most popular social media platforms function as \"walled gardens,\"
where it is impossible for researchers and the public to readily examine the
scale and nature of misinformation activity as it unfolds on the
platforms. To help address this, in this paper we present the Junk News
Aggregator, an interactive web tool made publicly available, which allows the
public to examine, in near real time, all of the public content posted to
Facebook by important junk news sources in the US. It allows the public to
access and examine the latest articles posted on Facebook (the most popular
social media platform in the US, and one where content is not readily accessible
at scale from the open Web), to organise them by time, news publisher,
and keywords of interest, and to sort them based on all eight engagement metrics
available on Facebook. The Aggregator thus offers insight into the volume,
content, key themes, and types and volumes of engagement received by content
posted by junk news publishers, in near real time and at scale across the most
popular junk news publishers. In this way, the Aggregator can help increase
transparency around the nature of, volume of, and engagement with junk news on
social media, and serve as a media literacy tool for the public.\n",
"title": "The Junk News Aggregator: Examining junk news posted on Facebook, starting with the 2018 US Midterm Elections"
}
| null | null | null | null | true | null |
12142
| null |
Default
| null | null |
null |
{
"abstract": " A conceptual design for a quantum blockchain is proposed. Our method involves\nencoding the blockchain into a temporal GHZ (Greenberger-Horne-Zeilinger) state\nof photons that do not simultaneously coexist. It is shown that the\nentanglement in time, as opposed to an entanglement in space, provides the\ncrucial quantum advantage. All the subcomponents of this system have already\nbeen shown to be experimentally realized. Perhaps more shockingly, our encoding\nprocedure can be interpreted as non-classically influencing the past; hence\nthis decentralized quantum blockchain can be viewed as a quantum networked time\nmachine.\n",
"title": "Quantum Blockchain using entanglement in time"
}
| null | null | null | null | true | null |
12143
| null |
Default
| null | null |
null |
{
"abstract": " Recent work on encoder-decoder models for sequence-to-sequence mapping has\nshown that integrating both temporal and spatial attention mechanisms into\nneural networks increases the performance of the system substantially. In this\nwork, we report on the application of an attentional signal not on temporal and\nspatial regions of the input, but instead as a method of switching among inputs\nthemselves. We evaluate the particular role of attentional switching in the\npresence of dynamic noise in the sensors, and demonstrate how the attentional\nsignal responds dynamically to changing noise levels in the environment to\nachieve increased performance on both audio and visual tasks in three\ncommonly-used datasets: TIDIGITS, Wall Street Journal, and GRID. Moreover, the\nproposed sensor transformation network architecture naturally introduces a\nnumber of advantages that merit exploration, including ease of adding new\nsensors to existing architectures, attentional interpretability, and increased\nrobustness in a variety of noisy environments not seen during training.\nFinally, we demonstrate that the sensor selection attention mechanism of a\nmodel trained only on the small TIDIGITS dataset can be transferred directly to\na pre-existing larger network trained on the Wall Street Journal dataset,\nmaintaining functionality of switching between sensors to yield a dramatic\nreduction of error in the presence of noise.\n",
"title": "Sensor Transformation Attention Networks"
}
| null | null | null | null | true | null |
12144
| null |
Default
| null | null |
null |
{
"abstract": " The capacity of a neural network to absorb information is limited by its\nnumber of parameters. Conditional computation, where parts of the network are\nactive on a per-example basis, has been proposed in theory as a way of\ndramatically increasing model capacity without a proportional increase in\ncomputation. In practice, however, there are significant algorithmic and\nperformance challenges. In this work, we address these challenges and finally\nrealize the promise of conditional computation, achieving greater than 1000x\nimprovements in model capacity with only minor losses in computational\nefficiency on modern GPU clusters. We introduce a Sparsely-Gated\nMixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward\nsub-networks. A trainable gating network determines a sparse combination of\nthese experts to use for each example. We apply the MoE to the tasks of\nlanguage modeling and machine translation, where model capacity is critical for\nabsorbing the vast quantities of knowledge available in the training corpora.\nWe present model architectures in which a MoE with up to 137 billion parameters\nis applied convolutionally between stacked LSTM layers. On large language\nmodeling and machine translation benchmarks, these models achieve significantly\nbetter results than state-of-the-art at lower computational cost.\n",
"title": "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer"
}
| null | null | null | null | true | null |
12145
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we study principal components analysis in the regime of high\ndimensionality and high noise. Our model of the problem is a rank-one\ndeformation of a Wigner matrix where the signal-to-noise ratio (SNR) is of\nconstant order, and we are interested in the fundamental limits of detection of\nthe spike. Our main goal is to gain a fine understanding of the asymptotics for\nthe log-likelihood ratio process, also known as the free energy, as a function\nof the SNR. Our main results are twofold. We first prove that the free energy\nhas a finite-size correction to its limit---the replica-symmetric\nformula---which we explicitly compute. This provides a formula for the\nKullback-Leibler divergence between the planted and null models. Second, we\nprove that below the reconstruction threshold, where it becomes impossible to\nreconstruct the spike, the log-likelihood ratio has fluctuations of constant\norder and converges in distribution to a Gaussian under both the planted and\n(under restrictions) the null model. As a consequence, we provide a general\nproof of contiguity between these two distributions that holds up to the\nreconstruction threshold, and is valid for an arbitrary separable prior on the\nspike. Formulae for the total variation distance, and the Type-I and Type-II\nerrors of the optimal test are also given. Our proofs are based on Gaussian\ninterpolation methods and a rigorous incarnation of the cavity method, as\ndevised by Guerra and Talagrand in their study of the Sherrington--Kirkpatrick\nspin-glass model.\n",
"title": "Finite Size Corrections and Likelihood Ratio Fluctuations in the Spiked Wigner Model"
}
| null | null | null | null | true | null |
12146
| null |
Default
| null | null |
null |
{
"abstract": " In meta-learning an agent extracts knowledge from observed tasks, aiming to\nfacilitate learning of novel future tasks. Under the assumption that future\ntasks are 'related' to previous tasks, the accumulated knowledge should be\nlearned in a way which captures the common structure across learned tasks,\nwhile allowing the learner sufficient flexibility to adapt to novel aspects of\nnew tasks. We present a framework for meta-learning that is based on\ngeneralization error bounds, allowing us to extend various PAC-Bayes bounds to\nmeta-learning. Learning takes place through the construction of a distribution\nover hypotheses based on the observed tasks, and its utilization for learning a\nnew task. Thus, prior knowledge is incorporated through setting an\nexperience-dependent prior for novel tasks. We develop a gradient-based\nalgorithm which minimizes an objective function derived from the bounds and\ndemonstrate its effectiveness numerically with deep neural networks. In\naddition to establishing the improved performance available through\nmeta-learning, we demonstrate the intuitive way by which prior information is\nmanifested at different levels of the network.\n",
"title": "Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory"
}
| null | null | null | null | true | null |
12147
| null |
Default
| null | null |
null |
{
"abstract": " We present an optical flow estimation approach that operates on the full\nfour-dimensional cost volume. This direct approach shares the structural\nbenefits of leading stereo matching pipelines, which are known to yield high\naccuracy. To this day, such approaches have been considered impractical due to\nthe size of the cost volume. We show that the full four-dimensional cost volume\ncan be constructed in a fraction of a second due to its regularity. We then\nexploit this regularity further by adapting semi-global matching to the\nfour-dimensional setting. This yields a pipeline that achieves significantly\nhigher accuracy than state-of-the-art optical flow methods while being faster\nthan most. Our approach outperforms all published general-purpose optical flow\nmethods on both Sintel and KITTI 2015 benchmarks.\n",
"title": "Accurate Optical Flow via Direct Cost Volume Processing"
}
| null | null | null | null | true | null |
12148
| null |
Default
| null | null |
null |
{
"abstract": " We propose a rank-$k$ variant of the classical Frank-Wolfe algorithm to solve\nconvex optimization over a trace-norm ball. Our algorithm replaces the top\nsingular-vector computation ($1$-SVD) in Frank-Wolfe with a top-$k$\nsingular-vector computation ($k$-SVD), which can be done by repeatedly applying\n$1$-SVD $k$ times. Alternatively, our algorithm can be viewed as a rank-$k$\nrestricted version of projected gradient descent. We show that our algorithm\nhas a linear convergence rate when the objective function is smooth and\nstrongly convex, and the optimal solution has rank at most $k$. This improves\nthe convergence rate and the total time complexity of the Frank-Wolfe method\nand its variants.\n",
"title": "Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls"
}
| null | null | null | null | true | null |
12149
| null |
Default
| null | null |
null |
{
"abstract": " In this manuscript we present exponential inequalities for spatial lattice\nprocesses which take values in a separable Hilbert space and satisfy certain\ndependence conditions. We consider two types of dependence: spatial data under\n$\\alpha$-mixing conditions and spatial data which satisfies a weak dependence\ncondition introduced by Dedecker and Prieur [2005]. We demonstrate their\nusefulness in the functional kernel regression model of Ferraty and Vieu [2004]\nwhere we study uniform consistency properties of the estimated regression\noperator on increasing subsets of the underlying function space.\n",
"title": "A Note on Exponential Inequalities in Hilbert Spaces for Spatial Processes with Applications to the Functional Kernel Regression Model"
}
| null | null | null | null | true | null |
12150
| null |
Default
| null | null |
null |
{
"abstract": " We consider self-avoiding lattice polygons, in the hypercubic lattice, as a\nmodel of a ring polymer adsorbed at a surface and either being desorbed by the\naction of a force, or pushed towards the surface. We show that, when there is\nno interaction with the surface, then the response of the polygon to the\napplied force is identical (in the thermodynamic limit) for two ways in which\nwe apply the force. When the polygon is attracted to the surface then, when the\ndimension is at least 3, we have a complete characterization of the critical\nforce--temperature curve in terms of the behaviour, (a) when there is no force,\nand, (b) when there is no surface interaction. For the 2-dimensional case we\nhave upper and lower bounds on the free energy. We use both Monte Carlo and\nexact enumeration and series analysis methods to investigate the form of the\nphase diagram in two dimensions. We find evidence for the existence of a\n\\emph{mixed phase} where the free energy depends on the strength of the\ninteraction with the adsorbing line and on the applied force.\n",
"title": "Polygons pulled from an adsorbing surface"
}
| null | null | null | null | true | null |
12151
| null |
Default
| null | null |
null |
{
"abstract": " Temporal difference learning and Residual Gradient methods are the most\nwidely used temporal difference based learning algorithms; however, it has been\nshown that none of their objective functions is optimal w.r.t approximating the\ntrue value function $V$. Two novel algorithms are proposed to approximate the\ntrue value function $V$. This paper makes the following contributions: (1) A\nbatch algorithm that can help find the approximate optimal off-policy\nprediction of the true value function $V$. (2) A linear computational cost (per\nstep) near-optimal algorithm that can learn from a collection of off-policy\nsamples. (3) A new perspective of the emphatic temporal difference learning\nwhich bridges the gap between off-policy optimality and off-policy stability.\n",
"title": "O$^2$TD: (Near)-Optimal Off-Policy TD Learning"
}
| null | null | null | null | true | null |
12152
| null |
Default
| null | null |
null |
{
"abstract": " We present analytical studies of a boson-fermion mixture at zero temperature\nwith spin-polarized fermions. Using the Thomas-Fermi approximation for bosons\nand the local-density approximation for fermions, we find a large variety of\ndifferent density shapes. In the case of continuous density, we obtain analytic\nconditions for each configuration for attractive as well as repulsive\nboson-fermion interaction. Furthermore, we analytically show that all the\nscenarios we describe are minima of the grand-canonical energy functional.\nFinally, we provide a full classification of all possible ground states in the\ninterpenetrative regime. Our results also apply to binary mixtures of bosons.\n",
"title": "Comprehensive classification for Bose-Fermi mixtures"
}
| null | null | null | null | true | null |
12153
| null |
Default
| null | null |
null |
{
"abstract": " Existing works for extracting navigation objects from webpages focus on\nnavigation menus, so as to reveal the information architecture of the site.\nHowever, web 2.0 sites such as social networks, e-commerce portals etc. are\nmaking the understanding of the content structure in a web site increasingly\ndifficult. Dynamic and personalized elements such as top stories and recommended\nlists in a webpage are vital to the understanding of the dynamic nature of web\n2.0 sites. To better understand the content structure in web 2.0 sites, in this\npaper we propose a new extraction method for navigation objects in a webpage.\nOur method will extract not only the static navigation menus, but also the\ndynamic and personalized page-specific navigation lists. Since the navigation\nobjects in a webpage naturally come in blocks, we first cluster hyperlinks into\ndifferent blocks by exploiting spatial locations of hyperlinks, the\nhierarchical structure of the DOM-tree and the hyperlink density. Then we\nidentify navigation objects from those blocks using the SVM classifier with\nnovel features such as anchor text lengths etc. Experiments on real-world data\nsets with webpages from various domains and styles verified the effectiveness\nof our method.\n",
"title": "Navigation Objects Extraction for Better Content Structure Understanding"
}
| null | null |
[
"Computer Science"
] | null | true | null |
12154
| null |
Validated
| null | null |
null |
{
"abstract": " We present optimized source galaxy selection schemes for measuring cluster\nweak lensing (WL) mass profiles unaffected by cluster member dilution from the\nSubaru Hyper Suprime-Cam Strategic Survey Program (HSC-SSP). The ongoing\nHSC-SSP survey will uncover thousands of galaxy clusters to $z\lesssim1.5$. In\nderiving cluster masses via WL, a critical source of systematics is\ncontamination and dilution of the lensing signal by cluster members, and by\nforeground galaxies whose photometric redshifts are biased. Using the\nfirst-year CAMIRA catalog of $\sim$900 clusters with richness larger than 20\nfound in $\sim$140 deg$^2$ of HSC-SSP data, we devise and compare several\nsource selection methods, including selection in color-color space (CC-cut),\nand selection of robust photometric redshifts by applying constraints on their\ncumulative probability distribution function (PDF; P-cut). We examine the\ndependence of the contamination on the chosen limits adopted for each method.\nUsing the proper limits, these methods give mass profiles with minimal dilution\nin agreement with one another. We find that not adopting either the CC-cut or\nP-cut methods results in an underestimation of the total cluster mass\n($13\pm4\%$) and the concentration of the profile ($24\pm11\%$). The level of\ncluster contamination can reach as high as $\sim10\%$ at $R\approx 0.24$\nMpc/$h$ for low-z clusters without cuts, while employing either the P-cut or\nCC-cut results in cluster contamination consistent with zero to within the 0.5%\nuncertainties. Our robust methods yield a $\sim60\sigma$ detection of the\nstacked CAMIRA surface mass density profile, with a mean mass of\n$M_\mathrm{200c} = (1.67\pm0.05({\rm stat}))\times 10^{14}\,M_\odot/h$.\n",
"title": "Source Selection for Cluster Weak Lensing Measurements in the Hyper Suprime-Cam Survey"
}
| null | null | null | null | true | null |
12155
| null |
Default
| null | null |
null |
{
"abstract": " The main result of this paper is that there are examples of stochastic\npartial differential equations [henceforth, SPDEs] of the type $$ \partial_t\nu=\frac12\Delta u +\sigma(u)\eta \qquad\text{on\n$(0\,,\infty)\times\mathbb{R}^3$}$$ such that the solution exists and is unique\nas a random field in the sense of Dalang and Walsh, yet the solution has\nunbounded oscillations in every open neighborhood of every space-time point. We\nare not aware of the existence of such a construction in spatial dimensions\nbelow $3$. En route, it will be proved that there exists a large family of\nparabolic SPDEs whose moment Lyapunov exponents grow at least subexponentially\nin the order parameter in the sense that there exist $A_1,\beta\in(0\,,1)$ such\nthat \[\n\underline{\gamma}(k) :=\n\liminf_{t\to\infty}t^{-1}\inf_{x\in\mathbb{R}^3}\n\log\mathbb{E}\left(|u(t\,,x)|^k\right) \ge A_1\exp(A_1 k^\beta)\n\qquad\text{for all $k\ge 2$}.\n\] This sort of \"super intermittency\" is combined with a local linearization\nof the solution, and with techniques from Gaussian analysis in order to\nestablish the unbounded oscillations of the sample functions of the solution to\nour SPDE.\n",
"title": "Dense blowup for parabolic SPDEs"
}
| null | null | null | null | true | null |
12156
| null |
Default
| null | null |
null |
{
"abstract": " Optimization of high-dimensional black-box functions is an extremely\nchallenging problem. While Bayesian optimization has emerged as a popular\napproach for optimizing black-box functions, its applicability has been limited\nto low-dimensional problems due to its computational and statistical challenges\narising from high-dimensional settings. In this paper, we propose to tackle\nthese challenges by (1) assuming a latent additive structure in the function\nand inferring it properly for more efficient and effective BO, and (2)\nperforming multiple evaluations in parallel to reduce the number of iterations\nrequired by the method. Our novel approach learns the latent structure with\nGibbs sampling and constructs batched queries using determinantal point\nprocesses. Experimental validations on both synthetic and real-world functions\ndemonstrate that the proposed method outperforms the existing state-of-the-art\napproaches.\n",
"title": "Batched High-dimensional Bayesian Optimization via Structural Kernel Learning"
}
| null | null | null | null | true | null |
12157
| null |
Default
| null | null |
null |
{
"abstract": " At the core of understanding dynamical systems is the ability to maintain and\ncontrol the system's behavior, which includes notions of robustness,\nheterogeneity, and/or regime-shift detection. Recently, to explore such\nfunctional properties, a convenient representation has been to model such\ndynamical systems as a weighted graph consisting of a finite, but very large\nnumber of interacting agents. This said, there exists very limited relevant\nstatistical theory that is able to cope with real-life data, i.e., how does one\nperform simple analysis and/or statistics over a family of networks as opposed\nto a specific network or network-to-network variation. Here, we are interested\nin the analysis of network families whereby each network represents a point on\nan underlying statistical manifold. From this, we explore the Riemannian\nstructure of the statistical (tensor) manifold in order to define notions of\ngeodesics or shortest distance amongst such points as well as a statistical\nframework for time-varying complex networks which we can utilize in higher\norder classification tasks.\n",
"title": "Classifying Time-Varying Complex Networks on the Tensor Manifold"
}
| null | null | null | null | true | null |
12158
| null |
Default
| null | null |
null |
{
"abstract": " We propose a method for semi-supervised training of structured-output neural\nnetworks. Inspired by the framework of Generative Adversarial Networks (GAN),\nwe train a discriminator network to capture the notion of the quality of network\noutput. To this end, we leverage the qualitative difference between outputs\nobtained on the labelled training data and unannotated data. We then use the\ndiscriminator as a source of error signal for unlabelled data. This effectively\nboosts the performance of a network on a held out test set. Initial experiments\nin image segmentation demonstrate that the proposed framework enables achieving\nthe same network performance as in a fully supervised scenario, while using half\nas many annotations.\n",
"title": "An Adversarial Regularisation for Semi-Supervised Training of Structured Output Neural Networks"
}
| null | null | null | null | true | null |
12159
| null |
Default
| null | null |
null |
{
"abstract": " We introduce a refined Sobolev scale on a vector bundle over a closed\ninfinitely smooth manifold. This scale consists of inner product Hörmander\nspaces parametrized with a real number and a function varying slowly at\ninfinity in the sense of Karamata. We prove that these spaces are obtained by\nthe interpolation with a function parameter between inner product Sobolev\nspaces. An arbitrary classical elliptic pseudodifferential operator acting\nbetween vector bundles of the same rank is investigated on this scale. We prove\nthat this operator is bounded and Fredholm on pairs of appropriate Hörmander\nspaces. We also prove that the solutions to the corresponding elliptic equation\nsatisfy a certain a priori estimate on these spaces. The local regularity of\nthese solutions is investigated on the refined Sobolev scale. We find new\nsufficient conditions for the solutions to have continuous derivatives of a\ngiven order.\n",
"title": "Elliptic operators on refined Sobolev scales on vector bundles"
}
| null | null | null | null | true | null |
12160
| null |
Default
| null | null |
null |
{
"abstract": " In this work, we propose a goal-driven collaborative task that contains\nlanguage, vision, and action in a virtual environment as its core components.\nSpecifically, we develop a Collaborative image-Drawing game between two agents,\ncalled CoDraw. Our game is grounded in a virtual world that contains movable\nclip art objects. The game involves two players: a Teller and a Drawer. The\nTeller sees an abstract scene containing multiple clip art pieces in a\nsemantically meaningful configuration, while the Drawer tries to reconstruct\nthe scene on an empty canvas using available clip art pieces. The two players\ncommunicate via two-way communication using natural language. We collect the\nCoDraw dataset of ~10K dialogs consisting of ~138K messages exchanged between\nhuman agents. We define protocols and metrics to evaluate the effectiveness of\nlearned agents on this testbed, highlighting the need for a novel crosstalk\ncondition which pairs agents trained independently on disjoint subsets of the\ntraining data for evaluation. We present models for our task, including simple\nbut effective nearest-neighbor techniques and neural network approaches trained\nusing a combination of imitation learning and goal-driven training. All models\nare benchmarked using both fully automated evaluation and by playing the game\nwith live human agents.\n",
"title": "CoDraw: Collaborative Drawing as a Testbed for Grounded Goal-driven Communication"
}
| null | null | null | null | true | null |
12161
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we introduce and evaluate PROPEDEUTICA, a novel methodology\nand framework for efficient and effective real-time malware detection,\nleveraging the best of conventional machine learning (ML) and deep learning\n(DL) algorithms. In PROPEDEUTICA, all software processes in the system start\nexecution subjected to a conventional ML detector for fast classification. If a\npiece of software receives a borderline classification, it is subjected to\nfurther analysis via more computationally expensive but more accurate DL methods,\nvia our newly proposed DL algorithm DEEPMALWARE. Further, we introduce delays\nto the execution of software subjected to deep learning analysis as a way to\n\"buy time\" for DL analysis and to rate-limit the impact of possible malware in\nthe system. We evaluated PROPEDEUTICA with a set of 9,115 malware samples and\n877 commonly used benign software samples from various categories for the\nWindows OS. Our results show that the false positive rate for conventional ML\nmethods can reach 20%, and for modern DL methods it is usually below 6%.\nHowever, the classification time for DL can be 100X longer than for conventional ML\nmethods. PROPEDEUTICA improved the detection F1-score from 77.54% (conventional\nML method) to 90.25%, and reduced the detection time by 54.86%. Further, the\npercentage of software subjected to DL analysis was approximately 40% on\naverage. Moreover, the application of delays in software subjected to ML reduced\nthe detection time by approximately 10%. Finally, we found and discussed a\ndiscrepancy between the detection accuracy offline (analysis after all traces\nare collected) and on-the-fly (analysis in tandem with trace collection). Our\ninsights show that conventional ML and modern DL-based malware detectors in\nisolation cannot meet the needs of efficient and effective malware detection:\nhigh accuracy, low false positive rate, and short classification time.\n",
"title": "Learning Fast and Slow: PROPEDEUTICA for Real-time Malware Detection"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
12162
| null |
Validated
| null | null |
null |
{
"abstract": " Organisms use hair-like cilia that beat in a metachronal fashion to actively\ntransport fluid and suspended particles. Metachronal motion emerges due to a\nphase difference between beating cycles of neighboring cilia and appears as\ntraveling waves propagating along the ciliary carpet. In this work, we demonstrate\nbiomimetic artificial cilia capable of metachronal motion. The cilia are\nmicromachined magnetic thin filaments attached at one end to a substrate and\nactuated by a uniform rotating magnetic field. We show that the difference in\nmagnetic cilium length controls the phase of the beating motion. We use this\nproperty to induce metachronal waves within a ciliary array and explore the\neffect of operation parameters on the wave motion. The metachronal motion in\nour artificial system is shown to depend on the magnetic and elastic properties\nof the filaments, unlike natural cilia, where metachronal motion arises due to\nfluid coupling. Our approach enables an easy integration of metachronal\nmagnetic cilia in lab-on-a-chip devices for enhanced fluid and particle\nmanipulations.\n",
"title": "Metachronal motion of artificial magnetic cilia"
}
| null | null |
[
"Quantitative Biology"
] | null | true | null |
12163
| null |
Validated
| null | null |
null |
{
"abstract": " In multi-server distributed queueing systems, the access of stochastically\narriving jobs to resources is often regulated by a dispatcher, also known as\na load balancer. A fundamental problem consists in designing a load balancing\nalgorithm that minimizes the delays experienced by jobs. During the last two\ndecades, the power-of-$d$-choice algorithm, based on the idea of dispatching\neach job to the least loaded server out of $d$ servers randomly sampled at the\narrival of the job itself, has emerged as a breakthrough in the foundations of\nthis area due to its versatility and appealing asymptotic properties. In this\npaper, we consider the power-of-$d$-choice algorithm with the addition of a\nlocal memory that keeps track of the latest observations collected over time on\nthe sampled servers. Then, each job is sent to a server with the lowest\nobservation. We show that this algorithm is asymptotically optimal in the sense\nthat the load balancer can always assign each job to an idle server in the\nlarge-server limit. This holds true if and only if the system load $\lambda$ is\nless than $1-\frac{1}{d}$. If this condition is not satisfied, we show that\nqueue lengths are bounded by $j^\star+1$, where $j^\star\in\mathbb{N}$ is given\nby the solution of a polynomial equation. This is in contrast with the classic\nversion of the power-of-$d$-choice algorithm, where queue lengths are\nunbounded. Our upper bound on the size of the most loaded server, $j^\star+1$, is\ntight and increases slowly when $\lambda$ approaches its critical value from\nbelow. For instance, when $\lambda= 0.995$ and $d=2$ (respectively, $d=3$), we\nfind that no server will contain more than just $5$ ($3$) jobs in equilibrium.\nOur results quantify and highlight the importance of using memory as a means to\nenhance performance in randomized load balancing.\n",
"title": "Power-of-$d$-Choices with Memory: Fluid Limit and Optimality"
}
| null | null | null | null | true | null |
12164
| null |
Default
| null | null |
null |
{
"abstract": " Convolutional neural networks (CNNs) have state-of-the-art performance on\nmany problems in machine vision. However, networks with superior performance\noften have millions of weights so that it is difficult or impossible to use\nCNNs on computationally limited devices or to humanly interpret them. A myriad\nof CNN compression approaches have been proposed and they involve pruning and\ncompressing the weights and filters. In this article, we introduce a greedy\nstructural compression scheme that prunes filters in a trained CNN. We define a\nfilter importance index equal to the classification accuracy reduction (CAR) of\nthe network after pruning that filter (similarly defined as RAR for\nregression). We then iteratively prune filters based on the CAR index. This\nalgorithm achieves substantially higher classification accuracy in AlexNet\ncompared to other structural compression schemes that prune filters. Pruning\nhalf of the filters in the first or second layer of AlexNet, our CAR algorithm\nachieves 26% and 20% higher classification accuracies respectively, compared to\nthe best benchmark filter pruning scheme. Our CAR algorithm, combined with\nfurther weight pruning and compressing, reduces the size of first or second\nconvolutional layer in AlexNet by a factor of 42, while achieving close to the\noriginal classification accuracy through retraining (or fine-tuning) the network.\nFinally, we demonstrate the interpretability of CAR-compressed CNNs by showing\nthat our algorithm prunes filters with visually redundant functionalities. In\nfact, out of the top 20 CAR-pruned filters in AlexNet, 17 of them in the first\nlayer and 14 of them in the second layer are color-selective filters as opposed\nto shape-selective filters. To our knowledge, this is the first reported result\non the connection between compression and interpretability of CNNs.\n",
"title": "Structural Compression of Convolutional Neural Networks Based on Greedy Filter Pruning"
}
| null | null | null | null | true | null |
12165
| null |
Default
| null | null |
null |
{
"abstract": " Recently, a repeating fast radio burst (FRB) 121102 has been confirmed to be\nan extragalactic event and a persistent radio counterpart has been identified.\nWhile other possibilities are not ruled out, the emission properties are\nbroadly consistent with Murase et al. (2016), who theoretically proposed\nquasi-steady radio emission as a counterpart of both FRBs and pulsar-driven\nsupernovae. Here we constrain the model parameters of such a young neutron star\nscenario for FRB 121102. If the associated supernova has a conventional ejecta\nmass of $M_{\rm ej}\gtrsim{\rm a \ few}\ M_\odot$, a neutron star with an age\nof $t_{\rm age} \sim 10-100 \ \rm yrs$, an initial spin period of $P_{i}\n\lesssim$ a few ms, and a dipole magnetic field of $B_{\rm dip} \lesssim {\rm a\n\ few} \times 10^{13} \ \rm G$ can be compatible with the observations.\nHowever, in this case, the magnetically-powered scenario may be favored as an\nFRB energy source because of the efficiency problem in the rotation-powered\nscenario. On the other hand, if the associated supernova is an ultra-stripped\none or the neutron star is born by the accretion-induced collapse with $M_{\rm\nej} \sim 0.1 \ M_\odot$, a younger neutron star with $t_{\rm age} \sim 1-10$\nyrs can be the persistent radio source and might produce FRBs with the\nspin-down power. These possibilities can be distinguished by the decline rate\nof the quasi-steady radio counterpart.\n",
"title": "Testing the Young Neutron Star Scenario with Persistent Radio Emission Associated with FRB 121102"
}
| null | null | null | null | true | null |
12166
| null |
Default
| null | null |
null |
{
"abstract": " A vertex or edge in a graph is critical if its deletion reduces the chromatic\nnumber of the graph by 1. We consider the problems of deciding whether a graph\nhas a critical vertex or edge, respectively. We give a complexity dichotomy for\nboth problems restricted to $H$-free graphs, that is, graphs with no induced\nsubgraph isomorphic to $H$. Moreover, we show that an edge is critical if and\nonly if its contraction reduces the chromatic number by 1. Hence, we also\nobtain a complexity dichotomy for the problem of deciding if a graph has an\nedge whose contraction reduces the chromatic number by 1.\n",
"title": "Critical Vertices and Edges in $H$-free Graphs"
}
| null | null | null | null | true | null |
12167
| null |
Default
| null | null |
null |
{
"abstract": " We prove transverse Weitzenböck identities for the horizontal Laplacians of\na totally geodesic foliation. As a consequence, we obtain nullity theorems for\nthe de Rham cohomology assuming only the positivity of curvature quantities\ntransverse to the leaves. Those curvature quantities appear in the adiabatic\nlimit of the canonical variation of the metric.\n",
"title": "Transverse Weitzenböck formulas and de Rham cohomology of totally geodesic foliations"
}
| null | null | null | null | true | null |
12168
| null |
Default
| null | null |
null |
{
"abstract": " In this work we present an adaptive Newton-type method to solve nonlinear\nconstrained optimization problems in which the constraint is a system of\npartial differential equations discretized by the finite element method. The\nadaptive strategy is based on a goal-oriented a posteriori error estimation for\nthe discretization and for the iteration error. The iteration error stems from\nan inexact solution of the nonlinear system of first order optimality\nconditions by the Newton-type method. This strategy allows us to balance the two\nerrors and to derive effective stopping criteria for the Newton-iterations. The\nalgorithm proceeds with the search of the optimal point on coarse grids which\nare refined only if the discretization error becomes dominant. Using computable\nerror indicators the mesh is refined locally leading to a highly efficient\nsolution process. The performance of the algorithm is shown with several\nexamples and in particular with an application in the neurosciences: the\noptimal electrode design for the study of neuronal networks.\n",
"title": "An adaptive Newton algorithm for optimal control problems with application to optimal electrode design"
}
| null | null | null | null | true | null |
12169
| null |
Default
| null | null |
null |
{
"abstract": " In 1885, Fedorov discovered that a convex domain can form a lattice tiling of\nthe Euclidean plane if and only if it is a parallelogram or a centrally\nsymmetric hexagon. It is known that there is no other convex domain which can\nform a two-, three- or four-fold lattice tiling in the Euclidean plane, but\nthere is a centrally symmetric convex decagon which can form a five-fold\nlattice tiling. This paper characterizes all the convex domains which can form\na five-fold lattice tiling of the Euclidean plane.\n",
"title": "Characterization of the Two-Dimensional Five-Fold Lattice Tiles"
}
| null | null |
[
"Mathematics"
] | null | true | null |
12170
| null |
Validated
| null | null |
null |
{
"abstract": " The spectrum of $L^2$ on a pseudo-unitary group $U(p,q)$ (we assume $p\ge q$)\nnaturally splits into $q+1$ types. We write explicit orthogonal projectors in\n$L^2$ to subspaces with uniform spectra (this is an old question formulated by\nGelfand and Gindikin). We also write two finer separations of $L^2$. In the\nfirst case pieces are enumerated by $r=0$, 1,..., $q$ and representations of\ndiscrete series of $U(p-r,q-r)$. In the second case\npieces are enumerated by all discrete parameters of the tempered spectrum of\n$U(p,q)$.\n",
"title": "Projectors separating spectra for $L^2$ on pseudounitary groups $U(p,q)$"
}
| null | null | null | null | true | null |
12171
| null |
Default
| null | null |
null |
{
"abstract": " Objective: Predict patient-specific vitals deemed medically acceptable for\ndischarge from a pediatric intensive care unit (ICU). Design: The means of each\npatient's hr, sbp and dbp measurements between their medical and physical\ndischarge from the ICU were computed as a proxy for their physiologically\nacceptable state space (PASS) for successful ICU discharge. These individual\nPASS values were compared via root mean squared error (rMSE) to population\nage-normal vitals, a polynomial regression through the PASS values of a\nPediatric ICU (PICU) population and predictions from two recurrent neural\nnetwork models designed to predict personalized PASS within the first twelve\nhours following ICU admission. Setting: PICU at Children's Hospital Los Angeles\n(CHLA). Patients: 6,899 PICU episodes (5,464 patients) collected between 2009\nand 2016. Interventions: None. Measurements: Each episode's data contained 375\nvariables representing vitals, labs, interventions, and drugs. They also\nincluded a time indicator for PICU medical discharge and physical discharge.\nMain Results: The rMSEs between individual PASS values and population\nage-normals (hr: 25.9 bpm, sbp: 13.4 mmHg, dbp: 13.0 mmHg) were larger than the\nrMSEs corresponding to the polynomial regression (hr: 19.1 bpm, sbp: 12.3 mmHg,\ndbp: 10.8 mmHg). The rMSEs from the best performing RNN model were the lowest\n(hr: 16.4 bpm; sbp: 9.9 mmHg, dbp: 9.0 mmHg). Conclusion: PICU patients are a\nunique subset of the general population, and general age-normal vitals may not\nbe suitable as target values indicating physiologic stability at discharge.\nAge-normal vitals that were specifically derived from the medical-to-physical\ndischarge window of ICU patients may be more appropriate targets for\n'acceptable' physiologic state for critical care patients. Going beyond simple\nage bins, an RNN model can provide more personalized target values.\n",
"title": "Predicting Individual Physiologically Acceptable States for Discharge from a Pediatric Intensive Care Unit"
}
| null | null | null | null | true | null |
12172
| null |
Default
| null | null |
null |
{
"abstract": " One of the key challenges of visual perception is to extract abstract models\nof 3D objects and object categories from visual measurements, which are\naffected by complex nuisance factors such as viewpoint, occlusion, motion, and\ndeformations. Starting from the recent idea of viewpoint factorization, we\npropose a new approach that, given a large number of images of an object and no\nother supervision, can extract a dense object-centric coordinate frame. This\ncoordinate frame is invariant to deformations of the images and comes with a\ndense equivariant labelling neural network that can map image pixels to their\ncorresponding object coordinates. We demonstrate the applicability of this\nmethod to simple articulated objects and deformable objects such as human\nfaces, learning embeddings from random synthetic transformations or optical\nflow correspondences, all without any manual supervision.\n",
"title": "Unsupervised learning of object frames by dense equivariant image labelling"
}
| null | null | null | null | true | null |
12173
| null |
Default
| null | null |
null |
{
"abstract": " We discuss channel surfaces in the context of Lie sphere geometry and\ncharacterise them as certain $\\Omega_{0}$-surfaces. Since $\\Omega_{0}$-surfaces\npossess a rich transformation theory, we study the behaviour of channel\nsurfaces under these transformations. Furthermore, by using certain Dupin\ncyclide congruences, we characterise Ribaucour pairs of channel surfaces.\n",
"title": "Channel surfaces in Lie sphere geometry"
}
| null | null | null | null | true | null |
12174
| null |
Default
| null | null |
null |
{
"abstract": " This paper presents an acceleration framework for packing linear programming\nproblems where the amount of data available is limited, i.e., where the number\nof constraints m is small compared to the variable dimension n. The framework\ncan be used as a black box to speed up linear programming solvers dramatically,\nby two orders of magnitude in our experiments. We present worst-case guarantees\non the quality of the solution and the speedup provided by the algorithm,\nshowing that the framework provides an approximately optimal solution while\nrunning the original solver on a much smaller problem. The framework can be\nused to accelerate exact solvers, approximate solvers, and parallel/distributed\nsolvers. Further, it can be used for both linear programs and integer linear\nprograms.\n",
"title": "A Parallelizable Acceleration Framework for Packing Linear Programs"
}
| null | null | null | null | true | null |
12175
| null |
Default
| null | null |
null |
{
"abstract": " We obtain $L^p$ regularity for the Bergman projection on some Reinhardt\ndomains. We start with a bounded initial domain $\Omega$ with some symmetry\nproperties and generate successor domains in higher dimensions. We prove: If\nthe Bergman kernel on $\Omega$ satisfies appropriate estimates, then the\nBergman projection on the successor is $L^p$ bounded. For example, the Bergman\nprojection on successors of strictly pseudoconvex initial domains is bounded on\n$L^p$ for $1<p<\infty$. The successor domains need not have smooth boundary nor\nbe strictly pseudoconvex.\n",
"title": "$L^p$ estimates for the Bergman projection on some Reinhardt domains"
}
| null | null |
[
"Mathematics"
] | null | true | null |
12176
| null |
Validated
| null | null |
null |
{
"abstract": " We present a simple encoding for unlabeled noncrossing graphs and show how\nits latent counterpart helps us to represent several families of directed and\nundirected graphs used in syntactic and semantic parsing of natural language as\ncontext-free languages. The families are separated purely on the basis of\nforbidden patterns in latent encoding, eliminating the need to differentiate\nthe families of non-crossing graphs in inference algorithms: one algorithm\nworks for all when the search space can be controlled in parser input.\n",
"title": "Generic Axiomatization of Families of Noncrossing Graphs in Dependency Parsing"
}
| null | null | null | null | true | null |
12177
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we investigate the numerical approximation of an analogue of\nthe Wasserstein distance for optimal transport on graphs that is defined via a\ndiscrete modification of the Benamou--Brenier formula. This approach involves\nthe logarithmic mean of measure densities on adjacent nodes of the graph. For\nthis model a variational time discretization of the probability densities on\ngraph nodes and the momenta on graph edges is proposed. A robust descent\nalgorithm for the action functional is derived, which in particular uses a\nproximal splitting with an edgewise nonlinear projection on the convex subgraph\nof the logarithmic mean. Thereby, suitably chosen slack variables avoid a\nglobal coupling of probability densities on all graph nodes in the projection\nstep. For the time-discrete action functional, $\Gamma$-convergence to the\ntime-continuous action is established. Numerical results for a selection of test\ncases show qualitative and quantitative properties of the optimal transport on\ngraphs. Finally, we use our algorithm to implement a JKO scheme for the\ngradient flow of the entropy in the discrete transportation distance, which is\nknown to coincide with the underlying Markov semigroup, and test our results\nagainst a classical backward Euler discretization of this discrete heat flow.\n",
"title": "Computation of Optimal Transport on Discrete Metric Measure Spaces"
}
| null | null | null | null | true | null |
12178
| null |
Default
| null | null |
null |
{
"abstract": " According to a result of Ehresmann, the torsions of integral homology of real\nGrassmannian are all of order $2$. In this note, we compute the\n$\mathbb{Z}_2$-dimensions of torsions in the integral homology and cohomology\nof real Grassmannian.\n",
"title": "Torsions of integral homology and cohomology of real Grassmannians"
}
| null | null | null | null | true | null |
12179
| null |
Default
| null | null |
null |
{
"abstract": " Many signal processing algorithms operate by breaking the target signal into\npossibly overlapping segments (typically called windows or patches), processing\nthem separately, and then stitching them back into place to produce a unified\noutput. In most cases where patch overlapping occurs, the final value of those\nsamples that are estimated by more than one patch is resolved by averaging\nthose estimates; this includes many recent image processing algorithms. In\nother cases, typically frequency-based restoration methods, the average is\nimplicitly weighted by some window function such as Hanning, Blackman, etc.\nwhich is applied prior to the Fourier/DCT transform in order to avoid Gibbs\noscillations in the processed patches. Such averaging may incidentally help in\ncovering up artifacts in the restoration process, but more often will simply\ndegrade the overall result, posing an upper limit to the size of the patches\nthat can be used. In order to avoid such drawbacks, we propose a new\nmethodology where the different estimates of any given sample are forced to be\nidentical. We show that, together, these consensus constraints constitute a\nnon-empty convex feasible set, provide a general formulation of the resulting\nconstrained optimization problem which can be applied to a wide variety of\nsignal restoration tasks, and propose an efficient algorithm for finding the\ncorresponding solutions. Finally, we describe in detail the application of the\nproposed methodology to three different signal processing problems, in some\ncases surpassing the state of the art by a significant margin.\n",
"title": "PACO: Signal Restoration via PAtch COnsensus"
}
| null | null | null | null | true | null |
12180
| null |
Default
| null | null |
null |
{
"abstract": " We study the hyperplane arrangements associated, via the minimal model\nprogramme, to symplectic quotient singularities. We show that this hyperplane\narrangement equals the arrangement of CM-hyperplanes coming from the\nrepresentation theory of restricted rational Cherednik algebras. We explain\nsome of the interesting consequences of this identification for the\nrepresentation theory of restricted rational Cherednik algebras. We also show\nthat the Calogero-Moser space is smooth if and only if the Calogero-Moser\nfamilies are trivial. We describe the arrangements of CM-hyperplanes associated\nto several exceptional complex reflection groups, some of which are free.\n",
"title": "Hyperplane arrangements associated to symplectic quotient singularities"
}
| null | null |
[
"Mathematics"
] | null | true | null |
12181
| null |
Validated
| null | null |
null |
{
"abstract": " We report an extension of a Keras Model, called CTCModel, to perform the\nConnectionist Temporal Classification (CTC) in a transparent way. Combined with\nRecurrent Neural Networks, the Connectionist Temporal Classification is the\nreference method for dealing with unsegmented input sequences, i.e. with data\nthat are a couple of observation and label sequences where each label is\nrelated to a subset of observation frames. CTCModel makes use of the CTC\nimplementation in the Tensorflow backend for training and predicting sequences\nof labels using Keras. It consists of three branches made of Keras models: one\nfor training, computing the CTC loss function; one for predicting, providing\nsequences of labels; and one for evaluating that returns standard metrics for\nanalyzing sequences of predictions.\n",
"title": "CTCModel: a Keras Model for Connectionist Temporal Classification"
}
| null | null | null | null | true | null |
12182
| null |
Default
| null | null |
null |
{
"abstract": " A large user base relies on software updates provided through package\nmanagers. This provides a unique lever for improving the security of the\nsoftware update process. We propose a transparency system for software updates\nand implement it for a widely deployed Linux package manager, namely APT. Our\nsystem is capable of detecting targeted backdoors without producing overhead\nfor maintainers. In addition, in our system, the availability of source code is\nensured, the binding between source and binary code is verified using\nreproducible builds, and the maintainer responsible for distributing a specific\npackage can be identified. We describe a novel \"hidden version\" attack against\ncurrent software transparency systems and propose as well as integrate a\nsuitable defense. To address equivocation attacks by the transparency log\nserver, we introduce tree root cross logging, where the log's Merkle tree root\nis submitted into a separately operated log server. This significantly relaxes\nthe inter-operator cooperation requirements compared to other systems. Our\nimplementation is evaluated by replaying over 3000 updates of the Debian\noperating system over the course of two years, demonstrating its viability and\nidentifying numerous irregularities.\n",
"title": "Software Distribution Transparency and Auditability"
}
| null | null | null | null | true | null |
12183
| null |
Default
| null | null |
null |
{
"abstract": " The analysis of telemetry data is common in animal ecological studies. While\nthe collection of telemetry data for individual animals has improved\ndramatically, the methods to properly account for inherent uncertainties (e.g.,\nmeasurement error, dependence, barriers to movement) have lagged behind. Still,\nmany new statistical approaches have been developed to infer unknown quantities\naffecting animal movement or predict movement based on telemetry data.\nHierarchical statistical models are useful to account for some of the\naforementioned uncertainties, as well as provide population-level inference,\nbut they often come with an increased computational burden. For certain types\nof statistical models, it is straightforward to provide inference if the latent\ntrue animal trajectory is known, but challenging otherwise. In these cases,\napproaches related to multiple imputation have been employed to account for the\nuncertainty associated with our knowledge of the latent trajectory. Despite the\nincreasing use of imputation approaches for modeling animal movement, the\ngeneral sensitivity and accuracy of these methods have not been explored in\ndetail. We provide an introduction to animal movement modeling and describe how\nimputation approaches may be helpful for certain types of models. We also\nassess the performance of imputation approaches in a simulation study. Our\nsimulation study suggests that inference for model parameters directly related\nto the location of an individual may be more accurate than inference for\nparameters associated with higher-order processes such as velocity or\nacceleration. Finally, we apply these methods to analyze a telemetry data set\ninvolving northern fur seals (Callorhinus ursinus) in the Bering Sea.\n",
"title": "Imputation Approaches for Animal Movement Modeling"
}
| null | null | null | null | true | null |
12184
| null |
Default
| null | null |
null |
{
"abstract": " Motion planning is a key tool that allows robots to navigate through an\nenvironment without collisions. The problem of robot motion planning has been\nstudied in great detail over the last several decades, with researchers\ninitially focusing on systems such as planar mobile robots and low\ndegree-of-freedom (DOF) robotic arms. The increased use of high DOF robots that\nmust perform tasks in real time in complex dynamic environments spurs the need\nfor fast motion planning algorithms. In this overview, we discuss several types\nof strategies for motion planning in high dimensional spaces and dissect some\nof them, namely grid search based, sampling based and trajectory optimization\nbased approaches. We compare them and outline their advantages and\ndisadvantages, and finally, provide an insight into future research\nopportunities.\n",
"title": "Motion planning in high-dimensional spaces"
}
| null | null | null | null | true | null |
12185
| null |
Default
| null | null |
null |
{
"abstract": " A repulsive Coulomb interaction between electrons in different orbitals in\ncorrelated materials can give rise to bound quasiparticle states. We study the\nnon-hybridized two-orbital Hubbard model with intra (inter)-orbital interaction\n$U$ ($U_{12}$) and different band widths using an improved dynamical mean field\ntheory numerical technique which leads to reliable spectra on the real energy\naxis directly at zero temperature. We find that a finite density of states at\nthe Fermi energy in one band is correlated with the emergence of well defined\nquasiparticle states at excited energies $\\Delta=U-U_{12}$ in the other band.\nThese excitations are inter-band holon-doublon bound states. At the symmetric\npoint $U=U_{12}$, the quasiparticle peaks are located at the Fermi energy,\nleading to a simultaneous and continuous Mott transition settling a\nlong-standing controversy.\n",
"title": "Emergent low-energy bound states in the two-orbital Hubbard model"
}
| null | null | null | null | true | null |
12186
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, an improved thermal lattice Boltzmann (LB) model is proposed\nfor simulating liquid-vapor phase change, which is aimed at improving an\nexisting thermal LB model for liquid-vapor phase change [S. Gong and P. Cheng,\nInt. J. Heat Mass Transfer 55, 4923 (2012)]. First, we emphasize that the\nreplacement of \\[{\\left( {\\rho {c_V}} \\right)^{ - 1}}\\nabla \\cdot \\left(\n{\\lambda \\nabla T} \\right)\\] with \\[\\nabla \\cdot \\left( {\\chi \\nabla T}\n\\right)\\] is an inappropriate treatment for diffuse interface modeling of\nliquid-vapor phase change. Furthermore, the error terms ${\\partial_{t0}}\\left(\n{Tv} \\right) + \\nabla \\cdot \\left( {Tvv} \\right)$, which exist in the\nmacroscopic temperature equation recovered from the standard thermal LB\nequation, are eliminated in the present model through a way that is consistent\nwith the philosophy of the LB method. In addition, the discrete effect of the\nsource term is also eliminated in the present model. Numerical simulations are\nperformed for droplet evaporation and bubble nucleation to validate the\ncapability of the improved model for simulating liquid-vapor phase change.\nNumerical comparisons show that the aforementioned replacement leads to\nsignificant numerical errors and the error terms in the recovered macroscopic\ntemperature equation also result in considerable errors.\n",
"title": "Improved thermal lattice Boltzmann model for simulation of liquid-vapor phase change"
}
| null | null |
[
"Physics"
] | null | true | null |
12187
| null |
Validated
| null | null |
null |
{
"abstract": " Interactions and effect aliasing are among the fundamental concepts in\nexperimental design. In this paper, some new insights and approaches are\nprovided on these subjects. In the literature, the \"de-aliasing\" of aliased\neffects is deemed to be impossible. We argue that this \"impossibility\" can\nindeed be resolved by employing a new approach which consists of\nreparametrization of effects and exploitation of effect non-orthogonality. This\napproach is successfully applied to three classes of designs: regular and\nnonregular two-level fractional factorial designs, and three-level fractional\nfactorial designs. For reparametrization, the notion of conditional main\neffects (cme's) is employed for two-level regular designs, while the\nlinear-quadratic system is used for three-level designs. For nonregular\ntwo-level designs, reparametrization is not needed because the partial aliasing\nof their effects already induces non-orthogonality. The approach can be\nextended to general observational data by using a new bi-level variable\nselection technique based on the cme's. A historical recollection is given on\nhow these ideas were discovered.\n",
"title": "A fresh look at effect aliasing and interactions: some new wine in old bottles"
}
| null | null | null | null | true | null |
12188
| null |
Default
| null | null |
null |
{
"abstract": " Importance-weighting is a popular and well-researched technique for dealing\nwith sample selection bias and covariate shift. It has desirable\ncharacteristics such as unbiasedness, consistency and low computational\ncomplexity. However, weighting can have a detrimental effect on an estimator as\nwell. In this work, we empirically show that the sampling distribution of an\nimportance-weighted estimator can be skewed. For sample selection bias\nsettings, and for small sample sizes, the importance-weighted risk estimator\nproduces overestimates for datasets in the body of the sampling distribution,\ni.e. the majority of cases, and large underestimates for data sets in the tail\nof the sampling distribution. These over- and underestimates of the risk lead\nto suboptimal regularization parameters when used for importance-weighted\nvalidation.\n",
"title": "Effects of sampling skewness of the importance-weighted risk estimator on model selection"
}
| null | null | null | null | true | null |
12189
| null |
Default
| null | null |
null |
{
"abstract": " A group of mobile agents is given a task to explore an edge-weighted graph\n$G$, i.e., every vertex of $G$ has to be visited by at least one agent. There\nis no centralized unit to coordinate their actions, but they can freely\ncommunicate with each other. The goal is to construct a deterministic strategy\nwhich allows agents to complete their task optimally. In this paper we are\ninterested in a cost-optimal strategy, where the cost is understood as the\ntotal distance traversed by agents coupled with the cost of invoking them. Two\ngraph classes are analyzed, rings and trees, in the off-line and on-line\nsetting, i.e., when a structure of a graph is known and not known to agents in\nadvance. We present algorithms that compute the optimal solutions for a given\nring and tree of order $n$, in $O(n)$ time units. For rings in the on-line\nsetting, we give the $2$-competitive algorithm and prove the lower bound of\n$3/2$ for the competitive ratio for any on-line strategy. For every strategy\nfor trees in the on-line setting, we prove the competitive ratio to be no less\nthan $2$, which can be achieved by the $DFS$ algorithm.\n",
"title": "Minimizing the Cost of Team Exploration"
}
| null | null | null | null | true | null |
12190
| null |
Default
| null | null |
null |
{
"abstract": " We propose a source/channel duality in the exponential regime, where\nsuccess/failure in source coding parallels error/correctness in channel coding,\nand a distortion constraint becomes a log-likelihood ratio (LLR) threshold. We\nestablish this duality by first deriving exact exponents for lossy coding of a\nmemoryless source P, at distortion D, for a general i.i.d. codebook\ndistribution Q, for both encoding success (R < R(P,Q,D)) and failure (R >\nR(P,Q,D)). We then turn to maximum likelihood (ML) decoding over a memoryless\nchannel P with an i.i.d. input Q, and show that if we substitute P=QP, Q=Q, and\nD=0 under the LLR distortion measure, then the exact exponents for\ndecoding-error (R < I(Q, P)) and strict correct-decoding (R > I(Q, P)) follow\nas special cases of the exponents for source encoding success/failure,\nrespectively. Moreover, by letting the threshold D take general values, the\nexact random-coding exponents for erasure (D > 0) and list decoding (D < 0)\nunder the simplified Forney decoder are obtained. Finally, we derive the exact\nrandom-coding exponent for Forney's optimum tradeoff erasure/list decoder, and\nshow that at the erasure regime it coincides with Forney's lower bound and with\nthe simplified decoder exponent.\n",
"title": "Exponential Source/Channel Duality"
}
| null | null | null | null | true | null |
12191
| null |
Default
| null | null |
null |
{
"abstract": " We provide a new version of the delta theorem that takes into account high\ndimensional parameter estimation. We show that depending on the structure of\nthe function, the limits of functions of estimators have faster or slower rate\nof convergence than the limits of estimators. We illustrate this via two\nexamples. First, we use it for testing in high dimensions, and second in\nestimating large portfolio risk. Our theorem works in the case of a larger number\nof parameters, $p$, than the sample size, $n$: $p>n$.\n",
"title": "Delta Theorem in the Age of High Dimensions"
}
| null | null | null | null | true | null |
12192
| null |
Default
| null | null |
null |
{
"abstract": " Interior tomography for the region-of-interest (ROI) imaging has advantages\nof using a small detector and reducing X-ray radiation dose. However, standard\nanalytic reconstruction suffers from severe cupping artifacts due to existence\nof null space in the truncated Radon transform. Existing penalized\nreconstruction methods may address this problem but they require extensive\ncomputations due to the iterative reconstruction. Inspired by the recent deep\nlearning approaches to low-dose and sparse view CT, here we propose a deep\nlearning architecture that removes null space signals from the FBP\nreconstruction. Experimental results have shown that the proposed method\nprovides near-perfect reconstruction with about 7-10 dB improvement in PSNR\nover existing methods in spite of significantly reduced run-time complexity.\n",
"title": "Deep Learning Interior Tomography for Region-of-Interest Reconstruction"
}
| null | null | null | null | true | null |
12193
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we present a new algorithm for parallel Monte Carlo tree\nsearch (MCTS). It is based on the pipeline pattern and allows flexible\nmanagement of the control flow of the operations in parallel MCTS. The pipeline\npattern provides for the first structured parallel programming approach to\nMCTS. Moreover, we propose a new lock-free tree data structure for parallel\nMCTS which removes synchronization overhead. The Pipeline Pattern for Parallel\nMCTS algorithm (called 3PMCTS) scales very well to higher numbers of cores\nwhen compared to the existing methods.\n",
"title": "Structured Parallel Programming for Monte Carlo Tree Search"
}
| null | null | null | null | true | null |
12194
| null |
Default
| null | null |
null |
{
"abstract": " Extreme nanowires (ENs) represent the ultimate class of crystals: They are\nthe smallest possible periodic materials. With atom-wide motifs repeated in one\ndimension (1D), they offer a privileged perspective into the Physics and\nChemistry of low-dimensional systems. Single-walled carbon nanotubes (SWCNTs)\nprovide ideal environments for the creation of such materials. Here we present\na comprehensive study of Te ENs encapsulated inside ultra-narrow SWCNTs with\ndiameters between 0.7 nm and 1.1 nm. We combine state-of-the-art imaging\ntechniques and 1D-adapted ab initio structure prediction to treat both\nconfinement and periodicity effects. The studied Te ENs adopt a variety of\nstructures, exhibiting a true 1D realisation of a Peierls structural distortion\nand transition from metallic to insulating behaviour as a function of\nencapsulating diameter. We analyse the mechanical stability of the encapsulated\nENs and show that nanoconfinement is not only a useful means to produce ENs,\nbut may actually be necessary, in some cases, to prevent them from\ndisintegrating. The ability to control functional properties of these ENs with\nconfinement has numerous applications in future device technologies, and we\nanticipate that our study will set the basic paradigm to be adopted in the\ncharacterisation and understanding of such systems.\n",
"title": "Single-Atom Scale Structural Selectivity in Te Nanowires Encapsulated inside Ultra-Narrow, Single-Walled Carbon Nanotubes"
}
| null | null |
[
"Physics"
] | null | true | null |
12195
| null |
Validated
| null | null |
null |
{
"abstract": " The advent of miniaturized biologging devices has provided ecologists with\nunparalleled opportunities to record animal movement across scales, and led to\nthe collection of ever-increasing quantities of tracking data. In parallel,\nsophisticated tools to process, visualize and analyze tracking data have been\ndeveloped in abundance. Within the R software alone, we listed 57 focused on\nthese tasks, called here tracking packages. Here, we reviewed these tracking\npackages, as an introduction to this set of packages for researchers, and to\nprovide feedback and recommendations to package developers, from a user\nperspective. We described each package based on a workflow centered around\ntracking data (i.e. (x,y,t)), broken down in three stages: pre-processing,\npost-processing, and analysis (data visualization, track description, path\nreconstruction, behavioral pattern identification, space use characterization,\ntrajectory simulation and others).\nSupporting documentation is key to the accessibility of a package for users.\nBased on a user survey, we reviewed the quality of packages' documentation, and\nidentified $12$ packages with good or excellent documentation. Links between\npackages were assessed through a network graph analysis. Although a large group\nof packages shows some degree of connectivity (either depending on functions or\nsuggesting the use of another tracking package), a third of tracking packages\nwork in isolation, reflecting a fragmentation in the R Movement-Ecology\nprogramming community.\nFinally, we provide recommendations for users to choose packages, and for\ndevelopers to maximize usefulness of their contribution and strengthen the\nlinks between the programming community.\n",
"title": "Navigating through the R packages for movement"
}
| null | null | null | null | true | null |
12196
| null |
Default
| null | null |
null |
{
"abstract": " We study image classification and retrieval performance in a feature space\ngiven by random depthwise convolutional neural networks. Intuitively our\nnetwork can be interpreted as applying random hyperplanes to the space of all\npatches of input images followed by average pooling to obtain final features.\nWe show that the ratio of image pixel distribution similarity across classes to\nwithin classes and the average margin of the linear support vector machine on\ntest data are both higher in our network's final layer compared to the input\nspace. We then apply the linear support vector machine for image classification\nand $k$-nearest neighbor for image similarity detection on our network's final\nlayer. We show that for classification our network attains higher accuracies\nthan previous random networks and is not far behind in accuracy to trained\nstate of the art networks, especially in the top-k setting. For example the\ntop-2 accuracy of our network is near 90\\% on both CIFAR10 and a 10-class mini\nImageNet, and 85\\% on STL10. In the problem of image similarity we find that\n$k$-nearest neighbor gives a comparable precision on the Corel Princeton Image\nSimilarity Benchmark than if we were to use the last hidden layer of trained\nnetworks. We highlight sensitivity of our network to background color as a\npotential pitfall. Overall our work pushes the boundary of what can be achieved\nwith random weights.\n",
"title": "Image classification and retrieval with random depthwise signed convolutional neural networks"
}
| null | null | null | null | true | null |
12197
| null |
Default
| null | null |
null |
{
"abstract": " Understanding the emergence of biological structures and their changes is a\ncomplex problem. On a biochemical level, it is based on gene regulatory\nnetworks (GRN) consisting of interactions between the genes responsible for\ncell differentiation and coupled on a greater scale with external factors. In\nthis work we provide a systematic methodological framework to construct\nWaddington's epigenetic landscape of the GRN involved in cellular determination\nduring the early stages of development of angiosperms. As a specific example we\nconsider the flower of the plant \textit{Arabidopsis thaliana}. Our model,\nwhich is based on experimental data, recovers accurately the spatial\nconfiguration of the flower during cell fate determination, not only for the\nwild type, but for its homeotic mutants as well. The method developed in this\nproject is general enough to be used in the study of the relationship between\ngenotype-phenotype in other living organisms.\n",
"title": "Spatial dynamics of flower organ formation"
}
| null | null |
[
"Quantitative Biology"
] | null | true | null |
12198
| null |
Validated
| null | null |
null |
{
"abstract": " Online experiments are a fundamental component of the development of\nweb-facing products. Given the large user-base, even small product improvements\ncan have a large impact on an absolute scale. As a result, accurately\nestimating the relative impact of these changes is extremely important. I\npropose an approach based on an objective Bayesian model to improve the\nsensitivity of percent change estimation in A/B experiments. Leveraging\npre-period information, this approach produces more robust and accurate point\nestimates and up to 50% tighter credible intervals than traditional methods.\nThe R package abpackage provides an implementation of the approach.\n",
"title": "Percent Change Estimation in Large Scale Online Experiments"
}
| null | null | null | null | true | null |
12199
| null |
Default
| null | null |
null |
{
"abstract": " We present here VRI spectrophotometry of 39 near-Earth asteroids (NEAs)\nobserved with the Sutherland, South Africa, node of the Korea Microlensing\nTelescope Network (KMTNet). Of the 39 NEAs, 19 were targeted, but because of\nKMTNet's large 2 deg by 2 deg field of view, 20 serendipitous NEAs were also\ncaptured in the observing fields. Targeted observations were performed within\n44 days (median: 16 days, min: 4 days) of each NEA's discovery date. Our\nbroadband spectrophotometry is reliable enough to distinguish among four\nasteroid taxonomies and we were able to confidently categorize 31 of the 39\nobserved targets as either an S-, C-, X- or D-type asteroid by means of a\nMachine Learning (ML) algorithm approach. Our data suggest that the ratio\nbetween \"stony\" S-type NEAs and \"not-stony\" (C+X+D)-type NEAs, with H\nmagnitudes between 15 and 25, is roughly 1:1. Additionally, we report ~1-hour\nlight curve data for each NEA and of the 39 targets we were able to resolve the\ncomplete rotation period and amplitude for six targets and report lower limits\nfor the remaining targets.\n",
"title": "Characterization of Near-Earth Asteroids using KMTNet-SAAO"
}
| null | null |
[
"Physics"
] | null | true | null |
12200
| null |
Validated
| null | null |