| text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, lengths 1-5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
null | {
"abstract": " Low-dimensional plasmonic materials can function as high quality terahertz\nand infrared antennas at deep subwavelength scales. Despite these antennas'\nstrong coupling to electromagnetic fields, there is a pressing need to further\nstrengthen their absorption. We address this problem by fabricating thick films\nof aligned, uniformly sized carbon nanotubes and showing that their plasmon\nresonances are strong, narrow, and broadly tunable. With thicknesses ranging\nfrom 25 to 250 nm, our films exhibit peak attenuation reaching 70%, quality\nfactors reaching 9, and electrostatically tunable peak frequencies by a factor\nof 2.3x. Excellent nanotube alignment leads to the attenuation being 99%\nlinearly polarized along the nanotube axis. Increasing the film thickness\nblueshifts the plasmon resonators down to peak wavelengths as low as 1.4\nmicrometers, promoting them to a new near-infrared regime in which they can\nboth overlap the S11 nanotube exciton energy and access the technologically\nimportant infrared telecom band.\n",
"title": "Strong and broadly tunable plasmon resonances in thick films of aligned carbon nanotubes"
} | null | null | null | null | true | null | 1301 | null | Default | null | null |
null | {
"abstract": " In this paper, we analyse the interaction between centralised carbon emissive\ntechnologies and distributed intermittent non-emissive technologies. A\nrepresentative consumer can satisfy his electricity demand by investing in\ndistributed generation (solar panels) and by buying power from a centralised\nfirm at a price the firm sets. Distributed generation is intermittent and\ninduces an externality cost to the consumer. The firm provides non-random\nelectricity generation subject to a carbon tax and to transmission costs. The\nobjective of the consumer is to satisfy her demand while minimising investment\ncosts, payments to the firm, and intermittency costs. The objective of the firm\nis to satisfy the consumer's residual demand while minimising investment costs,\ndemand deviation costs, and maximising the payments from the consumer. We\nformulate the investment decisions as McKean-Vlasov control problems with\nstochastic coefficients. We provide explicit, price model-free solutions to the\noptimal decision problems faced by each player, the solution of the Pareto\noptimum, and the laissez-faire market situation represented by a Stackelberg\nequilibrium where the firm is the leader. We find that, from the social\nplanner's point of view, the high adjustment cost of centralised technology\ndamages the development of distributed generation. The Stackelberg equilibrium\nleads to significant deviation from the socially desirable ratio of centralised\nversus distributed generation. In a situation where a power system is to be\nbuilt from zero, the optimal strategy of the firm is high price/low\nmarket-share, but is low price/large market share for existing power systems.\nFurther, from a regulation policy, we find that a carbon tax or a subsidy to\ndistributed technology has the same efficiency in achieving a given level of\ndistributed generation.\n",
"title": "The coordination of centralised and distributed generation"
} | null | null | null | null | true | null | 1302 | null | Default | null | null |
null | {
"abstract": " The first author introduced a relative symplectic capacity $C$ for a\nsymplectic manifold $(N,\\omega_N)$ and its subset $X$ which measures the\nexistence of non-contractible periodic trajectories of Hamiltonian isotopies on\nthe product of $N$ with the annulus $A_R=(R,R)\\times\\mathbb{R}/\\mathbb{Z}$. In\nthe present paper, we give an exact computation of the capacity $C$ of the\n$2n$-torus $\\mathbb{T}^{2n}$ relative to a Lagrangian submanifold\n$\\mathbb{T}^n$ which implies the existence of non-contractible Hamiltonian\nperiodic trajectories on $A_R\\times\\mathbb{T}^{2n}$. Moreover, we give a lower\nbound on the number of such trajectories.\n",
"title": "Computation of annular capacity by Hamiltonian Floer theory of non-contractible periodic trajectories"
} | null | null | null | null | true | null | 1303 | null | Default | null | null |
null | {
"abstract": " We report on the design and sensitivity of a new torsion pendulum for\nmeasuring the performance of ultra-precise inertial sensors and for the\ndevelopment of associated technologies for space-based gravitational wave\nobservatories and geodesy missions. The apparatus comprises a 1 m-long, 50\num-diameter, tungsten fiber that supports an inertial member inside a vacuum\nsystem. The inertial member is an aluminum crossbar with four hollow cubic test\nmasses at each end. This structure converts the rotation of the torsion\npendulum into translation of the test masses. Two test masses are enclosed in\ncapacitive sensors which provide readout and actuation. These test masses are\nelectrically insulated from the rest of the cross-bar and their electrical\ncharge is controlled by photoemission using fiber-coupled ultraviolet light\nemitting diodes. The capacitive readout measures the test mass displacement\nwith a broadband sensitivity of 30 nm / sqrt(Hz), and is complemented by a\nlaser interferometer with a sensitivity of about 0.5 nm / sqrt(Hz). The\nperformance of the pendulum, as determined by the measured residual torque\nnoise and expressed in terms of equivalent force acting on a single test mass,\nis roughly 200 fN / sqrt(Hz) around 2 mHz, which is about a factor of 20 above\nthe thermal noise limit of the fiber.\n",
"title": "A New Torsion Pendulum for Gravitational Reference Sensor Technology Development"
} | null | null | null | null | true | null | 1304 | null | Default | null | null |
null | {
"abstract": " Invoking Maxwell's classical equations in conjunction with expressions for\nthe electromagnetic (EM) energy, momentum, force, and torque, we use a few\nsimple examples to demonstrate the nature of the EM angular momentum. The\nenergy and the angular momentum of an EM field will be shown to have an\nintimate relationship; a source radiating EM angular momentum will, of\nnecessity, pick up an equal but opposite amount of mechanical angular momentum;\nand the spin and orbital angular momenta of the EM field, when absorbed by a\nsmall particle, will be seen to elicit different responses from the particle.\n",
"title": "Optical Angular Momentum in Classical Electrodynamics"
} | null | null | null | null | true | null | 1305 | null | Default | null | null |
null | {
"abstract": " In this work we perform outlier detection using ensembles of neural networks\nobtained by variational approximation of the posterior in a Bayesian neural\nnetwork setting. The variational parameters are obtained by sampling from the\ntrue posterior by gradient descent. We show our outlier detection results are\ncomparable to those obtained using other efficient ensembling methods.\n",
"title": "Efficient variational Bayesian neural network ensembles for outlier detection"
} | null | null | null | null | true | null | 1306 | null | Default | null | null |
null | {
"abstract": " The local electronic and magnetic properties of superconducting FeSe have\nbeen investigated by K$\\beta$ x-ray emission (XES) and simultaneous x-ray\nabsorption spectroscopy (XAS) at the Fe K-edge at high pressure and low\ntemperature. Our results indicate a sluggish decrease of the local Fe spin\nmoment under pressure up to 7~GPa, in line with previous reports, followed by a\nsudden increase at higher pressure which has been hitherto unobserved. The\nmagnetic surge is preceded by an abrupt change of the Fe local structure as\nobserved by the decrease of the XAS pre-edge region intensity and corroborated\nby ab-initio simulations. This pressure corresponds to a structural transition,\npreviously detected by x-ray diffraction, from the $Cmma$ form to the denser\n$Pbnm$ form with octahedral coordination of iron. Finally, the near-edge region\nof the XAS spectra shows a change before this transition at 5~GPa,\ncorresponding well with the onset pressure of the previously observed\nenhancement of $T_c$. Our results emphasize the delicate interplay between\nstructural, magnetic, and superconducting properties in FeSe under pressure.\n",
"title": "Emergent high-spin state above 7 GPa in superconducting FeSe"
} | null | null | null | null | true | null | 1307 | null | Default | null | null |
null | {
"abstract": " We prove the unique assembly and unique shape verification problems,\nbenchmark measures of self-assembly model power, are\n$\\mathrm{coNP}^{\\mathrm{NP}}$-hard and contained in $\\mathrm{PSPACE}$ (and in\n$\\mathrm{\\Pi}^\\mathrm{P}_{2s}$ for staged systems with $s$ stages). En route,\nwe prove that unique shape verification problem in the 2HAM is\n$\\mathrm{coNP}^{\\mathrm{NP}}$-complete.\n",
"title": "Verification in Staged Tile Self-Assembly"
} | null | null | null | null | true | null | 1308 | null | Default | null | null |
null | {
"abstract": " One of the defining characteristics of human creativity is the ability to\nmake conceptual leaps, creating something surprising from typical knowledge. In\ncomparison, deep neural networks often struggle to handle cases outside of\ntheir training data, which is especially problematic for problems with limited\ntraining data. Approaches exist to transfer knowledge from problems with\nsufficient data to those with insufficient data, but they tend to require\nadditional training or a domain-specific method of transfer. We present a new\napproach, conceptual expansion, that serves as a general representation for\nreusing existing trained models to derive new models without backpropagation.\nWe evaluate our approach on few-shot variations of two tasks: image\nclassification and image generation, and outperform standard transfer learning\napproaches.\n",
"title": "Combinets: Creativity via Recombination of Neural Networks"
} | null | null | null | null | true | null | 1309 | null | Default | null | null |
null | {
"abstract": " This paper addresses the problem of large scale image retrieval, with the aim\nof accurately ranking the similarity of a large number of images to a given\nquery image. To achieve this, we propose a novel Siamese network. This network\nconsists of two computational strands, each comprising of a CNN component\nfollowed by a Fisher vector component. The CNN component produces dense, deep\nconvolutional descriptors that are then aggregated by the Fisher Vector method.\nCrucially, we propose to simultaneously learn both the CNN filter weights and\nFisher Vector model parameters. This allows us to account for the evolving\ndistribution of deep descriptors over the course of the learning process. We\nshow that the proposed approach gives significant improvements over the\nstate-of-the-art methods on the Oxford and Paris image retrieval datasets.\nAdditionally, we provide a baseline performance measure for both these datasets\nwith the inclusion of 1 million distractors.\n",
"title": "Siamese Network of Deep Fisher-Vector Descriptors for Image Retrieval"
} | null | null | null | null | true | null | 1310 | null | Default | null | null |
null | {
"abstract": " Scientific collaborations shape ideas as well as innovations and are both the\nsubstrate for, and the outcome of, academic careers. Recent studies show that\ngender inequality is still present in many scientific practices ranging from\nhiring to peer-review processes and grant applications. In this work, we\ninvestigate gender-specific differences in collaboration patterns of more than\none million computer scientists over the course of 47 years. We explore how\nthese patterns change over years and career ages and how they impact scientific\nsuccess. Our results highlight that successful male and female scientists\nreveal the same collaboration patterns: compared to scientists in the same\ncareer age, they tend to collaborate with more colleagues than other\nscientists, seek innovations as brokers and establish longer-lasting and more\nrepetitive collaborations. However, women are on average less likely to adapt\nthe collaboration patterns that are related with success, more likely to embed\ninto ego networks devoid of structural holes, and they exhibit stronger gender\nhomophily as well as a consistently higher dropout rate than men in all career\nages.\n",
"title": "Gender Disparities in Science? Dropout, Productivity, Collaborations and Success of Male and Female Computer Scientists"
} | null | null | [
"Computer Science",
"Physics"
]
| null | true | null | 1311 | null | Validated | null | null |
null | {
"abstract": " Complex interactions between entities are often represented as edges in a\nnetwork. In practice, the network is often constructed from noisy measurements\nand inevitably contains some errors. In this paper we consider the problem of\nestimating a network from multiple noisy observations where edges of the\noriginal network are recorded with both false positives and false negatives.\nThis problem is motivated by neuroimaging applications where brain networks of\na group of patients with a particular brain condition could be viewed as noisy\nversions of an unobserved true network corresponding to the disease. The key to\noptimally leveraging these multiple observations is to take advantage of\nnetwork structure, and here we focus on the case where the true network\ncontains communities. Communities are common in real networks in general and in\nparticular are believed to be presented in brain networks. Under a community\nstructure assumption on the truth, we derive an efficient method to estimate\nthe noise levels and the original network, with theoretical guarantees on the\nconvergence of our estimates. We show on synthetic networks that the\nperformance of our method is close to an oracle method using the true parameter\nvalues, and apply our method to fMRI brain data, demonstrating that it\nconstructs stable and plausible estimates of the population network.\n",
"title": "Estimating a network from multiple noisy realizations"
} | null | null | null | null | true | null | 1312 | null | Default | null | null |
null | {
"abstract": " Drone racing is becoming a popular sport where human pilots have to control\ntheir drones to fly at high speed through complex environments and pass a\nnumber of gates in a pre-defined sequence. In this paper, we develop an\nautonomous system for drones to race fully autonomously using only onboard\nresources. Instead of commonly used visual navigation methods, such as\nsimultaneous localization and mapping and visual inertial odometry, which are\ncomputationally expensive for micro aerial vehicles (MAVs), we developed the\nhighly efficient snake gate detection algorithm for visual navigation, which\ncan detect the gate at 20HZ on a Parrot Bebop drone. Then, with the gate\ndetection result, we developed a robust pose estimation algorithm which has\nbetter tolerance to detection noise than a state-of-the-art perspective-n-point\nmethod. During the race, sometimes the gates are not in the drone's field of\nview. For this case, a state prediction-based feed-forward control strategy is\ndeveloped to steer the drone to fly to the next gate. Experiments show that the\ndrone can fly a half-circle with 1.5m radius within 2 seconds with only 30cm\nerror at the end of the circle without any position feedback. Finally, the\nwhole system is tested in a complex environment (a showroom in the faculty of\nAerospace Engineering, TU Delft). The result shows that the drone can complete\nthe track of 15 gates with a speed of 1.5m/s which is faster than the speeds\nexhibited at the 2016 and 2017 IROS autonomous drone races.\n",
"title": "Autonomous drone race: A computationally efficient vision-based navigation and control strategy"
} | null | null | null | null | true | null | 1313 | null | Default | null | null |
null | {
"abstract": " The combustion characteristics of ethanol/Jet A-1 fuel droplets having three\ndifferent proportions of ethanol (10%, 30%, and 50% by vol.) are investigated\nin the present study. The large volatility differential between ethanol and Jet\nA-1 and the nominal immiscibility of the fuels seem to result in combustion\ncharacteristics that are rather different from our previous work on butanol/Jet\nA-1 droplets (miscible blends). Abrupt explosion was facilitated in fuel\ndroplets comprising lower proportions of ethanol (10%), possibly due to\ninsufficient nucleation sites inside the droplet and the partially unmixed fuel\nmixture. For the fuel droplets containing higher proportions of ethanol (30%\nand 50%), micro-explosion occurred through homogeneous nucleation, leading to\nthe ejection of secondary droplets and subsequent significant reduction in the\noverall droplet lifetime. The rate of bubble growth is nearly similar in all\nthe blends of ethanol; however, the evolution of ethanol vapor bubble is\nsignificantly faster than that of a vapor bubble in the blends of butanol. The\nprobability of disruptive behavior is considerably higher in ethanol/Jet A-1\nblends than that of butanol/Jet A-1 blends. The Sauter mean diameter of the\nsecondary droplets produced from micro-explosion is larger for blends with a\nhigher proportion of ethanol. Both abrupt explosion and micro-explosion create\na large-scale distortion of the flame, which surrounds the parent droplet. The\nsecondary droplets generated from abrupt explosion undergo rapid evaporation\nwhereas the secondary droplets from micro-explosion carry their individual\nflame and evaporate slowly. The growth of vapor bubble was also witnessed in\nthe secondary droplets, which leads to the further breakup of the droplet\n(puffing/micro-explosion).\n",
"title": "Experimental investigations on nucleation, bubble growth, and micro-explosion characteristics during the combustion of ethanol/Jet A-1 fuel droplets"
} | null | null | null | null | true | null | 1314 | null | Default | null | null |
null | {
"abstract": " The graph Laplacian plays key roles in information processing of relational\ndata, and has analogies with the Laplacian in differential geometry. In this\npaper, we generalize the analogy between graph Laplacian and differential\ngeometry to the hypergraph setting, and propose a novel hypergraph\n$p$-Laplacian. Unlike the existing two-node graph Laplacians, this\ngeneralization makes it possible to analyze hypergraphs, where the edges are\nallowed to connect any number of nodes. Moreover, we propose a semi-supervised\nlearning method based on the proposed hypergraph $p$-Laplacian, and formalize\nthem as the analogue to the Dirichlet problem, which often appears in physics.\nWe further explore theoretical connections to normalized hypergraph cut on a\nhypergraph, and propose normalized cut corresponding to hypergraph\n$p$-Laplacian. The proposed $p$-Laplacian is shown to outperform standard\nhypergraph Laplacians in the experiment on a hypergraph semi-supervised\nlearning and normalized cut setting.\n",
"title": "Hypergraph $p$-Laplacian: A Differential Geometry View"
} | null | null | null | null | true | null | 1315 | null | Default | null | null |
null | {
"abstract": " Manipulating topological disclination networks that arise in a\nsymmetry-breaking phase transfor- mation in widely varied systems including\nanisotropic materials can potentially lead to the design of novel materials\nlike conductive microwires, self-assembled resonators, and active anisotropic\nmatter. However, progress in this direction is hindered by a lack of control of\nthe kinetics and microstructure due to inherent complexity arising from\ncompeting energy and topology. We have studied thermal and electrokinetic\neffects on disclinations in a three-dimensional nonabsorbing nematic material\nwith a positive and negative sign of the dielectric anisotropy. The electric\nflux lines are highly non-uniform in uniaxial media after an electric field\nbelow the Fréedericksz threshold is switched on, and the kinetics of the\ndisclination lines is slowed down. In biaxial media, depending on the sign of\nthe dielectric anisotropy, apart from the slowing down of the disclination\nkinetics, a non-uniform electric field filters out disclinations of different\ntopology by inducing a kinetic asymmetry. These results enhance the current\nunderstanding of forced disclination networks and establish the pre- sented\nmethod, which we call fluctuating electronematics, as a potentially useful tool\nfor designing materials with novel properties in silico.\n",
"title": "Controlling motile disclinations in a thick nematogenic material with an electric field"
} | null | null | null | null | true | null | 1316 | null | Default | null | null |
null | {
"abstract": " Generative Adversarial Networks (GAN) have received wide attention in the\nmachine learning field for their potential to learn high-dimensional, complex\nreal data distribution. Specifically, they do not rely on any assumptions about\nthe distribution and can generate real-like samples from latent space in a\nsimple manner. This powerful property leads GAN to be applied to various\napplications such as image synthesis, image attribute editing, image\ntranslation, domain adaptation and other academic fields. In this paper, we aim\nto discuss the details of GAN for those readers who are familiar with, but do\nnot comprehend GAN deeply or who wish to view GAN from various perspectives. In\naddition, we explain how GAN operates and the fundamental meaning of various\nobjective functions that have been suggested recently. We then focus on how the\nGAN can be combined with an autoencoder framework. Finally, we enumerate the\nGAN variants that are applied to various tasks and other fields for those who\nare interested in exploiting GAN for their research.\n",
"title": "How Generative Adversarial Networks and Their Variants Work: An Overview"
} | null | null | null | null | true | null | 1317 | null | Default | null | null |
null | {
"abstract": " We revisit the classification problem and focus on nonlinear methods for\nclassification on manifolds. For multivariate datasets lying on an embedded\nnonlinear Riemannian manifold within the higher-dimensional space, our aim is\nto acquire a classification boundary between the classes with labels. Motivated\nby the principal flow [Panaretos, Pham and Yao, 2014], a curve that moves along\na path of the maximum variation of the data, we introduce the principal\nboundary. From the classification perspective, the principal boundary is\ndefined as an optimal curve that moves in between the principal flows traced\nout from two classes of the data, and at any point on the boundary, it\nmaximizes the margin between the two classes. We estimate the boundary in\nquality with its direction supervised by the two principal flows. We show that\nthe principal boundary yields the usual decision boundary found by the support\nvector machine, in the sense that locally, the two boundaries coincide. By\nmeans of examples, we illustrate how to find, use and interpret the principal\nboundary.\n",
"title": "Principal Boundary on Riemannian Manifolds"
} | null | null | null | null | true | null | 1318 | null | Default | null | null |
null | {
"abstract": " The development of chemical reaction models aids understanding and prediction\nin areas ranging from biology to electrochemistry and combustion. A systematic\napproach to building reaction network models uses observational data not only\nto estimate unknown parameters, but also to learn model structure. Bayesian\ninference provides a natural approach to this data-driven construction of\nmodels. Yet traditional Bayesian model inference methodologies that numerically\nevaluate the evidence for each model are often infeasible for nonlinear\nreaction network inference, as the number of plausible models can be\ncombinatorially large. Alternative approaches based on model-space sampling can\nenable large-scale network inference, but their realization presents many\nchallenges. In this paper, we present new computational methods that make\nlarge-scale nonlinear network inference tractable. First, we exploit the\ntopology of networks describing potential interactions among chemical species\nto design improved \"between-model\" proposals for reversible-jump Markov chain\nMonte Carlo. Second, we introduce a sensitivity-based determination of move\ntypes which, when combined with network-aware proposals, yields significant\nadditional gains in sampling performance. These algorithms are demonstrated on\ninference problems drawn from systems biology, with nonlinear differential\nequation models of species interactions.\n",
"title": "Exploiting network topology for large-scale inference of nonlinear reaction models"
} | null | null | null | null | true | null | 1319 | null | Default | null | null |
null | {
"abstract": " We consider the Cauchy problem for the incompressible Navier-Stokes equations\nin $\\mathbb{R}^3$ for a one-parameter family of explicit scale-invariant\naxi-symmetric initial data, which is smooth away from the origin and invariant\nunder the reflection with respect to the $xy$-plane. Working in the class of\naxi-symmetric fields, we calculate numerically scale-invariant solutions of the\nCauchy problem in terms of their profile functions, which are smooth. The\nsolutions are necessarily unique for small data, but for large data we observe\na breaking of the reflection symmetry of the initial data through a\npitchfork-type bifurcation. By a variation of previous results by Jia &\nŠverák (2013) it is known rigorously that if the behavior seen here\nnumerically can be proved, optimal non-uniqueness examples for the Cauchy\nproblem can be established, and two different solutions can exists for the same\ninitial datum which is divergence-free, smooth away from the origin, compactly\nsupported, and locally $(-1)$-homogeneous near the origin. In particular,\nassuming our (finite-dimensional) numerics represents faithfully the behavior\nof the full (infinite-dimensional) system, the problem of uniqueness of the\nLeray-Hopf solutions (with non-smooth initial data) has a negative answer and,\nin addition, the perturbative arguments such those by Kato (1984) and Koch &\nTataru (2001), or the weak-strong uniqueness results by Leray, Prodi, Serrin,\nLadyzhenskaya and others, already give essentially optimal results. There are\nno singularities involved in the numerics, as we work only with smooth profile\nfunctions. It is conceivable that our calculations could be upgraded to a\ncomputer-assisted proof, although this would involve a substantial amount of\nadditional work and calculations, including a much more detailed analysis of\nthe asymptotic expansions of the solutions at large distances.\n",
"title": "Numerical investigations of non-uniqueness for the Navier-Stokes initial value problem in borderline spaces"
} | null | null | null | null | true | null | 1320 | null | Default | null | null |
null | {
"abstract": " Inspired by the success of deep learning techniques in the physical and\nchemical sciences, we apply a modification of an autoencoder type deep neural\nnetwork to the task of dimension reduction of molecular dynamics data. We can\nshow that our time-lagged autoencoder reliably finds low-dimensional embeddings\nfor high-dimensional feature spaces which capture the slow dynamics of the\nunderlying stochastic processes - beyond the capabilities of linear dimension\nreduction techniques.\n",
"title": "Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics"
} | null | null | [
"Computer Science",
"Physics",
"Statistics"
]
| null | true | null | 1321 | null | Validated | null | null |
null | {
"abstract": " We introduce a two-parameter family of birational maps, which reduces to a\nfamily previously found by Demskoi, Tran, van der Kamp and Quispel (DTKQ) when\none of the parameters is set to zero. The study of the singularity confinement\npattern for these maps leads to the introduction of a tau function satisfying a\nhomogeneous recurrence which has the Laurent property, and the tropical (or\nultradiscrete) analogue of this homogeneous recurrence confirms the quadratic\ndegree growth found empirically by Demskoi et al. We prove that the tau\nfunction also satisfies two different bilinear equations, each of which is a\nreduction of the Hirota-Miwa equation (also known as the discrete KP equation,\nor the octahedron recurrence). Furthermore, these bilinear equations are\nrelated to reductions of particular two-dimensional integrable lattice\nequations, of discrete KdV or discrete Toda type. These connections, as well as\nthe cluster algebra structure of the bilinear equations, allow a direct\nconstruction of Poisson brackets, Lax pairs and first integrals for the\nbirational maps. As a consequence of the latter results, we show how each\nmember of the family can be lifted to a system that is integrable in the\nLiouville sense, clarifying observations made previously in the original DTKQ\ncase.\n",
"title": "Some integrable maps and their Hirota bilinear forms"
} | null | null | null | null | true | null | 1322 | null | Default | null | null |
null | {
"abstract": " The integrable nonlocal nonlinear Schrodinger (NNLS) equation with the\nself-induced parity-time-symmetric potential [Phys. Rev. Lett. 110 (2013)\n064105] is investigated, which is an integrable extension of the standard NLS\nequation. Its novel higher-order rational solitons are found using the nonlocal\nversion of the generalized perturbation (1, N-1)-fold Darboux transformation.\nThese rational solitons illustrate abundant wave structures for the distinct\nchoices of parameters (e.g., the strong and weak interactions of bright and\ndark rational solitons). Moreover, we also explore the dynamical behaviors of\nthese higher-order rational solitons with some small noises on the basis of\nnumerical simulations.\n",
"title": "Dynamics of higher-order rational solitons for the nonlocal nonlinear Schrodinger equation with the self-induced parity-time-symmetric potential"
} | null | null | [
"Physics",
"Mathematics"
]
| null | true | null | 1323 | null | Validated | null | null |
null | {
"abstract": " While bigger and deeper neural network architectures continue to advance the\nstate-of-the-art for many computer vision tasks, real-world adoption of these\nnetworks is impeded by hardware and speed constraints. Conventional model\ncompression methods attempt to address this problem by modifying the\narchitecture manually or using pre-defined heuristics. Since the space of all\nreduced architectures is very large, modifying the architecture of a deep\nneural network in this way is a difficult task. In this paper, we tackle this\nissue by introducing a principled method for learning reduced network\narchitectures in a data-driven way using reinforcement learning. Our approach\ntakes a larger `teacher' network as input and outputs a compressed `student'\nnetwork derived from the `teacher' network. In the first stage of our method, a\nrecurrent policy network aggressively removes layers from the large `teacher'\nmodel. In the second stage, another recurrent policy network carefully reduces\nthe size of each remaining layer. The resulting network is then evaluated to\nobtain a reward -- a score based on the accuracy and compression of the\nnetwork. Our approach uses this reward signal with policy gradients to train\nthe policies to find a locally optimal student network. Our experiments show\nthat we can achieve compression rates of more than 10x for models such as\nResNet-34 while maintaining similar performance to the input `teacher' network.\nWe also present a valuable transfer learning result which shows that policies\nwhich are pre-trained on smaller `teacher' networks can be used to rapidly\nspeed up training on larger `teacher' networks.\n",
"title": "N2N Learning: Network to Network Compression via Policy Gradient Reinforcement Learning"
} | null | null | null | null | true | null | 1324 | null | Default | null | null |
null | {
"abstract": " A novel approach towards the spectral analysis of stationary random bivariate\nsignals is proposed. Using the Quaternion Fourier Transform, we introduce a\nquaternion-valued spectral representation of random bivariate signals seen as\ncomplex-valued sequences. This makes possible the definition of a scalar\nquaternion-valued spectral density for bivariate signals. This spectral density\ncan be meaningfully interpreted in terms of frequency-dependent polarization\nattributes. A natural decomposition of any random bivariate signal in terms of\nunpolarized and polarized components is introduced. Nonparametric spectral\ndensity estimation is investigated, and we introduce the polarization\nperiodogram of a random bivariate signal. Numerical experiments support our\ntheoretical analysis, illustrating the relevance of the approach on synthetic\ndata.\n",
"title": "Spectral analysis of stationary random bivariate signals"
} | null | null | null | null | true | null | 1325 | null | Default | null | null |
null | {
"abstract": " We propose a method (TT-GP) for approximate inference in Gaussian Process\n(GP) models. We build on previous scalable GP research including stochastic\nvariational inference based on inducing inputs, kernel interpolation, and\nstructure exploiting algebra. The key idea of our method is to use Tensor Train\ndecomposition for variational parameters, which allows us to train GPs with\nbillions of inducing inputs and achieve state-of-the-art results on several\nbenchmarks. Further, our approach allows for training kernels based on deep\nneural networks without any modifications to the underlying GP model. A neural\nnetwork learns a multidimensional embedding for the data, which is used by the\nGP to make the final prediction. We train GP and neural network parameters\nend-to-end without pretraining, through maximization of GP marginal likelihood.\nWe show the efficiency of the proposed approach on several regression and\nclassification benchmark datasets including MNIST, CIFAR-10, and Airline.\n",
"title": "Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition"
} | null | null | null | null | true | null | 1326 | null | Default | null | null |
null | {
"abstract": " In the second edition of the congruence lattice book, Problem 22.1 asks for a\ncharacterization of subsets $Q$ of a finite distributive lattice $D$ such that\nthere is a finite lattice $L$ whose congruence lattice is isomorphic to $D$ and\nunder this isomorphism $Q$ corresponds the the principal congruences of $L$. In\nthis note, we prove some preliminary results.\n",
"title": "Some preliminary results on the set of principal congruences of a finite lattice"
} | null | null | null | null | true | null | 1327 | null | Default | null | null |
null | {
"abstract": " Vasculature is known to be of key biological significance, especially in the\nstudy of cancer. As such, considerable effort has been focused on the automated\nmeasurement and analysis of vasculature in medical and pre-clinical images. In\ntumors in particular, the vascular networks may be extremely irregular and the\nappearance of the individual vessels may not conform to classical descriptions\nof vascular appearance. Typically, vessels are extracted by either a\nsegmentation and thinning pipeline, or by direct tracking. Neither of these\nmethods are well suited to microscopy images of tumor vasculature. In order to\naddress this we propose a method to directly extract a medial representation of\nthe vessels using Convolutional Neural Networks. We then show that these\ntwo-dimensional centerlines can be meaningfully extended into 3D in anisotropic\nand complex microscopy images using the recently popularized Convolutional Long\nShort-Term Memory units (ConvLSTM). We demonstrate the effectiveness of this\nhybrid convolutional-recurrent architecture over both 2D and 3D convolutional\ncomparators.\n",
"title": "Extracting 3D Vascular Structures from Microscopy Images using Convolutional Recurrent Networks"
} | null | null | [
"Computer Science"
]
| null | true | null | 1328 | null | Validated | null | null |
null | {
"abstract": " We report the results of a sensitive search for the 443.952902 GHz $J=1-0$\ntransition of the LiH molecule toward two interstellar clouds in the Milky Way,\nW49N and Sgr B2 (Main), that has been carried out using the Atacama Pathfinder\nExperiment (APEX) telescope. The results obtained toward W49N place an upper\nlimit of $1.9 \\times 10^{-11}\\, (3\\sigma)$ on the LiH abundance, $N({\\rm\nLiH})/N({\\rm H}_2)$, in a foreground, diffuse molecular cloud along the\nsight-line to W49N, corresponding to 0.5% of the solar system lithium\nabundance. Those obtained toward Sgr B2 (Main) place an abundance limit $N({\\rm\nLiH})/N({\\rm H}_2) < 3.6 \\times 10^{-13} \\,(3\\sigma)$ in the dense gas within\nthe Sgr B2 cloud itself. These limits are considerably smaller that those\nimplied by the tentative detection of LiH reported previously for the $z=0.685$\nabsorber toward B0218+357.\n",
"title": "Search for Interstellar LiH in the Milky Way"
} | null | null | null | null | true | null | 1329 | null | Default | null | null |
null | {
"abstract": " In this paper, we propose a probabilistic parsing model, which defines a\nproper conditional probability distribution over non-projective dependency\ntrees for a given sentence, using neural representations as inputs. The neural\nnetwork architecture is based on bi-directional LSTM-CNNs which benefits from\nboth word- and character-level representations automatically, by using\ncombination of bidirectional LSTM and CNN. On top of the neural network, we\nintroduce a probabilistic structured layer, defining a conditional log-linear\nmodel over non-projective trees. We evaluate our model on 17 different\ndatasets, across 14 different languages. By exploiting Kirchhoff's Matrix-Tree\nTheorem (Tutte, 1984), the partition functions and marginals can be computed\nefficiently, leading to a straight-forward end-to-end model training procedure\nvia back-propagation. Our parser achieves state-of-the-art parsing performance\non nine datasets.\n",
"title": "Neural Probabilistic Model for Non-projective MST Parsing"
} | null | null | null | null | true | null | 1330 | null | Default | null | null |
null | {
"abstract": " Modularity is designed to measure the strength of division of a network into\nclusters (known also as communities). Networks with high modularity have dense\nconnections between the vertices within clusters but sparse connections between\nvertices of different clusters. As a result, modularity is often used in\noptimization methods for detecting community structure in networks, and so it\nis an important graph parameter from a practical point of view. Unfortunately,\nmany existing non-spatial models of complex networks do not generate graphs\nwith high modularity; on the other hand, spatial models naturally create\nclusters. We investigate this phenomenon by considering a few examples from\nboth sub-classes. We prove precise theoretical results for the classical model\nof random d-regular graphs as well as the preferential attachment model, and\ncontrast these results with the ones for the spatial preferential attachment\n(SPA) model that is a model for complex networks in which vertices are embedded\nin a metric space, and each vertex has a sphere of influence whose size\nincreases if the vertex gains an in-link, and otherwise decreases with time.\nThe results obtained in this paper can be used for developing statistical tests\nfor models selection and to measure statistical significance of clusters\nobserved in complex networks.\n",
"title": "Modularity of complex networks models"
} | null | null | null | null | true | null | 1331 | null | Default | null | null |
null | {
"abstract": " Vision science, particularly machine vision, has been revolutionized by\nintroducing large-scale image datasets and statistical learning approaches.\nYet, human neuroimaging studies of visual perception still rely on small\nnumbers of images (around 100) due to time-constrained experimental procedures.\nTo apply statistical learning approaches that integrate neuroscience, the\nnumber of images used in neuroimaging must be significantly increased. We\npresent BOLD5000, a human functional MRI (fMRI) study that includes almost\n5,000 distinct images depicting real-world scenes. Beyond dramatically\nincreasing image dataset size relative to prior fMRI studies, BOLD5000 also\naccounts for image diversity, overlapping with standard computer vision\ndatasets by incorporating images from the Scene UNderstanding (SUN), Common\nObjects in Context (COCO), and ImageNet datasets. The scale and diversity of\nthese image datasets, combined with a slow event-related fMRI design, enable\nfine-grained exploration into the neural representation of a wide range of\nvisual features, categories, and semantics. Concurrently, BOLD5000 brings us\ncloser to realizing Marr's dream of a singular vision science - the intertwined\nstudy of biological and computer vision.\n",
"title": "BOLD5000: A public fMRI dataset of 5000 images"
} | null | null | null | null | true | null | 1332 | null | Default | null | null |
null | {
"abstract": " Next generation radio telescopes, namely the Five-hundred-meter Aperture\nSpherical Telescope (FAST) and the Square Kilometer Array (SKA), will\nrevolutionize the pulsar timing arrays (PTAs) based gravitational wave (GW)\nsearches. We review some of the characteristics of FAST and SKA, and the\nresulting PTAs, that are pertinent to the detection of gravitational wave\nsignals from individual supermassive black hole binaries.\n",
"title": "Prospects for gravitational wave astronomy with next generation large-scale pulsar timing arrays"
} | null | null | null | null | true | null | 1333 | null | Default | null | null |
null | {
"abstract": " In this letter, we propose a new identification criterion that guarantees the\nrecovery of the low-rank latent factors in the nonnegative matrix factorization\n(NMF) model, under mild conditions. Specifically, using the proposed criterion,\nit suffices to identify the latent factors if the rows of one factor are\n\\emph{sufficiently scattered} over the nonnegative orthant, while no structural\nassumption is imposed on the other factor except being full-rank. This is by\nfar the mildest condition under which the latent factors are provably\nidentifiable from the NMF model.\n",
"title": "On Identifiability of Nonnegative Matrix Factorization"
} | null | null | null | null | true | null | 1334 | null | Default | null | null |
null | {
"abstract": " Supervisory control synthesis encounters with computational complexity. This\ncan be reduced by decentralized supervisory control approach. In this paper, we\ndefine intrinsic control consistency for a pair of states of the plant.\nG-control consistency (GCC) is another concept which is defined for a natural\nprojection w.r.t. the plant. We prove that, if a natural projection is output\ncontrol consistent for the closed language of the plant, and is a natural\nobserver for the marked language of the plant, then it is G-control consistent.\nNamely, we relax the conditions for synthesis the optimal non-blocking\ndecentralized supervisory control by substituting GCC property for L-OCC and\nLm-observer properties of a natural projection. We propose a method to\nsynthesize the optimal non-blocking decentralized supervisory control based on\nGCC property for a natural projection. In fact, we change the approach from\nlanguage-based properties of a natural projection to DES-based property by\ndefining GCC property.\n",
"title": "Optimal Non-blocking Decentralized Supervisory Control Using G-Control Consistency"
} | null | null | null | null | true | null | 1335 | null | Default | null | null |
null | {
"abstract": " Agents vote to choose a fair mixture of public outcomes; each agent likes or\ndislikes each outcome. We discuss three outstanding voting rules. The\nConditional Utilitarian rule, a variant of the random dictator, is\nStrategyproof and guarantees to any group of like-minded agents an influence\nproportional to its size. It is easier to compute and more efficient than the\nfamiliar Random Priority rule. Its worst case (resp. average) inefficiency is\nprovably (resp. in numerical experiments) low if the number of agents is low.\nThe efficient Egalitarian rule protects similarly individual agents but not\ncoalitions. It is Excludable Strategyproof: I do not want to lie if I cannot\nconsume outcomes I claim to dislike. The efficient Nash Max Product rule offers\nthe strongest welfare guarantees to coalitions, who can force any outcome with\na probability proportional to their size. But it fails even the excludable form\nof Strategyproofness.\n",
"title": "Fair mixing: the case of dichotomous preferences"
} | null | null | null | null | true | null | 1336 | null | Default | null | null |
null | {
"abstract": " We give criteria on an inverse system of finite groups that ensure the limit\nis just infinite or hereditarily just infinite. More significantly, these\ncriteria are 'universal' in that all (hereditarily) just infinite profinite\ngroups arise as limits of the specified form.\nThis is a corrected and revised version of the article: 'Inverse system\ncharacterizations of the (hereditarily) just infinite property in profinite\ngroups', Bull. LMS vol 44, 3 (2012) 413--425.\n",
"title": "Inverse system characterizations of the (hereditarily) just infinite property in profinite groups"
} | null | null | [
"Mathematics"
]
| null | true | null | 1337 | null | Validated | null | null |
null | {
"abstract": " Recent advances in learning Deep Neural Network (DNN) architectures have\nreceived a great deal of attention due to their ability to outperform\nstate-of-the-art classifiers across a wide range of applications, with little\nor no feature engineering. In this paper, we broadly study the applicability of\ndeep learning to website fingerprinting. We show that unsupervised DNNs can be\nused to extract low-dimensional feature vectors that improve the performance of\nstate-of-the-art website fingerprinting attacks. When used as classifiers, we\nshow that they can match or exceed performance of existing attacks across a\nrange of application scenarios, including fingerprinting Tor website traces,\nfingerprinting search engine queries over Tor, defeating fingerprinting\ndefenses, and fingerprinting TLS-encrypted websites. Finally, we show that DNNs\ncan be used to predict the fingerprintability of a website based on its\ncontents, achieving 99% accuracy on a data set of 4500 website downloads.\n",
"title": "p-FP: Extraction, Classification, and Prediction of Website Fingerprints with Deep Learning"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 1338 | null | Validated | null | null |
null | {
"abstract": " In this article the issues are discussed with the Bayesian approach,\nleast-square fits, and most-likely fits. Trying to counter these issues, a\nmethod, based on weighted confidence, is proposed for estimating probabilities\nand other observables. This method sums over different model parameter\ncombinations but does not require the need for making assumptions on priors or\nunderlying probability functions. Moreover, by construction the results are\ninvariant under reparametrization of the model parameters. In one case the\nresult appears similar as in Bayesian statistics but in general there is no\nagreement. The binomial distribution is also studied which turns out to be\nuseful for making predictions on production processes without the need to make\nfurther assumptions. In the last part, the case of a simple linear fit (a\nmulti-variate example) is studied using the standard approaches and the\nconfidence weighted approach.\n",
"title": "Equal confidence weighted expectation value estimates"
} | null | null | [
"Mathematics",
"Statistics"
]
| null | true | null | 1339 | null | Validated | null | null |
null | {
"abstract": " In spite of decades of research, much remains to be discovered about folding:\nthe detailed structure of the initial (unfolded) state, vestigial folding\ninstructions remaining only in the unfolded state, the interaction of the\nmolecule with the solvent, instantaneous power at each point within the\nmolecule during folding, the fact that the process is stable in spite of myriad\npossible disturbances, potential stabilization of trajectory by chaos, and, of\ncourse, the exact physical mechanism (code or instructions) by which the\nfolding process is specified in the amino acid sequence. Simulations based upon\nmicroscopic physics have had some spectacular successes and continue to\nimprove, particularly as super-computer capabilities increase. The simulations,\nexciting as they are, are still too slow and expensive to deal with the\nenormous number of molecules of interest. In this paper, we introduce an\napproximate model based upon physics, empirics, and information science which\nis proposed for use in machine learning applications in which very large\nnumbers of sub-simulations must be made. In particular, we focus upon machine\nlearning applications in the learning phase and argue that our model is\nsufficiently close to the physics that, in spite of its approximate nature, can\nfacilitate stepping through machine learning solutions to explore the mechanics\nof folding mentioned above. We particularly emphasize the exploration of energy\nflow (power) within the molecule during folding, the possibility of energy\nscale invariance (above a threshold), vestigial information in the unfolded\nstate as attractive targets for such machine language analysis, and statistical\nanalysis of an ensemble of folding micro-steps.\n",
"title": "Protein Folding and Machine Learning: Fundamentals"
} | null | null | null | null | true | null | 1340 | null | Default | null | null |
null | {
"abstract": " We consider generalizations of the familiar fifteen-piece sliding puzzle on\nthe 4 by 4 square grid. On larger grids with more pieces and more holes,\nasymptotically how fast can we move the puzzle into the solved state? We also\ngive a variation with sliding hexagons. The square puzzles and the hexagon\npuzzles are both discrete versions of configuration spaces of disks, which are\nof interest in statistical mechanics and topological robotics. The\ncombinatorial theorems and proofs in this paper suggest followup questions in\nboth combinatorics and topology, and may turn out to be useful for proving\ntopological statements about configuration spaces.\n",
"title": "Discrete configuration spaces of squares and hexagons"
} | null | null | null | null | true | null | 1341 | null | Default | null | null |
null | {
"abstract": " We establish the Iwasawa main conjecture for semi-stable abelian varieties\nover a function field of characteristic $p$ under certain restrictive\nassumptions. Namely we consider $p$-torsion free $p$-adic Lie extensions of the\nbase field which contain the constant $\\mathbb Z_p$-extension and are\neverywhere unramified. Under the classical $\\mu=0$ hypothesis we give a proof\nwhich mainly relies on the interpretation of the Selmer complex in terms of\n$p$-adic cohomology [TV] together with the trace formulas of [EL1].\n",
"title": "On the non commutative Iwasawa main conjecture for abelian varieties over function fields"
} | null | null | null | null | true | null | 1342 | null | Default | null | null |
null | {
"abstract": " We consider induced emission of ultrarelativistic electrons in strong\nelectric (magnetic) fields that are uniform along the direction of the electron\nmotion and are not uniform in the transverse direction. The stimulated\nabsorption and emission probabilities are found in such system.\n",
"title": "Absorption and Emission Probabilities of Electrons in Electric and Magnetic Fields for FEL"
} | null | null | null | null | true | null | 1343 | null | Default | null | null |
null | {
"abstract": " Measuring gases for air quality monitoring is a challenging task that claims\na lot of time of observation and large numbers of sensors. The aim of this\nproject is to develop a partially autonomous unmanned aerial vehicle (UAV)\nequipped with sensors, in order to monitor and collect air quality real time\ndata in designated areas and send it to the ground base. This project is\ndesigned and implemented by a multidisciplinary team from electrical and\ncomputer engineering departments. The electrical engineering team responsible\nfor implementing air quality sensors for detecting real time data and transmit\nit from the plane to the ground. On the other hand, the computer engineering\nteam is in charge of Interface sensors and provide platform to view and\nvisualize air quality data and live video streaming. The proposed project\ncontains several sensors to measure Temperature, Humidity, Dust, CO, CO2 and\nO3. The collected data is transmitted to a server over a wireless internet\nconnection and the server will store, and supply these data to any party who\nhas permission to access it through android phone or website in semi-real time.\nThe developed UAV has carried several field tests in Al Shamal airport in\nQatar, with interesting results and proof of concept outcomes.\n",
"title": "Design, Development and Evaluation of a UAV to Study Air Quality in Qatar"
} | null | null | null | null | true | null | 1344 | null | Default | null | null |
null | {
"abstract": " In this paper, the problem of maximizing a black-box function $f:\\mathcal{X}\n\\to \\mathbb{R}$ is studied in the Bayesian framework with a Gaussian Process\n(GP) prior. In particular, a new algorithm for this problem is proposed, and\nhigh probability bounds on its simple and cumulative regret are established.\nThe query point selection rule in most existing methods involves an exhaustive\nsearch over an increasingly fine sequence of uniform discretizations of\n$\\mathcal{X}$. The proposed algorithm, in contrast, adaptively refines\n$\\mathcal{X}$ which leads to a lower computational complexity, particularly\nwhen $\\mathcal{X}$ is a subset of a high dimensional Euclidean space. In\naddition to the computational gains, sufficient conditions are identified under\nwhich the regret bounds of the new algorithm improve upon the known results.\nFinally an extension of the algorithm to the case of contextual bandits is\nproposed, and high probability bounds on the contextual regret are presented.\n",
"title": "Gaussian Process bandits with adaptive discretization"
} | null | null | null | null | true | null | 1345 | null | Default | null | null |
null | {
"abstract": " We present a method for conditional time series forecasting based on an\nadaptation of the recent deep convolutional WaveNet architecture. The proposed\nnetwork contains stacks of dilated convolutions that allow it to access a broad\nrange of history when forecasting, a ReLU activation function and conditioning\nis performed by applying multiple convolutional filters in parallel to separate\ntime series which allows for the fast processing of data and the exploitation\nof the correlation structure between the multivariate time series. We test and\nanalyze the performance of the convolutional network both unconditionally as\nwell as conditionally for financial time series forecasting using the S&P500,\nthe volatility index, the CBOE interest rate and several exchange rates and\nextensively compare it to the performance of the well-known autoregressive\nmodel and a long-short term memory network. We show that a convolutional\nnetwork is well-suited for regression-type problems and is able to effectively\nlearn dependencies in and between the series without the need for long\nhistorical time series, is a time-efficient and easy to implement alternative\nto recurrent-type networks and tends to outperform linear and recurrent models.\n",
"title": "Conditional Time Series Forecasting with Convolutional Neural Networks"
} | null | null | [
"Statistics"
]
| null | true | null | 1346 | null | Validated | null | null |
null | {
"abstract": " Bias is a common problem in today's media, appearing frequently in text and\nin visual imagery. Users on social media websites such as Twitter need better\nmethods for identifying bias. Additionally, activists --those who are motivated\nto effect change related to some topic, need better methods to identify and\ncounteract bias that is contrary to their mission. With both of these use cases\nin mind, in this paper we propose a novel tool called UnbiasedCrowd that\nsupports identification of, and action on bias in visual news media. In\nparticular, it addresses the following key challenges (1) identification of\nbias; (2) aggregation and presentation of evidence to users; (3) enabling\nactivists to inform the public of bias and take action by engaging people in\nconversation with bots. We describe a preliminary study on the Twitter platform\nthat explores the impressions that activists had of our tool, and how people\nreacted and engaged with online bots that exposed visual bias. We conclude by\ndiscussing design and implication of our findings for creating future systems\nto identify and counteract the effects of news bias.\n",
"title": "Automated Assistants to Identify and Prompt Action on Visual News Bias"
} | null | null | [
"Computer Science"
]
| null | true | null | 1347 | null | Validated | null | null |
null | {
"abstract": " Many astronomical sources produce transient phenomena at radio frequencies,\nbut the transient sky at low frequencies (<300 MHz) remains relatively\nunexplored. Blind surveys with new widefield radio instruments are setting\nincreasingly stringent limits on the transient surface density on various\ntimescales. Although many of these instruments are limited by classical\nconfusion noise from an ensemble of faint, unresolved sources, one can in\nprinciple detect transients below the classical confusion limit to the extent\nthat the classical confusion noise is independent of time. We develop a\ntechnique for detecting radio transients that is based on temporal matched\nfilters applied directly to time series of images rather than relying on\nsource-finding algorithms applied to individual images. This technique has\nwell-defined statistical properties and is applicable to variable and transient\nsearches for both confusion-limited and non-confusion-limited instruments.\nUsing the Murchison Widefield Array as an example, we demonstrate that the\ntechnique works well on real data despite the presence of classical confusion\nnoise, sidelobe confusion noise, and other systematic errors. We searched for\ntransients lasting between 2 minutes and 3 months. We found no transients and\nset improved upper limits on the transient surface density at 182 MHz for flux\ndensities between ~20--200 mJy, providing the best limits to date for hour- and\nmonth-long transients.\n",
"title": "A Matched Filter Technique For Slow Radio Transient Detection And First Demonstration With The Murchison Widefield Array"
} | null | null | null | null | true | null | 1348 | null | Default | null | null |
null | {
"abstract": " A method is developed for generating pseudopotentials for use in\ncorrelated-electron calculations. The paradigms of shape and energy consistency\nare combined and defined in terms of correlated-electron wave-functions. The\nresulting energy consistent correlated electron pseudopotentials (eCEPPs) are\nconstructed for H, Li--F, Sc--Fe, and Cu. Their accuracy is quantified by\ncomparing the relaxed molecular geometries and dissociation energies they\nprovide with all electron results, with all quantities evaluated using coupled\ncluster singles doubles and triples calculations. Errors inherent in the\npseudopotentials are also compared with those arising from a number of\napproximations commonly used with pseudopotentials. The eCEPPs provide a\nsignificant improvement in optimised geometries and dissociation energies for\nsmall molecules, with errors for the latter being an order-of-magnitude smaller\nthan for Hartree-Fock-based pseudopotentials available in the literature.\nGaussian basis sets are optimised for use with these pseudopotentials.\n",
"title": "Shape and Energy Consistent Pseudopotentials for Correlated Electron systems"
} | null | null | null | null | true | null | 1349 | null | Default | null | null |
null | {
"abstract": " We offer a general Bayes theoretic framework to tackle the model selection\nproblem under a two-step prior design: the first-step prior serves to assess\nthe model selection uncertainty, and the second-step prior quantifies the prior\nbelief on the strength of the signals within the model chosen from the first\nstep.\nWe establish non-asymptotic oracle posterior contraction rates under (i) a\nnew Bernstein-inequality condition on the log likelihood ratio of the\nstatistical experiment, (ii) a local entropy condition on the dimensionality of\nthe models, and (iii) a sufficient mass condition on the second-step prior near\nthe best approximating signal for each model. The first-step prior can be\ndesigned generically. The resulting posterior mean also satisfies an oracle\ninequality, thus automatically serving as an adaptive point estimator in a\nfrequentist sense. Model mis-specification is allowed in these oracle rates.\nThe new Bernstein-inequality condition not only eliminates the convention of\nconstructing explicit tests with exponentially small type I and II errors, but\nalso suggests the intrinsic metric to use in a given statistical experiment,\nboth as a loss function and as an entropy measurement. This gives a unified\nreduction scheme for many experiments considered in Ghoshal & van der\nVaart(2007) and beyond. As an illustration for the scope of our general results\nin concrete applications, we consider (i) trace regression, (ii)\nshape-restricted isotonic/convex regression, (iii) high-dimensional partially\nlinear regression and (iv) covariance matrix estimation in the sparse factor\nmodel. These new results serve either as theoretical justification of practical\nprior proposals in the literature, or as an illustration of the generic\nconstruction scheme of a (nearly) minimax adaptive estimator for a\nmulti-structured experiment.\n",
"title": "Bayes model selection"
} | null | null | [
"Mathematics",
"Statistics"
]
| null | true | null | 1350 | null | Validated | null | null |
null | {
"abstract": " Following the presentation and proof of the hypothesis that image features\nare particularly perceived at points where the Fourier components are maximally\nin phase, the concept of phase congruency (PC) is introduced. Subsequently, a\ntwo-dimensional multi-scale phase congruency (2D-MSPC) is developed, which has\nbeen an important tool for detecting and evaluation of image features. However,\nthe 2D-MSPC requires many parameters to be appropriately tuned for optimal\nimage features detection. In this paper, we defined a criterion for parameter\noptimization of the 2D-MSPC, which is a function of its maximum and minimum\nmoments. We formulated the problem in various optimal and suboptimal\nframeworks, and discussed the conditions and features of the suboptimal\nsolutions. The effectiveness of the proposed method was verified through\nseveral examples, ranging from natural objects to medical images from patients\nwith a neurological disease, multiple sclerosis.\n",
"title": "Phase Congruency Parameter Optimization for Enhanced Detection of Image Features for both Natural and Medical Applications"
} | null | null | null | null | true | null | 1351 | null | Default | null | null |
null | {
"abstract": " Detecting and evaluating regions of brain under various circumstances is one\nof the most interesting topics in computational neuroscience. However, the\nmajority of the studies on detecting communities of a functional connectivity\nnetwork of the brain are done on networks obtained from coherency attributes,\nand not from correlation. This lack of studies, in part, is due to the fact\nthat many common methods for clustering graphs require the nodes of the network\nto be `positively' linked together, a property that is guaranteed by a\ncoherency matrix, by definition. However, correlation matrices reveal more\ninformation regarding how each pair of nodes is linked together. In this\nstudy, for the first time we simultaneously examine four inherently different\nnetwork clustering methods (spectral, heuristic, and optimization methods)\napplied to the functional connectivity networks of the CA1 region of the\nhippocampus of an anaesthetized rat during pre-ictal and post-ictal states. The\nnetworks are obtained from correlation matrices, and the results are compared\nwith those obtained by applying the same methods to coherency matrices. The\ncorrelation matrices show a much finer community structure compared to the\ncoherency matrices. Furthermore, we examine the potential smoothing effect of\nchoosing various window sizes for computing the correlation/coherency matrices.\n",
"title": "Community structure detection and evaluation during the pre- and post-ictal hippocampal depth recordings"
} | null | null | null | null | true | null | 1352 | null | Default | null | null |
null | {
"abstract": " The global sensitivity analysis of a numerical model aims to quantify, by\nmeans of sensitivity indices estimate, the contributions of each uncertain\ninput variable to the model output uncertainty. The so-called Sobol' indices,\nwhich are based on the functional variance analysis, present a difficult\ninterpretation in the presence of statistical dependence between inputs. The\nShapley effects were recently introduced to overcome this problem, as they\nallocate the mutual contribution (due to correlation and interaction) of a\ngroup of inputs to each individual input within the group. In this paper, using\nseveral new analytical results, we study the effects of linear correlation\nbetween some Gaussian input variables on Shapley effects, and compare these\neffects to classical first-order and total Sobol' indices. This illustrates the\ninterest, in terms of sensitivity analysis setting and interpretation, of the\nShapley effects in the case of dependent inputs. We also investigate the\nnumerical convergence of the estimated Shapley effects. For the practical issue\nof computationally demanding computer models, we show that the substitution of\nthe original model by a metamodel (here, kriging) makes it possible to estimate\nthese indices with precision at a reasonable computational cost.\n",
"title": "Shapley effects for sensitivity analysis with correlated inputs: comparisons with Sobol' indices, numerical estimation and applications"
} | null | null | null | null | true | null | 1353 | null | Default | null | null |
null | {
"abstract": " Ridesourcing platforms like Uber and Didi are getting more and more popular\naround the world. However, unauthorized ridesourcing activities taking\nadvantage of the sharing economy can greatly impair the healthy development of\nthis emerging industry. As the first step to regulate on-demand ride services\nand eliminate black market, we design a method to detect ridesourcing cars from\na pool of cars based on their trajectories. Since licensed ridesourcing car\ntraces are not openly available and may be completely missing in some cities\ndue to legal issues, we turn to transferring knowledge from public transport\nopen data, i.e., taxis and buses, to ridesourcing detection among ordinary\nvehicles. We propose a two-stage transfer learning framework. In Stage 1, we\ntake taxi and bus data as input to learn a random forest (RF) classifier using\ntrajectory features shared by taxis/buses and ridesourcing/other cars. Then, we\nuse the RF to label all the candidate cars. In Stage 2, leveraging the subset\nof high-confidence labels from the previous stage as input, we further learn a\nconvolutional neural network (CNN) classifier for ridesourcing detection, and\niteratively refine RF and CNN, as well as the feature set, via a co-training\nprocess. Finally, we use the resulting ensemble of RF and CNN to identify the\nridesourcing cars in the candidate pool. Experiments on real car, taxi and bus\ntraces show that our transfer learning framework, with no need for a pre-labeled\nridesourcing dataset, can achieve similar accuracy as the supervised learning\nmethods.\n",
"title": "Ridesourcing Car Detection by Transfer Learning"
} | null | null | null | null | true | null | 1354 | null | Default | null | null |
null | {
"abstract": " Managing dynamic information in large multi-site, multi-species, and\nmulti-discipline consortia is a challenging task for data management\napplications. Often in academic research studies the goals for informatics\nteams are to build applications that provide extract-transform-load (ETL)\nfunctionality to archive and catalog source data that has been collected by the\nresearch teams. In consortia that cross species and methodological or\nscientific domains, building interfaces that supply data in a usable fashion\nand make intuitive sense to scientists from dramatically different backgrounds\nincreases the complexity for developers. Further, reusing source data from\noutside one's scientific domain is fraught with ambiguities in understanding\nthe data types, analysis methodologies, and how to combine the data with those\nfrom other research teams. We report on the design, implementation, and\nperformance of a semantic data management application to support the NIMH\nfunded Conte Center at the University of California, Irvine. The Center is\ntesting a theory of the consequences of \"fragmented\" (unpredictable, high\nentropy) early-life experiences on adolescent cognitive and emotional outcomes\nin both humans and rodents. It employs cross-species neuroimaging, epigenomic,\nmolecular, and neuroanatomical approaches in humans and rodents to assess the\npotential consequences of fragmented unpredictable experience on brain\nstructure and circuitry. To address this multi-technology, multi-species\napproach, the system uses semantic web techniques based on the Neuroimaging\nData Model (NIDM) to facilitate data ETL functionality. We find this approach\nenables a low-cost, easy to maintain, and semantically meaningful information\nmanagement system, enabling the diverse research teams to access and use the\ndata.\n",
"title": "A Semantic Cross-Species Derived Data Management Application"
} | null | null | [
"Computer Science"
]
| null | true | null | 1355 | null | Validated | null | null |
null | {
"abstract": " We describe a fully data driven model that learns to perform a retrosynthetic\nreaction prediction task, which is treated as a sequence-to-sequence mapping\nproblem. The end-to-end trained model has an encoder-decoder architecture that\nconsists of two recurrent neural networks, which has previously shown great\nsuccess in solving other sequence-to-sequence prediction tasks such as machine\ntranslation. The model is trained on 50,000 experimental reaction examples from\nthe United States patent literature, which span 10 broad reaction types that\nare commonly used by medicinal chemists. We find that our model performs\ncomparably with a rule-based expert system baseline model, and also overcomes\ncertain limitations associated with rule-based expert systems and with any\nmachine learning approach that contains a rule-based expert system component.\nOur model provides an important first step towards solving the challenging\nproblem of computational retrosynthetic analysis.\n",
"title": "Retrosynthetic reaction prediction using neural sequence-to-sequence models"
} | null | null | null | null | true | null | 1356 | null | Default | null | null |
null | {
"abstract": " We present the results of the spectroscopic and photometric follow-up of two\nfield galaxies that were selected as possible stellar counterparts of local\nhigh velocity clouds. Our analysis shows that the two systems are distant (D>20\nMpc) dwarf irregular galaxies unrelated to the local HI clouds. However, the\nnewly derived distance and structural parameters reveal that the two galaxies\nhave luminosities and effective radii very similar to the recently identified\nUltra Diffuse Galaxies (UDGs). At odds with classical UDGs, they are remarkably\nisolated, having no known giant galaxy within ~2.0 Mpc. Moreover, one of them\nhas a very high gas content compared to galaxies of similar stellar mass, with\na HI to stellar mass ratio M_HI/M_* ~90, typical of almost-dark dwarfs.\nExpanding on this finding, we show that extended dwarf irregulars overlap the\ndistribution of UDGs in the M_V vs. log(r_e) plane and that the sequence\nincluding dwarf spheroidals, dwarf irregulars and UDGs appears as continuously\npopulated in this plane.\n",
"title": "Redshift, metallicity and size of two extended dwarf Irregular galaxies. A link between dwarf Irregulars and Ultra Diffuse Galaxies?"
} | null | null | null | null | true | null | 1357 | null | Default | null | null |
null | {
"abstract": " Given a projective hyperkahler manifold with a holomorphic Lagrangian\nfibration, we prove that hyperkahler metrics with volume of the torus fibers\nshrinking to zero collapse in the Gromov-Hausdorff sense (and smoothly away\nfrom the singular fibers) to a compact metric space which is a half-dimensional\nspecial Kahler manifold outside a singular set of real Hausdorff codimension 2\nand is homeomorphic to the base projective space.\n",
"title": "Collapsing hyperkähler manifolds"
} | null | null | [
"Mathematics"
]
| null | true | null | 1358 | null | Validated | null | null |
null | {
"abstract": " Calcium imaging permits optical measurement of neural activity. Since\nintracellular calcium concentration is an indirect measurement of neural\nactivity, computational tools are necessary to infer the true underlying\nspiking activity from fluorescence measurements. Bayesian model inversion can\nbe used to solve this problem, but typically requires either computationally\nexpensive MCMC sampling, or faster but approximate maximum-a-posteriori\noptimization. Here, we introduce a flexible algorithmic framework for fast,\nefficient and accurate extraction of neural spikes from imaging data. Using the\nframework of variational autoencoders, we propose to amortize inference by\ntraining a deep neural network to perform model inversion efficiently. The\nrecognition network is trained to produce samples from the posterior\ndistribution over spike trains. Once trained, performing inference amounts to a\nfast single forward pass through the network, without the need for iterative\noptimization or sampling. We show that amortization can be applied flexibly to\na wide range of nonlinear generative models and significantly improves upon the\nstate of the art in computation time, while achieving competitive accuracy. Our\nframework is also able to represent posterior distributions over spike-trains.\nWe demonstrate the generality of our method by proposing the first\nprobabilistic approach for separating backpropagating action potentials from\nputative synaptic inputs in calcium imaging of dendritic spines.\n",
"title": "Fast amortized inference of neural activity from calcium imaging data with variational autoencoders"
} | null | null | null | null | true | null | 1359 | null | Default | null | null |
null | {
"abstract": " We investigate multiparticle excitation effect on a collective density\nexcitation as well as a single-particle excitation in a weakly interacting\nBose--Einstein condensate (BEC). We find that although the weakly interacting\nBEC offers a weak multiparticle excitation spectrum at low temperatures, this\nmultiparticle excitation effect may not remain hidden, but emerges as\nbimodality in the density response function through the single-particle\nexcitation. The identification of spectra in the BEC between the single-particle\nexcitation and the density excitation is also assessed at nonzero temperatures,\nwhich has been known to be a unique feature of the BEC at absolute zero\ntemperature.\n",
"title": "Hidden multiparticle excitation in weakly interacting Bose-Einstein Condensate"
} | null | null | null | null | true | null | 1360 | null | Default | null | null |
null | {
"abstract": " In the present article we describe how one can define Hausdorff measure\nallowing empty elements in coverings, and using infinite countable coverings\nonly. In addition, we discuss how the use of different nonequivalent\ninterpretations of the notion \"countable set\", that is typical for classical\nand modern mathematics, may lead to contradictions.\n",
"title": "Hausdorff Measure: Lost in Translation"
} | null | null | null | null | true | null | 1361 | null | Default | null | null |
null | {
"abstract": " The popular Alternating Least Squares (ALS) algorithm for tensor\ndecomposition is efficient and easy to implement, but often converges to poor\nlocal optima---particularly when the weights of the factors are non-uniform. We\npropose a modification of the ALS approach that is as efficient as standard\nALS, but provably recovers the true factors with random initialization under\nstandard incoherence assumptions on the factors of the tensor. We demonstrate\nthe significant practical superiority of our approach over traditional ALS for\na variety of tasks on synthetic data---including tensor factorization on exact,\nnoisy and over-complete tensors, as well as tensor completion---and for\ncomputing word embeddings from a third-order word tri-occurrence tensor.\n",
"title": "Orthogonalized ALS: A Theoretically Principled Tensor Decomposition Algorithm for Practical Use"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 1362 | null | Validated | null | null |
null | {
"abstract": " Domain generalization is the problem of assigning class labels to an\nunlabeled test data set, given several labeled training data sets drawn from\nsimilar distributions. This problem arises in several applications where data\ndistributions fluctuate because of biological, technical, or other sources of\nvariation. We develop a distribution-free, kernel-based approach that predicts\na classifier from the marginal distribution of features, by leveraging the\ntrends present in related classification tasks. This approach involves\nidentifying an appropriate reproducing kernel Hilbert space and optimizing a\nregularized empirical risk over the space. We present generalization error\nanalysis, describe universal kernels, and establish universal consistency of\nthe proposed methodology. Experimental results on synthetic data and three real\ndata applications demonstrate the superiority of the method with respect to a\npooling strategy.\n",
"title": "Domain Generalization by Marginal Transfer Learning"
} | null | null | null | null | true | null | 1363 | null | Default | null | null |
null | {
"abstract": " We study two colored operads of configurations of little $n$-disks in a unit\n$n$-disk, with the centers of the small disks of one color restricted to an\n$m$-plane, $m<n$. We compute the rational homotopy type of these \\emph{extended\nSwiss Cheese operads} and show how they are connected to the rational homotopy\ntypes of the inclusion maps from the little $m$-disks to the little $n$-disks\noperad.\n",
"title": "(Non-)formality of the extended Swiss Cheese operads"
} | null | null | null | null | true | null | 1364 | null | Default | null | null |
null | {
"abstract": " This paper proposes a data-driven approach, by means of an Artificial Neural\nNetwork (ANN), to value financial options and to calculate implied volatilities\nwith the aim of accelerating the corresponding numerical methods. With ANNs\nbeing universal function approximators, this method trains an optimized ANN on\na data set generated by a sophisticated financial model, and runs the trained\nANN as an agent of the original solver in a fast and efficient way. We test\nthis approach on three different types of solvers, including the analytic\nsolution for the Black-Scholes equation, the COS method for the Heston\nstochastic volatility model and Brent's iterative root-finding method for the\ncalculation of implied volatilities. The numerical results show that the ANN\nsolver can reduce the computing time significantly.\n",
"title": "Pricing options and computing implied volatilities using neural networks"
} | null | null | null | null | true | null | 1365 | null | Default | null | null |
null | {
"abstract": " Tunneling of electrons into a two-dimensional electron system is known to\nexhibit an anomaly at low bias, in which the tunneling conductance vanishes due\nto a many-body interaction effect. Recent experiments have measured this\nanomaly between two copies of the half-filled Landau level as a function of\nin-plane magnetic field, and they suggest that increasing spin polarization\ndrives a deeper suppression of tunneling. Here we present a theory of the\ntunneling anomaly between two copies of the partially spin-polarized\nHalperin-Lee-Read state, and we show that the conventional description of the\ntunneling anomaly, based on the Coulomb self-energy of the injected charge\npacket, is inconsistent with the experimental observation. We propose that the\nexperiment is operating in a different regime, not previously considered, in\nwhich the charge-spreading action is determined by the compressibility of the\ncomposite fermions.\n",
"title": "Effect of magnetization on the tunneling anomaly in compressible quantum Hall states"
} | null | null | null | null | true | null | 1366 | null | Default | null | null |
null | {
"abstract": " We consider the problem of diagnosis where a set of simple observations are\nused to infer a potentially complex hidden hypothesis. Finding the optimal\nsubset of observations is intractable in general, thus we focus on the problem\nof active diagnosis, where the agent selects the next most-informative\nobservation based on the results of previous observations. We show that under\nthe assumption of uniform observation entropy, one can build an implication\nmodel which directly predicts the outcome of the potential next observation\nconditioned on the results of past observations, and selects the observation\nwith the maximum entropy. This approach enjoys reduced computation complexity\nby bypassing the complicated hypothesis space, and can be trained on\nobservation data alone, learning how to query without knowledge of the hidden\nhypothesis.\n",
"title": "Learning to Acquire Information"
} | null | null | null | null | true | null | 1367 | null | Default | null | null |
null | {
"abstract": " This work explores the feasibility of steering a drone with a (recurrent)\nneural network, based on input from a forward looking camera, in the context of\na high-level navigation task. We set up a generic framework for training a\nnetwork to perform navigation tasks based on imitation learning. It can be\napplied to both aerial and land vehicles. As a proof of concept we apply it to\na UAV (Unmanned Aerial Vehicle) in a simulated environment, learning to cross a\nroom containing a number of obstacles. So far only feedforward neural networks\n(FNNs) have been used to train UAV control. To cope with more complex tasks, we\npropose the use of recurrent neural networks (RNN) instead and successfully\ntrain an LSTM (Long-Short Term Memory) network for controlling UAVs. Vision\nbased control is a sequential prediction problem, known for its highly\ncorrelated input data. The correlation makes training a network hard,\nespecially an RNN. To overcome this issue, we investigate an alternative\nsampling method during training, namely window-wise truncated backpropagation\nthrough time (WW-TBPTT). Further, end-to-end training requires a lot of data\nwhich often is not available. Therefore, we compare the performance of\nretraining only the Fully Connected (FC) and LSTM control layers with networks\nwhich are trained end-to-end. Performing the relatively simple task of crossing\na room already reveals important guidelines and good practices for training\nneural control networks. Different visualizations help to explain the behavior\nlearned.\n",
"title": "How hard is it to cross the room? -- Training (Recurrent) Neural Networks to steer a UAV"
} | null | null | null | null | true | null | 1368 | null | Default | null | null |
null | {
"abstract": " Locality-sensitive hashing (LSH) is a fundamental technique for similarity\nsearch and similarity estimation in high-dimensional spaces. The basic idea is\nthat similar objects should produce hash collisions with probability\nsignificantly larger than objects with low similarity. We consider LSH for\nobjects that can be represented as point sets in either one or two dimensions.\nTo make the point sets finite size we consider the subset of points on a grid.\nDirectly applying LSH (e.g. min-wise hashing) to these point sets would require\ntime proportional to the number of points. We seek to achieve time that is much\nlower than direct approaches.\nTechnically, we introduce new primitives for range-efficient consistent\nsampling (of independent interest), and show how to turn such samples into LSH\nvalues. Another application of our technique is a data structure for quickly\nestimating the size of the intersection or union of a set of preprocessed\npolygons. Curiously, our consistent sampling method uses transformation to a\ngeometric problem.\n",
"title": "Range-efficient consistent sampling and locality-sensitive hashing for polygons"
} | null | null | null | null | true | null | 1369 | null | Default | null | null |
null | {
"abstract": " A commonly cited inefficiency of neural network training by back-propagation\nis the update locking problem: each layer must wait for the signal to propagate\nthrough the network before updating. We consider and analyze a training\nprocedure, Decoupled Greedy Learning (DGL), that addresses this problem more\neffectively and at scales beyond those of previous solutions. It is based on a\ngreedy relaxation of the joint training objective, recently shown to be\neffective in the context of Convolutional Neural Networks (CNNs) on large-scale\nimage classification. We consider an optimization of this objective that\npermits us to decouple the layer training, allowing for layers or modules in\nnetworks to be trained with a potentially linear parallelization in layers. We\nshow theoretically and empirically that this approach converges. In addition,\nwe empirically find that it can lead to better generalization than sequential\ngreedy optimization and even standard end-to-end back-propagation. We show that\nan extension of this approach to asynchronous settings, where modules can\noperate with large communication delays, is possible with the use of a replay\nbuffer. We demonstrate the effectiveness of DGL on the CIFAR-10 dataset\nagainst alternatives and on the large-scale ImageNet dataset, where we are able\nto effectively train VGG and ResNet-152 models.\n",
"title": "Decoupled Greedy Learning of CNNs"
} | null | null | null | null | true | null | 1370 | null | Default | null | null |
null | {
"abstract": " We establish a Pontryagin maximum principle for discrete time optimal control\nproblems under the following three types of constraints: a) constraints on the\nstates pointwise in time, b) constraints on the control actions pointwise in\ntime, and c) constraints on the frequency spectrum of the optimal control\ntrajectories. While the first two types of constraints are already included in\nthe existing versions of the Pontryagin maximum principle, it turns out that\nthe third type of constraints cannot be recast in any of the standard forms of\nthe existing results for the original control system. We provide two different\nproofs of our Pontryagin maximum principle in this article, and include several\nspecial cases fine-tuned to control-affine nonlinear and linear system models.\nIn particular, for minimization of quadratic cost functions and linear time\ninvariant control systems, we provide tight conditions under which the optimal\ncontrols under frequency constraints are either normal or abnormal.\n",
"title": "Discrete time Pontryagin maximum principle for optimal control problems under state-action-frequency constraints"
} | null | null | [
"Computer Science",
"Mathematics"
]
| null | true | null | 1371 | null | Validated | null | null |
null | {
"abstract": " A system of $N$ particles in a chemical medium in $\\mathbb{R}^{d}$ is studied\nin a discrete time setting. The underlying interacting particle system in\ncontinuous time can be expressed as \\begin{eqnarray} dX_{i}(t)\n&=&[-(I-A)X_{i}(t) + \\bigtriangledown h(t,X_{i}(t))]dt + dW_{i}(t), \\,\\,\nX_{i}(0)=x_{i}\\in \\mathbb{R}^{d}\\,\\,\\forall i=1,\\ldots,N\\nonumber\\\\\n\\frac{\\partial}{\\partial t} h(t,x)&=&-\\alpha h(t,x) + D\\bigtriangleup h(t,x)\n+\\frac{\\beta}{n} \\sum_{i=1}^{N} g(X_{i}(t),x),\\quad h(0,\\cdot) =\nh(\\cdot).\\label{main} \\end{eqnarray} where $X_{i}(t)$ is the location of the\n$i$th particle at time $t$ and $h(t,x)$ is the function measuring the\nconcentration of the medium at location $x$ with $h(0,x) = h(x)$. In this\narticle we describe a general discrete time non-linear formulation of the\naforementioned model and a strongly coupled particle system approximating it.\nSimilar models have been studied before (Budhiraja et al. (2011)) under a\nrestrictive compactness assumption on the domain of particles. In the current work\nthe particles take values in $\\mathbb{R}^{d}$ and consequently the stability analysis\nis particularly challenging. We provide sufficient conditions for the existence\nof a unique fixed point for the dynamical system governing the large $N$\nasymptotics of the particle empirical measure. We also provide uniform in time\nconvergence rates for the particle empirical measure to the corresponding limit\nmeasure under suitable conditions on the model.\n",
"title": "Quantitative evaluation of an active Chemotaxis model in Discrete time"
} | null | null | [
"Mathematics"
]
| null | true | null | 1372 | null | Validated | null | null |
null | {
"abstract": " It is well established that neural networks with deep architectures perform\nbetter than shallow networks for many tasks in machine learning. In statistical\nphysics, while there has been recent interest in representing physical data\nwith generative modelling, the focus has been on shallow neural networks. A\nnatural question to ask is whether deep neural networks hold any advantage over\nshallow networks in representing such data. We investigate this question by\nusing unsupervised, generative graphical models to learn the probability\ndistribution of a two-dimensional Ising system. Deep Boltzmann machines, deep\nbelief networks, and deep restricted Boltzmann networks are trained on thermal\nspin configurations from this system, and compared to the shallow architecture\nof the restricted Boltzmann machine. We benchmark the models, focussing on the\naccuracy of generating energetic observables near the phase transition, where\nthese quantities are most difficult to approximate. Interestingly, after\ntraining the generative networks, we observe that the accuracy essentially\ndepends only on the number of neurons in the first hidden layer of the network,\nand not on other model details such as network depth or model type. This is\nevidence that shallow networks are more efficient than deep networks at\nrepresenting physical probability distributions associated with Ising systems\nnear criticality.\n",
"title": "Deep Learning the Ising Model Near Criticality"
} | null | null | null | null | true | null | 1373 | null | Default | null | null |
null | {
"abstract": " For any stream of time-stamped edges that form a dynamic network, an\nimportant choice is the aggregation granularity that an analyst uses to bin the\ndata. Picking such a windowing of the data is often done by hand, or left up to\nthe technology that is collecting the data. However, the choice can make a big\ndifference in the properties of the dynamic network. This is the time scale\ndetection problem. In previous work, this problem is often solved with a\nheuristic as an unsupervised task. As an unsupervised problem, it is difficult\nto measure how well a given algorithm performs. In addition, we show that the\nquality of the windowing is dependent on which task an analyst wants to perform\non the network after windowing. Therefore the time scale detection problem\nshould not be handled independently from the rest of the analysis of the\nnetwork.\nWe introduce a framework that tackles both of these issues: By measuring the\nperformance of the time scale detection algorithm based on how well a given\ntask is accomplished on the resulting network, we are for the first time able\nto directly compare different time scale detection algorithms to each other.\nUsing this framework, we introduce time scale detection algorithms that take a\nsupervised approach: they leverage ground truth on training data to find a good\nwindowing of the test data. We compare the supervised approach to previous\napproaches and several baselines on real data.\n",
"title": "A supervised approach to time scale detection in dynamic networks"
} | null | null | null | null | true | null | 1374 | null | Default | null | null |
null | {
"abstract": " We revisit the generation of balanced octrees for adaptive mesh refinement\n(AMR) of Cartesian domains with immersed complex geometries. In a recent short\nnote [Hasbestan and Senocak, J. Comput. Phys. vol. 351:473-477 (2017)], we\nshowed that the data-locality of the Z-order curve in hashed linear octree\ngeneration methods may not be perfect because of potential collisions in the\nhash table. Building on that observation, we propose a binarized octree\ngeneration method that complies with the Z-order curve exactly. Similar to a\nhashed linear octree generation method, we use Morton encoding to index the\nnodes of an octree, but use a red-black tree in place of the hash table.\nRed-black tree is a special kind of a binary tree, which we use for insertion\nand deletion of elements during mesh adaptation. By strictly working with the\nbitwise representation of the octree, we remove computer hardware limitations\non the depth of adaptation on a single processor. Additionally, we introduce a\ngeometry encoding technique for rapidly tagging the solid geometry for\nrefinement. Our results for several geometries with different levels of\nadaptations show that the binarized octree generation outperforms the linear\noctree generation in terms of runtime performance at the expense of only a\nslight increase in memory usage. We provide the current AMR capability as\nopen-source software.\n",
"title": "Binarized octree generation for Cartesian adaptive mesh refinement around immersed geometries"
} | null | null | null | null | true | null | 1375 | null | Default | null | null |
null | {
"abstract": " With the advent of the era of artificial intelligence(AI), deep neural\nnetworks (DNNs) have shown huge superiority over humans in image recognition,\nspeech processing, autonomous vehicles and medical diagnosis. However, recent\nstudies indicate that DNNs are vulnerable to adversarial examples (AEs) which\nare designed by attackers to fool deep learning models. Different from real\nexamples, AEs can hardly be distinguished by human eyes, but mislead the\nmodel to predict incorrect outputs and therefore threaten security-critical\ndeep-learning applications. In recent years, the generation and defense of AEs\nhave become a research hotspot in the field of AI security. This article\nreviews the latest research progress of AEs. First, we introduce the concept,\ncause, characteristic and evaluation metrics of AEs, then give a survey on the\nstate-of-the-art AE generation methods with the discussion of advantages and\ndisadvantages. After that we review the existing defenses and discuss their\nlimitations. Finally, the future research opportunities and challenges of AEs\nare discussed.\n",
"title": "Adversarial Examples: Opportunities and Challenges"
} | null | null | null | null | true | null | 1376 | null | Default | null | null |
null | {
"abstract": " In the artificial intelligence field, learning often corresponds to changing\nthe parameters of a parameterized function. A learning rule is an algorithm or\nmathematical expression that specifies precisely how the parameters should be\nchanged. When creating an artificial intelligence system, we must make two\ndecisions: what representation should be used (i.e., what parameterized\nfunction should be used) and what learning rule should be used to search\nthrough the resulting set of representable functions. Using most learning\nrules, these two decisions are coupled in a subtle (and often unintentional)\nway. That is, using the same learning rule with two different representations\nthat can represent the same sets of functions can result in two different\noutcomes. After arguing that this coupling is undesirable, particularly when\nusing artificial neural networks, we present a method for partially decoupling\nthese two decisions for a broad class of learning rules that span unsupervised\nlearning, reinforcement learning, and supervised learning.\n",
"title": "Decoupling Learning Rules from Representations"
} | null | null | null | null | true | null | 1377 | null | Default | null | null |
null | {
"abstract": " The involution Stanley symmetric functions $\\hat{F}_y$ are the stable limits\nof the analogues of Schubert polynomials for the orbits of the orthogonal group\nin the flag variety. These symmetric functions are also generating functions\nfor involution words, and are indexed by the involutions in the symmetric\ngroup. By construction each $\\hat{F}_y$ is a sum of Stanley symmetric functions\nand therefore Schur positive. We prove the stronger fact that these power\nseries are Schur $P$-positive. We give an algorithm to efficiently compute the\ndecomposition of $\\hat{F}_y$ into Schur $P$-summands, and prove that this\ndecomposition is triangular with respect to the dominance order on partitions.\nAs an application, we derive pattern avoidance conditions which characterize\nthe involution Stanley symmetric functions which are equal to Schur\n$P$-functions. We deduce as a corollary that the involution Stanley symmetric\nfunction of the reverse permutation is a Schur $P$-function indexed by a\nshifted staircase shape. These results lead to alternate proofs of theorems of\nArdila-Serrano and DeWitt on skew Schur functions which are Schur\n$P$-functions. We also prove new Pfaffian formulas for certain related\ninvolution Schubert polynomials.\n",
"title": "Schur P-positivity and involution Stanley symmetric functions"
} | null | null | null | null | true | null | 1378 | null | Default | null | null |
null | {
"abstract": " In this paper, we focus on subspace learning problems on the Grassmann\nmanifold. Interesting applications in this setting include low-rank matrix\ncompletion and low-dimensional multivariate regression, among others. Motivated\nby privacy concerns, we aim to solve such problems in a decentralized setting\nwhere multiple agents have access to (and solve) only a part of the whole\noptimization problem. The agents communicate with each other to arrive at a\nconsensus, i.e., agree on a common quantity, via the gossip protocol.\nWe propose a novel cost function for subspace learning on the Grassmann\nmanifold, which is a weighted sum of several sub-problems (each solved by an\nagent) and the communication cost among the agents. The cost function has a\nfinite sum structure. In the proposed modeling approach, different agents learn\nindividual local subspace but they achieve asymptotic consensus on the global\nlearned subspace. The approach is scalable and parallelizable. Numerical\nexperiments show the efficacy of the proposed decentralized algorithms on\nvarious matrix completion and multivariate regression benchmarks.\n",
"title": "A Riemannian gossip approach to subspace learning on Grassmann manifold"
} | null | null | null | null | true | null | 1379 | null | Default | null | null |
null | {
"abstract": " Topologists are sometimes interested in space-valued diagrams over a given\nindex category, but it is tricky to say what such a diagram even is if we look\nfor a notion that is stable under equivalence. The same happens in (homotopy)\ntype theory, where it is known only for special cases how one can define a type\nof type-valued diagrams over a given index category. We offer several\nconstructions. We first show how to define homotopy coherent diagrams which\ncome with all higher coherence laws explicitly, with two variants that come\nwith assumption on the index category or on the type theory. Further, we\npresent a construction of diagrams over certain Reedy categories. As an\napplication, we add the degeneracies to the well-known construction of\nsemisimplicial types, yielding a construction of simplicial types up to any\ngiven finite level. The current paper is only an extended abstract, and a full\nversion is to follow. In the full paper, we will show that the different\nnotions of diagrams are equivalent to each other and to the known notion of\nReedy fibrant diagrams whenever the statement makes sense. In the current\npaper, we only sketch some core ideas of the proofs.\n",
"title": "Space-Valued Diagrams, Type-Theoretically (Extended Abstract)"
} | null | null | null | null | true | null | 1380 | null | Default | null | null |
null | {
"abstract": " An ancient repertoire of UV absorbing pigments which survive today in the\nphylogenetically oldest extant photosynthetic organisms the cyanobacteria point\nto a direction in evolutionary adaptation of the pigments and their associated\nbiota from largely UVC absorbing pigments in the Archean to pigments covering\never more of the longer wavelength UV and visible in the Phanerozoic.Such a\nscenario implies selection of photon dissipation rather than photoprotection\nover the evolutionary history of life.This is consistent with the thermodynamic\ndissipation theory of the origin and evolution of life which suggests that the\nmost important hallmark of biological evolution has been the covering of Earths\nsurface with organic pigment molecules and water to absorb and dissipate ever\nmore completely the prevailing surface solar spectrum.In this article we\ncompare a set of photophysical photochemical biosynthetic and other germane\nproperties of the two dominant classes of cyanobacterial UV absorbing pigments\nthe mycosporine like amino acids MAAs and scytonemins.Pigment wavelengths of\nmaximum absorption correspond with the time dependence of the prevailing Earth\nsurface solar spectrum and we proffer this as evidence for the selection of\nphoton dissipation rather than photoprotection over the history of life on\nEarth.\n",
"title": "Properties of cyanobacterial UV-absorbing pigments suggest their evolution was driven by optimizing photon dissipation rather than photoprotection"
} | null | null | null | null | true | null | 1381 | null | Default | null | null |
null | {
"abstract": " Output impedances are inherent elements of power sources in the electrical\ngrids. In this paper, we give an answer to the following question: What is the\neffect of output impedances on the inductivity of the power network? To address\nthis question, we propose a measure to evaluate the inductivity of a power\ngrid, and we compute this measure for various types of output impedances.\nFollowing this computation, it turns out that network inductivity highly\ndepends on the algebraic connectivity of the network. By exploiting the derived\nexpressions of the proposed measure, one can tune the output impedances in\norder to enforce a desired level of inductivity on the power system.\nFurthermore, the results show that the more \"connected\" the network is, the\nmore the output impedances diffuse into the network. Finally, using Kron\nreduction, we provide examples that demonstrate the utility and validity of the\nmethod.\n",
"title": "Output Impedance Diffusion into Lossy Power Lines"
} | null | null | null | null | true | null | 1382 | null | Default | null | null |
null | {
"abstract": " The quest to observe gravitational waves challenges our ability to\ndiscriminate signals from detector noise. This issue is especially relevant for\ntransient gravitational waves searches with a robust eyes wide open approach,\nthe so called all- sky burst searches. Here we show how signal classification\nmethods inspired by broad astrophysical characteristics can be implemented in\nall-sky burst searches preserving their generality. In our case study, we apply\na multivariate analyses based on artificial neural networks to classify waves\nemitted in compact binary coalescences. We enhance by orders of magnitude the\nsignificance of signals belonging to this broad astrophysical class against the\nnoise background. Alternatively, at a given level of mis-classification of\nnoise events, we can detect about 1/4 more of the total signal population. We\nalso show that a more general strategy of signal classification can actually be\nperformed, by testing the ability of artificial neural networks in\ndiscriminating different signal classes. The possible impact on future\nobservations by the LIGO-Virgo network of detectors is discussed by analysing\nrecoloured noise from previous LIGO-Virgo data with coherent WaveBurst, one of\nthe flagship pipelines dedicated to all-sky searches for transient\ngravitational waves.\n",
"title": "Enhancing the significance of gravitational wave bursts through signal classification"
} | null | null | [
"Physics"
]
| null | true | null | 1383 | null | Validated | null | null |
null | {
"abstract": " Finite Gaussian mixture models are widely used for model-based clustering of\ncontinuous data. Nevertheless, since the number of model parameters scales\nquadratically with the number of variables, these models can be easily\nover-parameterized. For this reason, parsimonious models have been developed\nvia covariance matrix decompositions or assuming local independence. However,\nthese remedies do not allow for direct estimation of sparse covariance matrices\nnor do they take into account that the structure of association among the\nvariables can vary from one cluster to the other. To this end, we introduce\nmixtures of Gaussian covariance graph models for model-based clustering with\nsparse covariance matrices. A penalized likelihood approach is employed for\nestimation and a general penalty term on the graph configurations can be used\nto induce different levels of sparsity and incorporate prior knowledge. Model\nestimation is carried out using a structural-EM algorithm for parameters and\ngraph structure estimation, where two alternative strategies based on a genetic\nalgorithm and an efficient stepwise search are proposed for inference. With\nthis approach, sparse component covariance matrices are directly obtained. The\nframework results in a parsimonious model-based clustering of the data via a\nflexible model for the within-group joint distribution of the variables.\nExtensive simulated data experiments and application to illustrative datasets\nshow that the method attains good classification performance and model quality.\n",
"title": "Model-based Clustering with Sparse Covariance Matrices"
} | null | null | null | null | true | null | 1384 | null | Default | null | null |
null | {
"abstract": " We document the data transfer workflow, data transfer performance, and other\naspects of staging approximately 56 terabytes of climate model output data from\nthe distributed Coupled Model Intercomparison Project (CMIP5) archive to the\nNational Energy Research Supercomputing Center (NERSC) at the Lawrence Berkeley\nNational Laboratory required for tracking and characterizing extratropical\nstorms, a phenomena of importance in the mid-latitudes. We present this\nanalysis to illustrate the current challenges in assembling multi-model data\nsets at major computing facilities for large-scale studies of CMIP5 data.\nBecause of the larger archive size of the upcoming CMIP6 phase of model\nintercomparison, we expect such data transfers to become of increasing\nimportance, and perhaps of routine necessity. We find that data transfer rates\nusing the ESGF are often slower than what is typically available to US\nresidences and that there is significant room for improvement in the data\ntransfer capabilities of the ESGF portal and data centers both in terms of\nworkflow mechanics and in data transfer performance. We believe performance\nimprovements of at least an order of magnitude are within technical reach using\ncurrent best practices, as illustrated by the performance we achieved in\ntransferring the complete raw data set between two high performance computing\nfacilities. To achieve these performance improvements, we recommend: that\ncurrent best practices (such as the Science DMZ model) be applied to the data\nservers and networks at ESGF data centers; that sufficient financial and human\nresources be devoted at the ESGF data centers for systems and network\nengineering tasks to support high performance data movement; and that\nperformance metrics for data transfer between ESGF data centers and major\ncomputing facilities used for climate data analysis be established, regularly\ntested, and published.\n",
"title": "An Assessment of Data Transfer Performance for Large-Scale Climate Data Analysis and Recommendations for the Data Infrastructure for CMIP6"
} | null | null | null | null | true | null | 1385 | null | Default | null | null |
null | {
"abstract": " Datasets are often reused to perform multiple statistical analyses in an\nadaptive way, in which each analysis may depend on the outcomes of previous\nanalyses on the same dataset. Standard statistical guarantees do not account\nfor these dependencies and little is known about how to provably avoid\noverfitting and false discovery in the adaptive setting. We consider a natural\nformalization of this problem in which the goal is to design an algorithm that,\ngiven a limited number of i.i.d.~samples from an unknown distribution, can\nanswer adaptively-chosen queries about that distribution.\nWe present an algorithm that estimates the expectations of $k$ arbitrary\nadaptively-chosen real-valued estimators using a number of samples that scales\nas $\\sqrt{k}$. The answers given by our algorithm are essentially as accurate\nas if fresh samples were used to evaluate each estimator. In contrast, prior\nwork yields error guarantees that scale with the worst-case sensitivity of each\nestimator. We also give a version of our algorithm that can be used to verify\nanswers to such queries where the sample complexity depends logarithmically on\nthe number of queries $k$ (as in the reusable holdout technique).\nOur algorithm is based on a simple approximate median algorithm that\nsatisfies the strong stability guarantees of differential privacy. Our\ntechniques provide a new approach for analyzing the generalization guarantees\nof differentially private algorithms.\n",
"title": "Generalization for Adaptively-chosen Estimators via Stable Median"
} | null | null | null | null | true | null | 1386 | null | Default | null | null |
null | {
"abstract": " Regression or classification? This is perhaps the most basic question faced\nwhen tackling a new supervised learning problem. We present an Evolutionary\nDeep Learning (EDL) algorithm that automatically solves this by identifying the\nquestion type with high accuracy, along with a proposed deep architecture.\nTypically, a significant amount of human insight and preparation is required\nprior to executing machine learning algorithms. For example, when creating deep\nneural networks, the number of parameters must be selected in advance and\nfurthermore, a lot of these choices are made based upon pre-existing knowledge\nof the data such as the use of a categorical cross entropy loss function.\nHumans are able to study a dataset and decide whether it represents a\nclassification or a regression problem, and consequently make decisions which\nwill be applied to the execution of the neural network. We propose the\nAutomated Problem Identification (API) algorithm, which uses an evolutionary\nalgorithm interface to TensorFlow to manipulate a deep neural network to decide\nif a dataset represents a classification or a regression problem. We test API\non 16 different classification, regression and sentiment analysis datasets with\nup to 10,000 features and up to 17,000 unique target values. API achieves an\naverage accuracy of $96.3\\%$ in identifying the problem type without hardcoding\nany insights about the general characteristics of regression or classification\nproblems. For example, API successfully identifies classification problems even\nwith 1000 target values. Furthermore, the algorithm recommends which loss\nfunction to use and also recommends a neural network architecture. Our work is\ntherefore a step towards fully automated machine learning.\n",
"title": "Automated Problem Identification: Regression vs Classification via Evolutionary Deep Networks"
} | null | null | null | null | true | null | 1387 | null | Default | null | null |
null | {
"abstract": " Anthropogenic climate change increased the probability that a short-duration,\nintense rainfall event would occur in parts of southeast China. This type of\nevent occurred in May 2015, causing serious flooding.\n",
"title": "Attribution of extreme rainfall in Southeast China during May 2015"
} | null | null | null | null | true | null | 1388 | null | Default | null | null |
null | {
"abstract": " In this paper boundary regularity for p-harmonic functions is studied with\nrespect to the Mazurkiewicz boundary and other compactifications. In\nparticular, the Kellogg property (which says that the set of irregular boundary\npoints has capacity zero) is obtained for a large class of compactifications,\nbut also two examples when it fails are given. This study is done for complete\nmetric spaces equipped with doubling measures supporting a p-Poincaré\ninequality, but the results are new also in unweighted Euclidean spaces.\n",
"title": "The Kellogg property and boundary regularity for p-harmonic functions with respect to the Mazurkiewicz boundary and other compactifications"
} | null | null | null | null | true | null | 1389 | null | Default | null | null |
null | {
"abstract": " In this paper, we propose to construct confidence bands by bootstrapping the\ndebiased kernel density estimator (for density estimation) and the debiased\nlocal polynomial regression estimator (for regression analysis). The idea of\nusing a debiased estimator was first introduced in Calonico et al. (2015),\nwhere they construct a confidence interval of the density function (and\nregression function) at a given point by explicitly estimating stochastic\nvariations. We extend their ideas and propose a bootstrap approach for\nconstructing confidence bands that is uniform for every point in the support.\nWe prove that the resulting bootstrap confidence band is asymptotically valid\nand is compatible with most tuning parameter selection approaches, such as the\nrule of thumb and cross-validation. We further generalize our method to\nconfidence sets of density level sets and inverse regression problems.\nSimulation studies confirm the validity of the proposed confidence bands/sets.\n",
"title": "Nonparametric Inference via Bootstrapping the Debiased Estimator"
} | null | null | null | null | true | null | 1390 | null | Default | null | null |
null | {
"abstract": " Finding actions that satisfy the constraints imposed by both external inputs\nand internal representations is central to decision making. We demonstrate that\nsome important classes of constraint satisfaction problems (CSPs) can be solved\nby networks composed of homogeneous cooperative-competitive modules that have\nconnectivity similar to motifs observed in the superficial layers of neocortex.\nThe winner-take-all modules are sparsely coupled by programming neurons that\nembed the constraints onto the otherwise homogeneous modular computational\nsubstrate. We show rules that embed any instance of the CSPs planar four-color\ngraph coloring, maximum independent set, and Sudoku on this substrate, and\nprovide mathematical proofs that guarantee these graph coloring problems will\nconvergence to a solution. The network is composed of non-saturating linear\nthreshold neurons. Their lack of right saturation allows the overall network to\nexplore the problem space driven through the unstable dynamics generated by\nrecurrent excitation. The direction of exploration is steered by the constraint\nneurons. While many problems can be solved using only linear inhibitory\nconstraints, network performance on hard problems benefits significantly when\nthese negative constraints are implemented by non-linear multiplicative\ninhibition. Overall, our results demonstrate the importance of instability\nrather than stability in network computation, and also offer insight into the\ncomputational role of dual inhibitory mechanisms in neural circuits.\n",
"title": "Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks"
} | null | null | null | null | true | null | 1391 | null | Default | null | null |
null | {
"abstract": " A major challenge in brain tumor treatment planning and quantitative\nevaluation is determination of the tumor extent. The noninvasive magnetic\nresonance imaging (MRI) technique has emerged as a front-line diagnostic tool\nfor brain tumors without ionizing radiation. Manual segmentation of brain tumor\nextent from 3D MRI volumes is a very time-consuming task and the performance is\nhighly relied on operator's experience. In this context, a reliable fully\nautomatic segmentation method for the brain tumor segmentation is necessary for\nan efficient measurement of the tumor extent. In this study, we propose a fully\nautomatic method for brain tumor segmentation, which is developed using U-Net\nbased deep convolutional networks. Our method was evaluated on Multimodal Brain\nTumor Image Segmentation (BRATS 2015) datasets, which contain 220 high-grade\nbrain tumor and 54 low-grade tumor cases. Cross-validation has shown that our\nmethod can obtain promising segmentation efficiently.\n",
"title": "Automatic Brain Tumor Detection and Segmentation Using U-Net Based Fully Convolutional Networks"
} | null | null | null | null | true | null | 1392 | null | Default | null | null |
null | {
"abstract": " In a localization network, the line-of-sight between anchors (transceivers)\nand targets may be blocked due to the presence of obstacles in the environment.\nDue to the non-zero size of the obstacles, the blocking is typically correlated\nacross both anchor and target locations, with the extent of correlation\nincreasing with obstacle size. If a target does not have line-of-sight to a\nminimum number of anchors, then its position cannot be estimated unambiguously\nand is, therefore, said to be in a blind-spot. However, the analysis of the\nblind-spot probability of a given target is challenging due to the inherent\nrandomness in the obstacle locations and sizes. In this letter, we develop a\nnew framework to analyze the worst-case impact of correlated blocking on the\nblind-spot probability of a typical target; in particular, we model the\nobstacles by a Poisson line process and the anchor locations by a Poisson point\nprocess. For this setup, we define the notion of the asymptotic blind-spot\nprobability of the typical target and derive a closed-form expression for it as\na function of the area distribution of a typical Poisson-Voronoi cell. As an\nupper bound for the more realistic case when obstacles have finite dimensions,\nthe asymptotic blind-spot probability is useful as a design tool to ensure that\nthe blind-spot probability of a typical target does not exceed a desired\nthreshold, $\\epsilon$.\n",
"title": "Asymptotic Blind-spot Analysis of Localization Networks under Correlated Blocking using a Poisson Line Process"
} | null | null | [
"Computer Science"
]
| null | true | null | 1393 | null | Validated | null | null |
null | {
"abstract": " We investigate the relation between kinematic morphology, intrinsic colour\nand stellar mass of galaxies in the EAGLE cosmological hydrodynamical\nsimulation. We calculate the intrinsic u-r colours and measure the fraction of\nkinetic energy invested in ordered corotation of 3562 galaxies at z=0 with\nstellar masses larger than $10^{10}M_{\\odot}$. We perform a visual inspection\nof gri-composite images and find that our kinematic morphology correlates\nstrongly with visual morphology. EAGLE produces a galaxy population for which\nmorphology is tightly correlated with the location in the colour- mass diagram,\nwith the red sequence mostly populated by elliptical galaxies and the blue\ncloud by disc galaxies. Satellite galaxies are more likely to be on the red\nsequence than centrals, and for satellites the red sequence is morphologically\nmore diverse. These results show that the connection between mass, intrinsic\ncolour and morphology arises from galaxy formation models that reproduce the\nobserved galaxy mass function and sizes.\n",
"title": "The relation between galaxy morphology and colour in the EAGLE simulation"
} | null | null | null | null | true | null | 1394 | null | Default | null | null |
null | {
"abstract": " In this paper, we introduce the BMT distribution as an unimodal alternative\nto continuous univariate distributions supported on a bounded interval. The\nideas behind the mathematical formulation of this new distribution come from\ncomputer aid geometric design, specifically from Bezier curves. First, we\nreview general properties of a distribution given by parametric equations and\nextend the definition of a Bezier distribution. Then, after proposing the BMT\ncumulative distribution function, we derive its probability density function\nand a closed-form expression for quantile function, median, interquartile\nrange, mode, and moments. The domain change from [0,1] to [c,d] is mentioned.\nEstimation of parameters is approached by the methods of maximum likelihood and\nmaximum product of spacing. We test the numerical estimation procedures using\nsome simulated data. Usefulness and flexibility of the new distribution are\nillustrated in three real data sets. The BMT distribution has a significant\npotential to estimate domain parameters and to model data outside the scope of\nthe beta or similar distributions.\n",
"title": "An alternative to continuous univariate distributions supported on a bounded interval: The BMT distribution"
} | null | null | null | null | true | null | 1395 | null | Default | null | null |
null | {
"abstract": " While learning visuomotor skills in an end-to-end manner is appealing, deep\nneural networks are often uninterpretable and fail in surprising ways. For\nrobotics tasks, such as autonomous driving, models that explicitly represent\nobjects may be more robust to new scenes and provide intuitive visualizations.\nWe describe a taxonomy of object-centric models which leverage both object\ninstances and end-to-end learning. In the Grand Theft Auto V simulator, we show\nthat object centric models outperform object-agnostic methods in scenes with\nother vehicles and pedestrians, even with an imperfect detector. We also\ndemonstrate that our architectures perform well on real world environments by\nevaluating on the Berkeley DeepDrive Video dataset.\n",
"title": "Deep Object Centric Policies for Autonomous Driving"
} | null | null | [
"Computer Science"
]
| null | true | null | 1396 | null | Validated | null | null |
null | {
"abstract": " We searched high resolution spectra of 5600 nearby stars for emission lines\nthat are both inconsistent with a natural origin and unresolved spatially, as\nwould be expected from extraterrestrial optical lasers. The spectra were\nobtained with the Keck 10-meter telescope, including light coming from within\n0.5 arcsec of the star, corresponding typically to within a few to tens of au\nof the star, and covering nearly the entire visible wavelength range from 3640\nto 7890 angstroms. We establish detection thresholds by injecting synthetic\nlaser emission lines into our spectra and blindly analyzing them for\ndetections. We compute flux density detection thresholds for all wavelengths\nand spectral types sampled. Our detection thresholds for the power of the\nlasers themselves range from 3 kW to 13 MW, independent of distance to the star\nbut dependent on the competing \"glare\" of the spectral energy distribution of\nthe star and on the wavelength of the laser light, launched from a benchmark,\ndiffraction-limited 10-meter class telescope. We found no such laser emission\ncoming from the planetary region around any of the 5600 stars. As they contain\nroughly 2000 lukewarm, Earth-size planets, we rule out models of the Milky Way\nin which over 0.1 percent of warm, Earth-size planets harbor technological\ncivilizations that, intentionally or not, are beaming optical lasers toward us.\nA next generation spectroscopic laser search will be done by the Breakthrough\nListen initiative, targeting more stars, especially stellar types overlooked\nhere including spectral types O, B, A, early F, late M, and brown dwarfs, and\nastrophysical exotica.\n",
"title": "A Search for Laser Emission with Megawatt Thresholds from 5600 FGKM Stars"
} | null | null | [
"Physics"
]
| null | true | null | 1397 | null | Validated | null | null |
null | {
"abstract": " Learning large scale nonlinear ordinary differential equation (ODE) systems\nfrom data is known to be computationally and statistically challenging. We\npresent a framework together with the adaptive integral matching (AIM)\nalgorithm for learning polynomial or rational ODE systems with a sparse network\nstructure. The framework allows for time course data sampled from multiple\nenvironments representing e.g. different interventions or perturbations of the\nsystem. The algorithm AIM combines an initial penalised integral matching step\nwith an adapted least squares step based on solving the ODE numerically. The R\npackage episode implements AIM together with several other algorithms and is\navailable from CRAN. It is shown that AIM achieves state-of-the-art network\nrecovery for the in silico phosphoprotein abundance data from the eighth DREAM\nchallenge with an AUROC of 0.74, and it is demonstrated via a range of\nnumerical examples that AIM has good statistical properties while being\ncomputationally feasible even for large systems.\n",
"title": "Learning Large Scale Ordinary Differential Equation Systems"
} | null | null | [
"Mathematics",
"Statistics"
]
| null | true | null | 1398 | null | Validated | null | null |
null | {
"abstract": " Clustering mixtures of Gaussian distributions is a fundamental and\nchallenging problem that is ubiquitous in various high-dimensional data\nprocessing tasks. While state-of-the-art work on learning Gaussian mixture\nmodels has focused primarily on improving separation bounds and their\ngeneralization to arbitrary classes of mixture models, less emphasis has been\npaid to practical computational efficiency of the proposed solutions. In this\npaper, we propose a novel and highly efficient clustering algorithm for $n$\npoints drawn from a mixture of two arbitrary Gaussian distributions in\n$\\mathbb{R}^p$. The algorithm involves performing random 1-dimensional\nprojections until a direction is found that yields a user-specified clustering\nerror $e$. For a 1-dimensional separation parameter $\\gamma$ satisfying\n$\\gamma=Q^{-1}(e)$, the expected number of such projections is shown to be\nbounded by $o(\\ln p)$, when $\\gamma$ satisfies $\\gamma\\leq\nc\\sqrt{\\ln{\\ln{p}}}$, with $c$ as the separability parameter of the two\nGaussians in $\\mathbb{R}^p$. Consequently, the expected overall running time of\nthe algorithm is linear in $n$ and quasi-linear in $p$ at $o(\\ln{p})O(np)$, and\nthe sample complexity is independent of $p$. This result stands in contrast to\nprior works which provide polynomial, with at-best quadratic, running time in\n$p$ and $n$. We show that our bound on the expected number of 1-dimensional\nprojections extends to the case of three or more Gaussian components, and we\npresent a generalization of our results to mixture distributions beyond the\nGaussian model.\n",
"title": "Linear Time Clustering for High Dimensional Mixtures of Gaussian Clouds"
} | null | null | null | null | true | null | 1399 | null | Default | null | null |
null | {
"abstract": " In this study, we developed a method to estimate the relationship between\nstimulation current and volatility during isometric contraction. In functional\nelectrical stimulation (FES), joints are driven by applying voltage to muscles.\nThis technology has been used for a long time in the field of rehabilitation,\nand recently application oriented research has been reported. However,\nestimation of the relationship between stimulus value and exercise capacity has\nnot been discussed to a great extent. Therefore, in this study, a human muscle\nmodel was estimated using the transfer function estimation method with fast\nFourier transform. It was found that the relationship between stimulation\ncurrent and force exerted could be expressed by a first-order lag system. In\nverification of the force estimate, the ability of the proposed model to\nestimate the exerted force under steady state response was found to be good.\n",
"title": "Estimation of Relationship between Stimulation Current and Force Exerted during Isometric Contraction"
} | null | null | null | null | true | null | 1400 | null | Default | null | null |