Dataset columns (name, value type, observed values):

  text               null
  inputs             dict (abstract, title)
  prediction         null
  prediction_agent   null
  annotation         list of labels (null when unlabeled)
  annotation_agent   null
  multi_label        bool (1 class: true)
  explanation        null
  id                 string, length 1 to 5
  metadata           null
  status             string, 2 classes (Default, Validated)
  event_timestamp    null
  metrics            null
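
Each row therefore carries an inputs dict (abstract and title), an id, a status, a multi_label flag, and an optional annotation list; the remaining columns are empty. As a rough illustration only (the code below is an assumption added for readability, not part of the dump; the names Row and parse_row are made up), the same row layout can be written as a small Python record:

    # Hypothetical sketch: a typed view of one row of this dump.
    # Column names follow the schema above; Row and parse_row are illustrative.
    import json
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Row:
        inputs: dict                            # {"abstract": "...", "title": "..."}
        id: str                                 # short string id, e.g. "12801"
        status: str                             # "Default" or "Validated"
        multi_label: bool = True                # true for every row shown here
        annotation: Optional[List[str]] = None  # e.g. ["Computer Science"]; None if unlabeled

    def parse_row(inputs_json: str, id_: str, status: str,
                  annotation: Optional[List[str]] = None) -> Row:
        # The inputs column is stored as a JSON object with "abstract" and "title".
        return Row(inputs=json.loads(inputs_json), id=id_, status=status,
                   annotation=annotation)

    # Example: parse_row('{"abstract": "...", "title": "..."}', "12801", "Default")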

Rows:
{ "abstract": " Transport and security protocols are essential to ensure reliable and secure\ncommunication between two parties. For IoT applications, these protocols must\nbe lightweight, since IoT devices are usually resource constrained.\nUnfortunately, the existing transport and security protocols -- namely TCP/TLS\nand UDP/DTLS -- fall short in terms of connection overhead, latency, and\nconnection migration when used in IoT applications. In this paper, after\nstudying the root causes of these shortcomings, we show how utilizing QUIC in\nIoT scenarios results in a higher performance. Based on these observations, and\ngiven the popularity of MQTT as an IoT application layer protocol, we integrate\nMQTT with QUIC. By presenting the main APIs and functions developed, we explain\nhow connection establishment and message exchange functionalities work. We\nevaluate the performance of MQTTw/QUIC versus MQTTw/TCP using wired, wireless,\nand long-distance testbeds. Our results show that MQTTw/QUIC reduces connection\noverhead in terms of the number of packets exchanged with the broker by up to\n56%. In addition, by eliminating half-open connections, MQTTw/QUIC reduces\nprocessor and memory usage by up to 83% and 50%, respectively. Furthermore, by\nremoving the head-of-line blocking problem, delivery latency is reduced by up\nto 55%. We also show that the throughput drops experienced by MQTTw/QUIC when a\nconnection migration happens is considerably lower than that of MQTTw/TCP.\n", "title": "Implementation and Analysis of QUIC for MQTT" }
id: 12801 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Recent work by (Richardson and Kuhn, 2017a,b; Richardson et al., 2018) looks\nat semantic parser induction and question answering in the domain of source\ncode libraries and APIs. In this brief note, we formalize the representations\nbeing learned in these studies and introduce a simple domain specific language\nand a systematic translation from this language to first-order logic. By\nrecasting the target representations in terms of classical logic, we aim to\nbroaden the applicability of existing code datasets for investigating more\ncomplex natural language understanding and reasoning problems in the software\ndomain.\n", "title": "A Language for Function Signature Representations" }
id: 12802 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " This PhD thesis considers the performance evaluation and enhancement of video\ncommunication over wireless channels. The system model considers hybrid\nautomatic repeat request (HARQ) with Chase combining and turbo product codes\n(TPC). The thesis proposes algorithms and techniques to optimize the\nthroughput, transmission power and complexity of HARQ-based wireless video\ncommunication. A semi-analytical solution is developed to model the performance\nof delay-constrained HARQ systems. The semi-analytical and Monte Carlo\nsimulation results reveal that significant complexity reduction can be achieved\nby noting that the coding gain advantage of the soft over hard decoding is\nreduced when Chase combining is used, and it actually vanishes completely for\nparticular codes. Moreover, the thesis proposes a novel power optimization\nalgorithm that achieves a significant power saving of up to 80%. Joint\nthroughput maximization and complexity reduction is considered as well. A CRC\n(cyclic redundancy check)-free HARQ is proposed to improve the system\nthroughput when short packets are transmitted. In addition, the computational\ncomplexity/delay is reduced when the packets transmitted are long. Finally, a\ncontent-aware and occupancy-based HARQ scheme is proposed to ensure minimum\nvideo quality distortion with continuous playback.\n", "title": "Link Adaptation for Wireless Video Communication Systems" }
id: 12803 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " In this paper we consider polytopes given by systems of $n$ inequalities in\n$d$ variables, where every inequality has at most two variables with nonzero\ncoefficient. We denote this family by $LI(2)$. We show that despite of the easy\nalgebraic structure, polytopes in $LI(2)$ can have high complexity. We\nconstruct a polytope in $LI(2)$, whose number of vertices is almost the number\nof vertices of the dual cyclic polytope, the difference is a multiplicative\nfactor of depending on $d$ and in particular independent of $n$. Moreover we\nshow that the dual cyclic polytope can not be realized in $LI(2)$.\n", "title": "On the Complexity of Polytopes in $LI(2)$" }
id: 12804 | status: Validated | multi_label: true | annotation: [ "Computer Science" ] | (all other fields null)

{ "abstract": " The dynamic behavior of a capacitive micro-electro-mechanical (MEMS)\naccelerometer is evaluated by using a theoretical approach which makes use of a\nsqueeze film damping (SFD) model and ideal gas approach. The study investigates\nthe performance of the device as a function of the temperature, from 228 K to\n398 K, and pressure, from 20 to 1000 Pa, observing the damping gas trapped\ninside de mechanical transducer. Thermoelastic properties of the silicon bulk\nare considered for the entire range of temperature. The damping gases\nconsidered are Air, Helium and Argon. The global behavior of the system is\nevaluated considering the electro-mechanical sensitivity (SEM) as the main\nfigure of merit in frequency domain. The results show the behavior of the main\nmechanism losses of SFD, as well as the dynamic sensitivity of the MEMS\ntransducer system, and are in good agreement with experimental dynamic results\nbehavior.\n", "title": "Dynamic Sensitivity Study of MEMS Capacitive Acceleration Transducer Based on Analytical Squeeze Film Damping and Mechanical Thermoelasticity Approaches" }
id: 12805 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " The Lorentz force law of classical electrodynamics states that the force F\nexerted by the magnetic induction B on a particle of charge q moving with\nvelocity V is given by F=qVxB. Since this force is orthogonal to the direction\nof motion, the magnetic field is said to be incapable of performing mechanical\nwork. Yet there is no denying that a permanent magnet can readily perform\nmechanical work by pushing/pulling on another permanent magnet -- or by\nattracting pieces of magnetizable material such as scrap iron or iron filings.\nWe explain this apparent contradiction by examining the magnetic Lorentz force\nacting on an Amperian current loop, which is the model for a magnetic dipole.\nWe then extend the discussion by analyzing the Einstein-Laub model of magnetic\ndipoles in the presence of external magnetic fields.\n", "title": "Nature of the electromagnetic force between classical magnetic dipoles" }
id: 12806 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " We consider multivariate $\\mathbb{L}_2$-approximation in reproducing kernel\nHilbert spaces which are tensor products of weighted Walsh spaces and weighted\nKorobov spaces. We study the minimal worst-case error\n$e^{\\mathbb{L}_2-\\mathrm{app},\\Lambda}(N,d)$ of all algorithms that use $N$\ninformation evaluations from the class $\\Lambda$ in the $d$-dimensional case.\nThe two classes $\\Lambda$ considered in this paper are the class $\\Lambda^{\\rm\nall}$ consisting of all linear functionals and the class $\\Lambda^{\\rm std}$\nconsisting only of function evaluations.\nThe focus lies on the dependence of\n$e^{\\mathbb{L}_2-\\mathrm{app},\\Lambda}(N,d)$ on the dimension $d$. The main\nresults are conditions for weak, polynomial, and strong polynomial\ntractability.\n", "title": "Tractability of $\\mathbb{L}_2$-approximation in hybrid function spaces" }
id: 12807 | status: Validated | multi_label: true | annotation: [ "Mathematics" ] | (all other fields null)

{ "abstract": " We show from a weak comparison principle (the Ultrapower Axiom) that the\nMitchell order is linear on certain kinds of ultrafilters: normal ultrafilters,\nDodd solid ultrafilters, and assuming GCH, generalized normal ultrafilters. In\nthe process we prove a generalization of Solovay's lemma to singular cardinals.\n", "title": "The Linearity of the Mitchell Order" }
id: 12808 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " The precise knowledge of the atomic order in monocrystalline alloys is\nfundamental to understand and predict their physical properties. With this\nperspective, we utilized laser-assisted atom probe tomography to investigate\nthe three-dimensional distribution of atoms in non-equilibrium epitaxial\nSn-rich group IV SiGeSn ternary semiconductors. Different atom probe\nstatistical analysis tools including frequency distribution analysis, partial\nradial distribution functions, and nearest neighbor analysis were employed in\norder to evaluate and compare the behavior of the three elements to their\nspatial distributions in an ideal solid solution. This atomistic-level analysis\nprovided clear evidence of an unexpected repulsive interaction between Sn and\nSi leading to the deviation of Si atoms from the theoretical random\ndistribution. This departure from an ideal solid solution is supported by first\nprincipal calculations and attributed to the tendency of the system to reduce\nits mixing enthalpy throughout the layer-by-layer growth process.\n", "title": "Atomic Order in Non-Equilibrium Silicon-Germanium-Tin Semiconductors" }
id: 12809 | status: Validated | multi_label: true | annotation: [ "Physics" ] | (all other fields null)

{ "abstract": " In this paper we obtain some possibilistic variants of the probabilistic laws\nof large numbers, different from those obtained by other authors, but very\nnatural extensions of the corresponding ones in probability theory. Our results\nare based on the possibility measure and on the \"maxitive\" definitions for\npossibility expectation and possibility variance. Also, we show that in this\nframe, the weak form of the law of large numbers, implies the strong law of\nlarge numbers.\n", "title": "On the Laws of Large Numbers in Possibility Theory" }
id: 12810 | status: Validated | multi_label: true | annotation: [ "Mathematics" ] | (all other fields null)

{ "abstract": " The Highly-Adaptive-Lasso(HAL)-TMLE is an efficient estimator of a pathwise\ndifferentiable parameter in a statistical model that at minimal (and possibly\nonly) assumes that the sectional variation norm of the true nuisance parameters\nare finite. It relies on an initial estimator (HAL-MLE) of the nuisance\nparameters by minimizing the empirical risk over the parameter space under the\nconstraint that sectional variation norm is bounded by a constant, where this\nconstant can be selected with cross-validation. In the formulation of the\nHALMLE this sectional variation norm corresponds with the sum of absolute value\nof coefficients for an indicator basis. Due to its reliance on machine\nlearning, statistical inference for the TMLE has been based on its normal limit\ndistribution, thereby potentially ignoring a large second order remainder in\nfinite samples.\nIn this article, we present four methods for construction of a finite sample\n0.95-confidence interval that use the nonparametric bootstrap to estimate the\nfinite sample distribution of the HAL-TMLE or a conservative distribution\ndominating the true finite sample distribution. We prove that it consistently\nestimates the optimal normal limit distribution, while its approximation error\nis driven by the performance of the bootstrap for a well behaved empirical\nprocess. We demonstrate our general inferential methods for 1) nonparametric\nestimation of the average treatment effect based on observing on each unit a\ncovariate vector, binary treatment, and outcome, and for 2) nonparametric\nestimation of the integral of the square of the multivariate density of the\ndata distribution.\n", "title": "Finite Sample Inference for Targeted Learning" }
id: 12811 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " An elementary proof of the two-sidedness of the matrix-inverse is given using\nonly linear independence and the reduced row-echelon form of a matrix. In\naddition, it is shown that a matrix is invertible if and only if it is\nrow-equivalent to the identity matrix without appealing to elementary matrices.\nThis proof underscores the importance of a basis and provides a proof of the\ninvertible matrix theorem.\n", "title": "A Short and Elementary Proof of the Two-sidedness of the Matrix-Inverse" }
id: 12812 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Rich-club ordering and the dyadic effect are two phenomena observed in\ncomplex networks that are based on the presence of certain substructures\ncomposed of specific nodes. Rich-club ordering represents the tendency of\nhighly connected and important elements to form tight communities with other\ncentral elements. The dyadic effect denotes the tendency of nodes that share a\ncommon property to be much more interconnected than expected. In this study, we\nconsider the interrelation between these two phenomena, which until now have\nalways been studied separately. We contribute with a new formulation of the\nrich-club measures in terms of the dyadic effect. Moreover, we introduce\ncertain measures related to the analysis of the dyadic effect, which are useful\nin confirming the presence and relevance of rich-clubs in complex networks. In\naddition, certain computational experiences show the usefulness of the\nintroduced quantities with regard to different classes of real networks.\n", "title": "Rich-Club Ordering and the Dyadic Effect: Two Interrelated Phenomena" }
id: 12813 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Here we introduce the interstellar dust modelling framework THEMIS (The\nHeterogeneous dust Evolution Model for Interstellar Solids), which takes a\nglobal view of dust and its evolution in response to the local conditions in\ninterstellar media. This approach is built upon a core model that was developed\nto explain the dust extinction and emission in the diffuse interstellar medium.\nThe model was then further developed to self-consistently include the effects\nof dust evolution in the transition to denser regions. The THEMIS approach is\nunder continuous development and currently we are extending the framework to\nexplore the implications of dust evolution in HII regions and the\nphoton-dominated regions associated with star formation. We provide links to\nthe THEMIS, DustEM and DustPedia websites where more information about the\nmodel, its input data and applications can be found.\n", "title": "The global dust modelling framework THEMIS (The Heterogeneous dust Evolution Model for Interstellar Solids)" }
id: 12814 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Model-based clustering is a popular approach for clustering multivariate data\nwhich has seen applications in numerous fields. Nowadays, high-dimensional data\nare more and more common and the model-based clustering approach has adapted to\ndeal with the increasing dimensionality. In particular, the development of\nvariable selection techniques has received a lot of attention and research\neffort in recent years. Even for small size problems, variable selection has\nbeen advocated to facilitate the interpretation of the clustering results. This\nreview provides a summary of the methods developed for variable selection in\nmodel-based clustering. Existing R packages implementing the different methods\nare indicated and illustrated in application to two data analysis examples.\n", "title": "Variable Selection Methods for Model-based Clustering" }
id: 12815 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Over the past decade asteroseismology has become a powerful method to\nsystematically characterize host stars and dynamical architectures of exoplanet\nsystems. In this contribution I review current key synergies between\nasteroseismology and exoplanetary science such as the precise determination of\nplanet radii and ages, the measurement of orbital eccentricities, stellar\nobliquities and their impact on hot Jupiter formation theories, and the\nimportance of asteroseismology on spectroscopic analyses of exoplanet hosts. I\nalso give an outlook on future synergies such as the characterization of\nsub-Neptune-size planets orbiting solar-type stars, the study of planet\npopulations orbiting evolved stars, and the determination of ages of\nintermediate-mass stars hosting directly imaged planets.\n", "title": "Synergies between Asteroseismology and Exoplanetary Science" }
id: 12816 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Deep optical photometric data on the NGC 7538 region were collected and\ncombined with archival data sets from $Chandra$, 2MASS and {\\it Spitzer}\nsurveys in order to generate a new catalog of young stellar objects (YSOs)\nincluding those not showing IR excess emission. This new catalog is complete\ndown to 0.8 M$_\\odot$. The nature of the YSOs associated with the NGC 7538\nregion and their spatial distribution are used to study the star formation\nprocess and the resultant mass function (MF) in the region. Out of the 419\nYSOs, $\\sim$91\\% have ages between 0.1 to 2.5 Myr and $\\sim$86\\% have masses\nbetween 0.5 to 3.5 M$_\\odot$, as derived by spectral energy distribution\nfitting analysis. Around 24\\%, 62\\% and 2\\% of these YSOs are classified to be\nthe Class I, Class II and Class III sources, respectively. The X-ray activity\nin the Class I, Class II and Class III objects is not significantly different\nfrom each other. This result implies that the enhanced X-ray surface flux due\nto the increase in the rotation rate may be compensated by the decrease in the\nstellar surface area during the pre-main sequence evolution. Our analysis shows\nthat the O3V type high mass star `IRS 6' might have triggered the formation of\nyoung low mass stars up to a radial distance of 3 pc. The MF shows a turn-off\nat around 1.5 M$_\\odot$ and the value of its slope `$\\Gamma$' in the mass range\n$1.5 <$M/M$_\\odot < 6$ comes out to be $-1.76\\pm0.24$, which is steeper than\nthe Salpeter value.\n", "title": "The stellar contents and star formation in the NGC 7538 region" }
id: 12817 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Construction of ambiguity set in robust optimization relies on the choice of\ndivergences between probability distributions. In distribution learning,\nchoosing appropriate probability distributions based on observed data is\ncritical for approximating the true distribution. To improve the performance of\nmachine learning models, there has recently been interest in designing\nobjective functions based on Lp-Wasserstein distance rather than the classical\nKullback-Leibler (KL) divergence. In this paper, we derive concentration and\nasymptotic results using Bregman divergence. We propose a novel asymmetric\nstatistical divergence called Wasserstein-Bregman divergence as a\ngeneralization of L2-Wasserstein distance. We discuss how these results can be\napplied to the construction of ambiguity set in robust optimization.\n", "title": "Ambiguity set and learning via Bregman and Wasserstein" }
id: 12818 | status: Validated | multi_label: true | annotation: [ "Computer Science", "Statistics" ] | (all other fields null)

{ "abstract": " Massive multiuser (MU) multiple-input multiple-output (MIMO) will be a core\ntechnology in fifth-generation (5G) wireless systems as it offers significant\nimprovements in spectral efficiency compared to existing multi-antenna\ntechnologies. The presence of hundreds of antenna elements at the base station\n(BS), however, results in excessively high hardware costs and power\nconsumption, and requires high interconnect throughput between the\nbaseband-processing unit and the radio unit. Massive MU-MIMO that uses\nlow-resolution analog-to-digital and digital-to-analog converters (DACs) has\nthe potential to address all these issues. In this paper, we focus on downlink\nprecoding for massive MU-MIMO systems with 1-bit DACs at the BS. The objective\nis to design precoders that simultaneously mitigate multi-user interference\n(MUI) and quantization artifacts. We propose two nonlinear 1-bit precoding\nalgorithms and corresponding very-large scale integration (VLSI) designs. Our\nalgorithms rely on biconvex relaxation, which enables the design of efficient\n1-bit precoding algorithms that achieve superior error-rate performance\ncompared to that of linear precoding algorithms followed by quantization. To\nshowcase the efficacy of our algorithms, we design VLSI architectures that\nenable efficient 1-bit precoding for massive MU-MIMO systems in which hundreds\nof antennas serve tens of user equipments. We present corresponding\nfield-programmable gate array (FPGA) implementations to demonstrate that 1-bit\nprecoding enables reliable and high-rate downlink data transmission in\npractical systems.\n", "title": "1-bit Massive MU-MIMO Precoding in VLSI" }
id: 12819 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " The formation and dynamics of free-surface structures, such as steps or\nterraces and their interplay with the phase separation in the bulk are key\nfeatures of diblock copolymer films. We present a phase-field model with an\nobstacle potential which follows naturally from derivations of the\nOhta-Kawasaki energy functional via self-consistent field theory. The free\nsurface of the film is incorporated into the phase-field model by including a\nthird phase for the void. The resulting model and its sharp interface limit are\nshown to capture the energetics of films with steps in two dimensions. For this\nmodel, we then develop a numerical approach that is capable of resolving the\nlong-time complex free-surface structures that arise in diblock copolymer\nfilms.\n", "title": "Step evolution in two-dimensional diblock copolymer films" }
id: 12820 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " In this note we show that Voevodsky's univalence axiom holds in the model of\ntype theory based on symmetric cubical sets. We will also discuss Swan's\nconstruction of the identity type in this variation of cubical sets. This\nproves that we have a model of type theory supporting dependent products,\ndependent sums, univalent universes, and identity types with the usual\njudgmental equality, and this model is formulated in a constructive metatheory.\n", "title": "The univalence axiom in cubical sets" }
id: 12821 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Demand response (DR) programs have emerged as a potential key enabling\ningredient in the context of smart grid (SG). Nevertheless, the rising concerns\nover privacy issues raised by customers subscribed to these programs constitute\na major threat towards their effective deployment and utilization. This has\ndriven extensive research to resolve the hindrance confronted, resulting in a\nnumber of methods being proposed for preserving customers' privacy. While these\nmethods provide stringent privacy guarantees, only limited attention has been\npaid to their computational efficiency and performance quality. Under the\nparadigm of differential privacy, this paper initiates a systematic empirical\nstudy on quantifying the trade-off between privacy and optimality in\ncentralized DR systems for maximizing cumulative customer utility. Aiming to\nelucidate the factors governing this trade-off, we analyze the cost of privacy\nin terms of the effect incurred on the objective value of the DR optimization\nproblem when applying the employed privacy-preserving strategy based on Laplace\nmechanism. The theoretical results derived from the analysis are complemented\nwith empirical findings, corroborated extensively by simulations on a 4-bus MG\nsystem with up to thousands of customers. By evaluating the impact of privacy,\nthis pilot study serves DR practitioners when considering the social and\neconomic implications of deploying privacy-preserving DR programs in practice.\nMoreover, it stimulates further research on exploring more efficient approaches\nwith bounded performance guarantees for optimizing energy procurement of MGs\nwithout infringing the privacy of customers on demand side.\n", "title": "Assessing the Privacy Cost in Centralized Event-Based Demand Response for Microgrids" }
id: 12822 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Our decision-making processes are becoming more data driven, based on data\nfrom multiple sources, of different types, processed by a variety of\ntechnologies. As technology becomes more relevant for decision processes, the\nmore likely they are to be subjects of attacks aimed at disrupting their\nexecution or changing their outcome. With the increasing complexity and\ndependencies on technical components, such attempts grow more sophisticated and\ntheir impact will be more severe. This is especially important in scenarios\nwith shared goals, which had to be previously agreed to, or decisions with\nbroad social impact. We need to think about our decisions-making and underlying\ndata analysis processes in a systemic way to correctly evaluate benefits and\nrisks of specific solutions and to design them to be resistant to attacks. To\nreach these goals, we can apply experiences from threat modeling analysis used\nin software security. We will need to adapt these practices to new types of\nthreats, protecting different assets and operating in socio-technical systems.\nWith these changes, threat modeling can become a foundation for implementing\ndetailed technical, organizational or legal mitigations and making our\ndecisions more reliable and trustworthy.\n", "title": "Threat Modeling Data Analysis in Socio-technical Systems" }
id: 12823 | status: Validated | multi_label: true | annotation: [ "Computer Science" ] | (all other fields null)

{ "abstract": " We consider a half-soliton stationary state of the nonlinear Schrodinger\nequation with the power nonlinearity on a star graph consisting of N edges and\na single vertex. For the subcritical power nonlinearity, the half-soliton state\nis a degenerate critical point of the action functional under the mass\nconstraint such that the second variation is nonnegative. By using normal\nforms, we prove that the degenerate critical point is a nonlinear saddle point,\nfor which the small perturbations to the half-soliton state grow slowly in time\nresulting in the nonlinear instability of the half-soliton state. The result\nholds for any $N \\geq 3$ and arbitrary subcritical power nonlinearity. It gives\na precise dynamical characterization of the previous result of Adami {\\em et\nal.}, where the half-soliton state was shown to be a saddle point of the action\nfunctional under the mass constraint for $N = 3$ and for cubic nonlinearity.\n", "title": "Nonlinear Instability of Half-Solitons on Star Graphs" }
id: 12824 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " The field of algorithmic fairness has highlighted ethical questions which may\nnot have purely technical answers. For example, different algorithmic fairness\nconstraints are often impossible to satisfy simultaneously, and choosing\nbetween them requires value judgments about which people may disagree.\nAchieving consensus on algorithmic fairness will be difficult unless we\nunderstand why people disagree in the first place. Here we use a series of\nsurveys to investigate how two factors affect disagreement: demographics and\ndiscussion. First, we study whether disagreement on algorithmic fairness\nquestions is caused partially by differences in demographic backgrounds. This\nis a question of interest because computer science is demographically\nnon-representative. If beliefs about algorithmic fairness correlate with\ndemographics, and algorithm designers are demographically non-representative,\ndecisions made about algorithmic fairness may not reflect the will of the\npopulation as a whole. We show, using surveys of three separate populations,\nthat there are gender differences in beliefs about algorithmic fairness. For\nexample, women are less likely to favor including gender as a feature in an\nalgorithm which recommends courses to students if doing so would make female\nstudents less likely to be recommended science courses. Second, we investigate\nwhether people's views on algorithmic fairness can be changed by discussion and\nshow, using longitudinal surveys of students in two computer science classes,\nthat they can.\n", "title": "Demographics and discussion influence views on algorithmic fairness" }
id: 12825 | status: Validated | multi_label: true | annotation: [ "Computer Science" ] | (all other fields null)

{ "abstract": " We revisit the present status of the stiffness of the supranuclear equations\nof state, particularly the solutions that increase the stiffness in the\npresence of hyperons, the putative transition to a quark matter phase and the\nrobustness of massive compact star observations.\n", "title": "The stifness of the supranuclear equation of state (once again)" }
id: 12826 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " In this paper, we use dynamical systems to analyze stability of\ndesynchronization algorithms at equilibrium. We start by illustrating the\nequilibrium of a dynamic systems and formalizing force components and time\nphases. Then, we use Linear Approximation to obtain Jaconian (J) matrixes which\nare used to find the eigenvalues. Next, we employ the Hirst and Macey theorem\nand Gershgorins theorem to find the bounds of those eigenvalues. Finally, if\nthe number of nodes (n) is within such bounds, the systems are stable at\nequilibrium. (This paper is the last part of the series Stable\nDesynchronization for Wireless Sensor Networks - (I) Concepts and Algorithms\n(II) Performance Evaluation (III) Stability Analysis)\n", "title": "Stable Desynchronization for Wireless Sensor Networks: (III) Stability Analysis" }
id: 12827 | status: Validated | multi_label: true | annotation: [ "Computer Science" ] | (all other fields null)

{ "abstract": " Adiabatic quantum computing has evolved in recent years from a theoretical\nfield into an immensely practical area, a change partially sparked by D-Wave\nSystem's quantum annealing hardware. These multimillion-dollar quantum\nannealers offer the potential to solve optimization problems millions of times\nfaster than classical heuristics, prompting researchers at Google, NASA and\nLockheed Martin to study how these computers can be applied to complex\nreal-world problems such as NASA rover missions. Unfortunately, compiling\n(embedding) an optimization problem into the annealing hardware is itself a\ndifficult optimization problem and a major bottleneck currently preventing\nwidespread adoption. Additionally, while finding a single embedding is\ndifficult, no generalized method is known for tuning embeddings to use minimal\nhardware resources. To address these barriers, we introduce a graph-theoretic\nframework for developing structured embedding algorithms. Using this framework,\nwe introduce a biclique virtual hardware layer to provide a simplified\ninterface to the physical hardware. Additionally, we exploit bipartite\nstructure in quantum programs using odd cycle transversal (OCT) decompositions.\nBy coupling an OCT-based embedding algorithm with new, generalized reduction\nmethods, we develop a new baseline for embedding a wide range of optimization\nproblems into fault-free D-Wave annealing hardware. To encourage the reuse and\nextension of these techniques, we provide an implementation of the framework\nand embedding algorithms.\n", "title": "Optimizing Adiabatic Quantum Program Compilation using a Graph-Theoretic Framework" }
id: 12828 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " This paper analyzes pedestrians' behavioral patterns in the pedestrianized\nshopping environment in the historical center of Barcelona, Spain. We employ a\nBluetooth detection technique to capture a large-scale dataset of pedestrians'\nbehavior over a one-month period, including during a key sales period. We\nfocused on comparing particular behaviors before, during, and after the\ndiscount sales by analyzing this large-scale dataset, which is different but\ncomplementary to the conventionally used small-scale samples. Our results\nuncover pedestrians actively exploring a wider area of the district during a\ndiscount period compared to weekdays, giving rise to strong underlying mobility\npatterns.\n", "title": "Analysis of pedestrian behaviors through non-invasive Bluetooth monitoring" }
id: 12829 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " We define Open Gromov-Witten invariants counting psudoholomorphic curves with\nboundary of a fixed Euler characteristic. There is not obstruction in the\nconstruction of the invariant.\n", "title": "Open Gromov-Witten theory without Obstruction" }
id: 12830 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " This article determines and characterizes the minimal number of actuators\nneeded to ensure structural controllability of a linear system under structural\nalterations that can severe the connection between any two states. We assume\nthat initially the system is structurally controllable with respect to a given\nset of controls, and propose an efficient system-synthesis mechanism to find\nthe minimal number of additional actuators required for resilience of the\nsystem w.r.t such structural changes. The effectiveness of this approach is\ndemonstrated by using standard IEEE power networks.\n", "title": "Resilience of Complex Networks" }
id: 12831 | status: Validated | multi_label: true | annotation: [ "Computer Science", "Mathematics" ] | (all other fields null)

{ "abstract": " We present a general framework for training deep neural networks without\nbackpropagation. This substantially decreases training time and also allows for\nconstruction of deep networks with many sorts of learners, including networks\nwhose layers are defined by functions that are not easily differentiated, like\ndecision trees. The main idea is that layers can be trained one at a time, and\nonce they are trained, the input data are mapped forward through the layer to\ncreate a new learning problem. The process is repeated, transforming the data\nthrough multiple layers, one at a time, rendering a new data set, which is\nexpected to be better behaved, and on which a final output layer can achieve\ngood performance. We call this forward thinking and demonstrate a proof of\nconcept by achieving state-of-the-art accuracy on the MNIST dataset for\nconvolutional neural networks. We also provide a general mathematical\nformulation of forward thinking that allows for other types of deep learning\nproblems to be considered.\n", "title": "Forward Thinking: Building and Training Neural Networks One Layer at a Time" }
id: 12832 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Multiplayer online battle arena has become a popular game genre. It also\nreceived increasing attention from our research community because they provide\na wealth of information about human interactions and behaviors. A major problem\nis extracting meaningful patterns of activity from this type of data, in a way\nthat is also easy to interpret. Here, we propose to exploit tensor\ndecomposition techniques, and in particular Non-negative Tensor Factorization,\nto discover hidden correlated behavioral patterns of play in a popular game:\nLeague of Legends. We first collect the entire gaming history of a group of\nabout one thousand players, totaling roughly $100K$ matches. By applying our\nmethodological framework, we then separate players into groups that exhibit\nsimilar features and playing strategies, as well as similar temporal\ntrajectories, i.e., behavioral progressions over the course of their gaming\nhistory: this will allow us to investigate how players learn and improve their\nskills.\n", "title": "Non-negative Tensor Factorization for Human Behavioral Pattern Mining in Online Games" }
id: 12833 | status: Validated | multi_label: true | annotation: [ "Computer Science", "Physics" ] | (all other fields null)

{ "abstract": " A novel method for robust estimation, called Graph-Cut RANSAC, GC-RANSAC in\nshort, is introduced. To separate inliers and outliers, it runs the graph-cut\nalgorithm in the local optimization (LO) step which is applied when a\nso-far-the-best model is found. The proposed LO step is conceptually simple,\neasy to implement, globally optimal and efficient. GC-RANSAC is shown\nexperimentally, both on synthesized tests and real image pairs, to be more\ngeometrically accurate than state-of-the-art methods on a range of problems,\ne.g. line fitting, homography, affine transformation, fundamental and essential\nmatrix estimation. It runs in real-time for many problems at a speed\napproximately equal to that of the less accurate alternatives (in milliseconds\non standard CPU).\n", "title": "Graph-Cut RANSAC" }
id: 12834 | status: Validated | multi_label: true | annotation: [ "Computer Science" ] | (all other fields null)

{ "abstract": " Multivariate singular spectrum analysis (M-SSA), with a varimax rotation of\neigenvectors, was recently proposed to provide detailed information about phase\nsynchronization in networks of nonlinear oscillators without any a priori need\nfor phase estimation. The discriminatory power of M-SSA is often enhanced by\nusing only the time series of the variable that provides the best observability\nof the node dynamics. In practice, however, diverse factors could prevent one\nto have access to this variable in some nodes and other variables should be\nused, resulting in a mixed set of variables. In the present work, the impact of\nthis mixed measurement approach on the M-SSA is numerically investigated in\nnetworks of Rössler systems and cord oscillators. The results are threefold.\nFirst, a node measured by a poor variable, in terms of observability, becomes\nvirtually invisible to the technique. Second, a side effect of using a poor\nvariable is that the characterization of phase synchronization clustering of\nthe {\\it other}\\, nodes is hindered by a small amount. This suggests that,\ngiven a network, synchronization analysis with M-SSA could be more reliable by\nnot measuring those nodes that are accessible only through poor variables.\nThird, global phase synchronization could be detected even using only poor\nvariables, given enough of them are measured. These insights could be useful in\ndefining measurement strategies for both experimental design and real world\napplications for use with M-SSA.\n", "title": "Mixed measurements and the detection of phase synchronization in networks" }
id: 12835 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " In building intelligent transportation systems such as taxi or rideshare\nservices, accurate prediction of travel time and distance is crucial for\ncustomer experience and resource management. Using the NYC taxi dataset, which\ncontains taxi trips data collected from GPS-enabled taxis [23], this paper\ninvestigates the use of deep neural networks to jointly predict taxi trip time\nand distance. We propose a model, called ST-NN (Spatio-Temporal Neural\nNetwork), which first predicts the travel distance between an origin and a\ndestination GPS coordinate, then combines this prediction with the time of day\nto predict the travel time. The beauty of ST-NN is that it uses only the raw\ntrips data without requiring further feature engineering and provides a joint\nestimate of travel time and distance. We compare the performance of ST-NN to\nthat of state-of-the-art travel time estimation methods, and we observe that\nthe proposed approach generalizes better than state-of-the-art methods. We show\nthat ST-NN approach significantly reduces the mean absolute error for both\npredicted travel time and distance, about 17% for travel time prediction. We\nalso observe that the proposed approach is more robust to outliers present in\nthe dataset by testing the performance of ST-NN on the datasets with and\nwithout outliers.\n", "title": "A Unified Neural Network Approach for Estimating Travel Time and Distance for a Taxi Trip" }
id: 12836 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " A lion and a man move continuously in a space $X$. The aim of the lion is to\ncapture his prey while the man wants to escape forever. Which of them has a\nstrategy? This question has been studied for different metric domains. In this\narticle we consider the case of general topological spaces.\n", "title": "Lion and man in non-metric spaces" }
id: 12837 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Let $ H $ be a compact subgroup of a locally compact group $G$. In this paper\nwe define a convolution on $ M(G/H) $, the space of all complex bounded Radon\nmeasures on the homogeneous space G/H. Then we prove that the measure space $\nM(G/H, *) $ is a non-unital Banach algebra that possesses an approximate\nidentity. Finally, it is shown that the Banach algebra $ M(G/H, *) $ is not\ninvolutive and also $ L^1(G/H, *) $ is a two-sided ideal of it.\n", "title": "Banach Algebra of Complex Bounded Radon Measures on Homogeneous Space" }
id: 12838 | status: Validated | multi_label: true | annotation: [ "Mathematics" ] | (all other fields null)

{ "abstract": " In the past, Acoustic Scene Classification systems have been based on hand\ncrafting audio features that are input to a classifier. Nowadays, the common\ntrend is to adopt data driven techniques, e.g., deep learning, where audio\nrepresentations are learned from data. In this paper, we propose a system that\nconsists of a simple fusion of two methods of the aforementioned types: a deep\nlearning approach where log-scaled mel-spectrograms are input to a\nconvolutional neural network, and a feature engineering approach, where a\ncollection of hand-crafted features is input to a gradient boosting machine. We\nfirst show that both methods provide complementary information to some extent.\nThen, we use a simple late fusion strategy to combine both methods. We report\nclassification accuracy of each method individually and the combined system on\nthe TUT Acoustic Scenes 2017 dataset. The proposed fused system outperforms\neach of the individual methods and attains a classification accuracy of 72.8%\non the evaluation set, improving the baseline system by 11.8%.\n", "title": "A Simple Fusion of Deep and Shallow Learning for Acoustic Scene Classification" }
id: 12839 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " We propose an online convex optimization algorithm (RescaledExp) that\nachieves optimal regret in the unconstrained setting without prior knowledge of\nany bounds on the loss functions. We prove a lower bound showing an exponential\nseparation between the regret of existing algorithms that require a known bound\non the loss functions and any algorithm that does not require such knowledge.\nRescaledExp matches this lower bound asymptotically in the number of\niterations. RescaledExp is naturally hyperparameter-free and we demonstrate\nempirically that it matches prior optimization algorithms that require\nhyperparameter optimization.\n", "title": "Online Convex Optimization with Unconstrained Domains and Losses" }
id: 12840 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " This paper presents a feature encoding method of complex 3D objects for\nhigh-level semantic features. Recent approaches to object recognition methods\nbecome important for semantic simultaneous localization and mapping (SLAM).\nHowever, there is a lack of consideration of the probabilistic observation\nmodel for 3D objects, as the shape of a 3D object basically follows a complex\nprobability distribution. Furthermore, since the mobile robot equipped with a\nrange sensor observes only a single view, much information of the object shape\nis discarded. These limitations are the major obstacles to semantic SLAM and\nview-independent loop closure using 3D object shapes as features. In order to\nenable the numerical analysis for the Bayesian inference, we approximate the\ntrue observation model of 3D objects to tractable distributions. Since the\nobservation likelihood can be obtained from the generative model, we formulate\nthe true generative model for 3D object with the Bayesian networks. To capture\nthese complex distributions, we apply a variational auto-encoder. To analyze\nthe approximated distributions and encoded features, we perform classification\nwith maximum likelihood estimation and shape retrieval.\n", "title": "A Variational Feature Encoding Method of 3D Object for Probabilistic Semantic SLAM" }
id: 12841 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Molecular simulations produce very high-dimensional data-sets with millions\nof data points. As analysis methods are often unable to cope with so many\ndimensions, it is common to use dimensionality reduction and clustering methods\nto reach a reduced representation of the data. Yet these methods often fail to\ncapture the most important features necessary for the construction of a Markov\nmodel. Here we demonstrate the results of various dimensionality reduction\nmethods on two simulation data-sets, one of protein folding and another of\nprotein-ligand binding. The methods tested include a k-means clustering\nvariant, a non-linear auto encoder, principal component analysis and tICA. The\ndimension-reduced data is then used to estimate the implied timescales of the\nslowest process by a Markov state model analysis to assess the quality of the\nprojection. The projected dimensions learned from the data are visualized to\ndemonstrate which conformations the various methods choose to represent the\nmolecular process.\n", "title": "Dimensionality reduction methods for molecular simulations" }
id: 12842 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " While a large body of empirical results show that temporally-extended actions\nand options may significantly affect the learning performance of an agent, the\ntheoretical understanding of how and when options can be beneficial in online\nreinforcement learning is relatively limited. In this paper, we derive an upper\nand lower bound on the regret of a variant of UCRL using options. While we\nfirst analyze the algorithm in the general case of semi-Markov decision\nprocesses (SMDPs), we show how these results can be translated to the specific\ncase of MDPs with options and we illustrate simple scenarios in which the\nregret of learning with options can be \\textit{provably} much smaller than the\nregret suffered when learning with primitive actions.\n", "title": "Exploration--Exploitation in MDPs with Options" }
id: 12843 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Structural, magnetic and electrical-transport properties of {\\alpha}-LiFeO2,\ncrystallizing in the rock salt structure with random distribution of Li and Fe\nions, have been studied by synchrotron X-ray diffraction, 57Fe Mössbauer\nspectroscopy and electrical resistance measurements at pressures up to 100 GPa\nusing diamond anvil cells. It was found that the crystal structure is stable at\nleast to 82 GPa, though a significant change in compressibility has been\nobserved above 50 GPa. The changes in the structural properties are found to be\non a par with a sluggish Fe3+ high- to low-spin (HS-LS) transition (S=5/2 to\nS=1/2) starting at 50 GPa and not completed even at ~100 GPa. The HS-LS\ntransition is accompanied by an appreciable resistance decrease remaining a\nsemiconductor up to 115 GPa and is not expected to be metallic even at about\n200 GPa. The observed feature of the pressure-induced HS-LS transition is not\nan ordinary behavior of ferric oxides at high pressures. The effect of Fe3+\nnearest and next nearest neighbors on the features of the spin crossover is\ndiscussed.\n", "title": "Pressure induced spin crossover in disordered α-LiFeO2" }
id: 12844 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " We consider the problem of optimal transportation with quadratic cost between\na empirical measure and a general target probability on R d , with d $\\ge$ 1.\nWe provide new results on the uniqueness and stability of the associated\noptimal transportation potentials , namely, the minimizers in the dual\nformulation of the optimal transportation problem. As a consequence, we show\nthat a CLT holds for the empirical transportation cost under mild moment and\nsmoothness requirements. The limiting distributions are Gaussian and admit a\nsimple description in terms of the optimal transportation potentials.\n", "title": "Central Limit Theorem for empirical transportation cost in general dimension" }
id: 12845 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Let $\\mathcal{L}$ be a Schrödinger operator of the form $\\mathcal{L} =\n-\\Delta+V$ acting on $L^2(\\mathbb R^n)$ where the nonnegative potential $V$\nbelongs to the reverse Hölder class $B_q$ for some $q\\geq n.$ Let\n$L^{p,\\lambda}(\\mathbb{R}^{n})$, $0\\le \\lambda<n$ denote the Morrey space on\n$\\mathbb{R}^{n}$. In this paper, we will show that a function $f\\in\nL^{2,\\lambda}(\\mathbb{R}^{n})$ is the trace of the solution of ${\\mathbb\nL}u=u_{t}+{\\mathcal{L}}u=0, u(x,0)= f(x),$ where $u$ satisfies a Carleson-type\ncondition \\begin{eqnarray*} \\sup_{x_B, r_B}\nr_B^{-\\lambda}\\int_0^{r_B^2}\\int_{B(x_B, r_B)} |\\nabla u(x,t)|^2 {dx dt} \\leq C\n<\\infty. \\end{eqnarray*} Conversely, this Carleson-type condition characterizes\nall the ${\\mathbb L}$-carolic functions whose traces belong to the Morrey space\n$L^{2,\\lambda}(\\mathbb{R}^{n})$ for all $0\\le \\lambda<n$. This result extends\nthe analogous characterization founded by Fabes and Neri for the classical BMO\nspace of John and Nirenberg.\n", "title": "Characterization of temperatures associated to Schrödinger operators with initial data in Morrey spaces" }
id: 12846 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " The importance of microscopic details on cooperation level is an intensively\nstudied aspect of evolutionary game theory. Interestingly, these details become\ncrucial on heterogeneous populations where individuals may possess diverse\ntraits. By introducing a coevolutionary model in which not only strategies but\nalso individual dynamical features may evolve we revealed that the formerly\nestablished conclusion is not necessarily true when different updating rules\nare on stage. In particular, we apply two strategy updating rules, imitation\nand Death-Birth rule, which allow local selection in a spatial system. Our\nobservation highlights that the microscopic feature of dynamics, like the level\nof learning activity, could be a fundamental factor even if all players share\nthe same trait uniformly.\n", "title": "Dynamic-sensitive cooperation in the presence of multiple strategy updating rules" }
id: 12847 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Maximum A posteriori Probability (MAP) inference in graphical models amounts\nto solving a graph-structured combinatorial optimization problem. Popular\ninference algorithms such as belief propagation (BP) and generalized belief\npropagation (GBP) are intimately related to linear programming (LP) relaxation\nwithin the Sherali-Adams hierarchy. Despite the popularity of these algorithms,\nit is well understood that the Sum-of-Squares (SOS) hierarchy based on\nsemidefinite programming (SDP) can provide superior guarantees. Unfortunately,\nSOS relaxations for a graph with $n$ vertices require solving an SDP with\n$n^{\\Theta(d)}$ variables where $d$ is the degree in the hierarchy. In\npractice, for $d\\ge 4$, this approach does not scale beyond a few tens of\nvariables. In this paper, we propose binary SDP relaxations for MAP inference\nusing the SOS hierarchy with two innovations focused on computational\nefficiency. Firstly, in analogy to BP and its variants, we only introduce\ndecision variables corresponding to contiguous regions in the graphical model.\nSecondly, we solve the resulting SDP using a non-convex Burer-Monteiro style\nmethod, and develop a sequential rounding procedure. We demonstrate that the\nresulting algorithm can solve problems with tens of thousands of variables\nwithin minutes, and outperforms BP and GBP on practical problems such as image\ndenoising and Ising spin glasses. Finally, for specific graph types, we\nestablish a sufficient condition for the tightness of the proposed partial SOS\nrelaxation.\n", "title": "Inference in Graphical Models via Semidefinite Programming Hierarchies" }
id: 12848 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " One of the challenges in information retrieval is providing accurate answers\nto a user's question often expressed as uncertainty words. Most answers are\nbased on a Syntactic approach rather than a Semantic analysis of the query. In\nthis paper, our objective is to present a hybrid approach for a Semantic\nquestion answering retrieval system using Ontology Similarity and Fuzzy logic.\nWe use a Fuzzy Co-clustering algorithm to retrieve the collection of documents\nbased on Ontology Similarity. The Fuzzy Scale uses Fuzzy type-1 for documents\nand Fuzzy type-2 for words to prioritize answers. The objective of this work is\nto provide retrieval system with more accurate answers than non-fuzzy Semantic\nOntology approach.\n", "title": "A Hybrid Approach using Ontology Similarity and Fuzzy Logic for Semantic Question Answering" }
id: 12849 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " Brain tumour segmentation plays a key role in computer-assisted surgery. Deep\nneural networks have increased the accuracy of automatic segmentation\nsignificantly, however these models tend to generalise poorly to different\nimaging modalities than those for which they have been designed, thereby\nlimiting their applications. For example, a network architecture initially\ndesigned for brain parcellation of monomodal T1 MRI can not be easily\ntranslated into an efficient tumour segmentation network that jointly utilises\nT1, T1c, Flair and T2 MRI. To tackle this, we propose a novel scalable\nmultimodal deep learning architecture using new nested structures that\nexplicitly leverage deep features within or across modalities. This aims at\nmaking the early layers of the architecture structured and sparse so that the\nfinal architecture becomes scalable to the number of modalities. We evaluate\nthe scalable architecture for brain tumour segmentation and give evidence of\nits regularisation effect compared to the conventional concatenation approach.\n", "title": "Scalable multimodal convolutional networks for brain tumour segmentation" }
id: 12850 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " We study fermionic matrix product operator algebras and identify the\nassociated algebraic data. Using this algebraic data we construct fermionic\ntensor network states in two dimensions that have non-trivial\nsymmetry-protected or intrinsic topological order. The tensor network states\nallow us to relate physical properties of the topological phases to the\nunderlying algebraic data. We illustrate this by calculating defect properties\nand modular matrices of supercohomology phases. Our formalism also captures\nMajorana defects as we show explicitly for a class of $\\mathbb{Z}_2$\nsymmetry-protected and intrinsic topological phases. The tensor networks states\npresented here are well-suited for numerical applications and hence open up new\npossibilities for studying interacting fermionic topological phases.\n", "title": "Fermionic projected entangled-pair states and topological phases" }
id: 12851 | status: Default | multi_label: true | annotation: null | (all other fields null)

{ "abstract": " The velocity-space moments of the often troublesome nonlinear Landau\ncollision operator are expressed exactly in terms of multi-index\nHermite-polynomial moments of the distribution functions. The collisional\nmoments are shown to be generated by derivatives of two well-known functions,\nnamely the Rosenbluth-MacDonald-Judd-Trubnikov potentials for a Gaussian\ndistribution. The resulting formula has a nonlinear dependency on the relative\nmean flow of the colliding species normalised to the root-mean-square of the\ncorresponding thermal velocities, and a bilinear dependency on densities and\nhigher-order velocity moments of the distribution functions, with no\nrestriction on temperature, flow or mass ratio of the species. The result can\nbe applied to both the classic transport theory of plasmas, that relies on the\nChapman-Enskog method, as well as to deriving collisional fluid equations that\nfollow Grad's moment approach. As an illustrative example, we provide the\ncollisional ten-moment equations with exact conservation laws for momentum- and\nenergy-transfer rate.\n", "title": "Exact collisional moments for plasma fluid theories" }
null
null
null
null
true
null
12852
null
Default
null
null
null
{ "abstract": " Machine learning (ML) techniques such as (deep) artificial neural networks\n(DNN) are solving very successfully a plethora of tasks and provide new\npredictive models for complex physical, chemical, biological and social\nsystems. However, in most cases this comes with the disadvantage of acting as a\nblack box, rarely providing information about what made them arrive at a\nparticular prediction. This black box aspect of ML techniques can be\nproblematic especially in medical diagnoses, so far hampering a clinical\nacceptance. The present paper studies the uniqueness of individual gait\npatterns in clinical biomechanics using DNNs. By attributing portions of the\nmodel predictions back to the input variables (ground reaction forces and\nfull-body joint angles), the Layer-Wise Relevance Propagation (LRP) technique\nreliably demonstrates which variables at what time windows of the gait cycle\nare most relevant for the characterisation of gait patterns from a certain\nindividual. By measuring the timeresolved contribution of each input variable\nto the prediction of ML techniques such as DNNs, our method describes the first\ngeneral framework that enables to understand and interpret non-linear ML\nmethods in (biomechanical) gait analysis and thereby supplies a powerful tool\nfor analysis, diagnosis and treatment of human gait.\n", "title": "What is Unique in Individual Gait Patterns? Understanding and Interpreting Deep Learning in Gait Analysis" }
null
null
null
null
true
null
12853
null
Default
null
null
null
{ "abstract": " Electrophysiological recordings of spiking activity are limited to a small\nnumber of neurons. This spatial subsampling has hindered characterizing even\nmost basic properties of collective spiking in cortical networks. In\nparticular, two contradictory hypotheses prevailed for over a decade: the first\nproposed an asynchronous irregular state, the second a critical state. While\ndistinguishing them is straightforward in models, we show that in experiments\nclassical approaches fail to correctly infer network dynamics because of\nsubsampling. Deploying a novel, subsampling-invariant estimator, we find that\nin vivo dynamics do not comply with either hypothesis, but instead occupy a\nnarrow \"reverberating\" state consistently across multiple mammalian species and\ncortical areas. A generic model tuned to this reverberating state predicts\nsingle neuron, pairwise, and population properties. With these predictions we\nfirst validate the model and then deduce network properties that are\nchallenging to obtain experimentally, like the network timescale and strength\nof cortical input.\n", "title": "On the ground state of spiking network activity in mammalian cortex" }
null
null
null
null
true
null
12854
null
Default
null
null
null
{ "abstract": " Understanding information processing in the brain requires the ability to\ndetermine the functional connectivity between the different regions of the\nbrain. We present a method using transfer entropy to extract this flow of\ninformation between brain regions from spike-train data commonly obtained in\nneurological experiments. Transfer entropy is a statistical measure based in\ninformation theory that attempts to quantify the information flow from one\nprocess to another, and has been applied to find connectivity in simulated\nspike-train data. Due to statistical error in the estimator, inferring\nfunctional connectivity requires a method for determining significance in the\ntransfer entropy values. We discuss the issues with numerical estimation of\ntransfer entropy and resulting challenges in determining significance before\npresenting the trial-shuffle method as a viable option. The trial-shuffle\nmethod, for spike-train data that is split into multiple trials, determines\nsignificant transfer entropy values independently for each individual pair of\nneurons by comparing to a created baseline distribution using a rigorous\nstatistical test. This is in contrast to either globally comparing all neuron\ntransfer entropy values or comparing pairwise values to a single baseline\nvalue.\nIn establishing the viability of this method by comparison to several\nalternative approaches in the literature, we find evidence that preserving the\ninter-spike-interval timing is important.\nWe then use the trial-shuffle method to investigate information flow within a\nmodel network as we vary model parameters. This includes investigating the\nglobal flow of information within a connectivity network divided into two\nwell-connected subnetworks, going beyond local transfer of information between\npairs of neurons.\n", "title": "Inferring Information Flow in Spike-train Data Sets using a Trial-Shuffle Method" }
null
null
null
null
true
null
12855
null
Default
null
null
null
{ "abstract": " How does one find dimensions in multivariate data that are reliably expressed\nacross repetitions? For example, in a brain imaging study one may want to\nidentify combinations of neural signals that are reliably expressed across\nmultiple trials or subjects. For a behavioral assessment with multiple ratings,\none may want to identify an aggregate score that is reliably reproduced across\nraters. Correlated Components Analysis (CorrCA) addresses this problem by\nidentifying components that are maximally correlated between repetitions (e.g.\ntrials, subjects, raters). Here we formalize this as the maximization of the\nratio of between-repetition to within-repetition covariance. We show that this\ncriterion maximizes repeat-reliability, defined as mean over variance across\nrepeats, and that it leads to CorrCA or to multi-set Canonical Correlation\nAnalysis, depending on the constraints. Surprisingly, we also find that CorrCA\nis equivalent to Linear Discriminant Analysis for zero-mean signals, which\nprovides an unexpected link between classic concepts of multivariate analysis.\nWe present an exact parametric test of statistical significance based on the\nF-statistic for normally distributed independent samples, and present and\nvalidate shuffle statistics for the case of dependent samples. Regularization\nand extension to non-linear mappings using kernels are also presented. The\nalgorithms are demonstrated on a series of data analysis applications, and we\nprovide all code and data required to reproduce the results.\n", "title": "Correlated Components Analysis - Extracting Reliable Dimensions in Multivariate Data" }
null
null
null
null
true
null
12856
null
Default
null
null
null
{ "abstract": " Rate change calculations in the literature involve deterministic methods that\nmeasure the change in premium for a given policy. The definition of rate change\nas a statistical parameter is proposed to address the stochastic nature of the\npremium charged for a policy. It promotes the idea that rate change is a\nproperty of an asymptotic population to be estimated, not just a property to\nmeasure or monitor in the sample of observed policies that are written. Various\nmodels and techniques are given for estimating this stochastic rate change and\nquantifying the uncertainty in the estimates. The use of matched sampling is\nemphasized for rate change estimation, as it adjusts for changes in policy\ncharacteristics by directly searching for similar policies across policy years.\nThis avoids any of the assumptions and recipes that are required to re-rate\npolicies in years where they were not written, as is common with deterministic\nmethods. Such procedures can be subjective or implausible if the structure of\nrating algorithms change or there are complex and heterogeneous exposure bases\nand coverages. The methods discussed are applied to a motor premium database.\nThe application includes the use of a genetic algorithm with parallel\ncomputations to automatically optimize the matched sampling.\n", "title": "Defining and estimating stochastic rate change in a dynamic general insurance portfolio" }
null
null
null
null
true
null
12857
null
Default
null
null
null
{ "abstract": " We call a learner super-teachable if a teacher can trim down an iid training\nset while making the learner learn even better. We provide sharp super-teaching\nguarantees on two learners: the maximum likelihood estimator for the mean of a\nGaussian, and the large margin classifier in 1D. For general learners, we\nprovide a mixed-integer nonlinear programming-based algorithm to find a super\nteaching set. Empirical experiments show that our algorithm is able to find\ngood super-teaching sets for both regression and classification problems.\n", "title": "Teacher Improves Learning by Selecting a Training Subset" }
null
null
[ "Statistics" ]
null
true
null
12858
null
Validated
null
null
null
{ "abstract": " We review the physics of GRB production by relativistic jets that start\nhighly opaque near the central source and then expand to transparency. We\ndiscuss dissipative and radiative processes in the jet and how radiative\ntransfer shapes the observed nonthermal spectrum released at the photosphere. A\ncomparison of recent detailed models with observations gives estimates for\nimportant parameters of GRB jets, such as the Lorentz factor and magnetization.\nWe also discuss predictions for GRB polarization and neutrino emission.\n", "title": "Photospheric Emission of Gamma-Ray Bursts" }
null
null
null
null
true
null
12859
null
Default
null
null
null
{ "abstract": " Elegant is an accelerator physics and particle-beam dynamics code widely used\nfor modeling and design of a variety of high-energy particle accelerators and\naccelerator-based systems. In this paper we discuss a recently developed\nversion of the code that can take advantage of CUDA-enabled graphics processing\nunits (GPUs) to achieve significantly improved performance for a large class of\nsimulations that are important in practice. The GPU version is largely defined\nby a framework that simplifies implementations of the fundamental kernel types\nthat are used by Elegant: particle operations, reductions, particle loss,\nhistograms, array convolutions and random number generation. Accelerated\nperformance on the Titan Cray XK-7 supercomputer is approximately 6-10 times\nbetter with the GPU than all the CPU cores associated with the same node count.\nIn addition to performance, the maintainability of the GPU-accelerated version\nof the code was considered a key design objective. Accuracy with respect to the\nCPU implementation is also a core consideration. Four different methods are\nused to ensure that the accelerated code faithfully reproduces the CPU results.\n", "title": "GPU acceleration and performance of the particle-beam-dynamics code Elegant" }
null
null
null
null
true
null
12860
null
Default
null
null
null
{ "abstract": " Through the Higgs mechanism, the long-range Coulomb interaction eliminates\nthe low-energy Goldstone phase mode in superconductors and transfers spectral\nweight all the way up to the plasma frequency. Here we show that the Higgs\nmechanism breaks down for length scales shorter than the superconducting\ncoherence length while it stays intact, even at high energies, in the\nlong-wavelength limit. This effect is a consequence of the composite nature of\nthe Higgs field of superconductivity and the broken Lorentz invariance in a\nsolid. Most importantly, the breakdown of the Higgs mechanism inside the\nsuperconducting coherence volume is crucial to ensure the stability of the BCS\nmean-field theory in the weak-coupling limit. We also show that changes in the\ngap equation due to plasmon-induced fluctuations can lead to significant\ncorrections to the mean-field theory and reveal that changes in the\ndensity-fluctuation spectrum of a superconductor are not limited to the\nvicinity of the gap.\n", "title": "Short-distance breakdown of the Higgs mechanism and the robustness of the BCS theory for charged superconductors" }
null
null
null
null
true
null
12861
null
Default
null
null
null
{ "abstract": " We revisit the Massey's method for rating and ranking in sports and\ncontextualize it as a general centrality measure in network science.\n", "title": "The Massey's method for sport rating: a network science perspective" }
null
null
null
null
true
null
12862
null
Default
null
null
null
{ "abstract": " A key objective in two phase 2b AMP clinical trials of VRC01 is to evaluate\nwhether drug concentration over time, as estimated by non-linear mixed effects\npharmacokinetics (PK) models, is associated with HIV infection rate. We\nconducted a simulation study of marker sampling designs, and evaluated the\neffect of study adherence and sub-cohort sample size on PK model estimates in\nmultiple-dose studies. With m=120, even under low adherence (about half of\nstudy visits missing per participant), reasonably unbiased and consistent\nestimates of most fixed and random effect terms were obtained. Coarsened marker\nsampling schedules were also studied.\n", "title": "Pharmacokinetics Simulations for Studying Correlates of Prevention Efficacy of Passive HIV-1 Antibody Prophylaxis in the Antibody Mediated Prevention (AMP) Study" }
null
null
null
null
true
null
12863
null
Default
null
null
null
{ "abstract": " We report the results of a pilot program to use the Magellan/M2FS\nspectrograph to survey the galactic populations and internal kinematics of\ngalaxy clusters. For this initial study, we present spectroscopic measurements\nfor $223$ quiescent galaxies observed along the line of sight to the galaxy\ncluster Abell 267 ($z\\sim0.23$). We develop a Bayesian method for modeling the\nintegrated light from each galaxy as a simple stellar population, with free\nparameters that specify redshift ($v_\\mathrm{los}/c$) and characteristic age,\nmetallicity ($\\mathrm{[Fe/H]}$), alpha-abundance ($[\\alpha/\\mathrm{Fe}]$), and\ninternal velocity dispersion ($\\sigma_\\mathrm{int}$) for individual galaxies.\nParameter estimates derived from our 1.5-hour observation of A267 have median\nrandom errors of $\\sigma_{v_\\mathrm{los}}=20\\ \\mathrm{km\\ s^{-1}}$,\n$\\sigma_{\\mathrm{Age}}=1.2\\ \\mathrm{Gyr}$, $\\sigma_{\\mathrm{[Fe/H]}}=0.11\\\n\\mathrm{dex}$, $\\sigma_{[\\alpha/\\mathrm{Fe}]}=0.07\\ \\mathrm{dex}$, and\n$\\sigma_{\\sigma_\\mathrm{int}}=20\\ \\mathrm{km\\ s^{-1}}$. In a companion paper,\nwe use these results to model the structure and internal kinematics of A267.\n", "title": "Magellan/M2FS Spectroscopy of Galaxy Clusters: Stellar Population Model and Application to Abell 267" }
null
null
[ "Physics" ]
null
true
null
12864
null
Validated
null
null
null
{ "abstract": " Evolution of the parametric decay instability (PDI) of a circularly polarized\nAflvén wave in a turbulent low-beta plasma background is investigated using\n3D hybrid simulations. It is shown that the turbulence reduces the growth rate\nof PDI as compared to the linear theory predictions, but PDI can still exist.\nInterestingly, the damping rate of ion acoustic mode (as the product of PDI) is\nalso reduced as compared to the linear Vlasov predictions. Nonetheless,\nsignificant heating of ions in the direction parallel to the background\nmagnetic field is observed due to resonant Landau damping of the ion acoustic\nwaves. In low-beta turbulent plasmas, PDI can provide an important channel for\nenergy dissipation of low-frequency Alfvén waves at a scale much larger than\nthe ion kinetic scales, different from the traditional turbulence dissipation\nmodels.\n", "title": "Parametric Decay Instability and Dissipation of Low-frequency Alfvén Waves in Low-beta Turbulent Plasmas" }
null
null
[ "Physics" ]
null
true
null
12865
null
Validated
null
null
null
{ "abstract": " (shortened) We determine the transformation matrix T that maps multiple\nimages with resolved features onto one another and that is based on a\nTaylor-expanded lensing potential close to a point on the critical curve within\nour model-independent lens characterisation approach. From T, the same\ninformation about the critical curve at fold and cusp points is derived as\ndetermined by the quadrupole moment of the individual images as observables. In\naddition, we read off the relative parities between the images, so that the\nparity of all images is determined, when one is known. We compare all\nretrievable ratios of potential derivatives to the actual ones and to those\nobtained by using the quadrupole moment as observable for two and three image\nconfigurations generated by a galaxy-cluster scale singular isothermal ellipse.\nWe conclude that using the quadrupole moments as observables, the properties of\nthe critical curve at the cusp points are retrieved to higher accuracy, at the\nfold points to lower accuracy, and the ratios of second order potential\nderivatives to comparable accuracy. We show that the approach using ratios of\nconvergences and reduced shear is equivalent to ours close to the critical\ncurve but yields more accurate results and is more robust because it does not\nrequire a special coordinate system like the approach using potential\nderivatives. T is determined by mapping manually assigned reference points in\nthe images onto each other. If the assignment of reference points is subject to\nmeasurement uncertainties under noise, we find that the confidence intervals of\nthe lens parameters can be as large as the values, when the uncertainties are\nlarger than one pixel. Observed multiple images with resolved features are more\nextended than unresolved ones, so that higher order moments should be taken\ninto account to improve the reconstruction.\n", "title": "Generalised model-independent characterisation of strong gravitational lenses II: Transformation matrix between multiple images" }
null
null
null
null
true
null
12866
null
Default
null
null
null
{ "abstract": " In structured populations the spatial arrangement of cooperators and\ndefectors on the interaction graph together with the structure of the graph\nitself determines the game dynamics and particularly whether or not fixation of\ncooperation (or defection) is favored. For a single cooperator (and a single\ndefector) and a network described by a regular graph the question of fixation\ncan be addressed by a single parameter, the structure coefficient. As this\nquantity is generic for any regular graph, we may call it the generic structure\ncoefficient. For two and more cooperators (or several defectors) fixation\nproperties can also be assigned by structure coefficients. These structure\ncoefficients, however, depend on the arrangement of cooperators and defectors\nwhich we may interpret as a configuration of the game. Moreover, the\ncoefficients are specific for a given interaction network modeled as regular\ngraph, which is why we may call them specific structure coefficients. In this\npaper, we study how specific structure coefficients vary over interaction\ngraphs and link the distributions obtained over different graphs to spectral\nproperties of interaction networks. We also discuss implications for the\nbenefit-to-cost ratios of donation games.\n", "title": "Properties of interaction networks, structure coefficients, and benefit-to-cost ratios" }
null
null
null
null
true
null
12867
null
Default
null
null
null
{ "abstract": " Demand Side Management (DSM) strategies are of-ten associated with the\nobjectives of smoothing the load curve and reducing peak load. Although the\nfuture of demand side manage-ment is technically dependent on remote and\nautomatic control of residential loads, the end-users play a significant role\nby shifting the use of appliances to the off-peak hours when they are exposed\nto Day-ahead market price. This paper proposes an optimum so-lution to the\nproblem of scheduling of household demand side management in the presence of PV\ngeneration under a set of tech-nical constraints such as dynamic electricity\npricing and voltage deviation. The proposed solution is implemented based on\nthe Clonal Selection Algorithm (CSA). This solution is evaluated through a set\nof scenarios and simulation results show that the proposed approach results in\nthe reduction of electricity bills and the import of energy from the grid.\n", "title": "Optimized Household Demand Management with Local Solar PV Generation" }
null
null
null
null
true
null
12868
null
Default
null
null
null
{ "abstract": " In this paper we consider a network scenario in which agents can evaluate\neach other according to a score graph that models some physical or social\ninteraction. The goal is to design a distributed protocol, run by the agents,\nallowing them to learn their unknown state among a finite set of possible\nvalues. We propose a Bayesian framework in which scores and states are\nassociated to probabilistic events with unknown parameters and hyperparameters\nrespectively. We prove that each agent can learn its state by means of a local\nBayesian classifier and a (centralized) Maximum-Likelihood (ML) estimator of\nthe parameter-hyperparameter that combines plain ML and Empirical Bayes\napproaches. By using tools from graphical models, which allow us to gain\ninsight on conditional dependences of scores and states, we provide two relaxed\nprobabilistic models that ultimately lead to ML parameter-hyperparameter\nestimators amenable to distributed computation. In order to highlight the\nappropriateness of the proposed relaxations, we demonstrate the distributed\nestimators on a machine-to-machine testing set-up for anomaly detection and on\na social interaction set-up for user profiling.\n", "title": "Interaction-Based Distributed Learning in Cyber-Physical and Social Networks" }
null
null
null
null
true
null
12869
null
Default
null
null
null
{ "abstract": " Classification and clustering algorithms have been proved to be successful\nindividually in different contexts. Both of them have their own advantages and\nlimitations. For instance, although classification algorithms are more powerful\nthan clustering methods in predicting class labels of objects, they do not\nperform well when there is a lack of sufficient manually labeled reliable data.\nOn the other hand, although clustering algorithms do not produce label\ninformation for objects, they provide supplementary constraints (e.g., if two\nobjects are clustered together, it is more likely that the same label is\nassigned to both of them) that one can leverage for label prediction of a set\nof unknown objects. Therefore, systematic utilization of both these types of\nalgorithms together can lead to better prediction performance. In this paper,\nWe propose a novel algorithm, called EC3 that merges classification and\nclustering together in order to support both binary and multi-class\nclassification. EC3 is based on a principled combination of multiple\nclassification and multiple clustering methods using an optimization function.\nWe theoretically show the convexity and optimality of the problem and solve it\nby block coordinate descent method. We additionally propose iEC3, a variant of\nEC3 that handles imbalanced training data. We perform an extensive experimental\nanalysis by comparing EC3 and iEC3 with 14 baseline methods (7 well-known\nstandalone classifiers, 5 ensemble classifiers, and 2 existing methods that\nmerge classification and clustering) on 13 standard benchmark datasets. We show\nthat our methods outperform other baselines for every single dataset, achieving\nat most 10% higher AUC. Moreover our methods are faster (1.21 times faster than\nthe best baseline), more resilient to noise and class imbalance than the best\nbaseline method.\n", "title": "EC3: Combining Clustering and Classification for Ensemble Learning" }
null
null
null
null
true
null
12870
null
Default
null
null
null
{ "abstract": " This paper investigates to what extent cognitive biases may affect human\nunderstanding of interpretable machine learning models, in particular of rules\ndiscovered from data. Twenty cognitive biases are covered, as are possible\ndebiasing techniques that can be adopted by designers of machine learning\nalgorithms and software. Our review transfers results obtained in cognitive\npsychology to the domain of machine learning, aiming to bridge the current gap\nbetween these two areas. It needs to be followed by empirical studies\nspecifically aimed at the machine learning domain.\n", "title": "A review of possible effects of cognitive biases on interpretation of rule-based machine learning models" }
null
null
null
null
true
null
12871
null
Default
null
null
null
{ "abstract": " We report a high-pressure study on the heavily electron doped Lix(NH3)yFe2Se2\nsingle crystal by using the cubic anvil cell apparatus. The superconducting\ntransition temperature Tc = 44 K at ambient pressure is first suppressed to\nbelow 20 K upon increasing pressure to Pc = 2 GPa, above which the pressure\ndependence of Tc(P) reverses and Tc increases steadily to ca. 55 K at 11 GPa.\nThese results thus evidenced a pressure-induced second high-Tc superconducting\n(SC-II) phase in Lix(NH3)yFe2Se2 with the highest Tcmax = 55K among the\nFeSe-based bulk materials. Hall data confirm that in the emergent SC-II phase\nthe dominant electron-type carrier density undergoes a fourfold enhancement and\ntracks the same trend as Tc(P). Interesting, we find a nearly parallel scaling\nbehavior between Tc and the inverse Hall coefficient for the SC-II phases of\nboth Lix(NH3)yFe2Se2 and (Li,Fe)OHFeSe. The present work demonstrates that high\npressure offers a distinctive means to further raising the maximum Tc of\nheavily electron doped FeSe-based materials by increasing the effective charge\ncarrier concentration via a plausible Fermi surface reconstruction at Pc.\n", "title": "High-Tc superconductivity up to 55 K under high pressure in the heavily electron doped Lix(NH3)yFe2Se2 single crystal" }
null
null
null
null
true
null
12872
null
Default
null
null
null
{ "abstract": " In this work, we are concerned with existence of solutions for a nonlinear\nsecond-order distributional differential equation, which contains measure\ndifferential equations and stochastic differential equations as special cases.\nThe proof is based on the Leray--Schauder nonlinear alternative and\nKurzweil--Henstock--Stieltjes integrals. Meanwhile, examples are worked out to\ndemonstrate that the main results are sharp.\n", "title": "Existence theorems for a nonlinear second-order distributional differential equation" }
null
null
null
null
true
null
12873
null
Default
null
null
null
{ "abstract": " The complete characterization of spatial coherence is difficult because the\nmutual coherence function is a complex-valued function of four independent\nvariables. This difficulty limits the ability of controlling and optimizing\nspatial coherence in a broad range of key applications. Here we propose a\nmethod for measuring the complete mutual coherence function, which does not\nrequire any prior knowledge and can be scaled to measure arbitrary coherence\nproperties for any wavelength. Our method can also be used to retrieve objects\nilluminated by partially coherent beam with unknown coherence properties. This\nstudy is particularly useful for coherent diffractive imaging of nanoscale\nstructures in the X-ray or electron regime. Our method is not limited by any\nassumption about the illumination and hence lays the foundation for a branch of\nnew diffractive imaging algorithms.\n", "title": "Spatial coherence measurement and partially coherent diffractive imaging" }
null
null
null
null
true
null
12874
null
Default
null
null
null
{ "abstract": " Network alignment consists of finding a correspondence between the nodes of\ntwo networks. From aligning proteins in computational biology, to\nde-anonymization of social networks, to recognition tasks in computer vision,\nthis problem has applications in many diverse areas. The current approaches to\nnetwork alignment mostly focus on the case where prior information is\navailable, either in the form of a seed set of correctly-matched nodes or\nattributes on the nodes and/or edges. Moreover, those approaches which assume\nno such prior information tend to be computationally expensive and do not scale\nto large-scale networks. However, many real-world networks are very large in\nsize, and prior information can be expensive, if not impossible, to obtain. In\nthis paper we introduce SPECTRE, a scalable, accurate algorithm able to solve\nthe network alignment problem with no prior information. SPECTRE makes use of\nspectral centrality measures and percolation techniques to robustly align nodes\nacross networks, even if those networks exhibit only moderate correlation.\nThrough extensive numerical experiments, we show that SPECTRE is able to\nrecover high-accuracy alignments on both synthetic and real-world networks, and\noutperforms other algorithms in the seedless case.\n", "title": "SPECTRE: Seedless Network Alignment via Spectral Centralities" }
null
null
null
null
true
null
12875
null
Default
null
null
null
{ "abstract": " An industrial indoor environment is harsh for wireless communications\ncompared to an office environment, because the prevalent metal easily causes\nshadowing effects and affects the availability of an industrial wireless local\narea network (IWLAN). On the one hand, it is costly, time-consuming, and\nineffective to perform trial-and-error manual deployment of wireless nodes. On\nthe other hand, the existing wireless planning tools only focus on office\nenvironments such that it is hard to plan IWLANs due to the larger problem size\nand the deployed IWLANs are vulnerable to prevalent shadowing effects in harsh\nindustrial indoor environments. To fill this gap, this paper proposes an\noverdimensioning model and a genetic algorithm based over-dimensioning (GAOD)\nalgorithm for deploying large-scale robust IWLANs. As a progress beyond the\nstate-of-the-art wireless planning, two full coverage layers are created. The\nsecond coverage layer serves as redundancy in case of shadowing. Meanwhile, the\ndeployment cost is reduced by minimizing the number of access points (APs); the\nhard constraint of minimal inter-AP spatial paration avoids multiple APs\ncovering the same area to be simultaneously shadowed by the same obstacle. The\ncomputation time and occupied memory are dedicatedly considered in the design\nof GAOD for large-scale optimization. A greedy heuristic based\nover-dimensioning (GHOD) algorithm and a random OD algorithm are taken as\nbenchmarks. In two vehicle manufacturers with a small and large indoor\nenvironment, GAOD outperformed GHOD with up to 20% less APs, while GHOD\noutputted up to 25% less APs than a random OD algorithm. Furthermore, the\neffectiveness of this model and GAOD was experimentally validated with a real\ndeployment system.\n", "title": "An efficient genetic algorithm for large-scale planning of robust industrial wireless networks" }
null
null
[ "Computer Science" ]
null
true
null
12876
null
Validated
null
null
null
{ "abstract": " In this paper we introduce a new feature selection algorithm to remove the\nirrelevant or redundant features in the data sets. In this algorithm the\nimportance of a feature is based on its fitting to the Catastrophe model.\nAkaike information crite- rion value is used for ranking the features in the\ndata set. The proposed algorithm is compared with well-known RELIEF feature\nselection algorithm. Breast Cancer, Parkinson Telemonitoring data and Slice\nlocality data sets are used to evaluate the model.\n", "title": "Feature selection algorithm based on Catastrophe model to improve the performance of regression analysis" }
null
null
null
null
true
null
12877
null
Default
null
null
null
{ "abstract": " This paper describes a neural-network model which performed competitively\n(top 6) at the SemEval 2017 cross-lingual Semantic Textual Similarity (STS)\ntask. Our system employs an attention-based recurrent neural network model that\noptimizes the sentence similarity. In this paper, we describe our participation\nin the multilingual STS task which measures similarity across English, Spanish,\nand Arabic.\n", "title": "Neobility at SemEval-2017 Task 1: An Attention-based Sentence Similarity Model" }
null
null
null
null
true
null
12878
null
Default
null
null
null
{ "abstract": " We give strengthened versions of the Herwig-Lascar and Hodkinson-Otto\nextension theorems for partial automorphisms of finite structures. Such\nstrengthenings yield several combinatorial and group-theoretic consequences for\nhomogeneous structures. For instance, we establish a coherent form of the\nextension property for partial automorphisms for certain Fraisse classes. We\ndeduce from these results that the isometry group of the rational Urysohn\nspace, the automorphism group of the Fraisse limit of any Fraisse class that is\nthe class of all $\\mathcal{F}$-free structures (in the Herwig--Lascar sense),\nand the automorphism group of any free homogeneous structure over a finite\nrelational language, all contain a dense locally finite subgroup. We also show\nthat any free homogeneous structure admits ample generics.\n", "title": "Coherent extension of partial automorphisms, free amalgamation, and automorphism groups" }
null
null
null
null
true
null
12879
null
Default
null
null
null
{ "abstract": " Homological index of a holomorphic 1-form on a complex analytic variety with\nan isolated singular point is an analogue of the usual index of a 1-form on a\nnon-singular manifold. One can say that it corresponds to the top Chern number\nof a manifold. We offer a definition of homological indices for collections of\n1-forms on a (purely dimensional) complex analytic variety with an isolated\nsingular point corresponding to other Chern numbers. We also define new\ninvariants of germs of complex analytic varieties with isolated singular points\nrelated to \"vanishing Chern numbers\" at them.\n", "title": "Homological indices of collections of 1-forms" }
null
null
null
null
true
null
12880
null
Default
null
null
null
{ "abstract": " Myxobacteria are social bacteria, that can glide in 2D and form\ncounter-propagating, interacting waves. Here we present a novel age-structured,\ncontinuous macroscopic model for the movement of myxobacteria. The derivation\nis based on microscopic interaction rules that can be formulated as a\nparticle-based model and set within the SOH (Self-Organized Hydrodynamics)\nframework. The strength of this combined approach is that microscopic knowledge\nor data can be incorporated easily into the particle model, whilst the\ncontinuous model allows for easy numerical analysis of the different effects.\nHowever we found that the derived macroscopic model lacks a diffusion term in\nthe density equations, which is necessary to control the number of waves,\nindicating that a higher order approximation during the derivation is crucial.\nUpon ad-hoc addition of the diffusion term, we found very good agreement\nbetween the age-structured model and the biology. In particular we analyzed the\ninfluence of a refractory (insensitivity) period following a reversal of\nmovement. Our analysis reveals that the refractory period is not necessary for\nwave formation, but essential to wave synchronization, indicating separate\nmolecular mechanisms.\n", "title": "An age-structured continuum model for myxobacteria" }
null
null
null
null
true
null
12881
null
Default
null
null
null
{ "abstract": " We prove that bilinear fractional integral operators and similar multipliers\nare smoothing in the sense that they improve the regularity of functions. We\nalso treat bilinear singular multiplier operators which preserve regularity and\nobtain several Leibniz-type rules in the contexts of Lebesgue and mixed\nLebesgue spaces.\n", "title": "Smoothing Properties of Bilinear Operators and Leibniz-Type Rules in Lebesgue and Mixed Lebesgue Spaces" }
null
null
null
null
true
null
12882
null
Default
null
null
null
{ "abstract": " The Bak Sneppen (BS) model is a very simple model that exhibits all the\nrichness of self-organized criticality theory. At the thermodynamic limit, the\nBS model converges to a situation where all particles have a fitness that is\nuniformly distributed between a critical value $p_c$ and 1. The $p_c$ value is\nunknown, as are the variables that influence and determine this value. Here, we\nstudy the Bak Sneppen model in the case in which the lowest fitness particle\ninteracts with an arbitrary even number of $m$ nearest neighbors. We show that\n$p_{c,m}$ verifies a simple local equilibrium relationship. Based on this\nrelationship, we can determine bounds for $p_{c,m}$.\n", "title": "Local equilibrium in the Bak-Sneppen model" }
null
null
null
null
true
null
12883
null
Default
null
null
null
{ "abstract": " We classify the ribbon structures of the Drinfeld center\n$\\mathcal{Z}(\\mathcal{C})$ of a finite tensor category $\\mathcal{C}$. Our\nresult generalizes Kauffman and Radford's classification result of the ribbon\nelements of the Drinfeld double of a finite-dimensional Hopf algebra. As a\nconsequence, we see that $\\mathcal{Z}(\\mathcal{C})$ is a modular tensor\ncategory in the sense of Lyubashenko if $\\mathcal{C}$ is a spherical finite\ntensor category in the sense of Douglas, Schommer-Pries and Snyder.\n", "title": "Ribbon structures of the Drinfeld center of a finite tensor category" }
null
null
null
null
true
null
12884
null
Default
null
null
null
{ "abstract": " Due to the low X-ray photon utilization efficiency and low measurement\nsensitivity of the electron multiplying charge coupled device (EMCCD) camera\nsetup, the collimator based narrow beam X-ray luminescence computed tomography\n(XLCT) usually requires a long measurement time. In this paper, we, for the\nfirst time, report a focused X-ray beam based XLCT imaging system with\nmeasurements by a single optical fiber bundle and a photomultiplier tube (PMT).\nAn X-ray tube with a polycapillary lens was used to generate a focused X-ray\nbeam whose X-ray photon density is 1200 times larger than a collimated X-ray\nbeam. An optical fiber bundle was employed to collect and deliver the emitted\nphotons on the phantom surface to the PMT. The total measurement time was\nreduced to 12.5 minutes. For numerical simulations of both single and six fiber\nbundle cases, we were able to reconstruct six targets successfully. For the\nphantom experiment, two targets with an edge-to-edge distance of 0.4 mm and a\ncenter-to-center distance of 0.8 mm were successfully reconstructed by the\nmeasurement setup with a single fiber bundle and a PMT.\n", "title": "X-ray luminescence computed tomography using a focused X-ray beam" }
null
null
[ "Physics" ]
null
true
null
12885
null
Validated
null
null
null
{ "abstract": " We developed general approach to the calculation of power-law infrared\nasymptotics of spin-spin correlation functions in the Kitaev honeycomb model\nwith different types of perturbations. We have shown that in order to find\nthese correlation functions, one can perform averaging of some bilinear forms\ncomposed out of free Majorana fermions, and we presented the method for\nexplicit calculation of these fermionic densities. We demonstrated how to\nderive an effective Hamiltonian for the Majorana fermions, including the\neffects of perturbations. For specific application of the general theory, we\nhave studied the effect of the Dzyaloshinskii-Moriya anisotropic spin-spin\ninteraction; we demonstrated that it leads, already in the second order over\nits relative magnitude $D/K$, to a power-law spin correlation functions, and\ncalculated dynamical spin structure factor of the system. We have shown that an\nexternal magnetic field $h$ in presence of the DM interaction, opens a gap in\nthe excitation spectrum of magnitude $\\Delta \\propto D h$.\n", "title": "Perturbed Kitaev model: excitation spectrum and long-ranged spin correlations" }
null
null
null
null
true
null
12886
null
Default
null
null
null
{ "abstract": " The early layers of a deep neural net have the fewest parameters, but take up\nthe most computation. In this extended abstract, we propose to only train the\nhidden layers for a set portion of the training run, freezing them out\none-by-one and excluding them from the backward pass. Through experiments on\nCIFAR, we empirically demonstrate that FreezeOut yields savings of up to 20%\nwall-clock time during training with 3% loss in accuracy for DenseNets, a 20%\nspeedup without loss of accuracy for ResNets, and no improvement for VGG\nnetworks. Our code is publicly available at\nthis https URL\n", "title": "FreezeOut: Accelerate Training by Progressively Freezing Layers" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
12887
null
Validated
null
null
null
{ "abstract": " We present a matrix-factorization algorithm that scales to input matrices\nwith both huge number of rows and columns. Learned factors may be sparse or\ndense and/or non-negative, which makes our algorithm suitable for dictionary\nlearning, sparse component analysis, and non-negative matrix factorization. Our\nalgorithm streams matrix columns while subsampling them to iteratively learn\nthe matrix factors. At each iteration, the row dimension of a new sample is\nreduced by subsampling, resulting in lower time complexity compared to a simple\nstreaming algorithm. Our method comes with convergence guarantees to reach a\nstationary point of the matrix-factorization problem. We demonstrate its\nefficiency on massive functional Magnetic Resonance Imaging data (2 TB), and on\npatches extracted from hyperspectral images (103 GB). For both problems, which\ninvolve different penalties on rows and columns, we obtain significant\nspeed-ups compared to state-of-the-art algorithms.\n", "title": "Stochastic Subsampling for Factorizing Huge Matrices" }
null
null
null
null
true
null
12888
null
Default
null
null
null
{ "abstract": " This paper presents the design of the machine learning architecture that\nunderlies the Alexa Skills Kit (ASK) a large scale Spoken Language\nUnderstanding (SLU) Software Development Kit (SDK) that enables developers to\nextend the capabilities of Amazon's virtual assistant, Alexa. At Amazon, the\ninfrastructure powers over 25,000 skills deployed through the ASK, as well as\nAWS's Amazon Lex SLU Service. The ASK emphasizes flexibility, predictability\nand a rapid iteration cycle for third party developers. It imposes inductive\nbiases that allow it to learn robust SLU models from extremely small and sparse\ndatasets and, in doing so, removes significant barriers to entry for software\ndevelopers and dialogue systems researchers.\n", "title": "Just ASK: Building an Architecture for Extensible Self-Service Spoken Language Understanding" }
null
null
null
null
true
null
12889
null
Default
null
null
null
{ "abstract": " Here we discuss blackbody radiation within the context of classical theory.\nWe note that nonrelativistic classical mechanics and relativistic classical\nelectrodynamics have contrasting scaling symmetries which influence the\nscattering of radiation. Also, nonrelativistic mechanical systems can be\naccurately combined with relativistic electromagnetic radiation only provided\nthe nonrelativistic mechanical systems are the low-velocity limits of fully\nrelativistic systems. Application of the no-interaction theorem for\nrelativistic systems limits the scattering mechanical systems for thermal\nradiation to relativistic classical electrodynamic systems, which involve the\nCoulomb potential. Whereas the naive use of nonrelativistic scatterers or\nnonrelativistic classical statistical mechanics leads to the Rayleigh-Jeans\nspectrum, the use of fully relativistic scatterers leads to the Planck spectrum\nfor blackbody radiation within classical physics.\n", "title": "Scaling, Scattering, and Blackbody Radiation in Classical Physics" }
null
null
[ "Physics" ]
null
true
null
12890
null
Validated
null
null
null
{ "abstract": " In this paper, we study a family of binomial ideals defining monomial curves\nin the $n-$dimensional affine space determined by $n$ hypersurfaces of the form\n$x_i^{c_i} - x_1^{u_{i1}} \\cdots x_n^{u_{1n}} \\in k[x_1, \\ldots, x_n]$ with\n$u_{ii} = 0$, $i\\in \\{ 1, \\ldots, n\\}$. We prove that, the monomial curves in\nthat family are set-theoretic complete intersection. Moreover, if the monomial\ncurve is irreducible, we compute some invariants such as genus, type and\nFröbenius number of the corresponding numerical semigroup. We also describe a\nmethod to produce set-theoretic complete intersection semigroup ideals of\narbitrary large height.\n", "title": "Critical binomial ideals of Norhtcott type" }
null
null
null
null
true
null
12891
null
Default
null
null
null
{ "abstract": " We investigate the collective behavior of magnetic swimmers, which are\nsuspended in a Poiseuille flow and placed under an external magnetic field,\nusing analytical techniques and Brownian dynamics simulations. We find that the\ninterplay between intrinsic activity, external alignment, and magnetic\ndipole-dipole interactions leads to longitudinal structure formation. Our work\nsheds light on a recent experimental observation of a clustering instability in\nthis system.\n", "title": "Clustering of Magnetic Swimmers in a Poiseuille Flow" }
null
null
null
null
true
null
12892
null
Default
null
null
null
{ "abstract": " We analyse a new subdomain scheme for a time-spectral method for solving\ninitial boundary value problems. Whilst spectral methods are commonplace for\nspatially dependent systems, finite difference schemes are typically applied\nfor the temporal domain. The Generalized Weighted Residual Method (GWRM) is a\nfully spectral method in that it spectrally decomposes all specified domains,\nincluding the temporal domain, with multivariate Chebyshev polynomials. The\nCommon Boundary-Condition method (CBC) is a spatial subdomain scheme that\nsolves the physical equations independently from the global connection of\nsubdomains. It is here evaluated against two finite difference methods. For the\nlinearised Burger equation the CBC-GWRM is $\\sim30\\%$ faster and $\\sim50\\%$\nmore memory efficient than the semi implicit Crank-Nicolson method at a maximum\nerror $\\sim10^{-5}$. For a forced wave equation the CBC-GWRM manages to average\nefficiently over the small time-scale in the entire temporal domain. The\nCBC-GWRM is also applied to the linearised ideal magnetohydrodynamic (MHD)\nequations for a screw pinch equilibrium. The growth rate of the most unstable\nmode was efficiently computed with an error $<0.1\\%$.\n", "title": "A Time-Spectral Method for Initial-Value Problems Using a Novel Spatial Subdomain Scheme" }
null
null
null
null
true
null
12893
null
Default
null
null
null
{ "abstract": " We propose a novel adaptive importance sampling algorithm which incorporates\nStein variational gradient decent algorithm (SVGD) with importance sampling\n(IS). Our algorithm leverages the nonparametric transforms in SVGD to\niteratively decrease the KL divergence between our importance proposal and the\ntarget distribution. The advantages of this algorithm are twofold: first, our\nalgorithm turns SVGD into a standard IS algorithm, allowing us to use standard\ndiagnostic and analytic tools of IS to evaluate and interpret the results;\nsecond, we do not restrict the choice of our importance proposal to predefined\ndistribution families like traditional (adaptive) IS methods. Empirical\nexperiments demonstrate that our algorithm performs well on evaluating\npartition functions of restricted Boltzmann machines and testing likelihood of\nvariational auto-encoders.\n", "title": "Stein Variational Adaptive Importance Sampling" }
null
null
null
null
true
null
12894
null
Default
null
null
null
{ "abstract": " We examine collective properties of closure operators on posets that are at\nleast dcpos. The first theorem sets the tone of the paper: it tells how a set\nof preclosure maps on a dcpo determines the least closure operator above them,\nand pronounces the related induction principle, and its companion, the obverse\ninduction principle. Using this theorem we prove that the poset of closure\noperators on a dcpo is a complete lattice, and then provide a constructive\nproof of the Tarski's theorem for dcpos. We go on to construct the joins in the\ncomplete lattice of Scott-continuous closure operators on a dcpo, and to prove\nthat the complete lattice of nuclei on a preframe is a frame, giving some\nconstructions in the special case of the frame of all nuclei on a frame. In the\nrather drawn-out proof if the Hofmann-Mislove-Johnstone theorem we show off the\nutility of the obverse induction, applying it in the proof of the crucial\nlemma. After that we shift a viewpoint and prove some results, analogous to\nresults about dcpos, for posets in which certain special subsets have enough\nmaximal elements; these results actually specialize to dcpos, but at the price\nof using the axiom of choice. We conclude by pointing out two convex geometries\nassociated with closure operators on a dcpo.\n", "title": "Closure operators on dcpos" }
null
null
null
null
true
null
12895
null
Default
null
null
null
{ "abstract": " We develop a hybrid system model to describe the behavior of multiple agents\ncooperatively solving an optimal coverage problem under energy depletion and\nrepletion constraints. The model captures the controlled switching of agents\nbetween coverage (when energy is depleted) and battery charging (when energy is\nreplenished) modes. It guarantees the feasibility of the coverage problem by\ndefining a guard function on each agent's battery level to prevent it from\ndying on its way to a charging station. The charging station plays the role of\na centralized scheduler to solve the contention problem of agents competing for\nthe only charging resource in the mission space. The optimal coverage problem\nis transformed into a parametric optimization problem to determine an optimal\nrecharging policy. This problem is solved through the use of Infinitesimal\nPerturbation Analysis (IPA), with simulation results showing that a full\nrecharging policy is optimal.\n", "title": "Multi-Agent Coverage Control with Energy Depletion and Repletion" }
null
null
null
null
true
null
12896
null
Default
null
null
null
{ "abstract": " We establish a link between trace modules and rigidity in modules over\nNoetherian rings. Using the theory of trace ideals we make partial progress on\na question of Dao, and on the Auslander-Reiten conjecture over Artinian\nGorenstein rings.\n", "title": "Self-injective commutative rings have no nontrivial rigid ideals" }
null
null
null
null
true
null
12897
null
Default
null
null
null
{ "abstract": " At zero temperature, the charge current operator appears to be conserved,\nwithin linear response, in certain holographic probe brane models of strange\nmetals. At small but finite temperature, we analytically show that the weak\nnon-conservation of this current leads to both a collective \"zero sound\" mode\nand a Drude peak in the electrical conductivity. This simultaneously resolves\ntwo outstanding puzzles about probe brane theories. The nonlinear dynamics of\nthe current operator itself appears qualitatively different.\n", "title": "Origin of the Drude peak and of zero sound in probe brane holography" }
null
null
null
null
true
null
12898
null
Default
null
null
null
{ "abstract": " How self-organized networks develop, mature and degenerate is a key question\nfor sociotechnical, cyberphysical and biological systems with potential\napplications from tackling violent extremism through to neurological diseases.\nSo far, it has proved impossible to measure the continuous-time evolution of\nany in vivo organism network from birth to death. Here we provide such a study\nwhich crosses all organizational and temporal scales, from individual\ncomponents (10^1) through to the mesoscopic (10^3) and entire system scale\n(10^6). These continuous-time data reveal a lifespan driven by punctuated,\nreal-time co-evolution of the structural and functional networks. Aging sees\nthese structural and functional networks gradually diverge in terms of their\nsmall-worldness and eventually their connectivity. Dying emerges as an extended\nprocess associated with the formation of large but disjoint functional\nsub-networks together with an increasingly detached core. Our mathematical\nmodel quantifies the very different impacts that interventions will have on the\noverall lifetime, period of initial growth, peak of potency, and duration of\nold age, depending on when and how they are administered. In addition to their\ndirect relevance to online extremism, our findings offer fresh insight into\naging in any network system of comparable complexity for which extensive in\nvivo data is not yet available.\n", "title": "Multiscale dynamical network mechanisms underlying aging from birth to death" }
null
null
null
null
true
null
12899
null
Default
null
null
null
{ "abstract": " Electron polarimeters based on Mott scattering are extensively used in\ndifferent fields in physics such as atomic, nuclear or particle physics. This\nis because spin-dependent measurements gives additional information on the\nphysical processes under study. The main quantity that needs to be understood\nin very much detail, both experimentally and theoretically, is the\nspin-polarization function, so called analyzing power or Sherman function. A\ndetailed theoretical analysis on all the contributions to the effective\ninteraction potential that are relevant at the typical electron beam energies\nand angles commonly used in the calibration of the experimental apparatus is\npresented. The main contribution leading the theoretical error on the Sherman\nfunction is found to correspond to radiative corrections that have been\nqualitatively estimated to be below the 0.5% for the considered kinematical\nconditions: unpolarized electron beams of few MeV elastically scattered from a\ngold and silver targets at backward angles.\n", "title": "Theoretical calculations for precision polarimetry based on Mott scattering" }
null
null
null
null
true
null
12900
null
Default
null
null