text | inputs | prediction | prediction_agent | annotation | annotation_agent | multi_label | explanation | id | metadata | status | event_timestamp | metrics |
---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " An ultra-high throughput low-density parity check (LDPC) decoder with an\nunrolled full-parallel architecture is proposed, which achieves the highest\ndecoding throughput compared to previously reported LDPC decoders in the\nliterature. The decoder benefits from a serial message-transfer approach\nbetween the decoding stages to alleviate the well-known routing congestion\nproblem in parallel LDPC decoders. Furthermore, a finite-alphabet message\npassing algorithm is employed to replace the variable node update rule of the\nstandard min-sum decoder with look-up tables, which are designed in a way that\nmaximizes the mutual information between decoding messages. The proposed\nalgorithm results in an architecture with reduced bit-width messages, leading\nto a significantly higher decoding throughput and to a lower area as compared\nto a min-sum decoder when serial message-transfer is used. The architecture is\nplaced and routed for the standard min-sum reference decoder and for the\nproposed finite-alphabet decoder using a custom pseudo-hierarchical backend\ndesign strategy to further alleviate routing congestions and to handle the\nlarge design. Post-layout results show that the finite-alphabet decoder with\nthe serial message-transfer architecture achieves a throughput as large as 588\nGbps with an area of 16.2 mm$^2$ and dissipates an average power of 22.7 pJ per\ndecoded bit in a 28 nm FD-SOI library. Compared to the reference min-sum\ndecoder, this corresponds to 3.1 times smaller area and 2 times better energy\nefficiency.\n",
"title": "A 588 Gbps LDPC Decoder Based on Finite-Alphabet Message Passing"
}
| null | null | null | null | true | null | 5801 | null | Default | null | null |
null |
{
"abstract": " In this paper we define the generalized q-analogues of Euler sums and present\na new family of identities for q-analogues of Euler sums by using the method of\nJackson q-integral representations of series. We then apply it to obtain a\nfamily of identities relating quadratic Euler sums to linear sums and\nq-polylogarithms. Furthermore, we also use certain stuffle products to evaluate\nseveral q-series with q-harmonic numbers. Some interesting new results and\nillustrative examples are considered. Finally, we can obtain some explicit\nrelations for the classical Euler sums when q approaches to 1.\n",
"title": "On q-analogues of quadratic Euler sums"
}
| null | null | ["Mathematics"] | null | true | null | 5802 | null | Validated | null | null |
null |
{
"abstract": " A mapping of the process on a continuous configuration space to the symbolic\nrepresentation of the motion on a discrete state space will be combined with an\niterative aggregation and disaggregation (IAD) procedure to obtain steady state\ndistributions of the process. The IAD speeds up the convergence to the unit\neigenvector, which is the steady state distribution, by forming smaller\naggregated matrices whose unit eigenvector solutions are used to refine\napproximations of the steady state vector until convergence is reached. This\nmethod works very efficiently and can be used together with distributed or\nparallel computing methods to obtain high resolution images of the steady state\ndistribution of complex atomistic or energy landscape type problems. The method\nis illustrated in two numerical examples. In the first example the transition\nmatrix is assumed to be known. The second example represents an overdamped\nBrownian motion process subject to a dichotomously changing external potential.\n",
"title": "An iterative aggregation and disaggregation approach to the calculation of steady-state distributions of continuous processes"
}
| null | null | null | null | true | null | 5803 | null | Default | null | null |
null |
{
"abstract": " We compare the predictions of stochastic closure theory (SCT) with\nexperimental measurements of homogeneous turbulence made in the Variable\nDensity Turbulence Tunnel (VDTT) at the Max Planck Institute for Dynamics and\nSelf-Organization in Gottingen. While the general form of SCT contains\ninfinitely many free parameters, the data permit us to reduce the number to\nseven, only three of which are active over the entire inertial range. Of these\nthree, one parameter characterizes the variance of the mean field noise in SCT\nand another characterizes the rate in the large deviations of the mean. The\nthird parameter is the decay exponent of the Fourier variables in the Fourier\nexpansion of the noise, which characterizes the smoothness of the turbulent\nvelocity. SCT compares favorably with velocity structure functions measured in\nthe experiment. We considered even-order structure functions ranging in order\nfrom two to eight as well as the third-order structure functions at five\nTaylor-Reynolds numbers (Rl) between 110 and 1450. The comparisons highlight\nseveral advantages of the SCT, which include explicit predictions for the\nstructure functions at any scale and for any Reynolds number. We observed that\nfinite-Rl corrections, for instance, are important even at the highest Reynolds\nnumbers produced in the experiments. SCT gives us the correct basis function to\nexpress all the moments of the velocity differences in turbulence in Fourier\nspace. The SCT produces the coefficients of the series and so determines the\nstatistical quantities that characterize the small scales in turbulence. It\nalso characterizes the random force acting on the fluid in the stochastic\nNavier-Stokes equation, as described in the paper.\n",
"title": "Reynolds number dependence of the structure functions in homogeneous turbulence"
}
| null | null | null | null | true | null | 5804 | null | Default | null | null |
null |
{
"abstract": " Energy consumption has been a great deal of concern in recent years and\ndevelopers need to take energy-efficiency into account when they design\nalgorithms. Their design needs to be energy-efficient and low-power while it\ntries to achieve attainable performance provided by underlying hardware.\nHowever, different optimization techniques have different effects on power and\nenergy-efficiency and a visual model would assist in the selection process.\nIn this paper, we extended the roofline model and provided a visual\nrepresentation of optimization strategies for power consumption. Our model is\ncomposed of various ceilings regarding each strategy we included in our models.\nOne roofline model for computational performance and one for memory performance\nis introduced. We assembled our models based on some optimization strategies\nfor two widespread GPUs from NVIDIA: Geforce GTX 970 and Tesla K80.\n",
"title": "Power and Energy-efficiency Roofline Model for GPUs"
}
| null | null | null | null | true | null | 5805 | null | Default | null | null |
null |
{
"abstract": " Let $\\Omega$ be a pseudoconvex domain in $\\mathbb C^n$ with smooth boundary\n$b\\Omega$. We define general estimates $(f\\text{-}\\mathcal M)^k_{\\Omega}$ and\n$(f\\text{-}\\mathcal M)^k_{b\\Omega}$ on $k$-forms for the complex Laplacian\n$\\Box$ on $\\Omega$ and the Kohn-Laplacian $\\Box_b$ on $b\\Omega$. For $1\\le k\\le\nn-2$, we show that $(f\\text{-}\\mathcal M)^k_{b\\Omega}$ holds if and only if\n$(f\\text{-}\\mathcal M)^k_{\\Omega}$ and $(f\\text{-}\\mathcal M)^{n-k-1}_{\\Omega}$\nhold. Our proof relies on Kohn's method in [Ann. of Math. (2), 156(1):213--248,\n2002].\n",
"title": "Equivalence of estimates on domain and its boundary"
}
| null | null | null | null | true | null | 5806 | null | Default | null | null |
null |
{
"abstract": " The application domains of civilian unmanned aerial systems (UASs) include\nagriculture, exploration, transportation, and entertainment. The expected\ngrowth of the UAS industry brings along new challenges: Unmanned aerial vehicle\n(UAV) flight control signaling requires low throughput, but extremely high\nreliability, whereas the data rate for payload data can be significant. This\npaper develops UAV number projections and concludes that small and micro UAVs\nwill dominate the US airspace with accelerated growth between 2028 and 2032. We\nanalyze the orthogonal frequency division multiplexing (OFDM) waveform because\nit can provide the much needed flexibility, spectral efficiency, and,\npotentially, reliability and derive suitable OFDM waveform parameters as a\nfunction of UAV flight characteristics. OFDM also lends itself to agile\nspectrum access. Based on our UAV growth predictions, we conclude that dynamic\nspectrum access is needed and discuss the applicability of spectrum sharing\ntechniques for future UAS communications.\n",
"title": "Waveform and Spectrum Management for Unmanned Aerial Systems Beyond 2025"
}
| null | null | null | null | true | null | 5807 | null | Default | null | null |
null |
{
"abstract": " Opinion polls have been the bridge between public opinion and politicians in\nelections. However, developing surveys to disclose people's feedback with\nrespect to economic issues is limited, expensive, and time-consuming. In recent\nyears, social media such as Twitter has enabled people to share their opinions\nregarding elections. Social media has provided a platform for collecting a\nlarge amount of social media data. This paper proposes a computational public\nopinion mining approach to explore the discussion of economic issues in social\nmedia during an election. Current related studies use text mining methods\nindependently for election analysis and election prediction; this research\ncombines two text mining methods: sentiment analysis and topic modeling. The\nproposed approach has effectively been deployed on millions of tweets to\nanalyze economic concerns of people during the 2012 US presidential election.\n",
"title": "Mining Public Opinion about Economic Issues: Twitter and the U.S. Presidential Election"
}
| null | null | null | null | true | null | 5808 | null | Default | null | null |
null |
{
"abstract": " The use of standard robotic platforms can accelerate research and lower the\nentry barrier for new research groups. There exist many affordable humanoid\nstandard platforms in the lower size ranges of up to 60cm, but larger humanoid\nrobots quickly become less affordable and more difficult to operate, maintain\nand modify. The igus Humanoid Open Platform is a new and affordable, fully\nopen-source humanoid platform. At 92cm in height, the robot is capable of\ninteracting in an environment meant for humans, and is equipped with enough\nsensors, actuators and computing power to support researchers in many fields.\nThe structure of the robot is entirely 3D printed, leading to a lightweight and\nvisually appealing design. The main features of the platform are described in\nthis article.\n",
"title": "The igus Humanoid Open Platform: A Child-sized 3D Printed Open-Source Robot for Research"
}
| null | null | ["Computer Science"] | null | true | null | 5809 | null | Validated | null | null |
null |
{
"abstract": " In this data-rich era of astronomy, there is a growing reliance on automated\ntechniques to discover new knowledge. The role of the astronomer may change\nfrom being a discoverer to being a confirmer. But what do astronomers actually\nlook at when they distinguish between \"sources\" and \"noise?\" What are the\ndifferences between novice and expert astronomers when it comes to visual-based\ndiscovery? Can we identify elite talent or coach astronomers to maximize their\npotential for discovery? By looking to the field of sports performance\nanalysis, we consider an established, domain-wide approach, where the expertise\nof the viewer (i.e. a member of the coaching team) plays a crucial role in\nidentifying and determining the subtle features of gameplay that provide a\nwinning advantage. As an initial case study, we investigate whether the\nSportsCode performance analysis software can be used to understand and document\nhow an experienced HI astronomer makes discoveries in spectral data cubes. We\nfind that the process of timeline-based coding can be applied to spectral cube\ndata by mapping spectral channels to frames within a movie. SportsCode provides\na range of easy to use methods for annotation, including feature-based codes\nand labels, text annotations associated with codes, and image-based drawing.\nThe outputs, including instance movies that are uniquely associated with coded\nevents, provide the basis for a training program or team-based analysis that\ncould be used in unison with discipline specific analysis software. In this\ncoordinated approach to visualization and analysis, SportsCode can act as a\nvisual notebook, recording the insight and decisions in partnership with\nestablished analysis methods. Alternatively, in situ annotation and coding of\nfeatures would be a valuable addition to existing and future visualisation and\nanalysis packages.\n",
"title": "Sports stars: analyzing the performance of astronomers at visualization-based discovery"
}
| null | null | null | null | true | null | 5810 | null | Default | null | null |
null |
{
"abstract": " Chimera states are an example of intriguing partial synchronization patterns\nemerging in networks of identical oscillators. They consist of spatially\ncoexisting domains of coherent (synchronized) and incoherent (desynchronized)\ndynamics. We analyze chimera states in networks of Van der Pol oscillators with\nhierarchical connectivities, and elaborate the role of time delay introduced in\nthe coupling term. In the parameter plane of coupling strength and delay time\nwe find tongue-like regions of existence of chimera states alternating with\nregions of existence of coherent travelling waves. We demonstrate that by\nvarying the time delay one can deliberately stabilize desired spatio-temporal\npatterns in the system.\n",
"title": "Chimera states in complex networks: interplay of fractal topology and delay"
}
| null | null | null | null | true | null | 5811 | null | Default | null | null |
null |
{
"abstract": " With the increasing demands of applications in virtual reality such as 3D\nfilms, virtual Human-Machine Interactions and virtual agents, the analysis of\n3D human face analysis is considered to be more and more important as a\nfundamental step for those virtual reality tasks. Due to information provided\nby an additional dimension, 3D facial reconstruction enables aforementioned\ntasks to be achieved with higher accuracy than those based on 2D facial\nanalysis. The denser the 3D facial model is, the more information it could\nprovide. However, most existing dense 3D facial reconstruction methods require\ncomplicated processing and high system cost. To this end, this paper presents a\nnovel method that simplifies the process of dense 3D facial reconstruction by\nemploying only one frame of depth data obtained with an off-the-shelf RGB-D\nsensor. The experiments showed competitive results with real world data.\n",
"title": "Dense 3D Facial Reconstruction from a Single Depth Image in Unconstrained Environment"
}
| null | null | null | null | true | null | 5812 | null | Default | null | null |
null |
{
"abstract": " In this paper we deal with the multiplicity and concentration of positive\nsolutions for the following fractional Schrödinger-Kirchhoff type equation\n\\begin{equation*} M\\left(\\frac{1}{\\varepsilon^{3-2s}}\n\\iint_{\\mathbb{R}^{6}}\\frac{|u(x)- u(y)|^{2}}{|x-y|^{3+2s}} dxdy +\n\\frac{1}{\\varepsilon^{3}} \\int_{\\mathbb{R}^{3}} V(x)u^{2}\ndx\\right)[\\varepsilon^{2s} (-\\Delta)^{s}u+ V(x)u]= f(u) \\, \\mbox{in}\n\\mathbb{R}^{3} \\end{equation*} where $\\varepsilon>0$ is a small parameter,\n$s\\in (\\frac{3}{4}, 1)$, $(-\\Delta)^{s}$ is the fractional Laplacian, $M$ is a\nKirchhoff function, $V$ is a continuous positive potential and $f$ is a\nsuperlinear continuous function with subcritical growth. By using penalization\ntechniques and Ljusternik-Schnirelmann theory, we investigate the relation\nbetween the number of positive solutions with the topology of the set where the\npotential attains its minimum.\n",
"title": "Concentration phenomena for a fractional Schrödinger-Kirchhoff type equation"
}
| null | null | null | null | true | null | 5813 | null | Default | null | null |
null |
{
"abstract": " We describe inferactive data analysis, so-named to denote an interactive\napproach to data analysis with an emphasis on inference after data analysis.\nOur approach is a compromise between Tukey's exploratory (roughly speaking\n\"model free\") and confirmatory data analysis (roughly speaking classical and\n\"model based\"), also allowing for Bayesian data analysis. We view this approach\nas close in spirit to current practice of applied statisticians and data\nscientists while allowing frequentist guarantees for results to be reported in\nthe scientific literature, or Bayesian results where the data scientist may\nchoose the statistical model (and hence the prior) after some initial\nexploratory analysis. While this approach to data analysis does not cover every\nscenario, and every possible algorithm data scientists may use, we see this as\na useful step in concrete providing tools (with frequentist statistical\nguarantees) for current data scientists. The basis of inference we use is\nselective inference [Lee et al., 2016, Fithian et al., 2014], in particular its\nrandomized form [Tian and Taylor, 2015a]. The randomized framework, besides\nproviding additional power and shorter confidence intervals, also provides\nexplicit forms for relevant reference distributions (up to normalization)\nthrough the {\\em selective sampler} of Tian et al. [2016]. The reference\ndistributions are constructed from a particular conditional distribution formed\nfrom what we call a DAG-DAG -- a Data Analysis Generative DAG. As sampling\nconditional distributions in DAGs is generally complex, the selective sampler\nis crucial to any practical implementation of inferactive data analysis. Our\nprincipal goal is in reviewing the recent developments in selective inference\nas well as describing the general philosophy of selective inference.\n",
"title": "Inferactive data analysis"
}
| null | null | ["Mathematics", "Statistics"] | null | true | null | 5814 | null | Validated | null | null |
null |
{
"abstract": " In this paper, we present promising accurate prefix boosting (PAPB), a\ndiscriminative training technique for attention based sequence-to-sequence\n(seq2seq) ASR. PAPB is devised to unify the training and testing scheme in an\neffective manner. The training procedure involves maximizing the score of each\npartial correct sequence obtained during beam search compared to other\nhypotheses. The training objective also includes minimization of token\n(character) error rate. PAPB shows its efficacy by achieving 10.8\\% and 3.8\\%\nWER with and without RNNLM respectively on Wall Street Journal dataset.\n",
"title": "Promising Accurate Prefix Boosting for sequence-to-sequence ASR"
}
| null | null | ["Computer Science"] | null | true | null | 5815 | null | Validated | null | null |
null |
{
"abstract": " The recently proposed \"generalized min-max\" (GMM) kernel can be efficiently\nlinearized, with direct applications in large-scale statistical learning and\nfast near neighbor search. The linearized GMM kernel was extensively compared\nin with linearized radial basis function (RBF) kernel. On a large number of\nclassification tasks, the tuning-free GMM kernel performs (surprisingly) well\ncompared to the best-tuned RBF kernel. Nevertheless, one would naturally expect\nthat the GMM kernel ought to be further improved if we introduce tuning\nparameters.\nIn this paper, we study three simple constructions of tunable GMM kernels:\n(i) the exponentiated-GMM (or eGMM) kernel, (ii) the powered-GMM (or pGMM)\nkernel, and (iii) the exponentiated-powered-GMM (epGMM) kernel. The pGMM kernel\ncan still be efficiently linearized by modifying the original hashing procedure\nfor the GMM kernel. On about 60 publicly available classification datasets, we\nverify that the proposed tunable GMM kernels typically improve over the\noriginal GMM kernel. On some datasets, the improvements can be astonishingly\nsignificant.\nFor example, on 11 popular datasets which were used for testing deep learning\nalgorithms and tree methods, our experiments show that the proposed tunable GMM\nkernels are strong competitors to trees and deep nets. The previous studies\ndeveloped tree methods including \"abc-robust-logitboost\" and demonstrated the\nexcellent performance on those 11 datasets (and other datasets), by\nestablishing the second-order tree-split formula and new derivatives for\nmulti-class logistic loss. Compared to tree methods like\n\"abc-robust-logitboost\" (which are slow and need substantial model sizes), the\ntunable GMM kernels produce largely comparable results.\n",
"title": "Tunable GMM Kernels"
}
| null | null | null | null | true | null | 5816 | null | Default | null | null |
null |
{
"abstract": " Quantum mechanics postulates that any measurement influences the state of the\ninvestigated system. Here, by means of angle-, spin-, and time-resolved\nphotoemission experiments and ab initio calculations we demonstrate how\nnon-equal depopulation of the Dirac cone (DC) states with opposite momenta in\nV-doped and pristine topological insulators (TIs) created by a photoexcitation\nby linearly polarized synchrotron radiation (SR) is followed by the\nhole-generated uncompensated spin accumulation and the SR-induced magnetization\nvia the spin-torque effect. We show that the photoexcitation of the DC is\nasymmetric, that it varies with the photon energy, and that it practically does\nnot change during the relaxation. We find a relation between the\nphotoexcitation asymmetry, the generated spin accumulation and the induced spin\npolarization of the DC and V 3d states. Experimentally the SR-generated\nin-plane and out-of-plane magnetization is confirmed by the\n$k_{\\parallel}$-shift of the DC position and by the splitting of the states at\nthe Dirac point even above the Curie temperature. Theoretical predictions and\nestimations of the measurable physical quantities substantiate the experimental\nresults.\n",
"title": "Synchrotron radiation induced magnetization in magnetically-doped and pristine topological insulators"
}
| null | null | ["Physics"] | null | true | null | 5817 | null | Validated | null | null |
null |
{
"abstract": " Given a pseudoword over suitable pseudovarieties, we associate to it a\nlabeled linear order determined by the factorizations of the pseudoword. We\nshow that, in the case of the pseudovariety of aperiodic finite semigroups, the\npseudoword can be recovered from the labeled linear order.\n",
"title": "The linear nature of pseudowords"
}
| null | null | null | null | true | null | 5818 | null | Default | null | null |
null |
{
"abstract": " This paper examines the association between household healthcare expenses and\nparticipation in the Supplemental Nutrition Assistance Program (SNAP) when\nmoderated by factors associated with financial stability of households. Using a\nlarge longitudinal panel encompassing eight years, this study finds that an\ninter-temporal increase in out-of-pocket medical expenses increased the\nlikelihood of household SNAP participation in the current period. Financially\nstable households with precautionary financial assets to cover at least 6\nmonths worth of household expenses were significantly less likely to\nparticipate in SNAP. The low income households who recently experienced an\nincrease in out of pocket medical expenses but had adequate precautionary\nsavings were less likely than similar households who did not have precautionary\nsavings to participate in SNAP. Implications for economists, policy makers, and\nhousehold finance professionals are discussed.\n",
"title": "Health Care Expenditures, Financial Stability, and Participation in the Supplemental Nutrition Assistance Program (SNAP)"
}
| null | null | null | null | true | null | 5819 | null | Default | null | null |
null |
{
"abstract": " We prove that, under certain conditions on the function pair $\\varphi_1$ and\n$\\varphi_2$, bilinear average $p^{-1}\\sum_{y\\in\n\\mathbb{F}_p}f_1(x+\\varphi_1(y)) f_2(x+\\varphi_2(y))$ along curve $(\\varphi_1,\n\\varphi_2)$ satisfies certain decay estimate. As a consequence, Roth type\ntheorems hold in the setting of finite fields. In particular, if\n$\\varphi_1,\\varphi_2\\in \\mathbb{F}_p[X]$ with $\\varphi_1(0)=\\varphi_2(0)=0$ are\nlinearly independent polynomials, then for any $A\\subset \\mathbb{F}_p,\n|A|=\\delta p$ with $\\delta>c p^{-\\frac{1}{12}}$, there are $\\gtrsim\n\\delta^3p^2$ triplets $x,x+\\varphi_1(y), x+\\varphi_2(y)\\in A$. This extends a\nrecent result of Bourgain and Chang who initiated this type of problems, and\nstrengthens the bound in a result of Peluse, who generalized Bourgain and\nChang's work. The proof uses discrete Fourier analysis and algebraic geometry.\n",
"title": "Improved estimates for polynomial Roth type theorems in finite fields"
}
| null | null | null | null | true | null | 5820 | null | Default | null | null |
null |
{
"abstract": " The ability to cool atoms below the Doppler limit -- the minimum temperature\nreachable by Doppler cooling -- has been essential to most experiments with\nquantum degenerate gases, optical lattices and atomic fountains, among many\nother applications. A broad set of new applications await ultracold molecules,\nand the extension of laser cooling to molecules has begun. A molecular\nmagneto-optical trap has been demonstrated, where molecules approached the\nDoppler limit. However, the sub-Doppler temperatures required for most\napplications have not yet been reached. Here we cool molecules to 50 uK, well\nbelow the Doppler limit, using a three-dimensional optical molasses. These\nultracold molecules could be loaded into optical tweezers to trap arbitrary\narrays for quantum simulation, launched into a molecular fountain for testing\nfundamental physics, and used to study ultracold collisions and ultracold\nchemistry.\n",
"title": "Molecules cooled below the Doppler limit"
}
| null | null | null | null | true | null | 5821 | null | Default | null | null |
null |
{
"abstract": " This paper is dedicated to new methods of constructing weight structures and\nweight-exact localizations; our arguments generalize their bounded versions\nconsidered in previous papers of the authors. We start from a class of objects\n$P$ of triangulated category $C$ that satisfies a certain negativity condition\n(there are no $C$-extensions of positive degrees between elements of $P$; we\nactually need a somewhat stronger condition of this sort) to obtain a weight\nstructure both \"halves\" of which are closed either with respect to\n$C$-coproducts of less than $\\alpha$ objects (for $\\alpha$ being a fixed\nregular cardinal) or with respect to all coproducts (provided that $C$ is\nclosed with respect to coproducts of this sort). This construction gives all\n\"reasonable\" weight structures satisfying the latter condition. In particular,\nwe obtain certain weight structures on spectra (in $SH$) consisting of less\nthan $\\alpha$ cells and on certain localizations of $SH$; these results are\nnew.\n",
"title": "On purely generated $α$-smashing weight structures and weight-exact localizations"
}
| null | null | null | null | true | null | 5822 | null | Default | null | null |
null |
{
"abstract": " A common data mining task on networks is community detection, which seeks an\nunsupervised decomposition of a network into structural groups based on\nstatistical regularities in the network's connectivity. Although many methods\nexist, the No Free Lunch theorem for community detection implies that each\nmakes some kind of tradeoff, and no algorithm can be optimal on all inputs.\nThus, different algorithms will over or underfit on different inputs, finding\nmore, fewer, or just different communities than is optimal, and evaluation\nmethods that use a metadata partition as a ground truth will produce misleading\nconclusions about general accuracy. Here, we present a broad evaluation of over\nand underfitting in community detection, comparing the behavior of 16\nstate-of-the-art community detection algorithms on a novel and structurally\ndiverse corpus of 406 real-world networks. We find that (i) algorithms vary\nwidely both in the number of communities they find and in their corresponding\ncomposition, given the same input, (ii) algorithms can be clustered into\ndistinct high-level groups based on similarities of their outputs on real-world\nnetworks, and (iii) these differences induce wide variation in accuracy on link\nprediction and link description tasks. We introduce a new diagnostic for\nevaluating overfitting and underfitting in practice, and use it to roughly\ndivide community detection methods into general and specialized learning\nalgorithms. Across methods and inputs, Bayesian techniques based on the\nstochastic block model and a minimum description length approach to\nregularization represent the best general learning approach, but can be\noutperformed under specific circumstances. These results introduce both a\ntheoretically principled approach to evaluate over and underfitting in models\nof network community structure and a realistic benchmark by which new methods\nmay be evaluated and compared.\n",
"title": "Evaluating Overfit and Underfit in Models of Network Community Structure"
}
| null | null | null | null | true | null | 5823 | null | Default | null | null |
null |
{
"abstract": " Advancements in technology and culture lead to changes in our language. These\nchanges create a gap between the language known by users and the language\nstored in digital archives. It affects user's possibility to firstly find\ncontent and secondly interpret that content. In previous work we introduced our\napproach for Named Entity Evolution Recognition~(NEER) in newspaper\ncollections. Lately, increasing efforts in Web preservation lead to increased\navailability of Web archives covering longer time spans. However, language on\nthe Web is more dynamic than in traditional media and many of the basic\nassumptions from the newspaper domain do not hold for Web data. In this paper\nwe discuss the limitations of existing methodology for NEER. We approach these\nby adapting an existing NEER method to work on noisy data like the Web and the\nBlogosphere in particular. We develop novel filters that reduce the noise and\nmake use of Semantic Web resources to obtain more information about terms. Our\nevaluation shows the potentials of the proposed approach.\n",
"title": "Named Entity Evolution Recognition on the Blogosphere"
}
| null | null | null | null | true | null | 5824 | null | Default | null | null |
null |
{
"abstract": " A new test of normality based on a standardised empirical process is\nintroduced in this article.\nThe first step is to introduce a Cramér-von Mises type statistic with\nweights equal to the inverse of the standard normal density function supported\non a symmetric interval $[-a_n,a_n]$ depending on the sample size $n.$ The\nsequence of end points $a_n$ tends to infinity, and is chosen so that the\nstatistic goes to infinity at the speed of $\\ln \\ln n.$ After substracting the\nmean, a suitable test statistic is obtained, with the same asymptotic law as\nthe well-known Shapiro-Wilk statistic. The performance of the new test is\ndescribed and compared with three other well-known tests of normality, namely,\nShapiro-Wilk, Anderson-Darling and that of del Barrio-Matrán, Cuesta\nAlbertos, and Rodr\\'{\\i}guez Rodr\\'{\\i}guez, by means of power calculations\nunder many alternative hypotheses.\n",
"title": "Truncated Cramér-von Mises test of normality"
}
| null | null | null | null | true | null |
5825
| null |
Default
| null | null |
null |
{
"abstract": " We consider the problem of how decision making can be fair when the\nunderlying probabilistic model of the world is not known with certainty. We\nargue that recent notions of fairness in machine learning need to explicitly\nincorporate parameter uncertainty, hence we introduce the notion of {\\em\nBayesian fairness} as a suitable candidate for fair decision rules. Using\nbalance, a definition of fairness introduced by Kleinberg et al (2016), we show\nhow a Bayesian perspective can lead to well-performing, fair decision rules\neven under high uncertainty.\n",
"title": "Bayesian fairness"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
5826
| null |
Validated
| null | null |
null |
{
"abstract": " We consider the bi-Laplacian eigenvalue problem for the modes of vibration of\na thin elastic plate with a discrete set of clamped points. A high-order\nboundary integral equation method is developed for efficient numerical\ndetermination of these modes in the presence of multiple localized defects for\na wide range of two-dimensional geometries. The defects result in\neigenfunctions with a weak singularity that is resolved by decomposing the\nsolution as a superposition of Green's functions plus a smooth regular part.\nThis method is applied to a variety of regular and irregular domains and two\nkey phenomena are observed. First, careful placement of clamping points can\nentirely eliminate particular eigenvalues and suggests a strategy for\nmanipulating the vibrational characteristics of rigid bodies so that\nundesirable frequencies are removed. Second, clamping of the plate can result\nin partitioning of the domain so that vibrational modes are largely confined to\ncertain spatial regions. This numerical method gives a precision tool for\ntuning the vibrational characteristics of thin elastic plates.\n",
"title": "A boundary integral equation method for mode elimination and vibration confinement in thin plates with clamped points"
}
| null | null | null | null | true | null |
5827
| null |
Default
| null | null |
null |
{
"abstract": " Let $K$ be a field, $G$ a finite group. Let $G$ act on the function field $L\n= K(x_{\\sigma} : \\sigma \\in G)$ by $\\tau \\cdot x_{\\sigma} = x_{\\tau\\sigma}$ for\nany $\\sigma, \\tau \\in G$. Denote the fixed field of the action by $K(G) = L^{G}\n= \\left\\{ \\frac{f}{g} \\in L : \\sigma(\\frac{f}{g}) = \\frac{f}{g}, \\forall \\sigma\n\\in G \\right\\}$. Noether's problem asks whether $K(G)$ is rational (purely\ntranscendental) over $K$. It is known that if $G = C_m \\rtimes C_n$ is a\nsemidirect product of cyclic groups $C_m$ and $C_n$ with $\\mathbb{Z}[\\zeta_n]$\na unique factorization domain, and $K$ contains an $e$th primitive root of\nunity, where $e$ is the exponent of $G$, then $K(G)$ is rational over $K$. In\nthis paper, we give another criteria to determine whether $K(C_m \\rtimes C_n)$\nis rational over $K$. In particular, if $p, q$ are prime numbers and there\nexists $x \\in \\mathbb{Z}[\\zeta_q]$ such that the norm\n$N_{\\mathbb{Q}(\\zeta_q)/\\mathbb{Q}}(x) = p$, then $\\mathbb{C}(C_{p} \\rtimes\nC_{q})$ is rational over $\\mathbb{C}$.\n",
"title": "Noether's Problem on Semidirect Product Groups"
}
| null | null | null | null | true | null |
5828
| null |
Default
| null | null |
null |
{
"abstract": " Social abstract argumentation is a principled way to assign values to\nconflicting (weighted) arguments. In this note we discuss the important\nproperty of the uniqueness of the model.\n",
"title": "A note on the uniqueness of models in social abstract argumentation"
}
| null | null | null | null | true | null |
5829
| null |
Default
| null | null |
null |
{
"abstract": " We consider the parametric learning problem, where the objective of the\nlearner is determined by a parametric loss function. Employing empirical risk\nminimization with possibly regularization, the inferred parameter vector will\nbe biased toward the training samples. Such bias is measured by the cross\nvalidation procedure in practice where the data set is partitioned into a\ntraining set used for training and a validation set, which is not used in\ntraining and is left to measure the out-of-sample performance. A classical\ncross validation strategy is the leave-one-out cross validation (LOOCV) where\none sample is left out for validation and training is done on the rest of the\nsamples that are presented to the learner, and this process is repeated on all\nof the samples. LOOCV is rarely used in practice due to the high computational\ncomplexity. In this paper, we first develop a computationally efficient\napproximate LOOCV (ALOOCV) and provide theoretical guarantees for its\nperformance. Then we use ALOOCV to provide an optimization algorithm for\nfinding the regularizer in the empirical risk minimization framework. In our\nnumerical experiments, we illustrate the accuracy and efficiency of ALOOCV as\nwell as our proposed framework for the optimization of the regularizer.\n",
"title": "On Optimal Generalizability in Parametric Learning"
}
| null | null | null | null | true | null |
5830
| null |
Default
| null | null |
null |
{
"abstract": " This study addresses the problem of identifying the meaning of unknown words\nor entities in a discourse with respect to the word embedding approaches used\nin neural language models. We proposed a method for on-the-fly construction and\nexploitation of word embeddings in both the input and output layers of a neural\nmodel by tracking contexts. This extends the dynamic entity representation used\nin Kobayashi et al. (2016) and incorporates a copy mechanism proposed\nindependently by Gu et al. (2016) and Gulcehre et al. (2016). In addition, we\nconstruct a new task and dataset called Anonymized Language Modeling for\nevaluating the ability to capture word meanings while reading. Experiments\nconducted using our novel dataset show that the proposed variant of RNN\nlanguage model outperformed the baseline model. Furthermore, the experiments\nalso demonstrate that dynamic updates of an output layer help a model predict\nreappearing entities, whereas those of an input layer are effective to predict\nwords following reappearing entities.\n",
"title": "A Neural Language Model for Dynamically Representing the Meanings of Unknown Words and Entities in a Discourse"
}
| null | null | null | null | true | null |
5831
| null |
Default
| null | null |
null |
{
"abstract": " The study of subblock-constrained codes has recently gained attention due to\ntheir application in diverse fields. We present bounds on the size and\nasymptotic rate for two classes of subblock-constrained codes. The first class\nis binary constant subblock-composition codes (CSCCs), where each codeword is\npartitioned into equal sized subblocks, and every subblock has the same fixed\nweight. The second class is binary subblock energy-constrained codes (SECCs),\nwhere the weight of every subblock exceeds a given threshold. We present novel\nupper and lower bounds on the code sizes and asymptotic rates for binary CSCCs\nand SECCs. For a fixed subblock length and small relative distance, we show\nthat the asymptotic rate for CSCCs (resp. SECCs) is strictly lower than the\ncorresponding rate for constant weight codes (CWCs) (resp. heavy weight codes\n(HWCs)). Further, for codes with high weight and low relative distance, we show\nthat the asymptotic rates for CSCCs is strictly lower than that of SECCs, which\ncontrasts that the asymptotic rate for CWCs is equal to that of HWCs. We also\nprovide a correction to an earlier result by Chee et al. (2014) on the\nasymptotic CSCC rate. Additionally, we present several numerical examples\ncomparing the rates for CSCCs and SECCs with those for constant weight codes\nand heavy weight codes.\n",
"title": "Bounds on the Size and Asymptotic Rate of Subblock-Constrained Codes"
}
| null | null | null | null | true | null |
5832
| null |
Default
| null | null |
null |
{
"abstract": " We prove that there are arbitrarily large values of $t$ such that\n$|\\zeta(1+it)| \\geq e^{\\gamma} (\\log_2 t + \\log_3 t) + \\mathcal{O}(1)$. This\nessentially matches the prediction for the optimal lower bound in a conjecture\nof Granville and Soundararajan. Our proof uses a new variant of the \"long\nresonator\" method. While earlier implementations of this method crucially\nrelied on a \"sparsification\" technique to control the mean-square of the\nresonator function, in the present paper we exploit certain self-similarity\nproperties of a specially designed resonator function.\n",
"title": "Extreme values of the Riemann zeta function on the 1-line"
}
| null | null |
[
"Mathematics"
] | null | true | null |
5833
| null |
Validated
| null | null |
null |
{
"abstract": " In an ion trap quantum computer, collective motional modes are used to\nentangle two or more qubits in order to execute multi-qubit logical gates. Any\nresidual entanglement between the internal and motional states of the ions\nresults in loss of fidelity, especially when there are many spectator ions in\nthe crystal. We propose using a frequency-modulated (FM) driving force to\nminimize such errors. In simulation, we obtained an optimized FM two-qubit gate\nthat can suppress errors to less than 0.01\\% and is robust against frequency\ndrifts over $\\pm$1 kHz. Experimentally, we have obtained a two-qubit gate\nfidelity of $98.3(4)\\%$, a state-of-the-art result for two-qubit gates with 5\nions.\n",
"title": "Robust two-qubit gates in a linear ion crystal using a frequency-modulated driving force"
}
| null | null | null | null | true | null |
5834
| null |
Default
| null | null |
null |
{
"abstract": " We investigate models of the mitogenactivated protein kinases (MAPK) network,\nwith the aim of determining where in parameter space there exist multiple\npositive steady states. We build on recent progress which combines various\nsymbolic computation methods for mixed systems of equalities and inequalities.\nWe demonstrate that those techniques benefit tremendously from a newly\nimplemented graph theoretical symbolic preprocessing method. We compare\ncomputation times and quality of results of numerical continuation methods with\nour symbolic approach before and after the application of our preprocessing.\n",
"title": "Symbolic Versus Numerical Computation and Visualization of Parameter Regions for Multistationarity of Biological Networks"
}
| null | null | null | null | true | null |
5835
| null |
Default
| null | null |
null |
{
"abstract": " The University of the East Web Portal is an academic, web based system that\nprovides educational electronic materials and e-learning services. To fully\noptimize its usage, it is imperative to determine the factors that relate to\nits usage. Thus, this study, to determine the computer self-efficacy of the\nfaculty members of the University of the East and its relationship with their\nweb portal usage, was conceived. Using a validated questionnaire, the profile\nof the respondents, their computer self-efficacy, and web portal usage were\ngathered. Data showed that the respondents were relatively young (M = 40 years\nold), majority had masters degree (f = 85, 72%), most had been using the web\nportal for four semesters (f = 60, 51%), and the large part were intermediate\nweb portal users (f = 69, 59%). They were highly skilled in using the computer\n(M = 4.29) and skilled in using the Internet (M = 4.28). E-learning services (M\n= 3.29) and online library resources (M = 3.12) were only used occasionally.\nPearson correlation revealed that age was positively correlated with online\nlibrary resources (r = 0.267, p < 0.05) and a negative relationship existed\nbetween perceived skill level in using the portal and online library resources\nusage (r = -0.206, p < 0.05). A 2x2 chi square revealed that the highest\neducational attainment had a significant relationship with online library\nresources (chi square = 5.489, df = 1, p < 0.05). Basic computer (r = 0.196, p\n< 0.05) and Internet skills (r = 0.303, p < 0.05) were significantly and\npositively related with e-learning services usage but not with online library\nresources usage. Other individual factors such as attitudes towards the web\nportal and anxiety towards using the web portal can be investigated.\n",
"title": "Computer Self-efficacy and Its Relationship with Web Portal Usage: Evidence from the University of the East"
}
| null | null |
[
"Computer Science"
] | null | true | null |
5836
| null |
Validated
| null | null |
null |
{
"abstract": " We classify the dispersive Poisson brackets with one dependent variable and\ntwo independent variables, with leading order of hydrodynamic type, up to Miura\ntransformations. We show that, in contrast to the case of a single independent\nvariable for which a well known triviality result exists, the Miura equivalence\nclasses are parametrised by an infinite number of constants, which we call\nnumerical invariants of the brackets. We obtain explicit formulas for the first\nfew numerical invariants.\n",
"title": "Normal forms of dispersive scalar Poisson brackets with two independent variables"
}
| null | null | null | null | true | null |
5837
| null |
Default
| null | null |
null |
{
"abstract": " In this work we obtain a Liouville theorem for positive, bounded solutions of\nthe equation $$ (-\\Delta)^s u= h(x_N)f(u) \\quad \\hbox{in }\\mathbb{R}^{N} $$\nwhere $(-\\Delta)^s$ stands for the fractional Laplacian with $s\\in (0,1)$, and\nthe functions $h$ and $f$ are nondecreasing. The main feature is that the\nfunction $h$ changes sign in $\\mathbb{R}$, therefore the problem is sometimes\ntermed as indefinite. As an application we obtain a priori bounds for positive\nsolutions of some boundary value problems, which give existence of such\nsolutions by means of bifurcation methods.\n",
"title": "A Liouville theorem for indefinite fractional diffusion equations and its application to existence of solutions"
}
| null | null | null | null | true | null |
5838
| null |
Default
| null | null |
null |
{
"abstract": " This paper studies semiparametric contextual bandits, a generalization of the\nlinear stochastic bandit problem where the reward for an action is modeled as a\nlinear function of known action features confounded by an non-linear\naction-independent term. We design new algorithms that achieve\n$\\tilde{O}(d\\sqrt{T})$ regret over $T$ rounds, when the linear function is\n$d$-dimensional, which matches the best known bounds for the simpler\nunconfounded case and improves on a recent result of Greenewald et al. (2017).\nVia an empirical evaluation, we show that our algorithms outperform prior\napproaches when there are non-linear confounding effects on the rewards.\nTechnically, our algorithms use a new reward estimator inspired by\ndoubly-robust approaches and our proofs require new concentration inequalities\nfor self-normalized martingales.\n",
"title": "Semiparametric Contextual Bandits"
}
| null | null | null | null | true | null |
5839
| null |
Default
| null | null |
null |
{
"abstract": " As all physical adaptive quantum-enhanced metrology schemes operate under\nnoisy conditions with only partially understood noise characteristics, so a\npractical control policy must be robust even for unknown noise. We aim to\ndevise a test to evaluate the robustness of AQEM policies and assess the\nresource used by the policies. The robustness test is performed on QEAPE by\nsimulating the scheme under four phase-noise models corresponding to\nnormal-distribution noise, random-telegraph noise, skew-normal-distribution\nnoise, and log-normal-distribution noise. Control policies are devised either\nby an evolutionary algorithm under the same noisy conditions, albeit ignorant\nof its properties, or a Bayesian-based feedback method that assumes no noise.\nOur robustness test and resource comparison method can be used to determining\nthe efficacy and selecting a suitable policy.\n",
"title": "Robustness of Quantum-Enhanced Adaptive Phase Estimation"
}
| null | null | null | null | true | null |
5840
| null |
Default
| null | null |
null |
{
"abstract": " This paper shows that generalizations of operads equipped with their\nrespective bar/cobar dualities are related by a six operations formalism\nanalogous to that of classical contexts in algebraic geometry. As a consequence\nof our constructions, we prove intertwining theorems which govern derived\nKoszul duality of push-forwards and pull-backs.\n",
"title": "Six operations formalism for generalized operads"
}
| null | null |
[
"Mathematics"
] | null | true | null |
5841
| null |
Validated
| null | null |
null |
{
"abstract": " Extracting characteristics from the training datasets of classification\nproblems has proven effective in a number of meta-analyses. Among them,\nmeasures of classification complexity can estimate the difficulty in separating\nthe data points into their expected classes. Descriptors of the spatial\ndistribution of the data and estimates of the shape and size of the decision\nboundary are among the existent measures for this characterization. This\ninformation can support the formulation of new data-driven pre-processing and\npattern recognition techniques, which can in turn be focused on challenging\ncharacteristics of the problems. This paper surveys and analyzes measures which\ncan be extracted from the training datasets in order to characterize the\ncomplexity of the respective classification problems. Their use in recent\nliterature is also reviewed and discussed, allowing to prospect opportunities\nfor future work in the area. Finally, descriptions are given on an R package\nnamed Extended Complexity Library (ECoL) that implements a set of complexity\nmeasures and is made publicly available.\n",
"title": "How Complex is your classification problem? A survey on measuring classification complexity"
}
| null | null | null | null | true | null |
5842
| null |
Default
| null | null |
null |
{
"abstract": " Dynamical dark energy has been recently suggested as a promising and physical\nway to solve the 3.4 sigma tension on the value of the Hubble constant $H_0$\nbetween the direct measurement of Riess et al. (2016) (R16, hereafter) and the\nindirect constraint from Cosmic Microwave Anisotropies obtained by the Planck\nsatellite under the assumption of a $\\Lambda$CDM model. In this paper, by\nparameterizing dark energy evolution using the $w_0$-$w_a$ approach, and\nconsidering a $12$ parameter extended scenario, we find that: a) the tension on\nthe Hubble constant can indeed be solved with dynamical dark energy, b) a\ncosmological constant is ruled out at more than $95 \\%$ c.l. by the Planck+R16\ndataset, and c) all of the standard quintessence and half of the \"downward\ngoing\" dark energy model space (characterized by an equation of state that\ndecreases with time) is also excluded at more than $95 \\%$ c.l. These results\nare further confirmed when cosmic shear, CMB lensing, or SN~Ia luminosity\ndistance data are also included. However, tension remains with the BAO dataset.\nA cosmological constant and small portion of the freezing quintessence models\nare still in agreement with the Planck+R16+BAO dataset at between 68\\% and 95\\%\nc.l. Conversely, for Planck plus a phenomenological $H_0$ prior, both thawing\nand freezing quintessence models prefer a Hubble constant of less than 70\nkm/s/Mpc. The general conclusions hold also when considering models with\nnon-zero spatial curvature.\n",
"title": "Constraining Dark Energy Dynamics in Extended Parameter Space"
}
| null | null | null | null | true | null |
5843
| null |
Default
| null | null |
null |
{
"abstract": " The muscle synergy concept provides a widely-accepted paradigm to break down\nthe complexity of motor control. In order to identify the synergies, different\nmatrix factorisation techniques have been used in a repertoire of fields such\nas prosthesis control and biomechanical and clinical studies. However, the\nrelevance of these matrix factorisation techniques is still open for discussion\nsince there is no ground truth for the underlying synergies. Here, we evaluate\nfactorisation techniques and investigate the factors that affect the quality of\nestimated synergies. We compared commonly used matrix factorisation methods:\nPrincipal component analysis (PCA), Independent component analysis (ICA),\nNon-negative matrix factorization (NMF) and second-order blind identification\n(SOBI). Publicly available real data were used to assess the synergies\nextracted by each factorisation method in the classification of wrist\nmovements. Synthetic datasets were utilised to explore the effect of muscle\nsynergy sparsity, level of noise and number of channels on the extracted\nsynergies. Results suggest that the sparse synergy model and a higher number of\nchannels would result in better-estimated synergies. Without dimensionality\nreduction, SOBI showed better results than other factorisation methods. This\nsuggests that SOBI would be an alternative when a limited number of electrodes\nis available but its performance was still poor in that case. Otherwise, NMF\nhad the best performance when the number of channels was higher than the number\nof synergies. Therefore, NMF would be the best method for muscle synergy\nextraction.\n",
"title": "Evaluation of matrix factorisation approaches for muscle synergy extraction"
}
| null | null | null | null | true | null |
5844
| null |
Default
| null | null |
null |
{
"abstract": " Two classifications of second order ODE's cubic with respect to the first\norder derivative are compared in the case of general position, which is common\nfor both classifications. The correspondence of vectorial, pseudovectorial,\nscalar, and pseudoscalar invariants is established.\n",
"title": "Comparison of two classifications of a class of ODE's in the case of general position"
}
| null | null | null | null | true | null |
5845
| null |
Default
| null | null |
null |
{
"abstract": " The problem of learning structural equation models (SEMs) from data is a\nfundamental problem in causal inference. We develop a new algorithm --- which\nis computationally and statistically efficient and works in the\nhigh-dimensional regime --- for learning linear SEMs from purely observational\ndata with arbitrary noise distribution. We consider three aspects of the\nproblem: identifiability, computational efficiency, and statistical efficiency.\nWe show that when data is generated from a linear SEM over $p$ nodes and\nmaximum degree $d$, our algorithm recovers the directed acyclic graph (DAG)\nstructure of the SEM under an identifiability condition that is more general\nthan those considered in the literature, and without faithfulness assumptions.\nIn the population setting, our algorithm recovers the DAG structure in\n$\\mathcal{O}(p(d^2 + \\log p))$ operations. In the finite sample setting, if the\nestimated precision matrix is sparse, our algorithm has a smoothed complexity\nof $\\widetilde{\\mathcal{O}}(p^3 + pd^7)$, while if the estimated precision\nmatrix is dense, our algorithm has a smoothed complexity of\n$\\widetilde{\\mathcal{O}}(p^5)$. For sub-Gaussian noise, we show that our\nalgorithm has a sample complexity of $\\mathcal{O}(\\frac{d^8}{\\varepsilon^2}\n\\log (\\frac{p}{\\sqrt{\\delta}}))$ to achieve $\\varepsilon$ element-wise additive\nerror with respect to the true autoregression matrix with probability at most\n$1 - \\delta$, while for noise with bounded $(4m)$-th moment, with $m$ being a\npositive integer, our algorithm has a sample complexity of\n$\\mathcal{O}(\\frac{d^8}{\\varepsilon^2} (\\frac{p^2}{\\delta})^{1/m})$.\n",
"title": "Learning linear structural equation models in polynomial time and sample complexity"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
5846
| null |
Validated
| null | null |
null |
{
"abstract": " For a group $G$ and $R=\\mathbb Z,\\mathbb Z/p,\\mathbb Q$ we denote by $\\hat\nG_R$ the $R$-completion of $G.$ We study the map $H_n(G,K)\\to H_n(\\hat G_R,K),$\nwhere $(R,K)=(\\mathbb Z,\\mathbb Z/p),(\\mathbb Z/p,\\mathbb Z/p),(\\mathbb\nQ,\\mathbb Q).$ We prove that $H_2(G,K)\\to H_2(\\hat G_R,K)$ is an epimorphism\nfor a finitely generated solvable group $G$ of finite Prüfer rank. In\nparticular, Bousfield's $HK$-localisation of such groups coincides with the\n$K$-completion for $K=\\mathbb Z/p,\\mathbb Q.$ Moreover, we prove that\n$H_n(G,K)\\to H_n(\\hat G_R,K)$ is an epimorphism for any $n$ if $G$ is a\nfinitely presented group of the form $G=M\\rtimes C,$ where $C$ is the infinite\ncyclic group and $M$ is a $C$-module.\n",
"title": "On Bousfield's problem for solvable groups of finite Prüfer rank"
}
| null | null | null | null | true | null |
5847
| null |
Default
| null | null |
null |
{
"abstract": " Immunotherapy plays a major role in tumour treatment, in comparison with\nother methods of dealing with cancer. The Kirschner-Panetta (KP) model of\ncancer immunotherapy describes the interaction between tumour cells, effector\ncells and interleukin-2 which are clinically utilized as medical treatment. The\nmodel selects a rich concept of immune-tumour dynamics. In this paper,\napproximate analytical solutions to KP model are represented by using the\ndifferential transform and Adomian decomposition. The complicated nonlinearity\nof the KP system causes the application of these two methods to require more\ninvolved calculations. The approximate analytical solutions to the model are\ncompared with the results obtained by numerical fourth order Runge-Kutta\nmethod.\n",
"title": "Approximate Analytical Solution of a Cancer Immunotherapy Model by the Application of Differential Transform and Adomian Decomposition Methods"
}
| null | null |
[
"Quantitative Biology"
] | null | true | null |
5848
| null |
Validated
| null | null |
null |
{
"abstract": " A one-to-one correspondence between the infinitesimal motions of bar-joint\nframeworks in $\\mathbb{R}^d$ and those in $\\mathbb{S}^d$ is a classical\nobservation by Pogorelov, and further connections among different rigidity\nmodels in various different spaces have been extensively studied. In this\npaper, we shall extend this line of research to include the infinitesimal\nrigidity of frameworks consisting of points and hyperplanes. This enables us to\nunderstand correspondences between point-hyperplane rigidity, classical\nbar-joint rigidity, and scene analysis.\nAmong other results, we derive a combinatorial characterization of graphs\nthat can be realized as infinitesimally rigid frameworks in the plane with a\ngiven set of points collinear. This extends a result by Jackson and Jordán,\nwhich deals with the case when three points are collinear.\n",
"title": "Point-hyperplane frameworks, slider joints, and rigidity preserving transformations"
}
| null | null |
[
"Mathematics"
] | null | true | null |
5849
| null |
Validated
| null | null |
null |
{
"abstract": " We first develop a general framework for signless 1-Laplacian defined in\nterms of the combinatorial structure of a simplicial complex. The structure of\nthe eigenvectors and the complex feature of eigenvalues are studied. The\nCourant nodal domain theorem for partial differential equation is extended to\nthe signless 1-Laplacian on complex. We also study the effects of a wedge sum\nand a duplication of a motif on the spectrum of the signless 1-Laplacian, and\nidentify some of the combinatorial features of a simplicial complex that are\nencoded in its spectrum. A special result is that the independent number and\nclique covering number on a complex provide lower and upper bounds of the\nmultiplicity of the largest eigenvalue of signless 1-Laplacian, respectively,\nwhich has no counterpart of $p$-Laplacian for any $p>1$.\n",
"title": "Spectrum of signless 1-Laplacian on simplicial complexes"
}
| null | null | null | null | true | null |
5850
| null |
Default
| null | null |
null |
{
"abstract": " One of the major issues in an interconnected power system is the low damping\nof inter-area oscillations which significantly reduces the power transfer\ncapability. Advances in Wide-Area Measurement System (WAMS) makes it possible\nto use the information from geographical distant location to improve power\nsystem dynamics and performances. A speed deviation based Wide-Area Power\nSystem Stabilizer (WAPSS) is known to be effective in damping inter-area modes.\nHowever, the involvement of wide-area signals gives rise to the problem of\ntime-delay, which may degrade the system performance. In general, time-stamped\nsynchronized signals from Phasor Data Concentrator (PDC) are used for WAPSS, in\nwhich delays are introduced in both local and remote signals. One can opt for a\nfeedback of remote signal only from PDC and uses the local signal as it is\navailable, without time synchronization. This paper utilizes configurations of\ntime-matched synchronized and nonsychronized feedback and provides the\nguidelines to design the controller. The controllers are synthesized using\n$H_\\infty$ control with regional pole placement for ensuring adequate dynamic\nperformance. To show the effectiveness of the proposed approach, two power\nsystem models have been used for the simulations. It is shown that the\ncontrollers designed based on the nonsynchronized signals are more robust to\ntime time delay variations than the controllers using synchronized signal.\n",
"title": "Inter-Area Oscillation Damping With Non-Synchronized Wide-Area Power System Stabilizer"
}
| null | null | null | null | true | null |
5851
| null |
Default
| null | null |
null |
{
"abstract": " This paper demonstrates the use of genetic algorithms for evolving: 1) a\ngrandmaster-level evaluation function, and 2) a search mechanism for a chess\nprogram, the parameter values of which are initialized randomly. The evaluation\nfunction of the program is evolved by learning from databases of (human)\ngrandmaster games. At first, the organisms are evolved to mimic the behavior of\nhuman grandmasters, and then these organisms are further improved upon by means\nof coevolution. The search mechanism is evolved by learning from tactical test\nsuites. Our results show that the evolved program outperforms a two-time world\ncomputer chess champion and is at par with the other leading computer chess\nprograms.\n",
"title": "Genetic Algorithms for Evolving Computer Chess Programs"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
5852
| null |
Validated
| null | null |
null |
{
"abstract": " This work presents a low-cost robot, controlled by a Raspberry Pi, whose\nnavigation system is based on vision. The strategy used consisted of\nidentifying obstacles via optical flow pattern recognition. Its estimation was\ndone using the Lucas-Kanade algorithm, which can be executed by the Raspberry\nPi without harming its performance. Finally, an SVM-based classifier was used\nto identify patterns of this signal associated with obstacles movement. The\ndeveloped system was evaluated considering its execution over an optical flow\npattern dataset extracted from a real navigation environment. In the end, it\nwas verified that the acquisition cost of the system was inferior to that\npresented by most of the cited works, while its performance was similar to\ntheirs.\n",
"title": "Low-cost Autonomous Navigation System Based on Optical Flow Classification"
}
| null | null |
[
"Computer Science"
] | null | true | null |
5853
| null |
Validated
| null | null |
null |
{
"abstract": " The NVIDIA Volta GPU microarchitecture introduces a specialized unit, called\n\"Tensor Core\" that performs one matrix-multiply-and-accumulate on 4x4 matrices\nper clock cycle. The NVIDIA Tesla V100 accelerator, featuring the Volta\nmicroarchitecture, provides 640 Tensor Cores with a theoretical peak\nperformance of 125 Tflops/s in mixed precision. In this paper, we investigate\ncurrent approaches to program NVIDIA Tensor Cores, their performances and the\nprecision loss due to computation in mixed precision.\nCurrently, NVIDIA provides three different ways of programming\nmatrix-multiply-and-accumulate on Tensor Cores: the CUDA Warp Matrix Multiply\nAccumulate (WMMA) API, CUTLASS, a templated library based on WMMA, and cuBLAS\nGEMM. After experimenting with different approaches, we found that NVIDIA\nTensor Cores can deliver up to 83 Tflops/s in mixed precision on a Tesla V100\nGPU, seven and three times the performance in single and half precision\nrespectively. A WMMA implementation of batched GEMM reaches a performance of 4\nTflops/s. While precision loss due to matrix multiplication with half precision\ninput might be critical in many HPC applications, it can be considerably\nreduced at the cost of increased computation. Our results indicate that HPC\napplications using matrix multiplications can strongly benefit from using of\nNVIDIA Tensor Cores.\n",
"title": "NVIDIA Tensor Core Programmability, Performance & Precision"
}
| null | null | null | null | true | null |
5854
| null |
Default
| null | null |
null |
{
"abstract": " Orion KL is one of the most frequently observed sources in the Galaxy, and\nthe site where many molecular species have been discovered for the first time.\nWith the availability of powerful wideband backends, it is nowadays possible to\ncomplete spectral surveys in the entire mm-range to obtain a spectroscopically\nunbiased chemical picture of the region. In this paper we present a sensitive\nspectral survey of Orion KL, made with one of the 34m antennas of the Madrid\nDeep Space Communications Complex in Robledo de Chavela, Spain. The spectral\nrange surveyed is from 41.5 to 50 GHz, with a frequency spacing of 180 kHz\n(equivalent to about 1.2 km/s, depending on the exact frequency). The rms\nachieved ranges from 8 to 12 mK. The spectrum is dominated by the J=1-0 SiO\nmaser lines and by radio recombination lines (RRLs), which were detected up to\nDelta_n=11. Above a 3-sigma level, we identified 66 RRLs and 161 molecular\nlines corresponding to 39 isotopologues from 20 molecules; a total of 18 lines\nremain unidentified, two of them above a 5-sigma level. Results of radiative\nmodelling of the detected molecular lines (excluding masers) are presented. At\nthis frequency range, this is the most sensitive survey and also the one with\nthe widest band. Although some complex molecules like CH_3CH_2CN and CH_2CHCN\narise from the hot core, most of the detected molecules originate from the low\ntemperature components in Orion KL.\n",
"title": "A spectroscopic survey of Orion KL between 41.5 and 50 GHz"
}
| null | null | null | null | true | null |
5855
| null |
Default
| null | null |
null |
{
"abstract": " We address the problem of latent truth discovery, LTD for short, where the\ngoal is to discover the underlying true values of entity attributes in the\npresence of noisy, conflicting or incomplete information. Despite a multitude\nof algorithms to address the LTD problem that can be found in literature, only\nlittle is known about their overall performance with respect to effectiveness\n(in terms of truth discovery capabilities), efficiency and robustness. A\npractical LTD approach should satisfy all these characteristics so that it can\nbe applied to heterogeneous datasets of varying quality and degrees of\ncleanliness.\nWe propose a novel algorithm for LTD that satisfies the above requirements.\nThe proposed model is based on Restricted Boltzmann Machines, thus coined\nLTD-RBM. In extensive experiments on various heterogeneous and publicly\navailable datasets, LTD-RBM is superior to state-of-the-art LTD techniques in\nterms of an overall consideration of effectiveness, efficiency and robustness.\n",
"title": "Restricted Boltzmann Machines for Robust and Fast Latent Truth Discovery"
}
| null | null | null | null | true | null |
5856
| null |
Default
| null | null |
null |
{
"abstract": " We investigate the evolution of vortex-surface fields (VSFs) in compressible\nTaylor--Green flows at Mach numbers ($Ma$) ranging from 0.5 to 2.0 using direct\nnumerical simulation. The formulation of VSFs in incompressible flows is\nextended to compressible flows, and a mass-based renormalization of VSFs is\nused to facilitate characterizing the evolution of a particular vortex surface.\nThe effects of the Mach number on the VSF evolution are different in three\nstages. In the early stage, the jumps of the compressive velocity component\nnear shocklets generate sinks to contract surrounding vortex surfaces, which\nshrink vortex volume and distort vortex surfaces. The subsequent reconnection\nof vortex surfaces, quantified by the minimal distance between approaching\nvortex surfaces and the exchange of vorticity fluxes, occurs earlier and has a\nhigher reconnection degree for larger $Ma$ owing to the dilatational\ndissipation and shocklet-induced reconnection of vortex lines. In the late\nstage, the positive dissipation rate and negative pressure work accelerate the\nloss of kinetic energy and suppress vortex twisting with increasing $Ma$.\n",
"title": "Effects of the Mach number on the evolution of vortex-surface fields in compressible Taylor--Green flows"
}
| null | null |
[
"Physics"
] | null | true | null |
5857
| null |
Validated
| null | null |
null |
{
"abstract": " A possible route to extract electronic and nuclear dynamics from molecular\ntargets with attosecond temporal and nanometer spatial resolution is to employ\nrecolliding electrons as `probes'. The recollision process in molecules is,\nhowever, very challenging to treat using {\\it ab initio} approaches. Even for\nthe simplest diatomic systems, such as H$_2$, today's computational\ncapabilities are not enough to give a complete description of the electron and\nnuclear dynamics initiated by a strong laser field. As a consequence,\napproximate qualitative descriptions are called to play an important role. In\nthis contribution we extend the work presented in N. Suárez {\\it et al.},\nPhys.~Rev. A {\\bf 95}, 033415 (2017), to three-center molecular targets.\nAdditionally, we incorporate a more accurate description of the molecular\nground state, employing information extracted from quantum chemistry software\npackages. This step forward allows us to include, in a detailed way, both the\nmolecular symmetries and nodes present in the high-occupied molecular orbital.\nWe are able to, on the one hand, keep our formulation as analytical as in the\ncase of diatomics, and, on the other hand, to still give a complete description\nof the underlying physics behind the above-threshold ionization process. The\napplication of our approach to complex multicenter - with more than 3 centers,\ntargets appears to be straightforward.\n",
"title": "Above-threshold ionization (ATI) in multicenter molecules: the role of the initial state"
}
| null | null | null | null | true | null |
5858
| null |
Default
| null | null |
null |
{
"abstract": " We address the problem of estimating human pose and body shape from 3D scans\nover time. Reliable estimation of 3D body shape is necessary for many\napplications including virtual try-on, health monitoring, and avatar creation\nfor virtual reality. Scanning bodies in minimal clothing, however, presents a\npractical barrier to these applications. We address this problem by estimating\nbody shape under clothing from a sequence of 3D scans. Previous methods that\nhave exploited body models produce smooth shapes lacking personalized details.\nWe contribute a new approach to recover a personalized shape of the person. The\nestimated shape deviates from a parametric model to fit the 3D scans. We\ndemonstrate the method using high quality 4D data as well as sequences of\nvisual hulls extracted from multi-view images. We also make available BUFF, a\nnew 4D dataset that enables quantitative evaluation\n(this http URL). Our method outperforms the state of the art in\nboth pose estimation and shape estimation, qualitatively and quantitatively.\n",
"title": "Detailed, accurate, human shape estimation from clothed 3D scan sequences"
}
| null | null | null | null | true | null |
5859
| null |
Default
| null | null |
null |
{
"abstract": " We shall introduce the notion of the Picard group for an inclusion of\n$C^*$-algebras. We shall also study its basic properties and the relation\nbetween the Picard group for an inclusion of $C^*$-algebras and the ordinary\nPicard group. Furthermore, we shall give some examples of the Picard groups for\nunital inclusions of unital $C^*$-algebras.\n",
"title": "The Picard groups for unital inclusions of unital $C^*$-algebras"
}
| null | null | null | null | true | null |
5860
| null |
Default
| null | null |
null |
{
"abstract": " The aim of this paper is to provide a discussion on current directions of\nresearch involving typical singularities of 3D nonsmooth vector fields. A brief\nsurvey of known results is presented. The main purpose of this work is to\ndescribe the dynamical features of a fold-fold singularity in its most basic\nform and to give a complete and detailed proof of its local structural\nstability (or instability). In addition, classes of all topological types of a\nfold-fold singularity are intrinsically characterized. Such proof essentially\nfollows firstly from some lines laid out by Colombo, García, Jeffrey,\nTeixeira and others and secondly offers a rigorous mathematical treatment under\nclear and crisp assumptions and solid arguments. One should to highlight that\nthe geometric-topological methods employed lead us to the completely\nmathematical understanding of the dynamics around a T-singularity. This\napproach lends itself to applications in generic bifurcation theory. It is\nworth to say that such subject is still poorly understood in higher dimension.\n",
"title": "Generic Singularities of 3D Piecewise Smooth Dynamical Systems"
}
| null | null | null | null | true | null |
5861
| null |
Default
| null | null |
null |
{
"abstract": " With the spreading prevalence of Big Data, many advances have recently been\nmade in this field. Frameworks such as Apache Hadoop and Apache Spark have\ngained a lot of traction over the past decades and have become massively\npopular, especially in industries. It is becoming increasingly evident that\neffective big data analysis is key to solving artificial intelligence problems.\nThus, a multi-algorithm library was implemented in the Spark framework, called\nMLlib. While this library supports multiple machine learning algorithms, there\nis still scope to use the Spark setup efficiently for highly time-intensive and\ncomputationally expensive procedures like deep learning. In this paper, we\npropose a novel framework that combines the distributive computational\nabilities of Apache Spark and the advanced machine learning architecture of a\ndeep multi-layer perceptron (MLP), using the popular concept of Cascade\nLearning. We conduct empirical analysis of our framework on two real world\ndatasets. The results are encouraging and corroborate our proposed framework,\nin turn proving that it is an improvement over traditional big data analysis\nmethods that use either Spark or Deep learning as individual elements.\n",
"title": "A Big Data Analysis Framework Using Apache Spark and Deep Learning"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
5862
| null |
Validated
| null | null |
null |
{
"abstract": " We consider the problem of reconstructing a signal from multi-layered\n(possibly) non-linear measurements. Using non-rigorous but standard methods\nfrom statistical physics we present the Multi-Layer Approximate Message Passing\n(ML-AMP) algorithm for computing marginal probabilities of the corresponding\nestimation problem and derive the associated state evolution equations to\nanalyze its performance. We also give the expression of the asymptotic free\nenergy and the minimal information-theoretically achievable reconstruction\nerror. Finally, we present some applications of this measurement model for\ncompressed sensing and perceptron learning with structured matrices/patterns,\nand for a simple model of estimation of latent variables in an auto-encoder.\n",
"title": "Multi-Layer Generalized Linear Estimation"
}
| null | null |
[
"Computer Science",
"Physics",
"Statistics"
] | null | true | null |
5863
| null |
Validated
| null | null |
null |
{
"abstract": " Let $L_g$ be the subcritical GJMS operator on an even-dimensional compact\nmanifold $(X, g)$ and consider the zeta-regularized trace\n$\\mathrm{Tr}_\\zeta(L_g^{-1})$ of its inverse. We show that if $\\ker L_g = 0$,\nthen the supremum of this quantity, taken over all metrics $g$ of fixed volume\nin the conformal class, is always greater than or equal to the corresponding\nquantity on the standard sphere. Moreover, we show that in the case that it is\nstrictly larger, the supremum is attained by a metric of constant mass. Using\npositive mass theorems, we give some geometric conditions for this to happen.\n",
"title": "The Trace and the Mass of subcritical GJMS Operators"
}
| null | null |
[
"Mathematics"
] | null | true | null |
5864
| null |
Validated
| null | null |
null |
{
"abstract": " Random scattering is usually viewed as a serious nuisance in optical imaging,\nand needs to be prevented in the conventional imaging scheme based on\nsingle-photon interference. Here we proposed a two-photon imaging scheme with\nthe widely used lens replaced by a dynamic random medium. In contrast to\ndestroying imaging process, the dynamic random medium in our scheme works as a\ncrucial imaging element to bring constructive interference, and allows us to\nimage an object from light field scattered by this dynamic random medium. On\nthe one hand, our imaging scheme with incoherent two-photon illumination\nenables us to achieve super-resolution imaging with the resolution reaching\nHeisenberg limit. On the other hand, with coherent two-photon illumination, the\nimage of a pure-phase object can be obtained in our imaging scheme. These\nresults show new possibilities to overcome bottleneck of widely used\nsingle-photon imaging by developing imaging method based on multi-photon\ninterference.\n",
"title": "Two-photon imaging assisted by a dynamic random medium"
}
| null | null |
[
"Physics"
] | null | true | null |
5865
| null |
Validated
| null | null |
null |
{
"abstract": " Deployment of deep neural networks (DNNs) in safety- or security-critical\nsystems requires provable guarantees on their correct behaviour. A common\nrequirement is robustness to adversarial perturbations in a neighbourhood\naround an input. In this paper we focus on the $L_0$ norm and aim to compute,\nfor a trained DNN and an input, the maximal radius of a safe norm ball around\nthe input within which there are no adversarial examples. Then we define global\nrobustness as an expectation of the maximal safe radius over a test data set.\nWe first show that the problem is NP-hard, and then propose an approximate\napproach to iteratively compute lower and upper bounds on the network's\nrobustness. The approach is \\emph{anytime}, i.e., it returns intermediate\nbounds and robustness estimates that are gradually, but strictly, improved as\nthe computation proceeds; \\emph{tensor-based}, i.e., the computation is\nconducted over a set of inputs simultaneously, instead of one by one, to enable\nefficient GPU computation; and has \\emph{provable guarantees}, i.e., both the\nbounds and the robustness estimates can converge to their optimal values.\nFinally, we demonstrate the utility of the proposed approach in practice to\ncompute tight bounds by applying and adapting the anytime algorithm to a set of\nchallenging problems, including global robustness evaluation, competitive $L_0$\nattacks, test case generation for DNNs, and local robustness evaluation on\nlarge-scale ImageNet DNNs. We release the code of all case studies via GitHub.\n",
"title": "Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the $L_0$ Norm"
}
| null | null | null | null | true | null |
5866
| null |
Default
| null | null |
null |
{
"abstract": " The coupled evolution of an eroding cylinder immersed in a fluid within the\nsubcritical Reynolds range is explored with scale resolving simulations.\nErosion of the cylinder is driven by fluid shear stress. Kármán vortex\nshedding features in the wake and these oscillations occur on a significantly\nsmaller time scale compared to the slowly eroding cylinder boundary. Temporal\nand spatial averaging across the cylinder span allows mean wall statistics such\nas wall shear to be evaluated; with geometry evolving in 2-D and the flow field\nsimulated in 3-D. The cylinder develops into a rounded triangular body with\nuniform wall shear stress which is in agreement with existing theory and\nexperiments. We introduce a node shuffle algorithm to reposition nodes around\nthe cylinder boundary with a uniform distribution such that the mesh quality is\npreserved under high boundary deformation. A cylinder is then modelled within\nan infinite array of other cylinders by simulating a repeating unit cell and\ntheir profile evolution is studied. A similar terminal form is discovered for\nlarge cylinder spacings with consistent flow conditions and an intermediate\nprofile was found with a closely packed lattice before reaching the common\nterminal form.\n",
"title": "Evolution of an eroding cylinder in single and lattice arrangements"
}
| null | null | null | null | true | null |
5867
| null |
Default
| null | null |
null |
{
"abstract": " A classical problem in causal inference is that of matching, where treatment\nunits need to be matched to control units. Some of the main challenges in\ndeveloping matching methods arise from the tension among (i) inclusion of as\nmany covariates as possible in defining the matched groups, (ii) having matched\ngroups with enough treated and control units for a valid estimate of Average\nTreatment Effect (ATE) in each group, and (iii) computing the matched pairs\nefficiently for large datasets. In this paper we propose a fast method for\napproximate and exact matching in causal analysis called FLAME (Fast\nLarge-scale Almost Matching Exactly). We define an optimization objective for\nmatch quality, which gives preferences to matching on covariates that can be\nuseful for predicting the outcome while encouraging as many matches as\npossible. FLAME aims to optimize our match quality measure, leveraging\ntechniques that are natural for query processing in the area of database\nmanagement. We provide two implementations of FLAME using SQL queries and\nbit-vector techniques.\n",
"title": "FLAME: A Fast Large-scale Almost Matching Exactly Approach to Causal Inference"
}
| null | null | null | null | true | null |
5868
| null |
Default
| null | null |
null |
{
"abstract": " We propose a stochastic extension of the primal-dual hybrid gradient\nalgorithm studied by Chambolle and Pock in 2011 to solve saddle point problems\nthat are separable in the dual variable. The analysis is carried out for\ngeneral convex-concave saddle point problems and problems that are either\npartially smooth / strongly convex or fully smooth / strongly convex. We\nperform the analysis for arbitrary samplings of dual variables, and obtain\nknown deterministic results as a special case. Several variants of our\nstochastic method significantly outperform the deterministic variant on a\nvariety of imaging tasks.\n",
"title": "Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications"
}
| null | null | null | null | true | null |
5869
| null |
Default
| null | null |
null |
{
"abstract": " We study combinatorial multi-armed bandit with probabilistically triggered\narms (CMAB-T) and semi-bandit feedback. We resolve a serious issue in the prior\nCMAB-T studies where the regret bounds contain a possibly exponentially large\nfactor of $1/p^*$, where $p^*$ is the minimum positive probability that an arm\nis triggered by any action. We address this issue by introducing a triggering\nprobability modulated (TPM) bounded smoothness condition into the general\nCMAB-T framework, and show that many applications such as influence\nmaximization bandit and combinatorial cascading bandit satisfy this TPM\ncondition. As a result, we completely remove the factor of $1/p^*$ from the\nregret bounds, achieving significantly better regret bounds for influence\nmaximization and cascading bandits than before. Finally, we provide lower bound\nresults showing that the factor $1/p^*$ is unavoidable for general CMAB-T\nproblems, suggesting that the TPM condition is crucial in removing this factor.\n",
"title": "Improving Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms and Its Applications"
}
| null | null | null | null | true | null |
5870
| null |
Default
| null | null |
null |
{
"abstract": " Deep neural network algorithms are difficult to analyze because they lack\nstructure allowing to understand the properties of underlying transforms and\ninvariants. Multiscale hierarchical convolutional networks are structured deep\nconvolutional networks where layers are indexed by progressively higher\ndimensional attributes, which are learned from training data. Each new layer is\ncomputed with multidimensional convolutions along spatial and attribute\nvariables. We introduce an efficient implementation of such networks where the\ndimensionality is progressively reduced by averaging intermediate layers along\nattribute indices. Hierarchical networks are tested on CIFAR image data bases\nwhere they obtain comparable precisions to state of the art networks, with much\nfewer parameters. We study some properties of the attributes learned from these\ndatabases.\n",
"title": "Multiscale Hierarchical Convolutional Networks"
}
| null | null | null | null | true | null |
5871
| null |
Default
| null | null |
null |
{
"abstract": " Information bottleneck (IB) is a method for extracting information from one\nrandom variable $X$ that is relevant for predicting another random variable\n$Y$. To do so, IB identifies an intermediate \"bottleneck\" variable $T$ that has\nlow mutual information $I(X;T)$ and high mutual information $I(Y;T)$. The \"IB\ncurve\" characterizes the set of bottleneck variables that achieve maximal\n$I(Y;T)$ for a given $I(X;T)$, and is typically explored by maximizing the \"IB\nLagrangian\", $I(Y;T) - \\beta I(X;T)$. In some cases, $Y$ is a deterministic\nfunction of $X$, including many classification problems in supervised learning\nwhere the output class $Y$ is a deterministic function of the input $X$. We\ndemonstrate three caveats when using IB in any situation where $Y$ is a\ndeterministic function of $X$: (1) the IB curve cannot be recovered by\nmaximizing the IB Lagrangian for different values of $\\beta$; (2) there are\n\"uninteresting\" trivial solutions at all points of the IB curve; and (3) for\nmulti-layer classifiers that achieve low prediction error, different layers\ncannot exhibit a strict trade-off between compression and prediction, contrary\nto a recent proposal. We also show that when $Y$ is a small perturbation away\nfrom being a deterministic function of $X$, these three caveats arise in an\napproximate way. To address problem (1), we propose a functional that, unlike\nthe IB Lagrangian, can recover the IB curve in all cases. We demonstrate the\nthree caveats on the MNIST dataset.\n",
"title": "Caveats for information bottleneck in deterministic scenarios"
}
| null | null | null | null | true | null |
5872
| null |
Default
| null | null |
null |
{
"abstract": " Simulations of charge transport in graphene are presented by implementing a\nrecent method published on the paper: V. Romano, A. Majorana, M. Coco, \"DSMC\nmethod consistent with the Pauli exclusion principle and comparison with\ndeterministic solutions for charge transport in graphene\", Journal of\nComputational Physics 302 (2015) 267-284. After an overview of the most\nimportant aspects of the semiclassical transport model for the dynamics of\nelectrons in monolayer graphene, it is made a comparison in computational time\nbetween MATLAB and Fortran implementations of the algorithms. Therefore it is\nstudied the case of graphene on substrates which it is produced original\nresults by introducing models for the distribution of distances between\ngraphene's atoms and impurities. Finally simulations, by choosing different\nkind of substrates, are done.\n-----\nLe simulazioni per il trasporto di cariche nel grafene sono presentate\nimplementando un recente metodo pubblicato nell'articolo: V. Romano, A.\nMajorana, M. Coco, \"DSMC method consistent with the Pauli exclusion principle\nand comparison with deterministic solutions for charge transport in graphene\",\nJournal of Computational Physics 302 (2015) 267-284. Dopo una panoramica sugli\naspetti più importanti del modello di trasporto semiclassico per la dinamica\ndegli elettroni nel grafene sospeso, è stato effettuato un confronto del\ntempo computazionale tra le implementazioni MATLAB e Fortran dell'algoritmo.\nInoltre è stato anche studiato il caso del grafene su substrato su cui sono\nstati prodotti dei risultati originali considerando dei modelli per la\ndistribuzione delle distanze tra gli atomi del grafene e le impurezze. Infine\nsono state effettuate delle simulazioni scegliendo substrati di diversa natura.\n",
"title": "Monte Carlo Simulation of Charge Transport in Graphene (Simulazione Monte Carlo per il trasporto di cariche nel grafene)"
}
| null | null | null | null | true | null |
5873
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we propose a new type of graph, denoted as \"embedded-graph\",\nand its theory, which employs a distributed representation to describe the\nrelations on the graph edges. Embedded-graphs can express linguistic and\ncomplicated relations, which cannot be expressed by the existing edge-graphs or\nweighted-graphs. We introduce the mathematical definition of embedded-graph,\ntranslation, edge distance, and graph similarity. We can transform an\nembedded-graph into a weighted-graph and a weighted-graph into an edge-graph by\nthe translation method and by threshold calculation, respectively. The edge\ndistance of an embedded-graph is a distance based on the components of a target\nvector, and it is calculated through cosine similarity with the target vector.\nThe graph similarity is obtained considering the relations with linguistic\ncomplexity. In addition, we provide some examples and data structures for\nembedded-graphs in this paper.\n",
"title": "Embedded-Graph Theory"
}
| null | null | null | null | true | null |
5874
| null |
Default
| null | null |
null |
{
"abstract": " The analysis of the entanglement entropy of a subsystem of a one-dimensional\nquantum system is a powerful tool for unravelling its critical nature. For\ninstance, the scaling behaviour of the entanglement entropy determines the\ncentral charge of the associated Virasoro algebra. For a free fermion system,\nthe entanglement entropy depends essentially on two sets, namely the set $A$ of\nsites of the subsystem considered and the set $K$ of excited momentum modes. In\nthis work we make use of a general duality principle establishing the\ninvariance of the entanglement entropy under exchange of the sets $A$ and $K$\nto tackle complex problems by studying their dual counterparts. The duality\nprinciple is also a key ingredient in the formulation of a novel conjecture for\nthe asymptotic behavior of the entanglement entropy of a free fermion system in\nthe general case in which both sets $A$ and $K$ consist of an arbitrary number\nof blocks. We have verified that this conjecture reproduces the numerical\nresults with excellent precision for all the configurations analyzed. We have\nalso applied the conjecture to deduce several asymptotic formulas for the\nmutual and $r$-partite information generalizing the known ones for the single\nblock case.\n",
"title": "A duality principle for the multi-block entanglement entropy of free fermion systems"
}
| null | null | null | null | true | null |
5875
| null |
Default
| null | null |
null |
{
"abstract": " The efficiency of a game is typically quantified by the price of anarchy\n(PoA), defined as the worst ratio of the objective function value of an\nequilibrium --- solution of the game --- and that of an optimal outcome. Given\nthe tremendous impact of tools from mathematical programming in the design of\nalgorithms and the similarity of the price of anarchy and different measures\nsuch as the approximation and competitive ratios, it is intriguing to develop a\nduality-based method to characterize the efficiency of games.\nIn the paper, we present an approach based on linear programming duality to\nstudy the efficiency of games. We show that the approach provides a general\nrecipe to analyze the efficiency of games and also to derive concepts leading\nto improvements. The approach is particularly appropriate to bound the PoA.\nSpecifically, in our approach the dual programs naturally lead to competitive\nPoA bounds that are (almost) optimal for several classes of games. The approach\nindeed captures the smoothness framework and also some current non-smooth\ntechniques/concepts. We show the applicability to the wide variety of games and\nenvironments, from congestion games to Bayesian welfare, from full-information\nsettings to incomplete-information ones.\n",
"title": "Game Efficiency through Linear Programming Duality"
}
| null | null | null | null | true | null |
5876
| null |
Default
| null | null |
null |
{
"abstract": " An equation-by-equation (EBE) method is proposed to solve a system of\nnonlinear equations arising from the moment constrained maximum entropy problem\nof multidimensional variables. The design of the EBE method combines ideas from\nhomotopy continuation and Newton's iterative methods. Theoretically, we\nestablish the local convergence under appropriate conditions and show that the\nproposed method, geometrically, finds the solution by searching along the\nsurface corresponding to one component of the nonlinear problem. We will\ndemonstrate the robustness of the method on various numerical examples,\nincluding: (1) A six-moment one-dimensional entropy problem with an explicit\nsolution that contains components of order $10^0-10^3$ in magnitude; (2)\nFour-moment multidimensional entropy problems with explicit solutions where the\nresulting systems to be solved ranging from $70-310$ equations; (3) Four- to\neight-moment of a two-dimensional entropy problem, which solutions correspond\nto the densities of the two leading EOFs of the wind stress-driven large-scale\noceanic model. In this case, we find that the EBE method is more accurate\ncompared to the classical Newton's method, the MATLAB generic solver, and the\npreviously developed BFGS-based method, which was also tested on this problem.\n(4) Four-moment constrained of up to five-dimensional entropy problems which\nsolutions correspond to multidimensional densities of the components of the\nsolutions of the Kuramoto-Sivashinsky equation. For the higher dimensional\ncases of this example, the EBE method is superior because it automatically\nselects a subset of the prescribed moment constraints from which the maximum\nentropy solution can be estimated within the desired tolerance. This selection\nfeature is particularly important since the moment constrained maximum entropy\nproblems do not necessarily have solutions in general.\n",
"title": "An Equation-By-Equation Method for Solving the Multidimensional Moment Constrained Maximum Entropy Problem"
}
| null | null | null | null | true | null |
5877
| null |
Default
| null | null |
null |
{
"abstract": " The reduction by restricting the spectral parameters $k$ and $k'$ on a\ngeneric algebraic curve of degree $\\mathcal{N}$ is performed for the discrete\nAKP, BKP and CKP equations, respectively. A variety of two-dimensional discrete\nintegrable systems possessing a more general solution structure arise from the\nreduction, and in each case a unified formula for generic positive integer\n$\\mathcal{N}\\geq 2$ is given to express the corresponding reduced integrable\nlattice equations. The obtained extended two-dimensional lattice models give\nrise to many important integrable partial difference equations as special\ndegenerations. Some new integrable lattice models such as the discrete\nSawada--Kotera, Kaup--Kupershmidt and Hirota--Satsuma equations in extended\nform are given as examples within the framework.\n",
"title": "On reductions of the discrete Kadomtsev--Petviashvili-type equations"
}
| null | null | null | null | true | null |
5878
| null |
Default
| null | null |
null |
{
"abstract": " Existing black-box attacks on deep neural networks (DNNs) so far have largely\nfocused on transferability, where an adversarial instance generated for a\nlocally trained model can \"transfer\" to attack other learning models. In this\npaper, we propose novel Gradient Estimation black-box attacks for adversaries\nwith query access to the target model's class probabilities, which do not rely\non transferability. We also propose strategies to decouple the number of\nqueries required to generate each adversarial sample from the dimensionality of\nthe input. An iterative variant of our attack achieves close to 100%\nadversarial success rates for both targeted and untargeted attacks on DNNs. We\ncarry out extensive experiments for a thorough comparative evaluation of\nblack-box attacks and show that the proposed Gradient Estimation attacks\noutperform all transferability based black-box attacks we tested on both MNIST\nand CIFAR-10 datasets, achieving adversarial success rates similar to well\nknown, state-of-the-art white-box attacks. We also apply the Gradient\nEstimation attacks successfully against a real-world Content Moderation\nclassifier hosted by Clarifai. Furthermore, we evaluate black-box attacks\nagainst state-of-the-art defenses. We show that the Gradient Estimation attacks\nare very effective even against these defenses.\n",
"title": "Exploring the Space of Black-box Attacks on Deep Neural Networks"
}
| null | null | null | null | true | null |
5879
| null |
Default
| null | null |
null |
{
"abstract": " Many iterative procedures in stochastic optimization exhibit a transient\nphase followed by a stationary phase. During the transient phase the procedure\nconverges towards a region of interest, and during the stationary phase the\nprocedure oscillates in that region, commonly around a single point. In this\npaper, we develop a statistical diagnostic test to detect such phase transition\nin the context of stochastic gradient descent with constant learning rate. We\npresent theory and experiments suggesting that the region where the proposed\ndiagnostic is activated coincides with the convergence region. For a class of\nloss functions, we derive a closed-form solution describing such region.\nFinally, we suggest an application to speed up convergence of stochastic\ngradient descent by halving the learning rate each time stationarity is\ndetected. This leads to a new variant of stochastic gradient descent, which in\nmany settings is comparable to state-of-art.\n",
"title": "Convergence diagnostics for stochastic gradient descent with constant step size"
}
| null | null | null | null | true | null |
5880
| null |
Default
| null | null |
null |
{
"abstract": " We provide a unified framework for proving Reidemeister-invariance and\nfunctoriality for a wide range of link homology theories. These include Lee\nhomology, Heegaard Floer homology of branched double covers, singular instanton\nhomology, and \\Szabo's geometric link homology theory. We follow Baldwin,\nHedden, and Lobb (arXiv:1509.04691) in leveraging the relationships between\nthese theories and Khovanov homology. We obtain stronger functoriality results\nby avoiding spectral sequences and instead showing that each theory factors\nthrough Bar-Natan's cobordism-theoretic link homology theory.\n",
"title": "Strong Khovanov-Floer Theories and Functoriality"
}
| null | null | null | null | true | null |
5881
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we consider the problem of formally verifying the safety of an\nautonomous robot equipped with a Neural Network (NN) controller that processes\nLiDAR images to produce control actions. Given a workspace that is\ncharacterized by a set of polytopic obstacles, our objective is to compute the\nset of safe initial conditions such that a robot trajectory starting from these\ninitial conditions is guaranteed to avoid the obstacles. Our approach is to\nconstruct a finite state abstraction of the system and use standard\nreachability analysis over the finite state abstraction to compute the set of\nthe safe initial states. The first technical problem in computing the finite\nstate abstraction is to mathematically model the imaging function that maps the\nrobot position to the LiDAR image. To that end, we introduce the notion of\nimaging-adapted sets as partitions of the workspace in which the imaging\nfunction is guaranteed to be affine. We develop a polynomial-time algorithm to\npartition the workspace into imaging-adapted sets along with computing the\ncorresponding affine imaging functions. Given this workspace partitioning, a\ndiscrete-time linear dynamics of the robot, and a pre-trained NN controller\nwith Rectified Linear Unit (ReLU) nonlinearity, the second technical challenge\nis to analyze the behavior of the neural network. To that end, we utilize a\nSatisfiability Modulo Convex (SMC) encoding to enumerate all the possible\nsegments of different ReLUs. SMC solvers then use a Boolean satisfiability\nsolver and a convex programming solver and decompose the problem into smaller\nsubproblems. To accelerate this process, we develop a pre-processing algorithm\nthat could rapidly prune the space feasible ReLU segments. Finally, we\ndemonstrate the efficiency of the proposed algorithms using numerical\nsimulations with increasing complexity of the neural network controller.\n",
"title": "Formal Verification of Neural Network Controlled Autonomous Systems"
}
| null | null | null | null | true | null |
5882
| null |
Default
| null | null |
null |
{
"abstract": " With the method of moments and the mollification method, we study the central\n$L$-values of GL(2) Maass forms of weight $0$ and level $1$ and establish a\npositive-proportional nonvanishing result of such values in the aspect of large\nspectral parameter in short intervals, which is qualitatively optimal in view\nof Weyl's law. As an application of this result and a formula of Katok--Sarnak,\nwe give a nonvanishing result on the first Fourier coefficients of Maass forms\nof weight $\\frac{1}{2}$ and level $4$ in the Kohnen plus space.\n",
"title": "Nonvanishing of central $L$-values of Maass forms"
}
| null | null |
[
"Mathematics"
] | null | true | null |
5883
| null |
Validated
| null | null |
null |
{
"abstract": " We study the problem of minimizing a strongly convex and smooth function when\nwe have noisy estimates of its gradient. We propose a novel multistage\naccelerated algorithm that is universally optimal in the sense that it achieves\nthe optimal rate both in the deterministic and stochastic case and operates\nwithout knowledge of noise characteristics. The algorithm consists of stages\nthat use a stochastic version of Nesterov's accelerated algorithm with a\nspecific restart and parameters selected to achieve the fastest reduction in\nthe bias-variance terms in the convergence rate bounds.\n",
"title": "A Universally Optimal Multistage Accelerated Stochastic Gradient Method"
}
| null | null | null | null | true | null |
5884
| null |
Default
| null | null |
null |
{
"abstract": " The Venusian surface has been studied by measuring radar reflections and\nthermal radio emission over a wide spectral region of several centimeters to\nmeter wavelengths from the Earth-based as well as orbiter platforms. The\nradiometric observations, in the decimeter (dcm) wavelength regime showed a\ndecreasing trend in the observed brightness temperature (Tb) with increasing\nwavelength. The thermal emission models available at present have not been able\nto explain the radiometric observations at longer wavelength (dcm) to a\nsatisfactory level. This paper reports the first interferometric imaging\nobservations of Venus below 620 MHz. They were carried out at 606, 332.9 and\n239.9 MHz using the Giant Meterwave Radio Telescope (GMRT). The Tb values\nderived at the respective frequencies are 526 K, 409 K and < 426 K, with errors\nof ~7% which are generally consistent with the reported Tb values at 608 MHz\nand 430 MHz by previous investigators, but are much lower than those derived\nfrom high-frequency observations at 1.38-22.46 GHz using the VLA.\n",
"title": "Radio Observation of Venus at Meter Wavelengths using the GMRT"
}
| null | null |
[
"Physics"
] | null | true | null |
5885
| null |
Validated
| null | null |
null |
{
"abstract": " Air-showers measured by the Pierre Auger Observatory were analyzed in order\nto extract the depth of maximum (Xmax).The results allow the analysis of the\nXmax distributions as a function of energy ($> 10^{17.8}$ eV). The Xmax\ndistributions, their mean and standard deviation are analyzed with the help of\nshower simulations with the aim of interpreting the mass composition. The mean\nand standard deviation were used to derive <ln A> and its variance as a\nfunction of energy. The fraction of four components (p, He, N and Fe) were fit\nto the Xmax distributions. Regardless of the hadronic model used the data is\nbetter described by a mix of light, intermediate and heavy primaries. Also,\nindependent of the hadronic models, a decrease of the proton flux with energy\nis observed. No significant contribution of iron nuclei is derived in the\nentire energy range studied.\n",
"title": "Measurements of the depth of maximum of air-shower profiles at the Pierre Auger Observatory and their composition implications"
}
| null | null | null | null | true | null |
5886
| null |
Default
| null | null |
null |
{
"abstract": " Reward augmented maximum likelihood (RAML), a simple and effective learning\nframework to directly optimize towards the reward function in structured\nprediction tasks, has led to a number of impressive empirical successes. RAML\nincorporates task-specific reward by performing maximum-likelihood updates on\ncandidate outputs sampled according to an exponentiated payoff distribution,\nwhich gives higher probabilities to candidates that are close to the reference\noutput. While RAML is notable for its simplicity, efficiency, and its\nimpressive empirical successes, the theoretical properties of RAML, especially\nthe behavior of the exponentiated payoff distribution, has not been examined\nthoroughly. In this work, we introduce softmax Q-distribution estimation, a\nnovel theoretical interpretation of RAML, which reveals the relation between\nRAML and Bayesian decision theory. The softmax Q-distribution can be regarded\nas a smooth approximation of the Bayes decision boundary, and the Bayes\ndecision rule is achieved by decoding with this Q-distribution. We further show\nthat RAML is equivalent to approximately estimating the softmax Q-distribution,\nwith the temperature $\\tau$ controlling approximation error. We perform two\nexperiments, one on synthetic data of multi-class classification and one on\nreal data of image captioning, to demonstrate the relationship between RAML and\nthe proposed softmax Q-distribution estimation method, verifying our\ntheoretical analysis. Additional experiments on three structured prediction\ntasks with rewards defined on sequential (named entity recognition), tree-based\n(dependency parsing) and irregular (machine translation) structures show\nnotable improvements over maximum likelihood baselines.\n",
"title": "Softmax Q-Distribution Estimation for Structured Prediction: A Theoretical Interpretation for RAML"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
5887
| null |
Validated
| null | null |
null |
{
"abstract": " Unmanned Aerial Vehicles (UAVs) equipped with bioradars are a life-saving\ntechnology that can enable identification of survivors under collapsed\nbuildings in the aftermath of natural disasters such as earthquakes or gas\nexplosions. However, these UAVs have to be able to autonomously land on debris\npiles in order to accurately locate the survivors. This problem is extremely\nchallenging as the structure of these debris piles is often unknown and no\nprior knowledge can be leveraged. In this work, we propose a computationally\nefficient system that is able to reliably identify safe landing sites and\nautonomously perform the landing maneuver. Specifically, our algorithm computes\ncostmaps based on several hazard factors including terrain flatness, steepness,\ndepth accuracy and energy consumption information. We first estimate dense\ncandidate landing sites from the resulting costmap and then employ clustering\nto group neighboring sites into a safe landing region. Finally, a minimum-jerk\ntrajectory is computed for landing considering the surrounding obstacles and\nthe UAV dynamics. We demonstrate the efficacy of our system using experiments\nfrom a city scale hyperrealistic simulation environment and in real-world\nscenarios with collapsed buildings.\n",
"title": "Vision-based Autonomous Landing in Catastrophe-Struck Environments"
}
| null | null |
[
"Computer Science"
] | null | true | null |
5888
| null |
Validated
| null | null |
null |
{
"abstract": " Optimal sensor placement is a central challenge in the design, prediction,\nestimation, and control of high-dimensional systems. High-dimensional states\ncan often leverage a latent low-dimensional representation, and this inherent\ncompressibility enables sparse sensing. This article explores optimized sensor\nplacement for signal reconstruction based on a tailored library of features\nextracted from training data. Sparse point sensors are discovered using the\nsingular value decomposition and QR pivoting, which are two ubiquitous matrix\ncomputations that underpin modern linear dimensionality reduction. Sparse\nsensing in a tailored basis is contrasted with compressed sensing, a universal\nsignal recovery method in which an unknown signal is reconstructed via a sparse\nrepresentation in a universal basis. Although compressed sensing can recover a\nwider class of signals, we demonstrate the benefits of exploiting known\npatterns in data with optimized sensing. In particular, drastic reductions in\nthe required number of sensors and improved reconstruction are observed in\nexamples ranging from facial images to fluid vorticity fields. Principled\nsensor placement may be critically enabling when sensors are costly and\nprovides faster state estimation for low-latency, high-bandwidth control.\nMATLAB code is provided for all examples.\n",
"title": "Data-Driven Sparse Sensor Placement for Reconstruction"
}
| null | null | null | null | true | null |
5889
| null |
Default
| null | null |
null |
{
"abstract": " We consider a one-dimensional two component extended Fermi-Hubbard model with\nnearest neighbor interactions and mass imbalance between the two species. We\nstudy the stability of trimers, various observables for detecting them, and\nexpansion dynamics. We generalize the definition of the trimer gap to include\nthe formation of different types of clusters originating from nearest neighbor\ninteractions. Expansion dynamics reveal rapidly propagating trimers, with\nspeeds exceeding doublon propagation in strongly interacting regime. We present\na simple model for understanding this unique feature of the movement of the\ntrimers, and we discuss the potential for experimental realization.\n",
"title": "Fast trimers in one-dimensional extended Fermi-Hubbard model"
}
| null | null | null | null | true | null |
5890
| null |
Default
| null | null |
null |
{
"abstract": " The monitoring of large dynamic networks is a major chal- lenge for a wide\nrange of application. The complexity stems from properties of the underlying\ngraphs, in which slight local changes can lead to sizable variations of global\nprop- erties, e.g., under certain conditions, a single link cut that may be\noverlooked during monitoring can result in splitting the graph into two\ndisconnected components. Moreover, it is often difficult to determine whether a\nchange will propagate globally or remain local. Traditional graph theory\nmeasure such as the centrality or the assortativity of the graph are not\nsatisfying to characterize global properties of the graph. In this paper, we\ntackle the problem of real-time monitoring of dynamic large scale graphs by\ndeveloping a geometric approach that leverages notions of geometric curvature\nand recent development in graph embeddings using Ollivier-Ricci curvature [47].\nWe illustrate the use of our method by consid- ering the practical case of\nmonitoring dynamic variations of global Internet using topology changes\ninformation provided by combining several BGP feeds. In particular, we use our\nmethod to detect major events and changes via the geometry of the embedding of\nthe graph.\n",
"title": "A Geometric Approach for Real-time Monitoring of Dynamic Large Scale Graphs: AS-level graphs illustrated"
}
| null | null | null | null | true | null |
5891
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we study general $(\\alpha,\\beta)$-metrics which $\\alpha$ is a\nRiemannian metric and $\\beta$ is an one-form. We have proven that every weak\nLandsberg general $(\\alpha,\\beta)$-metric is a Berwald metric, where $\\beta$ is\na closed and conformal one-form. This show that there exist no generalized\nunicorn metric in this class of general $(\\alpha,\\beta)$-metric. Further, We\nshow that $F$ is a Landsberg general $(\\alpha,\\beta)$-metric if and only if it\nis weak Landsberg general $(\\alpha,\\beta)$-metric, where $\\beta$ is a closed\nand conformal one-form.\n",
"title": "On general $(α, β)$-metrics of weak Landsberg type"
}
| null | null | null | null | true | null |
5892
| null |
Default
| null | null |
null |
{
"abstract": " We introduce a measure of fairness for algorithms and data with regard to\nmultiple protected attributes. Our proposed definition, differential fairness,\nis informed by the framework of intersectionality, which analyzes how\ninterlocking systems of power and oppression affect individuals along\noverlapping dimensions including race, gender, sexual orientation, class, and\ndisability. We show that our criterion behaves sensibly for any subset of the\nset of protected attributes, and we illustrate links to differential privacy. A\ncase study on census data demonstrates the utility of our approach.\n",
"title": "An Intersectional Definition of Fairness"
}
| null | null |
[
"Statistics"
] | null | true | null |
5893
| null |
Validated
| null | null |
null |
{
"abstract": " Technological parasitism is a new theory to explain the evolution of\ntechnology in society. In this context, this study proposes a model to analyze\nthe interaction between a host technology (system) and a parasitic technology\n(subsystem) to explain evolutionary pathways of technologies as complex\nsystems. The coefficient of evolutionary growth of the model here indicates the\ntypology of evolution of parasitic technology in relation to host technology:\ni.e., underdevelopment, growth and development. This approach is illustrated\nwith realistic examples using empirical data of product and process\ntechnologies. Overall, then, the theory of technological parasitism can be\nuseful for bringing a new perspective to explain and generalize the evolution\nof technology and predict which innovations are likely to evolve rapidly in\nsociety.\n",
"title": "Technological Parasitism"
}
| null | null | null | null | true | null |
5894
| null |
Default
| null | null |
null |
{
"abstract": " A popular setting in medical statistics is a group sequential trial with\nindependent and identically distributed normal outcomes, in which interim\nanalyses of the sum of the outcomes are performed. Based on a prescribed\nstopping rule, one decides after each interim analysis whether the trial is\nstopped or continued. Consequently, the actual length of the study is a random\nvariable. It is reported in the literature that the interim analyses may cause\nbias if one uses the ordinary sample mean to estimate the location parameter.\nFor a generic stopping rule, which contains many classical stopping rules as a\nspecial case, explicit formulas for the expected length of the trial, the bias,\nand the mean squared error (MSE) are provided. It is deduced that, for a fixed\nnumber of interim analyses, the bias and the MSE converge to zero if the first\ninterim analysis is performed not too early. In addition, optimal rates for\nthis convergence are provided. Furthermore, under a regularity condition,\nasymptotic normality in total variation distance for the sample mean is\nestablished. A conclusion for naive confidence intervals based on the sample\nmean is derived. It is also shown how the developed theory naturally fits in\nthe broader framework of likelihood theory in a group sequential trial setting.\nA simulation study underpins the theoretical findings.\n",
"title": "On the sample mean after a group sequential trial"
}
| null | null | null | null | true | null |
5895
| null |
Default
| null | null |
null |
{
"abstract": " A reliable wireless connection between the operator and the teleoperated\nUnmanned Ground Vehicle (UGV) is critical in many Urban Search and Rescue\n(USAR) missions. Unfortunately, as was seen in e.g. the Fukushima disaster, the\nnetworks available in areas where USAR missions take place are often severely\nlimited in range and coverage. Therefore, during mission execution, the\noperator needs to keep track of not only the physical parts of the mission,\nsuch as navigating through an area or searching for victims, but also the\nvariations in network connectivity across the environment. In this paper, we\npropose and evaluate a new teleoperation User Interface (UI) that includes a\nway of estimating the Direction of Arrival (DoA) of the Radio Signal Strength\n(RSS) and integrating the DoA information in the interface. The evaluation\nshows that using the interface results in more objects found, and less aborted\nmissions due to connectivity problems, as compared to a standard interface. The\nproposed interface is an extension to an existing interface centered around the\nvideo stream captured by the UGV. But instead of just showing the network\nsignal strength in terms of percent and a set of bars, the additional\ninformation of DoA is added in terms of a color bar surrounding the video feed.\nWith this information, the operator knows what movement directions are safe,\neven when moving in regions close to the connectivity threshold.\n",
"title": "A New UGV Teleoperation Interface for Improved Awareness of Network Connectivity and Physical Surroundings"
}
| null | null | null | null | true | null |
5896
| null |
Default
| null | null |
null |
{
"abstract": " Recent advancements in quantum annealing hardware and numerous studies in\nthis area suggests that quantum annealers have the potential to be effective in\nsolving unconstrained binary quadratic programming problems. Naturally, one may\ndesire to expand the application domain of these machines to problems with\ngeneral discrete variables. In this paper, we explore the possibility of\nemploying quantum annealers to solve unconstrained quadratic programming\nproblems over a bounded integer domain. We present an approach for encoding\ninteger variables into binary ones, thereby representing unconstrained integer\nquadratic programming problems as unconstrained binary quadratic programming\nproblems. To respect some of the limitations of the currently developed quantum\nannealers, we propose an integer encoding, named bounded- coefficient encoding,\nin which we limit the size of the coefficients that appear in the encoding.\nFurthermore, we propose an algorithm for finding the upper bound on the\ncoefficients of the encoding using the precision of the machine and the\ncoefficients of the original integer problem. Finally, we experimentally show\nthat this approach is far more resilient to the noise of the quantum annealers\ncompared to traditional approaches for the encoding of integers in base two.\n",
"title": "Practical Integer-to-Binary Mapping for Quantum Annealers"
}
| null | null | null | null | true | null |
5897
| null |
Default
| null | null |
null |
{
"abstract": " Life evolved on our planet by means of a combination of Darwinian selection\nand innovations leading to higher levels of complexity. The emergence and\nselection of replicating entities is a central problem in prebiotic evolution.\nTheoretical models have shown how populations of different types of replicating\nentities exclude or coexist with other classes of replicators. Models are\ntypically kinetic, based on standard replicator equations. On the other hand,\nthe presence of thermodynamical constrains for these systems remain an open\nquestion. This is largely due to the lack of a general theory of out of\nstatistical methods for systems far from equilibrium. Nonetheless, a first\napproach to this problem has been put forward in a series of novel\ndevelopements in non-equilibrium physics, under the rubric of the extended\nsecond law of thermodynamics. The work presented here is twofold: firstly, we\nreview this theoretical framework and provide a brief description of the three\nfundamental replicator types in prebiotic evolution: parabolic, malthusian and\nhyperbolic. Finally, we employ these previously mentioned techinques to explore\nhow replicators are constrained by thermodynamics.\n",
"title": "Nonequilibrium entropic bounds for Darwinian replicators"
}
| null | null | null | null | true | null |
5898
| null |
Default
| null | null |
null |
{
"abstract": " Approximate full configuration interaction (FCI) calculations have recently\nbecome tractable for systems of unforeseen size thanks to stochastic and\nadaptive approximations to the exponentially scaling FCI problem. The result of\nan FCI calculation is a weighted set of electronic configurations, which can\nalso be expressed in terms of excitations from a reference configuration. The\nexcitation amplitudes contain information on the complexity of the electronic\nwave function, but this information is contaminated by contributions from\ndisconnected excitations, i.e. those excitations that are just products of\nindependent lower-level excitations. The unwanted contributions can be removed\nvia a cluster decomposition procedure, making it possible to examine the\nimportance of connected excitations in complicated multireference molecules\nwhich are outside the reach of conventional algorithms. We present an\nimplementation of the cluster decomposition analysis and apply it to both true\nFCI wave functions, as well as wave functions generated from the adaptive\nsampling CI (ASCI) algorithm. The cluster decomposition is useful for\ninterpreting calculations in chemical studies, as a diagnostic for the\nconvergence of various excitation manifolds, as well as as a guidepost for\npolynomially scaling electronic structure models. Applications are presented\nfor (i) the double dissociation of water, (ii) the carbon dimer, (iii) the\n{\\pi} space of polyacenes, as well as (iv) the chromium dimer. While the\ncluster amplitudes exhibit rapid decay with increasing rank for the first three\nsystems, even connected octuple excitations still appear important in Cr$_2$,\nsuggesting that spin-restricted single-reference coupled-cluster approaches may\nnot be tractable for some problems in transition metal chemistry.\n",
"title": "Cluster decomposition of full configuration interaction wave functions: a tool for chemical interpretation of systems with strong correlation"
}
| null | null | null | null | true | null |
5899
| null |
Default
| null | null |
null |
{
"abstract": " We search for $\\gamma$-ray and optical periodic modulations in a distant flat\nspectrum radio quasar (FSRQ) PKS 0426-380 (the redshift $z=1.1$). Using two\ntechniques (i.e., the maximum likelihood optimization and the exposure-weighted\naperture photometry), we obtain $\\gamma$-ray light curves from \\emph{Fermi}-LAT\nPass 8 data covering from 2008 August to 2016 December. We then analyze the\nlight curves with the Lomb-Scargle Periodogram (LSP) and the Weighted Wavelet\nZ-transform (WWZ). A $\\gamma$-ray quasi-periodicity with a period of 3.35 $\\pm$\n0.68 years is found at the significance-level of $\\simeq3.6\\ \\sigma$. The\noptical-UV flux covering from 2005 August to 2013 April provided by ASI SCIENCE\nDATA CENTER is also analyzed, but no significant quasi-periodicity is found. It\nshould be pointed out that the result of the optical-UV data could be tentative\nbecause of the incomplete of the data. Further long-term multiwavelength\nmonitoring of this FSRQ is needed to confirm its quasi-periodicity.\n",
"title": "Possible Quasi-Periodic modulation in the z = 1.1 $γ$-ray blazar PKS 0426-380"
}
| null | null | null | null | true | null |
5900
| null |
Default
| null | null |