| text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, lengths 1–5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
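Each row below follows the same record layout: an `inputs` object holding the paper `abstract` and `title`, a short string `id`, a `multi_label` flag, a `status` of either `Default` or `Validated`, and, for validated rows, an `annotation` list with the assigned category. The following is a minimal sketch of how such records might be represented and filtered in Python; the JSON-lines file name and the exact export layout are assumptions for illustration, not part of the dataset shown here.

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    """One row of the table above; the all-null columns are omitted."""
    inputs: dict                       # {"abstract": ..., "title": ...}
    id: str                            # e.g. "13501"
    multi_label: bool                  # True for every row shown here
    status: str                        # "Default" or "Validated"
    annotation: Optional[list] = None  # e.g. ["Mathematics"] on validated rows

def load_records(path: str) -> list[Record]:
    """Read records from a hypothetical JSON-lines export of this table."""
    records = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            records.append(Record(
                inputs=row["inputs"],
                id=str(row["id"]),
                multi_label=bool(row.get("multi_label", False)),
                status=row.get("status", "Default"),
                annotation=row.get("annotation"),
            ))
    return records

if __name__ == "__main__":
    # Keep only human-validated rows and print title plus assigned category.
    validated = [r for r in load_records("records.jsonl") if r.status == "Validated"]
    for r in validated:
        print(r.inputs["title"], r.annotation)
```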
null |
{
"abstract": " Using the six parameters truncated Mittag-Leffler function, we introduce a\nconvenient truncated function to define the so-called truncated\n$\\mathcal{V}$-fractional derivative type. After a discussion involving some\nproperties associated with this derivative, we propose the derivative of a\nvector valued function and define the $\\mathcal{V}$-fractional Jacobian matrix\nwhose properties allow us to say that: the multivariable truncated\n$\\mathcal{V}$-fractional derivative type, as proposed here, generalizes the\ntruncated $\\mathcal{V}$-fractional derivative type and can bee extended to\nobtain a truncated $\\mathcal{V}$-fractional partial derivative type. As\napplications we discuss and prove the change of order associated with two index\ni.e., the commutativity of two truncated $\\mathcal{V}$-fractional partial\nderivative type and propose the truncated $\\mathcal{V}$-fractional Green's\ntheorem.\n",
"title": "A truncated $\\mathcal{V}$-fractional derivative in $\\mathbb{R}^n$"
}
| null | null | null | null | true | null |
13501
| null |
Default
| null | null |
null |
{
"abstract": " Trending topic of newspapers is an indicator to understand the situation of a\ncountry and also a way to evaluate the particular newspaper. This paper\nrepresents a model describing few techniques to select trending topics from\nBangla Newspaper. Topics that are discussed more frequently than other in\nBangla newspaper will be marked and how a very famous topic loses its\nimportance with the change of time and another topic takes its place will be\ndemonstrated. Data from two popular Bangla Newspaper with date and time were\ncollected. Statistical analysis was performed after on these data after\npreprocessing. Popular and most used keywords were extracted from the stream of\nBangla keyword with this analysis. This model can also cluster category wise\nnews trend or a list of news trend in daily or weekly basis with enough data. A\npattern can be found on their news trend too. Comparison among past news trend\nof Bangla newspapers will give a visualization of the situation of Bangladesh.\nThis visualization will be helpful to predict future trending topics of Bangla\nNewspaper.\n",
"title": "Statistical Analysis on Bangla Newspaper Data to Extract Trending Topic and Visualize Its Change Over Time"
}
| null | null | null | null | true | null |
13502
| null |
Default
| null | null |
null |
{
"abstract": " Magnetically-driven disk winds would alter the surface density slope of gas\nin the inner region of a protoplanetary disk $(r \\lesssim 1 {\\rm au})$. This in\nturn affects planet formation. Recently, the effect of disk wind torque has\nbeen considered with the suggestion that it would carve out the surface density\nof the disk from inside and would induce global gas flows (wind-driven\naccretion). We aim to investigate effects of global gas flows on type I\nmigration and also examine planet formation. A simplified approach was taken to\naddress this issue, and N-body simulations with isolation-mass planets were\nalso performed. In previous studies, the effect of gas flow induced by\nturbulence-driven accretion has been taken into account for its desaturation\neffect of the corotation torque. If more rapid gas flows (e.g., wind-driven\naccretion) are considered, the desaturation effect can be modified. In\nMRI-inactive disks, in which the wind-driven accretion dominates the disk\nevolution, the gas flow at the midplane plays an important role. If this flow\nis fast, the corotation torque is efficiently desaturated. Then, the fact that\nthe surface density slope can be positive in the inner region due to the wind\ntorque can generate an outward migration region extended to super-Earth mass\nplanets. In this case, we observe that no planets fall onto the central star in\nN-body simulations with migration forces imposed to reproduce such migration\npattern. We also see that super-Earth mass planets can undergo outward\nmigration. Relatively rapid gas flows affects type I migration and thus the\nformation of close-in planets.\n",
"title": "Effects of global gas flows on type I migration"
}
| null | null | null | null | true | null |
13503
| null |
Default
| null | null |
null |
{
"abstract": " We show a proof of principle for warping, a method to interpret the inner\nworking of neural networks in the context of gene expression analysis. Warping\nis an efficient way to gain insight to the inner workings of neural nets and\nmake them more interpretable. We demonstrate the ability of warping to recover\nmeaningful information for a given class on a samplespecific individual basis.\nWe found warping works well in both linearly and nonlinearly separable\ndatasets. These encouraging results show that warping has a potential to be the\nanswer to neural networks interpretability in computational biology.\n",
"title": "Warp: a method for neural network interpretability applied to gene expression profiles"
}
| null | null | null | null | true | null |
13504
| null |
Default
| null | null |
null |
{
"abstract": " We show that every tiling of a convex set in the Euclidean plane\n$\\mathbb{R}^2$ by equilateral triangles of mutually different sizes contains\narbitrarily small tiles. The proof is purely elementary up to the discussion of\none family of tilings of the full plane $\\mathbb{R}^2$, which is based on a\nsurprising connection to a random walk on a directed graph.\n",
"title": "Tilings of convex sets by mutually incongruent equilateral triangles contain arbitrarily small tiles"
}
| null | null | null | null | true | null |
13505
| null |
Default
| null | null |
null |
{
"abstract": " Recently Tewari and van Willigenburg constructed modules of the 0-Hecke\nalgebra that are mapped to the quasisymmetric Schur functions by the\nquasisymmetric characteristic and decomposed them into a direct sum of certain\nsubmodules. We show that these submodules are indecomposable by determining\ntheir endomorphism rings.\n",
"title": "The decomposition of 0-Hecke modules associated to quasisymmetric Schur functions"
}
| null | null |
[
"Mathematics"
] | null | true | null |
13506
| null |
Validated
| null | null |
null |
{
"abstract": " Calculation of phase diagrams is one of the fundamental tools in alloy\ndesign---more specifically under the framework of Integrated Computational\nMaterials Engineering. Uncertainty quantification of phase diagrams is the\nfirst step required to provide confidence for decision making in property- or\nperformance-based design. As a manner of illustration, a thorough probabilistic\nassessment of the CALPHAD model parameters is performed against the available\ndata for a Hf-Si binary case study using a Markov Chain Monte Carlo sampling\napproach. The plausible optimum values and uncertainties of the parameters are\nthus obtained, which can be propagated to the resulting phase diagram. Using\nthe parameter values obtained from deterministic optimization in a\ncomputational thermodynamic assessment tool (in this case Thermo-Calc) as the\nprior information for the parameter values and ranges in the sampling process\nis often necessary to achieve a reasonable cost for uncertainty quantification.\nThis brings up the problem of finding an appropriate CALPHAD model with\nhigh-level of confidence which is a very hard and costly task that requires\nconsiderable expert skill. A Bayesian hypothesis testing based on Bayes'\nfactors is proposed to fulfill the need of model selection in this case, which\nis applied to compare four recommended models for the Hf-Si system. However, it\nis demonstrated that information fusion approaches, i.e., Bayesian model\naveraging and an error correlation-based model fusion, can be used to combine\nthe useful information existing in all the given models rather than just using\nthe best selected model, which may lack some information about the system being\nmodelled.\n",
"title": "Bayesian Uncertainty Quantification and Information Fusion in CALPHAD-based Thermodynamic Modeling"
}
| null | null | null | null | true | null |
13507
| null |
Default
| null | null |
null |
{
"abstract": " Next Generation Sequencing (NGS) technology has resulted in massive amounts\nof proteomics and genomics data. This data is of no use if it is not properly\nanalyzed. ETL (Extraction, Transformation, Loading) is an important step in\ndesigning data analytics applications. ETL requires proper understanding of\nfeatures of data. Data format plays a key role in understanding of data,\nrepresentation of data, space required to store data, data I/O during\nprocessing of data, intermediate results of processing, in-memory analysis of\ndata and overall time required to process data. Different data mining and\nmachine learning algorithms require input data in specific types and formats.\nThis paper explores the data formats used by different tools and algorithms and\nalso presents modern data formats that are used on Big Data Platform. It will\nhelp researchers and developers in choosing appropriate data format to be used\nfor a particular tool or algorithm.\n",
"title": "Modern Data Formats for Big Bioinformatics Data Analytics"
}
| null | null | null | null | true | null |
13508
| null |
Default
| null | null |
null |
{
"abstract": " The present contribution offers a simple methodology for the obtainment of\ndata-driven interval forecasting models by combining pairs of quantile\nregressions. Those regressions are created without the usage of the\nnon-differentiable pinball-loss function, but through a k-nearest-neighbors\nbased training set transformation and traditional regression approaches. By\nleaving the underlying training algorithms of the data mining techniques\nunchanged, the presented approach simplifies the creation of quantile\nregressions with more complex techniques (e.g. artificial neural networks). The\nquality of the presented methodology is tested on the usecase of photovoltaic\npower forecasting, for which quantile regressions using polynomial models as\nwell as artificial neural networks and support vector regressions are created.\nFrom the resulting evaluation values it can be concluded that acceptable\ninterval forecasting models are created.\n",
"title": "Nearest-Neighbor Based Non-Parametric Probabilistic Forecasting with Applications in Photovoltaic Systems"
}
| null | null | null | null | true | null |
13509
| null |
Default
| null | null |
null |
{
"abstract": " We provide here a thermodynamic analog of the Braess road-network paradox\nwith irreversible engines working between reservoirs that are placed at\nvertices of the network. Paradoxes of different kinds reappear, emphasizing the\nspecialty of the network.\n",
"title": "A thermodynamic parallel of the Braess road-network paradox"
}
| null | null |
[
"Physics"
] | null | true | null |
13510
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper we study the weighted Gevrey class regularity of Euler equation\nin the whole space R 3. We first establish the local existence of Euler\nequation in weighted Sobolev space, then obtain the weighted Gevrey regularity\nof Euler equation. We will use the weighted Sobolev-Gevrey space method to\nobtain the results of Gevrey regularity of Euler equation, and the use of the\nproperty of singular operator in the estimate of the pressure term is the\nimprovement of our work.\n",
"title": "Weighted gevrey class regularity of euler equation in the whole space"
}
| null | null | null | null | true | null |
13511
| null |
Default
| null | null |
null |
{
"abstract": " Soft Random Geometric Graphs (SRGGs) have been widely applied to various\nmodels including those of wireless sensor, communication, social and neural\nnetworks. SRGGs are constructed by randomly placing nodes in some space and\nmaking pairwise links probabilistically using a connection function that is\nsystem specific and usually decays with distance. In this paper we focus on the\napplication of SRGGs to wireless communication networks where information is\nrelayed in a multi hop fashion, although the analysis is more general and can\nbe applied elsewhere by using different distributions of nodes and/or\nconnection functions. We adopt a general non-uniform density which can model\nthe stationary distribution of different mobility models, with the interesting\ncase being when the density goes to zero along the boundaries. The global\nconnectivity properties of these non-uniform networks are likely to be\ndetermined by highly isolated nodes, where isolation can be caused by the\nspatial distribution or the local geometry (boundaries). We extend the analysis\nto temporal-spatial networks where we fix the underlying non-uniform\ndistribution of points and the dynamics are caused by the temporal variations\nin the link set, and explore the probability a node near the corner is isolated\nat time $T$. This work allows for insight into how non-uniformity (caused by\nmobility) and boundaries impact the connectivity features of temporal-spatial\nnetworks. We provide a simple method for approximating these probabilities for\na range of different connection functions and verify them against simulations.\nBoundary nodes are numerically shown to dominate the connectivity properties of\nthese finite networks with non-uniform measure.\n",
"title": "Temporal connectivity in finite networks with non-uniform measures"
}
| null | null | null | null | true | null |
13512
| null |
Default
| null | null |
null |
{
"abstract": " The media industry is increasingly personalizing the offering of contents in\nattempt to better target the audience. This requires to analyze the\nrelationships that goes established between users and content they enjoy,\nlooking at one side to the content characteristics and on the other to the user\nprofile, in order to find the best match between the two. In this paper we\nsuggest to build that relationship using the Dempster-Shafer's Theory of\nEvidence, proposing a reference model and illustrating its properties by means\nof a toy example. Finally we suggest possible applications of the model for\ntasks that are common in the modern media industry.\n",
"title": "Matching Media Contents with User Profiles by means of the Dempster-Shafer Theory"
}
| null | null | null | null | true | null |
13513
| null |
Default
| null | null |
null |
{
"abstract": " Piecewise testable languages form the first level of the Straubing-Thérien\nhierarchy. The membership problem for this level is decidable and testing if\nthe language of a DFA is piecewise testable is NL-complete. The question has\nnot yet been addressed for NFAs. We fill in this gap by showing that it is\nPSpace-complete. The main result is then the lower-bound complexity of\nseparability of regular languages by piecewise testable languages. Two regular\nlanguages are separable by a piecewise testable language if the piecewise\ntestable language includes one of them and is disjoint from the other. For\nlanguages represented by NFAs, separability by piecewise testable languages is\nknown to be decidable in PTime. We show that it is PTime-hard and that it\nremains PTime-hard even for minimal DFAs.\n",
"title": "Separability by Piecewise Testable Languages is PTime-Complete"
}
| null | null |
[
"Computer Science"
] | null | true | null |
13514
| null |
Validated
| null | null |
null |
{
"abstract": " We present SuperPivot, an analysis method for low-resource languages that\noccur in a superparallel corpus, i.e., in a corpus that contains an order of\nmagnitude more languages than parallel corpora currently in use. We show that\nSuperPivot performs well for the crosslingual analysis of the linguistic\nphenomenon of tense. We produce analysis results for more than 1000 languages,\nconducting - to the best of our knowledge - the largest crosslingual\ncomputational study performed to date. We extend existing methodology for\nleveraging parallel corpora for typological analysis by overcoming a limiting\nassumption of earlier work: We only require that a linguistic feature is\novertly marked in a few of thousands of languages as opposed to requiring that\nit be marked in all languages under investigation.\n",
"title": "Past, Present, Future: A Computational Investigation of the Typology of Tense in 1000 Languages"
}
| null | null | null | null | true | null |
13515
| null |
Default
| null | null |
null |
{
"abstract": " We study a stochastic control approach to managed futures portfolios.\nBuilding on the Schwartz 97 stochastic convenience yield model for commodity\nprices, we formulate a utility maximization problem for dynamically trading a\nsingle-maturity futures or multiple futures contracts over a finite horizon. By\nanalyzing the associated Hamilton-Jacobi-Bellman (HJB) equation, we solve the\ninvestor's utility maximization problem explicitly and derive the optimal\ndynamic trading strategies in closed form. We provide numerical examples and\nillustrate the optimal trading strategies using WTI crude oil futures data.\n",
"title": "A Stochastic Control Approach to Managed Futures Portfolios"
}
| null | null | null | null | true | null |
13516
| null |
Default
| null | null |
null |
{
"abstract": " Face recognition has made great progress with the development of deep\nlearning. However, video face recognition (VFR) is still an ongoing task due to\nvarious illumination, low-resolution, pose variations and motion blur. Most\nexisting CNN-based VFR methods only obtain a feature vector from a single image\nand simply aggregate the features in a video, which less consider the\ncorrelations of face images in one video. In this paper, we propose a novel\nAttention-Set based Metric Learning (ASML) method to measure the statistical\ncharacteristics of image sets. It is a promising and generalized extension of\nMaximum Mean Discrepancy with memory attention weighting. First, we define an\neffective distance metric on image sets, which explicitly minimizes the\nintra-set distance and maximizes the inter-set distance simultaneously. Second,\ninspired by Neural Turing Machine, a Memory Attention Weighting is proposed to\nadapt set-aware global contents. Then ASML is naturally integrated into CNNs,\nresulting in an end-to-end learning scheme. Our method achieves\nstate-of-the-art performance for the task of video face recognition on the\nthree widely used benchmarks including YouTubeFace, YouTube Celebrities and\nCelebrity-1000.\n",
"title": "Attention-Set based Metric Learning for Video Face Recognition"
}
| null | null | null | null | true | null |
13517
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we propose: (a) a restart schedule for an adaptive simulated\nannealer, and (b) parallel simulated annealing, with an adaptive and\nparameter-free annealing schedule. The foundation of our approach is the\nModified Lam annealing schedule, which adaptively controls the temperature\nparameter to track a theoretically ideal rate of acceptance of neighboring\nstates. A sequential implementation of Modified Lam simulated annealing is\nalmost parameter-free. However, it requires prior knowledge of the annealing\nlength. We eliminate this parameter using restarts, with an exponentially\nincreasing schedule of annealing lengths. We then extend this restart schedule\nto parallel implementation, executing several Modified Lam simulated annealers\nin parallel, with varying initial annealing lengths, and our proposed parallel\nannealing length schedule. To validate our approach, we conduct experiments on\nan NP-Hard scheduling problem with sequence-dependent setup constraints. We\ncompare our approach to fixed length restarts, both sequentially and in\nparallel. Our results show that our approach can achieve substantial\nperformance gains, throughout the course of the run, demonstrating our approach\nto be an effective anytime algorithm.\n",
"title": "Variable Annealing Length and Parallelism in Simulated Annealing"
}
| null | null | null | null | true | null |
13518
| null |
Default
| null | null |
null |
{
"abstract": " The purpose of this work is to construct a simple, efficient and accurate\nwell-balanced numerical scheme for one-dimensional (1D) blood flow in large\narteries with varying geometrical and mechanical properties. As the steady\nstates at rest are not relevant for blood flow, we construct two well-balanced\nhydrostatic reconstruction techniques designed to preserve low-Shapiro number\nsteady states that may occur in large network simulations. The Shapiro number S\nh = u/c is the equivalent of the Froude number for shallow water equations and\nthe Mach number for compressible Euler equations. The first is the low-Shapiro\nhydrostatic reconstruction (HR-LS), which is a simple and efficient method,\ninspired from the hydrostatic reconstruction technique (HR). The second is the\nsubsonic hydrostatic reconstruction (HR-S), adapted here to blood flow and\ndesigned to exactly preserve all subcritical steady states. We systematically\ncompare HR, HR-LS and HR-S in a series of single artery and arterial network\nnumerical tests designed to evaluate their well-balanced and wave-capturing\nproperties. The results indicate that HR is not adapted to compute blood flow\nin large arteries as it is unable to capture wave reflections and transmissions\nwhen large variations of the arteries' geometrical and mechanical properties\nare considered. On the contrary, HR-S is exactly well-balanced and is the most\naccurate hydrostatic reconstruction technique. However, HR-LS is able to\ncompute low-Shapiro number steady states as well as wave reflections and\ntransmissions with satisfying accuracy and is simpler and computationally less\nexpensive than HR-S. We therefore recommend using HR-LS for 1D blood flow\nsimulations in large arterial network simulations.\n",
"title": "Low-Shapiro hydrostatic reconstruction technique for blood flow simulation in large arteries with varying geometrical and mechanical properties"
}
| null | null | null | null | true | null |
13519
| null |
Default
| null | null |
null |
{
"abstract": " Ambient-pressure-grown LaO$_{0.5}$F$_{0.5}$BiS$_2$ with a superconducting\ntransition temperature $T_{c}\\sim$3K possesses a highly anisotropic normal\nstate. By a series of electrical resistivity measurements with a magnetic field\ndirection varying between the crystalline $c$-axis and the $ab$-plane, we\npresent the first datasets displaying the temperature dependence of the\nout-of-plane upper critical field $H_{c2}^{\\perp}(T)$, the in-plane upper\ncritical field $H_{c2}^{\\parallel}(T)$, as well as the angular dependence of\n$H_{c2}$ at fixed temperatures for ambient-pressure-grown\nLaO$_{0.5}$F$_{0.5}$BiS$_2$ single crystals. The anisotropy of the\nsuperconductivity, $H_{c2}^{\\parallel}/H_{c2}^{\\perp}$, reaches $\\sim$16 on\napproaching 0 K, but it decreases significantly near $T_{c}$. A pronounced\nupward curvature of $H_{c2}^{\\parallel}(T)$ is observed near $T_{c}$, which we\nanalyze using a two-gap model. Moreover, $H_{c2}^{\\parallel}(0)$ is found to\nexceed the Pauli paramagnetic limit, which can be understood by considering the\nstrong spin-orbit coupling associated with Bi as well as the breaking of the\nlocal inversion symmetry at the electronically active BiS$_2$ bilayers. Hence,\nLaO$_{0.5}$F$_{0.5}$BiS$_2$ with a centrosymmetric lattice structure is a\nunique platform to explore the physics associated with local parity violation\nin the bulk crystal.\n",
"title": "Anisotropic two-gap superconductivity and the absence of a Pauli paramagnetic limit in single-crystalline LaO$_{0.5}$F$_{0.5}$BiS$_2$"
}
| null | null | null | null | true | null |
13520
| null |
Default
| null | null |
null |
{
"abstract": " Let $ \\mathcal{A}_1, \\ldots, \\mathcal{A}_k $ be finite sets in $ \\mathbb{Z}^n\n$ and let $ Y \\subset (\\mathbb{C}^*)^n $ be an algebraic variety defined by a\nsystem of equations \\[ f_1 = \\ldots = f_k = 0, \\] where $ f_1, \\ldots, f_k $\nare Laurent polynomials with supports in $\\mathcal{A}_1, \\ldots,\n\\mathcal{A}_k$. Assuming that $ f_1, \\ldots, f_k $ are sufficiently generic,\nthe Newton polyhedron theory computes discrete invariants of $ Y $ in terms of\nthe Newton polyhedra of $ f_1, \\ldots, f_k $. It may appear that the generic\nsystem with fixed supports $ \\mathcal{A}_1, \\ldots, \\mathcal{A}_k $ is\ninconsistent. In this paper, we compute discrete invariants of algebraic\nvarieties defined by system of equations which are generic in the set of\nconsistent system with support in $\\mathcal{A}_1, \\ldots, \\mathcal{A}_k$ by\nreducing the question to the Newton polyhedra theory. Unlike the classical\nsituation, not only the Newton polyhedra of $f_1,\\dots,f_k$, but also the\nsupports $\\mathcal{A}_1, \\ldots, \\mathcal{A}_k$ themselves appear in the\nanswers.\n",
"title": "Discrete Invariants of Generically Inconsistent Systems of Laurent Polynomials"
}
| null | null | null | null | true | null |
13521
| null |
Default
| null | null |
null |
{
"abstract": " We present a systematic evaluation of JPEG2000 (ISO/IEC 15444) as a transport\ndata format to enable rapid remote searches for fast transient events as part\nof the Deeper Wider Faster program (DWF). DWF uses ~20 telescopes from radio to\ngamma-rays to perform simultaneous and rapid-response follow-up searches for\nfast transient events on millisecond-to-hours timescales. DWF search demands\nhave a set of constraints that is becoming common amongst large collaborations.\nHere, we focus on the rapid optical data component of DWF led by the Dark\nEnergy Camera (DECam) at CTIO. Each DECam image has 70 total CCDs saved as a\n~1.2 gigabyte FITS file. Near real-time data processing and fast transient\ncandidate identifications -- in minutes for rapid follow-up triggers on other\ntelescopes -- requires computational power exceeding what is currently\navailable on-site at CTIO. In this context, data files need to be transmitted\nrapidly to a foreign location for supercomputing post-processing, source\nfinding, visualization and analysis. This step in the search process poses a\nmajor bottleneck, and reducing the data size helps accommodate faster data\ntransmission. To maximise our gain in transfer time and still achieve our\nscience goals, we opt for lossy data compression -- keeping in mind that raw\ndata is archived and can be evaluated at a later time. We evaluate how lossy\nJPEG2000 compression affects the process of finding transients, and find only a\nnegligible effect for compression ratios up to ~25:1. We also find a linear\nrelation between compression ratio and the mean estimated data transmission\nspeed-up factor. Adding highly customized compression and decompression steps\nto the science pipeline considerably reduces the transmission time --\nvalidating its introduction to the DWF science pipeline and enabling science\nthat was otherwise too difficult with current technology.\n",
"title": "Enabling near real-time remote search for fast transient events with lossy data compression"
}
| null | null | null | null | true | null |
13522
| null |
Default
| null | null |
null |
{
"abstract": " Deep Reinforcement Learning (DRL) has been applied successfully to many\nrobotic applications. However, the large number of trials needed for training\nis a key issue. Most of existing techniques developed to improve training\nefficiency (e.g. imitation) target on general tasks rather than being tailored\nfor robot applications, which have their specific context to benefit from. We\npropose a novel framework, Assisted Reinforcement Learning, where a classical\ncontroller (e.g. a PID controller) is used as an alternative, switchable policy\nto speed up training of DRL for local planning and navigation problems. The\ncore idea is that the simple control law allows the robot to rapidly learn\nsensible primitives, like driving in a straight line, instead of random\nexploration. As the actor network becomes more advanced, it can then take over\nto perform more complex actions, like obstacle avoidance. Eventually, the\nsimple controller can be discarded entirely. We show that not only does this\ntechnique train faster, it also is less sensitive to the structure of the DRL\nnetwork and consistently outperforms a standard Deep Deterministic Policy\nGradient network. We demonstrate the results in both simulation and real-world\nexperiments.\n",
"title": "Learning with Training Wheels: Speeding up Training with a Simple Controller for Deep Reinforcement Learning"
}
| null | null | null | null | true | null |
13523
| null |
Default
| null | null |
null |
{
"abstract": " Following the breakthrough of Croot, Lev, and Pach, Tao introduced a\nsymmetrized version of their argument, which is now known as the slice rank\nmethod. In this paper, we introduce a more general version of the slice rank of\na tensor, which we call the Partition Rank. This allows us to extend the slice\nrank method to problems that require the variables to be distinct. Using the\npartition rank, we generalize a recent result of Ge and Shangguan, and prove\nthat any set $A\\subset\\mathbb{F}_{q}^{n}$ of size\n\\[|A|>(k+1)\\cdot\\binom{n+(k-1)q}{(k-1)(q-1)}\\] contains a $k$-right-corner,\nthat is distinct vectors $x_{1},\\dots,x_{k},x_{k+1}$ where\n$x_{1}-x_{k+1},\\dots,x_{k}-x_{k+1}$ are mutually orthogonal, for $q=p^{r}$, a\nprime power with $p>k$.\n",
"title": "The Partition Rank of a Tensor and $k$-Right Corners in $\\mathbb{F}_{q}^{n}$"
}
| null | null | null | null | true | null |
13524
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we present a detailed computational study of the electronic\nstructure and optical properties of triply-bonded hydrocarbons with linear, and\ngraphyne substructures, with the aim of identifying their potential in\nopto-electronic device applications. For the purpose, we employed a correlated\nelectron methodology based upon the Pariser-Parr-Pople model Hamiltonian,\ncoupled with the configuration interaction (CI) approach, and studied\nstructures containing up to 42 carbon atoms. Our calculations, based upon\nlarge-scale CI expansions, reveal that the linear structures have intense\noptical absorption at the HOMO-LUMO gap, while the graphyne ones have those at\nhigher energies. Thus, the opto-electronic properties depend on the topology of\nthe {graphyne substructures, suggesting that they can be tuned by means of\nstructural modifications. Our results are in very good agreement with the\navailable experimental data.\n",
"title": "Tunable Optoelectronic Properties of Triply-Bonded Carbon Molecules with Linear and Graphyne Substructures"
}
| null | null | null | null | true | null |
13525
| null |
Default
| null | null |
null |
{
"abstract": " Given any Koszul algebra of finite global dimension one can define a new\nalgebra, which we call a higher zigzag algebra, as a twisted trivial extension\nof the Koszul dual of our original algebra. If our original algebra is the path\nalgebra of a quiver whose underlying graph is a tree, this construction\nrecovers the zigzag algebras of Huerfano and Khovanov. We study examples of\nhigher zigzag algebras coming from Iyama's iterative construction of type A\nhigher representation finite algebras. We give presentations of these algebras\nby quivers and relations, and describe relations between spherical twists\nacting on their derived categories. We then make a connection to the McKay\ncorrespondence in higher dimensions: if G is a finite abelian subgroup of the\nspecial linear group acting on affine space, then the skew group algebra which\ncontrols the category of G-equivariant sheaves is Koszul dual to a higher\nzigzag algebra. Using this, we show that our relations between spherical twists\nappear naturally in examples from algebraic geometry.\n",
"title": "Higher zigzag algebras"
}
| null | null |
[
"Mathematics"
] | null | true | null |
13526
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we produce a cellular motivic spectrum of motivic modular\nforms over $\\R$ and $\\C$, answering positively to a conjecture of Dan Isaksen.\nThis spectrum is constructed to have the appropriate cohomology, as a module\nover the relevant motivic Steenrod algebra. We first produce a $\\G$-equivariant\nversion of this spectrum, and then use a machinery to construct a motivic\nspectrum from an equivariant one. We believe that this machinery will be of\nindependent interest.\n",
"title": "Motivic modular forms from equivariant stable homotopy theory"
}
| null | null | null | null | true | null |
13527
| null |
Default
| null | null |
null |
{
"abstract": " In this work we introduce the idea that the primary application of topology\nin experimental sciences is to keep track of what can be distinguished through\nexperimentation. This link provides understanding and justification as to why\ntopological spaces and continuous functions are pervasive tools in science. We\nfirst define an experimental observation as a statement that can be verified\nusing an experimental procedure and show that observations are closed under\nfinite conjunction and countable disjunction. We then consider observations\nthat identify elements within a set and show how they induce a Hausdorff and\nsecond-countable topology on that set, thus identifying an open set as one that\ncan be associated with an experimental observation. We then show that\nexperimental relationships are continuous functions, as they must preserve\nexperimental distinguishability, and that they are themselves experimentally\ndistinguishable by defining a Hausdorff and second-countable topology for this\ncollection.\n",
"title": "Topology and experimental distinguishability"
}
| null | null | null | null | true | null |
13528
| null |
Default
| null | null |
null |
{
"abstract": " We study theoretically the edge fracture instability in sheared complex\nfluids, by means of linear stability analysis and direct nonlinear simulations.\nWe derive an exact analytical expression for the onset of edge fracture in\nterms of the shear-rate derivative of the fluid's second normal stress\ndifference, the shear-rate derivative of the shear stress, the jump in shear\nstress across the interface between the fluid and the outside medium (usually\nair), the surface tension of that interface, and the rheometer gap size. We\nprovide a full mechanistic understanding of the edge fracture instability,\ncarefully validated against our simulations. These findings, which are robust\nwith respect to choice of rheological constitutive model, also suggest a\npossible route to mitigating edge fracture, potentially allowing\nexperimentalists to achieve and accurately measure stronger flows than\nhitherto.\n",
"title": "Edge fracture in complex fluids"
}
| null | null | null | null | true | null |
13529
| null |
Default
| null | null |
null |
{
"abstract": " We apply our symmetry based Power tensor technique to test conformity of\nPLANCK Polarization maps with statistical isotropy. On a wide range of angular\nscales (l=40-150), our preliminary analysis detects many statistically\nanisotropic multipoles in foreground cleaned full sky PLANCK polarization maps\nviz., COMMANDER and NILC. We also study the effect of residual foregrounds that\nmay still be present in the galactic plane using both common UPB77 polarization\nmask, as well as the individual component separation method specific\npolarization masks. However some of the statistically anisotropic modes still\npersist, albeit significantly in NILC map. We further probed the data for any\ncoherent alignments across multipoles in several bins from the chosen multipole\nrange.\n",
"title": "Testing statistical Isotropy in Cosmic Microwave Background Polarization maps"
}
| null | null | null | null | true | null |
13530
| null |
Default
| null | null |
null |
{
"abstract": " This paper investigates the dependence of functional portfolio generation,\nintroduced by Fernholz (1999), on an extra finite variation process. The\nframework of Karatzas and Ruf (2017) is used to formulate conditions on trading\nstrategies to be strong arbitrage relative to the market over sufficiently\nlarge time horizons. A mollification argument and Komlos theorem yield a\ngeneral class of potential arbitrage strategies. These theoretical results are\ncomplemented by several empirical examples using data from the S&P 500 stocks.\n",
"title": "Generalised Lyapunov Functions and Functionally Generated Trading Strategies"
}
| null | null | null | null | true | null |
13531
| null |
Default
| null | null |
null |
{
"abstract": " We study local optima of the Hamiltonian of the Sherrington-Kirkpatrick\nmodel. We compute the exponent of the expected number of local optima and\ndetermine the \"typical\" value of the Hamiltonian.\n",
"title": "Local optima of the Sherrington-Kirkpatrick Hamiltonian"
}
| null | null | null | null | true | null |
13532
| null |
Default
| null | null |
null |
{
"abstract": " Regular expressions with capture variables, also known as \"regex formulas,\"\nextract relations of spans (interval positions) from text. These relations can\nbe further manipulated via Relational Algebra as studied in the context of\ndocument spanners, Fagin et al.'s formal framework for information extraction.\nWe investigate the complexity of querying text by Conjunctive Queries (CQs) and\nUnions of CQs (UCQs) on top of regex formulas. We show that the lower bounds\n(NP-completeness and W[1]-hardness) from the relational world also hold in our\nsetting; in particular, hardness hits already single-character text! Yet, the\nupper bounds from the relational world do not carry over. Unlike the relational\nworld, acyclic CQs, and even gamma-acyclic CQs, are hard to compute. The source\nof hardness is that it may be intractable to instantiate the relation defined\nby a regex formula, simply because it has an exponential number of tuples. Yet,\nwe are able to establish general upper bounds. In particular, UCQs can be\nevaluated with polynomial delay, provided that every CQ has a bounded number of\natoms (while unions and projection can be arbitrary). Furthermore, UCQ\nevaluation is solvable with FPT (Fixed-Parameter Tractable) delay when the\nparameter is the size of the UCQ.\n",
"title": "Joining Extractions of Regular Expressions"
}
| null | null | null | null | true | null |
13533
| null |
Default
| null | null |
null |
{
"abstract": " We present a new approach to learning for planning, where knowledge acquired\nwhile solving a given set of planning problems is used to plan faster in\nrelated, but new problem instances. We show that a deep neural network can be\nused to learn and represent a \\emph{generalized reactive policy} (GRP) that\nmaps a problem instance and a state to an action, and that the learned GRPs\nefficiently solve large classes of challenging problem instances. In contrast\nto prior efforts in this direction, our approach significantly reduces the\ndependence of learning on handcrafted domain knowledge or feature selection.\nInstead, the GRP is trained from scratch using a set of successful execution\ntraces. We show that our approach can also be used to automatically learn a\nheuristic function that can be used in directed search algorithms. We evaluate\nour approach using an extensive suite of experiments on two challenging\nplanning problem domains and show that our approach facilitates learning\ncomplex decision making policies and powerful heuristic functions with minimal\nhuman input. Videos of our results are available at goo.gl/Hpy4e3.\n",
"title": "Learning Generalized Reactive Policies using Deep Neural Networks"
}
| null | null | null | null | true | null |
13534
| null |
Default
| null | null |
null |
{
"abstract": " In the usual approaches to mechanics (classical or quantum) the primary\nobject of interest is the Hamiltonian, from which one tries to deduce the\nsolutions of the equations of motion (Hamilton or Schrödinger). In the\npresent work we reverse this paradigm and view the motions themselves as being\nthe primary objects. This is made possible by studying arbitrary phase space\nmotions, not of points, but of (small) ellipsoids with the requirement that the\nsymplectic capacity of these ellipsoids is preserved. This allows us to guide\nand control these motions as we like. In the classical case these ellipsoids\ncorrespond to a symplectic coarse graining of phase space, and in the quantum\ncase they correspond to the \"quantum blobs\" we defined in previous work, and\nwhich can be viewed as minimum uncertainty phase space cells which are in a\none-to-one correspondence with Gaussian pure states.\n",
"title": "Symplectic Coarse-Grained Dynamics: Chalkboard Motion in Classical and Quantum Mechanics"
}
| null | null |
[
"Mathematics"
] | null | true | null |
13535
| null |
Validated
| null | null |
null |
{
"abstract": " We show that characteristic functions of domains with boundaries transversal\nto stable cones are bounded multipliers on a recently introduced scale\n$U^{t,s}_p$ of anisotropic Banach spaces, under the conditions -1+1/p<s<-t<0\nand -(r-1)+t<s, with 1<p<infty. (Amended after comments from the referee and M.\nJézéquel, January 10, 2018)\n",
"title": "Characteristic functions as bounded multipliers on anisotropic spaces"
}
| null | null |
[
"Physics"
] | null | true | null |
13536
| null |
Validated
| null | null |
null |
{
"abstract": " Least angle regression (LARS) by Efron et al. (2004) is a novel method for\nconstructing the piece-wise linear path of Lasso solutions. For several years,\nit remained also as the de facto method for computing the Lasso solution before\nmore sophisticated optimization algorithms preceded it. LARS method has\nrecently again increased its popularity due to its ability to find the values\nof the penalty parameters, called knots, at which a new parameter enters the\nactive set of non-zero coefficients. Significance test for the Lasso by\nLockhart et al. (2014), for example, requires solving the knots via the LARS\nalgorithm. Elastic net (EN), on the other hand, is a highly popular extension\nof Lasso that uses a linear combination of Lasso and ridge regression\npenalties. In this paper, we propose a new novel algorithm, called pathwise\n(PW-)LARS-EN, that is able to compute the EN knots over a grid of EN tuning\nparameter {\\alpha} values. The developed PW-LARS-EN algorithm decreases the EN\ntuning parameter and exploits the previously found knot values and the original\nLARS algorithm. A covariance test statistic for the Lasso is then generalized\nto the EN for testing the significance of the predictors. Our simulation\nstudies validate the fact that the test statistic has an asymptotic Exp(1)\ndistribution.\n",
"title": "Pathwise Least Angle Regression and a Significance Test for the Elastic Net"
}
| null | null | null | null | true | null |
13537
| null |
Default
| null | null |
null |
{
"abstract": " We present measurements of the velocity power spectrum and constraints on the\ngrowth rate of structure $f\\sigma_{8}$, at redshift zero, using the peculiar\nmotions of 2,062 galaxies in the completed 2MASS Tully-Fisher survey (2MTF). To\naccomplish this we introduce a model for fitting the velocity power spectrum\nincluding the effects of non-linear Redshift Space Distortions (RSD), allowing\nus to recover unbiased fits down to scales $k=0.2\\,h\\,{\\rm Mpc}^{-1}$ without\nthe need to smooth or grid the data. Our fitting methods are validated using a\nset of simulated 2MTF surveys. Using these simulations we also identify that\nthe Gaussian distributed estimator for peculiar velocities of\n\\cite{Watkins2015} is suitable for measuring the velocity power spectrum, but\nsub-optimal for the 2MTF data compared to using magnitude fluctuations $\\delta\nm$, and that, whilst our fits are robust to a change in fiducial cosmology,\nfuture peculiar velocity surveys with more constraining power may have to\nmarginalise over this. We obtain \\textit{scale-dependent} constraints on the\ngrowth rate of structure in two bins, finding $f\\sigma_{8} =\n[0.55^{+0.16}_{-0.13},0.40^{+0.16}_{-0.17}]$ in the ranges $k = [0.007-0.055,\n0.55-0.150]\\,h\\,{\\rm Mpc}^{-1}$. We also find consistent results using four\nbins. Assuming scale-\\textit{independence} we find a value $f\\sigma_{8} =\n0.51^{+0.09}_{-0.08}$, a $\\sim16\\%$ measurement of the growth rate. Performing\na consistency check of General Relativity (GR) and combining our results with\nCMB data only we find $\\gamma = 0.45^{+0.10}_{-0.11}$, a remarkable constraint\nconsidering the small number of galaxies. All of our results are completely\nindependent of the effects of galaxy bias, and fully consistent with the\npredictions of GR (scale-independent $f\\sigma_{8}$ and $\\gamma\\approx0.55$).\n",
"title": "2MTF VI. Measuring the velocity power spectrum"
}
| null | null | null | null | true | null |
13538
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we prove the Lorentz space $L^{q,p}$-estimates for gradients\nof very weak solutions to the linear parabolic equations with\n$\\mathbf{A}_q$-weights $$u_t-\\operatorname{div}(A(x,t)\\nabla\nu)=\\operatorname{div}(F),$$ in a bounded domain $\\Omega\\times\n(0,T)\\subset\\mathbb{R}^{N+1}$, where $A$ has a small mean oscillation, and\n$\\Omega$ is a Lipchistz domain with a small Lipschitz constant.\n",
"title": "Gradient weighted norm inequalities for very weak solutions of linear parabolic equations with BMO coefficients"
}
| null | null | null | null | true | null |
13539
| null |
Default
| null | null |
null |
{
"abstract": " We prove sharp density upper bounds on optimal length-scales for the ground\nstates of classical 2D Coulomb systems and generalizations thereof. Our method\nis new, based on an auxiliary Thomas-Fermi-like variational model. Moreover, we\ndeduce density upper bounds for the related low-temperature Gibbs states. Our\nmotivation comes from fractional quantum Hall physics, more precisely, the\nperturbation of the Laughlin state by external potentials or impurities. These\ngive rise to a class of many-body wave-functions that have the form of a\nproduct of the Laughlin state and an analytic function of many variables. This\nclass is related via Laughlin's plasma analogy to Gibbs states of the\ngeneralized classical Coulomb systems we consider. Our main result shows that\nthe perturbation of the Laughlin state cannot increase the particle density\nanywhere, with implications for the response of FQHE systems to external\nperturbations.\n",
"title": "Local incompressibility estimates for the Laughlin phase"
}
| null | null | null | null | true | null |
13540
| null |
Default
| null | null |
null |
{
"abstract": " Image diffusion plays a fundamental role for the task of image denoising.\nRecently proposed trainable nonlinear reaction diffusion (TNRD) model defines a\nsimple but very effective framework for image denoising. However, as the TNRD\nmodel is a local model, the diffusion behavior of which is purely controlled by\ninformation of local patches, it is prone to create artifacts in the homogenous\nregions and over-smooth highly textured regions, especially in the case of\nstrong noise levels. Meanwhile, it is widely known that the non-local\nself-similarity (NSS) prior stands as an effective image prior for image\ndenoising, which has been widely exploited in many non-local methods. In this\nwork, we are highly motivated to embed the NSS prior into the TNRD model to\ntackle its weaknesses. In order to preserve the expected property that\nend-to-end training is available, we exploit the NSS prior by a set of\nnon-local filters, and derive our proposed trainable non-local reaction\ndiffusion (TNLRD) model for image denoising. Together with the local filters\nand influence functions, the non-local filters are learned by employing\nloss-specific training. The experimental results show that the trained TNLRD\nmodel produces visually plausible recovered images with more textures and less\nartifacts, compared to its local versions. Moreover, the trained TNLRD model\ncan achieve strongly competitive performance to recent state-of-the-art image\ndenoising methods in terms of peak signal-to-noise ratio (PSNR) and structural\nsimilarity index (SSIM).\n",
"title": "Learning Non-local Image Diffusion for Image Denoising"
}
| null | null | null | null | true | null |
13541
| null |
Default
| null | null |
null |
{
"abstract": " This study compares various superlearner and deep learning architectures\n(machine-learning-based and neural-network-based) for classification problems\nacross several simulated and industrial datasets to assess performance and\ncomputational efficiency, as both methods have nice theoretical convergence\nproperties. Superlearner formulations outperform other methods at small to\nmoderate sample sizes (500-2500) on nonlinear and mixed linear/nonlinear\npredictor relationship datasets, while deep neural networks perform well on\nlinear predictor relationship datasets of all sizes. This suggests faster\nconvergence of the superlearner compared to deep neural network architectures\non many messy classification problems for real-world data.\nSuperlearners also yield interpretable models, allowing users to examine\nimportant signals in the data; in addition, they offer flexible formulation,\nwhere users can retain good performance with low-computational-cost base\nalgorithms.\nK-nearest-neighbor (KNN) regression demonstrates improvements using the\nsuperlearner framework, as well; KNN superlearners consistently outperform deep\narchitectures and KNN regression, suggesting that superlearners may be better\nable to capture local and global geometric features through utilizing a variety\nof algorithms to probe the data space.\n",
"title": "Deep vs. Diverse Architectures for Classification Problems"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
13542
| null |
Validated
| null | null |
null |
{
"abstract": " It is shown that Newton's inequalities and the related Maclaurin's\ninequalities provide several refinements of the fundamental Arithmetic mean -\nGeometric mean - Harmonic mean inequality in terms of the means and variance of\npositive real numbers. We also obtain some inequalities involving third and\nfourth central moments of real numbers.\n",
"title": "Means Moments and Newton's Inequalities"
}
| null | null | null | null | true | null |
13543
| null |
Default
| null | null |
null |
{
"abstract": " Understanding the influence of hyperparameters on the performance of a\nmachine learning algorithm is an important scientific topic in itself and can\nhelp to improve automatic hyperparameter tuning procedures. Unfortunately,\nexperimental meta data for this purpose is still rare. This paper presents a\nlarge, free and open dataset addressing this problem, containing results on 38\nOpenML data sets, six different machine learning algorithms and many different\nhyperparameter configurations. Results where generated by an automated random\nsampling strategy, termed the OpenML Random Bot. Each algorithm was\ncross-validated up to 20.000 times per dataset with different hyperparameters\nsettings, resulting in a meta dataset of around 2.5 million experiments\noverall.\n",
"title": "Automatic Exploration of Machine Learning Experiments on OpenML"
}
| null | null | null | null | true | null |
13544
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we propose a generalized expectation consistent signal\nrecovery algorithm to estimate the signal $\\mathbf{x}$ from the nonlinear\nmeasurements of a linear transform output $\\mathbf{z}=\\mathbf{A}\\mathbf{x}$.\nThis estimation problem has been encountered in many applications, such as\ncommunications with front-end impairments, compressed sensing, and phase\nretrieval. The proposed algorithm extends the prior art called generalized\nturbo signal recovery from a partial discrete Fourier transform matrix\n$\\mathbf{A}$ to a class of general matrices. Numerical results show the\nexcellent agreement of the proposed algorithm with the theoretical\nBayesian-optimal estimator derived using the replica method.\n",
"title": "Generalized Expectation Consistent Signal Recovery for Nonlinear Measurements"
}
| null | null | null | null | true | null |
13545
| null |
Default
| null | null |
null |
{
"abstract": " This paper investigates the performance of a legitimate surveillance system,\nwhere a legitimate monitor aims to eavesdrop on a dubious decode-and-forward\nrelaying communication link. In order to maximize the effective eavesdropping\nrate, two strategies are proposed, where the legitimate monitor adaptively acts\nas an eavesdropper, a jammer or a helper. In addition, the corresponding\noptimal jamming beamformer and jamming power are presented. Numerical results\ndemonstrate that the proposed strategies attain better performance compared\nwith intuitive benchmark schemes. Moreover, it is revealed that the position of\nthe legitimate monitor plays an important role on the eavesdropping performance\nfor the two strategies.\n",
"title": "Proactive Eavesdropping in Relaying Systems"
}
| null | null | null | null | true | null |
13546
| null |
Default
| null | null |
null |
{
"abstract": " Big data problems frequently require processing datasets in a streaming\nfashion, either because all data are available at once but collectively are\nlarger than available memory or because the data intrinsically arrive one data\npoint at a time and must be processed online. Here, we introduce a\ncomputationally efficient version of similarity matching, a framework for\nonline dimensionality reduction that incrementally estimates the top\nK-dimensional principal subspace of streamed data while keeping in memory only\nthe last sample and the current iterate. To assess the performance of our\napproach, we construct and make public a test suite containing both a synthetic\ndata generator and the infrastructure to test online dimensionality reduction\nalgorithms on real datasets, as well as performant implementations of our\nalgorithm and competing algorithms with similar aims. Among the algorithms\nconsidered we find our approach to be competitive, performing among the best on\nboth synthetic and real data.\n",
"title": "Efficient Principal Subspace Projection of Streaming Data Through Fast Similarity Matching"
}
| null | null | null | null | true | null |
13547
| null |
Default
| null | null |
null |
{
"abstract": " Developer forums contain opinions and information related to the usage of\nAPIs. API names in forum posts are often not explicitly linked to their\nofficial resources. Automatic linking of an API mention to its official\nresources can be challenging for various reasons, such as, name overloading. We\npresent a technique, ANACE, to automatically resolve API mentions in the\ntextual contents of forum posts. Given a database of APIs, we first detect all\nwords in a forum post that are potential references to an API. We then use a\ncombination of heuristics and machine learning to eliminate false positives and\nto link true positives to the actual APIs and their resources.\n",
"title": "Resolving API Mentions in Informal Documents"
}
| null | null | null | null | true | null |
13548
| null |
Default
| null | null |
null |
{
"abstract": " Simulations of tidal streams show that close encounters with dark matter\nsubhalos induce density gaps and distortions in on-sky path along the streams.\nAccordingly, observing disrupted streams in the Galactic halo would\nsubstantiate the hypothesis that dark matter substructure exists there, while\nin contrast, observing collimated streams with smoothly varying density\nprofiles would place strong upper limits on the number density and mass\nspectrum of subhalos. Here, we examine several measures of stream \"disruption\"\nand their power to distinguish between halo potentials with and without\nsubstructure and with different global shapes. We create and evolve a\npopulation of 1280 streams on a range of orbits in the Via Lactea II simulation\nof a Milky Way-like halo, replete with a full mass range of {\\Lambda}CDM\nsubhalos, and compare it to two control stream populations evolved in smooth\nspherical and smooth triaxial potentials, respectively. We find that the number\nof gaps observed in a stellar stream is a poor indicator of the halo potential,\nbut that (i) the thinness of the stream on-sky, (ii) the symmetry of the\nleading and trailing tails, and (iii) the deviation of the tails from a\nlow-order polynomial path on-sky (\"path regularity\") distinguish between the\nthree potentials more effectively. We find that globular cluster streams on\nlow-eccentricity orbits far from the galactic center (apocentric radius ~ 30-80\nkpc) are most powerful in distinguishing between the three potentials. If they\nexist, such streams will shortly be discoverable and mapped in high dimensions\nwith near-future photometric and spectroscopic surveys.\n",
"title": "Quantifying tidal stream disruption in a simulated Milky Way"
}
| null | null |
[
"Physics"
] | null | true | null |
13549
| null |
Validated
| null | null |
null |
{
"abstract": " We present a three-species multi-fluid MHD model (H$^+$, H$_2$O$^+$ and\ne$^-$), endowed with the requisite atmospheric chemistry, that is capable of\naccurately quantifying the magnitude of water ion losses from exoplanets. We\napply this model to a water world with Earth-like parameters orbiting a\nSun-like star for three cases: (i) current normal solar wind conditions, (ii)\nancient normal solar wind conditions, and (iii) one extreme \"Carrington-type\"\nspace weather event. We demonstrate that the ion escape rate for (ii), with a\nvalue of 6.0$\\times$10$^{26}$ s$^{-1}$, is about an order of magnitude higher\nthan the corresponding value of 6.7$\\times$10$^{25}$ s$^{-1}$ for (i). Studies\nof ion losses induced by space weather events, where the ion escape rates can\nreach $\\sim$ 10$^{28}$ s$^{-1}$, are crucial for understanding how an active,\nearly solar-type star (e.g., with frequent coronal mass ejections) could have\naccelerated the depletion of the exoplanet's atmosphere. We briefly explore the\nramifications arising from the loss of water ions, especially for planets\norbiting M-dwarfs where such effects are likely to be significant.\n",
"title": "The dehydration of water worlds via atmospheric losses"
}
| null | null | null | null | true | null |
13550
| null |
Default
| null | null |
null |
{
"abstract": " With the ever-growing amounts of textual data from a large variety of\nlanguages, domains, and genres, it has become standard to evaluate NLP\nalgorithms on multiple datasets in order to ensure consistent performance\nacross heterogeneous setups. However, such multiple comparisons pose\nsignificant challenges to traditional statistical analysis methods in NLP and\ncan lead to erroneous conclusions. In this paper, we propose a Replicability\nAnalysis framework for a statistically sound analysis of multiple comparisons\nbetween algorithms for NLP tasks. We discuss the theoretical advantages of this\nframework over the current, statistically unjustified, practice in the NLP\nliterature, and demonstrate its empirical value across four applications:\nmulti-domain dependency parsing, multilingual POS tagging, cross-domain\nsentiment classification and word similarity prediction.\n",
"title": "Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets"
}
| null | null | null | null | true | null |
13551
| null |
Default
| null | null |
null |
{
"abstract": " Software is a key component of solutions for 21st Century problems. These\nproblems are often \"wicked\", complex, and unpredictable. To provide the best\npossible solution, millennial software engineers must be prepared to make\nethical decisions, think critically, and act systematically. This reality\ndemands continuous changes in educational systems and curricula delivery, as\nmisjudgment might have serious social impact. This study aims to investigate\nand reflect on Software Engineering (SE) Programs, proposing a conceptual\nframework for analyzing cyberethics education and a set of suggestions on how\nto integrate it into the SE undergraduate curriculum.\n",
"title": "Reflections on Cyberethics Education for Millennial Software Engineers"
}
| null | null |
[
"Computer Science"
] | null | true | null |
13552
| null |
Validated
| null | null |
null |
{
"abstract": " We performed geometric pulsar light curve modeling using static, retarded\nvacuum, and offset polar cap (PC) dipole $B$-fields (the latter is\ncharacterized by a parameter $\\epsilon$), in conjunction with standard two-pole\ncaustic (TPC) and outer gap (OG) emission geometries. The offset-PC dipole\n$B$-field mimics deviations from the static dipole (which corresponds to\n$\\epsilon=0$). In addition to constant-emissivity geometric models, we also\nconsidered a slot gap (SG) $E$-field associated with the offset-PC dipole\n$B$-field and found that its inclusion leads to qualitatively different light\ncurves. Solving the particle transport equation shows that the particle energy\nonly becomes large enough to yield significant curvature radiation at large\naltitudes above the stellar surface, given this relatively low $E$-field.\nTherefore, particles do not always attain the radiation-reaction limit. Our\noverall optimal light curve fit is for the retarded vacuum dipole field and OG\nmodel, at an inclination angle $\\alpha=78{_{-1}^{+1}}^{\\circ}$ and observer\nangle $\\zeta=69{_{-1}^{+2}}^{\\circ}$. For this $B$-field, the TPC model is\nstatistically disfavored compared to the OG model. For the static dipole field,\nneither model is significantly preferred. We found that smaller values of\n$\\epsilon$ are favored for the offset-PC dipole field when assuming constant\nemissivity, and larger $\\epsilon$ values favored for variable emissivity, but\nnot significantly so. When multiplying the SG $E$-field by a factor of 100, we\nfound improved light curve fits, with $\\alpha$ and $\\zeta$ being closer to best\nfits from independent studies, as well as curvature radiation reaction at lower\naltitudes.\n",
"title": "The effect of an offset polar cap dipolar magnetic field on the modeling of the Vela pulsar's $γ$-ray light curves"
}
| null | null | null | null | true | null |
13553
| null |
Default
| null | null |
null |
{
"abstract": " Automation and computer intelligence to support complex human decisions\nbecome essential to manage large and distributed systems in the Cloud and IoT\nera. Understanding the root cause of an observed symptom in a complex system\nhas been a major problem for decades. As industry dives into the IoT world and\nthe amount of data generated per year grows at an amazing speed, an important\nquestion is how to find appropriate mechanisms to determine root causes that\ncan handle huge amounts of data or may provide valuable feedback in real-time.\nWhile many survey papers aim at summarizing the landscape of techniques for\nmodelling system behavior and inferring the root cause of a problem based on\nthe resulting models, none of them focuses on analyzing how the different\ntechniques in the literature fit growing requirements in terms of performance\nand scalability. In this survey, we provide a review of root-cause analysis,\nfocusing on these particular aspects. We also provide guidance to choose the\nbest root-cause analysis strategy depending on the requirements of a particular\nsystem and application.\n",
"title": "Survey on Models and Techniques for Root-Cause Analysis"
}
| null | null | null | null | true | null |
13554
| null |
Default
| null | null |
null |
{
"abstract": " Two heuristics, namely diversity-based (DBTP) and history-based test\nprioritization (HBTP), have been separately proposed in the literature. Yet,\ntheir combination has not been widely studied in continuous integration (CI)\nenvironments. The objective of this study is to catch regression faults\nearlier, allowing developers to integrate and verify their changes more\nfrequently and continuously. To achieve this, we investigated six open-source\nprojects, each of which included several builds over a large time period.\nFindings indicate that previous failure knowledge seems to have strong\npredictive power in CI environments and can be used to effectively prioritize\ntests. HBTP does not necessarily require large amounts of data, and its\neffectiveness improves to a certain degree with a larger history interval. DBTP\ncan be used effectively during the early stages, when no historical data is\navailable, and can also be combined with HBTP to improve its effectiveness.\nAmong the investigated techniques, we found that history-based diversity using\nNCD Multiset is superior in terms of effectiveness but comes with relatively\nhigher overhead in terms of method execution time. Test prioritization in CI\nenvironments can be effectively performed with negligible investment using\nprevious failure knowledge, and its effectiveness can be further improved by\nconsidering dissimilarities among the tests.\n",
"title": "Test Prioritization in Continuous Integration Environments"
}
| null | null | null | null | true | null |
13555
| null |
Default
| null | null |
null |
{
"abstract": " Using the Kato-Rosenblum theorem, we describe the absolutely continuous\nspectrum of a class of weighted integral Hankel operators in $L^2(\\mathbb\nR_+)$. These self-adjoint operators generalise the explicitly diagonalisable\noperator with the integral kernel $s^\\alpha t^\\alpha(s+t)^{-1-2\\alpha}$, where\n$\\alpha>-1/2$. Our analysis can be considered as an extension of J.Howland's\n1992 paper which dealt with the unweighted case, corresponding to $\\alpha=0$.\n",
"title": "Weighted integral Hankel operators with continuous spectrum"
}
| null | null |
[
"Mathematics"
] | null | true | null |
13556
| null |
Validated
| null | null |
null |
{
"abstract": " We investigate separation properties of $N$-point configurations that\nminimize discrete Riesz $s$-energy on a compact set $A\\subset \\mathbb{R}^p$.\nWhen $A$ is a smooth $(p-1)$-dimensional manifold without boundary and $s\\in\n[p-2, p-1)$, we prove that the order of separation (as $N\\to \\infty$) is the\nbest possible. The same conclusions hold for the points that are a fixed\npositive distance from the boundary of $A$ whenever $A$ is any $p$-dimensional\nset. These estimates extend a result of Dahlberg for certain smooth\n$(p-1)$-dimensional surfaces when $s=p-2$ (the harmonic case). Furthermore, we\nobtain the same separation results for `greedy' $s$-energy points. We deduce\nour results from an upper regularity property of the $s$-equilibrium measure\n(i.e., the measure that solves the continuous minimal Riesz $s$-energy\nproblem), and we show that this property holds under a local smoothness\nassumption on the set $A$.\n",
"title": "Local properties of Riesz minimal energy configurations and equilibrium measures"
}
| null | null | null | null | true | null |
13557
| null |
Default
| null | null |
null |
{
"abstract": " We introduce an algorithm for word-level text spotting that is able to\naccurately and reliably determine the bounding regions of individual words of\ntext \"in the wild\". Our system is formed by the cascade of two convolutional\nneural networks. The first network is fully convolutional and is in charge of\ndetecting areas containing text. This results in a very reliable but possibly\ninaccurate segmentation of the input image. The second network (inspired by the\npopular YOLO architecture) analyzes each segment produced in the first stage,\nand predicts oriented rectangular regions containing individual words. No\npost-processing (e.g. text line grouping) is necessary. With execution time of\n450 ms for a 1000-by-560 image on a Titan X GPU, our system achieves the\nhighest score to date among published algorithms on the ICDAR 2015 Incidental\nScene Text dataset benchmark.\n",
"title": "Cascaded Segmentation-Detection Networks for Word-Level Text Spotting"
}
| null | null | null | null | true | null |
13558
| null |
Default
| null | null |
null |
{
"abstract": " In previous work, we defined and studied $\\Sigma^*$-modules, a class of\nHilbert $C^*$-modules over $\\Sigma^*$-algebras (the latter are $C^*$-algebras\nthat are sequentially closed in the weak operator topology). The present work\ncontinues this study by developing the appropriate $\\Sigma^*$-algebraic\nanalogue of the notion of strong Morita equivalence for $C^*$-algebras. We\ndefine strong $\\Sigma^*$-Morita equivalence, prove a few characterizations,\nlook at the relationship with equivalence of categories of a certain type of\nHilbert space representation, study $\\Sigma^*$-versions of the interior and\nexterior tensor products, and prove a $\\Sigma^*$-version of the\nBrown-Green-Rieffel stable isomorphism theorem.\n",
"title": "Hilbert $C^*$-modules over $Σ^*$-algebras II: $Σ^*$-Morita equivalence"
}
| null | null |
[
"Mathematics"
] | null | true | null |
13559
| null |
Validated
| null | null |
null |
{
"abstract": " We revisit the low energy physics of one dimensional spinless fermion\nliquids, showing that with sufficiently strong interactions the conventional\nLuttinger liquid can give way to a strong pairing phase. While the density\nfluctuations in both phases are described by a gapless Luttinger liquid, single\nfermion excitations are gapped only in the strong pairing phase. Smooth spatial\ninterfaces between the two phases lead to topological degeneracies in the\nground state and low energy phonon spectrum. Using a concrete microscopic\nmodel, with both single particle and pair hopping, we show that the strong\npairing state is established through the emergence of a new low energy\nfermionic mode. We characterize the two phases with numerical calculations\nusing the density matrix renormalization group. In particular we find\nenhancement of the central charge from $c=1$ in the two Luttinger liquid phases\nto $c=3/2$ at the critical point, which gives direct evidence for an emergent\ncritical Majorana mode. Finally, we confirm the existence of topological\ndegeneracies in the low energy phonon spectrum, associated with spatial\ninterfaces between the two phases.\n",
"title": "Topological degeneracy and pairing in a one-dimensional gas of spinless Fermions"
}
| null | null | null | null | true | null |
13560
| null |
Default
| null | null |
null |
{
"abstract": " Several methods for checking admissibility of rules in the modal logic $S4$\nare presented in [1], [15]. These methods determine admissibility of rules in\n$S4$, but they don't determine or give substitutions rejecting inadmissible\nrules. In this paper, we investigate some relations between one of the above\nmethods, based on the reduced normal form rules, and sets of substitutions\nwhich reject them. We also generalize the method in [1], [15] for one rule to\nadmissibility of a set of rules.\n",
"title": "Rejecting inadmissible rules in reduced normal forms in S4"
}
| null | null | null | null | true | null |
13561
| null |
Default
| null | null |
null |
{
"abstract": " Spark is a new promising platform for scalable data-parallel computation. It\nprovides several high-level application programming interfaces (APIs) to\nperform parallel data aggregation. Since execution of parallel aggregation in\nSpark is inherently non-deterministic, a natural requirement for Spark programs\nis to give the same result for any execution on the same data set. We present\nPureSpark, an executable formal Haskell specification for Spark aggregate\ncombinators. Our specification allows us to deduce the precise condition for\ndeterministic outcomes from Spark aggregation. We report case studies analyzing\ndeterministic outcomes and correctness of Spark programs.\n",
"title": "An Executable Sequential Specification for Spark Aggregation"
}
| null | null | null | null | true | null |
13562
| null |
Default
| null | null |
null |
{
"abstract": " It is known that for every graph $G$ there exists the smallest Helly graph\n$\\cal H(G)$ into which $G$ isometrically embeds ($\\cal H(G)$ is called the\ninjective hull of $G$) such that the hyperbolicity of $\\cal H(G)$ is equal to\nthe hyperbolicity of $G$. Motivated by this, we investigate structural\nproperties of Helly graphs that govern their hyperbolicity and identify three\nisometric subgraphs of the King-grid as structural obstructions to a small\nhyperbolicity in Helly graphs.\n",
"title": "Obstructions to a small hyperbolicity in Helly graphs"
}
| null | null | null | null | true | null |
13563
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we estimate the time resolution of the J-PET scanner built from\nplastic scintillators. We incorporate the method of signal processing using the\nTikhonov regularization framework and the Kernel Density Estimation method. We\nobtain simple, closed-form analytical formulas for time resolutions. The\nproposed method is validated using signals registered by means of the single\ndetection unit of the J-PET tomograph built out of a 30 cm long plastic\nscintillator strip. It is shown that the experimental and theoretical results,\nobtained for the J-PET scanner equipped with vacuum tube photomultipliers, are\nconsistent.\n",
"title": "Calculation of time resolution of the J-PET tomograph using the Kernel Density Estimation"
}
| null | null | null | null | true | null |
13564
| null |
Default
| null | null |
null |
{
"abstract": " The area of Handwritten Signature Verification has been broadly researched in\nthe last decades, but remains an open research problem. In offline (static)\nsignature verification, the dynamic information of the signature writing\nprocess is lost, and it is difficult to design good feature extractors that can\ndistinguish genuine signatures from skilled forgeries. This verification task\nis even harder in writer-independent scenarios, which are undeniably crucial\nfor realistic cases. In this paper, we propose an ensemble model for the\noffline writer-independent signature verification task with deep learning. We\nuse two CNNs for feature extraction, followed by RGBT for classification and\nstacking to generate the final prediction vector. We have done extensive\nexperiments on various datasets from various sources to maintain variance in\nthe dataset. We achieve state-of-the-art performance on various datasets.\n",
"title": "Writer Independent Offline Signature Recognition Using Ensemble Learning"
}
| null | null | null | null | true | null |
13565
| null |
Default
| null | null |
null |
{
"abstract": " Fairness is a critical trait in decision making. As machine-learning models\nare increasingly being used in sensitive application domains (e.g. education\nand employment) for decision making, it is crucial that the decisions computed\nby such models are free of unintended bias. But how can we automatically\nvalidate the fairness of arbitrary machine-learning models? For a given\nmachine-learning model and a set of sensitive input parameters, our AEQUITAS\napproach automatically discovers discriminatory inputs that highlight fairness\nviolation. At the core of AEQUITAS are three novel strategies to employ\nprobabilistic search over the input space with the objective of uncovering\nfairness violation. Our AEQUITAS approach leverages inherent robustness\nproperty in common machine-learning models to design and implement scalable\ntest generation methodologies. An appealing feature of our generated test\ninputs is that they can be systematically added to the training set of the\nunderlying model and improve its fairness. To this end, we design a fully\nautomated module that guarantees to improve the fairness of the underlying\nmodel.\nWe implemented AEQUITAS and we have evaluated it on six state-of-the-art\nclassifiers, including a classifier that was designed with fairness\nconstraints. We show that AEQUITAS effectively generates inputs to uncover\nfairness violation in all the subject classifiers and systematically improves\nthe fairness of the respective models using the generated test inputs. In our\nevaluation, AEQUITAS generates up to 70% discriminatory inputs (w.r.t. the\ntotal number of inputs generated) and leverages these inputs to improve the\nfairness up to 94%.\n",
"title": "Automated Directed Fairness Testing"
}
| null | null | null | null | true | null |
13566
| null |
Default
| null | null |
null |
{
"abstract": " Memory-safety violations are a prevalent cause of both reliability and\nsecurity vulnerabilities in systems software written in unsafe languages like\nC/C++. Unfortunately, all the existing software-based solutions to this problem\nexhibit high performance overheads preventing them from wide adoption in\nproduction runs. To address this issue, Intel recently released a new ISA\nextension - Memory Protection Extensions (Intel MPX), a hardware-assisted\nfull-stack solution to protect against memory safety violations. In this work,\nwe perform an exhaustive study of the Intel MPX architecture to understand its\nadvantages and caveats. We base our study along three dimensions: (a)\nperformance overheads, (b) security guarantees, and (c) usability issues. To\nput our results in perspective, we compare Intel MPX with three prominent\nsoftware-based approaches: (1) trip-wire - AddressSanitizer, (2) object-based -\nSAFECode, and (3) pointer-based - SoftBound. Our main conclusion is that Intel\nMPX is a promising technique that is not yet practical for widespread adoption.\nIntel MPX's performance overheads are still high (roughly 50% on average), and\nthe supporting infrastructure has bugs which may cause compilation or runtime\nerrors. Moreover, we showcase the design limitations of Intel MPX: it cannot\ndetect temporal errors, may have false positives and false negatives in\nmultithreaded code, and its restrictions on memory layout require substantial\ncode changes for some programs.\n",
"title": "Intel MPX Explained: An Empirical Study of Intel MPX and Software-based Bounds Checking Approaches"
}
| null | null | null | null | true | null |
13567
| null |
Default
| null | null |
null |
{
"abstract": " We study the problem of modeling spatiotemporal trajectories over long time\nhorizons using expert demonstrations. For instance, in sports, agents often\nchoose action sequences with long-term goals in mind, such as achieving a\ncertain strategic position. Conventional policy learning approaches, such as\nthose based on Markov decision processes, generally fail at learning cohesive\nlong-term behavior in such high-dimensional state spaces, and are only\neffective when myopic modeling leads to the desired behavior. The key\ndifficulty is that conventional approaches are \"shallow\" models that only learn\na single state-action policy. We instead propose a hierarchical policy class\nthat automatically reasons about both long-term and short-term goals, which we\ninstantiate as a hierarchical neural network. We showcase our approach in a\ncase study on learning to imitate demonstrated basketball trajectories, and\nshow that it generates significantly more realistic trajectories compared to\nnon-hierarchical baselines as judged by professional sports analysts.\n",
"title": "Generating Long-term Trajectories Using Deep Hierarchical Networks"
}
| null | null | null | null | true | null |
13568
| null |
Default
| null | null |
null |
{
"abstract": " We rewrite Poynting's theorem, already used in a previous publication\n(Treumann & Baumjohann 2017) to derive relations between the turbulent magnetic\nand electric power spectral densities, to make explicit where the mechanical\ncontributions enter. We then make explicit use of the relativistic\ntransformation of the turbulent electric fluctuations to obtain expressions\nwhich depend only on the magnetic and velocity fluctuations. Any electric\nfluctuations play just an intermediate role. Equations are constructed for the\nturbulent conductivity spectrum in Alfvénic and non-Alfvénic turbulence in\nextension of the results in the above citation. An observation-based discussion\nof their use in application to solar wind turbulence is given. The inertial\nrange solar wind turbulence exhibits signs of chaos and self-organisation.\n",
"title": "The usefulness of Poynting's theorem in magnetic turbulence"
}
| null | null | null | null | true | null |
13569
| null |
Default
| null | null |
null |
{
"abstract": " {\\it Victory}, i.e. \\underline{vi}enna \\underline{c}omputational\n\\underline{to}ol deposito\\underline{ry}, is a collection of numerical tools for\nsolving the parquet equations for the Hubbard model and similar many body\nproblems. The parquet formalism is a self-consistent theory at both the single-\nand two-particle levels, and can thus describe individual fermions as well as\ntheir collective behavior on equal footing. This is essential for the\nunderstanding of various emergent phases and their transitions in many-body\nsystems, in particular for cases in which a single-particle description fails.\nOur implementation of {\\it victory} is in modern Fortran and it fully respects\nthe structure of various vertex functions in both momentum and Matsubara\nfrequency space. We found the latter to be crucial for the convergence of the\nparquet equations, as well as for the correct determination of various physical\nobservables. In this release, we thoroughly explain the program structure and\nthe controlled approximations to efficiently solve the parquet equations, i.e.\nthe two-level kernel approximation and the high-frequency regulation.\n",
"title": "The {\\it victory} project v1.0: an efficient parquet equations solver"
}
| null | null | null | null | true | null |
13570
| null |
Default
| null | null |
null |
{
"abstract": " Tumor-stromal interactions have been shown to be the driving force behind the\npoor prognosis associated with aggressive breast tumors. These interactions,\nspecifically between tumor and the surrounding ECM, and tumor and vascular\nendothelium, promote tumor formation, angiogenesis, and metastasis. In this\nstudy, we develop an in vitro vascularized tumor platform that allows for\ninvestigation of tumor-stromal interactions in three breast tumor-derived cell\nlines of varying aggressiveness: MDA-IBC3, SUM149, and MDA-MB-231. The platform\nrecreates key features of breast tumors, including increased vascular\npermeability, vessel sprouting, and ECM remodeling. Morphological and\nquantitative analysis reveals differential effects from each tumor cell type on\nendothelial coverage, permeability, expression of VEGF, and collagen\nremodeling. Triple-negative tumors, SUM149 and MDA-MB-231, resulted in\nsignificantly (p<0.05) higher endothelial permeability and decreased\nendothelial coverage compared to the control TIME-only platform. SUM149/TIME\nplatforms were 1.3-fold lower (p<0.05), and MDA-MB-231/TIME platforms were\n1.5-fold lower (p<0.01), in endothelial coverage compared to the control\nTIME-only platform. HER2+ MDA-IBC3 tumor cells expressed high levels of VEGF\n(p<0.01) and induced vessel sprouting. Vessel sprouting was tracked for 3 weeks\nand with increasing time exhibited formation of multiple vessel sprouts that\ninvaded into the ECM and surrounded clusters of MDA-IBC3 cells. Both IBC cell\nlines, SUM149 and MDA-IBC3, resulted in a collagen ECM with significantly\ngreater porosity, 1.6- and 1.1-fold higher than the control (p<0.01). The\nbreast cancer in vitro vascularized platforms introduced in this paper are an\nadaptable, high-throughput tool for unearthing tumor-stromal mechanisms and\ndynamics behind tumor progression and may prove essential in developing\neffective targeted therapeutics.\n",
"title": "An In Vitro Vascularized Tumor Platform for Modeling Breast Tumor Stromal Interactions and Characterizing the Subsequent Response"
}
| null | null | null | null | true | null |
13571
| null |
Default
| null | null |
null |
{
"abstract": " We consider the problem of testing, on the basis of a $p$-variate Gaussian\nrandom sample, the null hypothesis ${\\cal H}_0: {\\pmb \\theta}_1= {\\pmb\n\\theta}_1^0$ against the alternative ${\\cal H}_1: {\\pmb \\theta}_1 \\neq {\\pmb\n\\theta}_1^0$, where ${\\pmb \\theta}_1$ is the \"first\" eigenvector of the\nunderlying covariance matrix and ${\\pmb \\theta}_1^0$ is a fixed unit\n$p$-vector. In the classical setup where eigenvalues $\\lambda_1>\\lambda_2\\geq\n\\ldots\\geq \\lambda_p$ are fixed, the Anderson (1963) likelihood ratio test\n(LRT) and the Hallin, Paindaveine and Verdebout (2010) Le Cam optimal test for\nthis problem are asymptotically equivalent under the null hypothesis, hence\nalso under sequences of contiguous alternatives. We show that this equivalence\ndoes not survive asymptotic scenarios where\n$\\lambda_{n1}/\\lambda_{n2}=1+O(r_n)$ with $r_n=O(1/\\sqrt{n})$. For such\nscenarios, the Le Cam optimal test still asymptotically meets the nominal level\nconstraint, whereas the LRT severely overrejects the null hypothesis.\nConsequently, the former test should be favored over the latter one whenever\nthe two largest sample eigenvalues are close to each other. By relying on the\nLe Cam's asymptotic theory of statistical experiments, we study the non-null\nand optimality properties of the Le Cam optimal test in the aforementioned\nasymptotic scenarios and show that the null robustness of this test is not\nobtained at the expense of power. Our asymptotic investigation is extensive in\nthe sense that it allows $r_n$ to converge to zero at an arbitrary rate. While\nwe restrict to single-spiked spectra of the form\n$\\lambda_{n1}>\\lambda_{n2}=\\ldots=\\lambda_{np}$ to make our results as striking\nas possible, we extend our results to the more general elliptical case.\nFinally, we present an illustrative real data example.\n",
"title": "Testing for Principal Component Directions under Weak Identifiability"
}
| null | null | null | null | true | null |
13572
| null |
Default
| null | null |
null |
{
"abstract": " Phylodynamics is an area of population genetics that uses genetic sequence\ndata to estimate past population dynamics. Modern state-of-the-art Bayesian\nnonparametric methods for phylodynamics use either change-point models or\nGaussian process priors to recover population size trajectories of unknown\nform. Change-point models suffer from computational issues when the number of\nchange-points is unknown and needs to be estimated. Gaussian process-based\nmethods lack local adaptivity and cannot accurately recover trajectories that\nexhibit features such as abrupt changes in trend or varying levels of\nsmoothness. We propose a novel, locally-adaptive approach to Bayesian\nnonparametric phylodynamic inference that has the flexibility to accommodate a\nlarge class of functional behaviors. Local adaptivity results from modeling the\nlog-transformed effective population size a priori as a horseshoe Markov random\nfield, a recently proposed statistical model that blends together the best\nproperties of the change-point and Gaussian process modeling paradigms. We use\nsimulated data to assess model performance, and find that our proposed method\nresults in reduced bias and increased precision when compared to contemporary\nmethods. We also use our models to reconstruct past changes in genetic\ndiversity of human hepatitis C virus in Egypt and to estimate population size\nchanges of ancient and modern steppe bison. These analyses show that our new\nmethod captures features of the population size trajectories that were missed\nby the state-of-the-art phylodynamic methods.\n",
"title": "Locally-adaptive Bayesian nonparametric inference for phylodynamics"
}
| null | null | null | null | true | null |
13573
| null |
Default
| null | null |
null |
{
"abstract": " Social and affective relations may shape empathy to others' affective states.\nPrevious studies also revealed that people tend to form very different mental\nrepresentations of stimuli on the basis of their physical distance. In this\nregard, embodied cognition proposes that different physical distances between\nindividuals activate different interpersonal processing modes, such that close\nphysical distance tends to activate the interpersonal processing mode typical\nof socially and affectively close relationships. In Experiment 1, two groups of\nparticipants were administered a pain decision task involving upright and\ninverted face stimuli painfully or neutrally stimulated, and we monitored their\nneural empathic reactions by means of event-related potentials (ERPs)\ntechnique. Crucially, participants were presented with face stimuli of one of\ntwo possible sizes in order to manipulate retinal size and perceived physical\ndistance, roughly corresponding to the close and far portions of social\ndistance. ERPs modulations compatible with an empathic reaction were observed\nonly for the group exposed to face stimuli appearing to be at a close social\ndistance from the participants. This reaction was absent in the group exposed\nto smaller stimuli corresponding to face stimuli observed from a far social\ndistance. In Experiment 2, one different group of participants was engaged in a\nmatch-to-sample task involving the two-size upright face stimuli of Experiment\n1 to test whether the modulation of neural empathic reaction observed in\nExperiment 1 could be ascribable to differences in the ability to identify\nfaces of the two different sizes. Results suggested that face stimuli of the\ntwo sizes could be equally identifiable. In line with the Construal Level and\nEmbodied Simulation theoretical frameworks, we conclude that perceived physical\ndistance may shape empathy as well as social and affective distance.\n",
"title": "Out of sight out of mind: Perceived physical distance between the observer and someone in pain shapes observer's neural empathic reactions"
}
| null | null | null | null | true | null |
13574
| null |
Default
| null | null |
null |
{
"abstract": " This paper addresses dynamic difficulty adjustment in MOBA games as a way to\nimprove the player's entertainment. Although MOBA is currently one of the most\nplayed genres around the world, it is known as a genre that offers less\nautonomy, more challenges and consequently more frustration. Due to these\ncharacteristics, the use of a mechanism that performs difficulty balancing\ndynamically seems to be an interesting alternative to minimize and/or prevent\nplayers from experiencing such frustrations. In this sense, this paper presents\na dynamic difficulty adjustment mechanism for MOBA games. The main idea is to\ncreate a computer-controlled opponent that adapts dynamically to the player's\nperformance, trying to offer the player a better game experience. This is done\nby evaluating the performance of the player using a metric based on some game\nfeatures and switching the difficulty of the opponent's artificial intelligence\nbehavior accordingly. Quantitative and qualitative experiments were performed\nand the results showed that the system is capable of adapting dynamically to\nthe opponent's skills. In spite of that, the qualitative experiments with users\nshowed that the player's expertise has a greater influence on the perception of\nthe difficulty level and dynamic adaptation.\n",
"title": "Dynamic Difficulty Adjustment on MOBA Games"
}
| null | null | null | null | true | null |
13575
| null |
Default
| null | null |
null |
{
"abstract": " We characterize all varieties with a torus action of complexity one that\nadmit iteration of Cox rings.\n",
"title": "On iteration of Cox rings"
}
| null | null | null | null | true | null |
13576
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we study the interplay between Lagrangian cobordisms and\nstability conditions. We show that any stability condition on the derived\nFukaya category $D\\mathcal{F}uk(M)$ of a symplectic manifold $(M,\\omega)$\ninduces a stability condition on the derived Fukaya category of Lagrangian\ncobordisms $D\\mathcal{F}uk(\\mathbb{C} \\times M)$. In addition, using stability\nconditions, we provide general conditions under which the homomorphism $\\Theta:\n\\Omega_{Lag}(M)\\to K_0(D\\mathcal{F}uk(M))$, introduced by Biran and Cornea, is\nan isomorphism. This yields a better understanding of how stability conditions\naffect $\\Theta$ and it allows us to elucidate Haug's result, that the\nLagrangian cobordism group of $T^2$ is isomorphic to\n$K_0(D\\mathcal{F}uk(T^2))$.\n",
"title": "Stability Conditions and Lagrangian Cobordisms"
}
| null | null | null | null | true | null |
13577
| null |
Default
| null | null |
null |
{
"abstract": " Citation sentiment analysis is an important task in scientific paper\nanalysis. Existing machine learning techniques for citation sentiment analysis\nfocus on labor-intensive feature engineering, which requires a large annotated\ncorpus. As an automatic feature extraction tool, word2vec has been successfully\napplied to sentiment analysis of short texts. In this work, I conducted\nempirical research with the question: how well does word2vec work on the\nsentiment analysis of citations? The proposed method constructed sentence\nvectors (sent2vec) by averaging the word embeddings, which were learned from\nAnthology Collections (ACL-Embeddings). I also investigated polarity-specific\nword embeddings (PS-Embeddings) for classifying positive and negative\ncitations. The sentence vectors formed a feature space, to which the examined\ncitation sentence was mapped. Those features were input into classifiers\n(support vector machines) for supervised classification. Using a 10-fold\ncross-validation scheme, evaluation was conducted on a set of annotated\ncitations. The results showed that word embeddings are effective for\nclassifying positive and negative citations. However, hand-crafted features\nperformed better for the overall classification.\n",
"title": "Sentiment Analysis of Citations Using Word2vec"
}
| null | null | null | null | true | null |
13578
| null |
Default
| null | null |
null |
{
"abstract": " We present a hybrid method for latent information discovery on data sets\ncontaining both text content and connection structure, based on constrained low\nrank approximation. The new method jointly optimizes the Nonnegative Matrix\nFactorization (NMF) objective function for text clustering and the Symmetric\nNMF (SymNMF) objective function for graph clustering. We propose an effective\nalgorithm for the joint NMF objective function, based on a block coordinate\ndescent (BCD) framework. The proposed hybrid method discovers content\nassociations via latent connections found using SymNMF. The method can also be\napplied with a natural conversion of the problem when a hypergraph formulation\nis used or the content is associated with hypergraph edges.\nExperimental results show that by simultaneously utilizing both content and\nconnection structure, our hybrid method produces higher quality clustering\nresults compared to the other NMF clustering methods that use content alone\n(standard NMF) or connection structure alone (SymNMF). We also present some\ninteresting applications to several types of real world data such as citation\nrecommendations of papers. The hybrid method proposed in this paper can also be\napplied to general data expressed with both feature space vectors and pairwise\nsimilarities and can be extended to the case with multiple feature spaces or\nmultiple similarity measures.\n",
"title": "Hybrid Clustering based on Content and Connection Structure using Joint Nonnegative Matrix Factorization"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
13579
| null |
Validated
| null | null |
null |
{
"abstract": " Let k be a field. This paper investigates the embedding dimension and\ncodimension of Noetherian local rings arising as localizations of tensor\nproducts of k-algebras. We use results and techniques from prime spectra and\ndimension theory to establish an analogue of the \"special chain theorem\" for\nthe embedding dimension of tensor products, with effective consequence on the\ntransfer or defect of regularity as exhibited by the (embedding) codimension.\n",
"title": "Embedding dimension and codimension of tensor products of algebras over a field"
}
| null | null |
[
"Mathematics"
] | null | true | null |
13580
| null |
Validated
| null | null |
null |
{
"abstract": " The Cosmic Axion Spin Precession Experiment (CASPEr) is a nuclear magnetic\nresonance experiment (NMR) seeking to detect axion and axion-like particles\nwhich could make up the dark matter present in the universe. We review the\npredicted couplings of axions and axion-like particles with baryonic matter\nthat enable their detection via NMR. We then describe two measurement schemes\nbeing implemented in CASPEr. The first method, presented in the original CASPEr\nproposal, consists of a resonant search via continuous-wave NMR spectroscopy.\nThis method offers the highest sensitivity for frequencies ranging from a few\nHz to hundreds of MHz, corresponding to masses $ m_{\\rm a} \\sim\n10^{-14}$--$10^{-6}$ eV. Sub-Hz frequencies are typically difficult to probe\nwith NMR due to the diminishing sensitivity of magnetometers in this region. To\ncircumvent this limitation, we suggest new detection and data processing\nmodalities. We describe a non-resonant frequency-modulation detection scheme,\nenabling searches from mHz to Hz frequencies ($m_{\\rm a} \\sim\n10^{-17}$--$10^{-14} $ eV), extending the detection bandwidth by three decades.\n",
"title": "The Cosmic Axion Spin Precession Experiment (CASPEr): a dark-matter search with nuclear magnetic resonance"
}
| null | null | null | null | true | null |
13581
| null |
Default
| null | null |
null |
{
"abstract": " The energy of a graph G is equal to the sum of the absolute values of the\neigenvalues of the adjacency matrix of G, whereas the Laplacian energy of a\ngraph G is equal to the sum of the absolute values of the differences between\nthe eigenvalues of the Laplacian matrix of G and the average degree of the\nvertices of G. Motivated by the work of Sharafdini et al. [R. Sharafdini, H.\nPanahbar, Vertex weighted Laplacian graph energy and other topological indices.\nJ. Math. Nanosci. 2016, 6, 49-57.], in this paper we investigate the\neccentricity version of the Laplacian energy of a graph G.\n",
"title": "On eccentricity version of Laplacian energy of a graph"
}
| null | null |
[
"Computer Science",
"Mathematics"
] | null | true | null |
13582
| null |
Validated
| null | null |
null |
{
"abstract": " The Linear Attention Recurrent Neural Network (LARNN) is a recurrent\nattention module derived from the Long Short-Term Memory (LSTM) cell and ideas\nfrom the consciousness Recurrent Neural Network (RNN). Yes, it LARNNs. The\nLARNN uses attention on its past cell state values for a limited window size\n$k$. The formulas are also derived from the Batch Normalized LSTM (BN-LSTM)\ncell and the Transformer Network for its Multi-Head Attention Mechanism. The\nMulti-Head Attention Mechanism is used inside the cell such that it can query\nits own $k$ past values with the attention window. This has the effect of\naugmenting the rank of the tensor with the attention mechanism, such that the\ncell can perform complex queries to question its previous inner memories, which\nshould augment the long short-term effect of the memory. With a clever trick,\nthe LARNN cell with attention can be easily used inside a loop on the cell\nstate, just as any other Recurrent Neural Network (RNN) cell can be looped\nlinearly through time series. This is due to the fact that its state, which is\nlooped upon throughout time steps within time series, stores the inner states\nin a \"first in, first out\" queue which contains the $k$ most recent states and\non which it is easily possible to add static positional encoding when the queue\nis represented as a tensor. This neural architecture yields better results than\nvanilla LSTM cells, obtaining a test accuracy of 91.92%, compared to the\npreviously attained 91.65% using vanilla LSTM cells. Note that this is not\ndirectly comparable to other research, where up to 93.35% is obtained, but at\nthe higher cost of using 18 LSTM cells rather than the 2 to 3 cells analyzed\nhere. Finally, an interesting discovery is made: adding activations within the\nmulti-head attention mechanism's linear layers can yield better results in the\ncontext researched here.\n",
"title": "LARNN: Linear Attention Recurrent Neural Network"
}
| null | null | null | null | true | null |
13583
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we consider the robust adaptive nonparametric estimation\nproblem for the drift coefficient in diffusion processes. An adaptive model\nselection procedure, based on the improved weighted least squares estimates, is\nproposed. Sharp oracle inequalities for the robust risk have been obtained.\n",
"title": "Improved nonparametric estimation of the drift in diffusion processes"
}
| null | null | null | null | true | null |
13584
| null |
Default
| null | null |
null |
{
"abstract": " Designing analog sub-threshold neuromorphic circuits in deep sub-micron\ntechnologies e.g. 28 nm can be a daunting task due to the problem of excessive\nleakage current. We propose novel energy-efficient hybrid CMOS-nano\nelectro-mechanical switches (NEMS) Leaky Integrate and Fire (LIF) neuron and\nsynapse circuits and investigate the impact of NEM switches on the leakage\npower and overall energy consumption. We analyze the performance of\nbiologically-inspired neuron circuit in terms of leakage power consumption and\npresent new energy-efficient neural circuits that operate with biologically\nplausible firing rates. Our results show the proposed CMOS-NEMS neuron circuit\nis, on average, 35% more energy-efficient than its CMOS counterpart with same\ncomplexity in 28 nm process. Moreover, we discuss how NEM switches can be\nutilized to further improve the scalability of mixed-signal neuromorphic\ncircuits.\n",
"title": "Energy-efficient Hybrid CMOS-NEMS LIF Neuron Circuit in 28 nm CMOS Process"
}
| null | null |
[
"Computer Science"
] | null | true | null |
13585
| null |
Validated
| null | null |
null |
{
"abstract": " The origin of the broad emission line region (BELR) in quasars and active\ngalactic nuclei is still unclear. I propose that condensations form in the\nwarm, radiation pressure driven, accretion disk wind of quasars creating the\nBEL clouds and uniting them with the other two manifestations of cool, 10,000\nK, gas in quasars, the low ionization phase of the warm absorbers (WAs) and the\nclouds causing X-ray eclipses. The cool clouds will condense quickly (days to\nyears), before the WA outflows reach escape velocity (which takes months to\ncenturies). Cool clouds form in equilibrium with the warm phase of the wind\nbecause the rapidly varying X-ray quasar continuum changes the force\nmultiplier, causing pressure waves to move gas into stable locations in\npressure-temperature space. The narrow range of 2-phase equilibrium densities\nmay explain the scaling of the BELR size with the square root of luminosity,\nwhile the scaling of cloud formation timescales could produce the Baldwin\neffect. These dense clouds have force multipliers of order unity and so cannot\nbe accelerated to escape velocity. They fall back on a dynamical timescale\n(months to centuries), producing an inflow that rains down toward the central\nblack hole. As they soon move at Mach ~40 with respect to the WA outflow, these\n'raindrops' will be rapidly destroyed within months. This rain of clouds may\nproduce the elliptical BELR orbits implied by velocity resolved reverberation\nmapping in some objects, and can explain the opening angle and destruction\ntimescale of the narrow 'cometary' tails of the clouds seen in X-ray eclipse\nobservations. Some consequences and challenges of this 'quasar rain' model are\npresented along with several avenues for theoretical investigation.\n",
"title": "Quasar Rain: the Broad Emission Line Region as Condensations in the Warm Accretion Disk Wind"
}
| null | null | null | null | true | null |
13586
| null |
Default
| null | null |
null |
{
"abstract": " We propose a consistent polynomial-time method for the unseeded node matching\nproblem for networks with smooth underlying structures. Although it is widely\nconjectured by the research community that the structured graph matching\nproblem is significantly easier than its worst-case counterpart, which is well\nknown to be NP-hard, the statistical version of the problem has remained a\nchallenge that has resisted any solution that is both provable and\npolynomial-time. The closest existing work requires quasi-polynomial time. Our\nmethod is based on the latest advances in graphon estimation techniques and\nanalysis of the concentration of empirical Wasserstein distances. Its core is a\nsimple yet unconventional sampling-and-matching scheme that reduces the problem\nfrom unseeded to seeded. Our method allows flexible efficiencies, is convenient\nto analyze and can potentially be extended to more general settings. Our work\nenables a rich variety of subsequent estimations and inferences.\n",
"title": "Consistent polynomial-time unseeded graph matching for Lipschitz graphons"
}
| null | null |
[
"Statistics"
] | null | true | null |
13587
| null |
Validated
| null | null |
null |
{
"abstract": " We report time and angle resolved spectroscopic measurements in optimally\ndoped Bi$_2$Sr$_2$CaCu$_2$O$_{8+\\delta}$. The spectral function is monitored as\na function of temperature, photoexcitation density and delay time from the pump\npulse. According to our data, the superconducting gap becomes slightly stiffer\nwhen moving off the nodal direction. The nodal quasiparticles develop a faster\ndynamics when pumping the superconductor with a fluence that is large enough to\ninduce the total collapse of the gap. We discuss the observed relaxation in\nterms of a dynamical reformation of Cooper pairs.\n",
"title": "Photoinduced filling of near nodal gap in Bi$_2$Sr$_2$CaCu$_2$O$_{8+δ}$"
}
| null | null | null | null | true | null |
13588
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we consider a family of Jacobi-type algorithms for\nsimultaneous orthogonal diagonalization problem of symmetric tensors. For the\nJacobi-based algorithm of [SIAM J. Matrix Anal. Appl., 2(34):651--672, 2013],\nwe prove its global convergence for simultaneous orthogonal diagonalization of\nsymmetric matrices and 3rd-order tensors. We also propose a new Jacobi-based\nalgorithm in the general setting and prove its global convergence for\nsufficiently smooth functions.\n",
"title": "Globally convergent Jacobi-type algorithms for simultaneous orthogonal symmetric tensor diagonalization"
}
| null | null | null | null | true | null |
13589
| null |
Default
| null | null |
null |
{
"abstract": " The goal of the present paper is to introduce a smaller, but equivalent\nversion of the Deligne-Hinich-Getzler $\\infty$-groupoid associated to a\nhomotopy Lie algebra. In the case of differential graded Lie algebras, we\nrepresent it by a universal cosimplicial object.\n",
"title": "Representing the Deligne-Hinich-Getzler $\\infty$-groupoid"
}
| null | null |
[
"Mathematics"
] | null | true | null |
13590
| null |
Validated
| null | null |
null |
{
"abstract": " This submission investigates alternative machine learning models for\npredicting the HTER score on the sentence level. Instead of directly predicting\nthe HTER score, we suggest a model that jointly predicts the amount of the 4\ndistinct post-editing operations, which are then used to calculate the HTER\nscore. This also gives the possibility to correct invalid (e.g. negative)\npredicted values prior to the calculation of the HTER score. Without any\nfeature exploration, a multi-layer perceptron with 4 outputs yields small but\nsignificant improvements over the baseline.\n",
"title": "Sentence-level quality estimation by predicting HTER as a multi-component metric"
}
| null | null |
[
"Computer Science"
] | null | true | null |
13591
| null |
Validated
| null | null |
null |
{
"abstract": " Growing digital archives and improving algorithms for automatic analysis of\ntext and speech create new opportunities for fundamental research in phonetics.\nSuch empirical approaches allow statistical evaluation of a much larger set of\nhypotheses about phonetic variation and its conditioning factors (among them\ngeographical / dialectal variants). This paper illustrates this vision and\nproposes to challenge automatic methods for the analysis of a not easily\nobservable phenomenon: vowel length contrast. We focus on Wolof, an\nunder-resourced language from Sub-Saharan Africa. In particular, we propose\nmultiple features to make a fine evaluation of the degree of length contrast\nunder different factors such as: read vs. semi-spontaneous speech; standard vs.\ndialectal Wolof. Our measures, made fully automatically on more than 20k vowel\ntokens, show that our proposed features can highlight different degrees of\ncontrast for each vowel considered. We notably show that contrast is weaker in\nsemi-spontaneous speech and in a non-standard semi-spontaneous dialect.\n",
"title": "Machine Assisted Analysis of Vowel Length Contrasts in Wolof"
}
| null | null |
[
"Computer Science"
] | null | true | null |
13592
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper we promote a method for the evaluation of a surface topography\nwhich we call the correlogram correlation method. Employing a theoretical\nanalysis as well as numerical simulations the method is proven to be the most\naccurate among available evaluation algorithms in the common case of\nuncorrelated noise. Examples illustrate the superiority of the correlogram\ncorrelation method over the common envelope and phase methods.\n",
"title": "Precision of Evaluation Methods in White Light Interferometry: Correlogram Correlation Method"
}
| null | null | null | null | true | null |
13593
| null |
Default
| null | null |
null |
{
"abstract": " We consider the flotation of deformable, non-wetting drops on a liquid\ninterface. We consider the deflection of both the liquid interface and the\ndroplet itself in response to the buoyancy forces, density difference and the\nvarious surface tensions within the system. Our results suggest new insight\ninto a range of phenomena in which such drops occur, including Leidenfrost\ndroplets and floating liquid marbles. In particular, we show that the floating\nstate of liquid marbles is very sensitive to the tension of the\nparticle-covered interface and suggest that this sensitivity may make such\nexperiments a useful assay of the properties of these complex interfaces.\n",
"title": "Non-wetting drops at liquid interfaces: From liquid marbles to Leidenfrost drops"
}
| null | null | null | null | true | null |
13594
| null |
Default
| null | null |
null |
{
"abstract": " In this work, we study the crystalline nuclei growth in glassy systems\nfocusing primarily on the early stages of the process, at which the size of a\ngrowing nucleus is still comparable with the critical size. On the basis of\nmolecular dynamics simulation results for two crystallizing glassy systems, we\nevaluate the growth laws of the crystalline nuclei and the parameters of the\ngrowth kinetics at the temperatures corresponding to deep supercoolings;\nherein, the statistical treatment of the simulation results is done within the\nmean-first-passage-time method. It is found for the considered systems at\ndifferent temperatures that the crystal growth laws rescaled onto the waiting\ntimes of the critically-sized nucleus follow the unified dependence, that can\nsimplify significantly theoretical description of the post-nucleation growth of\ncrystalline nuclei. The evaluated size-dependent growth rates are characterized\nby transition to the steady-state growth regime, which depends on the\ntemperature and occurs in the glassy systems when the size of a growing nucleus\nbecomes two-three times larger than a critical size. It is suggested to\nconsider the temperature dependencies of the crystal growth rate\ncharacteristics by using the reduced temperature scale $\\widetilde{T}$. Thus,\nit is revealed that the scaled values of the crystal growth rate\ncharacteristics (namely, the steady-state growth rate and the attachment rate\nfor the critically-sized nucleus) as functions of the reduced temperature\n$\\widetilde{T}$ for glassy systems follow the unified power-law dependencies.\nThis finding is supported by available simulation results; the correspondence\nwith the experimental data for the crystal growth rate in glassy systems at the\ntemperatures near the glass transition is also discussed.\n",
"title": "Kinetics of the Crystalline Nuclei Growth in Glassy Systems"
}
| null | null | null | null | true | null |
13595
| null |
Default
| null | null |
null |
{
"abstract": " Electron cloud can lead to a fast instability in intense proton and positron\nbeams in circular accelerators. In the Fermilab Recycler the electron cloud is\nconfined within its combined function magnets. We show that the field of\ncombined function magnets traps the electron cloud, present the results of\nanalytical estimates of trapping, and compare them to numerical simulations of\nelectron cloud formation. The electron cloud is located at the beam center and\nup to 1% of the particles can be trapped by the magnetic field. Since the\nprocess of electron cloud build-up is exponential, once trapped this amount of\nelectrons significantly increases the density of the cloud on the next\nrevolution. In a Recycler combined function dipole this multi-turn accumulation\nallows the electron cloud to reach final intensities orders of magnitude\ngreater than in a pure dipole. The multi-turn build-up can be stopped by\ninjection of a clearing bunch of $10^{10}$ p at any position in the ring.\n",
"title": "Electron Cloud Trapping In Recycler Combined Function Dipole Magnets"
}
| null | null | null | null | true | null |
13596
| null |
Default
| null | null |
null |
{
"abstract": " No man is an island, as individuals interact and influence one another daily\nin our society. When social influence takes place in experiments on a\npopulation of interconnected individuals, the treatment on a unit may affect\nthe outcomes of other units, a phenomenon known as interference. This thesis\ndevelops a causal framework and inference methodology for experiments where\ninterference takes place on a network of influence (i.e. network interference).\nIn this framework, the network potential outcomes serve as the key quantity and\nflexible building blocks for causal estimands that represent a variety of\nprimary, peer, and total treatment effects. These causal estimands are\nestimated via principled Bayesian imputation of missing outcomes. The theory on\nthe unconfoundedness assumptions leading to simplified imputation highlights\nthe importance of including relevant network covariates in the potential\noutcome model. Additionally, experimental designs that result in balanced\ncovariates and sizes across treatment exposure groups further improve the\ncausal estimate, especially by mitigating potential outcome model\nmis-specification. The true potential outcome model is not typically known in\nreal-world experiments, so the best practice is to account for interference and\nconfounding network covariates through both balanced designs and model-based\nimputation. A full factorial simulated experiment is formulated to demonstrate\nthis principle by comparing performance across different randomization schemes\nduring the design phase and estimators during the analysis phase, under varying\nnetwork topology and true potential outcome models. Overall, this thesis\nasserts that interference is not just a nuisance for analysis but rather an\nopportunity for quantifying and leveraging peer effects in real-world\nexperiments.\n",
"title": "Causal Inference Under Network Interference: A Framework for Experiments on Social Networks"
}
| null | null | null | null | true | null |
13597
| null |
Default
| null | null |
null |
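As a rough, hypothetical illustration of the exposure-mapping idea behind the network-interference framework summarized above (not the thesis' Bayesian imputation procedure), the following sketch simulates a random network, assigns treatment, computes each unit's fraction of treated neighbours, and recovers direct and peer effects by ordinary least squares; the network model, outcome model, and effect sizes are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: symmetric Erdos-Renyi-style adjacency matrix for n units.
n = 200
A = rng.random((n, n)) < 0.05
A = np.triu(A, 1)
A = A | A.T

# Completely randomized treatment assignment (half treated, half control).
z = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n - n // 2)])

# Exposure summary: fraction of treated neighbours of each unit.
deg = A.sum(1).clip(min=1)
peer = (A @ z) / deg

# Hypothetical outcome model: direct effect 2.0, peer effect 1.5, unit noise.
y = 1.0 + 2.0 * z + 1.5 * peer + rng.normal(0.0, 1.0, n)

# Simple stand-in for model-based estimation: OLS on (1, z, peer); the fitted
# coefficients approximate the primary and peer effects of interest.
X = np.column_stack([np.ones(n), z, peer])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print("estimated direct effect:", beta[1], "estimated peer effect:", beta[2])
```

The thesis estimates such effects by Bayesian imputation of the missing network potential outcomes rather than by OLS; the sketch only shows why the relevant network covariate (here, the neighbour-treatment fraction) must enter the outcome model.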
{
"abstract": " Let $F(x)=(f_1(x), \\dots, f_m(x))$ be such that $1, f_1, \\dots, f_m$ are\nlinearly independent polynomials with real coefficients. Based on ideas of\nBachoc, DeCorte, Oliveira and Vallentin in combination with estimating certain\noscillatory integrals with polynomial phase we will show that the independence\nratio of the Cayley graph of $\\mathbb{R}^m$ with respect to the portion of the\ngraph of $F$ defined by $a\\leq \\log |s| \\leq T$ is at most $O(1/(T-a))$. We\nconclude that if $I \\subseteq \\mathbb{R}^m$ has positive upper density, then\nthe difference set $I-I$ contains vectors of the form $F(s)$ for an unbounded\nset of values $s \\in \\mathbb{R}$. It follows that the Borel chromatic number of\nthe Cayley graph of $\\mathbb{R}^m$ with respect to the set $\\{ \\pm F(s): s \\in\n\\mathbb{R} \\}$ is infinite. Analogous results are also proven when $\\mathbb{R}$\nis replaced by the field of $p$-adic numbers $\\mathbb{Q}_p$. At the end, we\nwill also the existence of real analytic functions $f_1, \\dots, f_m$, for which\nthe analogous statements no longer hold.\n",
"title": "Polynomial configurations in sets of positive upper density over local fields"
}
| null | null | null | null | true | null |
13598
| null |
Default
| null | null |
null |
{
"abstract": " We consider several notions of genericity appearing in algebraic geometry and\ncommutative algebra. Special emphasis is put on various stability notions which\nare defined in a combinatorial manner and for which a number of equivalent\nalgebraic characterisations are provided. It is shown that in characteristic\nzero the corresponding generic positions can be obtained with a simple\ndeterministic algorithm. In positive characteristic, only adapted stable\npositions are reachable except for quasi-stability which is obtainable in any\ncharacteristic.\n",
"title": "Deterministic Genericity for Polynomial Ideals"
}
| null | null | null | null | true | null |
13599
| null |
Default
| null | null |
null |
{
"abstract": " CubeSats are emerging as low-cost tools to perform astronomy, exoplanet\nsearches and earth observation. These satellites can target an object for\nscience observation for weeks on end. This is typically not possible on larger\nmissions where usage time is shared. The problem of designing an attitude\ncontrol system for CubeSat telescopes is very challenging because current\nchoice of actuators such as reaction-wheels and magnetorquers can induce jitter\non the spacecraft due to moving mechanical parts and due to external\ndisturbances. These telescopes may contain cryo-pumps and servos that introduce\nadditional vibrations. A better solution is required. In our paper, we analyze\nthe feasibility of utilizing solar radiation pressure (SRP) and radiometric\nforce to achieve precise attitude control. Our studies show radiometric\nactuators to be a viable method to achieve precise pointing. The device uses 8\nthin vanes of different temperatures placed in a near-vacuum chamber. These\nchambers contain trace quantities of lightweight, inert gasses like argon. The\ntemperature gradient across the vanes causes the gas molecules to strike the\nvanes differently and thus inducing a force. By controlling these forces, it's\npossible to produce a torque to precisely point or spin a spacecraft. We\npresent a conceptual design of a CubeSat that is equipped with these actuators.\nWe then analyze the potential slew maneuver and slew rates possible with these\nactuators by simulating their performance. Our analytical and simulation\nresults point towards a promising pathway for laboratory testing of this\ntechnology and demonstration of this technology in space.\n",
"title": "Precise Pointing of Cubesat Telescopes: Comparison Between Heat and Light Induced Attitude Control Methods"
}
| null | null |
[
"Physics"
] | null | true | null |
13600
| null |
Validated
| null | null |
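To make the slew-rate discussion above concrete, here is a minimal single-axis back-of-the-envelope simulation. It is not the authors' model; the moment of inertia, radiometric force, and moment-arm values are assumptions chosen only to illustrate the order of magnitude of slew times achievable with micro-newton-scale forces.

```python
import numpy as np

# Single-axis slew of a small CubeSat-like body under a constant torque,
# illustrative of a radiometric-actuator slew estimate (all numbers assumed).
I = 0.05          # kg m^2, assumed moment of inertia about the slew axis
F = 5e-6          # N, assumed net radiometric force from the vane pairs
r = 0.05          # m, assumed moment arm of the vanes
tau = F * r       # N m, resulting control torque

dt, t_end = 0.1, 600.0
theta, omega = 0.0, 0.0
target = np.deg2rad(30.0)

t = 0.0
while t < t_end and theta < target / 2:       # accelerate over half the slew angle
    omega += (tau / I) * dt
    theta += omega * dt
    t += dt
# A mirror-symmetric deceleration phase gives a total slew time of roughly 2 * t.
print(f"estimated 30-degree slew time: {2 * t:.0f} s, peak rate {np.rad2deg(omega):.3f} deg/s")
```

With these assumed numbers the bang-bang maneuver completes a 30-degree slew in roughly ten minutes at a peak rate below 0.1 deg/s, which is the kind of slow, jitter-free motion the abstract argues such actuators could provide.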