| text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, lengths 1-5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " In the theory of second-order, nonlinear elliptic and parabolic equations,\nobtaining local or global gradient bounds is often a key step for proving the\nexistence of solutions but it may be even more useful in many applications, for\nexample to singular perturbations problems. The classical Bernstein's method is\nthe well-known tool to obtain these bounds but, in most cases, it has the\ndefect of providing only a priori estimates. The \"weak Bernstein's method\" ,\nbased on viscosity solutions' theory, is an alternative way to prove the global\nLipschitz regularity of solutions together with some estimates but it is not so\neasy to perform in the case of local bounds. The aim of this paper is to\nprovide an extension of the \"weak Bernstein's method\" which allows to prove\nlocal gradient bounds with reasonnable technicalities. The classical\nBernstein's method is a well-known tool for obtaining gradient estimates for\nsolutions of second-order, elliptic and parabolic equations\n",
"title": "Local Gradient Estimates for Second-Order Nonlinear Elliptic and Parabolic Equations by the Weak Bernstein's Method"
}
| null | null | null | null | true | null |
11901
| null |
Default
| null | null |
null |
{
"abstract": " The purpose of this study is to investigate the structure and evolution of\nknowledge spillovers across technological domains. Specifically, dynamic\npatterns of knowledge flow among 29 technological domains, measured by patent\ncitations for eight distinct periods, are identified and link prediction is\ntested for capability for forecasting the evolution in these cross-domain\npatent networks. The overall success of the predictions using the Katz metric\nimplies that there is a tendency to generate increased knowledge flows mostly\nwithin the set of previously linked technological domains. This study\ncontributes to innovation studies by characterizing the structural change and\nevolutionary behaviors in dynamic technology networks and by offering the basis\nfor predicting the emergence of future technological knowledge flows.\n",
"title": "Dynamic patterns of knowledge flows across technological domains: empirical results and link prediction"
}
| null | null | null | null | true | null |
11902
| null |
Default
| null | null |
null |
{
"abstract": " Alternative expressions for calculating the oblate spheroidal radial\nfunctions of both kinds R1ml and R2ml are shown to provide accurate values over\nvery large parameter ranges using 64 bit arithmetic, even where the traditional\nexpressions fail. First is the expansion of the product of a radial function\nand the angular function of the first kind in a series of products of the\ncorresponding spherical functions, with the angular coordinate being a free\nparameter. Setting the angular coordinate equal to zero leads to accurate\nvalues for R2ml when the radial coordinate xi is larger than 0.01 and l is\nsomewhat larger than m. Allowing it to vary with increasing l leads to highly\naccurate values for R1ml over all parameter ranges. Next is the calculation of\nR2ml as an integral of the product of S1ml and a spherical Neumann function\nkernel. This is useful for smaller values of xi. Also used is the near equality\nof pairs of low order eigenvalues when the size parameter c is large that leads\nto accurate values for R2ml using neighboring accurate values for R1ml. A\nmodified method is described that provides accurate values for the necessary\nexpansion coefficients when c is large and l is near m and traditional methods\nfail. A resulting Fortran computer program Oblfcn almost always provides radial\nfunction values with at least 8 accurate decimal digits using 64 bit arithmetic\nfor m up to at least 1000 with c up to at least 2000 when xi is greater than\n0.000001 and c up to at least 5000 when xi is greater than 0.01. Use of 128 bit\narithmetic extends the accuracy to 15 or more digits and extends xi to all\nvalues other than zero. Oblfcn is freely available.\n",
"title": "Accurate calculation of oblate spheroidal wave functions"
}
| null | null |
[
"Mathematics"
] | null | true | null |
11903
| null |
Validated
| null | null |
null |
{
"abstract": " Learning cooperative policies for multi-agent systems is often challenged by\npartial observability and a lack of coordination. In some settings, the\nstructure of a problem allows a distributed solution with limited\ncommunication. Here, we consider a scenario where no communication is\navailable, and instead we learn local policies for all agents that collectively\nmimic the solution to a centralized multi-agent static optimization problem.\nOur main contribution is an information theoretic framework based on rate\ndistortion theory which facilitates analysis of how well the resulting fully\ndecentralized policies are able to reconstruct the optimal solution. Moreover,\nthis framework provides a natural extension that addresses which nodes an agent\nshould communicate with to improve the performance of its individual policy.\n",
"title": "Fully Decentralized Policies for Multi-Agent Systems: An Information Theoretic Approach"
}
| null | null | null | null | true | null |
11904
| null |
Default
| null | null |
null |
{
"abstract": " The ablation of solid tin surfaces by an 800-nanometer-wavelength laser is\nstudied for a pulse length range from 500 fs to 4.5 ps and a fluence range\nspanning 0.9 to 22 J/cm^2. The ablation depth and volume are obtained employing\na high-numerical-aperture optical microscope, while the ion yield and energy\ndistributions are obtained from a set of Faraday cups set up under various\nangles. We found a slight increase of the ion yield for an increasing pulse\nlength, while the ablation depth is slightly decreasing. The ablation volume\nremained constant as a function of pulse length. The ablation depth follows a\ntwo-region logarithmic dependence on the fluence, in agreement with the\navailable literature and theory. In the examined fluence range, the ion yield\nangular distribution is sharply peaked along the target normal at low fluences\nbut rapidly broadens with increasing fluence. The total ionization fraction\nincreases monotonically with fluence to a 5-6% maximum, which is substantially\nlower than the typical ionization fractions obtained with nanosecond-pulse\nablation. The angular distribution of the ions does not depend on the laser\npulse length within the measurement uncertainty. These results are of\nparticular interest for the possible utilization of fs-ps laser systems in\nplasma sources of extreme ultraviolet light for nanolithography.\n",
"title": "Ion distribution and ablation depth measurements of a fs-ps laser-irradiated solid tin target"
}
| null | null | null | null | true | null |
11905
| null |
Default
| null | null |
null |
{
"abstract": " The most precise local measurements of $H_0$ rely on observations of Type Ia\nsupernovae (SNe Ia) coupled with Cepheid distances to SN Ia host galaxies.\nRecent results have shown tension comparing $H_0$ to the value inferred from\nCMB observations assuming $\\Lambda$CDM, making it important to check for\npotential systematic uncertainties in either approach. To date, precise local\n$H_0$ measurements have used SN Ia distances based on optical photometry, with\ncorrections for light curve shape and colour. Here, we analyse SNe Ia as\nstandard candles in the near-infrared (NIR), where intrinsic variations in the\nsupernovae and extinction by dust are both reduced relative to the optical.\nFrom a combined fit to 9 nearby calibrator SNe with host Cepheid distances from\nRiess et al. (2016) and 27 SNe in the Hubble flow, we estimate the absolute\npeak $J$ magnitude $M_J = -18.524\\;\\pm\\;0.041$ mag and $H_0 = 72.8\\;\\pm\\;1.6$\n(statistical) $\\pm$ 2.7 (systematic) km s$^{-1}$ Mpc$^{-1}$. The 2.2 $\\%$\nstatistical uncertainty demonstrates that the NIR provides a compelling avenue\nto measuring SN Ia distances, and for our sample the intrinsic (unmodeled) peak\n$J$ magnitude scatter is just $\\sim$0.10 mag, even without light curve shape or\ncolour corrections. Our results do not vary significantly with different sample\nselection criteria, though photometric calibration in the NIR may be a dominant\nsystematic uncertainty. Our findings suggest that tension in the competing\n$H_0$ distance ladders is likely not a result of supernova systematics that\ncould be expected to vary between optical and NIR wavelengths, like dust\nextinction. We anticipate further improvements in $H_0$ with a larger\ncalibrator sample of SNe Ia with Cepheid distances, more Hubble flow SNe Ia\nwith NIR light curves, and better use of the full NIR photometric data set\nbeyond simply the peak $J$-band magnitude.\n",
"title": "Measuring the Hubble constant with Type Ia supernovae as near-infrared standard candles"
}
| null | null | null | null | true | null |
11906
| null |
Default
| null | null |
null |
{
"abstract": " Deep learning thrives with large neural networks and large datasets. However,\nlarger networks and larger datasets result in longer training times that impede\nresearch and development progress. Distributed synchronous SGD offers a\npotential solution to this problem by dividing SGD minibatches over a pool of\nparallel workers. Yet to make this scheme efficient, the per-worker workload\nmust be large, which implies nontrivial growth in the SGD minibatch size. In\nthis paper, we empirically show that on the ImageNet dataset large minibatches\ncause optimization difficulties, but when these are addressed the trained\nnetworks exhibit good generalization. Specifically, we show no loss of accuracy\nwhen training with large minibatch sizes up to 8192 images. To achieve this\nresult, we adopt a hyper-parameter-free linear scaling rule for adjusting\nlearning rates as a function of minibatch size and develop a new warmup scheme\nthat overcomes optimization challenges early in training. With these simple\ntechniques, our Caffe2-based system trains ResNet-50 with a minibatch size of\n8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using\ncommodity hardware, our implementation achieves ~90% scaling efficiency when\nmoving from 8 to 256 GPUs. Our findings enable training visual recognition\nmodels on internet-scale data with high efficiency.\n",
"title": "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour"
}
| null | null | null | null | true | null |
11907
| null |
Default
| null | null |
null |
{
"abstract": " For conventional secret sharing, if cheaters can submit possibly forged\nshares after observing shares of the honest users in the reconstruction phase\nthen they cannot only disturb the protocol but also only they may reconstruct\nthe true secret. To overcome the problem, secret sharing scheme with properties\nof cheater-identification have been proposed. Existing protocols for\ncheater-identifiable secret sharing assumed non-rushing cheaters or honest\nmajority. In this paper, we remove both conditions simultaneously, and give its\nuniversal construction from any secret sharing scheme. To resolve this end, we\npropose the concepts of \"individual identification\" and \"agreed\nidentification\".\n",
"title": "Universal Construction of Cheater-Identifiable Secret Sharing Against Rushing Cheaters Based on Message Authentication"
}
| null | null | null | null | true | null |
11908
| null |
Default
| null | null |
null |
{
"abstract": " This paper presents a Center of Mass (CoM) based manipulation and regrasp\nplanner that implements stability constraints to preserve the robot balance.\nThe planner provides a graph of IK-feasible, collision-free and stable motion\nsequences, constructed using an energy based motion planning algorithm. It\nassures that the assembly motions are stable and prevent the robot from falling\nwhile performing dexterous tasks in different situations. Furthermore, the\nconstraints are also used to perform an RRT-inspired task-related stability\nestimation in several simulations. The estimation can be used to select between\nsingle-arm and dual-arm regrasping configurations to achieve more stability and\nrobustness for a given manipulation task. To validate the planner and the\ntask-related stability estimations, several tests are performed in simulations\nand real-world experiments involving the HRP5P humanoid robot, the 5th\ngeneration of the HRP robot family. The experiment results suggest that the\nplanner and the task-related stability estimation provide robust behavior for\nthe humanoid robot while performing regrasp tasks.\n",
"title": "Regrasp Planning Considering Bipedal Stability Constraints"
}
| null | null | null | null | true | null |
11909
| null |
Default
| null | null |
null |
{
"abstract": " In the present paper, a continuum model is introduced for fluid flow in a\ndeformable porous medium, where the fluid may undergo phase transitions.\nTypically, such problems arise in modeling liquid-solid phase transformations\nin groundwater flows. The system of equations is derived here from the\nconservation principles for mass, momentum, and energy and from the\nClausius-Duhem inequality for entropy. It couples the evolution of the\ndisplacement in the matrix material, of the capillary pressure, of the absolute\ntemperature, and of the phase fraction. Mathematical results are proved under\nthe additional hypothesis that inertia effects and shear stresses can be\nneglected. For the resulting highly nonlinear system of two PDEs, one ODE and\none ordinary differential inclusion with natural initial and boundary\nconditions, existence of global in time solutions is proved by means of cut-off\ntechniques and suitable Moser-type estimates.\n",
"title": "Unsaturated deformable porous media flow with phase transition"
}
| null | null | null | null | true | null |
11910
| null |
Default
| null | null |
null |
{
"abstract": " The convergence speed of stochastic gradient descent (SGD) can be improved by\nactively selecting mini-batches. We explore sampling schemes where similar data\npoints are less likely to be selected in the same mini-batch. In particular, we\nprove that such repulsive sampling schemes lowers the variance of the gradient\nestimator. This generalizes recent work on using Determinantal Point Processes\n(DPPs) for mini-batch diversification (Zhang et al., 2017) to the broader class\nof repulsive point processes. We first show that the phenomenon of variance\nreduction by diversified sampling generalizes in particular to non-stationary\npoint processes. We then show that other point processes may be computationally\nmuch more efficient than DPPs. In particular, we propose and investigate\nPoisson Disk sampling---frequently encountered in the computer graphics\ncommunity---for this task. We show empirically that our approach improves over\nstandard SGD both in terms of convergence speed as well as final model\nperformance.\n",
"title": "Active Mini-Batch Sampling using Repulsive Point Processes"
}
| null | null | null | null | true | null |
11911
| null |
Default
| null | null |
null |
{
"abstract": " This paper studies a problem of inverse visual path planning: creating a\nvisual scene from a first person action. Our conjecture is that the spatial\narrangement of a first person visual scene is deployed to afford an action, and\ntherefore, the action can be inversely used to synthesize a new scene such that\nthe action is feasible. As a proof-of-concept, we focus on linking visual\nexperiences induced by walking.\nA key innovation of this paper is a concept of ActionTunnel---a 3D virtual\ntunnel along the future trajectory encoding what the wearer will visually\nexperience as moving into the scene. This connects two distinctive first person\nimages through similar walking paths. Our method takes a first person image\nwith a user defined future trajectory and outputs a new image that can afford\nthe future motion. The image is created by combining present and future\nActionTunnels in 3D where the missing pixels in adjoining area are computed by\na generative adversarial network. Our work can provide a travel across\ndifferent first person experiences in diverse real world scenes.\n",
"title": "Customizing First Person Image Through Desired Actions"
}
| null | null | null | null | true | null |
11912
| null |
Default
| null | null |
null |
{
"abstract": " We reconsider the minimization of the compliance of a two dimensional elastic\nbody with traction boundary conditions for a given weight. It is well known how\nto rewrite this optimal design problem as a nonlinear variational problem. We\ntake the limit of vanishing weight by sending a suitable Lagrange multiplier to\ninfinity in the variational formulation. We show that the limit, in the sense\nof $\\Gamma$-convergence, is a certain Michell truss problem. This proves a\nconjecture by Kohn and Allaire.\n",
"title": "Michell trusses in two dimensions as a Gamma-limit of optimal design problems in linear elasticity"
}
| null | null |
[
"Mathematics"
] | null | true | null |
11913
| null |
Validated
| null | null |
null |
{
"abstract": " To help with the planning of inter-vehicular communication networks, an\naccurate understanding of traffic behavior and traffic phase transition is\nrequired. We calculate inter-vehicle spacings from empirical data collected in\na multi-lane highway in California, USA. We calculate the correlation\ncoefficients for spacings between vehicles in individual lanes to show that the\nflows are independent. We determine the first four moments for individual lanes\nat regular time intervals, namely the mean, variance, skewness and kurtosis. We\nfollow the evolution of these moments as the traffic condition changes from the\nlow-density free flow to high-density congestion. We find that the higher\nmoments of inter-vehicle spacings have a well defined dependence on the mean\nvalue. The variance of the spacing distribution monotonously increases with the\nmean vehicle spacing. In contrast, our analysis suggests that the skewness and\nkurtosis provide one of the most sensitive probes towards the search for the\ncritical points. We find two significant results. First, the kurtosis\ncalculated in different time intervals for different lanes varies smoothly with\nthe skewness. They share the same behavior with the skewness and kurtosis\ncalculated for probability density functions that depend on a single parameter.\nSecond, the skewness and kurtosis as functions of the mean intervehicle spacing\nshow sharp peaks at critical densities expected for transitions between\ndifferent traffic phases. The data show a considerable scatter near the peak\npositions, which suggests that the critical behavior may depend on other\nparameters in addition to the traffic density.\n",
"title": "Moment analysis of highway-traffic clearance distribution"
}
| null | null | null | null | true | null |
11914
| null |
Default
| null | null |
null |
{
"abstract": " As many different 3D volumes could produce the same 2D x-ray image, inverting\nthis process is challenging. We show that recent deep learning-based\nconvolutional neural networks can solve this task. As the main challenge in\nlearning is the sheer amount of data created when extending the 2D image into a\n3D volume, we suggest firstly to learn a coarse, fixed-resolution volume which\nis then fused in a second step with the input x-ray into a high-resolution\nvolume. To train and validate our approach we introduce a new dataset that\ncomprises of close to half a million computer-simulated 2D x-ray images of 3D\nvolumes scanned from 175 mammalian species. Applications of our approach\ninclude stereoscopic rendering of legacy x-ray images, re-rendering of x-rays\nincluding changes of illumination, view pose or geometry. Our evaluation\nincludes comparison to previous tomography work, previous learning methods\nusing our data, a user study and application to a set of real x-rays.\n",
"title": "Single-image Tomography: 3D Volumes from 2D Cranial X-Rays"
}
| null | null |
[
"Computer Science"
] | null | true | null |
11915
| null |
Validated
| null | null |
null |
{
"abstract": " We consider minimization of stochastic functionals that are compositions of a\n(potentially) non-smooth convex function $h$ and smooth function $c$ and, more\ngenerally, stochastic weakly-convex functionals. We develop a family of\nstochastic methods---including a stochastic prox-linear algorithm and a\nstochastic (generalized) sub-gradient procedure---and prove that, under mild\ntechnical conditions, each converges to first-order stationary points of the\nstochastic objective. We provide experiments further investigating our methods\non non-smooth phase retrieval problems; the experiments indicate the practical\neffectiveness of the procedures.\n",
"title": "Stochastic Methods for Composite and Weakly Convex Optimization Problems"
}
| null | null | null | null | true | null |
11916
| null |
Default
| null | null |
null |
{
"abstract": " We have performed high-resolution powder x-ray diffraction measurements on a\nsample of $^{242}$PuCoGa$_{5}$, the heavy-fermion superconductor with the\nhighest critical temperature $T_{c}$ = 18.7 K. The results show that the\ntetragonal symmetry of its crystallographic lattice is preserved down to 2 K.\nMarginal evidence is obtained for an anomalous behaviour below $T_{c}$ of the\n$a$ and $c$ lattice parameters. The observed thermal expansion is isotropic\ndown to 150 K, and becomes anisotropic for lower temperatures. This gives a\n$c/a$ ratio that decreases with increasing temperature to become almost\nconstant above $\\sim$150 K. The volume thermal expansion coefficient\n$\\alpha_{V}$ has a jump at $T_{c}$, a factor $\\sim$20 larger than the change\npredicted by the Ehrenfest relation for a second order phase transition. The\nvolume expansion deviates from the curve expected for the conventional\nanharmonic behaviour described by a simple Grüneisen-Einstein model. The\nobserved differences are about ten times larger than the statistical error bars\nbut are too small to be taken as an indication for the proximity of the system\nto a valence instability that is avoided by the superconducting state.\n",
"title": "Thermal Expansion of the Heavy-fermion Superconductor PuCoGa$_{5}$"
}
| null | null | null | null | true | null |
11917
| null |
Default
| null | null |
null |
{
"abstract": " Conditional generators learn the data distribution for each class in a\nmulti-class scenario and generate samples for a specific class given the right\ninput from the latent space. In this work, a method known as \"Versatile\nAuxiliary Classifier with Generative Adversarial Network\" for multi-class\nscenarios is presented. In this technique, the Generative Adversarial Networks\n(GAN)'s generator is turned into a conditional generator by placing a\nmulti-class classifier in parallel with the discriminator network and\nbackpropagate the classification error through the generator. This technique is\nversatile enough to be applied to any GAN implementation. The results on two\ndatabases and comparisons with other method are provided as well.\n",
"title": "Versatile Auxiliary Classifier with Generative Adversarial Network (VAC+GAN), Multi Class Scenarios"
}
| null | null | null | null | true | null |
11918
| null |
Default
| null | null |
null |
{
"abstract": " The need to develop models to predict the motion of microrobots, or robots of\na much smaller scale, moving in fluids in a low Reynolds number regime, and in\nparticular, in non Newtonian fluids, cannot be understated. The article\ndevelops a Lagrangian based model for one such mechanism - a two-link mechanism\ntermed a microscallop, moving in a low Reynolds number environment in a non\nNewtonian fluid. The modelling proceeds through the conventional Lagrangian\nconstruction for a two-link mechanism and then goes on to model the external\nfluid forces using empirically based models for viscosity to complete the\ndynamic model. The derived model is then simulated for different initial\nconditions and key parameters of the non Newtonian fluid, and the results are\ncorroborated with a few existing experimental results on a similar mechanism\nunder identical conditions.\n",
"title": "A Lagrangian Model to Predict Microscallop Motion in non Newtonian Fluids"
}
| null | null |
[
"Computer Science"
] | null | true | null |
11919
| null |
Validated
| null | null |
null |
{
"abstract": " We employ a hybrid approach in determining the anomalous dimension and OPE\ncoefficient of higher spin operators in the Wilson-Fisher theory. First we do a\nlarge spin analysis for CFT data where we use results obtained from the usual\nand the Mellin Bootstrap and also from Feynman diagram literature. This gives\nnew predictions at $O(\\epsilon^4)$ and $O(\\epsilon^5)$ for anomalous dimensions\nand OPE coefficients, and also provides a cross-check for the results from\nMellin Bootstrap. These higher orders get contributions from all higher spin\noperators in the crossed channel. We also use the Bootstrap in Mellin space\nmethod for $\\phi^3$ in $d=6-\\epsilon$ CFT where we calculate general higher\nspin OPE data. We demonstrate a higher loop order calculation in this approach\nby summing over contributions from higher spin operators of the crossed channel\nin the same spirit as before.\n",
"title": "Towards a Bootstrap approach to higher orders of epsilon expansion"
}
| null | null | null | null | true | null |
11920
| null |
Default
| null | null |
null |
{
"abstract": " We study the impact of thermal inflation on the formation of cosmological\nstructures and present astrophysical observables which can be used to constrain\nand possibly probe the thermal inflation scenario. These are dark matter halo\nabundance at high redshifts, satellite galaxy abundance in the Milky Way, and\nfluctuation in the 21-cm radiation background before the epoch of reionization.\nThe thermal inflation scenario leaves a characteristic signature on the matter\npower spectrum by boosting the amplitude at a specific wavenumber determined by\nthe number of e-foldings during thermal inflation ($N_{\\rm bc}$), and strongly\nsuppressing the amplitude for modes at smaller scales. For a reasonable range\nof parameter space, one of the consequences is the suppression of minihalo\nformation at high redshifts and that of satellite galaxies in the Milky Way.\nWhile this effect is substantial, it is degenerate with other cosmological or\nastrophysical effects. The power spectrum of the 21-cm background probes this\nimpact more directly, and its observation may be the best way to constrain the\nthermal inflation scenario due to the characteristic signature in the power\nspectrum. The Square Kilometre Array (SKA) in phase 1 (SKA1) has sensitivity\nlarge enough to achieve this goal for models with $N_{\\rm bc}\\gtrsim 26$ if a\n10000-hr observation is performed. The final phase SKA, with anticipated\nsensitivity about an order of magnitude higher, seems more promising and will\ncover a wider parameter space.\n",
"title": "Small-scale Effects of Thermal Inflation on Halo Abundance at High-$z$, Galaxy Substructure Abundance and 21-cm Power Spectrum"
}
| null | null | null | null | true | null |
11921
| null |
Default
| null | null |
null |
{
"abstract": " We prove that the space of dominant/non-constant holomorphic mappings from a\nproduct of hyperbolic Riemann surfaces of finite type into certain hyperbolic\nmanifolds with universal cover a bounded domain is a finite set.\n",
"title": "Finiteness theorems for holomorphic mappings from products of hyperbolic Riemann surfaces"
}
| null | null | null | null | true | null |
11922
| null |
Default
| null | null |
null |
{
"abstract": " Time spent in leisure is not a minor research question as it is acknowledged\nas a key aspect of one's quality of life. The primary aim of this article is to\nqualify time and Internet use of Italian Generation Y beyond media hype and\nassumptions. To this aim, we apply a multidimensional extension of Item\nResponse Theory models to the Italian \"Multipurpose survey on households:\naspects of daily life\" to ascertain the relevant dimensions of Generation Y\ntime-use. We show that the use of technology is neither the first nor the\nforemost time-use activity of Italian Generation Y, who still prefers to use\nits time to socialise and have fun with friends in a non media-medalled manner.\n",
"title": "Time and media-use of Italian Generation Y: dimensions of leisure preferences"
}
| null | null | null | null | true | null |
11923
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we are interested in the problem of image segmentation given\nnatural language descriptions, i.e. referring expressions. Existing works\ntackle this problem by first modeling images and sentences independently and\nthen segment images by combining these two types of representations. We argue\nthat learning word-to-image interaction is more native in the sense of jointly\nmodeling two modalities for the image segmentation task, and we propose\nconvolutional multimodal LSTM to encode the sequential interactions between\nindividual words, visual information, and spatial information. We show that our\nproposed model outperforms the baseline model on benchmark datasets. In\naddition, we analyze the intermediate output of the proposed multimodal LSTM\napproach and empirically explain how this approach enforces a more effective\nword-to-image interaction.\n",
"title": "Recurrent Multimodal Interaction for Referring Image Segmentation"
}
| null | null |
[
"Computer Science"
] | null | true | null |
11924
| null |
Validated
| null | null |
null |
{
"abstract": " Anisotropy of friction force is proved to be an important factor in various\ncontact problems. We study dynamical behavior of thin plates with respect to\nsymmetric and asymmetric orthotropic friction. Terminal motion of plates with\ncircular and elliptic contact areas is mainly analyzed. Evaluation of friction\nforces for both symmetric and asymmetric orthotropic cases are shown. Regular\npressure distribution is considered. Differential equations are formulated and\nsolved numerically for a number of initial conditions. Examples show\nsignificant influence of friction force asymmetry on the motion.\n",
"title": "Motion of a thin elliptic plate under symmetric and asymmetric orthotropic friction forces"
}
| null | null | null | null | true | null |
11925
| null |
Default
| null | null |
null |
{
"abstract": " In combinatorial auctions, a designer must decide how to allocate a set of\nindivisible items amongst a set of bidders. Each bidder has a valuation\nfunction which gives the utility they obtain from any subset of the items. Our\nfocus is specifically on welfare maximization, where the objective is to\nmaximize the sum of valuations that the bidders place on the items that they\nwere allocated (the valuation functions are assumed to be reported truthfully).\nWe analyze an online problem in which the algorithm is not given the set of\nbidders in advance. Instead, the bidders are revealed sequentially in a\nuniformly random order, similarly to secretary problems. The algorithm must\nmake an irrevocable decision about which items to allocate to the current\nbidder before the next one is revealed. When the valuation functions lie in the\nclass $XOS$ (which includes submodular functions), we provide a black box\nreduction from offline to online optimization. Specifically, given an\n$\\alpha$-approximation algorithm for offline welfare maximization, we show how\nto create a $(0.199 \\alpha)$-approximation algorithm for the online problem.\nOur algorithm draws on connections to secretary problems; in fact, we show that\nthe online welfare maximization problem itself can be viewed as a particular\nkind of secretary problem with nonuniform arrival order.\n",
"title": "Combinatorial Auctions with Online XOS Bidders"
}
| null | null | null | null | true | null |
11926
| null |
Default
| null | null |
null |
{
"abstract": " The adaptability of the convolutional neural network (CNN) technique for\naerodynamic meta-modeling tasks is probed in this work. The primary objective\nis to develop suitable CNN architecture for variable flow conditions and object\ngeometry, in addition to identifying a sufficient data preparation process.\nMultiple CNN structures were trained to learn the lift coefficients of the\nairfoils with a variety of shapes in multiple flow Mach numbers, Reynolds\nnumbers, and diverse angles of attack. This is conducted to illustrate the\nconcept of the technique. A multi-layered perceptron (MLP) is also used for the\ntraining sets. The MLP results are compared with that of the CNN results. The\nnewly proposed meta-modeling concept has been found to be comparable with the\nMLP in learning capability; and more importantly, our CNN model exhibits a\ncompetitive prediction accuracy with minimal constraints in a geometric\nrepresentation.\n",
"title": "Application of Convolutional Neural Network to Predict Airfoil Lift Coefficient"
}
| null | null | null | null | true | null |
11927
| null |
Default
| null | null |
null |
{
"abstract": " A Y-linked two-sex branching process with mutations and blind choice of males\nis a suitable model for analyzing the evolution of the number of carriers of an\nallele and its mutations of a Y-linked gene. Considering a two-sex monogamous\npopulation, in this model each female chooses her partner from among the male\npopulation without caring about his type (i.e., the allele he carries). In this\nwork, we deal with the problem of estimating the main parameters of such model\ndeveloping the Bayesian inference in a parametric framework. Firstly, we\nconsider, as sample scheme, the observation of the total number of females and\nmales up to some generation as well as the number of males of each genotype at\nlast generation. Later, we introduce the information of the mutated males only\nin the last generation obtaining in this way a second sample scheme. For both\nsamples, we apply the Approximate Bayesian Computation (ABC) methodology to\napproximate the posterior distributions of the main parameters of this model.\nThe accuracy of the procedure based on these samples is illustrated and\ndiscussed by way of simulated examples.\n",
"title": "Bayesian inference in Y-linked two-sex branching processes with mutations: ABC approach"
}
| null | null | null | null | true | null |
11928
| null |
Default
| null | null |
null |
{
"abstract": " In this work we propose an effective low-energy theory for a large class of\n2+1 dimensional non-Abelian topological spin liquids whose edge states are\nconformal degrees of freedom with central charges corresponding to the coset\nstructure $su(2)_k\\oplus su(2)_{k'}/su(2)_{k+k'}$. For particular values of\n$k'$ it furnishes the series for unitary minimal and superconformal models.\nThese gapped phases were recently suggested to be obtained from an array of\none-dimensional coupled quantum wires. In doing so we provide an explicit\nrelationship between two distinct approaches: quantum wires and Chern-Simons\nbulk theory. We firstly make a direct connection between the interacting\nquantum wires and the corresponding conformal field theory at the edges, which\nturns out to be given in terms of chiral gauged WZW models. Relying on the\nbulk-edge correspondence we are able to construct the underlying non-Abelian\nChern-Simons effective field theory.\n",
"title": "Effective Theories for 2+1 Dimensional Non-Abelian Topological Spin Liquids"
}
| null | null | null | null | true | null |
11929
| null |
Default
| null | null |
null |
{
"abstract": " A self-contained method of obtaining effective theories resulting from the\nspontaneous breakdown of conformal invariance is developed. It allows to\ndemonstrate that the Nambu-Goldstone fields for special conformal\ntransformations always represent non-dynamical degrees of freedom. The standard\napproach to the same question, which includes the imposition of the inverse\nHiggs constraints, is shown to follow from the developed technique. This\nprovides an alternative view on the nature of the inverse Higgs constraints for\nthe conformal group.\n",
"title": "Coset space construction for the conformal group. II. Spontaneously broken phase and inverse Higgs phenomenon"
}
| null | null | null | null | true | null |
11930
| null |
Default
| null | null |
null |
{
"abstract": " Cell shape is an important biomarker. Previously extensive studies have\nestablished the relation between cell shape and cell function. However, the\nmorphodynamics, namely the temporal fluctuation of cell shape is much less\nunderstood. We study the morphodynamics of MDA-MB-231 cells in type I collagen\nextracellular matrix (ECM). We find ECM mechanics, as tuned by collagen\nconcentration, controls the morphodynamics but not the static cell morphology.\nBy employing machine learning techniques, we classify cell shape into five\ndifferent morphological phenotypes corresponding to different migration modes.\nAs a result, cell morphodynamics is mapped into temporal evolution of\nmorphological phenotypes. We systematically characterize the phenotype dynamics\nincluding occurrence probability, dwell time, transition flux, and also obtain\nthe invasion characteristics of each phenotype. Using a tumor organoid model,\nwe show that the distinct invasion potentials of each phenotype modulate the\nphenotype homeostasis. Overall invasion of a tumor organoid is facilitated by\nindividual cells searching for and committing to phenotypes of higher invasive\npotential. In conclusion, we show that 3D migrating cancer cells exhibit rich\nmorphodynamics that is regulated by ECM mechanics and is closely related with\ncell motility. Our results pave the way to systematic characterization and\nfunctional understanding of cell morphodynamics.\n",
"title": "The morphodynamics of 3D migrating cancer cells"
}
| null | null | null | null | true | null |
11931
| null |
Default
| null | null |
null |
{
"abstract": " We explore the use of Evolution Strategies (ES), a class of black box\noptimization algorithms, as an alternative to popular MDP-based RL techniques\nsuch as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show\nthat ES is a viable solution strategy that scales extremely well with the\nnumber of CPUs available: By using a novel communication strategy based on\ncommon random numbers, our ES implementation only needs to communicate scalars,\nmaking it possible to scale to over a thousand parallel workers. This allows us\nto solve 3D humanoid walking in 10 minutes and obtain competitive results on\nmost Atari games after one hour of training. In addition, we highlight several\nadvantages of ES as a black box optimization technique: it is invariant to\naction frequency and delayed rewards, tolerant of extremely long horizons, and\ndoes not need temporal discounting or value function approximation.\n",
"title": "Evolution Strategies as a Scalable Alternative to Reinforcement Learning"
}
| null | null | null | null | true | null |
11932
| null |
Default
| null | null |
null |
{
"abstract": " Passive Radio-Frequency IDentification (RFID) systems carry critical\nimportance for Internet of Things (IoT) applications due to their energy\nharvesting capabilities. RFID based position estimation, in particular, is\nexpected to facilitate a wide array of location based services for IoT\napplications with low-power requirements. In this paper, considering monostatic\nand bistatic configurations and 3D antenna radiation pattern, we investigate\nthe accuracy of received signal strength based wireless localization using\npassive ultra high frequency (UHF) RFID systems. The Cramer-Rao Lower Bound\n(CRLB) for the localization accuracy is derived, and is compared with the\naccuracy of maximum likelihood estimators for various RFID antenna\nconfigurations. Numerical results show that due to RFID tag/antenna\nsensitivity, and the directional antenna pattern, the localization accuracy can\ndegrade at blind locations that remain outside of the RFID reader antennas'\nmain beam patterns. In such cases optimizing elevation angle of antennas are\nshown to improve localization coverage, while using bistatic configuration\nimproves localization accuracy significantly.\n",
"title": "IoT Localization for Bistatic Passive UHF RFID Systems with 3D Radiation Pattern"
}
| null | null |
[
"Computer Science"
] | null | true | null |
11933
| null |
Validated
| null | null |
null |
{
"abstract": " We present a new method for numerical hydrodynamics which uses a\nmultidimensional generalisation of the Roe solver and operates on an\nunstructured triangular mesh. The main advantage over traditional methods based\non Riemann solvers, which commonly use one-dimensional flux estimates as\nbuilding blocks for a multidimensional integration, is its inherently\nmultidimensional nature, and as a consequence its ability to recognise\nmultidimensional stationary states that are not hydrostatic. A second novelty\nis the focus on Graphics Processing Units (GPUs). By tailoring the algorithms\nspecifically to GPUs we are able to get speedups of 100-250 compared to a\ndesktop machine. We compare the multidimensional upwind scheme to a\ntraditional, dimensionally split implementation of the Roe solver on several\ntest problems, and we find that the new method significantly outperforms the\nRoe solver in almost all cases. This comes with increased computational costs\nper time step, which makes the new method approximately a factor of 2 slower\nthan a dimensionally split scheme acting on a structured grid.\n",
"title": "Multidimensional upwind hydrodynamics on unstructured meshes using Graphics Processing Units I. Two-dimensional uniform meshes"
}
| null | null | null | null | true | null |
11934
| null |
Default
| null | null |
null |
{
"abstract": " An effective approach to non-parallel voice conversion (VC) is to utilize\ndeep neural networks (DNNs), specifically variational auto encoders (VAEs), to\nmodel the latent structure of speech in an unsupervised manner. A previous\nstudy has confirmed the ef- fectiveness of VAE using the STRAIGHT spectra for\nVC. How- ever, VAE using other types of spectral features such as mel- cepstral\ncoefficients (MCCs), which are related to human per- ception and have been\nwidely used in VC, have not been prop- erly investigated. Instead of using one\nspecific type of spectral feature, it is expected that VAE may benefit from\nusing multi- ple types of spectral features simultaneously, thereby improving\nthe capability of VAE for VC. To this end, we propose a novel VAE framework\n(called cross-domain VAE, CDVAE) for VC. Specifically, the proposed framework\nutilizes both STRAIGHT spectra and MCCs by explicitly regularizing multiple\nobjectives in order to constrain the behavior of the learned encoder and de-\ncoder. Experimental results demonstrate that the proposed CD- VAE framework\noutperforms the conventional VAE framework in terms of subjective tests.\n",
"title": "Voice Conversion Based on Cross-Domain Features Using Variational Auto Encoders"
}
| null | null |
[
"Computer Science"
] | null | true | null |
11935
| null |
Validated
| null | null |
null |
{
"abstract": " Exchange hole is the principle constituent in density functional theory,\nwhich can be used to accurately design exchange energy functional and range\nseparated hybrid functionals coupled with some appropriate correlation.\nRecently, density matrix expansion (DME) based semi-local exchange hole\nproposed by Tao-Mo gained attention due to its fulfillment of some exact\nconstraints. We propose a new long-range corrected (LC) scheme that combines\nmeta-generalized gradient approximation (meta-GGA) exchange functionals\ndesigned from DME exchange hole coupled with the ab-initio Hartree-Fock (HF)\nexchange integral by separating the Coulomb interaction operator using standard\nerror function. Associate with Lee-Yang-Parr (LYP) correlation functional,\nassessment and benchmarking of our functional using well-known test set shows\nthat it performs remarkably well for a broad range of molecular properties,\nsuch as thermochemistry, noncovalent interaction and barrier height of chemical\nreactions.\n",
"title": "Density matrix expansion based semi-local exchange hole applied to range separated density functional theory"
}
| null | null | null | null | true | null |
11936
| null |
Default
| null | null |
null |
{
"abstract": " We investigate the environmental quenching of galaxies, especially those with\nstellar masses (M*)$<10^{9.5} M_\\odot$, beyond the local universe. Essentially\nall local low-mass quenched galaxies (QGs) are believed to live close to\nmassive central galaxies, which is a demonstration of environmental quenching.\nWe use CANDELS data to test {\\it whether or not} such a dwarf QG--massive\ncentral galaxy connection exists beyond the local universe. To this purpose, we\nonly need a statistically representative, rather than a complete, sample of\nlow-mass galaxies, which enables our study to $z\\gtrsim1.5$. For each low-mass\ngalaxy, we measure the projected distance ($d_{proj}$) to its nearest massive\nneighbor (M*$>10^{10.5} M_\\odot$) within a redshift range. At a given redshift\nand M*, the environmental quenching effect is considered to be observed if the\n$d_{proj}$ distribution of QGs ($d_{proj}^Q$) is significantly skewed toward\nlower values than that of star-forming galaxies ($d_{proj}^{SF}$). For galaxies\nwith $10^{8} M_\\odot < M* < 10^{10} M_\\odot$, such a difference between\n$d_{proj}^Q$ and $d_{proj}^{SF}$ is detected up to $z\\sim1$. Also, about 10\\%\nof the quenched galaxies in our sample are located between two and four virial\nradii ($R_{Vir}$) of the massive halos. The median projected distance from\nlow-mass QGs to their massive neighbors, $d_{proj}^Q / R_{Vir}$, decreases with\nsatellite M* at $M* \\lesssim 10^{9.5} M_\\odot$, but increases with satellite M*\nat $M* \\gtrsim 10^{9.5} M_\\odot$. This trend suggests a smooth, if any,\ntransition of the quenching timescale around $M* \\sim 10^{9.5} M_\\odot$ at\n$0.5<z<1.0$.\n",
"title": "CANDELS Sheds Light on the Environmental Quenching of Low-mass Galaxies"
}
| null | null | null | null | true | null |
11937
| null |
Default
| null | null |
null |
{
"abstract": " We determine the BP-module structure, mod higher filtration, of the main part\nof the BP-homology of elementary abelian 2-groups. The action is related to\nsymmetric polynomials and to Dickson invariants.\n",
"title": "BP-homology of elementary abelian 2-groups: BP-module structure"
}
| null | null | null | null | true | null |
11938
| null |
Default
| null | null |
null |
{
"abstract": " We consider the problem of learning the functions computing children from\nparents in a Structural Causal Model once the underlying causal graph has been\nidentified. This is in some sense the second step after causal discovery.\nTaking a probabilistic approach to estimating these functions, we derive a\nnatural myopic active learning scheme that identifies the intervention which is\noptimally informative about all of the unknown functions jointly, given\npreviously observed data. We test the derived algorithms on simple examples, to\ndemonstrate that they produce a structured exploration policy that\nsignificantly improves on unstructured base-lines.\n",
"title": "Probabilistic Active Learning of Functions in Structural Causal Models"
}
| null | null | null | null | true | null |
11939
| null |
Default
| null | null |
null |
{
"abstract": " Deep reinforcement learning methods attain super-human performance in a wide\nrange of environments. Such methods are grossly inefficient, often taking\norders of magnitudes more data than humans to achieve reasonable performance.\nWe propose Neural Episodic Control: a deep reinforcement learning agent that is\nable to rapidly assimilate new experiences and act upon them. Our agent uses a\nsemi-tabular representation of the value function: a buffer of past experience\ncontaining slowly changing state representations and rapidly updated estimates\nof the value function. We show across a wide range of environments that our\nagent learns significantly faster than other state-of-the-art, general purpose\ndeep reinforcement learning agents.\n",
"title": "Neural Episodic Control"
}
| null | null | null | null | true | null |
11940
| null |
Default
| null | null |
null |
{
"abstract": " School bus planning is usually divided into routing and scheduling due to the\ncomplexity of solving them concurrently. However, the separation between these\ntwo steps may lead to worse solutions with higher overall costs than that from\nsolving them together. When finding the minimal number of trips in the routing\nproblem, neglecting the importance of trip compatibility may increase the\nnumber of buses actually needed in the scheduling problem. This paper proposes\na new formulation for the multi-school homogeneous fleet routing problem that\nmaximizes trip compatibility while minimizing total travel time. This\nincorporates the trip compatibility for the scheduling problem in the routing\nproblem. Since the problem is inherently just a routing problem, finding a good\nsolution is not cumbersome. To compare the performance of the model with\ntraditional routing problems, we generate eight mid-size data sets. Through\nimporting the generated trips of the routing problems into the bus scheduling\n(blocking) problem, it is shown that the proposed model uses up to 13% fewer\nbuses than the common traditional routing models.\n",
"title": "School bus routing by maximizing trip compatibility"
}
| null | null | null | null | true | null |
11941
| null |
Default
| null | null |
null |
{
"abstract": " In this work we proivied a new simpler proof of the global diffeomorphism\ntheorem from [9] which we further apply to consider unique solvability of some\nabstract semilinear equations. Applications to the second order Dirichlet\nproblem driven by the Laplace operator are given.\n",
"title": "Solvability of abstract semilinear equations by a global diffeomorphism theorem"
}
| null | null | null | null | true | null |
11942
| null |
Default
| null | null |
null |
{
"abstract": " Deep generative models have been successfully used to learn representations\nfor high-dimensional discrete spaces by representing discrete objects as\nsequences and employing powerful sequence-based deep models. Unfortunately,\nthese sequence-based models often produce invalid sequences: sequences which do\nnot represent any underlying discrete structure; invalid sequences hinder the\nutility of such models. As a step towards solving this problem, we propose to\nlearn a deep recurrent validator model, which can estimate whether a partial\nsequence can function as the beginning of a full, valid sequence. This\nvalidator provides insight as to how individual sequence elements influence the\nvalidity of the overall sequence, and can be used to constrain sequence based\nmodels to generate valid sequences -- and thus faithfully model discrete\nobjects. Our approach is inspired by reinforcement learning, where an oracle\nwhich can evaluate validity of complete sequences provides a sparse reward\nsignal. We demonstrate its effectiveness as a generative model of Python 3\nsource code for mathematical expressions, and in improving the ability of a\nvariational autoencoder trained on SMILES strings to decode valid molecular\nstructures.\n",
"title": "Learning a Generative Model for Validity in Complex Discrete Structures"
}
| null | null | null | null | true | null |
11943
| null |
Default
| null | null |
null |
{
"abstract": " We present a brief review of discrete structures in a finite Hilbert space,\nrelevant for the theory of quantum information. Unitary operator bases,\nmutually unbiased bases, Clifford group and stabilizer states, discrete Wigner\nfunction, symmetric informationally complete measurements, projective and\nunitary t--designs are discussed. Some recent results in the field are covered\nand several important open questions are formulated. We advocate a geometric\napproach to the subject and emphasize numerous links to various mathematical\nproblems.\n",
"title": "On discrete structures in finite Hilbert spaces"
}
| null | null | null | null | true | null |
11944
| null |
Default
| null | null |
null |
{
"abstract": " This article is based on a series of lectures on toric varieties given at\nRIMS, Kyoto. We start by introducing toric varieties, their basic properties\nand later pass to more advanced topics relating mostly to combinatorics.\n",
"title": "Selected topics on Toric Varieties"
}
| null | null | null | null | true | null |
11945
| null |
Default
| null | null |
null |
{
"abstract": " Feedback control actively dissipates uncertainty from a dynamical system by\nmeans of actuation. We develop a notion of \"control capacity\" that gives a\nfundamental limit (in bits) on the rate at which a controller can dissipate the\nuncertainty from a system, i.e. stabilize to a known fixed point. We give a\ncomputable single-letter characterization of control capacity for memoryless\nstationary scalar multiplicative actuation channels. Control capacity allows us\nto answer questions of stabilizability for scalar linear systems: a system with\nactuation uncertainty is stabilizable if and only if the control capacity is\nlarger than the log of the unstable open-loop eigenvalue.\nFor second-moment senses of stability, we recover the classic uncertainty\nthreshold principle result. However, our definition of control capacity can\nquantify the stabilizability limits for any moment of stability. Our\nformulation parallels the notion of Shannon's communication capacity, and thus\nyields both a strong converse and a way to compute the value of\nside-information in control. The results in our paper are motivated by\nbit-level models for control that build on the deterministic models that are\nwidely used to understand information flows in wireless network information\ntheory.\n",
"title": "Control Capacity"
}
| null | null | null | null | true | null |
11946
| null |
Default
| null | null |
null |
{
"abstract": " The superposition of temporal point processes has been studied for many\nyears, although the usefulness of such models for practical applications has\nnot be fully developed. We investigate superposed Hawkes process as an\nimportant class of such models, with properties studied in the framework of\nleast squares estimation. The superposition of Hawkes processes is demonstrated\nto be beneficial for tightening the upper bound of excess risk under certain\nconditions, and we show the feasibility of the benefit in typical situations.\nThe usefulness of superposed Hawkes processes is verified on synthetic data,\nand its potential to solve the cold-start problem of recommendation systems is\ndemonstrated on real-world data.\n",
"title": "Benefits from Superposed Hawkes Processes"
}
| null | null | null | null | true | null |
11947
| null |
Default
| null | null |
null |
{
"abstract": " Since the invention of word2vec, the skip-gram model has significantly\nadvanced the research of network embedding, such as the recent emergence of the\nDeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of\nthe aforementioned models with negative sampling can be unified into the matrix\nfactorization framework with closed forms. Our analysis and proofs reveal that:\n(1) DeepWalk empirically produces a low-rank transformation of a network's\nnormalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk\nwhen the size of vertices' context is set to one; (3) As an extension of LINE,\nPTE can be viewed as the joint factorization of multiple networks' Laplacians;\n(4) node2vec is factorizing a matrix related to the stationary distribution and\ntransition probability tensor of a 2nd-order random walk. We further provide\nthe theoretical connections between skip-gram based network embedding\nalgorithms and the theory of graph Laplacian. Finally, we present the NetMF\nmethod as well as its approximation algorithm for computing network embedding.\nOur method offers significant improvements over DeepWalk and LINE for\nconventional network mining tasks. This work lays the theoretical foundation\nfor skip-gram based network embedding methods, leading to a better\nunderstanding of latent network representation learning.\n",
"title": "Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec"
}
| null | null | null | null | true | null |
11948
| null |
Default
| null | null |
null |
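A minimal sketch of the factorization view for a small dense graph, assuming the closed-form DeepWalk matrix vol(G)/(bT) * (sum_{r=1..T} P^r) D^{-1} with an element-wise truncated logarithm, which is my reading of the NetMF construction; the window size T, negative-sampling parameter b, and the toy graph are all illustrative.

```python
import numpy as np

def netmf_embedding(A, T=10, b=1.0, dim=2):
    """Sketch of NetMF on a small dense adjacency matrix A: SVD of
    log max(vol(G)/(b T) * (sum_{r=1..T} P^r) D^{-1}, 1)."""
    deg = A.sum(axis=1)
    vol = deg.sum()
    P = A / deg[:, None]                      # random-walk matrix D^{-1} A
    S = np.zeros_like(P)
    Pr = np.eye(len(A))
    for _ in range(T):
        Pr = Pr @ P
        S += Pr
    M = (vol / (b * T)) * S / deg[None, :]    # right-multiply by D^{-1}
    logM = np.log(np.maximum(M, 1.0))         # truncated element-wise log
    U, s, _ = np.linalg.svd(logM)
    return U[:, :dim] * np.sqrt(s[:dim])

# Toy graph: two triangles joined by one bridge edge; the embedding
# separates the two communities.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(netmf_embedding(A))
```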
{
"abstract": " With the development of Big Data and cloud data sharing, privacy preserving\ndata publishing becomes one of the most important topics in the past decade. As\none of the most influential privacy definitions, differential privacy provides\na rigorous and provable privacy guarantee for data publishing. Differentially\nprivate interactive publishing achieves good performance in many applications;\nhowever, the curator has to release a large number of queries in a batch or a\nsynthetic dataset in the Big Data era. To provide accurate non-interactive\npublishing results in the constraint of differential privacy, two challenges\nneed to be tackled: one is how to decrease the correlation between large sets\nof queries, while the other is how to predict on fresh queries. Neither is easy\nto solve by the traditional differential privacy mechanism. This paper\ntransfers the data publishing problem to a machine learning problem, in which\nqueries are considered as training samples and a prediction model will be\nreleased rather than query results or synthetic datasets. When the model is\npublished, it can be used to answer current submitted queries and predict\nresults for fresh queries from the public. Compared with the traditional\nmethod, the proposed prediction model enhances the accuracy of query results\nfor non-interactive publishing. Experimental results show that the proposed\nsolution outperforms traditional differential privacy in terms of Mean Absolute\nValue on a large group of queries. This also suggests the learning model can\nsuccessfully retain the utility of published queries while preserving privacy.\n",
"title": "Differentially Private Query Learning: from Data Publishing to Model Publishing"
}
| null | null | null | null | true | null |
11949
| null |
Default
| null | null |
null |
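As a toy illustration of the publish-a-model idea (not the paper's mechanism), the sketch below answers threshold counting queries with the standard Laplace mechanism, fits a released regression model on the noisy answers, and uses it to predict fresh queries; the per-query budget accounting is deliberately simplified and all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy sensitive dataset: ages of individuals.
ages = rng.integers(18, 90, size=1000)

# Training queries: "fraction of people with age < q" for thresholds q.
# Each counting query has sensitivity 1/n, so Laplace noise with scale
# sensitivity/epsilon gives epsilon-DP per query (composition over the
# batch of queries is omitted in this sketch).
thresholds = np.arange(20, 90, 5)
epsilon = 0.5
true_answers = np.array([(ages < q).mean() for q in thresholds])
noisy_answers = true_answers + rng.laplace(
    0, (1 / len(ages)) / epsilon, size=len(thresholds))

# Publish a model of query -> answer instead of the noisy answers themselves.
model = Ridge(alpha=1.0).fit(thresholds[:, None], noisy_answers)

# The released model answers fresh, unseen queries by prediction.
fresh = np.array([[23.0], [47.0], [71.0]])
print(model.predict(fresh))
```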
{
"abstract": " Level-1 Consensus is a property of a preference-profile. Intuitively, it\nmeans that there exists a preference relation which induces an ordering of all\nother preferences such that frequent preferences are those that are more\nsimilar to it. This is a desirable property, since it enhances the stability of\nsocial choice by guaranteeing that there exists a Condorcet winner and it is\nelected by all scoring rules.\nIn this paper, we present an algorithm for checking whether a given\npreference profile exhibits level-1 consensus. We apply this algorithm to a\nlarge number of preference profiles, both real and randomly-generated, and find\nthat level-1 consensus is very improbable. We support these empirical findings\ntheoretically, by showing that, under the impartial culture assumption, the\nprobability of level-1 consensus approaches zero when the number of individuals\napproaches infinity.\nMotivated by these observations, we show that the level-1 consensus property\ncan be weakened while retaining its stability implications. We call this weaker\nproperty Flexible Consensus. We show, both empirically and theoretically, that\nit is considerably more probable than the original level-1 consensus. In\nparticular, under the impartial culture assumption, the probability for\nFlexible Consensus converges to a positive number when the number of\nindividuals approaches infinity.\n",
"title": "Flexible Level-1 Consensus Ensuring Stable Social Choice: Analysis and Algorithms"
}
| null | null | null | null | true | null |
11950
| null |
Default
| null | null |
null |
{
"abstract": " The sparsely spaced highly permeable fractures of the granitic rock aquifer\nat Stang-er-Brune (Brittany, France) form a well-connected fracture network of\nhigh permeability but unknown geometry. Previous work based on optical and\nacoustic logging together with single-hole and cross-hole flowmeter data\nacquired in 3 neighboring boreholes (70-100 m deep) have identified the most\nimportant permeable fractures crossing the boreholes and their hydraulic\nconnections. To constrain possible flow paths by estimating the geometries of\nknown and previously unknown fractures, we have acquired, processed and\ninterpreted multifold, single- and cross-hole GPR data using 100 and 250 MHz\nantennas. The GPR data processing scheme consisting of time-zero corrections,\nscaling, bandpass filtering and F-X deconvolution, eigenvector filtering,\nmuting, pre-stack Kirchhoff depth migration and stacking was used to\ndifferentiate fluid-filled fracture reflections from source-generated noise.\nThe final stacked and pre-stack depth-migrated GPR sections provide\nhigh-resolution images of individual fractures (dipping 30-90°) in the\nsurroundings (2-20 m for the 100 MHz antennas; 2-12 m for the 250 MHz antennas)\nof each borehole in a 2D plane projection that are of superior quality to those\nobtained from single-offset sections. Most fractures previously identified from\nhydraulic testing can be correlated to reflections in the single-hole data.\nSeveral previously unknown major near vertical fractures have also been\nidentified away from the boreholes.\n",
"title": "Fracture imaging within a granitic rock aquifer using multiple-offset single-hole and cross-hole GPR reflection data"
}
| null | null | null | null | true | null |
11951
| null |
Default
| null | null |
null |
{
"abstract": " The purpose of this paper is to investigate the asymptotic behavior of\nautomorphism groups of function fields when genus tends to infinity.\nMotivated by applications in coding and cryptography, we consider the maximum\nsize of abelian subgroups of the automorphism group\n$\\mbox{Aut}(F/\\mathbb{F}_q)$ in terms of genus ${g_F}$ for a function field $F$\nover a finite field $\\mathbb{F}_q$. Although the whole group\n$\\mbox{Aut}(F/\\mathbb{F}_q)$ could have size $\\Omega({g_F}^4)$, the maximum\nsize $m_F$ of abelian subgroups of the automorphism group\n$\\mbox{Aut}(F/\\mathbb{F}_q)$ is upper bounded by $4g_F+4$ for $g_F\\ge 2$. In\nthe present paper, we study the asymptotic behavior of $m_F$ by defining\n$M_q=\\limsup_{{g_F}\\rightarrow\\infty}\\frac{m_F \\cdot \\log_q m_F}{g_F}$, where\n$F$ runs through all function fields over $\\mathbb{F}_q$. We show that $M_q$\nlies between $2$ and $3$ (or $4$) for odd characteristic (or for even\ncharacteristic, respectively). This means that $m_F$ grows much more slowly\nthan genus does asymptotically.\nThe second part of this paper is to study the maximum size $b_F$ of subgroups\nof $\\mbox{Aut}(F/\\mathbb{F}_q)$ whose order is coprime to $q$. The Hurwitz\nbound gives an upper bound $b_F\\le 84(g_F-1)$ for every function field\n$F/\\mathbb{F}_q$ of genus $g_F\\ge 2$. We investigate the asymptotic behavior of\n$b_F$ by defining ${B_q}=\\limsup_{{g_F}\\rightarrow\\infty}\\frac{b_F}{g_F}$,\nwhere $F$ runs through all function fields over $\\mathbb{F}_q$. Although the\nHurwitz bound shows ${B_q}\\le 84$, there are no lower bounds on $B_q$ in\nliterature. One does not even know if ${B_q}=0$. For the first time, we show\nthat ${B_q}\\ge 2/3$ by explicitly constructing some towers of function fields\nin this paper.\n",
"title": "The asymptotic behavior of automorphism groups of function fields over finite fields"
}
| null | null |
[
"Mathematics"
] | null | true | null |
11952
| null |
Validated
| null | null |
null |
{
"abstract": " We address the problem of prescribing an optimal decision in a framework\nwhere its cost depends on uncertain problem parameters $Y$ that need to be\nlearned from data. Earlier work by Bertsimas and Kallus (2014) transforms\nclassical machine learning methods that merely predict $Y$ from supervised\ntraining data $[(x_1, y_1), \\dots, (x_n, y_n)]$ into prescriptive methods\ntaking optimal decisions specific to a particular covariate context $X=\\bar x$.\nTheir prescriptive methods factor in additional observed contextual information\non a potentially large number of covariates $X=\\bar x$ to take context specific\nactions $z(\\bar x)$ which are superior to any static decision $z$. Any naive\nuse of limited training data may, however, lead to gullible decisions\nover-calibrated to one particular data set. In this paper, we borrow ideas from\ndistributionally robust optimization and the statistical bootstrap of Efron\n(1982) to propose two novel prescriptive methods based on (nw) Nadaraya-Watson\nand (nn) nearest-neighbors learning which safeguard against overfitting and\nlead to improved out-of-sample performance. Both resulting robust prescriptive\nmethods reduce to tractable convex optimization problems and enjoy a limited\ndisappointment on bootstrap data. We illustrate the data-driven decision-making\nframework and our novel robustness notion on a small news vendor problem as\nwell as a small portfolio allocation problem.\n",
"title": "Bootstrap Robust Prescriptive Analytics"
}
| null | null | null | null | true | null |
11953
| null |
Default
| null | null |
null |
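A small sketch of the kernel-weighted (Nadaraya-Watson) prescriptive idea on a toy newsvendor problem: the context-specific order quantity is the weighted critical-ratio quantile of observed demand. This follows the Bertsimas-Kallus style of prescription; the bootstrap-robust layer of the paper is only indicated in a comment, and all data and parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: demand y depends on an observed covariate x.
n = 500
x = rng.uniform(0, 10, n)
y = 20 + 3 * x + rng.normal(0, 4, n)

def nw_newsvendor(x_bar, x, y, h=1.0, b=4.0, bandwidth=1.0):
    """Nadaraya-Watson prescriptive newsvendor: minimize the kernel-weighted
    empirical cost, i.e. take the weighted b/(b+h)-quantile of demand."""
    w = np.exp(-0.5 * ((x - x_bar) / bandwidth) ** 2)
    w /= w.sum()
    order = np.argsort(y)
    cum = np.cumsum(w[order])
    crit = b / (b + h)                  # newsvendor critical ratio
    return y[order][np.searchsorted(cum, crit)]

print(nw_newsvendor(5.0, x, y))         # context-specific order quantity
# A bootstrap-robust variant would resample (x, y) pairs and guard against
# the worst resampled prescription, in the spirit of the paper.
```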
{
"abstract": " In this paper we show a variant of colorful Tverberg's theorem which is valid\nin any matroid: Let $S$ be a sequence of non-loops in a matroid $M$ of finite\nrank $m$ with closure operator cl. Suppose that $S$ is colored in such a way\nthat the first color does not appear more than $r$-times and each other color\nappears at most $(r-1)$-times. Then $S$ can be partitioned into $r$ rainbow\nsubsequences $S_1,\\ldots, S_r$ such that $cl\\,\\emptyset\\subsetneq\ncl\\,S_1\\subseteq cl\\, S_2\\subseteq \\ldots \\subseteq cl\\,S_r$. In particular,\n$\\emptyset\\neq \\bigcap_{i=1}^r cl\\,S_i$. A subsequence is called rainbow if it\ncontains each color at most once.\nThe conclusion of our theorem is weaker than the conclusion of the original\nTverberg's theorem in $\\mathbb R^d$, which states that $\\bigcap conv\\,S_i\\neq\n\\emptyset$, whereas we only claim that $\\bigcap aff\\,S_i\\neq \\emptyset$. On the\nother hand, our theorem strengthens the Tverberg's theorem in several other\nways: 1) it is applicable to any matroid (whereas Tverberg's theorem can only\nbe used in $\\mathbb R^d$), 2) instead of $\\bigcap cl\\,S_i\\neq \\emptyset$ we\nhave the stronger condition $cl\\,\\emptyset\\subsetneq cl\\,S_1\\subseteq\ncl\\,S_2\\subseteq \\ldots \\subseteq cl\\,S_r$, and 3) we add a color constraints\nthat are even stronger than the color constraints in the colorful version of\nTverberg's theorem.\nRecently, the author together with Goaoc, Mabillard, Patáková, Tancer and\nWagner used the first property and applied the non-colorful version of this\ntheorem to homology groups with $GF(p)$ coefficients to obtain several\nnon-embeddability results, for details we refer to arXiv:1610.09063.\n",
"title": "Tverberg type theorems for matroids"
}
| null | null | null | null | true | null |
11954
| null |
Default
| null | null |
null |
{
"abstract": " We propose two classes of dynamic versions of the classical Erdős-Rényi\ngraph: one in which the transition rates are governed by an external regime\nprocess, and one in which the transition rates are periodically resampled. For\nboth models we consider the evolution of the number of edges present, with\nexplicit results for the corresponding moments, functional central limit\ntheorems and large deviations asymptotics.\n",
"title": "Dynamic Erdős-Rényi graphs"
}
| null | null | null | null | true | null |
11955
| null |
Default
| null | null |
null |
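A quick way to get intuition for the second model class above (periodically resampled transition rates) is to simulate the edge-count process directly. The sketch below uses a crude Euler discretization of independent on/off edges with illustrative rates, not the exact dynamics analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 50                                   # vertices
m = n * (n - 1) // 2                     # potential edge slots
dt, steps = 0.01, 2000
resample_every = 500                     # steps between rate resamplings

edges = np.zeros(m, dtype=bool)
counts = []
for step in range(steps):
    if step % resample_every == 0:       # periodically redraw the rates
        lam, mu = rng.uniform(0.5, 2.0, size=2)
    u = rng.uniform(size=m)
    # Euler step: each absent edge appears w.p. lam*dt, each present
    # edge disappears w.p. mu*dt, independently across edge slots.
    edges = np.where(edges, u > mu * dt, u < lam * dt)
    counts.append(edges.sum())

print(min(counts), max(counts))          # summary of the edge-count path
```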
{
"abstract": " Dense subgraph discovery is a key primitive in many graph mining\napplications, such as detecting communities in social networks and mining gene\ncorrelation from biological data. Most studies on dense subgraph mining only\ndeal with one graph. However, in many applications, we have more than one graph\ndescribing relations among a same group of entities. In this paper, given two\ngraphs sharing the same set of vertices, we investigate the problem of\ndetecting subgraphs that contrast the most with respect to density. We call\nsuch subgraphs Density Contrast Subgraphs, or DCS in short. Two widely used\ngraph density measures, average degree and graph affinity, are considered. For\nboth density measures, mining DCS is equivalent to mining the densest subgraph\nfrom a \"difference\" graph, which may have both positive and negative edge\nweights. Due to the existence of negative edge weights, existing dense subgraph\ndetection algorithms cannot identify the subgraph we need. We prove the\ncomputational hardness of mining DCS under the two graph density measures and\ndevelop efficient algorithms to find DCS. We also conduct extensive experiments\non several real-world datasets to evaluate our algorithms. The experimental\nresults show that our algorithms are both effective and efficient.\n",
"title": "Mining Density Contrast Subgraphs"
}
| null | null | null | null | true | null |
11956
| null |
Default
| null | null |
null |
{
"abstract": " Convolutional neural networks provide visual features that perform remarkably\nwell in many computer vision applications. However, training these networks\nrequires significant amounts of supervision. This paper introduces a generic\nframework to train deep networks, end-to-end, with no supervision. We propose\nto fix a set of target representations, called Noise As Targets (NAT), and to\nconstrain the deep features to align to them. This domain agnostic approach\navoids the standard unsupervised learning issues of trivial solutions and\ncollapsing of features. Thanks to a stochastic batch reassignment strategy and\na separable square loss function, it scales to millions of images. The proposed\napproach produces representations that perform on par with state-of-the-art\nunsupervised methods on ImageNet and Pascal VOC.\n",
"title": "Unsupervised Learning by Predicting Noise"
}
| null | null | null | null | true | null |
11957
| null |
Default
| null | null |
null |
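The core NAT step can be sketched in a few lines: fix random unit-norm targets, re-assign features to targets one-to-one under the separable square loss, and train toward the matched targets. The sketch below uses an exact Hungarian assignment per batch for clarity; the paper uses a cheaper stochastic batch reassignment, and all sizes here are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)

n, d = 32, 16                            # batch size, feature dimension
targets = rng.normal(size=(n, d))
targets /= np.linalg.norm(targets, axis=1, keepdims=True)  # fixed noise targets

def assignment_loss(features, targets):
    """One NAT step: re-assign each feature to a distinct noise target so the
    total squared error is minimal, then return loss and matched targets."""
    cost = ((features[:, None, :] - targets[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    matched = targets[cols]
    return ((features - matched) ** 2).mean(), matched

features = rng.normal(size=(n, d))       # stand-in for deep network outputs
loss, matched = assignment_loss(features, targets)
print(loss)
# In training, gradients of this separable square loss w.r.t. the network
# parameters pull the features toward their currently assigned targets.
```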
{
"abstract": " Microrobots have the potential to impact many areas such as microsurgery,\nmicromanipulation and minimally invasive sensing. Due to their small size,\nmicrorobots swim in a regime that is governed by low Reynolds number\nhydrodynamics. In this paper, we consider small scale artificial swimmers that\nare fabricated using ferromagnetic filaments and locomote in response to time\nvarying external magnetic fields. We motivate the design of previously proposed\ncontrol laws using tools from geometric mechanics and also demonstrate how new\ncontrol laws can be synthesized to generate net translation in such swimmers.\nWe further describe how to modify these control inputs to make the swimmers\ntrack rich trajectories in the workspace by investigating stability properties\nof their limit cycles in the orientation angles phase space. Following a\nsystematic design optimization, we develop a principled approach to encode\ninternal magnetization distributions in millimeter scale ferromagnetic\nfilaments. We verify and demonstrate this procedure experimentally and finally\nshow translation, trajectory tracking and turning in place locomotion in these\noptimal swimmers using a Helmholtz coils setup.\n",
"title": "Trajectory Generation for Millimeter Scale Ferromagnetic Swimmers: Theory and Experiments"
}
| null | null | null | null | true | null |
11958
| null |
Default
| null | null |
null |
{
"abstract": " This research investigates the implementation mechanism of block-wise ILU(k)\npreconditioner on GPU. The block-wise ILU(k) algorithm requires both the level\nk and the block size to be designed as variables. A decoupled ILU(k) algorithm\nconsists of a symbolic phase and a factorization phase. In the symbolic phase,\na ILU(k) nonzero pattern is established from the point-wise structure extracted\nfrom a block-wise matrix. In the factorization phase, the block-wise matrix\nwith a variable block size is factorized into a block lower triangular matrix\nand a block upper triangular matrix. And a further diagonal factorization is\nrequired to perform on the block upper triangular matrix for adapting a\nparallel triangular solver on GPU.We also present the numerical experiments to\nstudy the preconditioner actions on different k levels and block sizes.\n",
"title": "Decoupled Block-Wise ILU(k) Preconditioner on GPU"
}
| null | null |
[
"Computer Science"
] | null | true | null |
11959
| null |
Validated
| null | null |
null |
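SciPy's built-in incomplete LU makes it easy to see an ILU preconditioner in action, with the caveat that it is a threshold-based ILUTP on the CPU, not the level-based, block-wise ILU(k) on GPU developed above; the test matrix and tolerances are illustrative.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spilu, gmres, LinearOperator

# 1-D Poisson test matrix (tridiagonal), stored in CSC for the factorization.
n = 1000
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU with a drop tolerance (SciPy's ILUTP is threshold-based;
# a level-based block ILU(k) as in the paper would instead control fill
# by level and operate on variable-size blocks).
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged
```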
{
"abstract": " The dual crises of the sub-prime mortgage crisis and the global financial\ncrisis has prompted a call for explanations of non-equilibrium market dynamics.\nRecently a promising approach has been the use of agent based models (ABMs) to\nsimulate aggregate market dynamics. A key aspect of these models is the\nendogenous emergence of critical transitions between equilibria, i.e. market\ncollapses, caused by multiple equilibria and changing market parameters.\nSeveral research themes have developed microeconomic based models that include\nmultiple equilibria: social decision theory (Brock and Durlauf), quantal\nresponse models (McKelvey and Palfrey), and strategic complementarities\n(Goldstein). A gap that needs to be filled in the literature is a unified\nanalysis of the relationship between these models and how aggregate criticality\nemerges from the individual agent level. This article reviews the agent-based\nfoundations of markets starting with the individual agent perspective of\nMcFadden and the aggregate perspective of catastrophe theory emphasising\nconnections between the different approaches. It is shown that changes in the\nuncertainty agents have in the value of their interactions with one another,\neven if these changes are one-sided, plays a central role in systemic market\nrisks such as market instability and the twin crises effect. These interactions\ncan endogenously cause crises that are an emergent phenomena of markets.\n",
"title": "Multi-agent Economics and the Emergence of Critical Markets"
}
| null | null | null | null | true | null |
11960
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we propose a novel neural language modelling (NLM) method based\non \\textit{error-correcting output codes} (ECOC), abbreviated as ECOC-NLM. This\nlatent variable based approach provides a principled way to choose a varying\namount of latent output codes and avoids exact softmax normalization. Instead\nof minimizing measures between the predicted probability distribution and true\ndistribution, we use error-correcting codes to represent both predictions and\noutputs. Secondly, we propose multiple ways to improve accuracy and convergence\nrates by maximizing the separability between codes that correspond to classes\nproportional to word embedding similarities. Lastly, we introduce a novel\nmethod called \\textit{Latent Mixture Sampling}, a technique that is used to\nmitigate exposure bias and can be integrated into training latent-based neural\nlanguage models. This involves mixing the latent codes (i.e variables) of past\npredictions and past targets in one of two ways: (1) according to a predefined\nsampling schedule or (2) a differentiable sampling procedure whereby the mixing\nprobability is learned throughout training by replacing the greedy argmax\noperation with a smooth approximation. In evaluating Codeword Mixture Sampling\nfor ECOC-NLM, we also baseline it against CWMS in a closely related Hierarhical\nSoftmax-based NLM.\n",
"title": "Error-Correcting Neural Sequence Prediction"
}
| null | null | null | null | true | null |
11961
| null |
Default
| null | null |
null |
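The decoding side of an ECOC output layer is simple to sketch: predict B bit probabilities and choose the class with the nearest codeword. The snippet below uses a random codebook and synthetic bit predictions; the paper instead designs codes to maximize separability tied to word-embedding similarity.

```python
import numpy as np

rng = np.random.default_rng(5)

K, B = 10, 15                              # class count, code length in bits
codes = rng.integers(0, 2, size=(K, B))    # random error-correcting codebook

def decode(bit_probs, codes):
    """Nearest-codeword decoding: pick the class whose codeword has the
    smallest expected Hamming distance to the predicted bit probabilities."""
    dist = np.abs(codes[None, :, :] - bit_probs[:, None, :]).sum(-1)
    return dist.argmin(axis=1)

# Stand-in for network outputs: noisy versions of the true codewords.
true_classes = rng.integers(0, K, size=8)
bit_probs = np.clip(codes[true_classes] + rng.normal(0, 0.3, (8, B)), 0, 1)

pred = decode(bit_probs, codes)
print((pred == true_classes).mean())       # robust to a few flipped bits
```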
{
"abstract": " We study sampling as optimization in the space of measures. We focus on\ngradient flow-based optimization with the Langevin dynamics as a case study. We\ninvestigate the source of the bias of the unadjusted Langevin algorithm (ULA)\nin discrete time, and consider how to remove or reduce the bias. We point out\nthe difficulty is that the heat flow is exactly solvable, but neither its\nforward nor backward method is implementable in general, except for Gaussian\ndata. We propose the symmetrized Langevin algorithm (SLA), which should have a\nsmaller bias than ULA, at the price of implementing a proximal gradient step in\nspace. We show SLA is in fact consistent for Gaussian target measure, whereas\nULA is not. We also illustrate various algorithms explicitly for Gaussian\ntarget measure, including gradient descent, proximal gradient, and\nForward-Backward, and show they are all consistent.\n",
"title": "Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem"
}
| null | null | null | null | true | null |
11962
| null |
Default
| null | null |
null |
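The ULA bias discussed above can be seen exactly in the Gaussian case: for the standard normal target, the ULA iteration is linear and its stationary variance is 1/(1 - eta/2) rather than 1. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(6)

# Unadjusted Langevin algorithm for the standard Gaussian target
# (f(x) = x^2/2): x_{k+1} = (1 - eta) x_k + sqrt(2 eta) xi_k.
eta, steps, chains = 0.2, 2000, 50_000
x = np.zeros(chains)
for _ in range(steps):
    x = (1 - eta) * x + np.sqrt(2 * eta) * rng.normal(size=chains)

print(x.var())                 # empirical stationary variance
print(1 / (1 - eta / 2))       # exact ULA limit: 1/(1 - eta/2) > 1, the bias
```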
{
"abstract": " We study laser cooling of $^{24}$Mg atoms in dipole optical trap with pumping\nfield resonant to narrow $(3s3s)\\,^1S_0 \\rightarrow \\, (3s3p)\\,^{3}P_1$\n($\\lambda = 457$ nm) optical transition. For description of laser cooling of\natoms in the optical trap with taking into account quantum recoil effects we\nconsider two quantum models. The first one is based on direct numerical\nsolution of quantum kinetic equation for atom density matrix and the second one\nis simplified model based on decomposition of atom density matrix over\nvibration states in the dipole trap. We search pumping field intensity and\ndetuning for minimum cooling energy and fast laser cooling.\n",
"title": "Deep laser cooling in optical trap: two-level quantum model"
}
| null | null |
[
"Physics"
] | null | true | null |
11963
| null |
Validated
| null | null |
null |
{
"abstract": " The problem of feature disentanglement has been explored in the literature,\nfor the purpose of image and video processing and text analysis.\nState-of-the-art methods for disentangling feature representations rely on the\npresence of many labeled samples. In this work, we present a novel method for\ndisentangling factors of variation in data-scarce regimes. Specifically, we\nexplore the application of feature disentangling for the problem of supervised\nclassification in a setting where few labeled samples exist, and there are no\nunlabeled samples for use in unsupervised training. Instead, a similar datasets\nexists which shares at least one direction of variation with the\nsample-constrained datasets. We train our model end-to-end using the framework\nof variational autoencoders and are able to experimentally demonstrate that\nusing an auxiliary dataset with similar variation factors contribute positively\nto classification performance, yielding competitive results with the\nstate-of-the-art in unsupervised learning.\n",
"title": "JADE: Joint Autoencoders for Dis-Entanglement"
}
| null | null | null | null | true | null |
11964
| null |
Default
| null | null |
null |
{
"abstract": " We consider the long-term collisional and dynamical evolution of solid\nmaterial orbiting in a narrow annulus near the Roche limit of a white dwarf.\nWith orbital velocities of 300 km/sec, systems of solids with initial\neccentricity $e \\gtrsim 10^{-3}$ generate a collisional cascade where objects\nwith radii $r \\lesssim$ 100--300 km are ground to dust. This process converts\n1-100 km asteroids into 1 $\\mu$m particles in $10^2 - 10^6$ yr. Throughout this\nevolution, the swarm maintains an initially large vertical scale height $H$.\nAdding solids at a rate $\\dot{M}$ enables the system to find an equilibrium\nwhere the mass in solids is roughly constant. This equilibrium depends on\n$\\dot{M}$ and $r_0$, the radius of the largest solid added to the swarm. When\n$r_0 \\lesssim$ 10 km, this equilibrium is stable. For larger $r_0$, the mass\noscillates between high and low states; the fraction of time spent in high\nstates ranges from 100% for large $\\dot{M}$ to much less than 1% for small\n$\\dot{M}$. During high states, the stellar luminosity reprocessed by the solids\nis comparable to the excess infrared emission observed in many metallic line\nwhite dwarfs.\n",
"title": "Numerical Simulations of Collisional Cascades at the Roche Limits of White Dwarf Stars"
}
| null | null |
[
"Physics"
] | null | true | null |
11965
| null |
Validated
| null | null |
null |
{
"abstract": " Learning individual-level causal effects from observational data, such as\ninferring the most effective medication for a specific patient, is a problem of\ngrowing importance for policy makers. The most important aspect of inferring\ncausal effects from observational data is the handling of confounders, factors\nthat affect both an intervention and its outcome. A carefully designed\nobservational study attempts to measure all important confounders. However,\neven if one does not have direct access to all confounders, there may exist\nnoisy and uncertain measurement of proxies for confounders. We build on recent\nadvances in latent variable modeling to simultaneously estimate the unknown\nlatent space summarizing the confounders and the causal effect. Our method is\nbased on Variational Autoencoders (VAE) which follow the causal structure of\ninference with proxies. We show our method is significantly more robust than\nexisting methods, and matches the state-of-the-art on previous benchmarks\nfocused on individual treatment effects.\n",
"title": "Causal Effect Inference with Deep Latent-Variable Models"
}
| null | null | null | null | true | null |
11966
| null |
Default
| null | null |
null |
{
"abstract": " This article gives an overview of gamma-ray bursts (GRBs) and their relation\nto astroparticle physics and cosmology. GRBs are the most powerful explosions\nin the universe that occur roughly once per day and are characterized by\nflashes of gamma-rays typically lasting from a fraction of a second to\nthousands of seconds. Even after more than four decades since their discovery\nthey still remain not fully understood. Two types of GRBs are observed:\nspectrally harder short duration bursts and softer long duration bursts. The\nlong GRBs originate from the collapse of massive stars whereas the preferred\nmodel for the short GRBs is coalescence of compact objects such as two neutron\nstars or a neutron star and a black hole. There were suggestions that GRBs can\nproduce ultra-high energy cosmic rays and neutrinos. Also a certain sub-type of\nGRBs may serve as a new standard candle that can help constrain and measure the\ncosmological parameters to much higher redshift than what was possible so far.\nI will review the recent experimental observations.\n",
"title": "Gamma-ray bursts and their relation to astroparticle physics and cosmology"
}
| null | null | null | null | true | null |
11967
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we study a class of discrete-time mean-field games under the\ninfinite-horizon risk-sensitive discounted-cost optimality criterion.\nRisk-sensitivity is introduced for each agent (player) via an exponential\nutility function. In this game model, each agent is coupled with the rest of\nthe population through the empirical distribution of the states, which affects\nboth the agent's individual cost and its state dynamics. Under mild\nassumptions, we establish the existence of a mean-field equilibrium in the\ninfinite-population limit as the number of agents ($N$) goes to infinity, and\nthen show that the policy obtained from the mean-field equilibrium constitutes\nan approximate Nash equilibrium when $N$ is sufficiently large.\n",
"title": "Discrete-time Risk-sensitive Mean-field Games"
}
| null | null | null | null | true | null |
11968
| null |
Default
| null | null |
null |
{
"abstract": " A problem of Glasner, now known as Glasner's problem, asks whether every\nminimally almost periodic, monothetic, Polish groups is extremely amenable. The\npurpose of this short note is to observe that a positive answer is obtained\nunder the additional assumption that the universal minimal flow is metrizable.\n",
"title": "Glasner's problem for Polish groups with metrizable universal minimal flow"
}
| null | null | null | null | true | null |
11969
| null |
Default
| null | null |
null |
{
"abstract": " Many different methods to train deep generative models have been introduced\nin the past. In this paper, we propose to extend the variational auto-encoder\n(VAE) framework with a new type of prior which we call \"Variational Mixture of\nPosteriors\" prior, or VampPrior for short. The VampPrior consists of a mixture\ndistribution (e.g., a mixture of Gaussians) with components given by\nvariational posteriors conditioned on learnable pseudo-inputs. We further\nextend this prior to a two layer hierarchical model and show that this\narchitecture with a coupled prior and posterior, learns significantly better\nmodels. The model also avoids the usual local optima issues related to useless\nlatent dimensions that plague VAEs. We provide empirical studies on six\ndatasets, namely, static and binary MNIST, OMNIGLOT, Caltech 101 Silhouettes,\nFrey Faces and Histopathology patches, and show that applying the hierarchical\nVampPrior delivers state-of-the-art results on all datasets in the unsupervised\npermutation invariant setting and the best results or comparable to SOTA\nmethods for the approach with convolutional networks.\n",
"title": "VAE with a VampPrior"
}
| null | null | null | null | true | null |
11970
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we study the online learning algorithm without explicit\nregularization terms. This algorithm is essentially a stochastic gradient\ndescent scheme in a reproducing kernel Hilbert space (RKHS). The polynomially\ndecaying step size in each iteration can play a role of regularization to\nensure the generalization ability of online learning algorithm. We develop a\nnovel capacity dependent analysis on the performance of the last iterate of\nonline learning algorithm. The contribution of this paper is two-fold. First,\nour nice analysis can lead to the convergence rate in the standard mean square\ndistance which is the best so far. Second, we establish, for the first time,\nthe strong convergence of the last iterate with polynomially decaying step\nsizes in the RKHS norm. We demonstrate that the theoretical analysis\nestablished in this paper fully exploits the fine structure of the underlying\nRKHS, and thus can lead to sharp error estimates of online learning algorithm.\n",
"title": "Fast and Strong Convergence of Online Learning Algorithms"
}
| null | null | null | null | true | null |
11971
| null |
Default
| null | null |
null |
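A minimal unregularized online kernel regression sketch in the spirit of the algorithm studied above: plain SGD in an RKHS where only the polynomially decaying step size eta_t = eta_1 * t^{-theta} controls generalization. The kernel, target function, and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def gauss_kernel(a, b, s=0.5):
    return np.exp(-((a - b) ** 2) / (2 * s * s))

# Online least squares in an RKHS with no explicit regularizer.
eta1, theta = 0.5, 0.6
X, C = [], []                        # support points and their coefficients

def f(x):
    return sum(c * gauss_kernel(xi, x) for xi, c in zip(X, C))

for t in range(1, 2001):
    x_t = rng.uniform(-3, 3)
    y_t = np.sin(x_t) + rng.normal(0, 0.1)
    eta_t = eta1 * t ** (-theta)     # polynomially decaying step size
    resid = f(x_t) - y_t
    C.append(-eta_t * resid)         # f <- f - eta_t * resid * K(x_t, .)
    X.append(x_t)

grid = np.linspace(-3, 3, 7)
print(np.round([f(g) for g in grid], 2))   # last iterate, roughly matches sin
print(np.round(np.sin(grid), 2))
```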
{
"abstract": " Swelling media (e.g. gels, tumors) are usually described by mechanical\nconstitutive laws (e.g. Hooke or Darcy laws). However, constitutive relations\nof real swelling media are not well known. Here, we take an opposite route and\nconsider a simple packing heuristics, i.e. the particles can't overlap. We\ndeduce a formula for the equilibrium density under a confining potential. We\nthen consider its evolution when the average particle volume and confining\npotential depend on time under two additional heuristics: (i) any two particles\ncan't swap their position; (ii) motion should obey some energy minimization\nprinciple. These heuristics determine the medium velocity consistently with the\ncontinuity equation. In the direction normal to the potential level sets the\nvelocity is related with that of the level sets while in the parallel\ndirection, it is determined by a Laplace-Beltrami operator on these sets. This\ncomplex geometrical feature cannot be recovered using a simple Darcy law.\n",
"title": "A new continuum theory for incompressible swelling materials"
}
| null | null | null | null | true | null |
11972
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we generalize the definition of \"Search Trees\" (ST) to enable\nreference values other than the key of prior inserted nodes. The idea builds on\nthe assumption an $n$-node AVL (or Red-Black) requires to assure $O(\\log_2n)$\nworst-case search time, namely, a single comparison between two keys takes\nconstant time. This means the size of each key in bits is fixed to $B=c\\log_2\nn$ ($c\\geq1$) once $n$ is determined, otherwise the $O(1)$-time comparison\nassumption does not hold. Based on this we calculate \\emph{ideal} reference\nvalues from the mid-point of the interval $0..2^B$. This idea follows\n`recursively' to assure each node along the search path is provided a reference\nvalue that guarantees an overall logarithmic time. Because the search tree\nproperty works only when keys are compared to reference values and these values\nare calculated only during searches, we term the data structure as the Hidden\nBinary Search Tree (HBST). We show elementary functions to maintain the HSBT\nheight $O(B)=O(\\log_2n)$. This result requires no special order on the input --\nas does BST -- nor self-balancing procedures, as do AVL and Red-Black.\n",
"title": "The Hidden Binary Search Tree:A Balanced Rotation-Free Search Tree in the AVL RAM Model"
}
| null | null | null | null | true | null |
11973
| null |
Default
| null | null |
null |
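A toy rendering of the hidden-reference-value idea (names and the fixed B = 16 are mine, not the paper's API): reference values are interval midpoints recomputed during descent rather than stored, so no rotations are needed and even sorted insertions do not degenerate the height beyond O(B).

```python
# Sketch assuming distinct integer keys in 0..2^16; each node's reference
# value is the midpoint of a key-space interval, recomputed on the way down.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(node, key, lo=0, hi=1 << 16):
    if node is None:
        return Node(key)
    mid = (lo + hi) // 2                    # hidden reference value
    if key < mid:
        node.left = insert(node.left, key, lo, mid)
    else:
        node.right = insert(node.right, key, mid, hi)
    return node

def search(node, key, lo=0, hi=1 << 16):
    while node is not None and node.key != key:
        mid = (lo + hi) // 2
        node, lo, hi = (node.left, lo, mid) if key < mid else (node.right, mid, hi)
    return node

root = None
for k in [5, 3, 20000, 7, 41, 60000, 2]:    # insertion order does not matter
    root = insert(root, k)
print(search(root, 41) is not None, search(root, 6) is None)
```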
{
"abstract": " V391 Peg (alias HS2201+2610) is a subdwarf B (sdB) pulsating star that shows\nboth p- and g-modes. By studying the arrival times of the p-mode maxima and\nminima through the O-C method, in a previous article the presence of a planet\nwas inferred with an orbital period of 3.2 yr and a minimum mass of 3.2 M_Jup.\nHere we present an updated O-C analysis using a larger data set of 1066 hours\nof photometric time series (~2.5x larger in terms of the number of data\npoints), which covers the period between 1999 and 2012 (compared with 1999-2006\nof the previous analysis). Up to the end of 2008, the new O-C diagram of the\nmain pulsation frequency (f1) is compatible with (and improves) the previous\ntwo-component solution representing the long-term variation of the pulsation\nperiod (parabolic component) and the giant planet (sine wave component). Since\n2009, the O-C trend of f1 changes, and the time derivative of the pulsation\nperiod (p_dot) passes from positive to negative; the reason of this change of\nregime is not clear and could be related to nonlinear interactions between\ndifferent pulsation modes. With the new data, the O-C diagram of the secondary\npulsation frequency (f2) continues to show two components (parabola and sine\nwave), like in the previous analysis. Various solutions are proposed to fit the\nO-C diagrams of f1 and f2, but in all of them, the sinusoidal components of f1\nand f2 differ or at least agree less well than before. The nice agreement found\npreviously was a coincidence due to various small effects that are carefully\nanalysed. Now, with a larger dataset, the presence of a planet is more\nuncertain and would require confirmation with an independent method. The new\ndata allow us to improve the measurement of p_dot for f1 and f2: using only the\ndata up to the end of 2008, we obtain p_dot_1=(1.34+-0.04)x10**-12 and\np_dot_2=(1.62+-0.22)x10**-12 ...\n",
"title": "The sdB pulsating star V391 Peg and its putative giant planet revisited after 13 years of time-series photometric data"
}
| null | null | null | null | true | null |
11974
| null |
Default
| null | null |
null |
{
"abstract": " We study the convergence of an inexact version of the classical\nKrasnosel'skii-Mann iteration for computing fixed points of nonexpansive maps.\nOur main result establishes a new metric bound for the fixed-point residuals,\nfrom which we derive their rate of convergence as well as the convergence of\nthe iterates towards a fixed point. The results are applied to three variants\nof the basic iteration: infeasible iterations with approximate projections, the\nIshikawa iteration, and diagonal Krasnosels'kii-Mann schemes. The results are\nalso extended to continuous time in order to study the asymptotics of\nnonautonomous evolution equations governed by nonexpansive operators.\n",
"title": "Rates of convergence for inexact Krasnosel'skii-Mann iterations in Banach spaces"
}
| null | null | null | null | true | null |
11975
| null |
Default
| null | null |
null |
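The basic (exact) Krasnosel'skii-Mann step is easy to demonstrate: for a plane rotation, which is nonexpansive with unique fixed point 0, plain Picard iterates circle forever while the averaged KM iterates contract. A minimal sketch; the paper's inexact, Ishikawa, and diagonal variants are not reproduced here.

```python
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation: nonexpansive,
T = lambda x: R @ x                               # unique fixed point 0

x_picard = np.array([1.0, 0.0])
x_km = np.array([1.0, 0.0])
alpha = 0.5
for _ in range(100):
    x_picard = T(x_picard)                        # Picard: circles forever
    x_km = (1 - alpha) * x_km + alpha * T(x_km)   # Krasnosel'skii-Mann step

print(np.linalg.norm(x_picard))   # stays at 1.0
print(np.linalg.norm(x_km))       # ~ 0: converges to the fixed point
```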
{
"abstract": " Ultrasound diagnosis is routinely used in obstetrics and gynecology for fetal\nbiometry, and owing to its time-consuming process, there has been a great\ndemand for automatic estimation. However, the automated analysis of ultrasound\nimages is complicated because they are patient-specific, operator-dependent,\nand machine-specific. Among various types of fetal biometry, the accurate\nestimation of abdominal circumference (AC) is especially difficult to perform\nautomatically because the abdomen has low contrast against surroundings,\nnon-uniform contrast, and irregular shape compared to other parameters.We\npropose a method for the automatic estimation of the fetal AC from 2D\nultrasound data through a specially designed convolutional neural network\n(CNN), which takes account of doctors' decision process, anatomical structure,\nand the characteristics of the ultrasound image. The proposed method uses CNN\nto classify ultrasound images (stomach bubble, amniotic fluid, and umbilical\nvein) and Hough transformation for measuring AC. We test the proposed method\nusing clinical ultrasound data acquired from 56 pregnant women. Experimental\nresults show that, with relatively small training samples, the proposed CNN\nprovides sufficient classification results for AC estimation through the Hough\ntransformation. The proposed method automatically estimates AC from ultrasound\nimages. The method is quantitatively evaluated, and shows stable performance in\nmost cases and even for ultrasound images deteriorated by shadowing artifacts.\nAs a result of experiments for our acceptance check, the accuracies are 0.809\nand 0.771 with the expert 1 and expert 2, respectively, while the accuracy\nbetween the two experts is 0.905. However, for cases of oversized fetus, when\nthe amniotic fluid is not observed or the abdominal area is distorted, it could\nnot correctly estimate AC.\n",
"title": "Automatic Estimation of Fetal Abdominal Circumference from Ultrasound Images"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
11976
| null |
Validated
| null | null |
null |
{
"abstract": " We study the quantum phase transitions in the nickel pnctides,\nCeNi$_{2-\\delta}$(As$_{1-x}$P$_{x}$)$_{2}$ ($\\delta$ $\\approx$ 0.07-0.22). This\nseries displays the distinct heavy fermion behavior in the rarely studied\nparameter regime of dilute carrier limit. We systematically investigate the\nmagnetization, specific heat and electrical transport down to low temperatures.\nUpon increasing the P-content, the antiferromagnetic order of the Ce-4$f$\nmoment is suppressed continuously and vanishes at $x_c \\sim$ 0.55. At this\ndoping, the temperature dependences of the specific heat and longitudinal\nresistivity display non-Fermi liquid behavior. Both the residual resistivity\n$\\rho_0$ and the Sommerfeld coefficient $\\gamma_0$ are sharply peaked around\n$x_c$. When the P-content reaches close to 100\\%, we observe a clear\nlow-temperature crossover into the Fermi liquid regime. In contrast to what\nhappens in the parent compound $x$ = 0.0 as a function of pressure, we find a\nsurprising result that the non-Fermi liquid behavior persists over a nonzero\nrange of doping concentration, $x_c<x<0.9$. In this doping range, at the lowest\nmeasured temperatures, the temperature dependence of the specific-heat\ncoefficient is logarithmically divergent and that of the electrical resistivity\nis linear. We discuss the properties of\nCeNi$_{2-\\delta}$(As$_{1-x}$P$_{x}$)$_{2}$ in comparison with those of its 1111\ncounterpart, CeNi(As$_{1-x}$P$_{x}$)O. Our results indicate a non-Fermi liquid\nphase in the global phase diagram of heavy fermion metals.\n",
"title": "Heavy fermion quantum criticality at dilute carrier limit in CeNi$_{2-δ}$(As$_{1-x}$P$_{x}$)$_{2}$"
}
| null | null |
[
"Physics"
] | null | true | null |
11977
| null |
Validated
| null | null |
null |
{
"abstract": " We consider the defocusing nonlinear wave equations (NLW) on the\ntwo-dimensional torus. In particular, we construct invariant Gibbs measures for\nthe renormalized so-called Wick ordered NLW. We then prove weak universality of\nthe Wick ordered NLW, showing that the Wick ordered NLW naturally appears as a\nsuitable scaling limit of non-renormalized NLW with Gaussian random initial\ndata.\n",
"title": "Invariant Gibbs measures for the 2-d defocusing nonlinear wave equations"
}
| null | null | null | null | true | null |
11978
| null |
Default
| null | null |
null |
{
"abstract": " A fundamental characteristic of computer networks is their topological\nstructure. The question of the description of the structural characteristics of\ncomputer networks represents a problem that is not completely solved. Search\nmethods for structures of computer networks, for which the values of the\nselected parameters of their operation quality are extreme, have not been\ncompletely developed. The construction of computer networks with optimum\nindices of their operation quality is reduced to the solution of discrete\noptimization problems over graphs. This paper describes in detail the\nadvantages of the practical use of k-geodetic graphs [2, 3] in the topological\ndesign of computer networks as an alternative for the solution of the\nfundamental problems mentioned above which, we believe, are still open. Also,\nthe topological analysis and synthesis of some classes of these networks have\nbeen performed.\n",
"title": "Topological Analysis and Synthesis of Structures related to Certain Classes of K-Geodetic Computer Networks"
}
| null | null | null | null | true | null |
11979
| null |
Default
| null | null |
null |
{
"abstract": " We present a parallel hierarchical solver for general sparse linear systems\non distributed-memory machines. For large-scale problems, this fully algebraic\nalgorithm is faster and more memory-efficient than sparse direct solvers\nbecause it exploits the low-rank structure of fill-in blocks. Depending on the\naccuracy of low-rank approximations, the hierarchical solver can be used either\nas a direct solver or as a preconditioner. The parallel algorithm is based on\ndata decomposition and requires only local communication for updating boundary\ndata on every processor. Moreover, the computation-to-communication ratio of\nthe parallel algorithm is approximately the volume-to-surface-area ratio of the\nsubdomain owned by every processor. We present various numerical results to\ndemonstrate the versatility and scalability of the parallel algorithm.\n",
"title": "A distributed-memory hierarchical solver for general sparse linear systems"
}
| null | null | null | null | true | null |
11980
| null |
Default
| null | null |
null |
{
"abstract": " We present two new large-scale datasets aimed at evaluating systems designed\nto comprehend a natural language query and extract its answer from a large\ncorpus of text. The Quasar-S dataset consists of 37000 cloze-style\n(fill-in-the-gap) queries constructed from definitions of software entity tags\non the popular website Stack Overflow. The posts and comments on the website\nserve as the background corpus for answering the cloze questions. The Quasar-T\ndataset consists of 43000 open-domain trivia questions and their answers\nobtained from various internet sources. ClueWeb09 serves as the background\ncorpus for extracting these answers. We pose these datasets as a challenge for\ntwo related subtasks of factoid Question Answering: (1) searching for relevant\npieces of text that include the correct answer to a query, and (2) reading the\nretrieved text to answer the query. We also describe a retrieval system for\nextracting relevant sentences and documents from the corpus given a query, and\ninclude these in the release for researchers wishing to only focus on (2). We\nevaluate several baselines on both datasets, ranging from simple heuristics to\npowerful neural models, and show that these lag behind human performance by\n16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at\nthis https URL .\n",
"title": "Quasar: Datasets for Question Answering by Search and Reading"
}
| null | null | null | null | true | null |
11981
| null |
Default
| null | null |
null |
{
"abstract": " As robotic systems are moved out of factory work cells into human-facing\nenvironments questions of choreography become central to their design,\nplacement, and application. With a human viewer or counterpart present, a\nsystem will automatically be interpreted within context, style of movement, and\nform factor by human beings as animate elements of their environment. The\ninterpretation by this human counterpart is critical to the success of the\nsystem's integration: knobs on the system need to make sense to a human\ncounterpart; an artificial agent should have a way of notifying a human\ncounterpart of a change in system state, possibly through motion profiles; and\nthe motion of a human counterpart may have important contextual clues for task\ncompletion. Thus, professional choreographers, dance practitioners, and\nmovement analysts are critical to research in robotics. They have design\nmethods for movement that align with human audience perception, can identify\nsimplified features of movement for human-robot interaction goals, and have\ndetailed knowledge of the capacity of human movement. This article provides\napproaches employed by one research lab, specific impacts on technical and\nartistic projects within, and principles that may guide future such work. The\nbackground section reports on choreography, somatic perspectives,\nimprovisation, the Laban/Bartenieff Movement System, and robotics. From this\ncontext methods including embodied exercises, writing prompts, and community\nbuilding activities have been developed to facilitate interdisciplinary\nresearch. The results of this work is presented as an overview of a smattering\nof projects in areas like high-level motion planning, software development for\nrapid prototyping of movement, artistic output, and user studies that help\nunderstand how people interpret movement. Finally, guiding principles for other\ngroups to adopt are posited.\n",
"title": "Choreographic and Somatic Approaches for the Development of Expressive Robotic Systems"
}
| null | null |
[
"Computer Science"
] | null | true | null |
11982
| null |
Validated
| null | null |
null |
{
"abstract": " The important task of developing verification system of data of virtual\ncommunity member on the basis of computer-linguistic analysis of the content of\na large sample of Ukrainian virtual communities is solved. The subject of\nresearch is methods and tools for verification of web-members socio-demographic\ncharacteristics based on computer-linguistic analysis of their communicative\ninteraction results. The aim of paper is to verifying web-user personal data on\nthe basis of computer-linguistic analysis of web-members information tracks.\nThe structure of verification software for web-user profile is designed for a\npractical implementation of assigned tasks. The method of personal data\nverification of web-members by analyzing information track of virtual community\nmember is conducted. For the first time the method for checking the\nauthenticity of web members personal data, which helped to design of\nverification tool for socio-demographic characteristics of web-member is\ndeveloped. The verification system of data of web-members, which forms the\nverified socio-demographic profiles of web-members, is developed as a result of\nconducted experiments. Also the user interface of the developed verification\nsystem web-members data is presented. Effectiveness and efficiency of use of\nthe developed methods and means for solving tasks in web-communities\nadministration is proved by their approbation. The number of false results of\nverification system is 18%.\n",
"title": "Development of verification system of socio-demographic data of virtual community member"
}
| null | null |
[
"Computer Science"
] | null | true | null |
11983
| null |
Validated
| null | null |
null |
{
"abstract": " We demonstrate for the first time an efficient, photonic-based astronomical\nspectrograph on the 8-m Subaru Telescope. An extreme adaptive optics system is\ncombined with pupil apodiziation optics to efficiently inject light directly\ninto a single-mode fiber, which feeds a compact cross-dispersed spectrograph\nbased on array waveguide grating technology. The instrument currently offers a\nthroughput of 5% from sky-to-detector which we outline could easily be upgraded\nto ~13% (assuming a coupling efficiency of 50%). The isolated spectrograph\nthroughput from the single-mode fiber to detector was 42% at 1550 nm. The\ncoupling efficiency into the single-mode fiber was limited by the achievable\nStrehl ratio on a given night. A coupling efficiency of 47% has been achieved\nwith ~60% Strehl ratio on-sky to date. Improvements to the adaptive optics\nsystem will enable 90% Strehl ratio and a coupling of up to 67% eventually.\nThis work demonstrates that the unique combination of advanced technologies\nenables the realization of a compact and highly efficient spectrograph, setting\na precedent for future instrument design on very-large and extremely-large\ntelescopes.\n",
"title": "Demonstration of an efficient, photonic-based astronomical spectrograph on an 8-m telescope"
}
| null | null | null | null | true | null |
11984
| null |
Default
| null | null |
null |
{
"abstract": " This paper analyzes a simple game with $n$ players. We fix a mean, $\\mu$, in\nthe interval $[0, 1]$ and let each player choose any random variable\ndistributed on that interval with the given mean. The winner of the zero-sum\ngame is the player whose random variable has the highest realization. We show\nthat the position of the mean within the interval is paramount. Remarkably, if\nthe given mean is above a crucial threshold then the unique equilibrium must\ncontain a point mass on $1$. The cutoff is strictly decreasing in the number of\nplayers, $n$; and for fixed $\\mu$, as the number of players is increased, each\nplayer places more weight on $1$ at equilibrium. We characterize the\nequilibrium as the number of players goes to infinity.\n",
"title": "A Game of Random Variables"
}
| null | null | null | null | true | null |
11985
| null |
Default
| null | null |
null |
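A quick Monte Carlo illustrates why mass at 1 becomes attractive for high means; this simulates one head-to-head pair of strategies, not the equilibrium characterized in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

mu, trials = 0.7, 200_000

# Two candidate strategies with mean mu on [0, 1]:
constant = np.full(trials, mu)                      # point mass at mu
bernoulli = (rng.uniform(size=trials) < mu) * 1.0   # mass mu on 1, rest on 0

# Head-to-head: the Bernoulli player wins whenever it draws a 1.
wins = (bernoulli > constant).mean()
print(wins)   # ~ mu = 0.7 > 1/2, so shifting weight to 1 pays off
```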
{
"abstract": " In this paper, we tackle the accurate and consistent Structure from Motion\n(SfM) problem, in particular camera registration, far exceeding the memory of a\nsingle computer in parallel. Different from the previous methods which\ndrastically simplify the parameters of SfM and sacrifice the accuracy of the\nfinal reconstruction, we try to preserve the connectivities among cameras by\nproposing a camera clustering algorithm to divide a large SfM problem into\nsmaller sub-problems in terms of camera clusters with overlapping. We then\nexploit a hybrid formulation that applies the relative poses from local\nincremental SfM into a global motion averaging framework and produce accurate\nand consistent global camera poses. Our scalable formulation in terms of camera\nclusters is highly applicable to the whole SfM pipeline including track\ngeneration, local SfM, 3D point triangulation and bundle adjustment. We are\neven able to reconstruct the camera poses of a city-scale data-set containing\nmore than one million high-resolution images with superior accuracy and\nrobustness evaluated on benchmark, Internet, and sequential data-sets.\n",
"title": "Parallel Structure from Motion from Local Increment to Global Averaging"
}
| null | null | null | null | true | null |
11986
| null |
Default
| null | null |
null |
{
"abstract": " This volume contains the proceedings of MARS 2017, the second workshop on\nModels for Formal Analysis of Real Systems, held on April 29, 2017 in Uppala,\nSweden, as an affiliated workshop of ETAPS 2017, the European Joint Conferences\non Theory and Practice of Software.\nThe workshop emphasises modelling over verification. It aims at discussing\nthe lessons learned from making formal methods for the verification and\nanalysis of realistic systems. Examples are:\n(1) Which formalism is chosen, and why?\n(2) Which abstractions have to be made and why?\n(3) How are important characteristics of the system modelled?\n(4) Were there any complications while modelling the system?\n(5) Which measures were taken to guarantee the accuracy of the model?\nWe invited papers that present full models of real systems, which may lay the\nbasis for future comparison and analysis. An aim of the workshop is to present\ndifferent modelling approaches and discuss pros and cons for each of them.\nAlternative formal descriptions of the systems presented at this workshop are\nencouraged, which should foster the development of improved specification\nformalisms.\n",
"title": "Proceedings 2nd Workshop on Models for Formal Analysis of Real Systems"
}
| null | null | null | null | true | null |
11987
| null |
Default
| null | null |
null |
{
"abstract": " When the electron density of highly crystalline thin films is tuned by\nchemical doping or ionic liq- uid gating, interesting effects appear including\nunconventional superconductivity, sizeable spin-orbit coupling, competition\nwith charge-density waves, and a debated low-temperature metallic state that\nseems to avoid the superconducting or insulating fate of standard\ntwo-dimensional electron systems. Some experiments also find a marked tendency\nto a negative electronic compressibility. We suggest that this indicates an\ninclination for electronic phase separation resulting in a nanoscopic inhomo-\ngeneity. Although the mild modulation of the inhomogeneous landscape is\ncompatible with a high electron mobility in the metallic state, this\nintrinsically inhomogeneous character is highlighted by the peculiar behaviour\nof the metal-to-superconductor transition. Modelling the system with super-\nconducting puddles embedded in a metallic matrix, we fit the peculiar\nresistance vs. temperature curves of systems like TiSe2, MoS2, and ZrNCl. In\nthis framework also the low-temperature debated metallic state finds a natural\nexplanation in terms of the pristine metallic background embedding\nnon-percolating superconducting clusters. An intrinsically inhomogeneous\ncharacter naturally raises the question of the formation mechanism(s). We\npropose a mechanism based on the interplay be- tween electrons and the charges\nof the gating ionic liquid.\n",
"title": "Negative electronic compressibility and nanoscale inhomogeneity in ionic-liquid gated two-dimensional superconductors"
}
| null | null |
[
"Physics"
] | null | true | null |
11988
| null |
Validated
| null | null |
null |
{
"abstract": " Deep generative models such as Variational Autoencoders (VAEs) and Generative\nAdversarial Networks (GANs) have recently been applied to style and domain\ntransfer for images, and in the case of VAEs, music. GAN-based models employing\nseveral generators and some form of cycle consistency loss have been among the\nmost successful for image domain transfer. In this paper we apply such a model\nto symbolic music and show the feasibility of our approach for music genre\ntransfer. Evaluations using separate genre classifiers show that the style\ntransfer works well. In order to improve the fidelity of the transformed music,\nwe add additional discriminators that cause the generators to keep the\nstructure of the original music mostly intact, while still achieving strong\ngenre transfer. Visual and audible results further show the potential of our\napproach. To the best of our knowledge, this paper represents the first\napplication of GANs to symbolic music domain transfer.\n",
"title": "Symbolic Music Genre Transfer with CycleGAN"
}
| null | null | null | null | true | null |
11989
| null |
Default
| null | null |
null |
{
"abstract": " We develop new optimization methodology for planning installation of Flexible\nAlternating Current Transmission System (FACTS) devices of the parallel and\nshunt types into large power transmission systems, which allows to delay or\navoid installations of generally much more expensive power lines. Methodology\ntakes as an input projected economic development, expressed through a paced\ngrowth of the system loads, as well as uncertainties, expressed through\nmultiple scenarios of the growth. We price new devices according to their\ncapacities. Installation cost contributes to the optimization objective in\ncombination with the cost of operations integrated over time and averaged over\nthe scenarios. The multi-stage (-time-frame) optimization aims to achieve a\ngradual distribution of new resources in space and time. Constraints on the\ninvestment budget, or equivalently constraint on building capacity, is\nintroduced at each time frame. Our approach adjusts operationally not only\nnewly installed FACTS devices but also other already existing flexible degrees\nof freedom. This complex optimization problem is stated using the most general\nAC Power Flows. Non-linear, non-convex, multiple-scenario and multi-time-frame\noptimization is resolved via efficient heuristics, consisting of a sequence of\nalternating Linear Programmings or Quadratic Programmings (depending on the\ngeneration cost) and AC-PF solution steps designed to maintain operational\nfeasibility for all scenarios. Computational scalability and application of the\nnewly developed approach is illustrated on the example of the 2736-nodes large\nPolish system. One most important advantage of the framework is that the\noptimal capacity of FACTS is build up gradually at each time frame in a limited\nnumber of locations, thus allowing to prepare the system better for possible\ncongestion due to future economic and other uncertainties.\n",
"title": "Methodology for Multi-stage, Operations- and Uncertainty-Aware Placement and Sizing of FACTS Devices in a Large Power Transmission System"
}
| null | null | null | null | true | null |
11990
| null |
Default
| null | null |
null |
{
"abstract": " A detailed development of the principal component proxy method of dynamical\ntracer reconstruction is presented, including error analysis. The method works\nby correlating the largest principal components of a matrix representation of\nthe transport dynamics with a set of sparse measurements. The Lyapunov spectrum\nwas measured and used to quantify the lifetime of each principal component. The\nmethod was tested on the 500 K isentropic surface with ozone measurements from\nthe Polar Aerosol and Ozone Measurement (POAM) III satellite instrument during\nOctober and November 1998 and compared with the older proxy tracer method which\nworks by correlating measurements with a single other tracer or proxy. Using a\n60 day integration time and five (5) principal components, cross validation of\nglobally reconstructed ozone and comparison with ozone sondes returned\nroot-mean-square errors of 0.16 ppmv and 0.36 ppmv, respectively. This compares\nfavourably with the classic proxy tracer method in which a passive tracer\nequivalent latitude field was used for the proxy and which returned RMS errors\nof 0.30 ppmv and 0.59 ppmv for cross-validation and sonde validation\nrespectively. The method seems especially effective for shorter lived tracers\nand was far more accurate than the classic method at predicting ozone\nconcentration in the Southern hemisphere at the end of winter.\n",
"title": "PC Proxy: A New Method of Dynamical Tracer Reconstruction"
}
| null | null | null | null | true | null |
11991
| null |
Default
| null | null |
null |
{
"abstract": " The repulsive Fermi Hubbard model on the square lattice has a rich phase\ndiagram near half-filling (corresponding to the particle density per lattice\nsite $n=1$): for $n=1$ the ground state is an antiferromagnetic insulator, at\n$0.6 < n \\lesssim 0.8$, it is a $d_{x^2-y^2}$-wave superfluid (at least for\nmoderately strong interactions $U \\lesssim 4t$ in terms of the hopping $t$),\nand the region $1-n \\ll 1$ is most likely subject to phase separation. Much of\nthis physics is preempted at finite temperatures and to an extent driven by\nstrong magnetic fluctuations, their quantitative characteristics and how they\nchange with the doping level being much less understood. Experiments on\nultra-cold atoms have recently gained access to this interesting fluctuation\nregime, which is now under extensive investigation. In this work we employ a\nself-consistent skeleton diagrammatic approach to quantify the characteristic\ntemperature scale $T_{M}(n)$ for the onset of magnetic fluctuations with a\nlarge correlation length and identify their nature. Our results suggest that\nthe strongest fluctuations---and hence highest $T_{M}$ and easiest experimental\naccess to this regime---are observed at $U/t \\approx 4-6$.\n",
"title": "Magnetic Correlations in the Two-dimensional Repulsive Fermi Hubbard Model"
}
| null | null | null | null | true | null |
11992
| null |
Default
| null | null |
null |
{
"abstract": " A key phase in the bridge design process is the selection of the structural\nsystem. Due to budget and time constraints, engineers typically rely on\nengineering judgment and prior experience when selecting a structural system,\noften considering a limited range of design alternatives. The objective of this\nstudy was to explore the suitability of supervised machine learning as a\npreliminary design aid that provides guidance to engineers with regards to the\nstatistically optimal bridge type to choose, ultimately improving the\nlikelihood of optimized design, design standardization, and reduced maintenance\ncosts. In order to devise this supervised learning system, data for over\n600,000 bridges from the National Bridge Inventory database were analyzed. Key\nattributes for determining the bridge structure type were identified through\nthree feature selection techniques. Potentially useful attributes like seismic\nintensity and historic data on the cost of materials (steel and concrete) were\nthen added from the US Geological Survey (USGS) database and Engineering News\nRecord. Decision tree, Bayes network and Support Vector Machines were used for\npredicting the bridge design type. Due to state-to-state variations in material\navailability, material costs, and design codes, supervised learning models\nbased on the complete data set did not yield favorable results. Supervised\nlearning models were then trained and tested using 10-fold cross validation on\ndata for each state. Inclusion of seismic data improved the model performance\nnoticeably. The data was then resampled to reduce the bias of the models\ntowards more common design types, and the supervised learning models thus\nconstructed showed further improvements in performance. The average recall and\nprecision for the state models was 88.6% and 88.0% using Decision Trees, 84.0%\nand 83.7% using Bayesian Networks, and 80.8% and 75.6% using SVM.\n",
"title": "Bridge type classification: supervised learning on a modified NBI dataset"
}
| null | null | null | null | true | null |
11993
| null |
Default
| null | null |
null |
{
"abstract": " Automatic question-answering is a classical problem in natural language\nprocessing, which aims at designing systems that can automatically answer a\nquestion, in the same way as human does. In this work, we propose a deep\nlearning based model for automatic question-answering. First the questions and\nanswers are embedded using neural probabilistic modeling. Then a deep\nsimilarity neural network is trained to find the similarity score of a pair of\nanswer and question. Then for each question, the best answer is found as the\none with the highest similarity score. We first train this model on a\nlarge-scale public question-answering database, and then fine-tune it to\ntransfer to the customer-care chat data. We have also tested our framework on a\npublic question-answering database and achieved very good performance.\n",
"title": "Automatic Question-Answering Using A Deep Similarity Neural Network"
}
| null | null | null | null | true | null |
11994
| null |
Default
| null | null |
null |
{
"abstract": " Changes to network structure can substantially affect when and how widely new\nideas, products, and conventions are adopted. In models of biological\ncontagion, interventions that randomly rewire edges (making them \"longer\")\naccelerate spread. However, there are other models relevant to social\ncontagion, such as those motivated by myopic best-response in games with\nstrategic complements, in which individual's behavior is described by a\nthreshold number of adopting neighbors above which adoption occurs (i.e.,\ncomplex contagions). Recent work has argued that highly clustered, rather than\nrandom, networks facilitate spread of these complex contagions. Here we show\nthat minor modifications of prior analyses, which make them more realistic,\nreverse this result. The modification is that we allow very rarely below\nthreshold adoption, i.e., very rarely adoption occurs, where there is only one\nadopting neighbor. To model the trade-off between long and short edges we\nconsider networks that are the union of cycle-power-$k$ graphs and random\ngraphs on $n$ nodes. We study how the time to global spread changes as we\nreplace the cycle edges with (random) long ties. Allowing adoptions below\nthreshold to occur with order $1/\\sqrt{n}$ probability is enough to ensure that\nrandom rewiring accelerates spread. Simulations illustrate the robustness of\nthese results to other commonly-posited models for noisy best-response\nbehavior. We then examine empirical social networks, where we find that\nhypothetical interventions that (a) randomly rewire existing edges or (b) add\nrandom edges reduce time to spread compared with the original network or\naddition of \"short\", triad-closing edges, respectively. This substantially\nrevises conclusions about how interventions change the spread of behavior,\nsuggesting that those wanting to increase spread should induce formation of\nlong ties, rather than triad-closing ties.\n",
"title": "Long ties accelerate noisy threshold-based contagions"
}
| null | null | null | null | true | null |
11995
| null |
Default
| null | null |
null |
{
"abstract": " We show that the evolution of two-component particles governed by a\ntwo-dimensional spin-orbit lattice Hamiltonian can reveal transitions between\ntopological phases. A kink in the mean width of the particle distribution\nsignals the closing of the band gap, a prerequisite for a quantum phase\ntransition between topological phases. Furthermore, for realistic and\nexperimentally motivated Hamiltonians the density profile in topologically\nnon-trivial phases displays characteristic rings in the vicinity of the origin\nthat are absent in trivial phases. The results are expected to have immediate\napplication to systems of ultracold atoms and photonic lattices.\n",
"title": "Detecting topological transitions in two dimensions by Hamiltonian evolution"
}
| null | null | null | null | true | null |
11996
| null |
Default
| null | null |
null |
{
"abstract": " We present a search for optical bursts from the repeating fast radio burst\nFRB 121102 using simultaneous observations with the high-speed optical camera\nULTRASPEC on the 2.4-m Thai National Telescope and radio observations with the\n100-m Effelsberg Radio Telescope. A total of 13 radio bursts were detected, but\nwe found no evidence for corresponding optical bursts in our 70.7-ms frames.\nThe 5-sigma upper limit to the optical flux density during our observations is\n0.33 mJy at 767nm. This gives an upper limit for the optical burst fluence of\n0.046 Jy ms, which constrains the broadband spectral index of the burst\nemission to alpha < -0.2. Two of the radio pulses are separated by just 34 ms,\nwhich may represent an upper limit on a possible underlying periodicity (a\nrotation period typical of pulsars), or these pulses may have come from a\nsingle emission window that is a small fraction of a possible period.\n",
"title": "A search for optical bursts from the repeating fast radio burst FRB 121102"
}
| null | null | null | null | true | null |
11997
| null |
Default
| null | null |
null |
{
"abstract": " We report on the results of a de Haas-van Alphen (dHvA) measurement performed\non the recently discovered antiferromagnet URhIn$_5$ ($T_N$ = 98 K), a\n5\\textit{f}-analogue of the well studied heavy fermion antiferromagnet\nCeRhIn$_5$. The Fermi surface is found to consist of four surfaces: a roughly\nspherical pocket $\\beta$, with $F_\\beta \\simeq 0.3$ kT; a pillow-shaped closed\nsurface, $\\alpha$, with $F_\\alpha \\simeq 1.1$ kT; and two higher frequencies\n$\\gamma_1$ with $F_{\\gamma_1} \\simeq 3.2$ kT and $\\gamma_2$ with $F_{\\gamma_2}\n\\simeq 3.5$ kT that are seen only near the \\textit{c}-axis, and that may arise\non cylindrical Fermi surfaces. The measured cyclotron masses range from 1.9\n$m_e$ to 4.3 $m_e$. A simple LDA+SO calculation performed for the paramagnetic\nground state shows a very different Fermi surface topology, demonstrating a\nneed for more advanced electronic structure calculations.\n",
"title": "de Haas-van Alphen measurement of the antiferromagnet URhIn$_5$"
}
| null | null | null | null | true | null |
11998
| null |
Default
| null | null |
null |
{
"abstract": " An important yet challenging problem in understanding indoor scene is\nrecovering indoor frame structure from a monocular image. It is more difficult\nwhen occlusions and illumination vary, and object boundaries are weak. To\novercome these difficulties, a new approach based on line segment refinement\nwith two constraints is proposed. First, the line segments are refined by four\nconsecutive operations, i.e., reclassifying, connecting, fitting, and voting.\nSpecifically, misclassified line segments are revised by the reclassifying\noperation, some short line segments are joined by the connecting operation, the\nundetected key line segments are recovered by the fitting operation with the\nhelp of the vanishing points, the line segments converging on the frame are\nselected by the voting operation. Second, we construct four frame models\naccording to four classes of possible shooting angles of the monocular image,\nthe natures of all frame models are introduced via enforcing the cross ratio\nand depth constraints. The indoor frame is then constructed by fitting those\nrefined line segments with related frame model under the two constraints, which\njointly advance the accuracy of the frame. Experimental results on a collection\nof over 300 indoor images indicate that our algorithm has the capability of\nrecovering the frame from complex indoor scenes.\n",
"title": "Indoor Frame Recovery from Refined Line Segments"
}
| null | null | null | null | true | null |
11999
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we study the classic problem of fairly allocating indivisible\nitems with the extra feature that the items lie on a line. Our goal is to find\na fair allocation that is contiguous, meaning that the bundle of each agent\nforms a contiguous block on the line. While allocations satisfying the\nclassical fairness notions of proportionality, envy-freeness, and equitability\nare not guaranteed to exist even without the contiguity requirement, we show\nthe existence of contiguous allocations satisfying approximate versions of\nthese notions that do not degrade as the number of agents or items increases.\nWe also study the efficiency loss of contiguous allocations due to fairness\nconstraints.\n",
"title": "Fairly Allocating Contiguous Blocks of Indivisible Items"
}
| null | null | null | null | true | null |
12000
| null |
Default
| null | null |