title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
---|---|---|---|---|---|---|---|---|---|---|---|
GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings | 69 | icml | 23 | 16 | 2023-06-17 04:13:28.238000 | https://github.com/rusty1s/pyg_autoscale | 148 | Gnnautoscale: Scalable and expressive graph neural networks via historical embeddings | https://scholar.google.com/scholar?cluster=4526974256428451675&hl=en&as_sdt=0,5 | 4 | 2,021 |
PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning | 10 | icml | 1 | 0 | 2023-06-17 04:13:28.441000 | https://github.com/filangelos/social_rl | 6 | Psiphi-learning: Reinforcement learning with demonstrations using successor features and inverse temporal difference learning | https://scholar.google.com/scholar?cluster=673567895573287554&hl=en&as_sdt=0,5 | 4 | 2,021 |
A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups | 110 | icml | 17 | 3 | 2023-06-17 04:13:28.661000 | https://github.com/mfinzi/equivariant-MLP | 222 | A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups | https://scholar.google.com/scholar?cluster=7699207538683831568&hl=en&as_sdt=0,5 | 9 | 2,021 |
Few-Shot Conformal Prediction with Auxiliary Tasks | 24 | icml | 2 | 0 | 2023-06-17 04:13:28.864000 | https://github.com/ajfisch/few-shot-cp | 5 | Few-shot conformal prediction with auxiliary tasks | https://scholar.google.com/scholar?cluster=10162141541577160393&hl=en&as_sdt=0,5 | 0 | 2,021 |
Scalable Certified Segmentation via Randomized Smoothing | 18 | icml | 1 | 0 | 2023-06-17 04:13:29.072000 | https://github.com/eth-sri/segmentation-smoothing | 9 | Scalable certified segmentation via randomized smoothing | https://scholar.google.com/scholar?cluster=9847674407340584512&hl=en&as_sdt=0,10 | 7 | 2,021 |
Online Learning with Optimism and Delay | 17 | icml | 2 | 0 | 2023-06-17 04:13:29.275000 | https://github.com/geflaspohler/poold | 9 | Online learning with optimism and delay | https://scholar.google.com/scholar?cluster=3051720071690017995&hl=en&as_sdt=0,33 | 3 | 2,021 |
Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design | 40 | icml | 9 | 0 | 2023-06-17 04:13:29.478000 | https://github.com/ae-foster/dad | 22 | Deep adaptive design: Amortizing sequential bayesian experimental design | https://scholar.google.com/scholar?cluster=8507220836791345595&hl=en&as_sdt=0,36 | 4 | 2,021 |
Clustered Sampling: Low-Variance and Improved Representativity for Clients Selection in Federated Learning | 62 | icml | 22 | 2 | 2023-06-17 04:13:29.738000 | https://github.com/Accenture/Labs-Federated-Learning | 50 | Clustered sampling: Low-variance and improved representativity for clients selection in federated learning | https://scholar.google.com/scholar?cluster=1617025297400599136&hl=en&as_sdt=0,33 | 16 | 2,021 |
Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise | 15 | icml | 0 | 0 | 2023-06-17 04:13:29.940000 | https://github.com/spencerfrei/nn_generalization_agnostic_noise | 0 | Provable generalization of sgd-trained neural networks of any width in the presence of adversarial label noise | https://scholar.google.com/scholar?cluster=10029653979209669660&hl=en&as_sdt=0,14 | 1 | 2,021 |
Post-selection inference with HSIC-Lasso | 6 | icml | 1 | 0 | 2023-06-17 04:13:30.143000 | https://github.com/tobias-freidling/hsic-lasso-psi | 3 | Post-selection inference with HSIC-Lasso | https://scholar.google.com/scholar?cluster=10354725144319499088&hl=en&as_sdt=0,10 | 1 | 2,021 |
Variational Data Assimilation with a Learned Inverse Observation Operator | 15 | icml | 4 | 0 | 2023-06-17 04:13:30.351000 | https://github.com/googleinterns/invobs-data-assimilation | 28 | Variational data assimilation with a learned inverse observation operator | https://scholar.google.com/scholar?cluster=9123657318704968381&hl=en&as_sdt=0,5 | 3 | 2,021 |
Bayesian Quadrature on Riemannian Data Manifolds | 4 | icml | 2 | 0 | 2023-06-17 04:13:30.561000 | https://github.com/froec/BQonRDM | 8 | Bayesian quadrature on Riemannian data manifolds | https://scholar.google.com/scholar?cluster=14587892748613209913&hl=en&as_sdt=0,22 | 1 | 2,021 |
Learning Task Informed Abstractions | 27 | icml | 2 | 2 | 2023-06-17 04:13:30.764000 | https://github.com/kyonofx/tia | 11 | Learning task informed abstractions | https://scholar.google.com/scholar?cluster=2332386988369186148&hl=en&as_sdt=0,41 | 1 | 2,021 |
Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators | 11 | icml | 2 | 0 | 2023-06-17 04:13:30.972000 | https://github.com/RICE-EIC/Auto-NBA | 12 | Auto-NBA: Efficient and effective search over the joint space of networks, bitwidths, and accelerators | https://scholar.google.com/scholar?cluster=860563000728112413&hl=en&as_sdt=0,47 | 4 | 2,021 |
A Deep Reinforcement Learning Approach to Marginalized Importance Sampling with the Successor Representation | 10 | icml | 2 | 0 | 2023-06-17 04:13:31.177000 | https://github.com/sfujim/SR-DICE | 14 | A deep reinforcement learning approach to marginalized importance sampling with the successor representation | https://scholar.google.com/scholar?cluster=2623436752996151694&hl=en&as_sdt=0,5 | 1 | 2,021 |
Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning | 15 | icml | 2 | 0 | 2023-06-17 04:13:31.379000 | https://github.com/frt03/pic | 8 | Policy information capacity: Information-theoretic measure for task complexity in deep reinforcement learning | https://scholar.google.com/scholar?cluster=4831959598163320466&hl=en&as_sdt=0,5 | 0 | 2,021 |
Maximum Mean Discrepancy Test is Aware of Adversarial Attacks | 32 | icml | 5 | 2 | 2023-06-17 04:13:31.582000 | https://github.com/Sjtubrian/SAMMD | 16 | Maximum mean discrepancy test is aware of adversarial attacks | https://scholar.google.com/scholar?cluster=5133700864957699812&hl=en&as_sdt=0,5 | 2 | 2,021 |
Unsupervised Co-part Segmentation through Assembly | 11 | icml | 6 | 0 | 2023-06-17 04:13:31.784000 | https://github.com/Talegqz/unsupervised_co_part_segmentation | 40 | Unsupervised co-part segmentation through assembly | https://scholar.google.com/scholar?cluster=11164401170119653450&hl=en&as_sdt=0,5 | 1 | 2,021 |
RATT: Leveraging Unlabeled Data to Guarantee Generalization | 17 | icml | 0 | 0 | 2023-06-17 04:13:32.002000 | https://github.com/acmi-lab/ratt_generalization_bound | 6 | Ratt: Leveraging unlabeled data to guarantee generalization | https://scholar.google.com/scholar?cluster=5614969385611278866&hl=en&as_sdt=0,5 | 2 | 2,021 |
What does LIME really see in images? | 22 | icml | 1 | 0 | 2023-06-17 04:13:32.206000 | https://github.com/dgarreau/image_lime_theory | 4 | What does LIME really see in images? | https://scholar.google.com/scholar?cluster=8275490801192083940&hl=en&as_sdt=0,5 | 1 | 2,021 |
Strategic Classification in the Dark | 29 | icml | 2 | 0 | 2023-06-17 04:13:32.410000 | https://github.com/staretgicclfdark/strategic_rep | 0 | Strategic classification in the dark | https://scholar.google.com/scholar?cluster=15886223975765131668&hl=en&as_sdt=0,1 | 1 | 2,021 |
Spectral Normalisation for Deep Reinforcement Learning: An Optimisation Perspective | 25 | icml | 3 | 1 | 2023-06-17 04:13:32.612000 | https://github.com/floringogianu/snrl | 9 | Spectral normalisation for deep reinforcement learning: an optimisation perspective | https://scholar.google.com/scholar?cluster=1887962783436917172&hl=en&as_sdt=0,3 | 2 | 2,021 |
Active Slices for Sliced Stein Discrepancy | 2 | icml | 1 | 0 | 2023-06-17 04:13:32.815000 | https://github.com/WenboGong/Sliced_Kernelized_Stein_Discrepancy | 1 | Active Slices for Sliced Stein Discrepancy | https://scholar.google.com/scholar?cluster=9280564173167932948&hl=en&as_sdt=0,5 | 1 | 2,021 |
On the Problem of Underranking in Group-Fair Ranking | 10 | icml | 1 | 0 | 2023-06-17 04:13:33.018000 | https://github.com/sruthigorantla/FIGR | 0 | On the problem of underranking in group-fair ranking | https://scholar.google.com/scholar?cluster=15412568586111712326&hl=en&as_sdt=0,44 | 1 | 2,021 |
MARINA: Faster Non-Convex Distributed Learning with Compression | 56 | icml | 1 | 0 | 2023-06-17 04:13:33.221000 | https://github.com/burlachenkok/marina | 5 | MARINA: Faster non-convex distributed learning with compression | https://scholar.google.com/scholar?cluster=6014843650767988680&hl=en&as_sdt=0,5 | 2 | 2,021 |
Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline | 115 | icml | 22 | 0 | 2023-06-17 04:13:33.424000 | https://github.com/princeton-vl/SimpleView | 129 | Revisiting point cloud shape classification with a simple and effective baseline | https://scholar.google.com/scholar?cluster=17283112957651231327&hl=en&as_sdt=0,39 | 8 | 2,021 |
Dissecting Supervised Contrastive Learning | 61 | icml | 0 | 1 | 2023-06-17 04:13:33.627000 | https://github.com/plus-rkwitt/py_supcon_vs_ce | 0 | Dissecting supervised contrastive learning | https://scholar.google.com/scholar?cluster=15842603334888826339&hl=en&as_sdt=0,25 | 2 | 2,021 |
Oops I Took A Gradient: Scalable Sampling for Discrete Distributions | 52 | icml | 12 | 2 | 2023-06-17 04:13:33.831000 | https://github.com/wgrathwohl/GWG_release | 43 | Oops i took a gradient: Scalable sampling for discrete distributions | https://scholar.google.com/scholar?cluster=6540555600529946476&hl=en&as_sdt=0,39 | 4 | 2,021 |
Detecting Rewards Deterioration in Episodic Reinforcement Learning | 7 | icml | 0 | 0 | 2023-06-17 04:13:34.033000 | https://github.com/ido90/Rewards-Deterioration-Detection | 2 | Detecting rewards deterioration in episodic reinforcement learning | https://scholar.google.com/scholar?cluster=6107338977661068725&hl=en&as_sdt=0,14 | 1 | 2,021 |
Operationalizing Complex Causes: A Pragmatic View of Mediation | 4 | icml | 0 | 0 | 2023-06-17 04:13:34.236000 | https://github.com/limorigu/ComplexCauses | 4 | Operationalizing complex causes: A pragmatic view of mediation | https://scholar.google.com/scholar?cluster=15565452123708375262&hl=en&as_sdt=0,5 | 2 | 2,021 |
Distribution-Free Calibration Guarantees for Histogram Binning without Sample Splitting | 22 | icml | 5 | 0 | 2023-06-17 04:13:34.439000 | https://github.com/aigen/df-posthoc-calibration | 31 | Distribution-free calibration guarantees for histogram binning without sample splitting | https://scholar.google.com/scholar?cluster=1595974871643501822&hl=en&as_sdt=0,44 | 1 | 2,021 |
Correcting Exposure Bias for Link Recommendation | 21 | icml | 1 | 0 | 2023-06-17 04:13:34.642000 | https://github.com/shantanu95/exposure-bias-link-rec | 6 | Correcting exposure bias for link recommendation | https://scholar.google.com/scholar?cluster=8695845050687290736&hl=en&as_sdt=0,5 | 2 | 2,021 |
The Heavy-Tail Phenomenon in SGD | 65 | icml | 1 | 0 | 2023-06-17 04:13:34.845000 | https://github.com/umutsimsekli/sgd_ht | 1 | The heavy-tail phenomenon in SGD | https://scholar.google.com/scholar?cluster=11485380306468946114&hl=en&as_sdt=0,5 | 1 | 2,021 |
Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks | 17 | icml | 3 | 0 | 2023-06-17 04:13:35.048000 | https://github.com/AI-secure/Knowledge-Enhanced-Machine-Learning-Pipeline | 10 | Knowledge enhanced machine learning pipeline against diverse adversarial attacks | https://scholar.google.com/scholar?cluster=7636701886743640050&hl=en&as_sdt=0,33 | 2 | 2,021 |
Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration | 14 | icml | 0 | 1 | 2023-06-17 04:13:35.250000 | https://github.com/seungyulhan/dac | 4 | Diversity actor-critic: Sample-aware entropy regularization for sample-efficient exploration | https://scholar.google.com/scholar?cluster=1891726031922597340&hl=en&as_sdt=0,11 | 1 | 2,021 |
Grounding Language to Entities and Dynamics for Generalization in Reinforcement Learning | 22 | icml | 4 | 0 | 2023-06-17 04:13:35.453000 | https://github.com/ahjwang/messenger-emma | 17 | Grounding language to entities and dynamics for generalization in reinforcement learning | https://scholar.google.com/scholar?cluster=14975248165561232256&hl=en&as_sdt=0,5 | 1 | 2,021 |
SPECTRE: defending against backdoor attacks using robust statistics | 47 | icml | 5 | 4 | 2023-06-17 04:13:35.657000 | https://github.com/SewoongLab/spectre-defense | 15 | Spectre: Defending against backdoor attacks using robust statistics | https://scholar.google.com/scholar?cluster=17952878874994811152&hl=en&as_sdt=0,15 | 2 | 2,021 |
Improving Molecular Graph Neural Network Explainability with Orthonormalization and Induced Sparsity | 26 | icml | 0 | 0 | 2023-06-17 04:13:35.859000 | https://github.com/bayer-science-for-a-better-life/graph-attribution | 7 | Improving molecular graph neural network explainability with orthonormalization and induced sparsity | https://scholar.google.com/scholar?cluster=2317141663535501848&hl=en&as_sdt=0,5 | 1 | 2,021 |
Optimizing Black-box Metrics with Iterative Example Weighting | 5 | icml | 0 | 0 | 2023-06-17 04:13:36.063000 | https://github.com/koyejolab/fweg | 2 | Optimizing black-box metrics with iterative example weighting | https://scholar.google.com/scholar?cluster=2459105363066716864&hl=en&as_sdt=0,47 | 1 | 2,021 |
Trees with Attention for Set Prediction Tasks | 0 | icml | 1 | 2 | 2023-06-17 04:13:36.267000 | https://github.com/TAU-MLwell/Set-Tree | 10 | Trees with Attention for Set Prediction Tasks | https://scholar.google.com/scholar?cluster=8916867411595092231&hl=en&as_sdt=0,5 | 3 | 2,021 |
MC-LSTM: Mass-Conserving LSTM | 41 | icml | 12 | 0 | 2023-06-17 04:13:36.470000 | https://github.com/ml-jku/mc-lstm | 32 | Mc-lstm: Mass-conserving lstm | https://scholar.google.com/scholar?cluster=4541460761992496905&hl=en&as_sdt=0,46 | 4 | 2,021 |
Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes | 20 | icml | 0 | 0 | 2023-06-17 04:13:36.674000 | https://github.com/anonymous-code-0/SteerableCNP | 0 | Equivariant learning of stochastic fields: Gaussian processes and steerable conditional neural processes | https://scholar.google.com/scholar?cluster=12538236800312580419&hl=en&as_sdt=0,48 | 1 | 2,021 |
Off-Belief Learning | 42 | icml | 7 | 3 | 2023-06-17 04:13:36.877000 | https://github.com/facebookresearch/off-belief-learning | 36 | Off-belief learning | https://scholar.google.com/scholar?cluster=9880359834919449179&hl=en&as_sdt=0,39 | 9 | 2,021 |
Generalizable Episodic Memory for Deep Reinforcement Learning | 22 | icml | 4 | 1 | 2023-06-17 04:13:37.079000 | https://github.com/MouseHu/GEM | 10 | Generalizable episodic memory for deep reinforcement learning | https://scholar.google.com/scholar?cluster=2172996156668096387&hl=en&as_sdt=0,47 | 2 | 2,021 |
STRODE: Stochastic Boundary Ordinary Differential Equation | 5 | icml | 2 | 1 | 2023-06-17 04:13:37.282000 | https://github.com/Waffle-Liu/STRODE | 13 | Strode: Stochastic boundary ordinary differential equation | https://scholar.google.com/scholar?cluster=3501265210663364162&hl=en&as_sdt=0,5 | 3 | 2,021 |
Generative Adversarial Transformers | 128 | icml | 142 | 14 | 2023-06-17 04:13:37.484000 | https://github.com/dorarad/gansformer | 1,272 | Generative adversarial transformers | https://scholar.google.com/scholar?cluster=2292407280859337870&hl=en&as_sdt=0,15 | 38 | 2,021 |
Selecting Data Augmentation for Simulating Interventions | 47 | icml | 3 | 0 | 2023-06-17 04:13:37.713000 | https://github.com/AMLab-Amsterdam/DataAugmentationInterventions | 25 | Selecting data augmentation for simulating interventions | https://scholar.google.com/scholar?cluster=3812556752145273819&hl=en&as_sdt=0,11 | 6 | 2,021 |
Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning | 62 | icml | 0 | 0 | 2023-06-17 04:13:37.916000 | https://github.com/AlexImmer/marglik | 5 | Scalable marginal likelihood estimation for model selection in deep learning | https://scholar.google.com/scholar?cluster=11062863403728072122&hl=en&as_sdt=0,5 | 3 | 2,021 |
Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization | 3 | icml | 1 | 0 | 2023-06-17 04:13:38.118000 | https://github.com/HeddaCohenIndelman/PerturbedStructuredPredictorsDirect | 4 | Learning randomly perturbed structured predictors for direct loss minimization | https://scholar.google.com/scholar?cluster=6521871878208082553&hl=en&as_sdt=0,50 | 1 | 2,021 |
Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning | 33 | icml | 11 | 1 | 2023-06-17 04:13:38.320000 | https://github.com/shariqiqbal2810/REFIL | 50 | Randomized entity-wise factorization for multi-agent reinforcement learning | https://scholar.google.com/scholar?cluster=4592647130622480373&hl=en&as_sdt=0,25 | 2 | 2,021 |
Instance-Optimal Compressed Sensing via Posterior Sampling | 28 | icml | 5 | 1 | 2023-06-17 04:13:38.523000 | https://github.com/ajiljalal/code-cs-fairness | 17 | Instance-optimal compressed sensing via posterior sampling | https://scholar.google.com/scholar?cluster=13669430670080066426&hl=en&as_sdt=0,28 | 3 | 2,021 |
Fairness for Image Generation with Uncertain Sensitive Attributes | 22 | icml | 5 | 1 | 2023-06-17 04:13:38.733000 | https://github.com/ajiljalal/code-cs-fairness | 17 | Fairness for image generation with uncertain sensitive attributes | https://scholar.google.com/scholar?cluster=8101927413528099299&hl=en&as_sdt=0,32 | 3 | 2,021 |
In-Database Regression in Input Sparsity Time | 9 | icml | 0 | 0 | 2023-06-17 04:13:38.935000 | https://github.com/AnonymousFireman/ICML_code | 0 | In-Database Regression in Input Sparsity Time | https://scholar.google.com/scholar?cluster=4719057238276619749&hl=en&as_sdt=0,5 | 1 | 2,021 |
Parallel and Flexible Sampling from Autoregressive Models via Langevin Dynamics | 18 | icml | 4 | 1 | 2023-06-17 04:13:39.137000 | https://github.com/vivjay30/pnf-sampling | 19 | Parallel and flexible sampling from autoregressive models via langevin dynamics | https://scholar.google.com/scholar?cluster=6113516044812949338&hl=en&as_sdt=0,5 | 3 | 2,021 |
Quantifying Ignorance in Individual-Level Causal-Effect Estimates under Hidden Confounding | 32 | icml | 2 | 1 | 2023-06-17 04:13:39.339000 | https://github.com/anndvision/quince | 20 | Quantifying ignorance in individual-level causal-effect estimates under hidden confounding | https://scholar.google.com/scholar?cluster=4021084687511550592&hl=en&as_sdt=0,34 | 1 | 2,021 |
Bilevel Optimization: Convergence Analysis and Enhanced Design | 95 | icml | 12 | 0 | 2023-06-17 04:13:39.541000 | https://github.com/junjieyang97/stocbio_hp | 35 | Bilevel optimization: Convergence analysis and enhanced design | https://scholar.google.com/scholar?cluster=14240180646297063660&hl=en&as_sdt=0,7 | 1 | 2,021 |
Self-Damaging Contrastive Learning | 35 | icml | 5 | 1 | 2023-06-17 04:13:39.743000 | https://github.com/VITA-Group/SDCLR | 56 | Self-damaging contrastive learning | https://scholar.google.com/scholar?cluster=16794370267246676640&hl=en&as_sdt=0,5 | 3 | 2,021 |
Prioritized Level Replay | 80 | icml | 15 | 2 | 2023-06-17 04:13:39.945000 | https://github.com/facebookresearch/level-replay | 67 | Prioritized level replay | https://scholar.google.com/scholar?cluster=18011658212512846682&hl=en&as_sdt=0,44 | 9 | 2,021 |
Streaming and Distributed Algorithms for Robust Column Subset Selection | 3 | icml | 0 | 0 | 2023-06-17 04:13:40.148000 | https://github.com/11hifish/robust_css | 0 | Streaming and distributed algorithms for robust column subset selection | https://scholar.google.com/scholar?cluster=14557967983043893613&hl=en&as_sdt=0,31 | 1 | 2,021 |
Adversarial Option-Aware Hierarchical Imitation Learning | 8 | icml | 4 | 1 | 2023-06-17 04:13:40.351000 | https://github.com/id9502/Option-GAIL | 12 | Adversarial option-aware hierarchical imitation learning | https://scholar.google.com/scholar?cluster=15905939393304829332&hl=en&as_sdt=0,23 | 2 | 2,021 |
Provable Lipschitz Certification for Generative Models | 11 | icml | 0 | 5 | 2023-06-17 04:13:40.555000 | https://github.com/revbucket/lipMIP | 12 | Provable Lipschitz certification for generative models | https://scholar.google.com/scholar?cluster=12680803124320000894&hl=en&as_sdt=0,33 | 2 | 2,021 |
A Differentiable Point Process with Its Application to Spiking Neural Networks | 2 | icml | 0 | 0 | 2023-06-17 04:13:40.757000 | https://github.com/ibm-research-tokyo/diffsnn | 18 | A differentiable point process with its application to spiking neural networks | https://scholar.google.com/scholar?cluster=18295729593563933234&hl=en&as_sdt=0,47 | 4 | 2,021 |
SKIing on Simplices: Kernel Interpolation on the Permutohedral Lattice for Scalable Gaussian Processes | 8 | icml | 2 | 1 | 2023-06-17 04:13:40.959000 | https://github.com/activatedgeek/simplex-gp | 6 | Skiing on simplices: Kernel interpolation on the permutohedral lattice for scalable gaussian processes | https://scholar.google.com/scholar?cluster=612518699030619789&hl=en&as_sdt=0,5 | 4 | 2,021 |
Variational Auto-Regressive Gaussian Processes for Continual Learning | 16 | icml | 4 | 0 | 2023-06-17 04:13:41.161000 | https://github.com/uber-research/vargp | 21 | Variational auto-regressive gaussian processes for continual learning | https://scholar.google.com/scholar?cluster=11399430121097777886&hl=en&as_sdt=0,34 | 3 | 2,021 |
Learning from History for Byzantine Robust Optimization | 82 | icml | 2 | 0 | 2023-06-17 04:13:41.364000 | https://github.com/epfml/byzantine-robust-optimizer | 15 | Learning from history for byzantine robust optimization | https://scholar.google.com/scholar?cluster=3091706733962162017&hl=en&as_sdt=0,10 | 6 | 2,021 |
Non-Negative Bregman Divergence Minimization for Deep Direct Density Ratio Estimation | 20 | icml | 2 | 1 | 2023-06-17 04:13:41.566000 | https://github.com/MasaKat0/D3RE | 1 | Non-negative bregman divergence minimization for deep direct density ratio estimation | https://scholar.google.com/scholar?cluster=10575793668423594372&hl=en&as_sdt=0,24 | 2 | 2,021 |
Prior Image-Constrained Reconstruction using Style-Based Generative Models | 17 | icml | 2 | 0 | 2023-06-17 04:13:41.769000 | https://github.com/comp-imaging-sci/pic-recon | 8 | Prior image-constrained reconstruction using style-based generative models | https://scholar.google.com/scholar?cluster=11782166038775253980&hl=en&as_sdt=0,44 | 1 | 2,021 |
Self Normalizing Flows | 6 | icml | 8 | 0 | 2023-06-17 04:13:41.970000 | https://github.com/akandykeller/SelfNormalizingFlows | 66 | Self normalizing flows | https://scholar.google.com/scholar?cluster=16907220136527385464&hl=en&as_sdt=0,5 | 3 | 2,021 |
Markpainting: Adversarial Machine Learning meets Inpainting | 8 | icml | 2 | 3 | 2023-06-17 04:13:42.173000 | https://github.com/iliaishacked/markpainting | 20 | Markpainting: Adversarial machine learning meets inpainting | https://scholar.google.com/scholar?cluster=7879607124420125546&hl=en&as_sdt=0,32 | 3 | 2,021 |
Neural SDEs as Infinite-Dimensional GANs | 64 | icml | 157 | 16 | 2023-06-17 04:13:42.375000 | https://github.com/google-research/torchsde | 1,277 | Neural sdes as infinite-dimensional gans | https://scholar.google.com/scholar?cluster=5987016743553578663&hl=en&as_sdt=0,33 | 35 | 2,021 |
GRAD-MATCH: Gradient Matching based Data Subset Selection for Efficient Deep Model Training | 84 | icml | 44 | 27 | 2023-06-17 04:13:42.577000 | https://github.com/decile-team/cords | 272 | Grad-match: Gradient matching based data subset selection for efficient deep model training | https://scholar.google.com/scholar?cluster=8588416693456815954&hl=en&as_sdt=0,33 | 10 | 2,021 |
Self-Improved Retrosynthetic Planning | 13 | icml | 0 | 2 | 2023-06-17 04:13:42.779000 | https://github.com/junsu-kim97/self_improved_retro | 18 | Self-improved retrosynthetic planning | https://scholar.google.com/scholar?cluster=18216216524696929776&hl=en&as_sdt=0,33 | 1 | 2,021 |
Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech | 258 | icml | 911 | 108 | 2023-06-17 04:13:42.983000 | https://github.com/jaywalnut310/vits | 4,387 | Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech | https://scholar.google.com/scholar?cluster=12414540587288194560&hl=en&as_sdt=0,22 | 42 | 2,021 |
A Policy Gradient Algorithm for Learning to Learn in Multiagent Reinforcement Learning | 38 | icml | 5 | 1 | 2023-06-17 04:13:43.186000 | https://github.com/dkkim93/meta-mapg | 25 | A policy gradient algorithm for learning to learn in multiagent reinforcement learning | https://scholar.google.com/scholar?cluster=9520170531989775101&hl=en&as_sdt=0,5 | 2 | 2,021 |
Unsupervised Skill Discovery with Bottleneck Option Learning | 16 | icml | 2 | 0 | 2023-06-17 04:13:43.388000 | https://github.com/jaekyeom/IBOL | 27 | Unsupervised skill discovery with bottleneck option learning | https://scholar.google.com/scholar?cluster=2474291061858386960&hl=en&as_sdt=0,33 | 2 | 2,021 |
ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision | 665 | icml | 181 | 51 | 2023-06-17 04:13:43.591000 | https://github.com/dandelin/vilt | 1,094 | Vilt: Vision-and-language transformer without convolution or region supervision | https://scholar.google.com/scholar?cluster=12987945369444025427&hl=en&as_sdt=0,44 | 15 | 2,021 |
CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and Patients | 62 | icml | 9 | 2 | 2023-06-17 04:13:43.794000 | https://github.com/danikiyasseh/CLOCS | 27 | Clocs: Contrastive learning of cardiac signals across space, time, and patients | https://scholar.google.com/scholar?cluster=16333919134757348473&hl=en&as_sdt=0,14 | 4 | 2,021 |
WILDS: A Benchmark of in-the-Wild Distribution Shifts | 719 | icml | 109 | 5 | 2023-06-17 04:13:43.996000 | https://github.com/p-lambda/wilds | 482 | Wilds: A benchmark of in-the-wild distribution shifts | https://scholar.google.com/scholar?cluster=11557463912604627857&hl=en&as_sdt=0,5 | 20 | 2,021 |
Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable? | 30 | icml | 0 | 0 | 2023-06-17 04:13:44.198000 | https://github.com/TUM-DAML/dbu-robustness | 21 | Evaluating robustness of predictive uncertainty estimation: Are Dirichlet-based models reliable? | https://scholar.google.com/scholar?cluster=5773054947592188875&hl=en&as_sdt=0,5 | 2 | 2,021 |
Kernel Stein Discrepancy Descent | 19 | icml | 2 | 0 | 2023-06-17 04:13:44.400000 | https://github.com/pierreablin/ksddescent | 12 | Kernel stein discrepancy descent | https://scholar.google.com/scholar?cluster=5389096233704622104&hl=en&as_sdt=0,33 | 2 | 2,021 |
Active Testing: Sample-Efficient Model Evaluation | 22 | icml | 6 | 1 | 2023-06-17 04:13:44.602000 | https://github.com/jlko/active-testing | 20 | Active testing: Sample-efficient model evaluation | https://scholar.google.com/scholar?cluster=9561072418583325722&hl=en&as_sdt=0,5 | 1 | 2,021 |
Offline Reinforcement Learning with Fisher Divergence Critic Regularization | 154 | icml | 7,322 | 1,026 | 2023-06-17 04:13:44.804000 | https://github.com/google-research/google-research | 29,791 | Offline reinforcement learning with fisher divergence critic regularization | https://scholar.google.com/scholar?cluster=4410288794309638335&hl=en&as_sdt=0,5 | 727 | 2,021 |
Out-of-Distribution Generalization via Risk Extrapolation (REx) | 443 | icml | 4 | 1 | 2023-06-17 04:13:45.007000 | https://github.com/capybaralet/REx_code_release | 60 | Out-of-distribution generalization via risk extrapolation (rex) | https://scholar.google.com/scholar?cluster=10054528338033032937&hl=en&as_sdt=0,25 | 2 | 2,021 |
Near-Optimal Confidence Sequences for Bounded Random Variables | 3 | icml | 1 | 0 | 2023-06-17 04:13:45.210000 | https://github.com/enosair/bentkus_conf_seq | 1 | Near-optimal confidence sequences for bounded random variables | https://scholar.google.com/scholar?cluster=1224018117329927923&hl=en&as_sdt=0,10 | 3 | 2,021 |
A Scalable Second Order Method for Ill-Conditioned Matrix Completion from Few Samples | 11 | icml | 7 | 0 | 2023-06-17 04:13:45.412000 | https://github.com/ckuemmerle/MatrixIRLS | 10 | A scalable second order method for ill-conditioned matrix completion from few samples | https://scholar.google.com/scholar?cluster=9201585357486239881&hl=en&as_sdt=0,15 | 2 | 2,021 |
ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks | 118 | icml | 16 | 4 | 2023-06-17 04:13:45.615000 | https://github.com/SamsungLabs/ASAM | 111 | Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks | https://scholar.google.com/scholar?cluster=8550448363439632053&hl=en&as_sdt=0,31 | 5 | 2,021 |
Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix | 28 | icml | 0 | 0 | 2023-06-17 04:13:45.844000 | https://github.com/gdisag/gradient_disaggregation | 12 | Gradient disaggregation: Breaking privacy in federated learning by reconstructing the user participant matrix | https://scholar.google.com/scholar?cluster=1910992678848824138&hl=en&as_sdt=0,5 | 1 | 2,021 |
Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification | 14 | icml | 0 | 0 | 2023-06-17 04:13:46.046000 | https://github.com/movinghoon/ESFR | 12 | Unsupervised embedding adaptation via early-stage feature reconstruction for few-shot classification | https://scholar.google.com/scholar?cluster=16796057083006115935&hl=en&as_sdt=0,5 | 1 | 2,021 |
Continual Learning in the Teacher-Student Setup: Impact of Task Similarity | 29 | icml | 0 | 1 | 2023-06-17 04:13:46.248000 | https://github.com/seblee97/student_teacher_catastrophic | 4 | Continual learning in the teacher-student setup: Impact of task similarity | https://scholar.google.com/scholar?cluster=4325632592050646056&hl=en&as_sdt=0,11 | 2 | 2,021 |
SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning | 137 | icml | 27 | 1 | 2023-06-17 04:13:46.451000 | https://github.com/pokaxpoka/sunrise | 110 | Sunrise: A simple unified framework for ensemble learning in deep reinforcement learning | https://scholar.google.com/scholar?cluster=8840831494454574191&hl=en&as_sdt=0,5 | 6 | 2,021 |
PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training | 76 | icml | 17 | 6 | 2023-06-17 04:13:46.654000 | https://github.com/rll-research/bpref | 76 | Pebble: Feedback-efficient interactive reinforcement learning via relabeling experience and unsupervised pre-training | https://scholar.google.com/scholar?cluster=9254305801075741995&hl=en&as_sdt=0,43 | 0 | 2,021 |
Stability and Generalization of Stochastic Gradient Methods for Minimax Problems | 21 | icml | 0 | 0 | 2023-06-17 04:13:46.856000 | https://github.com/zhenhuan-yang/minimax-stability | 5 | Stability and generalization of stochastic gradient methods for minimax problems | https://scholar.google.com/scholar?cluster=5282146573067352151&hl=en&as_sdt=0,32 | 1 | 2,021 |
Better Training using Weight-Constrained Stochastic Dynamics | 2 | icml | 0 | 0 | 2023-06-17 04:13:47.058000 | https://github.com/TiffanyVlaar/ConstrainedNNtraining | 4 | Better Training using Weight-Constrained Stochastic Dynamics | https://scholar.google.com/scholar?cluster=16942829728118781879&hl=en&as_sdt=0,5 | 2 | 2,021 |
Globally-Robust Neural Networks | 72 | icml | 4 | 1 | 2023-06-17 04:13:47.261000 | https://github.com/klasleino/gloro | 25 | Globally-robust neural networks | https://scholar.google.com/scholar?cluster=8564874255784830612&hl=en&as_sdt=0,5 | 2 | 2,021 |
Strategic Classification Made Practical | 25 | icml | 0 | 0 | 2023-06-17 04:13:47.462000 | https://github.com/SagiLevanon1/scmp | 5 | Strategic classification made practical | https://scholar.google.com/scholar?cluster=6308861918899589533&hl=en&as_sdt=0,10 | 1 | 2,021 |
Improved, Deterministic Smoothing for L_1 Certified Robustness | 20 | icml | 0 | 0 | 2023-06-17 04:13:47.665000 | https://github.com/alevine0/smoothingSplittingNoise | 3 | Improved, deterministic smoothing for L_1 certified robustness | https://scholar.google.com/scholar?cluster=4413252390109069610&hl=en&as_sdt=0,33 | 1 | 2,021 |
BASE Layers: Simplifying Training of Large, Sparse Models | 86 | icml | 5,878 | 1,031 | 2023-06-17 04:13:47.868000 | https://github.com/pytorch/fairseq | 26,483 | Base layers: Simplifying training of large, sparse models | https://scholar.google.com/scholar?cluster=10892687538376450252&hl=en&as_sdt=0,19 | 411 | 2,021 |
A Free Lunch From ANN: Towards Efficient, Accurate Spiking Neural Networks Calibration | 84 | icml | 12 | 5 | 2023-06-17 04:13:48.070000 | https://github.com/yhhhli/SNN_Calibration | 70 | A free lunch from ANN: Towards efficient, accurate spiking neural networks calibration | https://scholar.google.com/scholar?cluster=15407151931731425738&hl=en&as_sdt=0,11 | 3 | 2,021 |
Ditto: Fair and Robust Federated Learning Through Personalization | 340 | icml | 28 | 1 | 2023-06-17 04:13:48.272000 | https://github.com/litian96/ditto | 100 | Ditto: Fair and robust federated learning through personalization | https://scholar.google.com/scholar?cluster=11515326237813489969&hl=en&as_sdt=0,5 | 2 | 2,021 |
Provably End-to-end Label-noise Learning without Anchor Points | 58 | icml | 0 | 0 | 2023-06-17 04:13:48.474000 | https://github.com/xuefeng-li1/Provably-end-to-end-label-noise-learning-without-anchor-points | 10 | Provably end-to-end label-noise learning without anchor points | https://scholar.google.com/scholar?cluster=9258083582460233447&hl=en&as_sdt=0,15 | 1 | 2,021 |
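The table's schema (title, citations_google_scholar, conference, forks, issues, lastModified, repo_url, stars, title_google_scholar, url_google_scholar, watchers, year) supports simple ad-hoc queries once the rows are parsed. A minimal sketch using only the standard library, with two rows transcribed from the table above and a reduced set of columns for brevity:

```python
# Two rows transcribed from the table above (subset of columns for brevity).
rows = [
    {"title": "Generative Adversarial Transformers",
     "citations_google_scholar": 128, "stars": 1272, "year": 2021},
    {"title": "WILDS: A Benchmark of in-the-Wild Distribution Shifts",
     "citations_google_scholar": 719, "stars": 482, "year": 2021},
]

# Ad-hoc query: papers with at least 500 Scholar citations, most-starred first.
popular = sorted(
    (r for r in rows if r["citations_google_scholar"] >= 500),
    key=lambda r: r["stars"],
    reverse=True,
)
print([r["title"] for r in popular])
```

The same filter-and-sort pattern extends to any of the numeric columns (forks, issues, watchers) once the full table is parsed into a list of dicts.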