title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
---|---|---|---|---|---|---|---|---|---|---|---|
Barlow Twins: Self-Supervised Learning via Redundancy Reduction | 1,208 | icml | 122 | 8 | 2023-06-17 04:14:29.523000 | https://github.com/facebookresearch/barlowtwins | 886 | Barlow twins: Self-supervised learning via redundancy reduction | https://scholar.google.com/scholar?cluster=5159677840794766125&hl=en&as_sdt=0,47 | 28 | 2,021 |
You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling | 10 | icml | 3 | 2 | 2023-06-17 04:14:29.769000 | https://github.com/mlpen/yoso | 13 | You only sample (almost) once: Linear cost self-attention via bernoulli sampling | https://scholar.google.com/scholar?cluster=11877607783928250360&hl=en&as_sdt=0,10 | 2 | 2,021 |
DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning | 67 | icml | 477 | 18 | 2023-06-17 04:14:29.972000 | https://github.com/kwai/DouZero | 3,324 | Douzero: Mastering doudizhu with self-play deep reinforcement learning | https://scholar.google.com/scholar?cluster=10717987879996790788&hl=en&as_sdt=0,33 | 44 | 2,021 |
DORO: Distributional and Outlier Robust Optimization | 27 | icml | 4 | 0 | 2023-06-17 04:14:30.174000 | https://github.com/RuntianZ/doro | 25 | Doro: Distributional and outlier robust optimization | https://scholar.google.com/scholar?cluster=7792478456437572549&hl=en&as_sdt=0,6 | 2 | 2,021 |
Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons | 31 | icml | 6 | 0 | 2023-06-17 04:14:30.377000 | https://github.com/zbh2047/L_inf-dist-net | 38 | Towards certifying l-infinity robustness using neural networks with l-inf-dist neurons | https://scholar.google.com/scholar?cluster=6201420149183682924&hl=en&as_sdt=0,5 | 2 | 2,021 |
Efficient Lottery Ticket Finding: Less Data is More | 38 | icml | 4 | 0 | 2023-06-17 04:14:30.580000 | https://github.com/VITA-Group/PrAC-LTH | 24 | Efficient lottery ticket finding: Less data is more | https://scholar.google.com/scholar?cluster=9030177952981756712&hl=en&as_sdt=0,14 | 8 | 2,021 |
Robust Policy Gradient against Strong Data Corruption | 22 | icml | 0 | 0 | 2023-06-17 04:14:30.783000 | https://github.com/zhangxz1123/FilteredPolicyGradient | 4 | Robust policy gradient against strong data corruption | https://scholar.google.com/scholar?cluster=5709291198914313258&hl=en&as_sdt=0,47 | 2 | 2,021 |
PAPRIKA: Private Online False Discovery Rate Control | 5 | icml | 0 | 0 | 2023-06-17 04:14:30.985000 | https://github.com/wanrongz/PAPRIKA | 6 | Paprika: Private online false discovery rate control | https://scholar.google.com/scholar?cluster=16053819406696763043&hl=en&as_sdt=0,44 | 2 | 2,021 |
Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation | 12 | icml | 1 | 1 | 2023-06-17 04:14:31.187000 | https://github.com/AI-secure/PSBA | 5 | Progressive-scale boundary blackbox attack via projective gradient estimation | https://scholar.google.com/scholar?cluster=2561734592069193549&hl=en&as_sdt=0,11 | 2 | 2,021 |
Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization | 44 | icml | 0 | 0 | 2023-06-17 04:14:31.389000 | https://github.com/YivanZhang/lio | 9 | Learning noise transition matrix from only noisy labels via total variation regularization | https://scholar.google.com/scholar?cluster=14671082055157503187&hl=en&as_sdt=0,34 | 2 | 2,021 |
Quantile Bandits for Best Arms Identification | 9 | icml | 0 | 0 | 2023-06-17 04:14:31.591000 | https://github.com/Mengyanz/QSAR | 0 | Quantile bandits for best arms identification | https://scholar.google.com/scholar?cluster=6809249853640844054&hl=en&as_sdt=0,5 | 2 | 2,021 |
iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients | 32 | icml | 2 | 2 | 2023-06-17 04:14:31.794000 | https://github.com/MiaoZhang0525/iDARTS | 9 | idarts: Differentiable architecture search with stochastic implicit gradients | https://scholar.google.com/scholar?cluster=2918201960391178882&hl=en&as_sdt=0,47 | 2 | 2,021 |
Average-Reward Off-Policy Policy Evaluation with Function Approximation | 23 | icml | 658 | 6 | 2023-06-17 04:14:31.997000 | https://github.com/ShangtongZhang/DeepRL | 2,943 | Average-reward off-policy policy evaluation with function approximation | https://scholar.google.com/scholar?cluster=12042728594024517731&hl=en&as_sdt=0,10 | 93 | 2,021 |
MetaCURE: Meta Reinforcement Learning with Empowerment-Driven Exploration | 16 | icml | 3 | 0 | 2023-06-17 04:14:32.201000 | https://github.com/NagisaZj/MetaCURE-Public | 12 | Metacure: Meta reinforcement learning with empowerment-driven exploration | https://scholar.google.com/scholar?cluster=8017350448991384435&hl=en&as_sdt=0,5 | 2 | 2,021 |
World Model as a Graph: Learning Latent Landmarks for Planning | 41 | icml | 2 | 0 | 2023-06-17 04:14:32.404000 | https://github.com/LunjunZhang/world-model-as-a-graph | 53 | World model as a graph: Learning latent landmarks for planning | https://scholar.google.com/scholar?cluster=11617385762396360333&hl=en&as_sdt=0,5 | 1 | 2,021 |
Breaking the Deadly Triad with a Target Network | 29 | icml | 658 | 6 | 2023-06-17 04:14:32.607000 | https://github.com/ShangtongZhang/DeepRL | 2,943 | Breaking the deadly triad with a target network | https://scholar.google.com/scholar?cluster=3294420755935359524&hl=en&as_sdt=0,5 | 93 | 2,021 |
Dataset Condensation with Differentiable Siamese Augmentation | 82 | icml | 73 | 0 | 2023-06-17 04:14:32.809000 | https://github.com/VICO-UoE/DatasetCondensation | 331 | Dataset condensation with differentiable siamese augmentation | https://scholar.google.com/scholar?cluster=14949848395042620640&hl=en&as_sdt=0,5 | 9 | 2,021 |
Calibrate Before Use: Improving Few-shot Performance of Language Models | 366 | icml | 42 | 3 | 2023-06-17 04:14:33.012000 | https://github.com/tonyzhaozh/few-shot-learning | 273 | Calibrate before use: Improving few-shot performance of language models | https://scholar.google.com/scholar?cluster=8877771337173887679&hl=en&as_sdt=0,5 | 5 | 2,021 |
Few-Shot Neural Architecture Search | 71 | icml | 7 | 3 | 2023-06-17 04:14:33.215000 | https://github.com/aoiang/few-shot-NAS | 39 | Few-shot neural architecture search | https://scholar.google.com/scholar?cluster=668653762741709836&hl=en&as_sdt=0,5 | 4 | 2,021 |
How Framelets Enhance Graph Neural Networks | 41 | icml | 13 | 0 | 2023-06-17 04:14:33.416000 | https://github.com/YuGuangWang/UFG | 30 | How framelets enhance graph neural networks | https://scholar.google.com/scholar?cluster=13922049936410780570&hl=en&as_sdt=0,44 | 2 | 2,021 |
Probabilistic Sequential Shrinking: A Best Arm Identification Algorithm for Stochastic Bandits with Corruptions | 6 | icml | 0 | 0 | 2023-06-17 04:14:33.619000 | https://github.com/zixinzh/2021-ICML | 0 | Probabilistic sequential shrinking: A best arm identification algorithm for stochastic bandits with corruptions | https://scholar.google.com/scholar?cluster=17868833179563071427&hl=en&as_sdt=0,47 | 1 | 2,021 |
Asymmetric Loss Functions for Learning with Noisy Labels | 25 | icml | 4 | 3 | 2023-06-17 04:14:33.822000 | https://github.com/hitcszx/ALFs | 28 | Asymmetric loss functions for learning with noisy labels | https://scholar.google.com/scholar?cluster=425870196210326248&hl=en&as_sdt=0,3 | 3 | 2,021 |
Examining and Combating Spurious Features under Distribution Shift | 34 | icml | 3 | 0 | 2023-06-17 04:14:34.026000 | https://github.com/violet-zct/group-conditional-DRO | 14 | Examining and combating spurious features under distribution shift | https://scholar.google.com/scholar?cluster=14520135804314510635&hl=en&as_sdt=0,14 | 1 | 2,021 |
Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm | 11 | icml | 3 | 1 | 2023-06-17 04:14:34.228000 | https://github.com/VITA-Group/SparseADV_Homotopy | 7 | Sparse and imperceptible adversarial attack via a homotopy algorithm | https://scholar.google.com/scholar?cluster=18221995160833723432&hl=en&as_sdt=0,14 | 8 | 2,021 |
Data-Free Knowledge Distillation for Heterogeneous Federated Learning | 218 | icml | 61 | 12 | 2023-06-17 04:14:34.431000 | https://github.com/zhuangdizhu/FedGen | 185 | Data-free knowledge distillation for heterogeneous federated learning | https://scholar.google.com/scholar?cluster=7623989304932004124&hl=en&as_sdt=0,6 | 2 | 2,021 |
Commutative Lie Group VAE for Disentanglement Learning | 13 | icml | 0 | 0 | 2023-06-17 04:14:34.633000 | https://github.com/zhuxinqimac/CommutativeLieGroupVAE-Pytorch | 21 | Commutative lie group vae for disentanglement learning | https://scholar.google.com/scholar?cluster=13512230477271020552&hl=en&as_sdt=0,3 | 2 | 2,021 |
Contrastive Learning Inverts the Data Generating Process | 118 | icml | 8 | 3 | 2023-06-17 04:14:34.835000 | https://github.com/brendel-group/cl-ica | 76 | Contrastive learning inverts the data generating process | https://scholar.google.com/scholar?cluster=6297973976914221052&hl=en&as_sdt=0,6 | 7 | 2,021 |
Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning | 30 | icml | 2 | 0 | 2023-06-17 04:14:35.040000 | https://github.com/lmzintgraf/hyperx | 12 | Exploration in approximate hyper-state space for meta reinforcement learning | https://scholar.google.com/scholar?cluster=598880115896472356&hl=en&as_sdt=0,22 | 2 | 2,021 |
Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning | 21 | icml | 6 | 0 | 2023-06-17 04:54:22.079000 | https://github.com/mominabbass/sharp-maml | 23 | Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning | https://scholar.google.com/scholar?cluster=14950420836477699137&hl=en&as_sdt=0,22 | 1 | 2,022 |
Active Sampling for Min-Max Fairness | 15 | icml | 1 | 1 | 2023-06-17 04:54:22.293000 | https://github.com/amazon-research/active-sampling-for-minmax-fairness | 4 | Active sampling for min-max fairness | https://scholar.google.com/scholar?cluster=7250212054919979465&hl=en&as_sdt=0,5 | 6 | 2,022 |
Meaningfully debugging model mistakes using conceptual counterfactual explanations | 20 | icml | 5 | 1 | 2023-06-17 04:54:22.498000 | https://github.com/mertyg/debug-mistakes-cce | 70 | Meaningfully debugging model mistakes using conceptual counterfactual explanations | https://scholar.google.com/scholar?cluster=2849569429175172034&hl=en&as_sdt=0,5 | 8 | 2,022 |
On the Convergence of the Shapley Value in Parametric Bayesian Learning Games | 6 | icml | 0 | 0 | 2023-06-17 04:54:22.703000 | https://github.com/XinyiYS/Parametric-Bayesian-Learning-Games | 1 | On the convergence of the Shapley value in parametric Bayesian learning games | https://scholar.google.com/scholar?cluster=7727281335591886084&hl=en&as_sdt=0,5 | 1 | 2,022 |
Individual Preference Stability for Clustering | 2 | icml | 1 | 0 | 2023-06-17 04:54:22.909000 | https://github.com/amazon-research/ip-stability-for-clustering | 0 | Individual Preference Stability for Clustering | https://scholar.google.com/scholar?cluster=5704874975941768336&hl=en&as_sdt=0,30 | 6 | 2,022 |
Minimum Cost Intervention Design for Causal Effect Identification | 2 | icml | 0 | 0 | 2023-06-17 04:54:23.115000 | https://github.com/sinaakbarii/min_cost_intervention | 1 | Minimum cost intervention design for causal effect identification | https://scholar.google.com/scholar?cluster=8464705336757566822&hl=en&as_sdt=0,44 | 1 | 2,022 |
How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models | 60 | icml | 4 | 0 | 2023-06-17 04:54:23.322000 | https://github.com/vanderschaarlab/evaluating-generative-models | 22 | How faithful is your synthetic data? sample-level metrics for evaluating and auditing generative models | https://scholar.google.com/scholar?cluster=15840878488291944826&hl=en&as_sdt=0,33 | 5 | 2,022 |
Deploying Convolutional Networks on Untrusted Platforms Using 2D Holographic Reduced Representations | 2 | icml | 0 | 1 | 2023-06-17 04:54:23.528000 | https://github.com/neuromorphiccomputationresearchprogram/connectionist-symbolic-pseudo-secrets | 3 | Deploying Convolutional Networks on Untrusted Platforms Using 2D Holographic Reduced Representations | https://scholar.google.com/scholar?cluster=7363780369551842627&hl=en&as_sdt=0,44 | 1 | 2,022 |
Optimistic Linear Support and Successor Features as a Basis for Optimal Policy Transfer | 8 | icml | 1 | 0 | 2023-06-17 04:54:23.734000 | https://github.com/lucasalegre/sfols | 6 | Optimistic linear support and successor features as a basis for optimal policy transfer | https://scholar.google.com/scholar?cluster=130731457432112857&hl=en&as_sdt=0,5 | 2 | 2,022 |
Structured Stochastic Gradient MCMC | 4 | icml | 1 | 0 | 2023-06-17 04:54:23.940000 | https://github.com/ajboyd2/pytorch_lvi | 1 | Structured stochastic gradient MCMC | https://scholar.google.com/scholar?cluster=8097612641869986343&hl=en&as_sdt=0,5 | 2 | 2,022 |
XAI for Transformers: Better Explanations through Conservative Propagation | 16 | icml | 12 | 5 | 2023-06-17 04:54:24.145000 | https://github.com/ameenali/xai_transformers | 33 | XAI for transformers: Better explanations through conservative propagation | https://scholar.google.com/scholar?cluster=8318067021687688094&hl=en&as_sdt=0,5 | 2 | 2,022 |
Minimax Classification under Concept Drift with Multidimensional Adaptation and Performance Guarantees | 1 | icml | 0 | 0 | 2023-06-17 04:54:24.349000 | https://github.com/machinelearningbcam/amrc-for-concept-drift-icml-2022 | 6 | Minimax classification under concept drift with multidimensional adaptation and performance guarantees | https://scholar.google.com/scholar?cluster=6492087255845076443&hl=en&as_sdt=0,5 | 1 | 2,022 |
Scalable First-Order Bayesian Optimization via Structured Automatic Differentiation | 2 | icml | 3 | 8 | 2023-06-17 04:54:24.555000 | https://github.com/sebastianament/covariancefunctions.jl | 17 | Scalable First-Order Bayesian Optimization via Structured Automatic Differentiation | https://scholar.google.com/scholar?cluster=17864781963029193260&hl=en&as_sdt=0,11 | 2 | 2,022 |
Towards Understanding Sharpness-Aware Minimization | 42 | icml | 3 | 0 | 2023-06-17 04:54:24.761000 | https://github.com/tml-epfl/understanding-sam | 24 | Towards understanding sharpness-aware minimization | https://scholar.google.com/scholar?cluster=18222527206389875127&hl=en&as_sdt=0,3 | 2 | 2,022 |
Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging | 27 | icml | 5 | 0 | 2023-06-17 04:54:24.966000 | https://github.com/aangelopoulos/im2im-uq | 33 | Image-to-image regression with distribution-free uncertainty quantification and applications in imaging | https://scholar.google.com/scholar?cluster=3321497325155679298&hl=en&as_sdt=0,5 | 4 | 2,022 |
Online Balanced Experimental Design | 0 | icml | 1 | 1 | 2023-06-17 04:54:25.171000 | https://github.com/ddimmery/balancer-package | 0 | Online Balanced Experimental Design | https://scholar.google.com/scholar?cluster=9578642124774969527&hl=en&as_sdt=0,33 | 1 | 2,022 |
Thresholded Lasso Bandit | 11 | icml | 0 | 0 | 2023-06-17 04:54:25.386000 | https://github.com/cyberagentailab/thresholded-lasso-bandit | 5 | Thresholded lasso bandit | https://scholar.google.com/scholar?cluster=2549693999294336180&hl=en&as_sdt=0,44 | 1 | 2,022 |
From Noisy Prediction to True Label: Noisy Prediction Calibration via Generative Model | 1 | icml | 1 | 0 | 2023-06-17 04:54:25.592000 | https://github.com/BaeHeeSun/NPC | 16 | From Noisy Prediction to True Label: Noisy Prediction Calibration via Generative Model | https://scholar.google.com/scholar?cluster=8277956937717286777&hl=en&as_sdt=0,5 | 3 | 2,022 |
Gaussian Mixture Variational Autoencoder with Contrastive Learning for Multi-Label Classification | 3 | icml | 4 | 0 | 2023-06-17 04:54:25.798000 | https://github.com/junwenbai/c-gmvae | 23 | Gaussian mixture variational autoencoder with contrastive learning for multi-label classification | https://scholar.google.com/scholar?cluster=9275720515589327599&hl=en&as_sdt=0,3 | 2 | 2,022 |
Certified Neural Network Watermarks with Randomized Smoothing | 6 | icml | 2 | 0 | 2023-06-17 04:54:26.004000 | https://github.com/arpitbansal297/certified_watermarks | 9 | Certified Neural Network Watermarks with Randomized Smoothing | https://scholar.google.com/scholar?cluster=2567091061635643130&hl=en&as_sdt=0,22 | 2 | 2,022 |
Learning Stable Classifiers by Transferring Unstable Features | 5 | icml | 0 | 0 | 2023-06-17 04:54:26.215000 | https://github.com/YujiaBao/Tofu | 7 | Learning stable classifiers by transferring unstable features | https://scholar.google.com/scholar?cluster=13001665395610981653&hl=en&as_sdt=0,5 | 1 | 2,022 |
Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models | 30 | icml | 7 | 1 | 2023-06-17 04:54:26.425000 | https://github.com/baofff/Extended-Analytic-DPM | 87 | Estimating the optimal covariance with imperfect mean in diffusion probabilistic models | https://scholar.google.com/scholar?cluster=2323665209976347341&hl=en&as_sdt=0,5 | 2 | 2,022 |
On the Surrogate Gap between Contrastive and Supervised Losses | 8 | icml | 0 | 0 | 2023-06-17 04:54:26.631000 | https://github.com/nzw0301/gap-contrastive-and-supervised-losses | 7 | On the surrogate gap between contrastive and supervised losses | https://scholar.google.com/scholar?cluster=17468865477895467662&hl=en&as_sdt=0,33 | 3 | 2,022 |
Imitation Learning by Estimating Expertise of Demonstrators | 11 | icml | 1 | 0 | 2023-06-17 04:54:26.838000 | https://github.com/stanford-iliad/ileed | 4 | Imitation learning by estimating expertise of demonstrators | https://scholar.google.com/scholar?cluster=13040919863635608534&hl=en&as_sdt=0,5 | 4 | 2,022 |
Volatility Based Kernels and Moving Average Means for Accurate Forecasting with Gaussian Processes | 1 | icml | 8 | 2 | 2023-06-17 04:54:27.046000 | https://github.com/g-benton/volt | 39 | Volatility Based Kernels and Moving Average Means for Accurate Forecasting with Gaussian Processes | https://scholar.google.com/scholar?cluster=445432332886185125&hl=en&as_sdt=0,5 | 3 | 2,022 |
Gradient Descent on Neurons and its Link to Approximate Second-order Optimization | 3 | icml | 2 | 0 | 2023-06-17 04:54:27.253000 | https://github.com/freedbee/neuron_descent_and_kfac | 1 | Gradient descent on neurons and its link to approximate second-order optimization | https://scholar.google.com/scholar?cluster=4847605706007812580&hl=en&as_sdt=0,14 | 1 | 2,022 |
Skin Deep Unlearning: Artefact and Instrument Debiasing in the Context of Melanoma Classification | 9 | icml | 2 | 0 | 2023-06-17 04:54:27.465000 | https://github.com/pbevan1/Skin-Deep-Unlearning | 5 | Skin deep unlearning: artefact and instrument debiasing in the context of melanoma classification | https://scholar.google.com/scholar?cluster=13843943708217895697&hl=en&as_sdt=0,5 | 1 | 2,022 |
Approximate Bayesian Computation with Domain Expert in the Loop | 4 | icml | 0 | 0 | 2023-06-17 04:54:27.671000 | https://github.com/lfilstro/hitl-abc | 1 | Approximate Bayesian Computation with Domain Expert in the Loop | https://scholar.google.com/scholar?cluster=17515613516089862675&hl=en&as_sdt=0,33 | 1 | 2,022 |
Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning | 6 | icml | 1 | 0 | 2023-06-17 04:54:27.878000 | https://github.com/albietz/ppsgd | 5 | Personalization improves privacy-accuracy tradeoffs in federated learning | https://scholar.google.com/scholar?cluster=2803924388956334708&hl=en&as_sdt=0,5 | 1 | 2,022 |
Non-Vacuous Generalisation Bounds for Shallow Neural Networks | 11 | icml | 0 | 0 | 2023-06-17 04:54:28.087000 | https://github.com/biggs/shallow-nets | 0 | Non-vacuous generalisation bounds for shallow neural networks | https://scholar.google.com/scholar?cluster=11560382540049939968&hl=en&as_sdt=0,33 | 2 | 2,022 |
Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities | 12 | icml | 1 | 0 | 2023-06-17 04:54:28.293000 | https://github.com/j-cb/breaking_down_ood_detection | 10 | Breaking down out-of-distribution detection: Many methods based on ood training data estimate a combination of the same core quantities | https://scholar.google.com/scholar?cluster=3629472061640674656&hl=en&as_sdt=0,11 | 1 | 2,022 |
Optimizing Sequential Experimental Design with Deep Reinforcement Learning | 13 | icml | 5 | 0 | 2023-06-17 04:54:28.499000 | https://github.com/csiro-mlai/RL-BOED | 5 | Optimizing Sequential Experimental Design with Deep Reinforcement Learning | https://scholar.google.com/scholar?cluster=17698300138792965088&hl=en&as_sdt=0,21 | 3 | 2,022 |
How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective | 4 | icml | 1 | 0 | 2023-06-17 04:54:28.704000 | https://github.com/fietelab/wide-network-alignment | 2 | How to train your wide neural network without backprop: An input-weight alignment perspective | https://scholar.google.com/scholar?cluster=9130275033770297216&hl=en&as_sdt=0,33 | 2 | 2,022 |
Lie Point Symmetry Data Augmentation for Neural PDE Solvers | 17 | icml | 5 | 1 | 2023-06-17 04:54:28.916000 | https://github.com/brandstetter-johannes/lpsda | 28 | Lie point symmetry data augmentation for neural pde solvers | https://scholar.google.com/scholar?cluster=6135726084743263275&hl=en&as_sdt=0,5 | 2 | 2,022 |
An iterative clustering algorithm for the Contextual Stochastic Block Model with optimality guarantees | 4 | icml | 0 | 0 | 2023-06-17 04:54:29.125000 | https://github.com/glmbraun/csbm | 2 | An iterative clustering algorithm for the Contextual Stochastic Block Model with optimality guarantees | https://scholar.googleusercontent.com/scholar?q=cache:6omcJTzt9pMJ:scholar.google.com/+An+iterative+clustering+algorithm+for+the+Contextual+Stochastic+Block+Model+with+optimality+guarantees&hl=en&as_sdt=0,5 | 1 | 2,022 |
Tractable Dendritic RNNs for Reconstructing Nonlinear Dynamical Systems | 10 | icml | 6 | 0 | 2023-06-17 04:54:29.332000 | https://github.com/durstewitzlab/dendplrnn | 6 | Tractable Dendritic RNNs for Reconstructing Nonlinear Dynamical Systems | https://scholar.google.com/scholar?cluster=8212489607836330678&hl=en&as_sdt=0,10 | 1 | 2,022 |
Learning to Predict Graphs with Fused Gromov-Wasserstein Barycenters | 7 | icml | 2 | 0 | 2023-06-17 04:54:29.543000 | https://github.com/lmotte/graph-prediction-with-fused-gromov-wasserstein | 10 | Learning to predict graphs with fused Gromov-Wasserstein barycenters | https://scholar.google.com/scholar?cluster=449987462895486157&hl=en&as_sdt=0,10 | 1 | 2,022 |
Measuring dissimilarity with diffeomorphism invariance | 1 | icml | 1 | 0 | 2023-06-17 04:54:29.752000 | https://github.com/theophilec/diffy | 5 | Measuring dissimilarity with diffeomorphism invariance | https://scholar.google.com/scholar?cluster=9356741545436506583&hl=en&as_sdt=0,11 | 1 | 2,022 |
Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for Safety-Critical Applications | 7 | icml | 3 | 0 | 2023-06-17 04:54:29.960000 | https://github.com/aCapone1/gauss_proc_unknown_hyp | 0 | Gaussian process uniform error bounds with unknown hyperparameters for safety-critical applications | https://scholar.google.com/scholar?cluster=10619138412695371190&hl=en&as_sdt=0,5 | 1 | 2,022 |
Burst-Dependent Plasticity and Dendritic Amplification Support Target-Based Learning and Hierarchical Imitation Learning | 3 | icml | 1 | 0 | 2023-06-17 04:54:30.167000 | https://github.com/cristianocapone/lttb | 1 | Burst-dependent plasticity and dendritic amplification support target-based learning and hierarchical imitation learning | https://scholar.google.com/scholar?cluster=8004952254033817821&hl=en&as_sdt=0,33 | 1 | 2,022 |
RECAPP: Crafting a More Efficient Catalyst for Convex Optimization | 9 | icml | 0 | 0 | 2023-06-17 04:54:30.374000 | https://github.com/yaircarmon/recapp | 0 | Recapp: Crafting a more efficient catalyst for convex optimization | https://scholar.google.com/scholar?cluster=7906072571653012949&hl=en&as_sdt=0,5 | 4 | 2,022 |
YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for Everyone | 59 | icml | 1,642 | 38 | 2023-06-17 04:54:30.586000 | https://github.com/coqui-ai/TTS | 12,544 | Yourtts: Towards zero-shot multi-speaker tts and zero-shot voice conversion for everyone | https://scholar.google.com/scholar?cluster=8575580251111777245&hl=en&as_sdt=0,5 | 169 | 2,022 |
Stabilizing Off-Policy Deep Reinforcement Learning from Pixels | 7 | icml | 0 | 1 | 2023-06-17 04:54:30.794000 | https://github.com/aladoro/stabilizing-off-policy-rl | 8 | Stabilizing off-policy deep reinforcement learning from pixels | https://scholar.google.com/scholar?cluster=14839229722928778219&hl=en&as_sdt=0,5 | 2 | 2,022 |
Robust Imitation Learning against Variations in Environment Dynamics | 3 | icml | 2 | 0 | 2023-06-17 04:54:31.011000 | https://github.com/jongseongchae/rime | 4 | Robust imitation learning against variations in environment dynamics | https://scholar.google.com/scholar?cluster=16698148577673896615&hl=en&as_sdt=0,19 | 1 | 2,022 |
UNIREX: A Unified Learning Framework for Language Model Rationale Extraction | 21 | icml | 3 | 2 | 2023-06-17 04:54:31.221000 | https://github.com/facebookresearch/unirex | 21 | Unirex: A unified learning framework for language model rationale extraction | https://scholar.google.com/scholar?cluster=7352055260763393065&hl=en&as_sdt=0,21 | 8 | 2,022 |
Revisiting Label Smoothing and Knowledge Distillation Compatibility: What was Missing? | 15 | icml | 5 | 1 | 2023-06-17 04:54:31.456000 | https://github.com/sutd-visual-computing-group/LS-KD-compatibility | 9 | Revisiting Label Smoothing and Knowledge Distillation Compatibility: What was Missing? | https://scholar.google.com/scholar?cluster=7014741791819212008&hl=en&as_sdt=0,39 | 1 | 2,022 |
Learning Bellman Complete Representations for Offline Policy Evaluation | 2 | icml | 0 | 0 | 2023-06-17 04:54:31.662000 | https://github.com/causalml/bcrl | 7 | Learning Bellman Complete Representations for Offline Policy Evaluation | https://scholar.google.com/scholar?cluster=6803502920630786381&hl=en&as_sdt=0,33 | 1 | 2,022 |
Sample Efficient Learning of Predictors that Complement Humans | 5 | icml | 2 | 0 | 2023-06-17 04:54:31.869000 | https://github.com/clinicalml/active_learn_to_defer | 4 | Sample efficient learning of predictors that complement humans | https://scholar.google.com/scholar?cluster=14604138868272717546&hl=en&as_sdt=0,5 | 9 | 2,022 |
Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets | 18 | icml | 4 | 0 | 2023-06-17 04:54:32.076000 | https://github.com/vita-group/structure-lth | 20 | Coarsening the granularity: Towards structurally sparse lottery tickets | https://scholar.google.com/scholar?cluster=11130219439194607083&hl=en&as_sdt=0,5 | 7 | 2,022 |
Learning Domain Adaptive Object Detection with Probabilistic Teacher | 14 | icml | 7 | 4 | 2023-06-17 04:54:32.283000 | https://github.com/hikvision-research/probabilisticteacher | 52 | Learning domain adaptive object detection with probabilistic teacher | https://scholar.google.com/scholar?cluster=17755903452096200771&hl=en&as_sdt=0,34 | 5 | 2,022 |
Perfectly Balanced: Improving Transfer and Robustness of Supervised Contrastive Learning | 18 | icml | 2 | 2 | 2023-06-17 04:54:32.489000 | https://github.com/HazyResearch/thanos-code | 16 | Perfectly balanced: Improving transfer and robustness of supervised contrastive learning | https://scholar.google.com/scholar?cluster=4069781946979626386&hl=en&as_sdt=0,5 | 17 | 2,022 |
On Collective Robustness of Bagging Against Data Poisoning | 5 | icml | 1 | 0 | 2023-06-17 04:54:32.695000 | https://github.com/emiyalzn/icml22-crb | 2 | On Collective Robustness of Bagging Against Data Poisoning | https://scholar.google.com/scholar?cluster=7671982562316508504&hl=en&as_sdt=0,5 | 1 | 2,022 |
Structure-Aware Transformer for Graph Representation Learning | 51 | icml | 25 | 0 | 2023-06-17 04:54:32.900000 | https://github.com/borgwardtlab/sat | 149 | Structure-aware transformer for graph representation learning | https://scholar.google.com/scholar?cluster=4875324713433840142&hl=en&as_sdt=0,5 | 6 | 2,022 |
Optimization-Induced Graph Implicit Nonlinear Diffusion | 10 | icml | 0 | 0 | 2023-06-17 04:54:33.110000 | https://github.com/7qchen/gind | 15 | Optimization-induced graph implicit nonlinear diffusion | https://scholar.google.com/scholar?cluster=1600506523476072350&hl=en&as_sdt=0,14 | 2 | 2,022 |
Robust Meta-learning with Sampling Noise and Label Noise via Eigen-Reptile | 2 | icml | 0 | 0 | 2023-06-17 04:54:33.326000 | https://github.com/anfeather/eigen-reptile | 8 | Robust Meta-learning with Sampling Noise and Label Noise via Eigen-Reptile | https://scholar.google.com/scholar?cluster=8530355739289210050&hl=en&as_sdt=0,5 | 1 | 2,022 |
Data-Efficient Double-Win Lottery Tickets from Robust Pre-training | 2 | icml | 0 | 0 | 2023-06-17 04:54:33.531000 | https://github.com/vita-group/double-win-lth | 9 | Data-Efficient Double-Win Lottery Tickets from Robust Pre-training | https://scholar.google.com/scholar?cluster=2999471991915534947&hl=en&as_sdt=0,15 | 8 | 2,022 |
Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness | 3 | icml | 3 | 0 | 2023-06-17 04:54:33.737000 | https://github.com/vita-group/linearity-grafting | 14 | Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness | https://scholar.google.com/scholar?cluster=2944620875879702886&hl=en&as_sdt=0,5 | 9 | 2,022 |
Task-aware Privacy Preservation for Multi-dimensional Data | 4 | icml | 1 | 0 | 2023-06-17 04:54:33.943000 | https://github.com/chengjiangnan/task_aware_privacy | 2 | Task-aware privacy preservation for multi-dimensional data | https://scholar.google.com/scholar?cluster=12634725104863101184&hl=en&as_sdt=0,14 | 1 | 2,022 |
Adversarially Trained Actor Critic for Offline Reinforcement Learning | 44 | icml | 6 | 0 | 2023-06-17 04:54:34.149000 | https://github.com/microsoft/atac | 48 | Adversarially trained actor critic for offline reinforcement learning | https://scholar.google.com/scholar?cluster=8385322441763797566&hl=en&as_sdt=0,22 | 8 | 2,022 |
RieszNet and ForestRiesz: Automatic Debiased Machine Learning with Neural Nets and Random Forests | 13 | icml | 1 | 0 | 2023-06-17 04:54:34.355000 | https://github.com/victor5as/rieszlearning | 7 | Riesznet and forestriesz: Automatic debiased machine learning with neural nets and random forests | https://scholar.google.com/scholar?cluster=9961128829212907766&hl=en&as_sdt=0,48 | 1 | 2,022 |
Selective Network Linearization for Efficient Private Inference | 9 | icml | 1 | 0 | 2023-06-17 04:54:34.560000 | https://github.com/nyu-dice-lab/selective_network_linearization | 3 | Selective network linearization for efficient private inference | https://scholar.google.com/scholar?cluster=14016452576504224756&hl=en&as_sdt=0,11 | 5 | 2,022 |
From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers | 8 | icml | 4 | 0 | 2023-06-17 04:54:34.766000 | https://github.com/hl-hanlin/gkat | 7 | From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers | https://scholar.google.com/scholar?cluster=1399080390715291897&hl=en&as_sdt=0,33 | 1 | 2,022 |
Context-Aware Drift Detection | 7 | icml | 180 | 127 | 2023-06-17 04:54:34.971000 | https://github.com/SeldonIO/alibi-detect | 1,843 | Context-aware drift detection | https://scholar.google.com/scholar?cluster=9993193813631773645&hl=en&as_sdt=0,5 | 35 | 2,022 |
Diffusion bridges vector quantized variational autoencoders | 5 | icml | 1 | 0 | 2023-06-17 04:54:35.176000 | https://github.com/maxjcohen/diffusion-bridges | 14 | Diffusion bridges vector quantized Variational AutoEncoders | https://scholar.google.com/scholar?cluster=15768272528622480760&hl=en&as_sdt=0,21 | 4 | 2,022 |
Mitigating Gender Bias in Face Recognition using the von Mises-Fisher Mixture Model | 5 | icml | 0 | 0 | 2023-06-17 04:54:35.383000 | https://github.com/JRConti/EthicalModule_vMF | 1 | Mitigating gender bias in face recognition using the von mises-fisher mixture model | https://scholar.google.com/scholar?cluster=11800206203871099663&hl=en&as_sdt=0,10 | 1 | 2,022 |
Evaluating the Adversarial Robustness of Adaptive Test-time Defenses | 28 | icml | 1 | 0 | 2023-06-17 04:54:35.590000 | https://github.com/fra31/evaluating-adaptive-test-time-defenses | 14 | Evaluating the adversarial robustness of adaptive test-time defenses | https://scholar.google.com/scholar?cluster=9007385894917173233&hl=en&as_sdt=0,23 | 2 | 2,022 |
Adversarial Robustness against Multiple and Single $l_p$-Threat Models via Quick Fine-Tuning of Robust Classifiers | 6 | icml | 3 | 1 | 2023-06-17 04:54:35.796000 | https://github.com/fra31/robust-finetuning | 14 | Adversarial robustness against multiple and single l_p-threat models via quick fine-tuning of robust classifiers | https://scholar.google.com/scholar?cluster=14798100310510930510&hl=en&as_sdt=0,5 | 2 | 2,022 |
Continuous Control with Action Quantization from Demonstrations | 4 | icml | 7,322 | 1,026 | 2023-06-17 04:54:36.002000 | https://github.com/google-research/google-research | 29,792 | Continuous Control with Action Quantization from Demonstrations | https://scholar.google.com/scholar?cluster=18354958382752460493&hl=en&as_sdt=0,5 | 727 | 2,022 |
Dialog Inpainting: Turning Documents into Dialogs | 17 | icml | 2 | 2 | 2023-06-17 04:54:36.208000 | https://github.com/google-research/dialog-inpainting | 85 | Dialog inpainting: Turning documents into dialogs | https://scholar.google.com/scholar?cluster=13888132119591432248&hl=en&as_sdt=0,44 | 8 | 2,022 |
DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training | 22 | icml | 8 | 1 | 2023-06-17 04:54:36.421000 | https://github.com/rong-dai/dispfl | 34 | Dispfl: Towards communication-efficient personalized federated learning via decentralized sparse training | https://scholar.google.com/scholar?cluster=13590903827423118545&hl=en&as_sdt=0,5 | 2 | 2,022 |
Unsupervised Image Representation Learning with Deep Latent Particles | 1 | icml | 1 | 0 | 2023-06-17 04:54:36.626000 | https://github.com/taldatech/deep-latent-particles-pytorch | 21 | Unsupervised Image Representation Learning with Deep Latent Particles | https://scholar.google.com/scholar?cluster=8443981998714808027&hl=en&as_sdt=0,24 | 3 | 2,022 |
Monarch: Expressive Structured Matrices for Efficient and Accurate Training | 21 | icml | 17 | 11 | 2023-06-17 04:54:36.831000 | https://github.com/hazyresearch/monarch | 127 | Monarch: Expressive structured matrices for efficient and accurate training | https://scholar.google.com/scholar?cluster=908299519413693348&hl=en&as_sdt=0,48 | 22 | 2,022 |
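The table above can be queried programmatically once exported. A minimal sketch, assuming the rows have been saved as CSV with the same column names as the header (only a subset of columns and three real rows from the table are used here for illustration):

```python
import csv
import io

# Hypothetical CSV export of the dataset above; values are taken
# verbatim from three rows of the table (commas in numbers removed).
csv_data = """title,conference,stars,year
Barlow Twins: Self-Supervised Learning via Redundancy Reduction,icml,886,2021
DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning,icml,3324,2021
YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for Everyone,icml,12544,2022
"""

rows = list(csv.DictReader(io.StringIO(csv_data)))

# Sort by GitHub star count, descending, to surface the most popular repos.
rows.sort(key=lambda r: int(r["stars"]), reverse=True)
for r in rows:
    print(r["year"], r["stars"], r["title"])
```

Note that titles containing commas would need quoting in a real export; the three sample titles happen to be comma-free.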