Column schema (dtype and observed range, as reported by the dataset viewer):

- title: string, length 8–155
- citations_google_scholar: int64, 0–28.9k
- conference: string, 5 classes
- forks: int64, 0–46.3k
- issues: int64, 0–12.2k
- lastModified: string, length 19–26
- repo_url: string, length 26–130
- stars: int64, 0–75.9k
- title_google_scholar: string, length 8–155
- url_google_scholar: string, length 75–206
- watchers: int64, 0–2.77k
- year: int64, 2.02k–2.02k (all rows are 2022)

title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
---|---|---|---|---|---|---|---|---|---|---|---|
Test-Time Training Can Close the Natural Distribution Shift Performance Gap in Deep Learning Based Compressed Sensing | 8 | icml | 2 | 0 | 2023-06-17 04:54:37.037000 | https://github.com/mli-lab/ttt_for_deep_learning_cs | 9 | Test-time training can close the natural distribution shift performance gap in deep learning based compressed sensing | https://scholar.google.com/scholar?cluster=17586372982715627644&hl=en&as_sdt=0,33 | 1 | 2,022 |
Knowledge Base Question Answering by Case-based Reasoning over Subgraphs | 19 | icml | 5 | 4 | 2023-06-17 04:54:37.243000 | https://github.com/rajarshd/cbr-subg | 28 | Knowledge base question answering by case-based reasoning over subgraphs | https://scholar.google.com/scholar?cluster=9521902592444277767&hl=en&as_sdt=0,33 | 4 | 2,022 |
Robust Multi-Objective Bayesian Optimization Under Input Noise | 15 | icml | 1 | 0 | 2023-06-17 04:54:37.448000 | https://github.com/facebookresearch/robust_mobo | 36 | Robust multi-objective bayesian optimization under input noise | https://scholar.google.com/scholar?cluster=14538783621300673718&hl=en&as_sdt=0,5 | 13 | 2,022 |
Attentional Meta-learners for Few-shot Polythetic Classification | 1 | icml | 1 | 0 | 2023-06-17 04:54:37.654000 | https://github.com/rvinas/polythetic_metalearning | 7 | Attentional Meta-learners for Few-shot Polythetic Classification | https://scholar.google.com/scholar?cluster=5360824455580624680&hl=en&as_sdt=0,47 | 3 | 2,022 |
Adversarial Vulnerability of Randomized Ensembles | 1 | icml | 1 | 0 | 2023-06-17 04:54:37.859000 | https://github.com/hsndbk4/arc | 9 | Adversarial Vulnerability of Randomized Ensembles | https://scholar.google.com/scholar?cluster=2408757977511355426&hl=en&as_sdt=0,5 | 1 | 2,022 |
Born-Infeld (BI) for AI: Energy-Conserving Descent (ECD) for Optimization | 4 | icml | 4 | 0 | 2023-06-17 04:54:38.066000 | https://github.com/gbdl/bbi | 5 | Born-Infeld (BI) for AI: energy-conserving descent (ECD) for optimization | https://scholar.google.com/scholar?cluster=11927103073322066327&hl=en&as_sdt=0,44 | 2 | 2,022 |
Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass | 7 | icml | 2 | 0 | 2023-06-17 04:54:38.271000 | https://github.com/giorgiad/pepita | 16 | Error-driven input modulation: solving the credit assignment problem without a backward pass | https://scholar.google.com/scholar?cluster=12440766337737848620&hl=en&as_sdt=0,5 | 1 | 2,022 |
DreamerPro: Reconstruction-Free Model-Based Reinforcement Learning with Prototypical Representations | 20 | icml | 4 | 0 | 2023-06-17 04:54:38.477000 | https://github.com/fdeng18/dreamer-pro | 26 | Dreamerpro: Reconstruction-free model-based reinforcement learning with prototypical representations | https://scholar.google.com/scholar?cluster=11064573461444670693&hl=en&as_sdt=0,34 | 1 | 2,022 |
NeuralEF: Deconstructing Kernels by Deep Neural Networks | 9 | icml | 1 | 0 | 2023-06-17 04:54:38.683000 | https://github.com/thudzj/neuraleigenfunction | 10 | Neuralef: Deconstructing kernels by deep neural networks | https://scholar.google.com/scholar?cluster=14961387103388663924&hl=en&as_sdt=0,47 | 2 | 2,022 |
Generalization and Robustness Implications in Object-Centric Learning | 20 | icml | 2 | 0 | 2023-06-17 04:54:38.889000 | https://github.com/addtt/object-centric-library | 61 | Generalization and robustness implications in object-centric learning | https://scholar.google.com/scholar?cluster=9362373326387424526&hl=en&as_sdt=0,33 | 3 | 2,022 |
Fair Generalized Linear Models with a Convex Penalty | 1 | icml | 1 | 1 | 2023-06-17 04:54:39.095000 | https://github.com/hyungrok-do/fair-glm-cvx | 0 | Fair Generalized Linear Models with a Convex Penalty | https://scholar.google.com/scholar?cluster=11693304205339987181&hl=en&as_sdt=0,33 | 3 | 2,022 |
On the Adversarial Robustness of Causal Algorithmic Recourse | 28 | icml | 0 | 0 | 2023-06-17 04:54:39.300000 | https://github.com/ricardodominguez/adversariallyrobustrecourse | 5 | On the adversarial robustness of causal algorithmic recourse | https://scholar.google.com/scholar?cluster=16011924534958641945&hl=en&as_sdt=0,14 | 1 | 2,022 |
Finding the Task-Optimal Low-Bit Sub-Distribution in Deep Neural Networks | 4 | icml | 1 | 0 | 2023-06-17 04:54:39.505000 | https://github.com/RunpeiDong/DGMS | 5 | Finding the task-optimal low-bit sub-distribution in deep neural networks | https://scholar.google.com/scholar?cluster=7264575101488982108&hl=en&as_sdt=0,47 | 2 | 2,022 |
PACE: A Parallelizable Computation Encoder for Directed Acyclic Graphs | 2 | icml | 0 | 0 | 2023-06-17 04:54:39.711000 | https://github.com/zehao-dong/pace | 7 | PACE: A Parallelizable Computation Encoder for Directed Acyclic Graphs | https://scholar.google.com/scholar?cluster=11354614986119464774&hl=en&as_sdt=0,5 | 1 | 2,022 |
Adapting to Mixing Time in Stochastic Optimization with Markovian Data | 8 | icml | 8 | 0 | 2023-06-17 04:54:39.916000 | https://github.com/Rondorf/BOReL | 20 | Adapting to mixing time in stochastic optimization with markovian data | https://scholar.google.com/scholar?cluster=4133641935390571413&hl=en&as_sdt=0,45 | 3 | 2,022 |
TACTiS: Transformer-Attentional Copulas for Time Series | 11 | icml | 11 | 3 | 2023-06-17 04:54:40.121000 | https://github.com/ServiceNow/tactis | 72 | Tactis: Transformer-attentional copulas for time series | https://scholar.google.com/scholar?cluster=5604382526172400005&hl=en&as_sdt=0,33 | 8 | 2,022 |
Learning Iterative Reasoning through Energy Minimization | 4 | icml | 6 | 4 | 2023-06-17 04:54:40.327000 | https://github.com/yilundu/irem_code_release | 38 | Learning iterative reasoning through energy minimization | https://scholar.google.com/scholar?cluster=1554477033097529382&hl=en&as_sdt=0,7 | 3 | 2,022 |
SE(3) Equivariant Graph Neural Networks with Complete Local Frames | 10 | icml | 6 | 1 | 2023-06-17 04:54:40.534000 | https://github.com/mouthful/ClofNet | 11 | SE (3) Equivariant Graph Neural Networks with Complete Local Frames | https://scholar.google.com/scholar?cluster=14602440346377958112&hl=en&as_sdt=0,33 | 2 | 2,022 |
A Context-Integrated Transformer-Based Neural Network for Auction Design | 10 | icml | 1 | 0 | 2023-06-17 04:54:40.739000 | https://github.com/zjduan/CITransNet | 10 | A context-integrated transformer-based neural network for auction design | https://scholar.google.com/scholar?cluster=9850607820011561614&hl=en&as_sdt=0,5 | 1 | 2,022 |
From data to functa: Your data point is a function and you can treat it like one | 33 | icml | 4 | 3 | 2023-06-17 04:54:40.944000 | https://github.com/deepmind/functa | 101 | From data to functa: Your data point is a function and you can treat it like one | https://scholar.google.com/scholar?cluster=4550089326904681331&hl=en&as_sdt=0,39 | 8 | 2,022 |
On the Difficulty of Defending Self-Supervised Learning against Model Extraction | 7 | icml | 0 | 0 | 2023-06-17 04:54:41.150000 | https://github.com/cleverhans-lab/ssl-attacks-defenses | 1 | On the difficulty of defending self-supervised learning against model extraction | https://scholar.google.com/scholar?cluster=16145224211258754535&hl=en&as_sdt=0,33 | 1 | 2,022 |
LIMO: Latent Inceptionism for Targeted Molecule Generation | 8 | icml | 14 | 9 | 2023-06-17 04:54:41.356000 | https://github.com/rose-stl-lab/limo | 44 | LIMO: Latent Inceptionism for Targeted Molecule Generation | https://scholar.google.com/scholar?cluster=12167942813454300503&hl=en&as_sdt=0,10 | 3 | 2,022 |
FedNew: A Communication-Efficient and Privacy-Preserving Newton-Type Method for Federated Learning | 5 | icml | 1 | 1 | 2023-06-17 04:54:41.561000 | https://github.com/aelgabli/fednew | 9 | FedNew: A communication-efficient and privacy-preserving Newton-type method for federated learning | https://scholar.google.com/scholar?cluster=13605239667986344129&hl=en&as_sdt=0,5 | 1 | 2,022 |
For Learning in Symmetric Teams, Local Optima are Global Nash Equilibria | 1 | icml | 0 | 0 | 2023-06-17 04:54:41.767000 | https://github.com/scottemmons/coordination | 0 | For learning in symmetric teams, local optima are global nash equilibria | https://scholar.google.com/scholar?cluster=16109782432543935692&hl=en&as_sdt=0,33 | 2 | 2,022 |
Towards Scaling Difference Target Propagation by Learning Backprop Targets | 11 | icml | 0 | 0 | 2023-06-17 04:54:41.973000 | https://github.com/bptargetdtp/scalabledtp | 1 | Towards scaling difference target propagation by learning backprop targets | https://scholar.google.com/scholar?cluster=16976057052458549832&hl=en&as_sdt=0,5 | 2 | 2,022 |
Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information | 18 | icml | 8 | 0 | 2023-06-17 04:54:42.180000 | https://github.com/kawine/dataset_difficulty | 58 | Understanding Dataset Difficulty with $\mathcalV $-Usable Information | https://scholar.google.com/scholar?cluster=446878521601081307&hl=en&as_sdt=0,5 | 1 | 2,022 |
Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning | 36 | icml | 9 | 1 | 2023-06-17 04:54:42.386000 | https://github.com/google-research/head2toe | 71 | Head2toe: Utilizing intermediate representations for better transfer learning | https://scholar.google.com/scholar?cluster=12027550380073751806&hl=en&as_sdt=0,33 | 6 | 2,022 |
Variational Sparse Coding with Learned Thresholding | 0 | icml | 1 | 0 | 2023-06-17 04:54:42.593000 | https://github.com/kfallah/variational-sparse-coding | 7 | Variational Sparse Coding with Learned Thresholding | https://scholar.google.com/scholar?cluster=10401057138019982209&hl=en&as_sdt=0,43 | 2 | 2,022 |
Training Discrete Deep Generative Models via Gapped Straight-Through Estimator | 4 | icml | 0 | 0 | 2023-06-17 04:54:42.798000 | https://github.com/chijames/gst | 8 | Training Discrete Deep Generative Models via Gapped Straight-Through Estimator | https://scholar.google.com/scholar?cluster=3212785124198988357&hl=en&as_sdt=0,50 | 1 | 2,022 |
DRIBO: Robust Deep Reinforcement Learning via Multi-View Information Bottleneck | 16 | icml | 2 | 1 | 2023-06-17 04:54:43.003000 | https://github.com/BU-DEPEND-Lab/DRIBO | 4 | Dribo: Robust deep reinforcement learning via multi-view information bottleneck | https://scholar.google.com/scholar?cluster=17795910493641193453&hl=en&as_sdt=0,10 | 1 | 2,022 |
Variational Wasserstein gradient flow | 20 | icml | 0 | 0 | 2023-06-17 04:54:43.209000 | https://github.com/sbyebss/variational_wgf | 9 | Variational wasserstein gradient flow | https://scholar.google.com/scholar?cluster=4247639090058922494&hl=en&as_sdt=0,34 | 1 | 2,022 |
Data Determines Distributional Robustness in Contrastive Language Image Pre-training (CLIP) | 38 | icml | 1 | 0 | 2023-06-17 04:54:43.428000 | https://github.com/mlfoundations/imagenet-captions | 33 | Data determines distributional robustness in contrastive language image pre-training (clip) | https://scholar.google.com/scholar?cluster=12568254041342889008&hl=en&as_sdt=0,5 | 5 | 2,022 |
An Equivalence Between Data Poisoning and Byzantine Gradient Attacks | 5 | icml | 1 | 0 | 2023-06-17 04:54:43.634000 | https://github.com/lpd-epfl/attack_equivalence | 1 | An equivalence between data poisoning and byzantine gradient attacks | https://scholar.google.com/scholar?cluster=15814948581438408162&hl=en&as_sdt=0,5 | 1 | 2,022 |
Investigating Generalization by Controlling Normalized Margin | 3 | icml | 0 | 0 | 2023-06-17 04:54:43.839000 | https://github.com/alexfarhang/margin | 1 | Investigating Generalization by Controlling Normalized Margin | https://scholar.google.com/scholar?cluster=715638377527231014&hl=en&as_sdt=0,34 | 1 | 2,022 |
Private frequency estimation via projective geometry | 6 | icml | 0 | 0 | 2023-06-17 04:54:44.044000 | https://github.com/minilek/private_frequency_oracles | 3 | Private frequency estimation via projective geometry | https://scholar.google.com/scholar?cluster=5605547034926514625&hl=en&as_sdt=0,33 | 1 | 2,022 |
Coordinated Double Machine Learning | 0 | icml | 1 | 0 | 2023-06-17 04:54:44.250000 | https://github.com/nitaifingerhut/c-dml | 3 | Coordinated Double Machine Learning | https://scholar.google.com/scholar?cluster=3098806630799952921&hl=en&as_sdt=0,10 | 2 | 2,022 |
Conformal Prediction Sets with Limited False Positives | 4 | icml | 0 | 1 | 2023-06-17 04:54:44.457000 | https://github.com/ajfisch/conformal-fp | 0 | Conformal prediction sets with limited false positives | https://scholar.google.com/scholar?cluster=3023340906965759657&hl=en&as_sdt=0,36 | 1 | 2,022 |
Contrastive Mixture of Posteriors for Counterfactual Inference, Data Integration and Fairness | 1 | icml | 1 | 0 | 2023-06-17 04:54:44.662000 | https://github.com/benevolentai/comp | 7 | Contrastive mixture of posteriors for counterfactual inference, data integration and fairness | https://scholar.google.com/scholar?cluster=7874050188706328624&hl=en&as_sdt=0,5 | 3 | 2,022 |
A Neural Tangent Kernel Perspective of GANs | 13 | icml | 2 | 0 | 2023-06-17 04:54:44.869000 | https://github.com/emited/gantk2 | 13 | A neural tangent kernel perspective of gans | https://scholar.google.com/scholar?cluster=4606779800346786718&hl=en&as_sdt=0,5 | 4 | 2,022 |
SPDY: Accurate Pruning with Speedup Guarantees | 7 | icml | 4 | 3 | 2023-06-17 04:54:45.075000 | https://github.com/ist-daslab/spdy | 11 | SPDY: Accurate pruning with speedup guarantees | https://scholar.google.com/scholar?cluster=9481477632006628831&hl=en&as_sdt=0,32 | 5 | 2,022 |
Scaling Structured Inference with Randomization | 2 | icml | 3 | 0 | 2023-06-17 04:54:45.280000 | https://github.com/franxyao/rdp | 13 | Scaling structured inference with randomization | https://scholar.google.com/scholar?cluster=13234676438098295868&hl=en&as_sdt=0,38 | 2 | 2,022 |
DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks | 5 | icml | 1 | 1 | 2023-06-17 04:54:45.488000 | https://github.com/rice-eic/depthshrinker | 36 | DepthShrinker: a new compression paradigm towards boosting real-hardware efficiency of compact neural networks | https://scholar.google.com/scholar?cluster=13003128521759488248&hl=en&as_sdt=0,33 | 10 | 2,022 |
$p$-Laplacian Based Graph Neural Networks | 7 | icml | 2 | 0 | 2023-06-17 04:54:45.693000 | https://github.com/guoji-fu/pgnns | 21 | -Laplacian Based Graph Neural Networks | https://scholar.google.com/scholar?cluster=15123165040444629585&hl=en&as_sdt=0,33 | 2 | 2,022 |
Generalizing Gaussian Smoothing for Random Search | 2 | icml | 0 | 0 | 2023-06-17 04:54:45.898000 | https://github.com/isl-org/generalized-smoothing | 3 | Generalizing Gaussian Smoothing for Random Search | https://scholar.google.com/scholar?cluster=2545306041243695019&hl=en&as_sdt=0,5 | 4 | 2,022 |
Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems | 2 | icml | 1 | 0 | 2023-06-17 04:54:46.105000 | https://github.com/wi-pi/rethinking-image-scaling-attacks | 3 | Rethinking image-scaling attacks: The interplay between vulnerabilities in machine learning systems | https://scholar.google.com/scholar?cluster=9730023948978190760&hl=en&as_sdt=0,11 | 2 | 2,022 |
Lazy Estimation of Variable Importance for Large Neural Networks | 1 | icml | 0 | 0 | 2023-06-17 04:54:46.313000 | https://github.com/willett-group/lazyvi | 0 | Lazy Estimation of Variable Importance for Large Neural Networks | https://scholar.google.com/scholar?cluster=11646154414177168250&hl=en&as_sdt=0,3 | 2 | 2,022 |
Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack | 3 | icml | 2 | 0 | 2023-06-17 04:54:46.526000 | https://github.com/sjtubrian/mm-attack | 4 | Fast and reliable evaluation of adversarial robustness with minimum-margin attack | https://scholar.google.com/scholar?cluster=16577119936016409064&hl=en&as_sdt=0,5 | 1 | 2,022 |
Value Function based Difference-of-Convex Algorithm for Bilevel Hyperparameter Selection Problems | 9 | icml | 3 | 0 | 2023-06-17 04:54:46.731000 | https://github.com/sustech-optimization/vf-idca | 2 | Value function based difference-of-convex algorithm for bilevel hyperparameter selection problems | https://scholar.google.com/scholar?cluster=5559492833861486776&hl=en&as_sdt=0,10 | 1 | 2,022 |
Learning to Incorporate Texture Saliency Adaptive Attention to Image Cartoonization | 1 | icml | 1 | 1 | 2023-06-17 04:54:46.937000 | https://github.com/xianggao1102/learning-to-incorporate-texture-saliency-adaptive-attention-to-image-cartoonization | 4 | Learning to Incorporate Texture Saliency Adaptive Attention to Image Cartoonization | https://scholar.google.com/scholar?cluster=11484326183315995757&hl=en&as_sdt=0,33 | 1 | 2,022 |
Stochastic smoothing of the top-K calibrated hinge loss for deep imbalanced classification | 3 | icml | 0 | 2 | 2023-06-17 04:54:47.142000 | https://github.com/garcinc/noised-topk | 10 | Stochastic smoothing of the top-K calibrated hinge loss for deep imbalanced classification | https://scholar.google.com/scholar?cluster=16642060329776900644&hl=en&as_sdt=0,47 | 2 | 2,022 |
A Functional Information Perspective on Model Interpretation | 1 | icml | 0 | 0 | 2023-06-17 04:54:47.347000 | https://github.com/nitaytech/functionalexplanation | 5 | A Functional Information Perspective on Model Interpretation | https://scholar.google.com/scholar?cluster=5647868257497386951&hl=en&as_sdt=0,33 | 1 | 2,022 |
Inducing Causal Structure for Interpretable Neural Networks | 20 | icml | 0 | 0 | 2023-06-17 04:54:47.554000 | https://github.com/frankaging/interchange-intervention-training | 7 | Inducing causal structure for interpretable neural networks | https://scholar.google.com/scholar?cluster=3318078853003855419&hl=en&as_sdt=0,5 | 2 | 2,022 |
Near-Exact Recovery for Tomographic Inverse Problems via Deep Learning | 9 | icml | 5 | 0 | 2023-06-17 04:54:47.760000 | https://github.com/jmaces/aapm-ct-challenge | 34 | Near-exact recovery for tomographic inverse problems via deep learning | https://scholar.google.com/scholar?cluster=10012619344494620426&hl=en&as_sdt=0,5 | 3 | 2,022 |
Equivariance versus Augmentation for Spherical Images | 8 | icml | 1 | 0 | 2023-06-17 04:54:47.966000 | https://github.com/janegerken/sem_seg_s2cnn | 2 | Equivariance versus augmentation for spherical images | https://scholar.google.com/scholar?cluster=2388075100052458630&hl=en&as_sdt=0,33 | 0 | 2,022 |
Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations | 1 | icml | 0 | 1 | 2023-06-17 04:54:48.170000 | https://github.com/youranonymousefriend/plugininversion | 9 | Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations | https://scholar.google.com/scholar?cluster=3783911125052785325&hl=en&as_sdt=0,5 | 1 | 2,022 |
SCHA-VAE: Hierarchical Context Aggregation for Few-Shot Generation | 2 | icml | 0 | 0 | 2023-06-17 04:54:48.377000 | https://github.com/georgosgeorgos/hierarchical-few-shot-generative-models | 10 | Scha-vae: Hierarchical context aggregation for few-shot generation | https://scholar.google.com/scholar?cluster=18154128388289892262&hl=en&as_sdt=0,23 | 1 | 2,022 |
RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression | 8 | icml | 5 | 2 | 2023-06-17 04:54:48.585000 | https://github.com/BorealisAI/ranksim-imbalanced-regression | 27 | RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression | https://scholar.google.com/scholar?cluster=2649008384099907500&hl=en&as_sdt=0,5 | 2 | 2,022 |
Causal Inference Through the Structural Causal Marginal Problem | 6 | icml | 3 | 0 | 2023-06-17 04:54:48.791000 | https://github.com/lgresele/structural-causal-marginal | 2 | Causal inference through the structural causal marginal problem | https://scholar.google.com/scholar?cluster=2256399104999533783&hl=en&as_sdt=0,47 | 1 | 2,022 |
Variational Mixtures of ODEs for Inferring Cellular Gene Expression Dynamics | 3 | icml | 1 | 2 | 2023-06-17 04:54:48.997000 | https://github.com/welch-lab/velovae | 21 | Variational mixtures of ODEs for inferring cellular gene expression dynamics | https://scholar.google.com/scholar?cluster=5570506012304975998&hl=en&as_sdt=0,47 | 5 | 2,022 |
Leveraging Approximate Symbolic Models for Reinforcement Learning via Skill Diversity | 7 | icml | 1 | 0 | 2023-06-17 04:54:49.202000 | https://github.com/GuanSuns/ASGRL | 11 | Leveraging approximate symbolic models for reinforcement learning via skill diversity | https://scholar.google.com/scholar?cluster=9607066569965060600&hl=en&as_sdt=0,29 | 1 | 2,022 |
Bounding Training Data Reconstruction in Private (Deep) Learning | 14 | icml | 0 | 0 | 2023-06-17 04:54:49.411000 | https://github.com/facebookresearch/bounding_data_reconstruction | 10 | Bounding training data reconstruction in private (deep) learning | https://scholar.google.com/scholar?cluster=3008455373482985083&hl=en&as_sdt=0,23 | 4 | 2,022 |
NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks | 8 | icml | 2 | 0 | 2023-06-17 04:54:49.617000 | https://github.com/burakgurbuz97/nispa | 15 | Nispa: Neuro-inspired stability-plasticity adaptation for continual learning in sparse networks | https://scholar.google.com/scholar?cluster=17073314745146797398&hl=en&as_sdt=0,5 | 2 | 2,022 |
Active Learning on a Budget: Opposite Strategies Suit High and Low Budgets | 25 | icml | 4 | 0 | 2023-06-17 04:54:49.823000 | https://github.com/avihu111/typiclust | 44 | Active learning on a budget: Opposite strategies suit high and low budgets | https://scholar.google.com/scholar?cluster=7933856557848734665&hl=en&as_sdt=0,36 | 4 | 2,022 |
You Only Cut Once: Boosting Data Augmentation with a Single Cut | 9 | icml | 10 | 3 | 2023-06-17 04:54:50.032000 | https://github.com/junlinhan/yoco | 93 | You only cut once: Boosting data augmentation with a single cut | https://scholar.google.com/scholar?cluster=501111593877482032&hl=en&as_sdt=0,24 | 3 | 2,022 |
Scalable MCMC Sampling for Nonsymmetric Determinantal Point Processes | 1 | icml | 0 | 0 | 2023-06-17 04:54:50.238000 | https://github.com/insuhan/ndpp-mcmc-sampling | 0 | Scalable mcmc sampling for nonsymmetric determinantal point processes | https://scholar.google.com/scholar?cluster=280717695600419200&hl=en&as_sdt=0,5 | 1 | 2,022 |
Adversarial Attacks on Gaussian Process Bandits | 2 | icml | 0 | 0 | 2023-06-17 04:54:50.443000 | https://github.com/eric-vader/attack-bo | 1 | Adversarial attacks on Gaussian process bandits | https://scholar.google.com/scholar?cluster=13292319437654740768&hl=en&as_sdt=0,5 | 2 | 2,022 |
Temporal Difference Learning for Model Predictive Control | 35 | icml | 40 | 1 | 2023-06-17 04:54:50.650000 | https://github.com/nicklashansen/tdmpc | 201 | Temporal difference learning for model predictive control | https://scholar.google.com/scholar?cluster=10762661949285432757&hl=en&as_sdt=0,34 | 4 | 2,022 |
Strategic Instrumental Variable Regression: Recovering Causal Relationships From Strategic Responses | 13 | icml | 1 | 0 | 2023-06-17 04:54:50.855000 | https://github.com/logan-stapleton/strategic-instrumental-variable-regression | 0 | Strategic instrumental variable regression: Recovering causal relationships from strategic responses | https://scholar.google.com/scholar?cluster=5426296166892217767&hl=en&as_sdt=0,29 | 2 | 2,022 |
General-purpose, long-context autoregressive modeling with Perceiver AR | 22 | icml | 18 | 16 | 2023-06-17 04:54:51.061000 | https://github.com/google-research/perceiver-ar | 202 | General-purpose, long-context autoregressive modeling with perceiver ar | https://scholar.google.com/scholar?cluster=1307821423265105144&hl=en&as_sdt=0,1 | 12 | 2,022 |
On Distribution Shift in Learning-based Bug Detectors | 10 | icml | 4 | 2 | 2023-06-17 04:54:51.266000 | https://github.com/eth-sri/learning-real-bug-detector | 12 | On distribution shift in learning-based bug detectors | https://scholar.google.com/scholar?cluster=16187870824460798751&hl=en&as_sdt=0,1 | 8 | 2,022 |
GNNRank: Learning Global Rankings from Pairwise Comparisons via Directed Graph Neural Networks | 6 | icml | 8 | 0 | 2023-06-17 04:54:51.472000 | https://github.com/sherylhyx/gnnrank | 39 | GNNRank: Learning global rankings from pairwise comparisons via directed graph neural networks | https://scholar.google.com/scholar?cluster=4446473441491315248&hl=en&as_sdt=0,5 | 2 | 2,022 |
Sparse Double Descent: Where Network Pruning Aggravates Overfitting | 7 | icml | 1 | 0 | 2023-06-17 04:54:51.677000 | https://github.com/hezheug/sparse-double-descent | 14 | Sparse Double Descent: Where Network Pruning Aggravates Overfitting | https://scholar.google.com/scholar?cluster=13575634226332267218&hl=en&as_sdt=0,5 | 2 | 2,022 |
Label-Descriptive Patterns and Their Application to Characterizing Classification Errors | 2 | icml | 0 | 0 | 2023-06-17 04:54:51.883000 | https://github.com/uds-lsv/premise | 2 | Label-descriptive patterns and their application to characterizing classification errors | https://scholar.google.com/scholar?cluster=17151062876326396641&hl=en&as_sdt=0,5 | 5 | 2,022 |
NOMU: Neural Optimization-based Model Uncertainty | 10 | icml | 5 | 1 | 2023-06-17 04:54:52.089000 | https://github.com/marketdesignresearch/NOMU | 7 | Nomu: Neural optimization-based model uncertainty | https://scholar.google.com/scholar?cluster=17483969048738577269&hl=en&as_sdt=0,39 | 1 | 2,022 |
Scaling Out-of-Distribution Detection for Real-World Settings | 137 | icml | 19 | 0 | 2023-06-17 04:54:52.295000 | https://github.com/hendrycks/anomaly-seg | 144 | Scaling out-of-distribution detection for real-world settings | https://scholar.google.com/scholar?cluster=8919172731066658800&hl=en&as_sdt=0,10 | 9 | 2,022 |
Unsupervised Detection of Contextualized Embedding Bias with Application to Ideology | 0 | icml | 0 | 0 | 2023-06-17 04:54:52.501000 | https://github.com/valentinhofmann/unsupervised_bias | 0 | Unsupervised Detection of Contextualized Embedding Bias with Application to Ideology | https://scholar.google.com/scholar?cluster=11219729475628655718&hl=en&as_sdt=0,5 | 1 | 2,022 |
Equivariant Diffusion for Molecule Generation in 3D | 145 | icml | 64 | 13 | 2023-06-17 04:54:52.707000 | https://github.com/ehoogeboom/e3_diffusion_for_molecules | 260 | Equivariant diffusion for molecule generation in 3d | https://scholar.google.com/scholar?cluster=9412014854490527272&hl=en&as_sdt=0,14 | 7 | 2,022 |
Conditional GANs with Auxiliary Discriminative Classifier | 7 | icml | 4 | 0 | 2023-06-17 04:54:52.912000 | https://github.com/houliangict/adcgan | 15 | Conditional GANs with auxiliary discriminative classifier | https://scholar.google.com/scholar?cluster=868024013198158367&hl=en&as_sdt=0,5 | 1 | 2,022 |
Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents | 162 | icml | 20 | 3 | 2023-06-17 04:54:53.118000 | https://github.com/huangwl18/language-planner | 163 | Language models as zero-shot planners: Extracting actionable knowledge for embodied agents | https://scholar.google.com/scholar?cluster=11998123682359381476&hl=en&as_sdt=0,3 | 4 | 2,022 |
Going Deeper into Permutation-Sensitive Graph Neural Networks | 11 | icml | 4 | 0 | 2023-06-17 04:54:53.323000 | https://github.com/zhongyu1998/pg-gnn | 20 | Going Deeper into Permutation-Sensitive Graph Neural Networks | https://scholar.google.com/scholar?cluster=14997369349376020515&hl=en&as_sdt=0,5 | 1 | 2,022 |
Directed Acyclic Transformer for Non-Autoregressive Machine Translation | 15 | icml | 10 | 6 | 2023-06-17 04:54:53.529000 | https://github.com/thu-coai/da-transformer | 89 | Directed acyclic transformer for non-autoregressive machine translation | https://scholar.google.com/scholar?cluster=12752123369496105828&hl=en&as_sdt=0,33 | 7 | 2,022 |
Unsupervised Ground Metric Learning Using Wasserstein Singular Vectors | 2 | icml | 0 | 0 | 2023-06-17 04:54:53.734000 | https://github.com/gjhuizing/wsingular | 7 | Unsupervised Ground Metric Learning Using Wasserstein Singular Vectors | https://scholar.google.com/scholar?cluster=15888088169122917171&hl=en&as_sdt=0,5 | 2 | 2,022 |
Robust Kernel Density Estimation with Median-of-Means principle | 8 | icml | 3 | 3 | 2023-06-17 04:54:53.940000 | https://github.com/lminvielle/mom-kde | 6 | Robust kernel density estimation with median-of-means principle | https://scholar.google.com/scholar?cluster=14673811907284819215&hl=en&as_sdt=0,5 | 3 | 2,022 |
Proximal Denoiser for Convergent Plug-and-Play Optimization with Nonconvex Regularization | 17 | icml | 3 | 0 | 2023-06-17 04:54:54.145000 | https://github.com/samuro95/prox-pnp | 4 | Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization | https://scholar.google.com/scholar?cluster=12256965087281375600&hl=en&as_sdt=0,5 | 2 | 2,022 |
LeNSE: Learning To Navigate Subgraph Embeddings for Large-Scale Combinatorial Optimisation | 4 | icml | 2 | 0 | 2023-06-17 04:54:54.350000 | https://github.com/davidireland3/lense | 9 | LeNSE: Learning To Navigate Subgraph Embeddings for Large-Scale Combinatorial Optimisation | https://scholar.google.com/scholar?cluster=7267816984726307573&hl=en&as_sdt=0,26 | 3 | 2,022 |
The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention | 12 | icml | 1 | 1 | 2023-06-17 04:54:54.555000 | https://github.com/robertcsordas/linear_layer_as_attention | 14 | The dual form of neural networks revisited: Connecting test time predictions to training patterns via spotlights of attention | https://scholar.google.com/scholar?cluster=11337857580515349157&hl=en&as_sdt=0,5 | 2 | 2,022 |
A Modern Self-Referential Weight Matrix That Learns to Modify Itself | 19 | icml | 17 | 3 | 2023-06-17 04:54:54.761000 | https://github.com/idsia/modern-srwm | 148 | A modern self-referential weight matrix that learns to modify itself | https://scholar.google.com/scholar?cluster=10630456414832460528&hl=en&as_sdt=0,33 | 8 | 2,022 |
A deep convolutional neural network that is invariant to time rescaling | 2 | icml | 0 | 1 | 2023-06-17 04:54:54.967000 | https://github.com/compmem/SITHCon | 2 | A deep convolutional neural network that is invariant to time rescaling | https://scholar.google.com/scholar?cluster=731774651536846779&hl=en&as_sdt=0,5 | 4 | 2,022 |
Biological Sequence Design with GFlowNets | 31 | icml | 14 | 7 | 2023-06-17 04:54:55.172000 | https://github.com/mj10/bioseq-gfn-al | 51 | Biological sequence design with gflownets | https://scholar.google.com/scholar?cluster=13153301030980981497&hl=en&as_sdt=0,39 | 1 | 2,022 |
Combining Diverse Feature Priors | 5 | icml | 0 | 0 | 2023-06-17 04:54:55.378000 | https://github.com/MadryLab/copriors | 7 | Combining diverse feature priors | https://scholar.google.com/scholar?cluster=3431368394631636693&hl=en&as_sdt=0,33 | 5 | 2,022 |
Training Your Sparse Neural Network Better with Any Mask | 5 | icml | 3 | 0 | 2023-06-17 04:54:55.584000 | https://github.com/vita-group/tost | 20 | Training your sparse neural network better with any mask | https://scholar.google.com/scholar?cluster=17434761620518064417&hl=en&as_sdt=0,11 | 10 | 2,022 |
Planning with Diffusion for Flexible Behavior Synthesis | 64 | icml | 56 | 8 | 2023-06-17 04:54:55.789000 | https://github.com/jannerm/diffuser | 441 | Planning with diffusion for flexible behavior synthesis | https://scholar.google.com/scholar?cluster=17441916079353459921&hl=en&as_sdt=0,44 | 8 | 2,022 |
HyperImpute: Generalized Iterative Imputation with Automatic Model Selection | 9 | icml | 4 | 0 | 2023-06-17 04:54:55.997000 | https://github.com/vanderschaarlab/hyperimpute | 94 | Hyperimpute: Generalized iterative imputation with automatic model selection | https://scholar.google.com/scholar?cluster=7345905181972151816&hl=en&as_sdt=0,33 | 3 | 2,022 |
Mitigating Modality Collapse in Multimodal VAEs via Impartial Optimization | 5 | icml | 0 | 0 | 2023-06-17 04:54:56.204000 | https://github.com/adrianjav/impartial-vaes | 3 | Mitigating Modality Collapse in Multimodal VAEs via Impartial Optimization | https://scholar.google.com/scholar?cluster=14600839373536938661&hl=en&as_sdt=0,11 | 1 | 2,022 |
MASER: Multi-Agent Reinforcement Learning with Subgoals Generated from Experience Replay Buffer | 9 | icml | 4 | 3 | 2023-06-17 04:54:56.452000 | https://github.com/jiwonjeon9603/maser | 11 | Maser: Multi-agent reinforcement learning with subgoals generated from experience replay buffer | https://scholar.google.com/scholar?cluster=3511041100939657281&hl=en&as_sdt=0,45 | 2 | 2,022 |
Improving Policy Optimization with Generalist-Specialist Learning | 5 | icml | 0 | 0 | 2023-06-17 04:54:56.658000 | https://github.com/seanjia/gsl | 3 | Improving policy optimization with generalist-specialist learning | https://scholar.google.com/scholar?cluster=14525219330814535505&hl=en&as_sdt=0,23 | 1 | 2,022 |
Supervised Off-Policy Ranking | 6 | icml | 1 | 1 | 2023-06-17 04:54:56.864000 | https://github.com/SOPR-T/SOPR-T | 5 | Supervised off-policy ranking | https://scholar.google.com/scholar?cluster=12930957527069555602&hl=en&as_sdt=0,10 | 1 | 2,022 |
Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations | 47 | icml | 17 | 2 | 2023-06-17 04:54:57.069000 | https://github.com/harryjo97/gdss | 81 | Score-based generative modeling of graphs via the system of stochastic differential equations | https://scholar.google.com/scholar?cluster=4163972994004543532&hl=en&as_sdt=0,5 | 2 | 2,022 |
Robust Fine-Tuning of Deep Neural Networks with Hessian-based Generalization Guarantees | 7 | icml | 0 | 0 | 2023-06-17 04:54:57.275000 | https://github.com/neu-statsml-research/robust-fine-tuning | 2 | Robust fine-tuning of deep neural networks with hessian-based generalization guarantees | https://scholar.google.com/scholar?cluster=6709344473214339936&hl=en&as_sdt=0,5 | 1 | 2,022 |
Flashlight: Enabling Innovation in Tools for Machine Learning | 11 | icml | 468 | 106 | 2023-06-17 04:54:57.481000 | https://github.com/flashlight/flashlight | 4,858 | Flashlight: Enabling innovation in tools for machine learning | https://scholar.google.com/scholar?cluster=13806487547053815832&hl=en&as_sdt=0,5 | 123 | 2,022 |
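Each row above is one record with the twelve columns listed in the schema. A minimal sketch of querying such records in plain Python, using two sample rows copied from the table (a real loader, e.g. a CSV/Parquet reader or the `datasets` library, is assumed to yield dicts with the same keys):

```python
# Two records transcribed from the table above; only a subset of use cases
# is shown (ranking by GitHub stars). Keys match the dataset's columns.
rows = [
    {
        "title": "Temporal Difference Learning for Model Predictive Control",
        "citations_google_scholar": 35,
        "conference": "icml",
        "forks": 40,
        "issues": 1,
        "repo_url": "https://github.com/nicklashansen/tdmpc",
        "stars": 201,
        "watchers": 4,
        "year": 2022,
    },
    {
        "title": "Equivariant Diffusion for Molecule Generation in 3D",
        "citations_google_scholar": 145,
        "conference": "icml",
        "forks": 64,
        "issues": 13,
        "repo_url": "https://github.com/ehoogeboom/e3_diffusion_for_molecules",
        "stars": 260,
        "watchers": 7,
        "year": 2022,
    },
]

# Rank papers by GitHub stars, descending, and print the top repo URL.
by_stars = sorted(rows, key=lambda r: r["stars"], reverse=True)
print(by_stars[0]["repo_url"])
```

The same pattern (sort or filter on any int64 column) applies to `forks`, `issues`, `watchers`, or `citations_google_scholar`.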