title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
---|---|---|---|---|---|---|---|---|---|---|---|
Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions | 14 | icml | 2 | 0 | 2023-06-17 04:55:18.353000 | https://github.com/pilancilab/scnn | 5 | Fast convex optimization for two-layer relu networks: Equivalent model classes and cone decompositions | https://scholar.google.com/scholar?cluster=7077031077028119954&hl=en&as_sdt=0,21 | 3 | 2,022 |
Invariant Ancestry Search | 1 | icml | 0 | 0 | 2023-06-17 04:55:18.559000 | https://github.com/phillipmogensen/invariantancestrysearch | 0 | Invariant Ancestry Search | https://scholar.google.com/scholar?cluster=7085135570627495556&hl=en&as_sdt=0,10 | 1 | 2,022 |
SpeqNets: Sparsity-aware permutation-equivariant graph networks | 21 | icml | 3 | 0 | 2023-06-17 04:55:18.765000 | https://github.com/chrsmrrs/speqnets | 9 | Speqnets: Sparsity-aware permutation-equivariant graph networks | https://scholar.google.com/scholar?cluster=18273879943488078405&hl=en&as_sdt=0,1 | 1 | 2,022 |
CtrlFormer: Learning Transferable State Representation for Visual Control via Transformer | 4 | icml | 2 | 1 | 2023-06-17 04:55:18.970000 | https://github.com/YaoMarkMu/CtrlFormer_robotic | 26 | Ctrlformer: Learning transferable state representation for visual control via transformer | https://scholar.google.com/scholar?cluster=15994281746681133957&hl=en&as_sdt=0,5 | 2 | 2,022 |
AutoSNN: Towards Energy-Efficient Spiking Neural Networks | 19 | icml | 1 | 0 | 2023-06-17 04:55:19.177000 | https://github.com/nabk89/autosnn | 11 | AutoSNN: towards energy-efficient spiking neural networks | https://scholar.google.com/scholar?cluster=4509781886252984486&hl=en&as_sdt=0,44 | 1 | 2,022 |
Overcoming Oscillations in Quantization-Aware Training | 12 | icml | 6 | 4 | 2023-06-17 04:55:19.383000 | https://github.com/qualcomm-ai-research/oscillations-qat | 35 | Overcoming oscillations in quantization-aware training | https://scholar.google.com/scholar?cluster=7420900147449297727&hl=en&as_sdt=0,33 | 6 | 2,022 |
Improving Ensemble Distillation With Weight Averaging and Diversifying Perturbation | 3 | icml | 1 | 0 | 2023-06-17 04:55:19.589000 | https://github.com/cs-giung/distill-latentbe | 2 | Improving ensemble distillation with weight averaging and diversifying perturbation | https://scholar.google.com/scholar?cluster=15634605277253421377&hl=en&as_sdt=0,5 | 1 | 2,022 |
Measuring Representational Robustness of Neural Networks Through Shared Invariances | 2 | icml | 0 | 0 | 2023-06-17 04:55:19.796000 | https://github.com/nvedant07/stir | 5 | Measuring Representational Robustness of Neural Networks Through Shared Invariances | https://scholar.google.com/scholar?cluster=11535296107699738994&hl=en&as_sdt=0,5 | 2 | 2,022 |
Multi-Task Learning as a Bargaining Game | 20 | icml | 16 | 0 | 2023-06-17 04:55:20.002000 | https://github.com/avivnavon/nash-mtl | 116 | Multi-task learning as a bargaining game | https://scholar.google.com/scholar?cluster=3841743488607196482&hl=en&as_sdt=0,5 | 4 | 2,022 |
Variational Inference for Infinitely Deep Neural Networks | 2 | icml | 0 | 1 | 2023-06-17 04:55:20.208000 | https://github.com/anazaret/unbounded-depth-neural-networks | 12 | Variational Inference for Infinitely Deep Neural Networks | https://scholar.google.com/scholar?cluster=15923008707496019552&hl=en&as_sdt=0,5 | 1 | 2,022 |
Stable Conformal Prediction Sets | 7 | icml | 0 | 0 | 2023-06-17 04:55:20.414000 | https://github.com/EugeneNdiaye/stable_conformal_prediction | 3 | Stable conformal prediction sets | https://scholar.google.com/scholar?cluster=1322086183676915267&hl=en&as_sdt=0,36 | 2 | 2,022 |
Sublinear-Time Clustering Oracle for Signed Graphs | 0 | icml | 0 | 0 | 2023-06-17 04:55:20.621000 | https://github.com/stefanresearch/signed-oracle | 0 | Sublinear-Time Clustering Oracle for Signed Graphs | https://scholar.google.com/scholar?cluster=11680644385251401321&hl=en&as_sdt=0,5 | 1 | 2,022 |
Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modeling | 21 | icml | 6 | 0 | 2023-06-17 04:55:20.827000 | https://github.com/tung-nd/tnp-pytorch | 42 | Transformer neural processes: Uncertainty-aware meta learning via sequence modeling | https://scholar.google.com/scholar?cluster=8314226561470238527&hl=en&as_sdt=0,39 | 2 | 2,022 |
Improving Transformers with Probabilistic Attention Keys | 9 | icml | 6 | 1 | 2023-06-17 04:55:21.033000 | https://github.com/minhtannguyen/transformer-mgk | 20 | Improving transformers with probabilistic attention keys | https://scholar.google.com/scholar?cluster=15369073464631209004&hl=en&as_sdt=0,33 | 1 | 2,022 |
Recurrent Model-Free RL Can Be a Strong Baseline for Many POMDPs | 16 | icml | 29 | 1 | 2023-06-17 04:55:21.239000 | https://github.com/twni2016/pomdp-baselines | 212 | Recurrent model-free rl can be a strong baseline for many pomdps | https://scholar.google.com/scholar?cluster=10952850493674011457&hl=en&as_sdt=0,39 | 5 | 2,022 |
GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models | 742 | icml | 457 | 23 | 2023-06-17 04:55:21.445000 | https://github.com/openai/glide-text2im | 3,226 | Glide: Towards photorealistic image generation and editing with text-guided diffusion models | https://scholar.google.com/scholar?cluster=15472303808406531445&hl=en&as_sdt=0,34 | 142 | 2,022 |
Diffusion Models for Adversarial Purification | 72 | icml | 22 | 0 | 2023-06-17 04:55:21.653000 | https://github.com/NVlabs/DiffPure | 163 | Diffusion models for adversarial purification | https://scholar.google.com/scholar?cluster=9166244005732160404&hl=en&as_sdt=0,5 | 5 | 2,022 |
The Primacy Bias in Deep Reinforcement Learning | 23 | icml | 6 | 0 | 2023-06-17 04:55:21.859000 | https://github.com/evgenii-nikishin/rl_with_resets | 82 | The primacy bias in deep reinforcement learning | https://scholar.google.com/scholar?cluster=11620338198970862085&hl=en&as_sdt=0,48 | 3 | 2,022 |
Efficient Test-Time Model Adaptation without Forgetting | 40 | icml | 5 | 0 | 2023-06-17 04:55:22.065000 | https://github.com/mr-eggplant/eata | 65 | Efficient test-time model adaptation without forgetting | https://scholar.google.com/scholar?cluster=17499416478096807711&hl=en&as_sdt=0,5 | 2 | 2,022 |
Utilizing Expert Features for Contrastive Learning of Time-Series Representations | 5 | icml | 2 | 2 | 2023-06-17 04:55:22.270000 | https://github.com/boschresearch/expclr | 14 | Utilizing expert features for contrastive learning of time-series representations | https://scholar.google.com/scholar?cluster=16790455232498977165&hl=en&as_sdt=0,33 | 6 | 2,022 |
Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval | 33 | icml | 21 | 4 | 2023-06-17 04:55:22.477000 | https://github.com/oatml-markslab/tranception | 88 | Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval | https://scholar.google.com/scholar?cluster=13139855140556717827&hl=en&as_sdt=0,44 | 5 | 2,022 |
Scalable Deep Gaussian Markov Random Fields for General Graphs | 2 | icml | 3 | 0 | 2023-06-17 04:55:22.684000 | https://github.com/joeloskarsson/graph-dgmrf | 4 | Scalable Deep Gaussian Markov Random Fields for General Graphs | https://scholar.google.com/scholar?cluster=16619238478793238405&hl=en&as_sdt=0,48 | 3 | 2,022 |
Zero-shot AutoML with Pretrained Models | 2 | icml | 2 | 0 | 2023-06-17 04:55:22.890000 | https://github.com/automl/zero-shot-automl-with-pretrained-models | 35 | Zero-Shot AutoML with Pretrained Models | https://scholar.google.com/scholar?cluster=4155086096102443249&hl=en&as_sdt=0,21 | 9 | 2,022 |
History Compression via Language Models in Reinforcement Learning | 8 | icml | 4 | 0 | 2023-06-17 04:55:23.096000 | https://github.com/ml-jku/helm | 38 | History compression via language models in reinforcement learning | https://scholar.google.com/scholar?cluster=3335833011258515063&hl=en&as_sdt=0,19 | 6 | 2,022 |
A Unified Weight Initialization Paradigm for Tensorial Convolutional Neural Networks | 2 | icml | 9 | 2 | 2023-06-17 04:55:23.302000 | https://github.com/tnbar/tednet | 64 | A Unified Weight Initialization Paradigm for Tensorial Convolutional Neural Networks | https://scholar.google.com/scholar?cluster=2601266852558996821&hl=en&as_sdt=0,22 | 3 | 2,022 |
Robustness and Accuracy Could Be Reconcilable by (Proper) Definition | 31 | icml | 7 | 0 | 2023-06-17 04:55:23.509000 | https://github.com/p2333/score | 58 | Robustness and accuracy could be reconcilable by (proper) definition | https://scholar.google.com/scholar?cluster=12573058517676493723&hl=en&as_sdt=0,5 | 2 | 2,022 |
Learning Symmetric Embeddings for Equivariant World Models | 16 | icml | 0 | 1 | 2023-06-17 04:55:23.717000 | https://github.com/jypark0/sen | 4 | Learning symmetric embeddings for equivariant world models | https://scholar.google.com/scholar?cluster=17517971134760315540&hl=en&as_sdt=0,33 | 1 | 2,022 |
Blurs Behave Like Ensembles: Spatial Smoothings to Improve Accuracy, Uncertainty, and Robustness | 7 | icml | 7 | 0 | 2023-06-17 04:55:23.924000 | https://github.com/xxxnell/spatial-smoothing | 70 | Blurs behave like ensembles: Spatial smoothings to improve accuracy, uncertainty, and robustness | https://scholar.google.com/scholar?cluster=11971703868153296298&hl=en&as_sdt=0,33 | 2 | 2,022 |
Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution | 24 | icml | 3 | 1 | 2023-06-17 04:55:24.130000 | https://github.com/ml-jku/align-rudder | 18 | Align-rudder: Learning from few demonstrations by reward redistribution | https://scholar.google.com/scholar?cluster=17099796649634976721&hl=en&as_sdt=0,36 | 6 | 2,022 |
POET: Training Neural Networks on Tiny Devices with Integrated Rematerialization and Paging | 8 | icml | 11 | 7 | 2023-06-17 04:55:24.336000 | https://github.com/shishirpatil/poet | 127 | POET: Training neural networks on tiny devices with integrated rematerialization and paging | https://scholar.google.com/scholar?cluster=5184430437455623817&hl=en&as_sdt=0,6 | 9 | 2,022 |
Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding | 32 | icml | 1,936 | 479 | 2023-06-17 04:55:24.542000 | https://github.com/espnet/espnet | 6,692 | Branchformer: Parallel mlp-attention architectures to capture local and global context for speech recognition and understanding | https://scholar.google.com/scholar?cluster=8709670323739096599&hl=en&as_sdt=0,33 | 179 | 2,022 |
Pocket2Mol: Efficient Molecular Sampling Based on 3D Protein Pockets | 30 | icml | 46 | 9 | 2023-06-17 04:55:24.751000 | https://github.com/pengxingang/pocket2mol | 155 | Pocket2mol: Efficient molecular sampling based on 3d protein pockets | https://scholar.google.com/scholar?cluster=5422392293509643070&hl=en&as_sdt=0,33 | 8 | 2,022 |
Differentiable Top-k Classification Learning | 8 | icml | 0 | 1 | 2023-06-17 04:55:24.977000 | https://github.com/felix-petersen/difftopk | 48 | Differentiable top-k classification learning | https://scholar.google.com/scholar?cluster=2888939572667326983&hl=en&as_sdt=0,33 | 3 | 2,022 |
Multi-scale Feature Learning Dynamics: Insights for Double Descent | 8 | icml | 2 | 0 | 2023-06-17 04:55:25.183000 | https://github.com/nndoubledescent/doubledescent | 0 | Multi-scale feature learning dynamics: Insights for double descent | https://scholar.google.com/scholar?cluster=15892651020867127021&hl=en&as_sdt=0,33 | 1 | 2,022 |
A Differential Entropy Estimator for Training Neural Networks | 13 | icml | 4 | 0 | 2023-06-17 04:55:25.390000 | https://github.com/g-pichler/knife | 9 | A differential entropy estimator for training neural networks | https://scholar.google.com/scholar?cluster=5856117255578319314&hl=en&as_sdt=0,33 | 1 | 2,022 |
Federated Learning with Partial Model Personalization | 30 | icml | 0 | 0 | 2023-06-17 04:55:25.596000 | https://github.com/krishnap25/fl_partial_personalization | 1 | Federated learning with partial model personalization | https://scholar.google.com/scholar?cluster=4750968691898857474&hl=en&as_sdt=0,11 | 3 | 2,022 |
Geometric Multimodal Contrastive Representation Learning | 7 | icml | 4 | 0 | 2023-06-17 04:55:25.801000 | https://github.com/miguelsvasco/gmc | 17 | Geometric Multimodal Contrastive Representation Learning | https://scholar.google.com/scholar?cluster=1723737180667149201&hl=en&as_sdt=0,50 | 2 | 2,022 |
On the Practicality of Deterministic Epistemic Uncertainty | 15 | icml | 178 | 119 | 2023-06-17 04:55:26.007000 | https://github.com/google/uncertainty-baselines | 1,244 | On the practicality of deterministic epistemic uncertainty | https://scholar.google.com/scholar?cluster=10237983835645354047&hl=en&as_sdt=0,33 | 20 | 2,022 |
ContentVec: An Improved Self-Supervised Speech Representation by Disentangling Speakers | 22 | icml | 19 | 4 | 2023-06-17 04:55:26.214000 | https://github.com/auspicious3000/contentvec | 277 | Contentvec: An improved self-supervised speech representation by disentangling speakers | https://scholar.google.com/scholar?cluster=16442143470536354603&hl=en&as_sdt=0,26 | 7 | 2,022 |
Generalizing to Evolving Domains with Latent Structure-Aware Sequential Autoencoder | 3 | icml | 4 | 0 | 2023-06-17 04:55:26.440000 | https://github.com/wonderseven/lssae | 19 | Generalizing to Evolving Domains with Latent Structure-Aware Sequential Autoencoder | https://scholar.google.com/scholar?cluster=8021731201291301386&hl=en&as_sdt=0,22 | 3 | 2,022 |
Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence | 3 | icml | 1 | 0 | 2023-06-17 04:55:26.646000 | https://github.com/zhqiu/ndcg-optimization | 2 | Large-scale stochastic optimization of ndcg surrogates for deep learning with provable convergence | https://scholar.google.com/scholar?cluster=9377138316635213561&hl=en&as_sdt=0,33 | 1 | 2,022 |
Latent Outlier Exposure for Anomaly Detection with Contaminated Data | 15 | icml | 8 | 1 | 2023-06-17 04:55:26.853000 | https://github.com/boschresearch/LatentOE-AD | 34 | Latent outlier exposure for anomaly detection with contaminated data | https://scholar.google.com/scholar?cluster=3679566789459312121&hl=en&as_sdt=0,33 | 4 | 2,022 |
Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning | 7 | icml | 0 | 0 | 2023-06-17 04:55:27.059000 | https://github.com/baichenjia/contrastive-ucb | 9 | Contrastive ucb: Provably efficient contrastive self-supervised learning in online reinforcement learning | https://scholar.google.com/scholar?cluster=4487688180752876620&hl=en&as_sdt=0,5 | 2 | 2,022 |
Particle Transformer for Jet Tagging | 10 | icml | 25 | 1 | 2023-06-17 04:55:27.265000 | https://github.com/jet-universe/particle_transformer | 43 | Particle transformer for jet tagging | https://scholar.google.com/scholar?cluster=12329206017907212560&hl=en&as_sdt=0,23 | 3 | 2,022 |
Winning the Lottery Ahead of Time: Efficient Early Network Pruning | 5 | icml | 2 | 0 | 2023-06-17 04:55:27.480000 | https://github.com/johnrachwan123/Early-Cropression-via-Gradient-Flow-Preservation | 15 | Winning the lottery ahead of time: Efficient early network pruning | https://scholar.google.com/scholar?cluster=3167787605705434615&hl=en&as_sdt=0,41 | 2 | 2,022 |
DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale | 59 | icml | 3,110 | 886 | 2023-06-17 04:55:27.718000 | https://github.com/microsoft/DeepSpeed | 25,974 | Deepspeed-moe: Advancing mixture-of-experts inference and training to power next-generation ai scale | https://scholar.google.com/scholar?cluster=6450094276419504510&hl=en&as_sdt=0,22 | 290 | 2,022 |
A Closer Look at Smoothness in Domain Adversarial Training | 20 | icml | 4 | 2 | 2023-06-17 04:55:27.927000 | https://github.com/val-iisc/sdat | 40 | A closer look at smoothness in domain adversarial training | https://scholar.google.com/scholar?cluster=11164597139581450427&hl=en&as_sdt=0,33 | 14 | 2,022 |
Linear Adversarial Concept Erasure | 25 | icml | 3 | 2 | 2023-06-17 04:55:28.133000 | https://github.com/shauli-ravfogel/rlace-icml | 23 | Linear adversarial concept erasure | https://scholar.google.com/scholar?cluster=157683061025883774&hl=en&as_sdt=0,31 | 1 | 2,022 |
Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks | 15 | icml | 0 | 0 | 2023-06-17 04:55:28.339000 | https://github.com/asafmaman101/imp_reg_htf | 4 | Implicit regularization in hierarchical tensor factorization and deep convolutional neural networks | https://scholar.google.com/scholar?cluster=12909622448171060632&hl=en&as_sdt=0,33 | 2 | 2,022 |
The dynamics of representation learning in shallow, non-linear autoencoders | 2 | icml | 0 | 0 | 2023-06-17 04:55:28.548000 | https://github.com/mariaref/nonlinearshallowae | 5 | The dynamics of representation learning in shallow, non-linear autoencoders | https://scholar.google.com/scholar?cluster=14118431460184328977&hl=en&as_sdt=0,31 | 2 | 2,022 |
Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs | 1 | icml | 0 | 0 | 2023-06-17 04:55:28.755000 | https://github.com/sjtu-xai-lab/transformation-complexity | 1 | Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs | https://scholar.google.com/scholar?cluster=1146425504680188001&hl=en&as_sdt=0,33 | 1 | 2,022 |
Benchmarking and Analyzing Point Cloud Classification under Corruptions | 26 | icml | 3 | 0 | 2023-06-17 04:55:28.962000 | https://github.com/jiawei-ren/modelnetc | 50 | Benchmarking and analyzing point cloud classification under corruptions | https://scholar.google.com/scholar?cluster=4434116773940428233&hl=en&as_sdt=0,33 | 6 | 2,022 |
Robust SDE-Based Variational Formulations for Solving Linear PDEs via Deep Learning | 2 | icml | 1 | 0 | 2023-06-17 04:55:29.168000 | https://github.com/juliusberner/robust_kolmogorov | 2 | Robust SDE-based variational formulations for solving linear PDEs via deep learning | https://scholar.google.com/scholar?cluster=5839668907631655505&hl=en&as_sdt=0,16 | 1 | 2,022 |
LyaNet: A Lyapunov Framework for Training Neural ODEs | 17 | icml | 3 | 0 | 2023-06-17 04:55:29.382000 | https://github.com/ivandariojr/lyapunovlearning | 27 | LyaNet: A Lyapunov framework for training neural ODEs | https://scholar.google.com/scholar?cluster=11176249487221195122&hl=en&as_sdt=0,33 | 3 | 2,022 |
Short-Term Plasticity Neurons Learning to Learn and Forget | 8 | icml | 1 | 0 | 2023-06-17 04:55:29.589000 | https://github.com/neuromorphiccomputing/stpn | 17 | Short-term plasticity neurons learning to learn and forget | https://scholar.google.com/scholar?cluster=13353176637859953693&hl=en&as_sdt=0,5 | 4 | 2,022 |
Function-space Inference with Sparse Implicit Processes | 2 | icml | 2 | 0 | 2023-06-17 04:55:29.800000 | https://github.com/simonrsantana/sparse-implicit-processes | 1 | Function-space Inference with Sparse Implicit Processes | https://scholar.google.com/scholar?cluster=3087914783084308149&hl=en&as_sdt=0,50 | 1 | 2,022 |
Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images | 3 | icml | 3 | 0 | 2023-06-17 04:55:30.007000 | https://github.com/tomron27/dd_med | 2 | Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images | https://scholar.google.com/scholar?cluster=10465544337215443782&hl=en&as_sdt=0,14 | 1 | 2,022 |
A Consistent and Efficient Evaluation Strategy for Attribution Methods | 18 | icml | 4 | 3 | 2023-06-17 04:55:30.216000 | https://github.com/tleemann/road_evaluation | 12 | A consistent and efficient evaluation strategy for attribution methods | https://scholar.google.com/scholar?cluster=16933534039020294474&hl=en&as_sdt=0,44 | 1 | 2,022 |
Direct Behavior Specification via Constrained Reinforcement Learning | 14 | icml | 2 | 0 | 2023-06-17 04:55:30.424000 | https://github.com/ubisoft/directbehaviorspecification | 8 | Direct behavior specification via constrained reinforcement learning | https://scholar.google.com/scholar?cluster=12930072295285422644&hl=en&as_sdt=0,18 | 2 | 2,022 |
Graph-Coupled Oscillator Networks | 26 | icml | 7 | 1 | 2023-06-17 04:55:30.631000 | https://github.com/tk-rusch/graphcon | 39 | Graph-coupled oscillator networks | https://scholar.google.com/scholar?cluster=9009434155878040135&hl=en&as_sdt=0,5 | 3 | 2,022 |
Hindering Adversarial Attacks with Implicit Neural Representations | 1 | icml | 0 | 0 | 2023-06-17 04:55:30.837000 | https://github.com/deepmind/linac | 8 | Hindering Adversarial Attacks with Implicit Neural Representations | https://scholar.google.com/scholar?cluster=14287948960663739347&hl=en&as_sdt=0,33 | 2 | 2,022 |
Exploiting Independent Instruments: Identification and Distribution Generalization | 5 | icml | 0 | 0 | 2023-06-17 04:55:31.043000 | https://github.com/sorawitj/hsic-x | 5 | Exploiting independent instruments: Identification and distribution generalization | https://scholar.google.com/scholar?cluster=7573181679595557794&hl=en&as_sdt=0,31 | 1 | 2,022 |
LSB: Local Self-Balancing MCMC in Discrete Spaces | 5 | icml | 0 | 0 | 2023-06-17 04:55:31.250000 | https://github.com/emsansone/lsb | 2 | Lsb: Local self-balancing mcmc in discrete spaces | https://scholar.google.com/scholar?cluster=4624892797012274460&hl=en&as_sdt=0,11 | 2 | 2,022 |
PoF: Post-Training of Feature Extractor for Improving Generalization | 1 | icml | 0 | 0 | 2023-06-17 04:55:31.463000 | https://github.com/densoitlab/pof-v1 | 3 | PoF: Post-Training of Feature Extractor for Improving Generalization | https://scholar.google.com/scholar?cluster=1799078834754218861&hl=en&as_sdt=0,31 | 2 | 2,022 |
An Asymptotic Test for Conditional Independence using Analytic Kernel Embeddings | 2 | icml | 1 | 0 | 2023-06-17 04:55:31.670000 | https://github.com/meyerscetbon/lp-ci-test | 0 | An Asymptotic Test for Conditional Independence using Analytic Kernel Embeddings | https://scholar.google.com/scholar?cluster=14026015450757796884&hl=en&as_sdt=0,33 | 3 | 2,022 |
Linear-Time Gromov Wasserstein Distances using Low Rank Couplings and Costs | 19 | icml | 0 | 0 | 2023-06-17 04:55:31.877000 | https://github.com/meyerscetbon/lineargromov | 1 | Linear-time gromov wasserstein distances using low rank couplings and costs | https://scholar.google.com/scholar?cluster=883418138428344777&hl=en&as_sdt=0,14 | 2 | 2,022 |
Modeling Irregular Time Series with Continuous Recurrent Units | 15 | icml | 9 | 6 | 2023-06-17 04:55:32.085000 | https://github.com/boschresearch/continuous-recurrent-units | 32 | Modeling irregular time series with continuous recurrent units | https://scholar.google.com/scholar?cluster=7564792311041526490&hl=en&as_sdt=0,19 | 7 | 2,022 |
Data-SUITE: Data-centric identification of in-distribution incongruous examples | 3 | icml | 4 | 0 | 2023-06-17 04:55:32.291000 | https://github.com/seedatnabeel/data-suite | 7 | Data-SUITE: Data-centric identification of in-distribution incongruous examples | https://scholar.google.com/scholar?cluster=11485689307897239676&hl=en&as_sdt=0,33 | 2 | 2,022 |
Neural Tangent Kernel Beyond the Infinite-Width Limit: Effects of Depth and Initialization | 5 | icml | 0 | 0 | 2023-06-17 04:55:32.497000 | https://github.com/mselezniova/ntk_beyond_limit | 0 | Neural Tangent Kernel Beyond the Infinite-Width Limit: Effects of Depth and Initialization | https://scholar.google.com/scholar?cluster=16495366436833298314&hl=en&as_sdt=0,3 | 1 | 2,022 |
Reinforcement Learning with Action-Free Pre-Training from Videos | 34 | icml | 5 | 0 | 2023-06-17 04:55:32.703000 | https://github.com/younggyoseo/apv | 46 | Reinforcement learning with action-free pre-training from videos | https://scholar.google.com/scholar?cluster=6676654951334590185&hl=en&as_sdt=0,5 | 4 | 2,022 |
Selective Regression under Fairness Criteria | 3 | icml | 0 | 0 | 2023-06-17 04:55:32.909000 | https://github.com/abhin02/fair-selective-regression | 4 | Selective regression under fairness criteria | https://scholar.google.com/scholar?cluster=11829060385063117064&hl=en&as_sdt=0,33 | 1 | 2,022 |
A State-Distribution Matching Approach to Non-Episodic Reinforcement Learning | 8 | icml | 1 | 0 | 2023-06-17 04:55:33.115000 | https://github.com/architsharma97/medal | 4 | A state-distribution matching approach to non-episodic reinforcement learning | https://scholar.google.com/scholar?cluster=14448955307324292158&hl=en&as_sdt=0,31 | 2 | 2,022 |
Content Addressable Memory Without Catastrophic Forgetting by Heteroassociation with a Fixed Scaffold | 4 | icml | 0 | 0 | 2023-06-17 04:55:33.322000 | https://github.com/fietelab/mesh | 1 | Content addressable memory without catastrophic forgetting by heteroassociation with a fixed scaffold | https://scholar.google.com/scholar?cluster=16874084475877050820&hl=en&as_sdt=0,5 | 4 | 2,022 |
DNS: Determinantal Point Process Based Neural Network Sampler for Ensemble Reinforcement Learning | 2 | icml | 0 | 0 | 2023-06-17 04:55:33.528000 | https://github.com/IntelLabs/DNS | 1 | DNS: Determinantal point process based neural network sampler for ensemble reinforcement learning | https://scholar.google.com/scholar?cluster=16987143666282140914&hl=en&as_sdt=0,34 | 2 | 2,022 |
PDO-s3DCNNs: Partial Differential Operator Based Steerable 3D CNNs | 3 | icml | 0 | 1 | 2023-06-17 04:55:33.734000 | https://github.com/shenzy08/PDO-s3DCNN | 3 | Pdo-s3dcnns: Partial differential operator based steerable 3d cnns | https://scholar.google.com/scholar?cluster=7127988507569489900&hl=en&as_sdt=0,24 | 1 | 2,022 |
Staged Training for Transformer Language Models | 3 | icml | 1 | 1 | 2023-06-17 04:55:33.941000 | https://github.com/allenai/staged-training | 19 | Staged training for transformer language models | https://scholar.google.com/scholar?cluster=4204701598187830659&hl=en&as_sdt=0,5 | 5 | 2,022 |
Adversarial Masking for Self-Supervised Learning | 32 | icml | 5 | 2 | 2023-06-17 04:55:34.147000 | https://github.com/yugeten/adios | 50 | Adversarial masking for self-supervised learning | https://scholar.google.com/scholar?cluster=3881185449721325576&hl=en&as_sdt=0,5 | 3 | 2,022 |
Visual Attention Emerges from Recurrent Sparse Reconstruction | 4 | icml | 2 | 0 | 2023-06-17 04:55:34.353000 | https://github.com/bfshi/vars | 24 | Visual attention emerges from recurrent sparse reconstruction | https://scholar.google.com/scholar?cluster=626547526031635836&hl=en&as_sdt=0,44 | 1 | 2,022 |
Robust Group Synchronization via Quadratic Programming | 1 | icml | 0 | 1 | 2023-06-17 04:55:34.559000 | https://github.com/colewyeth/desc | 6 | Robust Group Synchronization via Quadratic Programming | https://scholar.google.com/scholar?cluster=14329242327668843280&hl=en&as_sdt=0,39 | 3 | 2,022 |
Log-Euclidean Signatures for Intrinsic Distances Between Unaligned Datasets | 3 | icml | 1 | 0 | 2023-06-17 04:55:34.764000 | https://github.com/shnitzer/les-distance | 4 | Log-euclidean signatures for intrinsic distances between unaligned datasets | https://scholar.google.com/scholar?cluster=528448898197574004&hl=en&as_sdt=0,24 | 1 | 2,022 |
Demystifying the Adversarial Robustness of Random Transformation Defenses | 7 | icml | 0 | 0 | 2023-06-17 04:55:34.970000 | https://github.com/wagner-group/demystify-random-transform | 5 | Demystifying the adversarial robustness of random transformation defenses | https://scholar.google.com/scholar?cluster=6394427111079703523&hl=en&as_sdt=0,23 | 1 | 2,022 |
Communicating via Markov Decision Processes | 4 | icml | 0 | 3 | 2023-06-17 04:55:35.176000 | https://github.com/schroederdewitt/meme | 1 | Communicating via Markov Decision Processes | https://scholar.google.com/scholar?cluster=1909863582927997201&hl=en&as_sdt=0,5 | 3 | 2,022 |
The Multivariate Community Hawkes Model for Dependent Relational Events in Continuous-time Networks | 3 | icml | 1 | 0 | 2023-06-17 04:55:35.381000 | https://github.com/ideaslabut/multivariate-community-hawkes | 1 | The multivariate community hawkes model for dependent relational events in continuous-time networks | https://scholar.google.com/scholar?cluster=16117758994538292993&hl=en&as_sdt=0,33 | 3 | 2,022 |
A General Recipe for Likelihood-free Bayesian Optimization | 8 | icml | 2 | 0 | 2023-06-17 04:55:35.587000 | https://github.com/lfbo-ml/lfbo | 39 | A general recipe for likelihood-free Bayesian optimization | https://scholar.google.com/scholar?cluster=2199690906597156790&hl=en&as_sdt=0,37 | 3 | 2,022 |
Saute RL: Almost Surely Safe Reinforcement Learning Using State Augmentation | 15 | icml | 271 | 7 | 2023-06-17 04:55:35.793000 | https://github.com/huawei-noah/hebo | 1,285 | Sauté rl: Almost surely safe reinforcement learning using state augmentation | https://scholar.google.com/scholar?cluster=12545517423097788852&hl=en&as_sdt=0,22 | 130 | 2,022 |
Accelerating Bayesian Optimization for Biological Sequence Design with Denoising Autoencoders | 24 | icml | 15 | 1 | 2023-06-17 04:55:35.999000 | https://github.com/samuelstanton/lambo | 50 | Accelerating bayesian optimization for biological sequence design with denoising autoencoders | https://scholar.google.com/scholar?cluster=2506639909996415595&hl=en&as_sdt=0,33 | 2 | 2,022 |
3D Infomax improves GNNs for Molecular Property Prediction | 66 | icml | 29 | 4 | 2023-06-17 04:55:36.206000 | https://github.com/hannesstark/3dinfomax | 116 | 3d infomax improves gnns for molecular property prediction | https://scholar.google.com/scholar?cluster=18195860750409632321&hl=en&as_sdt=0,5 | 3 | 2,022 |
EquiBind: Geometric Deep Learning for Drug Binding Structure Prediction | 83 | icml | 99 | 5 | 2023-06-17 04:55:36.413000 | https://github.com/HannesStark/EquiBind | 397 | Equibind: Geometric deep learning for drug binding structure prediction | https://scholar.google.com/scholar?cluster=2579310543705352041&hl=en&as_sdt=0,5 | 9 | 2,022 |
Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks | 6 | icml | 4 | 1 | 2023-06-17 04:55:36.620000 | https://github.com/LukasStruppek/Plug-and-Play-Attacks | 16 | Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks | https://scholar.google.com/scholar?cluster=10382805845190184141&hl=en&as_sdt=0,20 | 2 | 2,022 |
MAE-DET: Revisiting Maximum Entropy Principle in Zero-Shot NAS for Efficient Object Detection | 10 | icml | 32 | 9 | 2023-06-17 04:55:36.826000 | https://github.com/alibaba/lightweight-neural-architecture-search | 266 | Mae-det: Revisiting maximum entropy principle in zero-shot nas for efficient object detection | https://scholar.google.com/scholar?cluster=9429584722885379910&hl=en&as_sdt=0,33 | 10 | 2,022 |
Out-of-Distribution Detection with Deep Nearest Neighbors | 79 | icml | 14 | 1 | 2023-06-17 04:55:37.032000 | https://github.com/deeplearning-wisc/knn-ood | 118 | Out-of-distribution detection with deep nearest neighbors | https://scholar.google.com/scholar?cluster=8587930909818673494&hl=en&as_sdt=0,33 | 2 | 2,022 |
Black-Box Tuning for Language-Model-as-a-Service | 52 | icml | 28 | 4 | 2023-06-17 04:55:37.248000 | https://github.com/txsun1997/black-box-tuning | 223 | Black-box tuning for language-model-as-a-service | https://scholar.google.com/scholar?cluster=6566630989334663783&hl=en&as_sdt=0,22 | 7 | 2,022 |
Causal Imitation Learning under Temporally Correlated Noise | 13 | icml | 0 | 0 | 2023-06-17 04:55:37.461000 | https://github.com/gkswamy98/causal_il | 6 | Causal imitation learning under temporally correlated noise | https://scholar.google.com/scholar?cluster=3778588231646817630&hl=en&as_sdt=0,5 | 2 | 2,022 |
SQ-VAE: Variational Bayes on Discrete Representation with Self-annealed Stochastic Quantization | 10 | icml | 14 | 1 | 2023-06-17 04:55:37.667000 | https://github.com/sony/sqvae | 132 | SQ-VAE: Variational Bayes on Discrete Representation with Self-annealed Stochastic Quantization | https://scholar.google.com/scholar?cluster=13353459274510421570&hl=en&as_sdt=0,10 | 6 | 2,022 |
A Tree-based Model Averaging Approach for Personalized Treatment Effect Estimation from Heterogeneous Data Sources | 18 | icml | 1 | 0 | 2023-06-17 04:55:37.872000 | https://github.com/ellenxtan/ifedtree | 8 | A tree-based model averaging approach for personalized treatment effect estimation from heterogeneous data sources | https://scholar.google.com/scholar?cluster=602189476639254582&hl=en&as_sdt=0,5 | 3 | 2,022 |
Rethinking Graph Neural Networks for Anomaly Detection | 24 | icml | 20 | 0 | 2023-06-17 04:55:38.077000 | https://github.com/squareroot3/rethinking-anomaly-detection | 118 | Rethinking graph neural networks for anomaly detection | https://scholar.google.com/scholar?cluster=15800828162221381866&hl=en&as_sdt=0,33 | 1 | 2,022 |
Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning | 15 | icml | 9 | 5 | 2023-06-17 04:55:38.284000 | https://github.com/wizard1203/vhl | 29 | Virtual homogeneity learning: Defending against data heterogeneity in federated learning | https://scholar.google.com/scholar?cluster=5551753342557173221&hl=en&as_sdt=0,34 | 2 | 2,022 |
FedNest: Federated Bilevel, Minimax, and Compositional Optimization | 23 | icml | 1 | 0 | 2023-06-17 04:55:38.491000 | https://github.com/ucr-optml/FedNest | 8 | FedNest: Federated bilevel, minimax, and compositional optimization | https://scholar.google.com/scholar?cluster=7138561365880400777&hl=en&as_sdt=0,24 | 2 | 2,022 |
LIDL: Local Intrinsic Dimension Estimation Using Approximate Likelihood | 8 | icml | 1 | 1 | 2023-06-17 04:55:38.697000 | https://github.com/opium-sh/lidl | 7 | Lidl: Local intrinsic dimension estimation using approximate likelihood | https://scholar.google.com/scholar?cluster=9636618006452252616&hl=en&as_sdt=0,11 | 3 | 2,022 |
Quantifying and Learning Linear Symmetry-Based Disentanglement | 7 | icml | 0 | 0 | 2023-06-17 04:55:38.902000 | https://github.com/luis-armando-perez-rey/lsbd-vae | 0 | Quantifying and learning linear symmetry-based disentanglement | https://scholar.google.com/scholar?cluster=11951723712936247797&hl=en&as_sdt=0,33 | 2 | 2,022 |
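The rows above are pipe-separated with a trailing `|`, and the viewer renders integer columns with thousands separators (e.g. the year appears as `2,022`). A minimal sketch, in plain Python with no external libraries, of how one might parse such rows into records for filtering or sorting; the column names follow the table header, and the two sample rows are copied verbatim from the table.

```python
# Column names as given in the table header.
COLUMNS = [
    "title", "citations_google_scholar", "conference", "forks", "issues",
    "lastModified", "repo_url", "stars", "title_google_scholar",
    "url_google_scholar", "watchers", "year",
]

# Columns the viewer displays as formatted integers (e.g. "3,226").
INT_COLUMNS = {"citations_google_scholar", "forks", "issues", "stars",
               "watchers", "year"}

def parse_row(line: str) -> dict:
    """Split one table row on '|' and coerce the integer columns."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    row = dict(zip(COLUMNS, cells))
    for col in INT_COLUMNS:
        # Strip the viewer's thousands separators: "2,022" -> 2022.
        row[col] = int(row[col].replace(",", ""))
    return row

# Two sample rows copied from the table.
rows = [
    "Multi-Task Learning as a Bargaining Game | 20 | icml | 16 | 0 | 2023-06-17 04:55:20.002000 | https://github.com/avivnavon/nash-mtl | 116 | Multi-task learning as a bargaining game | https://scholar.google.com/scholar?cluster=3841743488607196482&hl=en&as_sdt=0,5 | 4 | 2,022 |",
    "The Primacy Bias in Deep Reinforcement Learning | 23 | icml | 6 | 0 | 2023-06-17 04:55:21.859000 | https://github.com/evgenii-nikishin/rl_with_resets | 82 | The primacy bias in deep reinforcement learning | https://scholar.google.com/scholar?cluster=11620338198970862085&hl=en&as_sdt=0,48 | 3 | 2,022 |",
]
parsed = [parse_row(r) for r in rows]

# Example query: sort by GitHub stars, descending.
by_stars = sorted(parsed, key=lambda r: r["stars"], reverse=True)
print(by_stars[0]["title"])  # -> Multi-Task Learning as a Bargaining Game
```

Splitting on `|` is safe here because no title or URL in the table contains a pipe; a real loader over arbitrary data would need proper escaping.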