Column schema (from the dataset viewer; ranges are the min and max observed values):

- `title`: string (length 8–155)
- `citations_google_scholar`: int64 (0–28.9k)
- `conference`: string (5 classes)
- `forks`: int64 (0–46.3k)
- `issues`: int64 (0–12.2k)
- `lastModified`: string (length 19–26)
- `repo_url`: string (length 26–130)
- `stars`: int64 (0–75.9k)
- `title_google_scholar`: string (length 8–155)
- `url_google_scholar`: string (length 75–206)
- `watchers`: int64 (0–2.77k)
- `year`: int64 (all 2.02k, i.e. 2022)

title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year
---|---|---|---|---|---|---|---|---|---|---|---|
Stochastic Deep Networks with Linear Competing Units for Model-Agnostic Meta-Learning | 3 | icml | 1 | 0 | 2023-06-17 04:54:57.688000 | https://github.com/kkalais/stochlwta-ml | 2 | Stochastic Deep Networks with Linear Competing Units for Model-Agnostic Meta-Learning | https://scholar.google.com/scholar?cluster=12812982432289049616&hl=en&as_sdt=0,22 | 1 | 2,022 |
Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning | 8 | icml | 1 | 0 | 2023-06-17 04:54:57.893000 | https://github.com/causalml/doubly-robust-dropel | 4 | Doubly robust distributionally robust off-policy evaluation and learning | https://scholar.google.com/scholar?cluster=3538177620069646339&hl=en&as_sdt=0,5 | 0 | 2,022 |
Comprehensive Analysis of Negative Sampling in Knowledge Graph Representation Learning | 3 | icml | 0 | 1 | 2023-06-17 04:54:58.099000 | https://github.com/kamigaito/icml2022 | 9 | Comprehensive analysis of negative sampling in knowledge graph representation learning | https://scholar.google.com/scholar?cluster=4661195844634999621&hl=en&as_sdt=0,5 | 2 | 2,022 |
Composing Partial Differential Equations with Physics-Aware Neural Networks | 6 | icml | 10 | 0 | 2023-06-17 04:54:58.304000 | https://github.com/cognitivemodeling/finn | 25 | Composing partial differential equations with physics-aware neural networks | https://scholar.google.com/scholar?cluster=5219761110162787549&hl=en&as_sdt=0,44 | 5 | 2,022 |
FOCUS: Familiar Objects in Common and Uncommon Settings | 5 | icml | 0 | 0 | 2023-06-17 04:54:58.509000 | https://github.com/priyathamkat/focus | 4 | Focus: Familiar objects in common and uncommon settings | https://scholar.google.com/scholar?cluster=2485805129814216346&hl=en&as_sdt=0,5 | 2 | 2,022 |
Training OOD Detectors in their Natural Habitats | 18 | icml | 1 | 0 | 2023-06-17 04:54:58.715000 | https://github.com/jkatzsam/woods_ood | 13 | Training ood detectors in their natural habitats | https://scholar.google.com/scholar?cluster=8582043463264170613&hl=en&as_sdt=0,5 | 1 | 2,022 |
Secure Quantized Training for Deep Learning | 18 | icml | 9 | 3 | 2023-06-17 04:54:58.921000 | https://github.com/csiro-mlai/deep-mpc | 26 | Secure quantized training for deep learning | https://scholar.google.com/scholar?cluster=15154157227965198183&hl=en&as_sdt=0,5 | 3 | 2,022 |
A Convergent and Dimension-Independent Min-Max Optimization Algorithm | 3 | icml | 0 | 0 | 2023-06-17 04:54:59.126000 | https://github.com/vijaykeswani/min-max-optimization-algorithm | 1 | A convergent and dimension-independent first-order algorithm for min-max optimization | https://scholar.google.com/scholar?cluster=1442030372277689222&hl=en&as_sdt=0,5 | 2 | 2,022 |
Multi-Level Branched Regularization for Federated Learning | 3 | icml | 4 | 1 | 2023-06-17 04:54:59.332000 | https://github.com/jinkyu032/FedMLB | 13 | Multi-level branched regularization for federated learning | https://scholar.google.com/scholar?cluster=2425993830334019201&hl=en&as_sdt=0,5 | 1 | 2,022 |
Learning fair representation with a parametric integral probability metric | 5 | icml | 1 | 0 | 2023-06-17 04:54:59.545000 | https://github.com/kwkimonline/sipm-lfr | 3 | Learning fair representation with a parametric integral probability metric | https://scholar.google.com/scholar?cluster=7724112263757302618&hl=en&as_sdt=0,47 | 1 | 2,022 |
Dataset Condensation via Efficient Synthetic-Data Parameterization | 28 | icml | 12 | 1 | 2023-06-17 04:54:59.750000 | https://github.com/snu-mllab/efficient-dataset-condensation | 65 | Dataset condensation via efficient synthetic-data parameterization | https://scholar.google.com/scholar?cluster=13062983297577274052&hl=en&as_sdt=0,5 | 2 | 2,022 |
ViT-NeT: Interpretable Vision Transformers with Neural Tree Decoder | 14 | icml | 4 | 4 | 2023-06-17 04:54:59.956000 | https://github.com/jumpsnack/ViT-NeT | 21 | Vit-net: Interpretable vision transformers with neural tree decoder | https://scholar.google.com/scholar?cluster=7284110818114269396&hl=en&as_sdt=0,33 | 2 | 2,022 |
Sanity Simulations for Saliency Methods | 10 | icml | 0 | 0 | 2023-06-17 04:55:00.162000 | https://github.com/wnstlr/SMERF | 3 | Sanity simulations for saliency methods | https://scholar.google.com/scholar?cluster=7944058318921349973&hl=en&as_sdt=0,10 | 2 | 2,022 |
Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation | 26 | icml | 6 | 0 | 2023-06-17 04:55:00.367000 | https://github.com/Kim-Dongjun/Soft-Truncation | 43 | Soft truncation: A universal training technique of score-based diffusion model for high precision score estimation | https://scholar.google.com/scholar?cluster=547732243097530529&hl=en&as_sdt=0,5 | 4 | 2,022 |
Rotting Infinitely Many-Armed Bandits | 0 | icml | 0 | 0 | 2023-06-17 04:55:00.573000 | https://github.com/junghunkim7786/rotting_infinite_armed_bandits | 0 | Rotting infinitely many-armed bandits | https://scholar.google.com/scholar?cluster=7431943945679360181&hl=en&as_sdt=0,23 | 1 | 2,022 |
Generalizing to New Physical Systems via Context-Informed Dynamics Model | 10 | icml | 1 | 0 | 2023-06-17 04:55:00.778000 | https://github.com/yuan-yin/coda | 12 | Generalizing to new physical systems via context-informed dynamics model | https://scholar.google.com/scholar?cluster=9987364402754968813&hl=en&as_sdt=0,31 | 2 | 2,022 |
Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups | 9 | icml | 1 | 0 | 2023-06-17 04:55:00.983000 | https://github.com/david-knigge/separable-group-convolutional-networks | 9 | Exploiting redundancy: Separable group convolutional networks on lie groups | https://scholar.google.com/scholar?cluster=15152080644760721791&hl=en&as_sdt=0,5 | 2 | 2,022 |
Controlling Conditional Language Models without Catastrophic Forgetting | 8 | icml | 21 | 0 | 2023-06-17 04:55:01.189000 | https://github.com/naver/gdc | 108 | Controlling Conditional Language Models without Catastrophic Forgetting | https://scholar.google.com/scholar?cluster=13215553222930646661&hl=en&as_sdt=0,11 | 10 | 2,022 |
Reconstructing Nonlinear Dynamical Systems from Multi-Modal Time Series | 10 | icml | 4 | 0 | 2023-06-17 04:55:01.394000 | https://github.com/durstewitzlab/mmplrnn | 1 | Reconstructing nonlinear dynamical systems from multi-modal time series | https://scholar.google.com/scholar?cluster=17080536605245199937&hl=en&as_sdt=0,14 | 1 | 2,022 |
Functional Generalized Empirical Likelihood Estimation for Conditional Moment Restrictions | 4 | icml | 1 | 0 | 2023-06-17 04:55:01.600000 | https://github.com/heinerkremer/functional-gel | 1 | Functional Generalized Empirical Likelihood Estimation for Conditional Moment Restrictions | https://scholar.google.com/scholar?cluster=4926746325545340155&hl=en&as_sdt=0,34 | 2 | 2,022 |
Balancing Discriminability and Transferability for Source-Free Domain Adaptation | 26 | icml | 0 | 0 | 2023-06-17 04:55:01.805000 | https://github.com/val-iisc/MixupDA | 6 | Balancing discriminability and transferability for source-free domain adaptation | https://scholar.google.com/scholar?cluster=9320809919166954591&hl=en&as_sdt=0,5 | 11 | 2,022 |
Large Batch Experience Replay | 8 | icml | 1 | 1 | 2023-06-17 04:55:02.011000 | https://github.com/sureli/laber | 6 | Large batch experience replay | https://scholar.google.com/scholar?cluster=7195743594836265223&hl=en&as_sdt=0,36 | 1 | 2,022 |
FedScale: Benchmarking Model and System Performance of Federated Learning at Scale | 64 | icml | 101 | 39 | 2023-06-17 04:55:02.233000 | https://github.com/SymbioticLab/FedScale | 302 | Fedscale: Benchmarking model and system performance of federated learning at scale | https://scholar.google.com/scholar?cluster=9366536104914467915&hl=en&as_sdt=0,5 | 10 | 2,022 |
Functional Output Regression with Infimal Convolution: Exploring the Huber and $ε$-insensitive Losses | 4 | icml | 0 | 0 | 2023-06-17 04:55:02.448000 | https://github.com/allambert/foreg | 4 | Functional Output Regression with Infimal Convolution: Exploring the Huber and -insensitive Losses | https://scholar.google.com/scholar?cluster=13118582575057878063&hl=en&as_sdt=0,31 | 2 | 2,022 |
Marginal Tail-Adaptive Normalizing Flows | 1 | icml | 2 | 0 | 2023-06-17 04:55:02.655000 | https://github.com/mikelasz/marginaltailadaptiveflow | 0 | Marginal tail-adaptive normalizing flows | https://scholar.google.com/scholar?cluster=3241792279775112520&hl=en&as_sdt=0,5 | 1 | 2,022 |
Implicit Bias of Linear Equivariant Networks | 11 | icml | 0 | 0 | 2023-06-17 04:55:02.862000 | https://github.com/kristian-georgiev/implicit-bias-of-linear-equivariant-networks | 0 | Implicit bias of linear equivariant networks | https://scholar.google.com/scholar?cluster=5414336386133292832&hl=en&as_sdt=0,7 | 1 | 2,022 |
Differentially Private Maximal Information Coefficients | 0 | icml | 0 | 0 | 2023-06-17 04:55:03.069000 | https://github.com/jlazarsfeld/dp-mic | 4 | Differentially Private Maximal Information Coefficients | https://scholar.google.com/scholar?cluster=14074773669133605205&hl=en&as_sdt=0,32 | 2 | 2,022 |
Neural Tangent Kernel Analysis of Deep Narrow Neural Networks | 2 | icml | 0 | 0 | 2023-06-17 04:55:03.275000 | https://github.com/lthilnklover/deep-narrow-ntk | 1 | Neural tangent kernel analysis of deep narrow neural networks | https://scholar.google.com/scholar?cluster=11344426025520591295&hl=en&as_sdt=0,11 | 2 | 2,022 |
Dataset Condensation with Contrastive Signals | 18 | icml | 0 | 1 | 2023-06-17 04:55:03.481000 | https://github.com/saehyung-lee/dcc | 12 | Dataset condensation with contrastive signals | https://scholar.google.com/scholar?cluster=7694046388594127798&hl=en&as_sdt=0,11 | 1 | 2,022 |
Confidence Score for Source-Free Unsupervised Domain Adaptation | 16 | icml | 1 | 0 | 2023-06-17 04:55:03.686000 | https://github.com/jhyun17/cowa-jmds | 15 | Confidence score for source-free unsupervised domain adaptation | https://scholar.google.com/scholar?cluster=10361966623265648313&hl=en&as_sdt=0,5 | 1 | 2,022 |
Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization | 8 | icml | 0 | 0 | 2023-06-17 04:55:03.892000 | https://github.com/snu-mllab/discreteblockbayesattack | 17 | Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization | https://scholar.google.com/scholar?cluster=10043868339521505770&hl=en&as_sdt=0,10 | 1 | 2,022 |
Least Squares Estimation using Sketched Data with Heteroskedastic Errors | 2 | icml | 0 | 0 | 2023-06-17 04:55:04.098000 | https://github.com/sokbae/replication-leeng-2022-icml | 0 | Least Squares Estimation Using Sketched Data with Heteroskedastic Errors | https://scholar.google.com/scholar?cluster=2973545111138164523&hl=en&as_sdt=0,47 | 1 | 2,022 |
Generalized Strategic Classification and the Case of Aligned Incentives | 6 | icml | 0 | 0 | 2023-06-17 04:55:04.304000 | https://github.com/SagiLevanon1/GSC | 1 | Generalized strategic classification and the case of aligned incentives | https://scholar.google.com/scholar?cluster=5634368728411242394&hl=en&as_sdt=0,5 | 1 | 2,022 |
Neural Inverse Transform Sampler | 1 | icml | 0 | 0 | 2023-06-17 04:55:04.510000 | https://github.com/lihenryhfl/nits | 1 | Neural Inverse Transform Sampler | https://scholar.google.com/scholar?cluster=3014954787029992873&hl=en&as_sdt=0,5 | 2 | 2,022 |
PLATINUM: Semi-Supervised Model Agnostic Meta-Learning using Submodular Mutual Information | 1 | icml | 1 | 3 | 2023-06-17 04:55:04.715000 | https://github.com/hugo101/platinum | 1 | Platinum: Semi-supervised model agnostic meta-learning using submodular mutual information | https://scholar.google.com/scholar?cluster=1070646536780297100&hl=en&as_sdt=0,5 | 2 | 2,022 |
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation | 483 | icml | 504 | 186 | 2023-06-17 04:55:04.922000 | https://github.com/salesforce/lavis | 5,513 | Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation | https://scholar.google.com/scholar?cluster=7770442917120891581&hl=en&as_sdt=0,5 | 75 | 2,022 |
Achieving Fairness at No Utility Cost via Data Reweighing with Influence | 9 | icml | 2 | 0 | 2023-06-17 04:55:05.127000 | https://github.com/brandeis-machine-learning/influence-fairness | 5 | Achieving fairness at no utility cost via data reweighing with influence | https://scholar.google.com/scholar?cluster=1481946580804842338&hl=en&as_sdt=0,10 | 0 | 2,022 |
MetAug: Contrastive Learning via Meta Feature Augmentation | 10 | icml | 2 | 1 | 2023-06-17 04:55:05.333000 | https://github.com/lionellee9089/metaug | 15 | Metaug: Contrastive learning via meta feature augmentation | https://scholar.google.com/scholar?cluster=13342110327075124099&hl=en&as_sdt=0,33 | 1 | 2,022 |
PMIC: Improving Multi-Agent Reinforcement Learning with Progressive Mutual Information Collaboration | 6 | icml | 1 | 1 | 2023-06-17 04:55:05.539000 | https://github.com/yeshenpy/pmic | 7 | PMIC: Improving multi-agent reinforcement learning with progressive mutual information collaboration | https://scholar.google.com/scholar?cluster=2755470732694105502&hl=en&as_sdt=0,5 | 3 | 2,022 |
Let Invariant Rationale Discovery Inspire Graph Contrastive Learning | 30 | icml | 1 | 1 | 2023-06-17 04:55:05.746000 | https://github.com/lsh0520/rgcl | 23 | Let invariant rationale discovery inspire graph contrastive learning | https://scholar.google.com/scholar?cluster=13286040992676917455&hl=en&as_sdt=0,19 | 2 | 2,022 |
Private Adaptive Optimization with Side information | 14 | icml | 1 | 0 | 2023-06-17 04:55:05.953000 | https://github.com/litian96/adadps | 12 | Private adaptive optimization with side information | https://scholar.google.com/scholar?cluster=15603924695620252408&hl=en&as_sdt=0,18 | 1 | 2,022 |
Permutation Search of Tensor Network Structures via Local Sampling | 4 | icml | 1 | 0 | 2023-06-17 04:55:06.158000 | https://github.com/chaoliatriken/tnls | 2 | Permutation search of tensor network structures via local sampling | https://scholar.google.com/scholar?cluster=14266729648210963776&hl=en&as_sdt=0,5 | 1 | 2,022 |
Double Sampling Randomized Smoothing | 5 | icml | 2 | 0 | 2023-06-17 04:55:06.364000 | https://github.com/llylly/dsrs | 5 | Double sampling randomized smoothing | https://scholar.google.com/scholar?cluster=13905428147766407509&hl=en&as_sdt=0,5 | 1 | 2,022 |
HousE: Knowledge Graph Embedding with Householder Parameterization | 10 | icml | 2 | 0 | 2023-06-17 04:55:06.571000 | https://github.com/anrep/house | 15 | House: Knowledge graph embedding with householder parameterization | https://scholar.google.com/scholar?cluster=15337285257575958816&hl=en&as_sdt=0,34 | 1 | 2,022 |
Learning Multiscale Transformer Models for Sequence Generation | 4 | icml | 2 | 1 | 2023-06-17 04:55:06.778000 | https://github.com/libeineu/umst | 10 | Learning multiscale transformer models for sequence generation | https://scholar.google.com/scholar?cluster=10490177289793431927&hl=en&as_sdt=0,5 | 1 | 2,022 |
Finding Global Homophily in Graph Neural Networks When Meeting Heterophily | 38 | icml | 3 | 0 | 2023-06-17 04:55:06.984000 | https://github.com/recklessronan/glognn | 26 | Finding global homophily in graph neural networks when meeting heterophily | https://scholar.google.com/scholar?cluster=881393506933530763&hl=en&as_sdt=0,5 | 1 | 2,022 |
Exploring and Exploiting Hubness Priors for High-Quality GAN Latent Sampling | 0 | icml | 0 | 0 | 2023-06-17 04:55:07.191000 | https://github.com/byronliang8/hubnessgansampling | 8 | Exploring and exploiting hubness priors for high-quality GAN latent sampling | https://scholar.google.com/scholar?cluster=12825471375795704979&hl=en&as_sdt=0,5 | 1 | 2,022 |
Reducing Variance in Temporal-Difference Value Estimation via Ensemble of Deep Networks | 4 | icml | 1 | 0 | 2023-06-17 04:55:07.396000 | https://github.com/indylab/meanq | 8 | Reducing variance in temporal-difference value estimation via ensemble of deep networks | https://scholar.google.com/scholar?cluster=5733035201533168571&hl=en&as_sdt=0,5 | 0 | 2,022 |
Order Constraints in Optimal Transport | 1 | icml | 294 | 54 | 2023-06-17 04:55:07.603000 | https://github.com/Trusted-AI/AIX360 | 1,340 | Order Constraints in Optimal Transport | https://scholar.google.com/scholar?cluster=1063075229818760095&hl=en&as_sdt=0,5 | 51 | 2,022 |
Flow-Guided Sparse Transformer for Video Deblurring | 23 | icml | 12 | 1 | 2023-06-17 04:55:07.808000 | https://github.com/linjing7/VR-Baseline | 122 | Flow-guided sparse transformer for video deblurring | https://scholar.google.com/scholar?cluster=14219657862279161517&hl=en&as_sdt=0,5 | 13 | 2,022 |
Federated Learning with Positive and Unlabeled Data | 8 | icml | 1 | 2 | 2023-06-17 04:55:08.013000 | https://github.com/littlesunlxy/fedpu-torch | 7 | Federated Learning with Positive and Unlabeled Data | https://scholar.google.com/scholar?cluster=5808543531345013860&hl=en&as_sdt=0,29 | 1 | 2,022 |
Unsupervised Flow-Aligned Sequence-to-Sequence Learning for Video Restoration | 2 | icml | 12 | 1 | 2023-06-17 04:55:08.223000 | https://github.com/linjing7/VR-Baseline | 122 | Unsupervised Flow-Aligned Sequence-to-Sequence Learning for Video Restoration | https://scholar.google.com/scholar?cluster=11447631455312360639&hl=en&as_sdt=0,5 | 13 | 2,022 |
Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks | 0 | icml | 0 | 0 | 2023-06-17 04:55:08.433000 | https://github.com/linweiran/CGD | 1 | Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks | https://scholar.google.com/scholar?cluster=14082433359159261518&hl=en&as_sdt=0,5 | 1 | 2,022 |
Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments | 8 | icml | 0 | 0 | 2023-06-17 04:55:08.639000 | https://github.com/lazycal/ame | 1 | Measuring the effect of training data on deep learning predictions via randomized experiments | https://scholar.google.com/scholar?cluster=7808395865683583052&hl=en&as_sdt=0,43 | 1 | 2,022 |
Interactively Learning Preference Constraints in Linear Bandits | 1 | icml | 0 | 0 | 2023-06-17 04:55:08.847000 | https://github.com/lasgroup/adaptive-constraint-learning | 3 | Interactively Learning Preference Constraints in Linear Bandits | https://scholar.google.com/scholar?cluster=10442761554995680158&hl=en&as_sdt=0,2 | 2 | 2,022 |
CITRIS: Causal Identifiability from Temporal Intervened Sequences | 31 | icml | 5 | 1 | 2023-06-17 04:55:09.055000 | https://github.com/phlippe/citris | 38 | Citris: Causal identifiability from temporal intervened sequences | https://scholar.google.com/scholar?cluster=9740161650140858183&hl=en&as_sdt=0,36 | 6 | 2,022 |
StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models | 7 | icml | 0 | 1 | 2023-06-17 04:55:09.261000 | https://github.com/deepmind/streamingqa | 35 | Streamingqa: A benchmark for adaptation to new knowledge over time in question answering models | https://scholar.google.com/scholar?cluster=14847402247915330134&hl=en&as_sdt=0,5 | 3 | 2,022 |
Constrained Variational Policy Optimization for Safe Reinforcement Learning | 19 | icml | 6 | 2 | 2023-06-17 04:55:09.466000 | https://github.com/liuzuxin/cvpo-safe-rl | 42 | Constrained variational policy optimization for safe reinforcement learning | https://scholar.google.com/scholar?cluster=13833315390800713597&hl=en&as_sdt=0,48 | 3 | 2,022 |
Boosting Graph Structure Learning with Dummy Nodes | 4 | icml | 3 | 0 | 2023-06-17 04:55:09.672000 | https://github.com/hkust-knowcomp/dummynode4graphlearning | 14 | Boosting graph structure learning with dummy nodes | https://scholar.google.com/scholar?cluster=11720456442737654498&hl=en&as_sdt=0,5 | 2 | 2,022 |
Rethinking Attention-Model Explainability through Faithfulness Violation Test | 6 | icml | 2 | 0 | 2023-06-17 04:55:09.878000 | https://github.com/BierOne/Attention-Faithfulness | 15 | Rethinking attention-model explainability through faithfulness violation test | https://scholar.google.com/scholar?cluster=2225803020950336962&hl=en&as_sdt=0,10 | 1 | 2,022 |
Generating 3D Molecules for Target Protein Binding | 33 | icml | 22 | 0 | 2023-06-17 04:55:10.084000 | https://github.com/divelab/graphbp | 82 | Generating 3d molecules for target protein binding | https://scholar.google.com/scholar?cluster=5832718815392405433&hl=en&as_sdt=0,23 | 4 | 2,022 |
REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy Transfer | 8 | icml | 1 | 0 | 2023-06-17 04:55:10.290000 | https://github.com/xingyul/revolver | 21 | Revolver: Continuous evolutionary models for robot-to-robot policy transfer | https://scholar.google.com/scholar?cluster=4925772100401553485&hl=en&as_sdt=0,5 | 0 | 2,022 |
Local Augmentation for Graph Neural Networks | 27 | icml | 10 | 0 | 2023-06-17 04:55:10.496000 | https://github.com/songtaoliu0823/lagnn | 49 | Local augmentation for graph neural networks | https://scholar.google.com/scholar?cluster=1477899180662383839&hl=en&as_sdt=0,33 | 2 | 2,022 |
GACT: Activation Compressed Training for Generic Network Architectures | 6 | icml | 7 | 0 | 2023-06-17 04:55:10.702000 | https://github.com/LiuXiaoxuanPKU/GACT-ICML | 25 | GACT: Activation compressed training for generic network architectures | https://scholar.google.com/scholar?cluster=12961558979640169971&hl=en&as_sdt=0,11 | 1 | 2,022 |
Robust Training under Label Noise by Over-parameterization | 32 | icml | 6 | 2 | 2023-06-17 04:55:10.911000 | https://github.com/shengliu66/sop | 45 | Robust training under label noise by over-parameterization | https://scholar.google.com/scholar?cluster=7351288537652812990&hl=en&as_sdt=0,5 | 4 | 2,022 |
Bayesian Model Selection, the Marginal Likelihood, and Generalization | 22 | icml | 2 | 0 | 2023-06-17 04:55:11.124000 | https://github.com/sanaelotfi/bayesian_model_comparison | 29 | Bayesian model selection, the marginal likelihood, and generalization | https://scholar.google.com/scholar?cluster=9966221610854779885&hl=en&as_sdt=0,10 | 2 | 2,022 |
Additive Gaussian Processes Revisited | 5 | icml | 3 | 2 | 2023-06-17 04:55:11.347000 | https://github.com/amzn/orthogonal-additive-gaussian-processes | 27 | Additive Gaussian Processes Revisited | https://scholar.google.com/scholar?cluster=6171646250259596364&hl=en&as_sdt=0,7 | 1 | 2,022 |
ModLaNets: Learning Generalisable Dynamics via Modularity and Physical Inductive Bias | 3 | icml | 0 | 0 | 2023-06-17 04:55:11.555000 | https://github.com/YupuLu/ModLaNets | 3 | Modlanets: Learning generalisable dynamics via modularity and physical inductive bias | https://scholar.google.com/scholar?cluster=13273673478017721155&hl=en&as_sdt=0,5 | 1 | 2,022 |
Model-Free Opponent Shaping | 16 | icml | 3 | 0 | 2023-06-17 04:55:11.762000 | https://github.com/luchris429/model-free-opponent-shaping | 8 | Model-free opponent shaping | https://scholar.google.com/scholar?cluster=2936183608022340062&hl=en&as_sdt=0,6 | 1 | 2,022 |
Orchestra: Unsupervised Federated Learning via Globally Consistent Clustering | 13 | icml | 8 | 1 | 2023-06-17 04:55:11.968000 | https://github.com/akhilmathurs/orchestra | 34 | Orchestra: Unsupervised federated learning via globally consistent clustering | https://scholar.google.com/scholar?cluster=12370876234487104592&hl=en&as_sdt=0,5 | 3 | 2,022 |
A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions | 13 | icml | 1 | 0 | 2023-06-17 04:55:12.173000 | https://github.com/optimization-for-data-driven-science/xai | 0 | A rigorous study of integrated gradients method and extensions to internal neuron attributions | https://scholar.google.com/scholar?cluster=2734810007243082678&hl=en&as_sdt=0,14 | 3 | 2,022 |
Channel Importance Matters in Few-Shot Image Classification | 9 | icml | 6 | 0 | 2023-06-17 04:55:12.378000 | https://github.com/Frankluox/Channel_Importance_FSL | 41 | Channel importance matters in few-shot image classification | https://scholar.google.com/scholar?cluster=11800681644277658610&hl=en&as_sdt=0,5 | 3 | 2,022 |
Versatile Offline Imitation from Observations and Examples via Regularized State-Occupancy Matching | 5 | icml | 2 | 0 | 2023-06-17 04:55:12.584000 | https://github.com/jasonma2016/smodice | 20 | Versatile offline imitation from observations and examples via regularized state-occupancy matching | https://scholar.google.com/scholar?cluster=11179690746522153663&hl=en&as_sdt=0,5 | 2 | 2,022 |
Quantification and Analysis of Layer-wise and Pixel-wise Information Discarding | 0 | icml | 0 | 1 | 2023-06-17 04:55:12.790000 | https://github.com/haotiansustc/deepinfo | 3 | Quantification and Analysis of Layer-wise and Pixel-wise Information Discarding | https://scholar.google.com/scholar?cluster=17376169416462148944&hl=en&as_sdt=0,5 | 1 | 2,022 |
Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings | 9 | icml | 0 | 0 | 2023-06-17 04:55:12.995000 | https://github.com/zib-iol/fw-rde | 4 | Interpretable neural networks with frank-wolfe: Sparse relevance maps and relevance orderings | https://scholar.google.com/scholar?cluster=1124674536822867580&hl=en&as_sdt=0,21 | 2 | 2,022 |
A Tighter Analysis of Spectral Clustering, and Beyond | 4 | icml | 0 | 0 | 2023-06-17 04:55:13.201000 | https://github.com/pmacg/spectral-clustering-meta-graphs | 3 | A Tighter Analysis of Spectral Clustering, and Beyond | https://scholar.google.com/scholar?cluster=7116468291147711017&hl=en&as_sdt=0,10 | 1 | 2,022 |
Feature selection using e-values | 1 | icml | 0 | 0 | 2023-06-17 04:55:13.407000 | https://github.com/shubhobm/e-values | 2 | Feature Selection using e-values | https://scholar.google.com/scholar?cluster=14169974284290385503&hl=en&as_sdt=0,5 | 2 | 2,022 |
Nonparametric Involutive Markov Chain Monte Carlo | 0 | icml | 2 | 1 | 2023-06-17 04:55:13.612000 | https://github.com/fzaiser/nonparametric-hmc | 12 | Nonparametric Involutive Markov Chain Monte Carlo | https://scholar.google.com/scholar?cluster=17862750245568901583&hl=en&as_sdt=0,25 | 1 | 2,022 |
More Efficient Sampling for Tensor Decomposition With Worst-Case Guarantees | 9 | icml | 0 | 0 | 2023-06-17 04:55:13.818000 | https://github.com/osmanmalik/td-als-es | 3 | More efficient sampling for tensor decomposition with worst-case guarantees | https://scholar.google.com/scholar?cluster=18131307988891143062&hl=en&as_sdt=0,5 | 1 | 2,022 |
Unaligned Supervision for Automatic Music Transcription in The Wild | 4 | icml | 1 | 1 | 2023-06-17 04:55:14.024000 | https://github.com/benadar293/benadar293.github.io | 16 | Unaligned supervision for automatic music transcription in the wild | https://scholar.google.com/scholar?cluster=7612759621426730574&hl=en&as_sdt=0,43 | 1 | 2,022 |
Decision-Focused Learning: Through the Lens of Learning to Rank | 7 | icml | 1 | 0 | 2023-06-17 04:55:14.230000 | https://github.com/jayman91/ltr-predopt | 5 | Decision-Focused Learning: Through the Lens of Learning to Rank | https://scholar.google.com/scholar?cluster=68474757504279365&hl=en&as_sdt=0,5 | 1 | 2,022 |
Refined Convergence Rates for Maximum Likelihood Estimation under Finite Mixture Models | 5 | icml | 0 | 0 | 2023-06-17 04:55:14.440000 | https://github.com/tmanole/refined-mixture-rates | 1 | Refined convergence rates for maximum likelihood estimation under finite mixture models | https://scholar.google.com/scholar?cluster=15536015401615707970&hl=en&as_sdt=0,34 | 2 | 2,022 |
On the Effects of Artificial Data Modification | 0 | icml | 0 | 0 | 2023-06-17 04:55:14.646000 | https://github.com/antoniamarcu/data-modification | 1 | On the Effects of Artificial Data Modification | https://scholar.google.com/scholar?cluster=5171301994487774624&hl=en&as_sdt=0,33 | 2 | 2,022 |
Personalized Federated Learning through Local Memorization | 15 | icml | 11 | 1 | 2023-06-17 04:55:14.851000 | https://github.com/omarfoq/knn-per | 32 | Personalized federated learning through local memorization | https://scholar.google.com/scholar?cluster=1735959565667819081&hl=en&as_sdt=0,5 | 1 | 2,022 |
Closed-Form Diffeomorphic Transformations for Time Series Alignment | 0 | icml | 1 | 0 | 2023-06-17 04:55:15.058000 | https://github.com/imartinezl/difw | 12 | Closed-Form Diffeomorphic Transformations for Time Series Alignment | https://scholar.google.com/scholar?cluster=15344236423757479416&hl=en&as_sdt=0,5 | 2 | 2,022 |
SPECTRE: Spectral Conditioning Helps to Overcome the Expressivity Limits of One-shot Graph Generators | 17 | icml | 4 | 0 | 2023-06-17 04:55:15.264000 | https://github.com/karolismart/spectre | 18 | Spectre: Spectral conditioning helps to overcome the expressivity limits of one-shot graph generators | https://scholar.google.com/scholar?cluster=12175380990160510944&hl=en&as_sdt=0,14 | 2 | 2,022 |
Continual Repeated Annealed Flow Transport Monte Carlo | 8 | icml | 10 | 0 | 2023-06-17 04:55:15.470000 | https://github.com/deepmind/annealed_flow_transport | 35 | Continual repeated annealed flow transport Monte Carlo | https://scholar.google.com/scholar?cluster=15272534120760724190&hl=en&as_sdt=0,33 | 4 | 2,022 |
How to Steer Your Adversary: Targeted and Efficient Model Stealing Defenses with Gradient Redirection | 3 | icml | 3 | 1 | 2023-06-17 04:55:15.675000 | https://github.com/mmazeika/model-stealing-defenses | 2 | How to steer your adversary: Targeted and efficient model stealing defenses with gradient redirection | https://scholar.google.com/scholar?cluster=12763327756240287958&hl=en&as_sdt=0,5 | 1 | 2,022 |
Causal Transformer for Estimating Counterfactual Outcomes | 15 | icml | 10 | 3 | 2023-06-17 04:55:15.882000 | https://github.com/Valentyn1997/CausalTransformer | 48 | Causal transformer for estimating counterfactual outcomes | https://scholar.google.com/scholar?cluster=15562561940840223837&hl=en&as_sdt=0,5 | 2 | 2,022 |
Steerable 3D Spherical Neurons | 2 | icml | 0 | 0 | 2023-06-17 04:55:16.088000 | https://github.com/pavlo-melnyk/steerable-3d-neurons | 0 | Steerable 3D Spherical Neurons | https://scholar.google.com/scholar?cluster=12172638513685585373&hl=en&as_sdt=0,23 | 2 | 2,022 |
Transformers are Meta-Reinforcement Learners | 15 | icml | 4 | 3 | 2023-06-17 04:55:16.294000 | https://github.com/luckeciano/transformers-metarl | 32 | Transformers are meta-reinforcement learners | https://scholar.google.com/scholar?cluster=4334650228414799916&hl=en&as_sdt=0,33 | 4 | 2,022 |
Stochastic Rising Bandits | 4 | icml | 0 | 0 | 2023-06-17 04:55:16.500000 | https://github.com/albertometelli/stochastic-rising-bandits | 4 | Stochastic Rising Bandits | https://scholar.google.com/scholar?cluster=15697580060507911770&hl=en&as_sdt=0,5 | 1 | 2,022 |
Minimizing Control for Credit Assignment with Strong Feedback | 4 | icml | 3 | 0 | 2023-06-17 04:55:16.706000 | https://github.com/mariacer/strong_dfc | 8 | Minimizing control for credit assignment with strong feedback | https://scholar.google.com/scholar?cluster=4546119476247760219&hl=en&as_sdt=0,33 | 1 | 2,022 |
Distribution Regression with Sliced Wasserstein Kernels | 4 | icml | 0 | 0 | 2023-06-17 04:55:16.912000 | https://github.com/dimsum2k/drswk | 4 | Distribution Regression with Sliced Wasserstein Kernels | https://scholar.google.com/scholar?cluster=6056433376162861662&hl=en&as_sdt=0,33 | 1 | 2,022 |
Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism | 32 | icml | 15 | 0 | 2023-06-17 04:55:17.118000 | https://github.com/Graph-COM/GSAT | 112 | Interpretable and generalizable graph learning via stochastic attention mechanism | https://scholar.google.com/scholar?cluster=15869188404391034141&hl=en&as_sdt=0,5 | 2 | 2,022 |
Modeling Structure with Undirected Neural Networks | 0 | icml | 0 | 0 | 2023-06-17 04:55:17.323000 | https://github.com/deep-spin/unn | 5 | Modeling Structure with Undirected Neural Networks | https://scholar.google.com/scholar?cluster=2812799179011776020&hl=en&as_sdt=0,33 | 4 | 2,022 |
Universal Hopfield Networks: A General Framework for Single-Shot Associative Memory Models | 10 | icml | 2 | 0 | 2023-06-17 04:55:17.529000 | https://github.com/BerenMillidge/Theory_Associative_Memory | 12 | Universal hopfield networks: A general framework for single-shot associative memory models | https://scholar.google.com/scholar?cluster=11661827262437868518&hl=en&as_sdt=0,5 | 3 | 2,022 |
Prioritized Training on Points that are Learnable, Worth Learning, and not yet Learnt | 24 | icml | 18 | 2 | 2023-06-17 04:55:17.735000 | https://github.com/oatml/rho-loss | 158 | Prioritized training on points that are learnable, worth learning, and not yet learnt | https://scholar.google.com/scholar?cluster=5784378723216835078&hl=en&as_sdt=0,33 | 6 | 2,022 |
POEM: Out-of-Distribution Detection with Posterior Sampling | 20 | icml | 1 | 1 | 2023-06-17 04:55:17.940000 | https://github.com/deeplearning-wisc/poem | 22 | Poem: Out-of-distribution detection with posterior sampling | https://scholar.google.com/scholar?cluster=14373980882186283690&hl=en&as_sdt=0,33 | 4 | 2,022 |
Proximal and Federated Random Reshuffling | 21 | icml | 2 | 0 | 2023-06-17 04:55:18.146000 | https://github.com/konstmish/rr_prox_fed | 2 | Proximal and federated random reshuffling | https://scholar.google.com/scholar?cluster=4410848419822485671&hl=en&as_sdt=0,33 | 2 | 2,022 |
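Each row above is a single pipe-delimited record following the 12-column schema at the top. As a minimal sketch (not an official loader for this dataset), the snippet below splits one sample row, copied from the table, into named fields with plain Python, coercing the integer columns:

```python
# Minimal sketch: parse one pipe-delimited row of the table into a dict.
# Column names follow the schema above; the sample row is copied from the table.

COLUMNS = [
    "title", "citations_google_scholar", "conference", "forks", "issues",
    "lastModified", "repo_url", "stars", "title_google_scholar",
    "url_google_scholar", "watchers", "year",
]

INT_COLUMNS = {"citations_google_scholar", "forks", "issues",
               "stars", "watchers", "year"}

def parse_row(line: str) -> dict:
    """Split a pipe-delimited table row and coerce integer columns."""
    fields = [f.strip() for f in line.strip().strip("|").split("|")]
    if len(fields) != len(COLUMNS):
        raise ValueError(f"expected {len(COLUMNS)} fields, got {len(fields)}")
    record = dict(zip(COLUMNS, fields))
    for col in INT_COLUMNS:
        # Integers in the dump may contain thousands separators, e.g. "2,022".
        record[col] = int(record[col].replace(",", ""))
    return record

sample = ("Rotting Infinitely Many-Armed Bandits | 0 | icml | 0 | 0 | "
          "2023-06-17 04:55:00.573000 | "
          "https://github.com/junghunkim7786/rotting_infinite_armed_bandits | 0 | "
          "Rotting infinitely many-armed bandits | "
          "https://scholar.google.com/scholar?cluster=7431943945679360181&hl=en&as_sdt=0,23 | "
          "1 | 2,022 |")

row = parse_row(sample)
print(row["title"], row["stars"], row["year"])
```

Note that splitting on `|` is safe here only because none of the titles or URLs in this dump contain a pipe character; a real loader would use the dataset's native format instead.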