title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
---|---|---|---|---|---|---|---|---|---|---|---|
Manifold Identification for Ultimately Communication-Efficient Distributed Optimization | 4 | icml | 0 | 0 | 2023-06-17 03:57:19.438000 | https://github.com/leepei/madpqn | 0 | Manifold identification for ultimately communication-efficient distributed optimization | https://scholar.google.com/scholar?cluster=7891580359300327237&hl=en&as_sdt=0,47 | 4 | 2,020 |
PENNI: Pruned Kernel Sharing for Efficient CNN Inference | 12 | icml | 4 | 4 | 2023-06-17 03:57:19.640000 | https://github.com/timlee0212/PENNI | 7 | Penni: Pruned kernel sharing for efficient CNN inference | https://scholar.google.com/scholar?cluster=15394571534654834943&hl=en&as_sdt=0,33 | 3 | 2,020 |
Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning | 54 | icml | 5 | 1 | 2023-06-17 03:57:19.841000 | https://github.com/liqing-ustc/NGS | 50 | Closed loop neural-symbolic learning via integrating neural perception, grammar parsing, and symbolic reasoning | https://scholar.google.com/scholar?cluster=9257372000778020812&hl=en&as_sdt=0,47 | 3 | 2,020 |
Latent Space Factorisation and Manipulation via Matrix Subspace Projection | 30 | icml | 3 | 2 | 2023-06-17 03:57:20.043000 | https://github.com/lissomx/MSP | 10 | Latent space factorisation and manipulation via matrix subspace projection | https://scholar.google.com/scholar?cluster=9592355331559392684&hl=en&as_sdt=0,45 | 2 | 2,020 |
Learning from Irregularly-Sampled Time Series: A Missing Data Perspective | 40 | icml | 11 | 1 | 2023-06-17 03:57:20.246000 | https://github.com/steveli/partial-encoder-decoder | 34 | Learning from irregularly-sampled time series: A missing data perspective | https://scholar.google.com/scholar?cluster=9259999612636522766&hl=en&as_sdt=0,5 | 2 | 2,020 |
Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation | 618 | icml | 77 | 2 | 2023-06-17 03:57:20.448000 | https://github.com/tim-learn/SHOT | 340 | Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation | https://scholar.google.com/scholar?cluster=2414062070271265691&hl=en&as_sdt=0,31 | 7 | 2,020 |
Variable Skipping for Autoregressive Range Density Estimation | 7 | icml | 3 | 0 | 2023-06-17 03:57:20.651000 | https://github.com/var-skip/var-skip | 6 | Variable skipping for autoregressive range density estimation | https://scholar.google.com/scholar?cluster=16617388741966363068&hl=en&as_sdt=0,5 | 2 | 2,020 |
Handling the Positive-Definite Constraint in the Bayesian Learning Rule | 19 | icml | 1 | 1 | 2023-06-17 03:57:20.854000 | https://github.com/yorkerlin/iBayesLRule | 4 | Handling the positive-definite constraint in the Bayesian learning rule | https://scholar.google.com/scholar?cluster=14519338791070687660&hl=en&as_sdt=0,26 | 4 | 2,020 |
InfoGAN-CR and ModelCentrality: Self-supervised Model Training and Selection for Disentangling GANs | 62 | icml | 8 | 0 | 2023-06-17 03:57:21.073000 | https://github.com/fjxmlzn/InfoGAN-CR | 40 | Infogan-cr and modelcentrality: Self-supervised model training and selection for disentangling gans | https://scholar.google.com/scholar?cluster=4410576608706121212&hl=en&as_sdt=0,5 | 6 | 2,020 |
Generalized and Scalable Optimal Sparse Decision Trees | 93 | icml | 29 | 11 | 2023-06-17 03:57:21.275000 | https://github.com/Jimmy-Lin/GeneralizedOptimalSparseDecisionTrees | 48 | Generalized and scalable optimal sparse decision trees | https://scholar.google.com/scholar?cluster=15979140727083888111&hl=en&as_sdt=0,14 | 5 | 2,020 |
Time-aware Large Kernel Convolutions | 23 | icml | 6 | 0 | 2023-06-17 03:57:21.478000 | https://github.com/lioutasb/TaLKConvolutions | 28 | Time-aware large kernel convolutions | https://scholar.google.com/scholar?cluster=2978340010054806540&hl=en&as_sdt=0,5 | 4 | 2,020 |
Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable Embeddings with Generative Priors | 19 | icml | 2 | 0 | 2023-06-17 03:57:21.680000 | https://github.com/selwyn96/Quant_CS | 1 | Sample complexity bounds for 1-bit compressive sensing and binary stable embeddings with generative priors | https://scholar.google.com/scholar?cluster=14332918764703179344&hl=en&as_sdt=0,5 | 2 | 2,020 |
An Imitation Learning Approach for Cache Replacement | 53 | icml | 7,322 | 1,026 | 2023-06-17 03:57:21.882000 | https://github.com/google-research/google-research | 29,791 | An imitation learning approach for cache replacement | https://scholar.google.com/scholar?cluster=14524866221937250156&hl=en&as_sdt=0,5 | 727 | 2,020 |
Hallucinative Topological Memory for Zero-Shot Visual Planning | 33 | icml | 6 | 0 | 2023-06-17 03:57:22.084000 | https://github.com/thanard/hallucinative-topological-memory | 12 | Hallucinative topological memory for zero-shot visual planning | https://scholar.google.com/scholar?cluster=2366589002127869836&hl=en&as_sdt=0,5 | 2 | 2,020 |
Learning Deep Kernels for Non-Parametric Two-Sample Tests | 125 | icml | 9 | 0 | 2023-06-17 03:57:22.286000 | https://github.com/fengliu90/DK-for-TST | 38 | Learning deep kernels for non-parametric two-sample tests | https://scholar.google.com/scholar?cluster=11419051350787047758&hl=en&as_sdt=0,10 | 5 | 2,020 |
Finding trainable sparse networks through Neural Tangent Transfer | 21 | icml | 8 | 1 | 2023-06-17 03:57:22.487000 | https://github.com/fmi-basel/neural-tangent-transfer | 14 | Finding trainable sparse networks through neural tangent transfer | https://scholar.google.com/scholar?cluster=4513428362784750127&hl=en&as_sdt=0,5 | 4 | 2,020 |
Weakly-Supervised Disentanglement Without Compromises | 212 | icml | 199 | 20 | 2023-06-17 03:57:22.689000 | https://github.com/google-research/disentanglement_lib | 1,301 | Weakly-supervised disentanglement without compromises | https://scholar.google.com/scholar?cluster=17730117604231114120&hl=en&as_sdt=0,11 | 35 | 2,020 |
Too Relaxed to Be Fair | 45 | icml | 3 | 0 | 2023-06-17 03:57:22.892000 | https://github.com/mlohaus/SearchFair | 9 | Too relaxed to be fair | https://scholar.google.com/scholar?cluster=8729544437248973666&hl=en&as_sdt=0,34 | 2 | 2,020 |
Differentiating through the Fréchet Mean | 52 | icml | 2 | 4 | 2023-06-17 03:57:23.095000 | https://github.com/CUAI/Differentiable-Frechet-Mean | 50 | Differentiating through the fréchet mean | https://scholar.google.com/scholar?cluster=1425573169014829533&hl=en&as_sdt=0,5 | 7 | 2,020 |
Progressive Graph Learning for Open-Set Domain Adaptation | 73 | icml | 5 | 2 | 2023-06-17 03:57:23.296000 | https://github.com/BUserName/PGL | 28 | Progressive graph learning for open-set domain adaptation | https://scholar.google.com/scholar?cluster=2624735787669105317&hl=en&as_sdt=0,5 | 4 | 2,020 |
Learning Algebraic Multigrid Using Graph Neural Networks | 43 | icml | 3 | 0 | 2023-06-17 03:57:23.497000 | https://github.com/ilayluz/learning-amg | 12 | Learning algebraic multigrid using graph neural networks | https://scholar.google.com/scholar?cluster=9215058872113912967&hl=en&as_sdt=0,5 | 4 | 2,020 |
Progressive Identification of True Labels for Partial-Label Learning | 99 | icml | 5 | 0 | 2023-06-17 03:57:23.700000 | https://github.com/Lvcrezia77/PRODEN | 41 | Progressive identification of true labels for partial-label learning | https://scholar.google.com/scholar?cluster=17946181753810073887&hl=en&as_sdt=0,5 | 1 | 2,020 |
Efficient Continuous Pareto Exploration in Multi-Task Learning | 54 | icml | 27 | 1 | 2023-06-17 03:57:23.901000 | https://github.com/mit-gfx/ContinuousParetoMTL | 117 | Efficient continuous pareto exploration in multi-task learning | https://scholar.google.com/scholar?cluster=14510629090081206490&hl=en&as_sdt=0,5 | 20 | 2,020 |
Normalized Loss Functions for Deep Learning with Noisy Labels | 239 | icml | 25 | 1 | 2023-06-17 03:57:24.103000 | https://github.com/HanxunH/Active-Passive-Losses | 106 | Normalized loss functions for deep learning with noisy labels | https://scholar.google.com/scholar?cluster=15594415410821742634&hl=en&as_sdt=0,5 | 4 | 2,020 |
Adversarial Neural Pruning with Latent Vulnerability Suppression | 37 | icml | 1 | 0 | 2023-06-17 03:57:24.305000 | https://github.com/divyam3897/ANP_VS | 14 | Adversarial neural pruning with latent vulnerability suppression | https://scholar.google.com/scholar?cluster=14781666760584022356&hl=en&as_sdt=0,5 | 3 | 2,020 |
Multi-Task Learning with User Preferences: Gradient Descent with Controlled Ascent in Pareto Optimization | 75 | icml | 11 | 3 | 2023-06-17 03:57:24.507000 | https://github.com/dbmptr/EPOSearch | 41 | Multi-task learning with user preferences: Gradient descent with controlled ascent in pareto optimization | https://scholar.google.com/scholar?cluster=14380308074302940199&hl=en&as_sdt=0,5 | 1 | 2,020 |
Adversarial Robustness Against the Union of Multiple Perturbation Models | 124 | icml | 3 | 1 | 2023-06-17 03:57:24.709000 | https://github.com/locuslab/robust_union | 23 | Adversarial robustness against the union of multiple perturbation models | https://scholar.google.com/scholar?cluster=7466169251019166105&hl=en&as_sdt=0,14 | 7 | 2,020 |
Adaptive Gradient Descent without Descent | 51 | icml | 5 | 0 | 2023-06-17 03:57:24.911000 | https://github.com/ymalitsky/adaptive_gd | 39 | Adaptive gradient descent without descent | https://scholar.google.com/scholar?cluster=9121623366075061608&hl=en&as_sdt=0,5 | 5 | 2,020 |
Emergence of Separable Manifolds in Deep Language Representations | 31 | icml | 3 | 0 | 2023-06-17 03:57:25.114000 | https://github.com/schung039/contextual-repr-manifolds | 5 | Emergence of separable manifolds in deep language representations | https://scholar.google.com/scholar?cluster=5179476739222728970&hl=en&as_sdt=0,5 | 2 | 2,020 |
Minimax Pareto Fairness: A Multi Objective Perspective | 126 | icml | 5 | 0 | 2023-06-17 03:57:25.316000 | https://github.com/natalialmg/MMPF | 21 | Minimax pareto fairness: A multi objective perspective | https://scholar.google.com/scholar?cluster=7690434188548585535&hl=en&as_sdt=0,31 | 3 | 2,020 |
Predictive Multiplicity in Classification | 75 | icml | 2 | 2 | 2023-06-17 03:57:25.519000 | https://github.com/charliemarx/pmtools | 9 | Predictive multiplicity in classification | https://scholar.google.com/scholar?cluster=12971902900115271261&hl=en&as_sdt=0,5 | 3 | 2,020 |
Neural Datalog Through Time: Informed Temporal Modeling via Logical Specification | 17 | icml | 5 | 1 | 2023-06-17 03:57:25.720000 | https://github.com/HMEIatJHU/neural-datalog-through-time | 30 | Neural Datalog through time: Informed temporal modeling via logical specification | https://scholar.google.com/scholar?cluster=13196809524951928440&hl=en&as_sdt=0,5 | 1 | 2,020 |
Scalable Identification of Partially Observed Systems with Certainty-Equivalent EM | 8 | icml | 1 | 1 | 2023-06-17 03:57:25.923000 | https://github.com/sisl/CEEM | 8 | Scalable identification of partially observed systems with certainty-equivalent EM | https://scholar.google.com/scholar?cluster=12141244862224511768&hl=en&as_sdt=0,32 | 17 | 2,020 |
Training Binary Neural Networks using the Bayesian Learning Rule | 32 | icml | 5 | 1 | 2023-06-17 03:57:26.124000 | https://github.com/team-approx-bayes/BayesBiNN | 33 | Training binary neural networks using the bayesian learning rule | https://scholar.google.com/scholar?cluster=8866131573979767036&hl=en&as_sdt=0,33 | 7 | 2,020 |
Control Frequency Adaptation via Action Persistence in Batch Reinforcement Learning | 23 | icml | 1 | 0 | 2023-06-17 03:57:26.327000 | https://github.com/albertometelli/pfqi | 3 | Control frequency adaptation via action persistence in batch reinforcement learning | https://scholar.google.com/scholar?cluster=6884047998353070413&hl=en&as_sdt=0,33 | 3 | 2,020 |
Projective Preferential Bayesian Optimization | 7 | icml | 0 | 1 | 2023-06-17 03:57:26.530000 | https://github.com/AaltoPML/PPBO | 10 | Projective preferential bayesian optimization | https://scholar.google.com/scholar?cluster=16344312867654899507&hl=en&as_sdt=0,5 | 8 | 2,020 |
VideoOneNet: Bidirectional Convolutional Recurrent OneNet with Trainable Data Steps for Video Processing | 0 | icml | 0 | 0 | 2023-06-17 03:57:26.732000 | https://github.com/srph25/videoonenet | 0 | VideoOneNet: bidirectional convolutional recurrent onenet with trainable data steps for video processing | https://scholar.google.com/scholar?cluster=1084769805460535145&hl=en&as_sdt=0,5 | 2 | 2,020 |
Learning Reasoning Strategies in End-to-End Differentiable Proving | 63 | icml | 17 | 3 | 2023-06-17 03:57:26.935000 | https://github.com/uclnlp/ctp | 47 | Learning reasoning strategies in end-to-end differentiable proving | https://scholar.google.com/scholar?cluster=16334802341623350418&hl=en&as_sdt=0,10 | 2 | 2,020 |
Coresets for Data-efficient Training of Machine Learning Models | 137 | icml | 18 | 4 | 2023-06-17 03:57:27.137000 | https://github.com/baharanm/craig | 47 | Coresets for data-efficient training of machine learning models | https://scholar.google.com/scholar?cluster=15062918067238617199&hl=en&as_sdt=0,14 | 1 | 2,020 |
Learning to Combine Top-Down and Bottom-Up Signals in Recurrent Neural Networks with Attention over Modules | 53 | icml | 4 | 0 | 2023-06-17 03:57:27.339000 | https://github.com/sarthmit/BRIMs | 27 | Learning to combine top-down and bottom-up signals in recurrent neural networks with attention over modules | https://scholar.google.com/scholar?cluster=15085852194314811643&hl=en&as_sdt=0,37 | 3 | 2,020 |
Transformation of ReLU-based recurrent neural networks from discrete-time to continuous-time | 8 | icml | 0 | 0 | 2023-06-17 03:57:27.539000 | https://github.com/DurstewitzLab/contPLRNN | 0 | Transformation of ReLU-based recurrent neural networks from discrete-time to continuous-time | https://scholar.google.com/scholar?cluster=8416515873686618077&hl=en&as_sdt=0,44 | 1 | 2,020 |
An end-to-end approach for the verification problem: learning the right distance | 12 | icml | 1 | 0 | 2023-06-17 03:57:27.742000 | https://github.com/joaomonteirof/e2e_verification | 6 | An end-to-end approach for the verification problem: learning the right distance | https://scholar.google.com/scholar?cluster=18311458565256398722&hl=en&as_sdt=0,5 | 4 | 2,020 |
Confidence-Aware Learning for Deep Neural Networks | 89 | icml | 11 | 2 | 2023-06-17 03:57:27.943000 | https://github.com/daintlab/confidence-aware-learning | 62 | Confidence-aware learning for deep neural networks | https://scholar.google.com/scholar?cluster=7136169408479402844&hl=en&as_sdt=0,36 | 6 | 2,020 |
Topological Autoencoders | 111 | icml | 26 | 0 | 2023-06-17 03:57:28.145000 | https://github.com/BorgwardtLab/topological-autoencoders | 105 | Topological autoencoders | https://scholar.google.com/scholar?cluster=11510547932502602061&hl=en&as_sdt=0,10 | 7 | 2,020 |
Fair Learning with Private Demographic Data | 48 | icml | 1 | 0 | 2023-06-17 03:57:28.347000 | https://github.com/husseinmozannar/fairlearn_private_data | 4 | Fair learning with private demographic data | https://scholar.google.com/scholar?cluster=16497841133836187682&hl=en&as_sdt=0,5 | 2 | 2,020 |
Consistent Estimators for Learning to Defer to an Expert | 101 | icml | 7 | 15 | 2023-06-17 03:57:28.548000 | https://github.com/clinicalml/learn-to-defer | 9 | Consistent estimators for learning to defer to an expert | https://scholar.google.com/scholar?cluster=3621001929696373512&hl=en&as_sdt=0,5 | 3 | 2,020 |
Missing Data Imputation using Optimal Transport | 62 | icml | 11 | 1 | 2023-06-17 03:57:28.750000 | https://github.com/BorisMuzellec/MissingDataOT | 73 | Missing data imputation using optimal transport | https://scholar.google.com/scholar?cluster=1517478488560941748&hl=en&as_sdt=0,5 | 4 | 2,020 |
Voice Separation with an Unknown Number of Multiple Speakers | 145 | icml | 159 | 27 | 2023-06-17 03:57:28.952000 | https://github.com/facebookresearch/svoice | 1,030 | Voice separation with an unknown number of multiple speakers | https://scholar.google.com/scholar?cluster=8245320586171214224&hl=en&as_sdt=0,21 | 24 | 2,020 |
Reliable Fidelity and Diversity Metrics for Generative Models | 147 | icml | 28 | 7 | 2023-06-17 03:57:29.153000 | https://github.com/clovaai/generative-evaluation-prdc | 207 | Reliable fidelity and diversity metrics for generative models | https://scholar.google.com/scholar?cluster=6046067727543252873&hl=en&as_sdt=0,5 | 9 | 2,020 |
Bayesian Sparsification of Deep C-valued Networks | 10 | icml | 25 | 7 | 2023-06-17 03:57:29.357000 | https://github.com/ivannz/cplxmodule | 119 | Bayesian sparsification of deep c-valued networks | https://scholar.google.com/scholar?cluster=17209924131548214610&hl=en&as_sdt=0,33 | 11 | 2,020 |
Oracle Efficient Private Non-Convex Optimization | 7 | icml | 1 | 0 | 2023-06-17 03:57:29.559000 | https://github.com/giusevtr/private_objective_perturbation | 3 | Oracle efficient private non-convex optimization | https://scholar.google.com/scholar?cluster=7786612400665657488&hl=en&as_sdt=0,5 | 0 | 2,020 |
Stochastic Frank-Wolfe for Constrained Finite-Sum Minimization | 25 | icml | 35 | 22 | 2023-06-17 03:57:29.761000 | https://github.com/openopt/copt | 125 | Stochastic Frank-Wolfe for constrained finite-sum minimization | https://scholar.google.com/scholar?cluster=611899428047262705&hl=en&as_sdt=0,14 | 12 | 2,020 |
Aggregation of Multiple Knockoffs | 15 | icml | 7 | 5 | 2023-06-17 03:57:29.963000 | https://github.com/ja-che/hidimstat | 20 | Aggregation of multiple knockoffs | https://scholar.google.com/scholar?cluster=656849439593762318&hl=en&as_sdt=0,5 | 7 | 2,020 |
Knowing The What But Not The Where in Bayesian Optimization | 34 | icml | 3 | 0 | 2023-06-17 03:57:30.166000 | https://github.com/ntienvu/KnownOptimum_BO | 13 | Knowing the what but not the where in Bayesian optimization | https://scholar.google.com/scholar?cluster=16424117469518186156&hl=en&as_sdt=0,33 | 1 | 2,020 |
Robust Bayesian Classification Using An Optimistic Score Ratio | 11 | icml | 0 | 0 | 2023-06-17 03:57:30.368000 | https://github.com/nian-si/bsc | 0 | Robust bayesian classification using an optimistic score ratio | https://scholar.google.com/scholar?cluster=7833733923868334694&hl=en&as_sdt=0,33 | 1 | 2,020 |
LP-SparseMAP: Differentiable Relaxed Optimization for Sparse Structured Prediction | 13 | icml | 7 | 2 | 2023-06-17 03:57:30.572000 | https://github.com/deep-spin/lp-sparsemap | 39 | Lp-sparsemap: Differentiable relaxed optimization for sparse structured prediction | https://scholar.google.com/scholar?cluster=13952332112683207065&hl=en&as_sdt=0,36 | 7 | 2,020 |
Consistent Structured Prediction with Max-Min Margin Markov Networks | 12 | icml | 5 | 1 | 2023-06-17 03:57:30.777000 | https://github.com/alexnowakvila/maxminloss | 7 | Consistent structured prediction with max-min margin markov networks | https://scholar.google.com/scholar?cluster=10738021504710900469&hl=en&as_sdt=0,10 | 2 | 2,020 |
T-Basis: a Compact Representation for Neural Networks | 22 | icml | 1 | 0 | 2023-06-17 03:57:30.992000 | https://github.com/toshas/tbasis | 8 | T-basis: a compact representation for neural networks | https://scholar.google.com/scholar?cluster=12293196328367856783&hl=en&as_sdt=0,5 | 1 | 2,020 |
Interferometric Graph Transform: a Deep Unsupervised Graph Representation | 6 | icml | 1 | 0 | 2023-06-17 03:57:31.205000 | https://github.com/edouardoyallon/interferometric-graph-transform | 9 | Interferometric graph transform: a deep unsupervised graph representation | https://scholar.google.com/scholar?cluster=7788892344484265680&hl=en&as_sdt=0,5 | 2 | 2,020 |
Learning to Score Behaviors for Guided Policy Optimization | 26 | icml | 7 | 0 | 2023-06-17 03:57:31.409000 | https://github.com/behaviorguidedRL/BGRL | 23 | Learning to score behaviors for guided policy optimization | https://scholar.google.com/scholar?cluster=7653224630549423499&hl=en&as_sdt=0,5 | 4 | 2,020 |
Adversarial Mutual Information for Text Generation | 4 | icml | 1 | 2 | 2023-06-17 03:57:31.657000 | https://github.com/ZJULearning/AMI | 7 | Adversarial mutual information for text generation | https://scholar.google.com/scholar?cluster=5510716302378812620&hl=en&as_sdt=0,32 | 3 | 2,020 |
Multiresolution Tensor Learning for Efficient and Interpretable Spatial Analysis | 14 | icml | 9 | 0 | 2023-06-17 03:57:31.877000 | https://github.com/Rose-STL-Lab/mrtl | 11 | Multiresolution tensor learning for efficient and interpretable spatial analysis | https://scholar.google.com/scholar?cluster=15097484700920257271&hl=en&as_sdt=0,10 | 4 | 2,020 |
Sample Factory: Egocentric 3D Control from Pixels at 100000 FPS with Asynchronous Reinforcement Learning | 46 | icml | 80 | 9 | 2023-06-17 03:57:32.105000 | https://github.com/alex-petrenko/sample-factory | 593 | Sample factory: Egocentric 3d control from pixels at 100000 fps with asynchronous reinforcement learning | https://scholar.google.com/scholar?cluster=7436378038868807375&hl=en&as_sdt=0,5 | 17 | 2,020 |
Scalable Differential Privacy with Certified Robustness in Adversarial Learning | 34 | icml | 1 | 0 | 2023-06-17 03:57:32.315000 | https://github.com/haiphanNJIT/StoBatch | 6 | Scalable differential privacy with certified robustness in adversarial learning | https://scholar.google.com/scholar?cluster=11508415782067363031&hl=en&as_sdt=0,48 | 3 | 2,020 |
WaveFlow: A Compact Flow-based Model for Raw Audio | 95 | icml | 82 | 0 | 2023-06-17 03:57:32.518000 | https://github.com/PaddlePaddle/Parakeet | 584 | Waveflow: A compact flow-based model for raw audio | https://scholar.google.com/scholar?cluster=15645705670677592172&hl=en&as_sdt=0,39 | 29 | 2,020 |
Efficient Domain Generalization via Common-Specific Low-Rank Decomposition | 129 | icml | 7 | 0 | 2023-06-17 03:57:32.721000 | https://github.com/vihari/csd | 43 | Efficient domain generalization via common-specific low-rank decomposition | https://scholar.google.com/scholar?cluster=11307656152978308596&hl=en&as_sdt=0,47 | 3 | 2,020 |
Maximum Entropy Gain Exploration for Long Horizon Multi-goal Reinforcement Learning | 82 | icml | 21 | 8 | 2023-06-17 03:57:32.923000 | https://github.com/spitis/mrl | 95 | Maximum entropy gain exploration for long horizon multi-goal reinforcement learning | https://scholar.google.com/scholar?cluster=11035896371402538645&hl=en&as_sdt=0,47 | 5 | 2,020 |
Explaining Groups of Points in Low-Dimensional Representations | 16 | icml | 2 | 1 | 2023-06-17 03:57:33.151000 | https://github.com/GDPlumb/ELDR | 7 | Explaining groups of points in low-dimensional representations | https://scholar.google.com/scholar?cluster=2769965454437760669&hl=en&as_sdt=0,24 | 4 | 2,020 |
SoftSort: A Continuous Relaxation for the argsort Operator | 30 | icml | 5 | 4 | 2023-06-17 03:57:33.353000 | https://github.com/sprillo/softsort | 31 | Softsort: A continuous relaxation for the argsort operator | https://scholar.google.com/scholar?cluster=16358906798054657773&hl=en&as_sdt=0,5 | 5 | 2,020 |
Graph-based Nearest Neighbor Search: From Practice to Theory | 34 | icml | 2 | 0 | 2023-06-17 03:57:33.554000 | https://github.com/Shekhale/gbnns_theory | 15 | Graph-based nearest neighbor search: From practice to theory | https://scholar.google.com/scholar?cluster=13724716068024753657&hl=en&as_sdt=0,5 | 0 | 2,020 |
Deep Isometric Learning for Visual Recognition | 42 | icml | 21 | 0 | 2023-06-17 03:57:33.757000 | https://github.com/HaozhiQi/ISONet | 143 | Deep isometric learning for visual recognition | https://scholar.google.com/scholar?cluster=11095100806384225671&hl=en&as_sdt=0,14 | 9 | 2,020 |
Unsupervised Speech Decomposition via Triple Information Bottleneck | 131 | icml | 93 | 27 | 2023-06-17 03:57:33.960000 | https://github.com/auspicious3000/SpeechSplit | 529 | Unsupervised speech decomposition via triple information bottleneck | https://scholar.google.com/scholar?cluster=6104818093122244998&hl=en&as_sdt=0,44 | 23 | 2,020 |
DeepCoDA: personalized interpretability for compositional health data | 8 | icml | 1 | 0 | 2023-06-17 03:57:34.162000 | https://github.com/nphdang/DeepCoDA | 6 | Deepcoda: personalized interpretability for compositional health data | https://scholar.google.com/scholar?cluster=1822616617548782028&hl=en&as_sdt=0,5 | 3 | 2,020 |
Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning | 82 | icml | 2 | 0 | 2023-06-17 03:57:34.364000 | https://github.com/adishs/icml2020_rl-policy-teaching_code | 8 | Policy teaching via environment poisoning: Training-time adversarial attacks against reinforcement learning | https://scholar.google.com/scholar?cluster=2440833771930412039&hl=en&as_sdt=0,47 | 1 | 2,020 |
The Sample Complexity of Best-$k$ Items Selection from Pairwise Comparisons | 10 | icml | 0 | 0 | 2023-06-17 03:57:34.564000 | https://github.com/WenboRen/Topk-Ranking-from-Pairwise-Comparisons | 1 | The Sample Complexity of Best-k Items Selection from Pairwise Comparisons | https://scholar.google.com/scholar?cluster=5765760591952820635&hl=en&as_sdt=0,5 | 1 | 2,020 |
Overfitting in adversarially robust deep learning | 555 | icml | 30 | 2 | 2023-06-17 03:57:34.771000 | https://github.com/locuslab/robust_overfitting | 145 | Overfitting in adversarially robust deep learning | https://scholar.google.com/scholar?cluster=3283552716843896977&hl=en&as_sdt=0,34 | 8 | 2,020 |
Interpretations are Useful: Penalizing Explanations to Align Neural Networks with Prior Knowledge | 147 | icml | 13 | 1 | 2023-06-17 03:57:34.973000 | https://github.com/laura-rieger/deep-explanation-penalization | 120 | Interpretations are useful: penalizing explanations to align neural networks with prior knowledge | https://scholar.google.com/scholar?cluster=15865202666417121360&hl=en&as_sdt=0,33 | 8 | 2,020 |
FR-Train: A Mutual Information-Based Approach to Fair and Robust Training | 53 | icml | 4 | 0 | 2023-06-17 03:57:35.176000 | https://github.com/yuji-roh/fr-train | 12 | Fr-train: A mutual information-based approach to fair and robust training | https://scholar.google.com/scholar?cluster=13680487688009337153&hl=en&as_sdt=0,33 | 3 | 2,020 |
Balancing Competing Objectives with Noisy Data: Score-Based Classifiers for Welfare-Aware Machine Learning | 20 | icml | 1 | 0 | 2023-06-17 03:57:35.378000 | https://github.com/estherrolf/multi-objective-impact | 5 | Balancing competing objectives with noisy data: Score-based classifiers for welfare-aware machine learning | https://scholar.google.com/scholar?cluster=13495786516885485801&hl=en&as_sdt=0,33 | 8 | 2,020 |
Attentive Group Equivariant Convolutional Networks | 61 | icml | 3 | 0 | 2023-06-17 03:57:35.579000 | https://github.com/dwromero/att_gconvs | 46 | Attentive group equivariant convolutional networks | https://scholar.google.com/scholar?cluster=7532982364611268025&hl=en&as_sdt=0,5 | 3 | 2,020 |
Bayesian Optimisation over Multiple Continuous and Categorical Inputs | 63 | icml | 5 | 2 | 2023-06-17 03:57:35.782000 | https://github.com/rubinxin/CoCaBO_code | 38 | Bayesian optimisation over multiple continuous and categorical inputs | https://scholar.google.com/scholar?cluster=6939944017464158601&hl=en&as_sdt=0,5 | 3 | 2,020 |
Bounding the fairness and accuracy of classifiers from population statistics | 12 | icml | 1 | 0 | 2023-06-17 03:57:35.985000 | https://github.com/sivansabato/bfa | 0 | Bounding the fairness and accuracy of classifiers from population statistics | https://scholar.google.com/scholar?cluster=2023767612415868273&hl=en&as_sdt=0,15 | 2 | 2,020 |
Radioactive data: tracing through training | 47 | icml | 9 | 3 | 2023-06-17 03:57:36.186000 | https://github.com/facebookresearch/radioactive_data | 37 | Radioactive data: tracing through training | https://scholar.google.com/scholar?cluster=10544737846821362051&hl=en&as_sdt=0,48 | 7 | 2,020 |
Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics | 51 | icml | 0 | 0 | 2023-06-17 03:57:36.389000 | https://github.com/saharaja/ICML2020-fairness | 0 | Measuring non-expert comprehension of machine learning fairness metrics | https://scholar.google.com/scholar?cluster=9761297825118487455&hl=en&as_sdt=0,44 | 2 | 2,020 |
Counterfactual Cross-Validation: Stable Model Selection Procedure for Causal Inference Models | 22 | icml | 6 | 2 | 2023-06-17 03:57:36.590000 | https://github.com/usaito/counterfactual-cv | 29 | Counterfactual cross-validation: Stable model selection procedure for causal inference models | https://scholar.google.com/scholar?cluster=10053699039608727761&hl=en&as_sdt=0,39 | 2 | 2,020 |
Learning to Simulate Complex Physics with Graph Networks | 658 | icml | 2,436 | 170 | 2023-06-17 03:57:36.792000 | https://github.com/deepmind/deepmind-research | 11,905 | Learning to simulate complex physics with graph networks | https://scholar.google.com/scholar?cluster=7841761417368333272&hl=en&as_sdt=0,5 | 336 | 2,020 |
Discriminative Adversarial Search for Abstractive Summarization | 24 | icml | 1,868 | 365 | 2023-06-17 03:57:36.994000 | https://github.com/microsoft/unilm | 12,786 | Discriminative adversarial search for abstractive summarization | https://scholar.google.com/scholar?cluster=2830447746758496884&hl=en&as_sdt=0,5 | 260 | 2,020 |
Planning to Explore via Self-Supervised World Models | 237 | icml | 26 | 12 | 2023-06-17 03:57:37.196000 | https://github.com/ramanans1/plan2explore | 201 | Planning to explore via self-supervised world models | https://scholar.google.com/scholar?cluster=804828726250878727&hl=en&as_sdt=0,33 | 14 | 2,020 |
Lookahead-Bounded Q-learning | 6 | icml | 1 | 0 | 2023-06-17 03:57:37.398000 | https://github.com/ibrahim-elshar/LBQL_ICML2020 | 4 | Lookahead-bounded q-learning | https://scholar.google.com/scholar?cluster=15722192187033607775&hl=en&as_sdt=0,39 | 1 | 2,020 |
PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions | 27 | icml | 5 | 2 | 2023-06-17 03:57:37.600000 | https://github.com/shenzy08/PDO-eConvs | 13 | Pdo-econvs: Partial differential operator based equivariant convolutions | https://scholar.google.com/scholar?cluster=8875071450506377272&hl=en&as_sdt=0,14 | 1 | 2,020 |
Educating Text Autoencoders: Latent Representation Guidance via Denoising | 44 | icml | 39 | 3 | 2023-06-17 03:57:37.801000 | https://github.com/shentianxiao/text-autoencoders | 185 | Educating text autoencoders: Latent representation guidance via denoising | https://scholar.google.com/scholar?cluster=3322516432269705271&hl=en&as_sdt=0,31 | 9 | 2,020 |
PowerNorm: Rethinking Batch Normalization in Transformers | 55 | icml | 16 | 2 | 2023-06-17 03:57:38.004000 | https://github.com/sIncerass/powernorm | 107 | Powernorm: Rethinking batch normalization in transformers | https://scholar.google.com/scholar?cluster=11876493237600488243&hl=en&as_sdt=0,5 | 8 | 2,020 |
Incremental Sampling Without Replacement for Sequence Models | 14 | icml | 3 | 0 | 2023-06-17 03:57:38.206000 | https://github.com/google-research/unique-randomizer | 6 | Incremental sampling without replacement for sequence models | https://scholar.google.com/scholar?cluster=570267648910120463&hl=en&as_sdt=0,5 | 6 | 2,020 |
Informative Dropout for Robust Representation Learning: A Shape-bias Perspective | 74 | icml | 6 | 24 | 2023-06-17 03:57:38.407000 | https://github.com/bfshi/InfoDrop | 121 | Informative dropout for robust representation learning: A shape-bias perspective | https://scholar.google.com/scholar?cluster=14939290265495016487&hl=en&as_sdt=0,11 | 10 | 2,020 |
Dispersed Exponential Family Mixture VAEs for Interpretable Text Generation | 18 | icml | 2 | 0 | 2023-06-17 03:57:38.609000 | https://github.com/wenxianxian/demvae | 25 | Dispersed exponential family mixture vaes for interpretable text generation | https://scholar.google.com/scholar?cluster=8941211277689628269&hl=en&as_sdt=0,5 | 3 | 2,020 |
Predictive Coding for Locally-Linear Control | 11 | icml | 3 | 0 | 2023-06-17 03:57:38.810000 | https://github.com/VinAIResearch/PC3-pytorch | 16 | Predictive coding for locally-linear control | https://scholar.google.com/scholar?cluster=8037643226796861111&hl=en&as_sdt=0,5 | 3 | 2,020 |
A Generative Model for Molecular Distance Geometry | 68 | icml | 13 | 5 | 2023-06-17 03:57:39.013000 | https://github.com/gncs/graphdg | 32 | A generative model for molecular distance geometry | https://scholar.google.com/scholar?cluster=11522427677669311015&hl=en&as_sdt=0,5 | 2 | 2,020 |
Reinforcement Learning for Molecular Design Guided by Quantum Mechanics | 82 | icml | 22 | 7 | 2023-06-17 03:57:39.218000 | https://github.com/gncs/molgym | 94 | Reinforcement learning for molecular design guided by quantum mechanics | https://scholar.google.com/scholar?cluster=2647402113412769429&hl=en&as_sdt=0,7 | 5 | 2,020 |
Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise | 40 | icml | 0 | 0 | 2023-06-17 03:57:39.421000 | https://github.com/umutsimsekli/fuld | 0 | Fractional underdamped langevin dynamics: Retargeting sgd with momentum under heavy-tailed gradient noise | https://scholar.google.com/scholar?cluster=12546091337586051753&hl=en&as_sdt=0,5 | 1 | 2,020 |
FormulaZero: Distributionally Robust Online Adaptation via Offline Population Synthesis | 22 | icml | 2 | 0 | 2023-06-17 03:57:39.627000 | https://github.com/travelbureau/f0_icml_code | 5 | FormulaZero: Distributionally robust online adaptation via offline population synthesis | https://scholar.google.com/scholar?cluster=4155022533808347163&hl=en&as_sdt=0,47 | 4 | 2,020 |