Column schema: title (string, 8–155 chars) | citations_google_scholar (int64, 0–28.9k) | conference (string, 5 classes) | forks (int64, 0–46.3k) | issues (int64, 0–12.2k) | lastModified (string, 19–26 chars) | repo_url (string, 26–130 chars) | stars (int64, 0–75.9k) | title_google_scholar (string, 8–155 chars) | url_google_scholar (string, 75–206 chars) | watchers (int64, 0–2.77k) | year (int64, all 2022)

title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
---|---|---|---|---|---|---|---|---|---|---|---|
Streaming Radiance Fields for 3D Video Synthesis | 10 | neurips | 5 | 3 | 2023-06-16 22:58:15.484000 | https://github.com/algohunt/streamrf | 96 | Streaming radiance fields for 3d video synthesis | https://scholar.google.com/scholar?cluster=1594613451261987052&hl=en&as_sdt=0,14 | 8 | 2,022 |
Neural Matching Fields: Implicit Representation of Matching Fields for Visual Correspondence | 2 | neurips | 1 | 2 | 2023-06-16 22:58:15.696000 | https://github.com/KU-CVLAB/NeMF | 70 | Neural Matching Fields: Implicit Representation of Matching Fields for Visual Correspondence | https://scholar.google.com/scholar?cluster=1968290052561441459&hl=en&as_sdt=0,5 | 7 | 2,022 |
Local Spatiotemporal Representation Learning for Longitudinally-consistent Neuroimage Analysis | 3 | neurips | 2 | 1 | 2023-06-16 22:58:15.909000 | https://github.com/mengweiren/longitudinal-representation-learning | 14 | Local spatiotemporal representation learning for longitudinally-consistent neuroimage analysis | https://scholar.google.com/scholar?cluster=8437472979024832790&hl=en&as_sdt=0,31 | 2 | 2,022 |
Anchor-Changing Regularized Natural Policy Gradient for Multi-Objective Reinforcement Learning | 2 | neurips | 2 | 0 | 2023-06-16 22:58:16.122000 | https://github.com/tliu1997/arnpg-morl | 5 | Anchor-Changing Regularized Natural Policy Gradient for Multi-Objective Reinforcement Learning | https://scholar.google.com/scholar?cluster=15219127852751471694&hl=en&as_sdt=0,23 | 1 | 2,022 |
TAP-Vid: A Benchmark for Tracking Any Point in a Video | 4 | neurips | 19 | 1 | 2023-06-16 22:58:16.336000 | https://github.com/deepmind/tapnet | 212 | TAP-Vid: A Benchmark for Tracking Any Point in a Video | https://scholar.google.com/scholar?cluster=17092201381170534981&hl=en&as_sdt=0,33 | 17 | 2,022 |
A Classification of $G$-invariant Shallow Neural Networks | 5 | neurips | 0 | 0 | 2023-06-16 22:58:16.558000 | https://github.com/dagrawa2/gsnn_classification_code | 0 | A Classification of G-invariant Shallow Neural Networks | https://scholar.google.com/scholar?cluster=11077075131989361762&hl=en&as_sdt=0,33 | 1 | 2,022 |
Biologically-Plausible Determinant Maximization Neural Networks for Blind Separation of Correlated Sources | 1 | neurips | 0 | 1 | 2023-06-16 22:58:16.810000 | https://github.com/bariscanbozkurt/biologically-plausible-detmaxnns-for-blind-source-separation | 0 | Biologically-plausible determinant maximization neural networks for blind separation of correlated sources | https://scholar.google.com/scholar?cluster=1796611740779169279&hl=en&as_sdt=0,5 | 1 | 2,022 |
What Makes Graph Neural Networks Miscalibrated? | 3 | neurips | 0 | 0 | 2023-06-16 22:58:17.023000 | https://github.com/hans66hsu/gats | 12 | What Makes Graph Neural Networks Miscalibrated? | https://scholar.google.com/scholar?cluster=18376762019790948001&hl=en&as_sdt=0,5 | 2 | 2,022 |
Stochastic Adaptive Activation Function | 0 | neurips | 0 | 0 | 2023-06-16 22:58:17.241000 | https://github.com/kyungsu-lee-ksl/ash | 4 | Stochastic Adaptive Activation Function | https://scholar.google.com/scholar?cluster=12146555690401173811&hl=en&as_sdt=0,5 | 2 | 2,022 |
Video compression dataset and benchmark of learning-based video-quality metrics | 4 | neurips | 0 | 0 | 2023-06-16 22:58:17.453000 | https://github.com/msu-video-group/msu_vqm_compression_benchmark | 16 | Video compression dataset and benchmark of learning-based video-quality metrics | https://scholar.google.com/scholar?cluster=11117086154139094350&hl=en&as_sdt=0,47 | 3 | 2,022 |
Prototypical VoteNet for Few-Shot 3D Point Cloud Object Detection | 1 | neurips | 2 | 0 | 2023-06-16 22:58:17.664000 | https://github.com/cvmi-lab/fs3d | 36 | Prototypical VoteNet for Few-Shot 3D Point Cloud Object Detection | https://scholar.google.com/scholar?cluster=15115934605186565266&hl=en&as_sdt=0,43 | 5 | 2,022 |
Efficient Dataset Distillation using Random Feature Approximation | 11 | neurips | 1 | 1 | 2023-06-16 22:58:17.876000 | https://github.com/yolky/rfad | 23 | Efficient dataset distillation using random feature approximation | https://scholar.google.com/scholar?cluster=12794285551052496052&hl=en&as_sdt=0,5 | 3 | 2,022 |
Kantorovich Strikes Back! Wasserstein GANs are not Optimal Transport? | 9 | neurips | 1 | 0 | 2023-06-16 22:58:18.087000 | https://github.com/justkolesov/wasserstein1benchmark | 17 | Kantorovich Strikes Back! Wasserstein GANs are not Optimal Transport? | https://scholar.google.com/scholar?cluster=168357485459111534&hl=en&as_sdt=0,18 | 2 | 2,022 |
PALBERT: Teaching ALBERT to Ponder | 0 | neurips | 0 | 0 | 2023-06-16 22:58:18.299000 | https://github.com/tinkoff-ai/palbert | 34 | PALBERT: Teaching ALBERT to Ponder | https://scholar.google.com/scholar?cluster=13888821126915681625&hl=en&as_sdt=0,14 | 3 | 2,022 |
Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes | 2 | neurips | 0 | 0 | 2023-06-16 22:58:18.512000 | https://github.com/tipt0p/three_regimes_on_the_sphere | 3 | Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes | https://scholar.google.com/scholar?cluster=6680465751161236858&hl=en&as_sdt=0,31 | 1 | 2,022 |
Exploring the Whole Rashomon Set of Sparse Decision Trees | 12 | neurips | 5 | 1 | 2023-06-16 22:58:18.724000 | https://github.com/ubc-systopia/treeFarms | 19 | Exploring the whole rashomon set of sparse decision trees | https://scholar.google.com/scholar?cluster=8197518784888953073&hl=en&as_sdt=0,34 | 1 | 2,022 |
Graph Self-supervised Learning with Accurate Discrepancy Learning | 6 | neurips | 2 | 1 | 2023-06-16 22:58:18.936000 | https://github.com/dongkikim95/d-sla | 12 | Graph self-supervised learning with accurate discrepancy learning | https://scholar.google.com/scholar?cluster=6899266835558351745&hl=en&as_sdt=0,11 | 1 | 2,022 |
Multi-Scale Adaptive Network for Single Image Denoising | 5 | neurips | 1 | 0 | 2023-06-16 22:58:19.153000 | https://github.com/xlearning-scu/2022-neurips-msanet | 2 | Multi-Scale Adaptive Network for Single Image Denoising | https://scholar.google.com/scholar?cluster=12092498430345383404&hl=en&as_sdt=0,39 | 2 | 2,022 |
Constrained Predictive Coding as a Biologically Plausible Model of the Cortical Hierarchy | 2 | neurips | 0 | 0 | 2023-06-16 22:58:19.364000 | https://github.com/ttesileanu/bio-pcn | 5 | Constrained predictive coding as a biologically plausible model of the cortical hierarchy | https://scholar.google.com/scholar?cluster=11118175748957488346&hl=en&as_sdt=0,5 | 1 | 2,022 |
Near-Optimal Collaborative Learning in Bandits | 5 | neurips | 0 | 0 | 2023-06-16 22:58:19.577000 | https://github.com/clreda/near-optimal-federated | 1 | Near-optimal collaborative learning in bandits | https://scholar.google.com/scholar?cluster=11872427930011371643&hl=en&as_sdt=0,11 | 1 | 2,022 |
TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers | 8 | neurips | 3 | 0 | 2023-06-16 22:58:19.788000 | https://github.com/mlvlab/tokenmixup | 39 | Tokenmixup: Efficient attention-guided token-level data augmentation for transformers | https://scholar.google.com/scholar?cluster=3326108237146565481&hl=en&as_sdt=0,25 | 5 | 2,022 |
Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models | 24 | neurips | 12 | 2 | 2023-06-16 22:58:20 | https://github.com/azshue/TPT | 64 | Test-time prompt tuning for zero-shot generalization in vision-language models | https://scholar.google.com/scholar?cluster=213109028691722316&hl=en&as_sdt=0,1 | 3 | 2,022 |
SemMAE: Semantic-Guided Masking for Learning Masked Autoencoders | 22 | neurips | 3 | 2 | 2023-06-16 22:58:20.214000 | https://github.com/ucasligang/semmae | 16 | Semmae: Semantic-guided masking for learning masked autoencoders | https://scholar.google.com/scholar?cluster=16607040036096933653&hl=en&as_sdt=0,23 | 1 | 2,022 |
BiT: Robustly Binarized Multi-distilled Transformer | 13 | neurips | 9 | 5 | 2023-06-16 22:58:20.437000 | https://github.com/facebookresearch/bit | 67 | Bit: Robustly binarized multi-distilled transformer | https://scholar.google.com/scholar?cluster=1714008465250842352&hl=en&as_sdt=0,5 | 12 | 2,022 |
Knowledge-Aware Bayesian Deep Topic Model | 6 | neurips | 2 | 1 | 2023-06-16 22:58:20.647000 | https://github.com/wds2014/topickg | 3 | Knowledge-aware Bayesian deep topic model | https://scholar.google.com/scholar?cluster=2627842395179821875&hl=en&as_sdt=0,44 | 1 | 2,022 |
SelecMix: Debiased Learning by Contradicting-pair Sampling | 1 | neurips | 1 | 0 | 2023-06-16 22:58:20.862000 | https://github.com/bluemoon010/selecmix | 7 | SelecMix: Debiased Learning by Contradicting-pair Sampling | https://scholar.google.com/scholar?cluster=2915792353103786474&hl=en&as_sdt=0,5 | 2 | 2,022 |
P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting | 19 | neurips | 9 | 3 | 2023-06-16 22:58:21.073000 | https://github.com/wangzy22/P2P | 99 | P2p: Tuning pre-trained image models for point cloud analysis with point-to-pixel prompting | https://scholar.google.com/scholar?cluster=16387925596110304701&hl=en&as_sdt=0,44 | 8 | 2,022 |
Variational inference via Wasserstein gradient flows | 18 | neurips | 0 | 0 | 2023-06-16 22:58:21.288000 | https://github.com/marc-h-lambert/w-vi | 4 | Variational inference via Wasserstein gradient flows | https://scholar.google.com/scholar?cluster=6278239632923753494&hl=en&as_sdt=0,5 | 1 | 2,022 |
projUNN: efficient method for training deep networks with unitary matrices | 5 | neurips | 4 | 2 | 2023-06-16 22:58:21.500000 | https://github.com/facebookresearch/projunn | 20 | projUNN: efficient method for training deep networks with unitary matrices | https://scholar.google.com/scholar?cluster=1850320121010807682&hl=en&as_sdt=0,5 | 49 | 2,022 |
Multi-dataset Training of Transformers for Robust Action Recognition | 2 | neurips | 0 | 0 | 2023-06-16 22:58:21.711000 | https://github.com/junweiliang/multitrain | 9 | Multi-dataset Training of Transformers for Robust Action Recognition | https://scholar.google.com/scholar?cluster=18278928779930263666&hl=en&as_sdt=0,31 | 5 | 2,022 |
Recipe for a General, Powerful, Scalable Graph Transformer | 62 | neurips | 63 | 5 | 2023-06-16 22:58:21.922000 | https://github.com/rampasek/GraphGPS | 390 | Recipe for a general, powerful, scalable graph transformer | https://scholar.google.com/scholar?cluster=6992910764828744943&hl=en&as_sdt=0,33 | 11 | 2,022 |
Rare Gems: Finding Lottery Tickets at Initialization | 10 | neurips | 2 | 9 | 2023-06-16 22:58:22.134000 | https://github.com/ksreenivasan/pruning_is_enough | 8 | Rare gems: Finding lottery tickets at initialization | https://scholar.google.com/scholar?cluster=18354752168208884490&hl=en&as_sdt=0,14 | 4 | 2,022 |
Online Bipartite Matching with Advice: Tight Robustness-Consistency Tradeoffs for the Two-Stage Model | 4 | neurips | 0 | 0 | 2023-06-16 22:58:22.345000 | https://github.com/mapleox/matching_predictions | 1 | Online Bipartite Matching with Advice: Tight Robustness-Consistency Tradeoffs for the Two-Stage Model | https://scholar.google.com/scholar?cluster=10540192598939165742&hl=en&as_sdt=0,14 | 1 | 2,022 |
Identifying good directions to escape the NTK regime and efficiently learn low-degree plus sparse polynomials | 4 | neurips | 0 | 0 | 2023-06-16 22:58:22.556000 | https://github.com/eshnich/escape_ntk | 0 | Identifying good directions to escape the NTK regime and efficiently learn low-degree plus sparse polynomials | https://scholar.google.com/scholar?cluster=9098044485141039309&hl=en&as_sdt=0,36 | 1 | 2,022 |
Pure Transformers are Powerful Graph Learners | 20 | neurips | 35 | 8 | 2023-06-16 22:58:22.766000 | https://github.com/jw9730/tokengt | 226 | Pure transformers are powerful graph learners | https://scholar.google.com/scholar?cluster=1854387804616571098&hl=en&as_sdt=0,5 | 10 | 2,022 |
NeurOLight: A Physics-Agnostic Neural Operator Enabling Parametric Photonic Device Simulation | 0 | neurips | 2 | 0 | 2023-06-16 22:58:22.978000 | https://github.com/jeremiemelo/neurolight | 23 | NeurOLight: A Physics-Agnostic Neural Operator Enabling Parametric Photonic Device Simulation | https://scholar.google.com/scholar?cluster=8881238430961631710&hl=en&as_sdt=0,5 | 5 | 2,022 |
Learning the Structure of Large Networked Systems Obeying Conservation Laws | 1 | neurips | 0 | 0 | 2023-06-16 22:58:23.190000 | https://github.com/anirudhrayas/slnscl | 0 | Learning the Structure of Large Networked Systems Obeying Conservation Laws | https://scholar.google.com/scholar?cluster=5489652265848095626&hl=en&as_sdt=0,5 | 1 | 2,022 |
Bridging the Gap Between Vision Transformers and Convolutional Neural Networks on Small Datasets | 2 | neurips | 3 | 0 | 2023-06-16 22:58:23.409000 | https://github.com/arieseirack/dhvt | 41 | Bridging the Gap Between Vision Transformers and Convolutional Neural Networks on Small Datasets | https://scholar.google.com/scholar?cluster=10766475797615971517&hl=en&as_sdt=0,14 | 3 | 2,022 |
Private Set Generation with Discriminative Information | 11 | neurips | 0 | 2 | 2023-06-16 22:58:23.619000 | https://github.com/dingfanchen/private-set | 13 | Private set generation with discriminative information | https://scholar.google.com/scholar?cluster=1058785882009175393&hl=en&as_sdt=0,44 | 1 | 2,022 |
Provable Defense against Backdoor Policies in Reinforcement Learning | 0 | neurips | 0 | 0 | 2023-06-16 22:58:23.830000 | https://github.com/skbharti/provable-defense-in-rl | 4 | Provable Defense against Backdoor Policies in Reinforcement Learning | https://scholar.google.com/scholar?cluster=15582632130939406311&hl=en&as_sdt=0,5 | 1 | 2,022 |
Diffusion Models as Plug-and-Play Priors | 32 | neurips | 10 | 3 | 2023-06-16 22:58:24.042000 | https://github.com/alexgraikos/diffusion_priors | 134 | Diffusion models as plug-and-play priors | https://scholar.google.com/scholar?cluster=1664893972448348110&hl=en&as_sdt=0,47 | 3 | 2,022 |
VaiPhy: a Variational Inference Based Algorithm for Phylogeny | 2 | neurips | 0 | 0 | 2023-06-16 22:58:24.253000 | https://github.com/lagergren-lab/vaiphy | 1 | VaiPhy: a Variational Inference Based Algorithm for Phylogeny | https://scholar.google.com/scholar?cluster=8569696227907853831&hl=en&as_sdt=0,5 | 1 | 2,022 |
A simple but strong baseline for online continual learning: Repeated Augmented Rehearsal | 9 | neurips | 2 | 0 | 2023-06-16 22:58:24.465000 | https://github.com/yaqianzhang/repeatedaugmentedrehearsal | 6 | A simple but strong baseline for online continual learning: Repeated Augmented Rehearsal | https://scholar.google.com/scholar?cluster=9507643277060053536&hl=en&as_sdt=0,48 | 2 | 2,022 |
Compressible-composable NeRF via Rank-residual Decomposition | 23 | neurips | 10 | 4 | 2023-06-16 22:58:24.675000 | https://github.com/ashawkey/ccnerf | 116 | Compressible-composable nerf via rank-residual decomposition | https://scholar.google.com/scholar?cluster=15357102335001383949&hl=en&as_sdt=0,5 | 11 | 2,022 |
Injecting Domain Knowledge from Empirical Interatomic Potentials to Neural Networks for Predicting Material Properties | 1 | neurips | 0 | 0 | 2023-06-16 22:58:24.886000 | https://github.com/shuix007/eip4nnpotentials | 1 | Injecting domain knowledge from empirical interatomic potentials to neural networks for predicting material properties | https://scholar.google.com/scholar?cluster=1090911456582952021&hl=en&as_sdt=0,10 | 3 | 2,022 |
Learning Modular Simulations for Homogeneous Systems | 0 | neurips | 1 | 0 | 2023-06-16 22:58:25.097000 | https://github.com/microsoft/mpnode.jl | 29 | Learning Modular Simulations for Homogeneous Systems | https://scholar.google.com/scholar?cluster=16943302604921582247&hl=en&as_sdt=0,48 | 4 | 2,022 |
Semi-Discrete Normalizing Flows through Differentiable Tessellation | 2 | neurips | 1 | 0 | 2023-06-16 22:58:25.308000 | https://github.com/facebookresearch/semi-discrete-flow | 20 | Semi-Discrete Normalizing Flows through Differentiable Tessellation | https://scholar.google.com/scholar?cluster=2894615893347018628&hl=en&as_sdt=0,39 | 3 | 2,022 |
Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks | 5 | neurips | 0 | 1 | 2023-06-16 22:58:25.519000 | https://github.com/sizhe-chen/aaa | 13 | Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks | https://scholar.google.com/scholar?cluster=1904818914099445692&hl=en&as_sdt=0,21 | 1 | 2,022 |
Sequence-to-Set Generative Models | 0 | neurips | 2 | 0 | 2023-06-16 22:58:25.731000 | https://github.com/longtaotang/setlearning | 1 | Sequence-to-Set Generative Models | https://scholar.google.com/scholar?cluster=11832911442532697900&hl=en&as_sdt=0,5 | 1 | 2,022 |
Near-Optimal Multi-Agent Learning for Safe Coverage Control | 1 | neurips | 1 | 0 | 2023-06-16 22:58:25.943000 | https://github.com/manish-pra/safemac | 7 | Near-Optimal Multi-Agent Learning for Safe Coverage Control | https://scholar.google.com/scholar?cluster=9831092712630856956&hl=en&as_sdt=0,33 | 2 | 2,022 |
Beyond spectral gap: the role of the topology in decentralized learning | 6 | neurips | 0 | 0 | 2023-06-16 22:58:26.155000 | https://github.com/epfml/topology-in-decentralized-learning | 6 | Beyond spectral gap: The role of the topology in decentralized learning | https://scholar.google.com/scholar?cluster=1362974330315569640&hl=en&as_sdt=0,44 | 3 | 2,022 |
Periodic Graph Transformers for Crystal Material Property Prediction | 11 | neurips | 3 | 1 | 2023-06-16 22:58:26.366000 | https://github.com/YKQ98/Matformer | 47 | Periodic Graph Transformers for Crystal Material Property Prediction | https://scholar.google.com/scholar?cluster=9619404030822952789&hl=en&as_sdt=0,38 | 5 | 2,022 |
Deliberated Domain Bridging for Domain Adaptive Semantic Segmentation | 6 | neurips | 5 | 2 | 2023-06-16 22:58:26.579000 | https://github.com/xiaoachen98/DDB | 52 | Deliberated Domain Bridging for Domain Adaptive Semantic Segmentation | https://scholar.google.com/scholar?cluster=12908675739985569858&hl=en&as_sdt=0,5 | 3 | 2,022 |
DreamShard: Generalizable Embedding Table Placement for Recommender Systems | 9 | neurips | 1 | 0 | 2023-06-16 22:58:26.790000 | https://github.com/daochenzha/dreamshard | 26 | Dreamshard: Generalizable embedding table placement for recommender systems | https://scholar.google.com/scholar?cluster=5762579680936509835&hl=en&as_sdt=0,5 | 3 | 2,022 |
Unsupervised Object Representation Learning using Translation and Rotation Group Equivariant VAE | 2 | neurips | 2 | 0 | 2023-06-16 22:58:27.001000 | https://github.com/smlc-nysbc/target-vae | 13 | Unsupervised Object Representation Learning using Translation and Rotation Group Equivariant VAE | https://scholar.google.com/scholar?cluster=4643268267251719909&hl=en&as_sdt=0,33 | 3 | 2,022 |
PointTAD: Multi-Label Temporal Action Detection with Learnable Query Points | 2 | neurips | 1 | 2 | 2023-06-16 22:58:27.213000 | https://github.com/mcg-nju/pointtad | 31 | Pointtad: Multi-label temporal action detection with learnable query points | https://scholar.google.com/scholar?cluster=4239613475999349516&hl=en&as_sdt=0,33 | 3 | 2,022 |
Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions | 0 | neurips | 0 | 0 | 2023-06-16 22:58:27.424000 | https://github.com/Stalence/NeuralExt | 4 | Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions | https://scholar.google.com/scholar?cluster=11142300575635398098&hl=en&as_sdt=0,5 | 1 | 2,022 |
Bi-directional Weakly Supervised Knowledge Distillation for Whole Slide Image Classification | 4 | neurips | 10 | 1 | 2023-06-16 22:58:27.635000 | https://github.com/miccaiif/weno | 34 | Bi-directional weakly supervised knowledge distillation for whole slide image classification | https://scholar.google.com/scholar?cluster=8347896172205638655&hl=en&as_sdt=0,36 | 3 | 2,022 |
PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient | 4 | neurips | 181 | 91 | 2023-06-16 22:58:27.846000 | https://github.com/open-mmlab/mmrazor | 1,088 | PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient | https://scholar.google.com/scholar?cluster=15197137746726757661&hl=en&as_sdt=0,10 | 19 | 2,022 |
NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis | 17 | neurips | 153 | 14 | 2023-06-16 22:58:28.057000 | https://github.com/microsoft/nuwa | 2,707 | Nuwa-infinity: Autoregressive over autoregressive generation for infinite visual synthesis | https://scholar.google.com/scholar?cluster=13240374514444074345&hl=en&as_sdt=0,7 | 143 | 2,022 |
Stability Analysis and Generalization Bounds of Adversarial Training | 3 | neurips | 0 | 0 | 2023-06-16 22:58:28.268000 | https://github.com/JiancongXiao/Stability-of-Adversarial-Training | 2 | Stability analysis and generalization bounds of adversarial training | https://scholar.google.com/scholar?cluster=4247121934226238783&hl=en&as_sdt=0,33 | 1 | 2,022 |
STaR: Bootstrapping Reasoning With Reasoning | 85 | neurips | 6 | 0 | 2023-06-16 22:58:28.484000 | https://github.com/ezelikman/STaR | 20 | Star: Bootstrapping reasoning with reasoning | https://scholar.google.com/scholar?cluster=6588800596180274414&hl=en&as_sdt=0,14 | 1 | 2,022 |
Efficient Meta Reinforcement Learning for Preference-based Fast Adaptation | 2 | neurips | 1 | 0 | 2023-06-16 22:58:28.695000 | https://github.com/stilwell-git/adaptation-with-noisy-oracle | 3 | Efficient Meta Reinforcement Learning for Preference-based Fast Adaptation | https://scholar.google.com/scholar?cluster=12503746065360790746&hl=en&as_sdt=0,5 | 2 | 2,022 |
Weakly Supervised Representation Learning with Sparse Perturbations | 11 | neurips | 0 | 0 | 2023-06-16 22:58:28.906000 | https://github.com/ahujak/wsrl | 0 | Weakly supervised representation learning with sparse perturbations | https://scholar.google.com/scholar?cluster=5928274395682008683&hl=en&as_sdt=0,41 | 1 | 2,022 |
Watermarking for Out-of-distribution Detection | 4 | neurips | 2 | 0 | 2023-06-16 22:58:29.117000 | https://github.com/qizhouwang/watermarking | 10 | Watermarking for Out-of-distribution Detection | https://scholar.google.com/scholar?cluster=14042029283291490588&hl=en&as_sdt=0,33 | 1 | 2,022 |
EHRSQL: A Practical Text-to-SQL Benchmark for Electronic Health Records | 2 | neurips | 9 | 0 | 2023-06-16 22:58:29.329000 | https://github.com/glee4810/EHRSQL | 36 | EHRSQL: A Practical Text-to-SQL Benchmark for Electronic Health Records | https://scholar.google.com/scholar?cluster=8956258088205666681&hl=en&as_sdt=0,23 | 3 | 2,022 |
Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability | 2 | neurips | 1 | 0 | 2023-06-16 22:58:29.540000 | https://github.com/LevinRoman/parameter-space-saliency | 21 | Where do Models go wrong? Parameter-space saliency maps for explainability | https://scholar.google.com/scholar?cluster=6375709581845585510&hl=en&as_sdt=0,5 | 2 | 2,022 |
Using Embeddings for Causal Estimation of Peer Influence in Social Networks | 2 | neurips | 3 | 0 | 2023-06-16 22:58:29.751000 | https://github.com/irinacristali/peer-contagion-on-networks | 6 | Using embeddings for causal estimation of peer influence in social networks | https://scholar.google.com/scholar?cluster=10956063829097823219&hl=en&as_sdt=0,15 | 1 | 2,022 |
Quo Vadis: Is Trajectory Forecasting the Key Towards Long-Term Multi-Object Tracking? | 5 | neurips | 1 | 0 | 2023-06-16 22:58:29.962000 | https://github.com/dendorferpatrick/quovadis | 19 | Quo Vadis: Is Trajectory Forecasting the Key Towards Long-Term Multi-Object Tracking? | https://scholar.google.com/scholar?cluster=17768927827009981298&hl=en&as_sdt=0,14 | 3 | 2,022 |
Wasserstein Iterative Networks for Barycenter Estimation | 11 | neurips | 0 | 1 | 2023-06-16 22:58:30.174000 | https://github.com/iamalexkorotin/wassersteiniterativenetworks | 3 | Wasserstein iterative networks for barycenter estimation | https://scholar.google.com/scholar?cluster=6505548225666677645&hl=en&as_sdt=0,33 | 2 | 2,022 |
OpenXAI: Towards a Transparent Evaluation of Model Explanations | 14 | neurips | 21 | 4 | 2023-06-16 22:58:30.402000 | https://github.com/ai4life-group/openxai | 158 | Openxai: Towards a transparent evaluation of model explanations | https://scholar.google.com/scholar?cluster=1602716306137073411&hl=en&as_sdt=0,15 | 6 | 2,022 |
The Hessian Screening Rule | 1 | neurips | 0 | 0 | 2023-06-16 22:58:30.614000 | https://github.com/jolars/HessianScreening | 2 | The hessian screening rule | https://scholar.google.com/scholar?cluster=4519092645139921267&hl=en&as_sdt=0,5 | 3 | 2,022 |
Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging | 6 | neurips | 0 | 0 | 2023-06-16 22:58:30.825000 | https://github.com/totilas/muffliato | 0 | Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging | https://scholar.google.com/scholar?cluster=1367771846266948746&hl=en&as_sdt=0,5 | 1 | 2,022 |
What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment | 5 | neurips | 0 | 0 | 2023-06-16 22:58:31.036000 | https://github.com/causalml/boundsonfractionnegativelyaffected | 1 | What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment | https://scholar.google.com/scholar?cluster=15108195108201398305&hl=en&as_sdt=0,33 | 0 | 2,022 |
Training Subset Selection for Weak Supervision | 7 | neurips | 1 | 0 | 2023-06-16 22:58:31.247000 | https://github.com/hunterlang/weaksup-subset-selection | 11 | Training Subset Selection for Weak Supervision | https://scholar.google.com/scholar?cluster=8350401146899292084&hl=en&as_sdt=0,33 | 1 | 2,022 |
Expansion and Shrinkage of Localization for Weakly-Supervised Semantic Segmentation | 7 | neurips | 0 | 1 | 2023-06-16 22:58:31.460000 | https://github.com/tyroneli/esol_wsss | 13 | Expansion and Shrinkage of Localization for Weakly-Supervised Semantic Segmentation | https://scholar.google.com/scholar?cluster=7949251840753978462&hl=en&as_sdt=0,14 | 4 | 2,022 |
RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning | 23 | neurips | 5 | 0 | 2023-06-16 22:58:31.671000 | https://github.com/marc-rigter/rambo | 13 | Rambo-rl: Robust adversarial model-based offline reinforcement learning | https://scholar.google.com/scholar?cluster=10956894200939947900&hl=en&as_sdt=0,5 | 3 | 2,022 |
Improved techniques for deterministic l2 robustness | 2 | neurips | 0 | 0 | 2023-06-16 22:58:31.882000 | https://github.com/singlasahil14/improved_l2_robustness | 2 | Improved techniques for deterministic l2 robustness | https://scholar.google.com/scholar?cluster=7826478224730238594&hl=en&as_sdt=0,5 | 1 | 2,022 |
Normalizing Flows for Knockoff-free Controlled Feature Selection | 1 | neurips | 2 | 1 | 2023-06-16 22:58:32.093000 | https://github.com/dereklhansen/flowselect | 6 | Normalizing flows for knockoff-free controlled feature selection | https://scholar.google.com/scholar?cluster=1427873937634321585&hl=en&as_sdt=0,5 | 1 | 2,022 |
Efficient Architecture Search for Diverse Tasks | 5 | neurips | 3 | 0 | 2023-06-16 22:58:32.305000 | https://github.com/sjunhongshen/dash | 20 | Efficient architecture search for diverse tasks | https://scholar.google.com/scholar?cluster=6159039417231853231&hl=en&as_sdt=0,39 | 1 | 2,022 |
Inherently Explainable Reinforcement Learning in Natural Language | 4 | neurips | 0 | 0 | 2023-06-16 22:58:32.516000 | https://github.com/xiangyu-peng/hex-rl | 5 | Inherently explainable reinforcement learning in natural language | https://scholar.google.com/scholar?cluster=14816477869397516232&hl=en&as_sdt=0,10 | 1 | 2,022 |
On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting | 7 | neurips | 21 | 0 | 2023-06-16 22:58:32.727000 | https://github.com/naver/gdc | 108 | On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting | https://scholar.google.com/scholar?cluster=852205239586657946&hl=en&as_sdt=0,51 | 10 | 2,022 |
Ask4Help: Learning to Leverage an Expert for Embodied Tasks | 2 | neurips | 0 | 0 | 2023-06-16 22:58:32.939000 | https://github.com/allenai/ask4help | 17 | Ask4help: Learning to leverage an expert for embodied tasks | https://scholar.google.com/scholar?cluster=893074409326064845&hl=en&as_sdt=0,33 | 3 | 2,022 |
Active Bayesian Causal Inference | 7 | neurips | 2 | 0 | 2023-06-16 22:58:33.150000 | https://github.com/chritoth/active-bayesian-causal-inference | 21 | Active Bayesian Causal Inference | https://scholar.google.com/scholar?cluster=14185975867772832007&hl=en&as_sdt=0,5 | 2 | 2,022 |
LogiGAN: Learning Logical Reasoning via Adversarial Pre-training | 3 | neurips | 58 | 10 | 2023-06-16 22:58:33.361000 | https://github.com/microsoft/ContextualSP | 310 | Logigan: Learning logical reasoning via adversarial pre-training | https://scholar.google.com/scholar?cluster=16806536241461518439&hl=en&as_sdt=0,5 | 15 | 2,022 |
FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness | 88 | neurips | 308 | 111 | 2023-06-16 22:58:33.573000 | https://github.com/hazyresearch/flash-attention | 3,654 | Flashattention: Fast and memory-efficient exact attention with io-awareness | https://scholar.google.com/scholar?cluster=4436654227589737701&hl=en&as_sdt=0,5 | 67 | 2,022 |
Self-Supervised Visual Representation Learning with Semantic Grouping | 14 | neurips | 6 | 4 | 2023-06-16 22:58:33.784000 | https://github.com/CVMI-Lab/SlotCon | 76 | Self-supervised visual representation learning with semantic grouping | https://scholar.google.com/scholar?cluster=11920603760559197380&hl=en&as_sdt=0,5 | 3 | 2,022 |
Learning Consistency-Aware Unsigned Distance Functions Progressively from Raw Point Clouds | 8 | neurips | 2 | 0 | 2023-06-16 22:58:33.995000 | https://github.com/junshengzhou/cap-udf | 37 | Learning Consistency-Aware Unsigned Distance Functions Progressively from Raw Point Clouds | https://scholar.google.com/scholar?cluster=3947486102565885083&hl=en&as_sdt=0,47 | 3 | 2,022 |
Multi-Agent Reinforcement Learning is a Sequence Modeling Problem | 26 | neurips | 26 | 4 | 2023-06-16 22:58:34.209000 | https://github.com/pku-marl/multi-agent-transformer | 147 | Multi-agent reinforcement learning is a sequence modeling problem | https://scholar.google.com/scholar?cluster=14170076594522259195&hl=en&as_sdt=0,39 | 7 | 2,022 |
Fast Bayesian Inference with Batch Bayesian Quadrature via Kernel Recombination | 6 | neurips | 1 | 0 | 2023-06-16 22:58:34.420000 | https://github.com/ma921/basq | 11 | Fast Bayesian inference with batch Bayesian quadrature via kernel recombination | https://scholar.google.com/scholar?cluster=9942624906464459479&hl=en&as_sdt=0,14 | 1 | 2,022 |
Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks | 0 | neurips | 0 | 0 | 2023-06-16 22:58:34.632000 | https://github.com/mlohaus/disparatetreatment | 0 | Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks | https://scholar.google.com/scholar?cluster=8811649943714147381&hl=en&as_sdt=0,5 | 1 | 2,022 |
Fast Instrument Learning with Faster Rates | 1 | neurips | 0 | 0 | 2023-06-16 22:58:34.843000 | https://github.com/meta-inf/fil | 0 | Fast Instrument Learning with Faster Rates | https://scholar.google.com/scholar?cluster=6761597304576361829&hl=en&as_sdt=0,31 | 1 | 2,022 |
AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition | 62 | neurips | 13 | 12 | 2023-06-16 22:58:35.055000 | https://github.com/ShoufaChen/AdaptFormer | 194 | Adaptformer: Adapting vision transformers for scalable visual recognition | https://scholar.google.com/scholar?cluster=17752815312316743733&hl=en&as_sdt=0,47 | 6 | 2,022 |
Symmetry Teleportation for Accelerated Optimization | 2 | neurips | 3 | 0 | 2023-06-16 22:58:35.266000 | https://github.com/rose-stl-lab/symmetry-teleportation | 6 | Symmetry Teleportation for Accelerated Optimization | https://scholar.google.com/scholar?cluster=1373110452926814805&hl=en&as_sdt=0,5 | 2 | 2,022 |
Wasserstein Logistic Regression with Mixed Features | 1 | neurips | 0 | 0 | 2023-06-16 22:58:35.477000 | https://github.com/selvi-aras/wassersteinlr | 3 | Wasserstein logistic regression with mixed features | https://scholar.google.com/scholar?cluster=7859002643668729721&hl=en&as_sdt=0,6 | 3 | 2,022 |
Trajectory Inference via Mean-field Langevin in Path Space | 5 | neurips | 0 | 0 | 2023-06-16 22:58:35.689000 | https://github.com/zsteve/mfl | 1 | Trajectory inference via mean-field Langevin in path space | https://scholar.google.com/scholar?cluster=14010724729856799724&hl=en&as_sdt=0,33 | 1 | 2,022 |
SwinTrack: A Simple and Strong Baseline for Transformer Tracking | 79 | neurips | 37 | 25 | 2023-06-16 22:58:35.902000 | https://github.com/litinglin/swintrack | 213 | Swintrack: A simple and strong baseline for transformer tracking | https://scholar.google.com/scholar?cluster=6278077695056066484&hl=en&as_sdt=0,44 | 5 | 2,022 |
Unknown-Aware Domain Adversarial Learning for Open-Set Domain Adaptation | 3 | neurips | 0 | 2 | 2023-06-16 22:58:36.115000 | https://github.com/joonho-jang/uadal | 8 | Unknown-aware domain adversarial learning for open-set domain adaptation | https://scholar.google.com/scholar?cluster=17997080445903067240&hl=en&as_sdt=0,33 | 2 | 2,022 |
Poisson Flow Generative Models | 17 | neurips | 60 | 3 | 2023-06-16 22:58:36.326000 | https://github.com/newbeeer/poisson_flow | 747 | Poisson flow generative models | https://scholar.google.com/scholar?cluster=14573129279323287718&hl=en&as_sdt=0,5 | 15 | 2,022 |
Invertible Monotone Operators for Normalizing Flows | 0 | neurips | 0 | 0 | 2023-06-16 22:58:36.538000 | https://github.com/mlvlab/monotoneflows | 7 | Invertible Monotone Operators for Normalizing Flows | https://scholar.google.com/scholar?cluster=9497056797525394758&hl=en&as_sdt=0,5 | 3 | 2,022 |
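
The table above can be queried programmatically. Below is a minimal sketch that assumes the table has been exported to a CSV file; the file name `neurips_2022_repos.csv` is a placeholder for illustration, not part of the dataset. It loads the columns listed in the schema and ranks entries by GitHub stars with pandas.

```python
import pandas as pd

# Placeholder path: substitute the actual CSV export of this dataset.
df = pd.read_csv("neurips_2022_repos.csv")

# Columns as shown in the table above.
cols = [
    "title", "citations_google_scholar", "conference", "forks", "issues",
    "lastModified", "repo_url", "stars", "title_google_scholar",
    "url_google_scholar", "watchers", "year",
]
df = df[cols]

# Example query: NeurIPS papers with at least 10 Google Scholar citations,
# sorted by repository stars in descending order.
popular = (
    df[(df["conference"] == "neurips") & (df["citations_google_scholar"] >= 10)]
    .sort_values("stars", ascending=False)
    .loc[:, ["title", "citations_google_scholar", "stars", "repo_url"]]
)
print(popular.head(10).to_string(index=False))
```

If the dataset's repository ID on the Hugging Face Hub is known, the same rows could instead be loaded with the `datasets` library and converted to a DataFrame via `to_pandas()`; the CSV path is used here only because the ID is not given in this excerpt.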