bibtex_url
null | proceedings
stringlengths 42
42
| bibtext
stringlengths 197
792
| abstract
stringlengths 303
3.45k
| title
stringlengths 10
159
| authors
listlengths 1
28
⌀ | id
stringclasses 44
values | type
stringclasses 16
values | arxiv_id
stringlengths 0
10
| GitHub
listlengths 1
1
| paper_page
stringclasses 444
values | n_linked_authors
int64 -1
9
| upvotes
int64 -1
42
| num_comments
int64 -1
13
| n_authors
int64 -1
92
| paper_page_exists_pre_conf
int64 0
1
| Models
listlengths 0
100
| Datasets
listlengths 0
11
| Spaces
listlengths 0
100
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
https://openreview.net/forum?id=m0vfXMrLwF
|
@inproceedings{
rastegar2023learn,
title={Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery},
author={Sarah Rastegar and Hazel Doughty and Cees G. M. Snoek},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=m0vfXMrLwF}
}
|
In the quest for unveiling novel categories at test time, we confront the inherent limitations of traditional supervised recognition models that are restricted by a predefined category set. While strides have been made in the realms of self-supervised and open-world learning towards test-time category discovery, a crucial yet often overlooked question persists: what exactly delineates a category? In this paper, we conceptualize a category through the lens of optimization, viewing it as an optimal solution to a well-defined problem. Harnessing this unique conceptualization, we propose a novel, efficient and self-supervised method capable of discovering previously unknown categories at test time. A salient feature of our approach is the assignment of minimum length category codes to individual data instances, which encapsulates the implicit category hierarchy prevalent in real-world datasets. This mechanism affords us enhanced control over category granularity, thereby equipping our model to handle fine-grained categories adeptly. Experimental evaluations, bolstered by state-of-the-art benchmark comparisons, testify to the efficacy of our solution in managing unknown categories at test time. Furthermore, we fortify our proposition with a theoretical foundation, providing proof of its optimality. Our code is available at: https://github.com/SarahRastegar/InfoSieve.
|
Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery
|
[
"Sarah Rastegar",
"Hazel Doughty",
"Cees G. M. Snoek"
] |
Conference
|
poster
|
2310.19776
|
[
"https://github.com/sarahrastegar/infosieve"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=m0RbqrUM26
|
@inproceedings{
li2023styletts,
title={Style{TTS} 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models},
author={Yinghao Aaron Li and Cong Han and Vinay S Raghavan and Gavin Mischler and Nima Mesgarani},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=m0RbqrUM26}
}
|
In this paper, we present StyleTTS 2, a text-to-speech (TTS) model that leverages style diffusion and adversarial training with large speech language models (SLMs) to achieve human-level TTS synthesis. StyleTTS 2 differs from its predecessor by modeling styles as a latent random variable through diffusion models to generate the most suitable style for the text without requiring reference speech, achieving efficient latent diffusion while benefiting from the diverse speech synthesis offered by diffusion models. Furthermore, we employ large pre-trained SLMs, such as WavLM, as discriminators with our novel differentiable duration modeling for end-to-end training, resulting in improved speech naturalness. StyleTTS 2 surpasses human recordings on the single-speaker LJSpeech dataset and matches it on the multispeaker VCTK dataset as judged by native English speakers. Moreover, when trained on the LibriTTS dataset, our model outperforms previous publicly available models for zero-shot speaker adaptation. This work achieves the first human-level TTS on both single and multispeaker datasets, showcasing the potential of style diffusion and adversarial training with large SLMs. The audio demos and source code are available at https://styletts2.github.io/.
|
StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
|
[
"Yinghao Aaron Li",
"Cong Han",
"Vinay S Raghavan",
"Gavin Mischler",
"Nima Mesgarani"
] |
Conference
|
poster
|
2306.07691
|
[
""
] |
https://huggingface.co/papers/2306.07691
| 0 | 4 | 0 | 5 | 1 |
[
"ShoukanLabs/Vokan"
] |
[] |
[
"styletts2/styletts2",
"ShoukanLabs/Vokan",
"Korakoe/Vokan-V0.5",
"21world/styletts2",
"ve-dot-exe/styletts2",
"devinschumacher/styletts2-voice-cloning",
"otioss/Accent_App",
"shivank-pixis/styletts2",
"rohitmenonhart/styletts2-f1",
"antoniomae/styletts2VOICE-CLONE22",
"GaboChoropan/styletts2"
] |
null |
https://openreview.net/forum?id=lzqaQRsITh
|
@inproceedings{
chu2023diffcomplete,
title={DiffComplete: Diffusion-based Generative 3D Shape Completion},
author={Ruihang Chu and Enze Xie and Shentong Mo and Zhenguo Li and Matthias Nie{\ss}ner and Chi-Wing Fu and Jiaya Jia},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lzqaQRsITh}
}
|
We introduce a new diffusion-based approach for shape completion on 3D range scans. Compared with prior deterministic and probabilistic methods, we strike a balance between realism, multi-modality, and high fidelity. We propose DiffComplete by casting shape completion as a generative task conditioned on the incomplete shape. Our key designs are two-fold. First, we devise a hierarchical feature aggregation mechanism to inject conditional features in a spatially-consistent manner. So, we can capture both local details and broader contexts of the conditional inputs to control the shape completion. Second, we propose an occupancy-aware fusion strategy in our model to enable the completion of multiple partial shapes and introduce higher flexibility on the input conditions. DiffComplete sets a new SOTA performance (e.g., 40% decrease on $l_1$ error) on two large-scale 3D shape completion benchmarks. Our completed shapes not only have a realistic outlook compared with the deterministic methods but also exhibit high similarity to the ground truths compared with the probabilistic alternatives. Further, DiffComplete has strong generalizability on objects of entirely unseen classes for both synthetic and real data, eliminating the need for model re-training in various applications.
|
DiffComplete: Diffusion-based Generative 3D Shape Completion
|
[
"Ruihang Chu",
"Enze Xie",
"Shentong Mo",
"Zhenguo Li",
"Matthias Nießner",
"Chi-Wing Fu",
"Jiaya Jia"
] |
Conference
|
poster
|
2306.16329
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lxGFGMMSVl
|
@inproceedings{
raya2023spontaneous,
title={Spontaneous symmetry breaking in generative diffusion models},
author={Gabriel Raya and Luca Ambrogioni},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lxGFGMMSVl}
}
|
Generative diffusion models have recently emerged as a leading approach for generating high-dimensional data. In this paper, we show that the dynamics of these models exhibit a spontaneous symmetry breaking that divides the generative dynamics into two distinct phases: 1) A linear steady-state dynamics around a central fixed-point and 2) an attractor dynamics directed towards the data manifold. These two "phases'' are separated by the change in stability of the central fixed-point, with the resulting window of instability being responsible for the diversity of the generated samples. Using both theoretical and empirical evidence, we show that an accurate simulation of the early dynamics does not significantly contribute to the final generation, since early fluctuations are reverted to the central fixed point. To leverage this insight, we propose a Gaussian late initialization scheme, which significantly improves model performance, achieving up to 3x FID improvements on fast samplers, while also increasing sample diversity (e.g., racial composition of generated CelebA images). Our work offers a new way to understand the generative dynamics of diffusion models that has the potential to bring about higher performance and less biased fast-samplers.
|
Spontaneous symmetry breaking in generative diffusion models
|
[
"Gabriel Raya",
"Luca Ambrogioni"
] |
Conference
|
poster
|
2305.19693
|
[
"https://github.com/gabrielraya/symmetry_breaking_diffusion_models"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lwg3ohkFRv
|
@inproceedings{
luo2023care,
title={{CARE}: Modeling Interacting Dynamics Under Temporal Environmental Variation},
author={Xiao Luo and Haixin Wang and Zijie Huang and Huiyu Jiang and Abhijeet Sadashiv Gangan and Song Jiang and Yizhou Sun},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lwg3ohkFRv}
}
|
Modeling interacting dynamical systems, such as fluid dynamics and intermolecular interactions, is a fundamental research problem for understanding and simulating complex real-world systems. Many of these systems can be naturally represented by dynamic graphs, and graph neural network-based approaches have been proposed and shown promising performance. However, most of these approaches assume the underlying dynamics does not change over time, which is unfortunately untrue. For example, a molecular dynamics can be affected by the environment temperature over the time. In this paper, we take an attempt to provide a probabilistic view for time-varying dynamics and propose a model Context-attended Graph ODE (CARE) for modeling time-varying interacting dynamical systems. In our CARE, we explicitly use a context variable to model time-varying environment and construct an encoder to initialize the context variable from historical trajectories. Furthermore, we employ a neural ODE model to depict the dynamic evolution of the context variable inferred from system states. This context variable is incorporated into a coupled ODE to simultaneously drive the evolution of systems. Comprehensive experiments on four datasets demonstrate the effectiveness of our proposed CARE compared with several state-of-the-art approaches.
|
CARE: Modeling Interacting Dynamics Under Temporal Environmental Variation
|
[
"Xiao Luo",
"Haixin Wang",
"Zijie Huang",
"Huiyu Jiang",
"Abhijeet Sadashiv Gangan",
"Song Jiang",
"Yizhou Sun"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=lvvaNwnP6M
|
@inproceedings{
ze2023hindex,
title={H-InDex: Visual Reinforcement Learning with Hand-Informed Representations for Dexterous Manipulation},
author={Yanjie Ze and Yuyao Liu and Ruizhe Shi and Jiaxin Qin and Zhecheng Yuan and Jiashun Wang and Huazhe Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lvvaNwnP6M}
}
|
Human hands possess remarkable dexterity and have long served as a source of inspiration for robotic manipulation. In this work, we propose a human $\textbf{H}$and-$\textbf{In}$formed visual representation learning framework to solve difficult $\textbf{Dex}$terous manipulation tasks ($\textbf{H-InDex}$) with reinforcement learning. Our framework consists of three stages: $\textit{(i)}$ pre-training representations with 3D human hand pose estimation, $\textit{(ii)}$ offline adapting representations with self-supervised keypoint detection, and $\textit{(iii)}$ reinforcement learning with exponential moving average BatchNorm. The last two stages only modify $0.36$% parameters of the pre-trained representation in total, ensuring the knowledge from pre-training is maintained to the full extent. We empirically study $\textbf{12}$ challenging dexterous manipulation tasks and find that $\textbf{H-InDex}$ largely surpasses strong baseline methods and the recent visual foundation models for motor control. Code and videos are available at https://yanjieze.com/H-InDex .
|
H-InDex: Visual Reinforcement Learning with Hand-Informed Representations for Dexterous Manipulation
|
[
"Yanjie Ze",
"Yuyao Liu",
"Ruizhe Shi",
"Jiaxin Qin",
"Zhecheng Yuan",
"Jiashun Wang",
"Huazhe Xu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=luyXPdkNSN
|
@inproceedings{
li2023knearestneighbor,
title={K-Nearest-Neighbor Local Sampling Based Conditional Independence Testing},
author={Shuai Li and Yingjie Zhang and Hongtu Zhu and Christina Dan Wang and Hai Shu and Ziqi Chen and Zhuoran Sun and Yanfeng Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=luyXPdkNSN}
}
|
Conditional independence (CI) testing is a fundamental task in statistics and machine learning, but its effectiveness is hindered by the challenges posed by high-dimensional conditioning variables and limited data samples. This article introduces a novel testing approach to address these challenges and enhance control of the type I error while achieving high power under alternative hypotheses. The proposed approach incorporates a computationally efficient classifier-based conditional mutual information (CMI) estimator, capable of capturing intricate dependence structures among variables. To approximate a distribution encoding the null hypothesis, a $k$-nearest-neighbor local sampling strategy is employed. An important advantage of this approach is its ability to operate without assumptions about distribution forms or feature dependencies. Furthermore, it eliminates the need to derive asymptotic null distributions for the estimated CMI and avoids dataset splitting, making it particularly suitable for small datasets. The method presented in this article demonstrates asymptotic control of the type I error and consistency against all alternative hypotheses. Extensive analyses using both synthetic and real data highlight the computational efficiency of the proposed test. Moreover, it outperforms existing state-of-the-art methods in terms of type I and II errors, even in scenarios with high-dimensional conditioning sets. Additionally, the proposed approach exhibits robustness in the presence of heavy-tailed data.
|
K-Nearest-Neighbor Local Sampling Based Conditional Independence Testing
|
[
"Shuai Li",
"Yingjie Zhang",
"Hongtu Zhu",
"Christina Dan Wang",
"Hai Shu",
"Ziqi Chen",
"Zhuoran Sun",
"Yanfeng Yang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=lpx9LZPVtZ
|
@inproceedings{
xiao2023spa,
title={{SPA}: A Graph Spectral Alignment Perspective for Domain Adaptation},
author={Zhiqing Xiao and Haobo Wang and Ying Jin and Lei Feng and Gang Chen and Fei Huang and Junbo Zhao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lpx9LZPVtZ}
}
|
Unsupervised domain adaptation (UDA) is a pivotal form in machine learning to extend the in-domain model to the distinctive target domains where the data distributions differ. Most prior works focus on capturing the inter-domain transferability but largely overlook rich intra-domain structures, which empirically results in even worse discriminability. In this work, we introduce a novel graph SPectral Alignment (SPA) framework to tackle the tradeoff. The core of our method is briefly condensed as follows: (i)-by casting the DA problem to graph primitives, SPA composes a coarse graph alignment mechanism with a novel spectral regularizer towards aligning the domain graphs in eigenspaces; (ii)-we further develop a fine-grained message propagation module --- upon a novel neighbor-aware self-training mechanism --- in order for enhanced discriminability in the target domain. On standardized benchmarks, the extensive experiments of SPA demonstrate that its performance has surpassed the existing cutting-edge DA methods. Coupled with dense model analysis, we conclude that our approach indeed possesses superior efficacy, robustness, discriminability, and transferability. Code and data are available at: https://github.com/CrownX/SPA.
|
SPA: A Graph Spectral Alignment Perspective for Domain Adaptation
|
[
"Zhiqing Xiao",
"Haobo Wang",
"Ying Jin",
"Lei Feng",
"Gang Chen",
"Fei Huang",
"Junbo Zhao"
] |
Conference
|
poster
|
2310.17594
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lp9GR2t3hn
|
@inproceedings{
du2023protodiff,
title={ProtoDiff: Learning to Learn Prototypical Networks by Task-Guided Diffusion},
author={Yingjun Du and Zehao Xiao and Shengcai Liao and Cees G. M. Snoek},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lp9GR2t3hn}
}
|
Prototype-based meta-learning has emerged as a powerful technique for addressing few-shot learning challenges. However, estimating a deterministic prototype using a simple average function from a limited number of examples remains a fragile process. To overcome this limitation, we introduce ProtoDiff, a novel framework that leverages a task-guided diffusion model during the meta-training phase to gradually generate prototypes, thereby providing efficient class representations. Specifically, a set of prototypes is optimized to achieve per-task prototype overfitting, enabling accurately obtaining the overfitted prototypes for individual tasks.
Furthermore, we introduce a task-guided diffusion process within the prototype space, enabling the meta-learning of a generative process that transitions from a vanilla prototype to an overfitted prototype. ProtoDiff gradually generates task-specific prototypes from random noise during the meta-test stage, conditioned on the limited samples available for the new task. Furthermore, to expedite training and enhance ProtoDiff's performance, we propose the utilization of residual prototype learning, which leverages the sparsity of the residual prototype. We conduct thorough ablation studies to demonstrate its ability to accurately capture the underlying prototype distribution and enhance generalization. The new state-of-the-art performance on within-domain, cross-domain, and few-task few-shot classification further substantiates the benefit of ProtoDiff.
|
ProtoDiff: Learning to Learn Prototypical Networks by Task-Guided Diffusion
|
[
"Yingjun Du",
"Zehao Xiao",
"Shengcai Liao",
"Cees G. M. Snoek"
] |
Conference
|
poster
|
2306.14770
|
[
"https://github.com/ydu-uva/protodiff"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=loxinzXlCx
|
@inproceedings{
tyurin2023a,
title={A Computation and Communication Efficient Method for Distributed Nonconvex Problems in the Partial Participation Setting},
author={Alexander Tyurin and Peter Richt{\'a}rik},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=loxinzXlCx}
}
|
We present a new method that includes three key components of distributed optimization and federated learning: variance reduction of stochastic gradients, partial participation, and compressed communication. We prove that the new method has optimal oracle complexity and state-of-the-art communication complexity in the partial participation setting. Regardless of the communication compression feature, our method successfully combines variance reduction and partial participation: we get the optimal oracle complexity, never need the participation of all nodes, and do not require the bounded gradients (dissimilarity) assumption.
|
A Computation and Communication Efficient Method for Distributed Nonconvex Problems in the Partial Participation Setting
|
[
"Alexander Tyurin",
"Peter Richtárik"
] |
Conference
|
poster
|
2205.15580
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=loixpHDZKj
|
@inproceedings{
li2023robust,
title={Robust Learning for Smoothed Online Convex Optimization with Feedback Delay},
author={Pengfei Li and Jianyi Yang and Adam Wierman and Shaolei Ren},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=loixpHDZKj}
}
|
We study a general form of Smoothed Online Convex Optimization, a.k.a. SOCO, including multi-step switching costs and feedback delay. We propose a novel machine learning (ML) augmented online algorithm, Robustness-Constrained Learning (RCL), which combines untrusted ML predictions with a trusted expert online algorithm via constrained projection to robustify the ML prediction. Specifically, we prove that RCL is able to guarantee $(1+\lambda)$-competitiveness against any given expert for any $\lambda>0$, while also explicitly training the ML model in a robustification-aware manner to improve the average-case performance. Importantly, RCL is the first ML-augmented algorithm with a provable robustness guarantee in the case of multi-step switching cost and feedback delay. We demonstrate the improvement of RCL in both robustness and average performance using battery management as a case study.
|
Robust Learning for Smoothed Online Convex Optimization with Feedback Delay
|
[
"Pengfei Li",
"Jianyi Yang",
"Adam Wierman",
"Shaolei Ren"
] |
Conference
|
poster
|
2310.20098
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lnTpBUge5G
|
@inproceedings{
yu2023sample,
title={Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms},
author={Qian Yu and Yining Wang and Baihe Huang and Qi Lei and Jason D. Lee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lnTpBUge5G}
}
|
In stochastic zeroth-order optimization, a problem of practical relevance is understanding how to fully exploit the local geometry of the underlying objective function. We consider a fundamental setting in which the objective function is quadratic, and provide the first tight characterization of the optimal Hessian-dependent sample complexity. Our contribution is twofold. First, from an information-theoretic point of view, we prove tight lower bounds on Hessian-dependent complexities by introducing a concept called \emph{energy allocation}, which captures the interaction between the searching algorithm and the geometry of objective functions. A matching upper bound is obtained by solving the optimal energy spectrum. Then, algorithmically, we show the existence of a Hessian-independent algorithm that universally achieves the asymptotic optimal sample complexities for all Hessian instances. The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions, which are enabled by a truncation method.
|
Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms
|
[
"Qian Yu",
"Yining Wang",
"Baihe Huang",
"Qi Lei",
"Jason D. Lee"
] |
Conference
|
poster
|
2306.12383
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lmXNcKhj4c
|
@inproceedings{
chiu2023flexible,
title={Flexible Attention-Based Multi-Policy Fusion for Efficient Deep Reinforcement Learning},
author={Zih-Yun Chiu and Yi-Lin Tuan and William Yang Wang and Michael C. Yip},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lmXNcKhj4c}
}
|
Reinforcement learning (RL) agents have long sought to approach the efficiency of human learning. Humans are great observers who can learn by aggregating external knowledge from various sources, including observations from others' policies of attempting a task. Prior studies in RL have incorporated external knowledge policies to help agents improve sample efficiency. However, it remains non-trivial to perform arbitrary combinations and replacements of those policies, an essential feature for generalization and transferability. In this work, we present Knowledge-Grounded RL (KGRL), an RL paradigm fusing multiple knowledge policies and aiming for human-like efficiency and flexibility. We propose a new actor architecture for KGRL, Knowledge-Inclusive Attention Network (KIAN), which allows free knowledge rearrangement due to embedding-based attentive action prediction. KIAN also addresses entropy imbalance, a problem arising in maximum entropy KGRL that hinders an agent from efficiently exploring the environment, through a new design of policy distributions. The experimental results demonstrate that KIAN outperforms alternative methods incorporating external knowledge policies and achieves efficient and flexible learning. Our implementation is available at https://github.com/Pascalson/KGRL.git .
|
Flexible Attention-Based Multi-Policy Fusion for Efficient Deep Reinforcement Learning
|
[
"Zih-Yun Chiu",
"Yi-Lin Tuan",
"William Yang Wang",
"Michael C. Yip"
] |
Conference
|
poster
|
2210.03729
|
[
"https://github.com/pascalson/kgrl"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=llP6lmMiXE
|
@inproceedings{
sanborn2023a,
title={A General Framework for Robust G-Invariance in G-Equivariant Networks},
author={Sophia Sanborn and Nina Miolane},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=llP6lmMiXE}
}
|
We introduce a general method for achieving robust group-invariance in group-equivariant convolutional neural networks ($G$-CNNs), which we call the $G$-triple-correlation ($G$-TC) layer. The approach leverages the theory of the triple-correlation on groups, which is the unique, lowest-degree polynomial invariant map that is also \textit{complete}. Many commonly used invariant maps\textemdash such as the \texttt{max}\textemdash are incomplete: they remove both group and signal structure. A complete invariant, by contrast, removes only the variation due to the actions of the group, while preserving all information about the structure of the signal. The completeness of the triple correlation endows the $G$-TC layer with strong robustness, which can be observed in its resistance to invariance-based adversarial attacks. In addition, we observe that it yields measurable improvements in classification accuracy over standard Max $G$-Pooling in $G$-CNN architectures. We provide a general and efficient implementation of the method for any discretized group, which requires only a table defining the group's product structure. We demonstrate the benefits of this method for $G$-CNNs defined on both commutative and non-commutative groups\textemdash $SO(2)$, $O(2)$, $SO(3)$, and $O(3)$ (discretized as the cyclic $C8$, dihedral $D16$, chiral octahedral $O$ and full octahedral $O_h$ groups)\textemdash acting on $\mathbb{R}^2$ and $\mathbb{R}^3$ on both $G$-MNIST and $G$-ModelNet10 datasets.
|
A General Framework for Robust G-Invariance in G-Equivariant Networks
|
[
"Sophia Sanborn",
"Nina Miolane"
] |
Conference
|
poster
|
2310.18564
|
[
"https://github.com/gtc-invariance/gtc-invariance"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lkEiOZlmPm
|
@inproceedings{
makarychev2023singlepass,
title={Single-Pass Pivot Algorithm for Correlation Clustering. Keep it simple!},
author={Konstantin Makarychev and Sayak Chakrabarty},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lkEiOZlmPm}
}
|
We show that a simple single-pass semi-streaming variant of the Pivot algorithm for Correlation Clustering gives a (3+eps)-approximation using O(n/eps) words of memory. This is a slight improvement over the recent results of Cambus, Kuhn, Lindy, Pai, and Uitto, who gave a (3+eps)-approximation using O(n log n) words of memory, and Behnezhad, Charikar, Ma, and Tan, who gave a 5-approximation using O(n) words of memory. One of the main contributions of our paper is that the algorithm and its analysis are simple and easy to understand.
|
Single-Pass Pivot Algorithm for Correlation Clustering. Keep it simple!
|
[
"Konstantin Makarychev",
"Sayak Chakrabarty"
] |
Conference
|
poster
|
2305.13560
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lkBygTc0SI
|
@inproceedings{
meehan2023do,
title={Do {SSL} Models Have D\'ej\`a Vu? A Case of Unintended Memorization in Self-supervised Learning},
author={Casey Meehan and Florian Bordes and Pascal Vincent and Kamalika Chaudhuri and Chuan Guo},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lkBygTc0SI}
}
|
Self-supervised learning (SSL) algorithms can produce useful image representations by learning to associate different parts of natural images with one another. However, when taken to the extreme, SSL models can unintendedly memorize specific parts in individual training samples rather than learning semantically meaningful associations. In this work, we perform a systematic study of the unintended memorization of image-specific information in SSL models -- which we refer to as déjà vu memorization. Concretely, we show that given the trained model and a crop of a training image containing only the background (e.g., water, sky, grass), it is possible to infer the foreground object with high accuracy or even visually reconstruct it. Furthermore, we show that déjà vu memorization is common to different SSL algorithms, is exacerbated by certain design choices, and cannot be detected by conventional techniques for evaluating representation quality. Our study of déjà vu memorization reveals previously unknown privacy risks in SSL models, as well as suggests potential practical mitigation strategies.
|
Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning
|
[
"Casey Meehan",
"Florian Bordes",
"Pascal Vincent",
"Kamalika Chaudhuri",
"Chuan Guo"
] |
Conference
|
poster
|
2304.13850
|
[
"https://github.com/facebookresearch/dejavu"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lk6KDG6qI7
|
@inproceedings{
cheng2023a,
title={A Theoretical Analysis of the Test Error of Finite-Rank Kernel Ridge Regression},
author={Tin Sum Cheng and Aurelien Lucchi and Anastasis Kratsios and Ivan Dokmani{\'c} and David Belius},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lk6KDG6qI7}
}
|
Existing statistical learning guarantees for general kernel regressors often yield loose bounds when used with finite-rank kernels. Yet, finite-rank kernels naturally appear in a number of machine learning problems, e.g. when fine-tuning a pre-trained deep neural network's last layer to adapt it to a novel task when performing transfer learning. We address this gap for finite-rank kernel ridge regression (KRR) by deriving sharp non-asymptotic upper and lower bounds for the KRR test error of any finite-rank KRR. Our bounds are tighter than previously derived bounds on finite-rank KRR and, unlike comparable results, they also remain valid for any regularization parameters.
|
A Theoretical Analysis of the Test Error of Finite-Rank Kernel Ridge Regression
|
[
"Tin Sum Cheng",
"Aurelien Lucchi",
"Anastasis Kratsios",
"Ivan Dokmanić",
"David Belius"
] |
Conference
|
poster
|
2310.00987
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=ljgM3vNqfQ
|
@inproceedings{
lai2023nominality,
title={Nominality Score Conditioned Time Series Anomaly Detection by Point/Sequential Reconstruction},
author={Chih-Yu Lai and Fan-Keng Sun and Zhengqi Gao and Jeffrey Lang and Duane S Boning},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ljgM3vNqfQ}
}
|
Time series anomaly detection is challenging due to the complexity and variety of patterns that can occur. One major difficulty arises from modeling time-dependent relationships to find contextual anomalies while maintaining detection accuracy for point anomalies. In this paper, we propose a framework for unsupervised time series anomaly detection that utilizes point-based and sequence-based reconstruction models. The point-based model attempts to quantify point anomalies, and the sequence-based model attempts to quantify both point and contextual anomalies. Under the formulation that the observed time point is a two-stage deviated value from a nominal time point, we introduce a nominality score calculated from the ratio of a combined value of the reconstruction errors. We derive an induced anomaly score by further integrating the nominality score and anomaly score, then theoretically prove the superiority of the induced anomaly score over the original anomaly score under certain conditions. Extensive studies conducted on several public datasets show that the proposed framework outperforms most state-of-the-art baselines for time series anomaly detection.
|
Nominality Score Conditioned Time Series Anomaly Detection by Point/Sequential Reconstruction
|
[
"Chih-Yu Lai",
"Fan-Keng Sun",
"Zhengqi Gao",
"Jeffrey Lang",
"Duane S Boning"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=liMSqUuVg9
|
@inproceedings{
bai2023transformers,
title={Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection},
author={Yu Bai and Fan Chen and Huan Wang and Caiming Xiong and Song Mei},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=liMSqUuVg9}
}
|
Neural sequence models based on the transformer architecture have demonstrated remarkable \emph{in-context learning} (ICL) abilities, where they can perform new tasks when prompted with training and test examples, without any parameter update to the model. This work first provides a comprehensive statistical theory for transformers to perform ICL. Concretely, we show that transformers can implement a broad class of standard machine learning algorithms in context, such as least squares, ridge regression, Lasso, learning generalized linear models, and gradient descent on two-layer neural networks, with near-optimal predictive power on various in-context data distributions. Using an efficient implementation of in-context gradient descent as the underlying mechanism, our transformer constructions admit mild size bounds, and can be learned with polynomially many pretraining sequences.
Building on these ``base'' ICL algorithms, intriguingly, we show that transformers can implement more complex ICL procedures involving \emph{in-context algorithm selection}, akin to what a statistician can do in real life---A \emph{single} transformer can adaptively select different base ICL algorithms---or even perform qualitatively different tasks---on different input sequences, without any explicit prompting of the right algorithm or task. We both establish this in theory by explicit constructions, and also observe this phenomenon experimentally. In theory, we construct two general mechanisms for algorithm selection with concrete examples: pre-ICL testing, and post-ICL validation. As an example, we use the post-ICL validation mechanism to construct a transformer that can perform nearly Bayes-optimal ICL on a challenging task---noisy linear models with mixed noise levels. Experimentally, we demonstrate the strong in-context algorithm selection capabilities of standard transformer architectures.
|
Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection
|
[
"Yu Bai",
"Fan Chen",
"Huan Wang",
"Caiming Xiong",
"Song Mei"
] |
Conference
|
oral
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=leS8668NJm
|
@inproceedings{
jiao2023toward,
title={Toward Re-Identifying Any Animal},
author={Bingliang Jiao and Lingqiao Liu and Liying Gao and Ruiqi Wu and Guosheng Lin and PENG WANG and Yanning Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=leS8668NJm}
}
|
The current state of re-identification (ReID) models poses limitations to their applicability in the open world, as they are primarily designed and trained for specific categories like person or vehicle. In light of the importance of ReID technology for tracking wildlife populations and migration patterns, we propose a new task called ``Re-identify Any Animal in the Wild'' (ReID-AW). This task aims to develop a ReID model capable of handling any unseen wildlife category it encounters. To address this challenge, we have created a comprehensive dataset called Wildlife-71, which includes ReID data from 71 different wildlife categories. This dataset is the first of its kind to encompass multiple object categories in the realm of ReID. Furthermore, we have developed a universal re-identification model named UniReID specifically for the ReID-AW task. To enhance the model's adaptability to the target category, we employ a dynamic prompting mechanism using category-specific visual prompts. These prompts are generated based on knowledge gained from a set of pre-selected images within the target category. Additionally, we leverage explicit semantic knowledge derived from the large-scale pre-trained language model, GPT-4. This allows UniReID to focus on regions that are particularly useful for distinguishing individuals within the target category. Extensive experiments have demonstrated the remarkable generalization capability of our UniReID model. It showcases promising performance in handling arbitrary wildlife categories, offering significant advancements in the field of ReID for wildlife conservation and research purposes.
|
Toward Re-Identifying Any Animal
|
[
"Bingliang Jiao",
"Lingqiao Liu",
"Liying Gao",
"Ruiqi Wu",
"Guosheng Lin",
"PENG WANG",
"Yanning Zhang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=lds9D17HRd
|
@inproceedings{
zhang2023a,
title={A Tale of Two Features: Stable Diffusion Complements {DINO} for Zero-Shot Semantic Correspondence},
author={Junyi Zhang and Charles Herrmann and Junhwa Hur and Luisa Polania Cabrera and Varun Jampani and Deqing Sun and Ming-Hsuan Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lds9D17HRd}
}
|
Text-to-image diffusion models have made significant advances in generating and editing high-quality images. As a result, numerous approaches have explored the ability of diffusion model features to understand and process single images for downstream tasks, e.g., classification, semantic segmentation, and stylization. However, significantly less is known about what these features reveal across multiple, different images and objects. In this work, we exploit Stable Diffusion (SD) features for semantic and dense correspondence and discover that with simple post-processing, SD features can perform quantitatively similar to SOTA representations. Interestingly, the qualitative analysis reveals that SD features have very different properties compared to existing representation learning features, such as the recently released DINOv2: while DINOv2 provides sparse but accurate matches, SD features provide high-quality spatial information but sometimes inaccurate semantic matches. We demonstrate that a simple fusion of these two features works surprisingly well, and a zero-shot evaluation using nearest neighbors on these fused features provides a significant performance gain over state-of-the-art methods on benchmark datasets, e.g., SPair-71k, PF-Pascal, and TSS. We also show that these correspondences can enable interesting applications such as instance swapping in two images. Project page: https://sd-complements-dino.github.io/.
|
A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence
|
[
"Junyi Zhang",
"Charles Herrmann",
"Junhwa Hur",
"Luisa Polania Cabrera",
"Varun Jampani",
"Deqing Sun",
"Ming-Hsuan Yang"
] |
Conference
|
poster
|
2305.15347
|
[
"https://github.com/Junyi42/sd-dino"
] |
https://huggingface.co/papers/2305.15347
| 1 | 0 | 0 | 7 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=lclQ2RvWYu
|
@inproceedings{
zhao2023a,
title={A Single 2D Pose with Context is Worth Hundreds for 3D Human Pose Estimation},
author={Qitao Zhao and Ce Zheng and Mengyuan Liu and Chen Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lclQ2RvWYu}
}
|
The dominant paradigm in 3D human pose estimation that lifts a 2D pose sequence to 3D heavily relies on long-term temporal clues (i.e., using a daunting number of video frames) for improved accuracy, which incurs performance saturation, intractable computation and the non-causal problem. This can be attributed to their inherent inability to perceive spatial context as plain 2D joint coordinates carry no visual cues. To address this issue, we propose a straightforward yet powerful solution: leveraging the $\textit{readily available}$ intermediate visual representations produced by off-the-shelf (pre-trained) 2D pose detectors -- no finetuning on the 3D task is even needed. The key observation is that, while the pose detector learns to localize 2D joints, such representations (e.g., feature maps) implicitly encode the joint-centric spatial context thanks to the regional operations in backbone networks. We design a simple baseline named $\textbf{Context-Aware PoseFormer}$ to showcase its effectiveness. $\textit{Without access to any temporal information}$, the proposed method significantly outperforms its context-agnostic counterpart, PoseFormer, and other state-of-the-art methods using up to $\textit{hundreds of}$ video frames regarding both speed and precision. $\textit{Project page:}$ https://qitaozhao.github.io/ContextAware-PoseFormer
|
A Single 2D Pose with Context is Worth Hundreds for 3D Human Pose Estimation
|
[
"Qitao Zhao",
"Ce Zheng",
"Mengyuan Liu",
"Chen Chen"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=lYNSvp51a7
|
@inproceedings{
cheng2023lift,
title={Lift Yourself Up: Retrieval-augmented Text Generation with Self-Memory},
author={Xin Cheng and Di Luo and Xiuying Chen and Lemao Liu and Dongyan Zhao and Rui Yan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lYNSvp51a7}
}
|
With direct access to human-written reference as memory, retrieval-augmented generation has achieved much progress in a wide range of text generation tasks. Since better memory would typically prompt better generation (we define this as primal problem). The traditional approach for memory retrieval involves selecting memory that exhibits the highest similarity to the input. However, this method is constrained by the quality of the fixed corpus from which memory is retrieved. In this paper, by exploring the duality of the primal problem: better generation also prompts better memory, we propose a novel framework, selfmem, which addresses this limitation by iteratively employing a retrieval-augmented generator to create an unbounded memory pool and using a memory selector to choose one output as memory for the subsequent generation round. This enables the model to leverage its own output, referred to as self-memory, for improved generation. We evaluate the effectiveness of selfmem on three distinct text generation tasks: neural machine translation, abstractive text summarization, and dialogue generation, under two generation paradigms: fine-tuned small model and few-shot LLM. Our approach achieves state-of-the-art results in four directions in JRC-Acquis translation dataset, 50.3 ROUGE-1 in XSum, and 62.9 ROUGE-1 in BigPatent, demonstrating the potential of self-memory in enhancing retrieval-augmented generation models. Furthermore, we conduct thorough analyses of each component in the selfmem framework to identify current system bottlenecks and provide insights for future research.
|
Lift Yourself Up: Retrieval-augmented Text Generation with Self-Memory
|
[
"Xin Cheng",
"Di Luo",
"Xiuying Chen",
"Lemao Liu",
"Dongyan Zhao",
"Rui Yan"
] |
Conference
|
poster
|
[
"https://github.com/hannibal046/selfmemory"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=lXuByUeHhd
|
@inproceedings{
xie2023doremi,
title={DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining},
author={Sang Michael Xie and Hieu Pham and Xuanyi Dong and Nan Du and Hanxiao Liu and Yifeng Lu and Percy Liang and Quoc V Le and Tengyu Ma and Adams Wei Yu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lXuByUeHhd}
}
|
The mixture proportions of pretraining data domains (e.g., Wikipedia, books, web text) greatly affect language model (LM) performance. In this paper, we propose Domain Reweighting with Minimax Optimization (DoReMi), which first trains a small proxy model using group distributionally robust optimization (Group DRO) over domains to produce domain weights (mixture proportions) without knowledge of downstream tasks. We then resample a dataset with these domain weights and train a larger, full-sized model. In our experiments, we use DoReMi on a 280M-parameter proxy model to set the domain weights for training an 8B-parameter model (30x larger) more efficiently. On The Pile, DoReMi improves perplexity across all domains, even when it downweights a domain. DoReMi improves average few-shot downstream accuracy by 6.5% points over a baseline model trained using The Pile's default domain weights and reaches the baseline accuracy with 2.6x fewer training steps. On the GLaM dataset, DoReMi, which has no knowledge of downstream tasks, even matches the performance of using domain weights tuned on downstream tasks.
|
DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining
|
[
"Sang Michael Xie",
"Hieu Pham",
"Xuanyi Dong",
"Nan Du",
"Hanxiao Liu",
"Yifeng Lu",
"Percy Liang",
"Quoc V Le",
"Tengyu Ma",
"Adams Wei Yu"
] |
Conference
|
spotlight
|
2305.10429
|
[
"https://github.com/sangmichaelxie/doremi"
] |
https://huggingface.co/papers/2305.10429
| 2 | 3 | 2 | 10 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=lXOoR4KYcJ
|
@inproceedings{
luo2023entropybased,
title={Entropy-based Training Methods for Scalable Neural Implicit Samplers},
author={Weijian Luo and Boya Zhang and Zhihua Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lXOoR4KYcJ}
}
|
Efficiently sampling from un-normalized target distributions is a fundamental problem in scientific computing and machine learning. Traditional approaches such as Markov Chain Monte Carlo (MCMC) guarantee asymptotically unbiased samples from such distributions but suffer from computational inefficiency, particularly when dealing with high-dimensional targets, as they require numerous iterations to generate a batch of samples. In this paper, we introduce an efficient and scalable neural implicit sampler that overcomes these limitations. The implicit sampler can generate large batches of samples with low computational costs by leveraging a neural transformation that directly maps easily sampled latent vectors to target samples without the need for iterative procedures. To train the neural implicit samplers, we introduce two novel methods: the KL training method and the Fisher training method. The former method minimizes the Kullback-Leibler divergence, while the latter minimizes the Fisher divergence between the sampler and the target distributions. By employing the two training methods, we effectively optimize the neural implicit samplers to learn and generate from the desired target distribution. To demonstrate the effectiveness, efficiency, and scalability of our proposed samplers, we evaluate them on three sampling benchmarks with different scales. These benchmarks include sampling from 2D targets, Bayesian inference, and sampling from high-dimensional energy-based models (EBMs). Notably, in the experiment involving high-dimensional EBMs, our sampler produces samples that are comparable to those generated by MCMC-based methods while being more than 100 times more efficient, showcasing the efficiency of our neural sampler. Besides the theoretical contributions and strong empirical performances, the proposed neural samplers and corresponding training methods will shed light on further research on developing efficient samplers for various applications beyond the ones explored in this study.
|
Entropy-based Training Methods for Scalable Neural Implicit Samplers
|
[
"Weijian Luo",
"Boya Zhang",
"Zhihua Zhang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=lV3LIGlc1w
|
@inproceedings{
yang2023not,
title={Not All Out-of-Distribution Data Are Harmful to Open-Set Active Learning},
author={Yang Yang and Yuxuan Zhang and XIN SONG and Yi Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lV3LIGlc1w}
}
|
Active learning (AL) methods have been proven to be an effective way to reduce the labeling effort by intelligently selecting valuable instances for annotation. Despite their great success with in-distribution (ID) scenarios, AL methods suffer from performance degradation in many real-world applications because out-of-distribution (OOD) instances are always inevitably contained in unlabeled data, which may lead to inefficient sampling. Therefore, several attempts have been explored open-set AL by strategically selecting pure ID instances while filtering OOD instances. However, concentrating solely on selecting pseudo-ID instances may cause the training constraint of the ID classifier and OOD detector. To address this issue, we propose a simple yet effective sampling scheme, Progressive Active Learning (PAL), which employs a progressive sampling mechanism to leverage the active selection of valuable OOD instances. The proposed PAL measures unlabeled instances by synergistically evaluating instances' informativeness and representativeness, and thus it can balance the pseudo-ID and pseudo-OOD instances in each round to enhance both the capacity of the ID classifier and the OOD detector. %Meanwhile, PAL measures unlabeled instances by synergistically evaluating instances' informativeness and representativeness, which can more effectively estimate the values of instances.
Extensive experiments on various open-set AL scenarios demonstrate the effectiveness of the proposed PAL, compared with the state-of-the-art methods. The code is available at \url{https://github.com/njustkmg/PAL}.
|
Not All Out-of-Distribution Data Are Harmful to Open-Set Active Learning
|
[
"Yang Yang",
"Yuxuan Zhang",
"XIN SONG",
"Yi Xu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=lT9n36RH1w
|
@inproceedings{
zhang2023unconstrained,
title={Unconstrained Dynamic Regret via Sparse Coding},
author={Zhiyu Zhang and Ashok Cutkosky and Ioannis Paschalidis},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lT9n36RH1w}
}
|
Motivated by the challenge of nonstationarity in sequential decision making, we study Online Convex Optimization (OCO) under the coupling of two problem structures: the domain is unbounded, and the comparator sequence $u_1,\ldots,u_T$ is arbitrarily time-varying. As no algorithm can guarantee low regret simultaneously against all comparator sequences, handling this setting requires moving from minimax optimality to comparator adaptivity. That is, sensible regret bounds should depend on certain complexity measures of the comparator relative to one's prior knowledge. This paper achieves a new type of such adaptive regret bounds leveraging a sparse coding framework. The complexity of the comparator is measured by its energy and its sparsity on a user-specified dictionary, which offers considerable versatility. For example, equipped with a wavelet dictionary, our framework improves the state-of-the-art bound (Jacobsen & Cutkosky, 2022) by adapting to both ($i$) the magnitude of the comparator average $||\bar u||=||\sum_{t=1}^Tu_t/T||$, rather than the maximum $\max_t||u_t||$; and ($ii$) the comparator variability $\sum_{t=1}^T||u_t-\bar u||$, rather than the uncentered sum $\sum_{t=1}^T||u_t||$. Furthermore, our proof is simpler due to decoupling function approximation from regret minimization.
|
Unconstrained Dynamic Regret via Sparse Coding
|
[
"Zhiyu Zhang",
"Ashok Cutkosky",
"Ioannis Paschalidis"
] |
Conference
|
poster
|
2301.13349
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lSbbC2VyCu
|
@inproceedings{
rame2023rewarded,
title={Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards},
author={Alexandre Rame and Guillaume Couairon and Corentin Dancette and Jean-Baptiste Gaya and Mustafa Shukor and Laure Soulier and Matthieu Cord},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lSbbC2VyCu}
}
|
Foundation models are first pre-trained on vast unsupervised datasets and then fine-tuned on labeled data. Reinforcement learning, notably from human feedback (RLHF), can further align the network with the intended usage. Yet the imperfections in the proxy reward may hinder the training and lead to suboptimal results; the diversity of objectives in real-world tasks and human opinions exacerbate the issue. This paper proposes embracing the heterogeneity of diverse rewards by following a multi-policy strategy. Rather than focusing on a single a priori reward, we aim for Pareto-optimal generalization across the entire space of preferences. To this end, we propose rewarded soup, first specializing multiple networks independently (one for each proxy reward) and then interpolating their weights linearly. This succeeds empirically because we show that the weights remain linearly connected when fine-tuned on diverse rewards from a shared pre-trained initialization. We demonstrate the effectiveness of our approach for text-to-text (summarization, Q&A, helpful assistant, review), text-image (image captioning, text-to-image generation, visual grounding), and control (locomotion) tasks. We hope to enhance the alignment of deep models, and how they interact with the world in all its diversity.
|
Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards
|
[
"Alexandre Rame",
"Guillaume Couairon",
"Corentin Dancette",
"Jean-Baptiste Gaya",
"Mustafa Shukor",
"Laure Soulier",
"Matthieu Cord"
] |
Conference
|
poster
|
2306.04488
|
[
"https://github.com/alexrame/rewardedsoups"
] |
https://huggingface.co/papers/2306.04488
| 1 | 2 | 0 | 7 | 1 |
[] |
[] |
[
"alexrame/rewardedsoups"
] |
null |
https://openreview.net/forum?id=lSLYXuLqRQ
|
@inproceedings{
wang2023masked,
title={Masked Space-Time Hash Encoding for Efficient Dynamic Scene Reconstruction},
author={Feng Wang and Zilong Chen and Guokang Wang and Yafei Song and Huaping Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lSLYXuLqRQ}
}
|
In this paper, we propose the Masked Space-Time Hash encoding (MSTH), a novel method for efficiently reconstructing dynamic 3D scenes from multi-view or monocular videos. Based on the observation that dynamic scenes often contain substantial static areas that result in redundancy in storage and computations, MSTH represents a dynamic scene as a weighted combination of a 3D hash encoding and a 4D hash encoding. The weights for the two components are represented by a learnable mask which is guided by an uncertainty-based objective to reflect the spatial and temporal importance of each 3D position. With this design, our method can reduce the hash collision rate by avoiding redundant queries and modifications on static areas, making it feasible to represent a large number of space-time voxels by hash tables with small size.Besides, without the requirements to fit the large numbers of temporally redundant features independently, our method is easier to optimize and converge rapidly with only twenty minutes of training for a 300-frame dynamic scene. We evaluate our method on extensive dynamic scenes. As a result, MSTH obtains consistently better results than previous state-of-the-art methods with only 20 minutes of training time and 130 MB of memory storage.
|
Masked Space-Time Hash Encoding for Efficient Dynamic Scene Reconstruction
|
[
"Feng Wang",
"Zilong Chen",
"Guokang Wang",
"Yafei Song",
"Huaping Liu"
] |
Conference
|
spotlight
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=lRxpVfDMzz
|
@inproceedings{
ge2023extensible,
title={Extensible Prompts for Language Models on Zero-shot Language Style Customization},
author={Tao Ge and Jing Hu and Li Dong and Shaoguang Mao and Yan Xia and Xun Wang and Si-Qing Chen and Furu Wei},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lRxpVfDMzz}
}
|
We propose eXtensible Prompt (X-Prompt) for prompting a large language model (LLM) beyond natural language (NL). X-Prompt instructs an LLM with not only NL but also an extensible vocabulary of imaginary words. Registering new imaginary words allows us to instruct the LLM to comprehend concepts that are difficult to describe with NL words, thereby making a prompt more descriptive. Also, these imaginary words are designed to be out-of-distribution (OOD) robust so that they can be (re)used like NL words in various prompts, distinguishing X-Prompt from soft prompt that is for fitting in-distribution data. We propose context-augmented learning (CAL) to learn imaginary words for general usability, enabling them to work properly in OOD (unseen) prompts. We experiment X-Prompt for zero-shot language style customization as a case study. The promising results of X-Prompt demonstrate its potential to facilitate advanced interaction beyond the natural language interface, bridging the communication gap between humans and LLMs.
|
Extensible Prompts for Language Models on Zero-shot Language Style Customization
|
[
"Tao Ge",
"Jing Hu",
"Li Dong",
"Shaoguang Mao",
"Yan Xia",
"Xun Wang",
"Si-Qing Chen",
"Furu Wei"
] |
Conference
|
poster
|
2212.00616
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lRu0dN7BY6
|
@inproceedings{
zhu2023social,
title={Social Motion Prediction with Cognitive Hierarchies},
author={Wentao Zhu and Jason Qin and Yuke Lou and Hang Ye and Xiaoxuan Ma and Hai Ci and Yizhou Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lRu0dN7BY6}
}
|
Humans exhibit a remarkable capacity for anticipating the actions of others and planning their own actions accordingly. In this study, we strive to replicate this ability by addressing the social motion prediction problem. We introduce a new benchmark, a novel formulation, and a cognition-inspired framework. We present Wusi, a 3D multi-person motion dataset under the context of team sports, which features intense and strategic human interactions and diverse pose distributions. By reformulating the problem from a multi-agent reinforcement learning perspective, we incorporate behavioral cloning and generative adversarial imitation learning to boost learning efficiency and generalization. Furthermore, we take into account the cognitive aspects of the human social action planning process and develop a cognitive hierarchy framework to predict strategic human social interactions. We conduct comprehensive experiments to validate the effectiveness of our proposed dataset and approach.
|
Social Motion Prediction with Cognitive Hierarchies
|
[
"Wentao Zhu",
"Jason Qin",
"Yuke Lou",
"Hang Ye",
"Xiaoxuan Ma",
"Hai Ci",
"Yizhou Wang"
] |
Conference
|
poster
|
2311.04726
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lRG11M91dx
|
@inproceedings{
lin2023functionalgroupbased,
title={Functional-Group-Based Diffusion for Pocket-Specific Molecule Generation and Elaboration},
author={Haitao Lin and Yufei Huang and Odin Zhang and Yunfan Liu and Lirong Wu and Siyuan Li and Zhiyuan Chen and Stan Z. Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lRG11M91dx}
}
|
In recent years, AI-assisted drug design methods have been proposed to generate molecules given the pockets' structures of target proteins. Most of them are {\em atom-level-based} methods, which consider atoms as basic components and generate atom positions and types. In this way, however, it is hard to generate realistic fragments with complicated structures. To solve this, we propose \textsc{D3FG}, a {\em functional-group-based} diffusion model for pocket-specific molecule generation and elaboration. \textsc{D3FG} decomposes molecules into two categories of components: functional groups defined as rigid bodies and linkers as mass points. And the two kinds of components can together form complicated fragments that enhance ligand-protein interactions.
To be specific, in the diffusion process, \textsc{D3FG} diffuses the data distribution of the positions, orientations, and types of the components into a prior distribution; In the generative process, the noise is gradually removed from the three variables by denoisers parameterized with designed equivariant graph neural networks. In the experiments, our method can generate molecules with more realistic 3D structures, competitive affinities toward the protein targets, and better drug properties. Besides, \textsc{D3FG} as a solution to a new task of molecule elaboration, could generate molecules with high affinities based on existing ligands and the hotspots of target proteins.
|
Functional-Group-Based Diffusion for Pocket-Specific Molecule Generation and Elaboration
|
[
"Haitao Lin",
"Yufei Huang",
"Odin Zhang",
"Yunfan Liu",
"Lirong Wu",
"Siyuan Li",
"Zhiyuan Chen",
"Stan Z. Li"
] |
Conference
|
poster
|
2306.13769
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lOCHMGO6ow
|
@inproceedings{
park2023energybased,
title={Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models},
author={Geon Yeong Park and Jeongsol Kim and Beomsu Kim and Sang Wan Lee and Jong Chul Ye},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lOCHMGO6ow}
}
|
Despite the remarkable performance of text-to-image diffusion models in image generation tasks, recent studies have raised the issue that generated images sometimes cannot capture the intended semantic contents of the text prompts, which phenomenon is often called semantic misalignment. To address this, here we present a novel energy-based model (EBM) framework for adaptive context control by modeling the posterior of context vectors. Specifically, we first formulate EBMs of latent image representations and text embeddings in each cross-attention layer of the denoising autoencoder. Then, we obtain the gradient of the log posterior of context vectors, which can be updated and transferred to the subsequent cross-attention layer, thereby implicitly minimizing a nested hierarchy of energy functions.
Our latent EBMs further allow zero-shot compositional generation as a linear combination of cross-attention outputs from different contexts.
Using extensive experiments, we demonstrate that the proposed method is highly effective in handling various image generation tasks, including multi-concept generation, text-guided image inpainting, and real and synthetic image editing. Code: https://github.com/EnergyAttention/Energy-Based-CrossAttention.
|
Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models
|
[
"Geon Yeong Park",
"Jeongsol Kim",
"Beomsu Kim",
"Sang Wan Lee",
"Jong Chul Ye"
] |
Conference
|
poster
|
2306.09869
|
[
"https://github.com/energyattention/energy-based-crossattention"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lM1UnEssuX
|
@inproceedings{
aliakbarpour2023hypothesis,
title={Hypothesis Selection with Memory Constraints},
author={Maryam Aliakbarpour and Mark Bun and Adam Smith},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lM1UnEssuX}
}
|
Hypothesis selection is a fundamental problem in learning theory and statistics.
Given a dataset and a finite set of candidate distributions, the goal is to select a distribution that matches the data as well as possible.
More specifically, suppose we have sample access to an unknown distribution $P$ over a domain $\mathcal{X}$ that we know is well-approximated by one of a
a class of $n$ distributions (a.k.a. hypotheses), $\mathcal{H} \coloneqq \{H_1, H_2, \ldots, H_n\}$. The goal is to design an algorithm that outputs a distribution $\hat{H} \in \mathcal{H}$ whose total variation distance from $P$ is nearly minimal.
In this work, we study the hypothesis selection problem under memory constraints. We consider a model where samples from $P$ are presented in a stream and we access each sample $x$ via ``PDF-comparison'' queries that allow us to compare the probability densities of any pair of hypotheses
at the domain point $x$ (i.e., is $H_i(x) < H_j(x)$?). This model allows us to study how much memory is needed at any point in time to store information about the portion of the stream seen so far.
Our main result is an algorithm that achieves a nearly optimal tradeoff between memory usage and the number of samples required. In particular, given $b$ bits of memory (for $b$ roughly between $\log n$ and $n$), our algorithm solves the hypothesis selection problem with $s$ samples, where $b \cdot s = O(n \log n)$. This result is optimal up to an $O(\log n)$ factor, for all $b$.
|
Hypothesis Selection with Memory Constraints
|
[
"Maryam Aliakbarpour",
"Mark Bun",
"Adam Smith"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=lM0xyViO90
|
@inproceedings{
anagnostides2023on,
title={On the Interplay between Social Welfare and Tractability of Equilibria},
author={Ioannis Anagnostides and Tuomas Sandholm},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lM0xyViO90}
}
|
Computational tractability and social welfare (aka. efficiency) of equilibria are two fundamental but in general orthogonal considerations in algorithmic game theory. Nevertheless, we show that when (approximate) full efficiency can be guaranteed via a smoothness argument a la Roughgarden, Nash equilibria are approachable under a family of no-regret learning algorithms, thereby enabling fast and decentralized computation. We leverage this connection to obtain new convergence results in large games---wherein the number of players $n \gg 1$---under the well-documented property of full efficiency via smoothness in the limit. Surprisingly, our framework unifies equilibrium computation in disparate classes of problems including games with vanishing strategic sensitivity and two-player zero-sum games, illuminating en route an immediate but overlooked equivalence between smoothness and a well-studied condition in the optimization literature known as the Minty property. Finally, we establish that a family of no-regret dynamics attains a welfare bound that improves over the smoothness framework while at the same time guaranteeing convergence to the set of coarse correlated equilibria. We show this by employing the clairvoyant mirror descent algortihm recently introduced by Piliouras et al.
|
On the Interplay between Social Welfare and Tractability of Equilibria
|
[
"Ioannis Anagnostides",
"Tuomas Sandholm"
] |
Conference
|
poster
|
2310.16976
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lLztVBaBVU
|
@inproceedings{
cho2023pdp,
title={{PDP}: Parameter-free Differentiable Pruning is All You Need},
author={Minsik Cho and Saurabh Adya and Devang Naik},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lLztVBaBVU}
}
|
DNN pruning is a popular way to reduce the size of a model, improve the inference
latency, and minimize the power consumption on DNN accelerators. However,
existing approaches might be too complex, expensive or ineffective to apply to
a variety of vision/language tasks, DNN architectures and to honor structured
pruning constraints. In this paper, we propose an efficient yet effective train-time
pruning scheme, Parameter-free Differentiable Pruning (PDP), which offers state-
of-the-art qualities in model size, accuracy, and training cost. PDP uses a dynamic
function of weights during training to generate soft pruning masks for the weights
in a parameter-free manner for a given pruning target. While differentiable, the
simplicity and efficiency of PDP make it universal enough to deliver state-of-the-art
random/structured/channel pruning results on various vision and natural language
tasks. For example, for MobileNet-v1, PDP can achieve 68.2% top-1 ImageNet1k
accuracy at 86.6% sparsity, which is 1.7% higher accuracy than those from the
state-of-the-art algorithms. Also, PDP yields over 83.1% accuracy on Multi-Genre
Natural Language Inference with 90% sparsity for BERT, while the next best from
the existing techniques shows 81.5% accuracy. In addition, PDP can be applied to
structured pruning, such as N:M pruning and channel pruning. For 1:4 structured
pruning of ResNet18, PDP improved the top-1 ImageNet1k accuracy by over 3.6%
over the state-of-the-art. For channel pruning of ResNet50, PDP reduced the top-1
ImageNet1k accuracy by 0.6% from the state-of-the-art.
|
PDP: Parameter-free Differentiable Pruning is All You Need
|
[
"Minsik Cho",
"Saurabh Adya",
"Devang Naik"
] |
Conference
|
poster
|
2305.11203
|
[
""
] |
https://huggingface.co/papers/2305.11203
| 0 | 0 | 0 | 3 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=lJWUJWLCJo
|
@inproceedings{
bertsch2023unlimiformer,
title={Unlimiformer: Long-Range Transformers with Unlimited Length Input},
author={Amanda Bertsch and Uri Alon and Graham Neubig and Matthew R. Gormley},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lJWUJWLCJo}
}
|
Since the proposal of transformers, these models have been limited to bounded input lengths, because of their need to attend to every token in the input. In this work, we propose Unlimiformer: a general approach that wraps any existing pretrained encoder-decoder transformer, and offloads the cross-attention computation to a single $k$-nearest-neighbor ($k$NN) index, while the returned $k$NN distances are the attention dot-product scores. This $k$NN index can be kept on either the GPU or CPU memory and queried in sub-linear time; this way, we can index practically unlimited input sequences, while every attention head in every decoder layer retrieves its top-$k$ keys, instead of attending to every key. We evaluate Unlimiformer on several long-document and book-summarization benchmarks, showing that it can process even **500k** token-long inputs from the BookSum dataset, without any input truncation at test time. We demonstrate that Unlimiformer improves pretrained models such as BART and Longformer by extending them to unlimited inputs without additional learned weights and without modifying their code. Our code and models are publicly available at https://github.com/abertsch72/unlimiformer , and support LLaMA-2 as well.
|
Unlimiformer: Long-Range Transformers with Unlimited Length Input
|
[
"Amanda Bertsch",
"Uri Alon",
"Graham Neubig",
"Matthew R. Gormley"
] |
Conference
|
poster
|
2305.01625
|
[
"https://github.com/abertsch72/unlimiformer"
] |
https://huggingface.co/papers/2305.01625
| 1 | 6 | 3 | 4 | 1 |
[
"abertsch/unlimiformer-bart-booksum-alternating",
"abertsch/unlimiformer-bart-summscreen-retrieval",
"abertsch/bart-base-govreport",
"abertsch/unlimiformer-bart-summscreen-earlyk",
"abertsch/unlimiformer-bart-booksum-retrieval",
"abertsch/bart-base-summscreen",
"abertsch/unlimiformer-bart-govreport-alternating",
"abertsch/unlimiformer-bart-booksum-random-encoding",
"abertsch/bart-base-booksum",
"abertsch/unlimiformer-earlyk-bart-booksum",
"abertsch/unlimiformer-bart-govreport-earlyk"
] |
[] |
[] |
null |
https://openreview.net/forum?id=lJDoPAjkCV
|
@inproceedings{
wang2023goldyolo,
title={Gold-{YOLO}: Efficient Object Detector via Gather-and-Distribute Mechanism},
author={Chengcheng Wang and Wei He and Ying Nie and Jianyuan Guo and Chuanjian Liu and Yunhe Wang and Kai Han},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lJDoPAjkCV}
}
|
In the past years, YOLO-series models have emerged as the leading approaches in the area of real-time object detection. Many studies pushed up the baseline to a higher level by modifying the architecture, augmenting data and designing new losses. However, we find previous models still suffer from information fusion problem, although Feature Pyramid Network (FPN) and Path Aggregation Network (PANet) have alleviated this. Therefore, this study provides an advanced Gatherand-Distribute mechanism (GD) mechanism, which is realized with convolution and self-attention operations. This new designed model named as Gold-YOLO, which boosts the multi-scale feature fusion capabilities and achieves an ideal balance between latency and accuracy across all model scales. Additionally, we implement MAE-style pretraining in the YOLO-series for the first time, allowing YOLOseries models could be to benefit from unsupervised pretraining. Gold-YOLO-N attains an outstanding 39.9% AP on the COCO val2017 datasets and 1030 FPS on a T4 GPU, which outperforms the previous SOTA model YOLOv6-3.0-N with similar FPS by +2.4%. The PyTorch code is available at https://github.com/huawei-noah/Efficient-Computing/tree/master/Detection/Gold-YOLO, and the MindSpore code is available at https://gitee.com/mindspore/models/tree/master/research/cv/Gold_YOLO.
|
Gold-YOLO: Efficient Object Detector via Gather-and-Distribute Mechanism
|
[
"Chengcheng Wang",
"Wei He",
"Ying Nie",
"Jianyuan Guo",
"Chuanjian Liu",
"Yunhe Wang",
"Kai Han"
] |
Conference
|
poster
|
2309.11331
|
[
"https://github.com/huawei-noah/Efficient-Computing/tree/master/Detection/Gold-YOLO"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lHa7gFbmvS
|
@inproceedings{
ding2023the,
title={The {CLIP} Model is Secretly an Image-to-Prompt Converter},
author={Yuxuan Ding and Chunna Tian and Haoxuan Ding and Lingqiao Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lHa7gFbmvS}
}
|
The Stable Diffusion model is a prominent text-to-image generation model that relies on a text prompt as its input, which is encoded using the Contrastive Language-Image Pre-Training (CLIP). However, text prompts have limitations when it comes to incorporating implicit information from reference images. Existing methods have attempted to address this limitation by employing expensive training procedures involving millions of training samples for image-to-image generation. In contrast, this paper demonstrates that the CLIP model, as utilized in Stable Diffusion, inherently possesses the ability to instantaneously convert images into text prompts. Such an image-to-prompt conversion can be achieved by utilizing a linear projection matrix that is calculated in a closed form. Moreover, the paper showcases that this capability can be further enhanced by either utilizing a small amount of similar-domain training data (approximately 100 images) or incorporating several online training steps (around 30 iterations) on the reference images. By leveraging these approaches, the proposed method offers a simple and flexible solution to bridge the gap between images and text prompts. This methodology can be applied to various tasks such as image variation and image editing, facilitating more effective and seamless interaction between images and textual prompts.
|
The CLIP Model is Secretly an Image-to-Prompt Converter
|
[
"Yuxuan Ding",
"Chunna Tian",
"Haoxuan Ding",
"Lingqiao Liu"
] |
Conference
|
poster
|
2305.12716
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lDI3ZuyzM9
|
@inproceedings{
salameh2023autogo,
title={Auto{GO}: Automated Computation Graph Optimization for Neural Network Evolution},
author={Mohammad Salameh and Keith G Mills and Negar Hassanpour and Fred X. Han and Shuting Zhang and Wei Lu and SHANGLING JUI and CHUNHUA ZHOU and Fengyu Sun and Di Niu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lDI3ZuyzM9}
}
|
Optimizing Deep Neural Networks (DNNs) to obtain high-quality models for efficient real-world deployment has posed multi-faceted challenges to machine learning engineers. Existing methods either search for neural architectures in heuristic design spaces or apply low-level adjustments to computation primitives to improve inference efficiency on hardware. We present Automated Graph Optimization (AutoGO), a framework to evolve neural networks in a low-level Computation Graph (CG) of primitive operations to improve both its performance and hardware friendliness. Through a tokenization scheme, AutoGO performs variable-sized segment mutations, making both primitive changes and larger-grained changes to CGs. We introduce our segmentation and mutation algorithms, efficient frequent segment mining technique, as well as a pretrained context-aware predictor to estimate the impact of segment replacements. Extensive experimental results show that AutoGO can automatically evolve several typical large convolutional networks to achieve significant task performance improvement and FLOPs reduction on a range of CV tasks, ranging from Classification, Semantic Segmentation, Human Pose Estimation, to Super Resolution, yet without introducing any newer primitive operations. We also demonstrate the lightweight deployment results of AutoGO-optimized super-resolution and denoising U-Nets on a cycle simulator for a Neural Processing Unit (NPU), achieving PSNR improvement and latency/power reduction simultaneously. Code available at https://github.com/Ascend-Research/AutoGO.
|
AutoGO: Automated Computation Graph Optimization for Neural Network Evolution
|
[
"Mohammad Salameh",
"Keith G Mills",
"Negar Hassanpour",
"Fred X. Han",
"Shuting Zhang",
"Wei Lu",
"SHANGLING JUI",
"CHUNHUA ZHOU",
"Fengyu Sun",
"Di Niu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=lD8xaUWw24
|
@inproceedings{
celotto2023an,
title={An information-theoretic quantification of the content of communication between brain regions},
author={Marco Celotto and Jan B{\'\i}m and Alejandro Tlaie and Vito De Feo and Alessandro Toso and Stefan M Lemke and Daniel Chicharro and Hamed Nili and Malte Bieler and Ileana Livia Hanganu-Opatz and Tobias H. Donner and Andrea Brovelli and Stefano Panzeri},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lD8xaUWw24}
}
|
Quantifying the amount, content and direction of communication between brain regions is key to understanding brain function. Traditional methods to analyze brain activity based on the Wiener-Granger causality principle quantify the overall information propagated by neural activity between simultaneously recorded brain regions, but do not reveal the information flow about specific features of interest (such as sensory stimuli). Here, we develop a new information theoretic measure termed Feature-specific Information Transfer (FIT), quantifying how much information about a specific feature flows between two regions. FIT merges the Wiener-Granger causality principle with information-content specificity. We first derive FIT and prove analytically its key properties. We then illustrate and test them with simulations of neural activity, demonstrating that FIT identifies, within the total information propagated between regions, the information that is transmitted about specific features. We then analyze three neural datasets obtained with different recording methods, magneto- and electro-encephalography, and spiking activity, to demonstrate the ability of FIT to uncover the content and direction of information flow between brain regions beyond what can be discerned with traditional analytical methods. FIT can improve our understanding of how brain regions communicate by uncovering previously unaddressed feature-specific information flow.
|
An information-theoretic quantification of the content of communication between brain regions
|
[
"Marco Celotto",
"Jan Bím",
"Alejandro Tlaie",
"Vito De Feo",
"Alessandro Toso",
"Stefan M Lemke",
"Daniel Chicharro",
"Hamed Nili",
"Malte Bieler",
"Ileana Livia Hanganu-Opatz",
"Tobias H. Donner",
"Andrea Brovelli",
"Stefano Panzeri"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=lCThtrJxoH
|
@inproceedings{
mcaleer2023teampsro,
title={Team-{PSRO} for Learning Approximate {TMEC}or in Large Team Games via Cooperative Reinforcement Learning},
author={Stephen Marcus McAleer and Gabriele Farina and Gaoyue Zhou and Mingzhi Wang and Yaodong Yang and Tuomas Sandholm},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lCThtrJxoH}
}
|
Recent algorithms have achieved superhuman performance at a number of two-player zero-sum games such as poker and go. However, many real-world situations are multi-player games. Zero-sum two-team games, such as bridge and football, involve two teams where each member of the team shares the same reward with every other member of that team, and each team has the negative of the reward of the other team. A popular solution concept in this setting, called TMECor, assumes that teams can jointly correlate their strategies before play, but are not able to communicate during play. This setting is harder than two-player zero-sum games because each player on a team has different information and must use their public actions to signal to other members of the team. Prior works either have game-theoretic guarantees but only work in very small games, or are able to scale to large games but do not have game-theoretic guarantees. In this paper we introduce two algorithms: Team-PSRO, an extension of PSRO from two-player games to team games, and Team-PSRO Mix-and-Match which improves upon Team PSRO by better using population policies. In Team-PSRO, in every iteration both teams learn a joint best response to the opponent's meta-strategy via reinforcement learning. As the reinforcement learning joint best response approaches the optimal best response, Team-PSRO is guaranteed to converge to a TMECor. In experiments on Kuhn poker and Liar's Dice, we show that a tabular version of Team-PSRO converges to TMECor, and a version of Team PSRO using deep cooperative reinforcement learning beats self-play reinforcement learning in the large game of Google Research Football.
|
Team-PSRO for Learning Approximate TMECor in Large Team Games via Cooperative Reinforcement Learning
|
[
"Stephen Marcus McAleer",
"Gabriele Farina",
"Gaoyue Zhou",
"Mingzhi Wang",
"Yaodong Yang",
"Tuomas Sandholm"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=lBhRTO2uWf
|
@inproceedings{
barrab{\'e}s2023adversarial,
title={Adversarial Learning for Feature Shift Detection and Correction},
author={M{\'\i}riam Barrab{\'e}s and Daniel Mas Montserrat and Margarita Geleta and Xavier Gir{\'o}-i-Nieto and Alexander G Ioannidis},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lBhRTO2uWf}
}
|
Data shift is a phenomenon present in many real-world applications, and while there are multiple methods attempting to detect shifts, the task of localizing and correcting the features originating such shifts has not been studied in depth. Feature shifts can occur in many datasets, including in multi-sensor data, where some sensors are malfunctioning, or in tabular and structured data, including biomedical, financial, and survey data, where faulty standardization and data processing pipelines can lead to erroneous features. In this work, we explore using the principles of adversarial learning, where the information from several discriminators trained to distinguish between two distributions is used to both detect the corrupted features and fix them in order to remove the distribution shift between datasets. We show that mainstream supervised classifiers, such as random forest or gradient boosting trees, combined with simple iterative heuristics, can localize and correct feature shifts, outperforming current statistical and neural network-based techniques. The code is available at https://github.com/AI-sandbox/DataFix.
|
Adversarial Learning for Feature Shift Detection and Correction
|
[
"Míriam Barrabés",
"Daniel Mas Montserrat",
"Margarita Geleta",
"Xavier Giró-i-Nieto",
"Alexander G Ioannidis"
] |
Conference
|
poster
|
2312.04546
|
[
"https://github.com/ai-sandbox/datafix"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lArwl3y9x6
|
@inproceedings{
mueller2023normalization,
title={Normalization Layers Are All That Sharpness-Aware Minimization Needs},
author={Maximilian Mueller and Tiffany Joyce Vlaar and David Rolnick and Matthias Hein},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lArwl3y9x6}
}
|
Sharpness-aware minimization (SAM) was proposed to reduce sharpness of minima and has been shown to enhance generalization performance in various settings. In this work we show that perturbing only the affine normalization parameters (typically comprising 0.1% of the total parameters) in the adversarial step of SAM can outperform perturbing all of the parameters. This finding generalizes
to different SAM variants and both ResNet (Batch Normalization) and Vision Transformer (Layer Normalization) architectures. We consider alternative sparse perturbation approaches and find that these do not achieve similar performance enhancement at such extreme sparsity levels, showing that this behaviour is unique to the normalization layers. Although our findings reaffirm the effectiveness
of SAM in improving generalization performance, they cast doubt on whether this is solely caused by reduced sharpness.
|
Normalization Layers Are All That Sharpness-Aware Minimization Needs
|
[
"Maximilian Mueller",
"Tiffany Joyce Vlaar",
"David Rolnick",
"Matthias Hein"
] |
Conference
|
poster
|
2306.04226
|
[
"https://github.com/mueller-mp/sam-on"
] |
https://huggingface.co/papers/2306.04226
| 0 | 0 | 1 | 4 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=lAbCgNcxm7
|
@inproceedings{
gao2023drugclip,
title={Drug{CLIP}: Contrasive Protein-Molecule Representation Learning for Virtual Screening},
author={Bowen Gao and Bo Qiang and Haichuan Tan and Yinjun Jia and Minsi Ren and Minsi Lu and Jingjing Liu and Wei-Ying Ma and Yanyan Lan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lAbCgNcxm7}
}
|
Virtual screening, which identifies potential drugs from vast compound databases to bind with a particular protein pocket, is a critical step in AI-assisted drug discovery. Traditional docking methods are highly time-consuming, and can only work with a restricted search library in real-life applications. Recent supervised learning approaches using scoring functions for binding-affinity prediction, although promising, have not yet surpassed docking methods due to their strong dependency on limited data with reliable binding-affinity labels. In this paper, we propose a novel contrastive learning framework, DrugCLIP, by reformulating virtual screening as a dense retrieval task and employing contrastive learning to align representations of binding protein pockets and molecules from a large quantity of pairwise data without explicit binding-affinity scores. We also introduce a biological-knowledge inspired data augmentation strategy to learn better protein-molecule representations. Extensive experiments show that DrugCLIP significantly outperforms traditional docking and supervised learning methods on diverse virtual screening benchmarks with highly reduced computation time, especially in zero-shot setting.
|
DrugCLIP: Contrastive Protein-Molecule Representation Learning for Virtual Screening
|
[
"Bowen Gao",
"Bo Qiang",
"Haichuan Tan",
"Yinjun Jia",
"Minsi Ren",
"Minsi Lu",
"Jingjing Liu",
"Wei-Ying Ma",
"Yanyan Lan"
] |
Conference
|
poster
|
2310.06367
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=lAEc7aIW20
|
@inproceedings{
min2023unsupervised,
title={Unsupervised Learning for Solving the Travelling Salesman Problem},
author={Yimeng Min and Yiwei Bai and Carla P Gomes},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=lAEc7aIW20}
}
|
We propose UTSP, an Unsupervised Learning (UL) framework for solving the Travelling Salesman Problem (TSP). We train a Graph Neural Network (GNN) using a surrogate loss. The GNN outputs a heat map representing the probability for each edge to be part of the optimal path. We then apply local search to generate our final prediction based on the heat map. Our loss function consists of two parts: one pushes the model to find the shortest path and the other serves as a surrogate for the constraint that the route should form a Hamiltonian Cycle.
Experimental results show that UTSP
outperforms the existing data-driven TSP heuristics.
Our approach is parameter efficient as well as data efficient: the model takes $\sim$ 10\% of the number of parameters and $\sim$ 0.2\% of training samples compared with Reinforcement Learning or Supervised Learning methods.
|
Unsupervised Learning for Solving the Travelling Salesman Problem
|
[
"Yimeng Min",
"Yiwei Bai",
"Carla P Gomes"
] |
Conference
|
poster
|
2303.10538
|
[
"https://github.com/yimengmin/UTSP"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=l9MbuqzlZt
|
@inproceedings{
ryner2023globally,
title={Globally solving the Gromov-Wasserstein problem for point clouds in low dimensional Euclidean spaces},
author={Martin Ryner and Jan Kronqvist and Johan Karlsson},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=l9MbuqzlZt}
}
|
This paper presents a framework for computing the Gromov-Wasserstein problem between two sets of points in low dimensional spaces, where the discrepancy is the squared Euclidean norm.
The Gromov-Wasserstein problem is a generalization of the optimal transport problem that finds the assignment between two sets preserving pairwise distances as much as possible. This can be used to quantify the similarity between two formations or shapes, a common problem in AI and machine learning.
The problem can be formulated as a Quadratic Assignment Problem (QAP), which is in general computationally intractable even for small problems. Our framework addresses this challenge by reformulating the QAP as an optimization problem with a low-dimensional domain, leveraging the fact that the problem can be expressed as a concave quadratic optimization problem with low rank. The method scales well with the number of points, and it can be used to find the global solution for large-scale problems with thousands of points.
We compare the computational complexity of our approach with state-of-the-art methods on synthetic problems and apply it to a near-symmetrical problem which is of particular interest in computational biology.
|
Globally solving the Gromov-Wasserstein problem for point clouds in low dimensional Euclidean spaces
|
[
"Martin Ryner",
"Jan Kronqvist",
"Johan Karlsson"
] |
Conference
|
poster
|
2307.09057
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=l9BsCh8ikK
|
@inproceedings{
nguyen2023visual,
title={Visual Instruction Inversion: Image Editing via Image Prompting},
author={Thao Nguyen and Yuheng Li and Utkarsh Ojha and Yong Jae Lee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=l9BsCh8ikK}
}
|
Text-conditioned image editing has emerged as a powerful tool for editing images.
However, in many situations, language can be ambiguous and ineffective in describing specific image edits.
When faced with such challenges, visual prompts can be a more informative and intuitive way to convey ideas.
We present a method for image editing via visual prompting.
Given pairs of example that represent the "before" and "after" images of an edit, our goal is to learn a text-based editing direction that can be used to perform the same edit on new images.
We leverage the rich, pretrained editing capabilities of text-to-image diffusion models by inverting visual prompts into editing instructions.
Our results show that with just one example pair, we can achieve competitive results compared to state-of-the-art text-conditioned image editing frameworks.
|
Visual Instruction Inversion: Image Editing via Image Prompting
|
[
"Thao Nguyen",
"Yuheng Li",
"Utkarsh Ojha",
"Yong Jae Lee"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=l6ypbj6Nv5
|
@inproceedings{
zhang2023generative,
title={Generative Category-level Object Pose Estimation via Diffusion Models},
author={Jiyao Zhang and Mingdong Wu and Hao Dong},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=l6ypbj6Nv5}
}
|
Object pose estimation plays a vital role in embodied AI and computer vision, enabling intelligent agents to comprehend and interact with their surroundings. Despite the practicality of category-level pose estimation, current approaches encounter challenges with partially observed point clouds, known as the multihypothesis issue. In this study, we propose a novel solution by reframing categorylevel object pose estimation as conditional generative modeling, departing from traditional point-to-point regression. Leveraging score-based diffusion models, we estimate object poses by sampling candidates from the diffusion model and aggregating them through a two-step process: filtering out outliers via likelihood estimation and subsequently mean-pooling the remaining candidates. To avoid the costly integration process when estimating the likelihood, we introduce an alternative method that distils an energy-based model from the original score-based model, enabling end-to-end likelihood estimation. Our approach achieves state-of-the-art performance on the REAL275 dataset, surpassing 50% and 60% on strict 5 ◦ 2cm and 5 ◦ 5cm metrics, respectively. Furthermore, our method demonstrates strong generalization to novel categories without the need for fine-tuning and can readily adapt to object pose tracking tasks, yielding comparable results to the current state-of-the-art baselines. Our checkpoints and demonstrations can be found at https://sites.google.com/view/genpose.
|
Generative Category-level Object Pose Estimation via Diffusion Models
|
[
"Jiyao Zhang",
"Mingdong Wu",
"Hao Dong"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=l6pYRbuHpO
|
@inproceedings{
zhang2023practical,
title={Practical Contextual Bandits with Feedback Graphs},
author={Mengxiao Zhang and Yuheng Zhang and Olga Vrousgou and Haipeng Luo and Paul Mineiro},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=l6pYRbuHpO}
}
|
While contextual bandit has a mature theory, effectively leveraging different feedback patterns to enhance the pace of learning remains unclear. Bandits with feedback graphs, which interpolates between the full information and bandit regimes, provides a promising framework to mitigate the statistical complexity of learning. In this paper, we propose and analyze an approach to contextual bandits with feedback graphs based upon reduction to regression. The resulting algorithms are computationally practical and achieve established minimax rates, thereby reducing the statistical complexity in real-world applications.
|
Practical Contextual Bandits with Feedback Graphs
|
[
"Mengxiao Zhang",
"Yuheng Zhang",
"Olga Vrousgou",
"Haipeng Luo",
"Paul Mineiro"
] |
Conference
|
poster
|
2302.08631
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=l6R4Go3noz
|
@inproceedings{
yang2023finegrained,
title={Fine-Grained Visual Prompting},
author={Lingfeng Yang and Yueze Wang and Xiang Li and Xinlong Wang and Jian Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=l6R4Go3noz}
}
|
Vision-Language Models (VLMs), such as CLIP, have demonstrated impressive zero-shot transfer capabilities in image-level visual perception. However, these models have shown limited performance in instance-level tasks that demand precise localization and recognition. Previous works have suggested that incorporating visual prompts, such as colorful boxes or circles, can improve the ability of models to recognize objects of interest. Nonetheless, compared to language prompting, visual prompting designs are rarely explored. Existing approaches, which employ coarse visual cues such as colorful boxes or circles, often result in sub-optimal performance due to the inclusion of irrelevant and noisy pixels. In this paper, we carefully study the visual prompting designs by exploring more fine-grained markings, such as segmentation masks and their variations. In addition, we introduce a new zero-shot framework that leverages pixel-level annotations acquired from a generalist segmentation model for fine-grained visual prompting. Consequently, our investigation reveals that a straightforward application of blur outside the target mask, referred to as the Blur Reverse Mask, exhibits exceptional effectiveness. This proposed prompting strategy leverages the precise mask annotations to reduce focus on weakly related regions while retaining spatial coherence between the target and the surrounding background. Our **F**ine-**G**rained **V**isual **P**rompting (**FGVP**) demonstrates superior performance in zero-shot comprehension of referring expressions on the RefCOCO, RefCOCO+, and RefCOCOg benchmarks. It outperforms prior methods by an average margin of 3.0\% to 4.6\%, with a maximum improvement of 12.5\% on the RefCOCO+ testA subset. The part detection experiments conducted on the PACO dataset further validate the preponderance of FGVP over existing visual prompting techniques. Code is available at https://github.com/ylingfeng/FGVP.
|
Fine-Grained Visual Prompting
|
[
"Lingfeng Yang",
"Yueze Wang",
"Xiang Li",
"Xinlong Wang",
"Jian Yang"
] |
Conference
|
poster
|
2306.04356
|
[
"https://github.com/ylingfeng/FGVP"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=l61Kp1zBwC
|
@inproceedings{
shi2023relative,
title={Relative Entropic Optimal Transport: a (Prior-aware) Matching Perspective to (Unbalanced) Classification},
author={Liangliang Shi and Haoyu Zhen and Gu Zhang and Junchi Yan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=l61Kp1zBwC}
}
|
Classification is a fundamental problem in machine learning, and considerable efforts have been recently devoted to the demanding long-tailed setting due to its prevalence in nature. Departure from the Bayesian framework, this paper rethinks classification from a matching perspective by studying the matching probability between samples and labels with optimal transport (OT) formulation. Specifically, we first propose a new variant of optimal transport, called Relative Entropic Optimal Transport (RE-OT), which guides the coupling solution to a known prior information matrix. We gives some theoretical results and their proof for RE-OT and surprisingly find RE-OT can help to deblur for barycenter images. Then we adopt inverse RE-OT for training long-tailed data and find that the loss derived from RE-OT has a similar form to Softmax-based cross-entropy loss, indicating a close connection between optimal transport and classification and the potential for transferring concepts between these two academic fields, such as barycentric projection in OT, which can map the labels back to the feature space. We further derive an epoch-varying RE-OT loss, and do the experiments on unbalanced image classification, molecule classification, instance segmentation and representation learning. Experimental results show its effectiveness.
|
Relative Entropic Optimal Transport: a (Prior-aware) Matching Perspective to (Unbalanced) Classification
|
[
"Liangliang Shi",
"Haoyu Zhen",
"Gu Zhang",
"Junchi Yan"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=l4CZCKXoSn
|
@inproceedings{
liu2023focal,
title={{FOCAL}: Contrastive Learning for Multimodal Time-Series Sensing Signals in Factorized Orthogonal Latent Space},
author={Shengzhong Liu and Tomoyoshi Kimura and Dongxin Liu and Ruijie Wang and Jinyang Li and Suhas Diggavi and Mani Srivastava and Tarek Abdelzaher},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=l4CZCKXoSn}
}
|
This paper proposes a novel contrastive learning framework, called FOCAL, for extracting comprehensive features from multimodal time-series sensing signals through self-supervised training. Existing multimodal contrastive frameworks mostly rely on the shared information between sensory modalities, but do not explicitly consider the exclusive modality information that could be critical to understanding the underlying sensing physics. Besides, contrastive frameworks for time series have not handled the temporal information locality appropriately. FOCAL solves these challenges by making the following contributions: First, given multimodal time series, it encodes each modality into a factorized latent space consisting of shared features and private features that are orthogonal to each other. The shared space emphasizes feature patterns consistent across sensory modalities through a modal-matching objective. In contrast, the private space extracts modality-exclusive information through a transformation-invariant objective. Second, we propose a temporal structural constraint for modality features, such that the average distance between temporally neighboring samples is no larger than that of temporally distant samples. Extensive evaluations are performed on four multimodal sensing datasets with two backbone encoders and two classifiers to demonstrate the superiority of FOCAL. It consistently outperforms the state-of-the-art baselines in downstream tasks with a clear margin, under different ratios of available labels. The code and self-collected dataset are available at https://github.com/tomoyoshki/focal.
|
FOCAL: Contrastive Learning for Multimodal Time-Series Sensing Signals in Factorized Orthogonal Latent Space
|
[
"Shengzhong Liu",
"Tomoyoshi Kimura",
"Dongxin Liu",
"Ruijie Wang",
"Jinyang Li",
"Suhas Diggavi",
"Mani Srivastava",
"Tarek Abdelzaher"
] |
Conference
|
poster
|
2310.20071
|
[
"https://github.com/tomoyoshki/focal"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=l3yxZS3QdT
|
@inproceedings{
chen2023bird,
title={{BIRD}: Generalizable Backdoor Detection and Removal for Deep Reinforcement Learning},
author={Xuan Chen and Wenbo Guo and Guanhong Tao and Xiangyu Zhang and Dawn Song},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=l3yxZS3QdT}
}
|
Backdoor attacks pose a severe threat to the supply chain management of deep reinforcement learning (DRL) policies. Despite initial defenses proposed in recent studies, these methods have very limited generalizability and scalability. To address this issue, we propose BIRD, a technique to detect and remove backdoors from a pretrained DRL policy in a clean environment without requiring any knowledge about the attack specifications and accessing its training process. By analyzing the unique properties and behaviors of backdoor attacks, we formulate trigger restoration as an optimization problem and design a novel metric to detect backdoored policies. We also design a finetuning method to remove the backdoor, while maintaining the agent's performance in the clean environment. We evaluate BIRD against three backdoor attacks in ten different single-agent or multi-agent environments. Our results verify the effectiveness, efficiency, and generalizability of BIRD, as well as its robustness to different attack variations and adaptions.
|
BIRD: Generalizable Backdoor Detection and Removal for Deep Reinforcement Learning
|
[
"Xuan Chen",
"Wenbo Guo",
"Guanhong Tao",
"Xiangyu Zhang",
"Dawn Song"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=l3HUgVHqGQ
|
@inproceedings{
tian2023scan,
title={Scan and Snap: Understanding Training Dynamics and Token Composition in 1-layer Transformer},
author={Yuandong Tian and Yiping Wang and Beidi Chen and Simon Shaolei Du},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=l3HUgVHqGQ}
}
|
Transformer architecture has shown impressive performance in multiple research domains and has become the backbone of many neural network models. However, there is limited understanding on how it works. In particular, with a simple predictive loss, how the representation emerges from the gradient \emph{training dynamics} remains a mystery. In this paper, for 1-layer transformer with one self-attention layer plus one decoder layer, we analyze its SGD training dynamics for the task of next token prediction in a mathematically rigorous manner. We open the black box of the dynamic process of how the self-attention layer combines input tokens, and reveal the nature of underlying inductive bias. More specifically, with the assumption (a) no positional encoding, (b) long input sequence, and (c) the decoder layer learns faster than the self-attention layer, we prove that self-attention acts as a \emph{discriminative scanning algorithm}:
starting from uniform attention, it gradually attends more to distinct key tokens for a specific next token to be predicted, and pays less attention to common key tokens that occur across different next tokens. Among distinct tokens, it progressively drops attention weights, following the order of low to high co-occurrence between the key and the query token in the training set. Interestingly, this procedure does not lead to winner-takes-all, but stops due to a \emph{phase transition} that is controllable by the learning rate of the decoder layer, leaving (almost) fixed token combination. We verify this \textbf{\emph{scan and snap}} dynamics on synthetic and real-world data (WikiText-103).
|
Scan and Snap: Understanding Training Dynamics and Token Composition in 1-layer Transformer
|
[
"Yuandong Tian",
"Yiping Wang",
"Beidi Chen",
"Simon Shaolei Du"
] |
Conference
|
poster
|
2305.16380
|
[
""
] |
https://huggingface.co/papers/2305.16380
| 1 | 4 | 0 | 4 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=l2VKZkolT7
|
@inproceedings{
jiralerspong2023feature,
title={Feature Likelihood Score: Evaluating the Generalization of Generative Models Using Samples},
author={Marco Jiralerspong and Joey Bose and Ian Gemp and Chongli Qin and Yoram Bachrach and Gauthier Gidel},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=l2VKZkolT7}
}
|
The past few years have seen impressive progress in the development of deep generative models capable of producing high-dimensional, complex, and photo-realistic data. However, current methods for evaluating such models remain incomplete: standard likelihood-based metrics do not always apply and rarely correlate with perceptual fidelity, while sample-based metrics, such as FID, are insensitive to overfitting, i.e., inability to generalize beyond the training set. To address these limitations, we propose a new metric called the Feature Likelihood Divergence (FLD), a parametric sample-based score that uses density estimation to provide a comprehensive trichotomic evaluation accounting for novelty (i.e., different from the training samples), fidelity, and diversity of generated samples. We empirically demonstrate the ability of FLD to identify specific overfitting problem cases, where previously proposed metrics fail. We also extensively evaluate FLD on various image datasets and model classes, demonstrating its ability to match intuitions of previous metrics like FID while offering a more comprehensive evaluation of generative models.
|
Feature Likelihood Divergence: Evaluating the Generalization of Generative Models Using Samples
|
[
"Marco Jiralerspong",
"Joey Bose",
"Ian Gemp",
"Chongli Qin",
"Yoram Bachrach",
"Gauthier Gidel"
] |
Conference
|
poster
|
2302.04440
|
[
"https://github.com/marcojira/fld"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kyXMU3H7RB
|
@inproceedings{
cheng2023look,
title={Look Beneath the Surface: Exploiting Fundamental Symmetry for Sample-Efficient Offline {RL}},
author={Peng Cheng and Xianyuan Zhan and Zhihao Wu and Wenjia Zhang and Youfang Lin and Shou cheng Song and Han Wang and Li Jiang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kyXMU3H7RB}
}
|
Offline reinforcement learning (RL) offers an appealing approach to real-world tasks by learning policies from pre-collected datasets without interacting with the environment. However, the performance of existing offline RL algorithms heavily depends on the scale and state-action space coverage of datasets. Real-world data collection is often expensive and uncontrollable, leading to small and narrowly covered datasets and posing significant challenges for practical deployments of offline RL. In this paper, we provide a new insight that leveraging the fundamental symmetry of system dynamics can substantially enhance offline RL performance under small datasets. Specifically, we propose a Time-reversal symmetry (T-symmetry) enforced Dynamics Model (TDM), which establishes consistency between a pair of forward and reverse latent dynamics. TDM provides both well-behaved representations for small datasets and a new reliability measure for OOD samples based on compliance with the T-symmetry. These can be readily used to construct a new offline RL algorithm (TSRL) with less conservative policy constraints and a reliable latent space data augmentation procedure. Based on extensive experiments, we find TSRL achieves great performance on small benchmark datasets with as few as 1% of the original samples, which significantly outperforms the recent offline RL algorithms in terms of data efficiency and generalizability. Code is available at:
https://github.com/pcheng2/TSRL
|
Look Beneath the Surface: Exploiting Fundamental Symmetry for Sample-Efficient Offline RL
|
[
"Peng Cheng",
"Xianyuan Zhan",
"Zhihao Wu",
"Wenjia Zhang",
"Youfang Lin",
"Shou cheng Song",
"Han Wang",
"Li Jiang"
] |
Conference
|
poster
|
2306.04220
|
[
"https://github.com/pcheng2/tsrl"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kvXcHfBghm
|
@inproceedings{
sun2023minimumrisk,
title={Minimum-Risk Recalibration of Classifiers},
author={Zeyu Sun and Dogyoon Song and Alfred Hero},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kvXcHfBghm}
}
|
Recalibrating probabilistic classifiers is vital for enhancing the reliability and accuracy of predictive models. Despite the development of numerous recalibration algorithms, there is still a lack of a comprehensive theory that integrates calibration and sharpness (which is essential for maintaining predictive power). In this paper, we introduce the concept of minimum-risk recalibration within the framework of mean-squared-error (MSE) decomposition, offering a principled approach for evaluating and recalibrating probabilistic classifiers. Using this framework, we analyze the uniform-mass binning (UMB) recalibration method and establish a finite-sample risk upper bound of order $\tilde{O}(B/n + 1/B^2)$ where $B$ is the number of bins and $n$ is the sample size. By balancing calibration and sharpness, we further determine that the optimal number of bins for UMB scales with $n^{1/3}$, resulting in a risk bound of approximately $O(n^{-2/3})$. Additionally, we tackle the challenge of label shift by proposing a two-stage approach that adjusts the recalibration function using limited labeled data from the target domain. Our results show that transferring a calibrated classifier requires significantly fewer target samples compared to recalibrating from scratch. We validate our theoretical findings through numerical simulations, which confirm the tightness of the proposed bounds, the optimal number of bins, and the effectiveness of label shift adaptation.
|
Minimum-Risk Recalibration of Classifiers
|
[
"Zeyu Sun",
"Dogyoon Song",
"Alfred Hero"
] |
Conference
|
spotlight
|
2305.10886
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kuxu4lCRr5
|
@inproceedings{
shi2023prior,
title={{PRIOR}: Personalized Prior for Reactivating the Information Overlooked in Federated Learning.},
author={Mingjia Shi and Yuhao Zhou and Kai Wang and Huaizheng Zhang and Shudong Huang and Qing Ye and Jiancheng Lv},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kuxu4lCRr5}
}
|
Classical federated learning (FL) enables training machine learning models without sharing data for privacy preservation, but heterogeneous data characteristic degrades the performance of the localized model. Personalized FL (PFL) addresses this by synthesizing personalized models from a global model via training on local data. Such a global model may overlook the specific information that the clients have been sampled. In this paper, we propose a novel scheme to inject personalized prior knowledge into the global model in each client, which attempts to mitigate the introduced incomplete information problem in PFL. At the heart of our proposed approach is a framework, the $\textit{PFL with Bregman Divergence}$ (pFedBreD), decoupling the personalized prior from the local objective function regularized by Bregman divergence for greater adaptability in personalized scenarios. We also relax the mirror descent (RMD) to extract the prior explicitly to provide optional strategies. Additionally, our pFedBreD is backed up by a convergence analysis. Sufficient experiments demonstrate that our method reaches the $\textit{state-of-the-art}$ performances on 5 datasets and outperforms other methods by up to 3.5% across 8 benchmarks. Extensive analyses verify the robustness and necessity of proposed designs. The code will be made public.
|
PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning.
|
[
"Mingjia Shi",
"Yuhao Zhou",
"Kai Wang",
"Huaizheng Zhang",
"Shudong Huang",
"Qing Ye",
"Jiancheng Lv"
] |
Conference
|
poster
|
[
"https://github.com/bdemo/pfedbred_public"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=kupNhxLc6k
|
@inproceedings{
zhang2023computing,
title={Computing Optimal Nash Equilibria in Multiplayer Games},
author={Youzhi Zhang and Bo An and Venkatramanan Siva Subrahmanian},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kupNhxLc6k}
}
|
Designing efficient algorithms to compute a Nash Equilibrium (NE) in multiplayer games is still an open challenge. In this paper, we focus on computing an NE that optimizes a given objective function. For example, when there is a team of players independently playing against an adversary in a game (e.g., several groups in a forest trying to interdict illegal loggers in green security games), these team members may need to find an NE minimizing the adversary’s utility. Finding an optimal NE in multiplayer games can be formulated as a mixed-integer bilinear program by introducing auxiliary variables to represent bilinear terms, leading to a huge number of bilinear terms, making it hard to solve. To overcome this challenge, we first propose a general framework for this formulation based on a set of correlation plans. We then develop a novel algorithm called CRM based on this framework, which uses correlation plans with their relations to strictly reduce the feasible solution space after the convex relaxation of bilinear terms while minimizing the number of correlation plans to significantly reduce the number of bilinear terms. We show that our techniques can significantly reduce the time complexity and CRM can be several orders of magnitude faster than the state-of-the-art baseline.
|
Computing Optimal Nash Equilibria in Multiplayer Games
|
[
"Youzhi Zhang",
"Bo An",
"Venkatramanan Siva Subrahmanian"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=ktYjrgOENR
|
@inproceedings{
zheng2023ddcot,
title={{DDC}oT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models},
author={Ge Zheng and Bin Yang and Jiajin Tang and Hong-Yu Zhou and Sibei Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ktYjrgOENR}
}
|
A long-standing goal of AI systems is to perform complex multimodal reasoning like humans. Recently, large language models (LLMs) have made remarkable strides in such multi-step reasoning on the language modality solely by leveraging the chain of thought (CoT) to mimic human thinking. However, the transfer of these advancements to multimodal contexts introduces heightened challenges, including but not limited to the impractical need for labor-intensive annotation and the limitations in terms of flexibility, generalizability, and explainability. To evoke CoT reasoning in multimodality, this work first conducts an in-depth analysis of these challenges posed by multimodality and presents two key insights: “keeping critical thinking” and “letting everyone do their jobs” in multimodal CoT reasoning. Furthermore, this study proposes a novel DDCoT prompting that maintains a critical attitude through negative-space prompting and incorporates multimodality into reasoning by first dividing the reasoning responsibility of LLMs into reasoning and recognition and then integrating the visual recognition capability of visual models into the joint reasoning process. The rationales generated by DDCoT not only improve the reasoning abilities of both large and small language models in zero-shot prompting and fine-tuning learning, significantly outperforming state-of-the-art methods but also exhibit impressive generalizability and explainability.
|
DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models
|
[
"Ge Zheng",
"Bin Yang",
"Jiajin Tang",
"Hong-Yu Zhou",
"Sibei Yang"
] |
Conference
|
poster
|
2310.16436
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=ktTSji9ZIs
|
@inproceedings{
knight2023multitask,
title={Multi-task learning with summary statistics},
author={Parker Knight and Rui Duan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ktTSji9ZIs}
}
|
Multi-task learning has emerged as a powerful machine learning paradigm for integrating data from multiple sources, leveraging similarities between tasks to improve overall model performance. However, the application of multi-task learning to real-world settings is hindered by data-sharing constraints, especially in healthcare settings. To address this challenge, we propose a flexible multi-task learning framework utilizing summary statistics from various sources. Additionally, we present an adaptive parameter selection approach based on a variant of Lepski's method, allowing for data-driven tuning parameter selection when only summary statistics are accessible. Our systematic non-asymptotic analysis characterizes the performance of the proposed methods under various regimes of the source datasets' sample complexity and overlap. We demonstrate our theoretical findings and the performance of the method through extensive simulations. This work offers a more flexible tool for training related models across various domains, with practical implications in genetic risk prediction and many other fields.
|
Multi-task learning with summary statistics
|
[
"Parker Knight",
"Rui Duan"
] |
Conference
|
poster
|
2307.02388
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kshC3NOP6h
|
@inproceedings{
labonte2023towards,
title={Towards Last-Layer Retraining for Group Robustness with Fewer Annotations},
author={Tyler LaBonte and Vidya Muthukumar and Abhishek Kumar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kshC3NOP6h}
}
|
Empirical risk minimization (ERM) of neural networks is prone to over-reliance on spurious correlations and poor generalization on minority groups. The recent deep feature reweighting (DFR) technique achieves state-of-the-art group robustness via simple last-layer retraining, but it requires held-out group and class annotations to construct a group-balanced reweighting dataset. In this work, we examine this impractical requirement and find that last-layer retraining can be surprisingly effective with no group annotations (other than for model selection) and only a handful of class annotations. We first show that last-layer retraining can greatly improve worst-group accuracy even when the reweighting dataset has only a small proportion of worst-group data. This implies a "free lunch" where holding out a subset of training data to retrain the last layer can substantially outperform ERM on the entire dataset with no additional data, annotations, or computation for training. To further improve group robustness, we introduce a lightweight method called selective last-layer finetuning (SELF), which constructs the reweighting dataset using misclassifications or disagreements. Our experiments present the first evidence that model disagreement upsamples worst-group data, enabling SELF to nearly match DFR on four well-established benchmarks across vision and language tasks with no group annotations and less than 3% of the held-out class annotations.
|
Towards Last-layer Retraining for Group Robustness with Fewer Annotations
|
[
"Tyler LaBonte",
"Vidya Muthukumar",
"Abhishek Kumar"
] |
Conference
|
poster
|
2309.08534
|
[
"https://github.com/tmlabonte/last-layer-retraining"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=ks7Mf5lzSx
|
@inproceedings{
an2023spatialrank,
title={SpatialRank: Urban Event Ranking with {NDCG} Optimization on Spatiotemporal Data},
author={BANG AN and Xun Zhou and Yongjian Zhong and Tianbao Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ks7Mf5lzSx}
}
|
The problem of urban event ranking aims at predicting the top-$k$ most risky locations of future events such as traffic accidents and crimes. This problem is of fundamental importance to public safety and urban administration especially when limited resources are available. The problem is, however, challenging due to complex and dynamic spatio-temporal correlations between locations, uneven distribution of urban events in space, and the difficulty to correctly rank nearby locations with similar features. Prior works on event forecasting mostly aim at accurately predicting the actual risk score or counts of events for all the locations. Rankings obtained as such usually have low quality due to prediction errors. Learning-to-rank methods directly optimize measures such as Normalized Discounted Cumulative Gain (NDCG), but cannot handle the spatiotemporal autocorrelation existing among locations. Due to the common assumption that items are independent. In this paper, we bridge the gap by proposing a novel spatial event ranking approach named SpatialRank. SpatialRank features adaptive graph convolution layers that dynamically learn the spatiotemporal dependencies across locations from data. In addition, the model optimizes through surrogates a hybrid NDCG loss with a spatial component to better rank neighboring spatial locations. We design an importance-sampling with a spatial filtering algorithm to effectively evaluate the loss during training. Comprehensive experiments on three real-world datasets demonstrate that SpatialRank can effectively identify the top riskiest locations of crimes and traffic accidents and outperform state-of-art methods in terms of NDCG by up to 12.7%.
|
SpatialRank: Urban Event Ranking with NDCG Optimization on Spatiotemporal Data
|
[
"BANG AN",
"Xun Zhou",
"Yongjian Zhong",
"Tianbao Yang"
] |
Conference
|
poster
|
2310.00270
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kqBUgrkm1c
|
@inproceedings{
busa-fekete2023easy,
title={Easy Learning from Label Proportions},
author={Robert Istvan Busa-Fekete and Heejin Choi and Travis Dick and Claudio Gentile and Andres Munoz medina},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kqBUgrkm1c}
}
|
We consider the problem of Learning from Label Proportions (LLP), a weakly supervised classification setup where instances are grouped into i.i.d. “bags”, and only the frequency of class labels at each bag is available. Albeit, the objective of the learner is to achieve low task loss at an individual instance level. Here we propose EASYLLP, a flexible and simple-to-implement debiasing approach based on aggregate labels, which operates on arbitrary loss functions. Our technique allows us to accurately estimate the expected loss of an arbitrary model at an individual level. We elucidate the differences between our method and standard methods based on label proportion matching, in terms of applicability and optimality conditions. We showcase the flexibility of our approach compared to alternatives by applying our method to popular learning frameworks, like Empirical Risk Minimization (ERM) and Stochastic Gradient Descent (SGD) with provable guarantees on instance level performance. Finally, we validate our theoretical results on multiple datasets, empirically illustrating the conditions under which our algorithm is expected to perform better or worse than previous LLP approaches
|
Easy Learning from Label Proportions
|
[
"Robert Istvan Busa-Fekete",
"Heejin Choi",
"Travis Dick",
"Claudio Gentile",
"Andres Munoz medina"
] |
Conference
|
poster
|
2302.03115
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=konBXvt2iS
|
@inproceedings{
wang2023understanding,
title={Understanding Multi-phase Optimization Dynamics and Rich Nonlinear Behaviors of Re{LU} Networks},
author={Mingze Wang and Chao Ma},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=konBXvt2iS}
}
|
The training process of ReLU neural networks often exhibits complicated nonlinear phenomena.
The nonlinearity of models and non-convexity of loss pose significant challenges for theoretical analysis. Therefore, most previous theoretical works on the optimization dynamics of neural networks focus either on local analysis (like the end of training) or approximate linear models (like Neural Tangent Kernel).
In this work, we conduct a complete theoretical characterization of the training process of a two-layer ReLU network trained by Gradient Flow on a linearly separable data. In this specific setting, our analysis captures the whole optimization process starting from random initialization to final convergence.
Despite the relatively simple model and data that we studied, we reveal four different phases from the whole training process showing a general simplifying-to-complicating learning trend.
Specific nonlinear behaviors can also be precisely identified and captured theoretically, such as
initial condensation, saddle-to-plateau dynamics, plateau escape, changes of activation patterns,
learning with increasing complexity, etc.
|
Understanding Multi-phase Optimization Dynamics and Rich Nonlinear Behaviors of ReLU Networks
|
[
"Mingze Wang",
"Chao Ma"
] |
Conference
|
spotlight
|
2305.12467
|
[
"https://github.com/wmz9/understanding_multi-phase_optimization_neurips2023"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kmbG9iBRIb
|
@inproceedings{
sun2023accountability,
title={Accountability in Offline Reinforcement Learning: Explaining Decisions with a Corpus of Examples},
author={Hao Sun and Alihan H{\"u}y{\"u}k and Daniel Jarrett and Mihaela van der Schaar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kmbG9iBRIb}
}
|
Learning controllers with offline data in decision-making systems is an essential area of research due to its potential to reduce the risk of applications in real-world systems. However, in responsibility-sensitive settings such as healthcare, decision accountability is of paramount importance, yet has not been adequately addressed by the literature.
This paper introduces the Accountable Offline Controller (AOC) that employs the offline dataset as the Decision Corpus and performs accountable control based on a tailored selection of examples, referred to as the Corpus Subset. AOC operates effectively in low-data scenarios, can be extended to the strictly offline imitation setting, and displays qualities of both conservation and adaptability.
We assess AOC's performance in both simulated and real-world healthcare scenarios, emphasizing its capability to manage offline control tasks with high levels of performance while maintaining accountability.
|
Accountability in Offline Reinforcement Learning: Explaining Decisions with a Corpus of Examples
|
[
"Hao Sun",
"Alihan Hüyük",
"Daniel Jarrett",
"Mihaela van der Schaar"
] |
Conference
|
poster
|
2310.07747
|
[
""
] |
https://huggingface.co/papers/2310.07747
| 1 | 1 | 0 | 4 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=kjkLJ7NJJZ
|
@inproceedings{
uehara2023offline,
title={Offline Minimax Soft-Q-learning Under Realizability and Partial Coverage},
author={Masatoshi Uehara and Nathan Kallus and Jason D. Lee and Wen Sun},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kjkLJ7NJJZ}
}
|
We consider offline reinforcement learning (RL) where we only have only access to offline data. In contrast to numerous offline RL algorithms that necessitate the uniform coverage of the offline data over state and action space, we propose value-based algorithms with PAC guarantees under partial coverage, specifically, coverage of offline data against a single policy, and realizability of soft Q-function (a.k.a., entropy-regularized Q-function) and another function, which is defined as a solution to a saddle point of certain minimax optimization problem). Furthermore, we show the analogous result for Q-functions instead of soft Q-functions. To attain these guarantees, we use novel algorithms with minimax loss functions to accurately estimate soft Q-functions and Q-functions with
-convergence guarantees measured on the offline data. We introduce these loss functions by casting the estimation problems into nonlinear convex optimization problems and taking the Lagrange functions.
|
Offline Minimax Soft-Q-learning Under Realizability and Partial Coverage
|
[
"Masatoshi Uehara",
"Nathan Kallus",
"Jason D. Lee",
"Wen Sun"
] |
Conference
|
poster
|
2302.02392
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kjMGHTo8Cs
|
@inproceedings{
brandfonbrener2023inverse,
title={Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation},
author={David Brandfonbrener and Ofir Nachum and Joan Bruna},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kjMGHTo8Cs}
}
|
In recent years, domains such as natural language processing and image recognition have popularized the paradigm of using large datasets to pretrain representations that can be effectively transferred to downstream tasks. In this work we evaluate how such a paradigm should be done in imitation learning, where both pretraining and finetuning data are trajectories collected by experts interacting with an unknown environment. Namely, we consider a setting where the pretraining corpus consists of multitask demonstrations and the task for each demonstration is set by an unobserved latent context variable. The goal is to use the pretraining corpus to learn a low dimensional representation of the high dimensional (e.g., visual) observation space which can be transferred to a novel context for finetuning on a limited dataset of demonstrations. Among a variety of possible pretraining objectives, we argue that inverse dynamics modeling -- i.e., predicting an action given the observations appearing before and after it in the demonstration -- is well-suited to this setting. We provide empirical evidence of this claim through evaluations on a variety of simulated visuomotor manipulation problems. While previous work has attempted various theoretical explanations regarding the benefit of inverse dynamics modeling, we find that these arguments are insufficient to explain the empirical advantages often observed in our settings, and so we derive a novel analysis using a simple but general environment model.
|
Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation
|
[
"David Brandfonbrener",
"Ofir Nachum",
"Joan Bruna"
] |
Conference
|
poster
|
[
""
] |
https://huggingface.co/papers/2305.16985
| 0 | 0 | 0 | 3 | 1 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kj33zJ9Vue
|
@inproceedings{
rossi2023on,
title={On permutation symmetries in Bayesian neural network posteriors: a variational perspective},
author={Simone Rossi and Ankit Singh and Thomas Hannagan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kj33zJ9Vue}
}
|
The elusive nature of gradient-based optimization in neural networks is tied to their loss landscape geometry, which is poorly understood. However recent work has brought solid evidence that there is essentially no loss barrier between the local solutions of gradient descent, once accounting for weight-permutations that leave the network's computation unchanged. This raises questions for approximate inference in Bayesian neural networks (BNNs), where we are interested in marginalizing over multiple points in the loss landscape.
In this work, we first extend the formalism of marginalized loss barrier and solution interpolation to BNNs, before proposing a matching algorithm to search for linearly connected solutions. This is achieved by aligning the distributions of two independent approximate Bayesian solutions with respect to permutation matrices. Building on the work of Ainsworth et al. (2023), we frame the problem as a combinatorial optimization one, using an approximation to the sum of bilinear assignment problem. We then experiment on a variety of architectures and datasets, finding nearly zero marginalized loss barriers for linearly connected solutions.
|
On permutation symmetries in Bayesian neural network posteriors: a variational perspective
|
[
"Simone Rossi",
"Ankit Singh",
"Thomas Hannagan"
] |
Conference
|
poster
|
2310.10171
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kfWzpZvEUh
|
@inproceedings{
maraval2023endtoend,
title={End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes},
author={Alexandre Max Maraval and Matthieu Zimmer and Antoine Grosnit and Haitham Bou Ammar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kfWzpZvEUh}
}
|
Meta-Bayesian optimisation (meta-BO) aims to improve the sample efficiency of Bayesian optimisation by leveraging data from related tasks. While previous methods successfully meta-learn either a surrogate model or an acquisition function independently, joint training of both components remains an open challenge. This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures. We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data. Early on, we notice that training transformer-based neural processes from scratch with RL is challenging due to insufficient supervision, especially when rewards are sparse. We formalise this claim with a combinatorial analysis showing that the widely used notion of regret as a reward signal exhibits a logarithmic sparsity pattern in trajectory lengths. To tackle this problem, we augment the RL objective with an auxiliary task that guides part of the architecture to learn a valid probabilistic model as an inductive bias. We demonstrate that our method achieves state-of-the-art regret results against various baselines in experiments on standard hyperparameter optimisation tasks and also outperforms others in the real-world problems of mixed-integer programming tuning, antibody design, and logic synthesis for electronic design automation.
|
End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes
|
[
"Alexandre Max Maraval",
"Matthieu Zimmer",
"Antoine Grosnit",
"Haitham Bou Ammar"
] |
Conference
|
poster
|
2305.15930
|
[
"https://github.com/huawei-noah/hebo"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=ke3RgcDmfO
|
@inproceedings{
chen2023textdiffuser,
title={TextDiffuser: Diffusion Models as Text Painters},
author={Jingye Chen and Yupan Huang and Tengchao Lv and Lei Cui and Qifeng Chen and Furu Wei},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ke3RgcDmfO}
}
|
Diffusion models have gained increasing attention for their impressive generation abilities but currently struggle with rendering accurate and coherent text. To address this issue, we introduce TextDiffuser, focusing on generating images with visually appealing text that is coherent with backgrounds. TextDiffuser consists of two stages: first, a Transformer model generates the layout of keywords extracted from text prompts, and then diffusion models generate images conditioned on the text prompt and the generated layout. Additionally, we contribute the first large-scale text images dataset with OCR annotations, MARIO-10M, containing 10 million image-text pairs with text recognition, detection, and character-level segmentation annotations. We further collect the MARIO-Eval benchmark to serve as a comprehensive tool for evaluating text rendering quality. Through experiments and user studies, we demonstrate that TextDiffuser is flexible and controllable to create high-quality text images using text prompts alone or together with text template images, and conduct text inpainting to reconstruct incomplete images with text. We will make the code, model and dataset publicly available.
|
TextDiffuser: Diffusion Models as Text Painters
|
[
"Jingye Chen",
"Yupan Huang",
"Tengchao Lv",
"Lei Cui",
"Qifeng Chen",
"Furu Wei"
] |
Conference
|
poster
|
2305.10855
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kdFR6IUEW6
|
@inproceedings{
ren2023prompt,
title={Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition},
author={Shuhuai Ren and Aston Zhang and Yi Zhu and Shuai Zhang and Shuai Zheng and Mu Li and Alex Smola and Xu Sun},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kdFR6IUEW6}
}
|
This work proposes POMP, a prompt pre-training method for vision-language models. Being memory and computation efficient, POMP enables the learned prompt to condense semantic information for a rich set of visual concepts with over twenty-thousand classes. Once pre-trained, the prompt with a strong transferable ability can be directly plugged into a variety of visual recognition tasks including image classification, semantic segmentation, and object detection, to boost recognition performances in a zero-shot manner. Empirical evaluation shows that POMP achieves state-of-the-art performances on 21 datasets, e.g., 67.0% average accuracy on 10 classification datasets (+3.1% compared to CoOp) and 84.4 hIoU on open-vocabulary Pascal VOC segmentation (+6.9 compared to ZSSeg).
|
Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition
|
[
"Shuhuai Ren",
"Aston Zhang",
"Yi Zhu",
"Shuai Zhang",
"Shuai Zheng",
"Mu Li",
"Alex Smola",
"Xu Sun"
] |
Conference
|
poster
|
2304.04704
|
[
"https://github.com/amazon-science/prompt-pretraining"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kXfrlWXLwH
|
@inproceedings{
engels2023dessert,
title={{DESSERT}: An Efficient Algorithm for Vector Set Search with Vector Set Queries},
author={Joshua Engels and Benjamin Coleman and Vihan Lakshman and Anshumali Shrivastava},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kXfrlWXLwH}
}
|
We study the problem of $\text{\emph{vector set search}}$ with $\text{\emph{vector set queries}}$. This task is analogous to traditional near-neighbor search, with the exception that both the query and each element in the collection are $\text{\textit{sets}}$ of vectors. We identify this problem as a core subroutine for semantic search applications and find that existing solutions are unacceptably slow. Towards this end, we present a new approximate search algorithm, DESSERT ($\text{\bf D}$ESSERT $\text{\bf E}$ffeciently $\text{\bf S}$earches $\text{\bf S}$ets of $\text{\bf E}$mbeddings via $\text{\bf R}$etrieval $\text{\bf T}$ables). DESSERT is a general tool with strong theoretical guarantees and excellent empirical performance. When we integrate DESSERT into ColBERT, a state-of-the-art semantic search model, we find a 2-5x speedup on the MS MARCO and LoTTE retrieval benchmarks with minimal loss in recall, underscoring the effectiveness and practical applicability of our proposal.
|
DESSERT: An Efficient Algorithm for Vector Set Search with Vector Set Queries
|
[
"Joshua Engels",
"Benjamin Coleman",
"Vihan Lakshman",
"Anshumali Shrivastava"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=kXOXrVnwbb
|
@inproceedings{
gu2023dataseg,
title={DaTaSeg: Taming a Universal Multi-Dataset Multi-Task Segmentation Model},
author={Xiuye Gu and Yin Cui and Jonathan Huang and Abdullah Rashwan and Xuan Yang and Xingyi Zhou and Golnaz Ghiasi and Weicheng Kuo and Huizhong Chen and Liang-Chieh Chen and David A Ross},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kXOXrVnwbb}
}
|
Observing the close relationship among panoptic, semantic and instance segmentation tasks, we propose to train a universal multi-dataset multi-task segmentation model: DaTaSeg. We use a shared representation (mask proposals with class predictions) for all tasks. To tackle task discrepancy, we adopt different merge operations and post-processing for different tasks. We also leverage weak-supervision, allowing our segmentation model to benefit from cheaper bounding box annotations. To share knowledge across datasets, we use text embeddings from the same semantic embedding space as classifiers and share all network parameters among datasets. We train DaTaSeg on ADE semantic, COCO panoptic, and Objects365 detection datasets. DaTaSeg improves performance on all datasets, especially small-scale datasets, achieving 54.0 mIoU on ADE semantic and 53.5 PQ on COCO panoptic. DaTaSeg also enables weakly-supervised knowledge transfer on ADE panoptic and Objects365 instance segmentation. Experiments show DaTaSeg scales with the number of training datasets and enables open-vocabulary segmentation through direct transfer. In addition, we annotate an Objects365 instance segmentation set of 1,000 images and release it as a public evaluation benchmark on https://laoreja.github.io/dataseg.
|
DaTaSeg: Taming a Universal Multi-Dataset Multi-Task Segmentation Model
|
[
"Xiuye Gu",
"Yin Cui",
"Jonathan Huang",
"Abdullah Rashwan",
"Xuan Yang",
"Xingyi Zhou",
"Golnaz Ghiasi",
"Weicheng Kuo",
"Huizhong Chen",
"Liang-Chieh Chen",
"David A Ross"
] |
Conference
|
poster
|
2306.01736
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kVfHQV668B
|
@inproceedings{
huang2023towards,
title={Towards Efficient Pre-Trained Language Model via Feature Correlation Distillation},
author={Kun Huang and Xin Guo and Meng Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kVfHQV668B}
}
|
Knowledge Distillation (KD) has emerged as a promising approach for compressing large Pre-trained Language Models (PLMs). The performance of KD relies on how to effectively formulate and transfer the knowledge from the teacher model to the student model. Prior arts mainly focus on directly aligning output features from the transformer block, which may impose overly strict constraints on the student model's learning process and complicate the training process by introducing extra parameters and computational cost. Moreover, our analysis indicates that the different relations within self-attention, as adopted in other works, involves more computation complexities and can easily be constrained by the number of heads, potentially leading to suboptimal solutions.
To address these issues, we propose a novel approach that builds relationships directly from output features. Specifically, we introduce token-level and sequence-level relations concurrently
to fully exploit the knowledge from the teacher model. Furthermore, we propose a correlation-based distillation loss to alleviate the exact match properties inherent in traditional KL divergence or MSE loss functions. Our method, dubbed FCD, presents a simple yet effective method to compress various architectures (BERT, RoBERTa, and GPT) and model sizes (base-size and large-size).
Extensive experimental results demonstrate that our distilled, smaller language models significantly surpass existing KD methods across various NLP tasks.
|
Towards Efficient Pre-Trained Language Model via Feature Correlation Distillation
|
[
"Kun Huang",
"Xin Guo",
"Meng Wang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=kS8rIH43Zc
|
@inproceedings{
zhang2023bayesian,
title={Bayesian Active Causal Discovery with Multi-Fidelity Experiments},
author={Zeyu Zhang and Chaozhuo Li and Xu Chen and Xing Xie},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kS8rIH43Zc}
}
|
This paper studies the problem of active causal discovery when the experiments can be done based on multi-fidelity oracles, where higher fidelity experiments are more precise and expensive, while the lower ones are cheaper but less accurate. In this paper, we formally define the task of multi-fidelity active causal discovery, and design a probabilistic model for solving this problem. In specific, we first introduce a mutual-information based acquisition function to determine which variable should be intervened at which fidelity, and then a cascading model is proposed to capture the correlations between different fidelity oracles. Beyond the above basic framework, we also extend it to the batch intervention scenario. We find that the theoretical foundations behind the widely used and efficient greedy method do not hold in our problem. To solve this problem, we introduce a new concept called $\epsilon$-submodular, and design a constraint based fidelity model to theoretically validate the greedy method. We conduct extensive experiments to demonstrate the effectiveness of our model.
|
Bayesian Active Causal Discovery with Multi-Fidelity Experiments
|
[
"Zeyu Zhang",
"Chaozhuo Li",
"Xu Chen",
"Xing Xie"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=kS7ED7eE74
|
@inproceedings{
maskey2023a,
title={A Fractional Graph Laplacian Approach to Oversmoothing},
author={Sohir Maskey and Raffaele Paolino and Aras Bacho and Gitta Kutyniok},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kS7ED7eE74}
}
|
Graph neural networks (GNNs) have shown state-of-the-art performances in various applications. However, GNNs often struggle to capture long-range dependencies in graphs due to oversmoothing. In this paper, we generalize the concept of oversmoothing from undirected to directed graphs. To this aim, we extend the notion of Dirichlet energy by considering a directed symmetrically normalized Laplacian. As vanilla graph convolutional networks are prone to oversmooth, we adopt a neural graph ODE framework. Specifically, we propose fractional graph Laplacian neural ODEs, which describe non-local dynamics. We prove that our approach allows propagating information between distant nodes while maintaining a low probability of long-distance jumps. Moreover, we show that our method is more flexible with respect to the convergence of the graph’s Dirichlet energy, thereby mitigating oversmoothing. We conduct extensive experiments on synthetic and real-world graphs, both directed and undirected, demonstrating our method’s versatility across diverse graph homophily levels. Our
code is available at https://github.com/RPaolino/fLode
|
A Fractional Graph Laplacian Approach to Oversmoothing
|
[
"Sohir Maskey",
"Raffaele Paolino",
"Aras Bacho",
"Gitta Kutyniok"
] |
Conference
|
poster
|
2305.13084
|
[
"https://github.com/rpaolino/flode"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kRdaTkaBwC
|
@inproceedings{
yu2023inferring,
title={Inferring Hybrid Neural Fluid Fields from Videos},
author={Hong-Xing Yu and Yang Zheng and Yuan Gao and Yitong Deng and Bo Zhu and Jiajun Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kRdaTkaBwC}
}
|
We study recovering fluid density and velocity from sparse multiview videos. Existing neural dynamic reconstruction methods predominantly rely on optical flows; therefore, they cannot accurately estimate the density and uncover the underlying velocity due to the inherent visual ambiguities of fluid velocity, as fluids are often shapeless and lack stable visual features. The challenge is further pronounced by the turbulent nature of fluid flows, which calls for properly designed fluid velocity representations. To address these challenges, we propose hybrid neural fluid fields (HyFluid), a neural approach to jointly infer fluid density and velocity fields. Specifically, to deal with visual ambiguities of fluid velocity, we introduce a set of physics-based losses that enforce inferring a physically plausible velocity field, which is divergence-free and drives the transport of density. To deal with the turbulent nature of fluid velocity, we design a hybrid neural velocity representation that includes a base neural velocity field that captures most irrotational energy and a vortex particle-based velocity that models residual turbulent velocity. We show that our method enables recovering vortical flow details. Our approach opens up possibilities for various learning and reconstruction applications centered around 3D incompressible flow, including fluid re-simulation and editing, future prediction, and neural dynamic scene composition. Project website: https://kovenyu.com/HyFluid/
|
Inferring Hybrid Neural Fluid Fields from Videos
|
[
"Hong-Xing Yu",
"Yang Zheng",
"Yuan Gao",
"Yitong Deng",
"Bo Zhu",
"Jiajun Wu"
] |
Conference
|
poster
|
2312.06561
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kR5ycmBclj
|
@inproceedings{
choi2023nutrea,
title={NuTrea: Neural Tree Search for Context-guided Multi-hop {KGQA}},
author={Hyeong Kyu Choi and Seunghun Lee and Jaewon Chu and Hyunwoo J. Kim},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kR5ycmBclj}
}
|
Multi-hop Knowledge Graph Question Answering (KGQA) is a task that involves retrieving nodes from a knowledge graph (KG) to answer natural language questions. Recent GNN-based approaches formulate this task as a KG path searching problem, where messages are sequentially propagated from the seed node towards the answer nodes. However, these messages are past-oriented, and they do not consider the full KG context. To make matters worse, KG nodes often represent pronoun entities and are sometimes encrypted, being uninformative in selecting between paths. To address these problems, we propose Neural Tree Search (NuTrea), a tree search-based GNN model that incorporates the broader KG context. Our model adopts a message-passing scheme that probes the unreached subtree regions to boost the past-oriented embeddings. In addition, we introduce the Relation Frequency-Inverse Entity Frequency (RF-IEF) node embedding that considers the global KG context to better characterize ambiguous KG nodes. The general effectiveness of our approach is demonstrated through experiments on three major multi-hop KGQA benchmark datasets, and our extensive analyses further validate its expressiveness and robustness. Overall, NuTrea provides a powerful means to query the KG with complex natural language questions. Code is available at https://github.com/mlvlab/NuTrea.
|
NuTrea: Neural Tree Search for Context-guided Multi-hop KGQA
|
[
"Hyeong Kyu Choi",
"Seunghun Lee",
"Jaewon Chu",
"Hyunwoo J. Kim"
] |
Conference
|
poster
|
2310.15484
|
[
"https://github.com/mlvlab/nutrea"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kR21XsZeAr
|
@inproceedings{
bai2023subclassdominant,
title={Subclass-Dominant Label Noise: A Counterexample for the Success of Early Stopping},
author={Yingbin Bai and Zhongyi Han and Erkun Yang and Jun Yu and Bo Han and Dadong Wang and Tongliang Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kR21XsZeAr}
}
|
In this paper, we empirically investigate a previously overlooked and widespread type of label noise, subclass-dominant label noise (SDN). Our findings reveal that, during the early stages of training, deep neural networks can rapidly memorize mislabeled examples in SDN. This phenomenon poses challenges in effectively selecting confident examples using conventional early stopping techniques. To address this issue, we delve into the properties of SDN and observe that long-trained representations are superior at capturing the high-level semantics of mislabeled examples, leading to a clustering effect where similar examples are grouped together. Based on this observation, we propose a novel method called NoiseCluster that leverages the geometric structures of long-trained representations to identify and correct SDN. Our experiments demonstrate that NoiseCluster outperforms state-of-the-art baselines on both synthetic and real-world datasets, highlighting the importance of addressing SDN in learning with noisy labels. The code is available at https://github.com/tmllab/2023_NeurIPS_SDN.
|
Subclass-Dominant Label Noise: A Counterexample for the Success of Early Stopping
|
[
"Yingbin Bai",
"Zhongyi Han",
"Erkun Yang",
"Jun Yu",
"Bo Han",
"Dadong Wang",
"Tongliang Liu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=kPfd3pcwHV
|
@inproceedings{
spaeh2023online,
title={Online Ad Allocation with Predictions},
author={Fabian Christian Spaeh and Alina Ene},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kPfd3pcwHV}
}
|
Display Ads and the generalized assignment problem are two well-studied online packing problems with important applications in ad allocation and other areas. In both problems, ad impressions arrive online and have to be allocated immediately to budget-constrained advertisers. Worst-case algorithms that achieve the ideal competitive ratio are known for both problems, but might act overly conservative given the predictable and usually tame nature of real-world input. Given this discrepancy, we develop an algorithm for both problems that incorporate machine-learned predictions and can thus improve the performance beyond the worst-case. Our algorithm is based on the work of Feldman et al. (2009) and similar in nature to Mahdian et al. (2007) who were the first to develop a learning-augmented algorithm for the related, but more structured Ad Words problem. We use a novel analysis to show that our algorithm is able to capitalize on a good prediction, while being robust against poor predictions. We experimentally evaluate our algorithm on synthetic and real-world data on a wide range of predictions. Our algorithm is consistently outperforming the worst-case algorithm without predictions.
|
Online Ad Allocation with Predictions
|
[
"Fabian Christian Spaeh",
"Alina Ene"
] |
Conference
|
poster
|
2302.01827
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kMueEV8Eyy
|
@inproceedings{
marcotte2023abide,
title={Abide by the law and follow the flow: conservation laws for gradient flows},
author={Sibylle Marcotte and R{\'e}mi Gribonval and Gabriel Peyr{\'e}},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kMueEV8Eyy}
}
|
Understanding the geometric properties of gradient descent dynamics is a key ingredient in deciphering the recent success of very large machine learning models. A striking observation is that trained over-parameterized models retain some properties of the optimization initialization. This "implicit bias" is believed to be responsible for some favorable properties of the trained models and could explain their good generalization properties. The purpose of this article is threefold. First, we rigorously expose the definition and basic properties of "conservation laws", that define quantities conserved during gradient flows of a given model (e.g. of a ReLU network with a given architecture) with any training data and any loss. Then we explain how to find the maximal number of independent conservation laws
by performing finite-dimensional algebraic manipulations on the Lie algebra generated by the Jacobian of the model. Finally, we provide algorithms to: a) compute a family of polynomial laws; b) compute the maximal number of (not necessarily polynomial) independent conservation laws. We provide showcase examples that we fully work out theoretically. Besides, applying the two algorithms confirms for a number of ReLU network architectures that all known laws are recovered by the algorithm, and that there are no other independent laws. Such computational tools pave the way to understanding desirable properties of optimization initialization in large machine learning models.
|
Abide by the law and follow the flow: conservation laws for gradient flows
|
[
"Sibylle Marcotte",
"Rémi Gribonval",
"Gabriel Peyré"
] |
Conference
|
oral
|
2307.00144
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kMmAYbT0VL
|
@inproceedings{
kasten2023point,
title={Point Cloud Completion with Pretrained Text-to-Image Diffusion Models},
author={Yoni Kasten and Ohad Rahamim and Gal Chechik},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kMmAYbT0VL}
}
|
Point cloud data collected in real-world applications are often incomplete. This is because they are observed from partial viewpoints, which capture only a specific perspective or angle, or due to occlusion and low resolution. Existing completion approaches rely on datasets of specific predefined objects to guide the completion of incomplete, and possibly noisy, point clouds. However, these approaches perform poorly with Out-Of-Distribution (OOD) objects, which are either absent from the dataset or poorly represented. In recent years, the field of text-guided image generation has made significant progress, leading to major breakthroughs in text guided shape generation. We describe an approach called SDS-Complete that uses a pre-trained text-to-image diffusion model and leverages the text semantic of a given incomplete point cloud of an object, to obtain a complete surface representation. SDS-Complete can complete a variety of objects at test time optimization without the need for an expensive collection of 3D information. We evaluate SDS-Complete on incomplete scanned objects, captured by real-world depth sensors and LiDAR scanners, and demonstrate that is effective in handling objects which are typically absent from common datasets.
|
Point Cloud Completion with Pretrained Text-to-Image Diffusion Models
|
[
"Yoni Kasten",
"Ohad Rahamim",
"Gal Chechik"
] |
Conference
|
poster
|
[
"https://github.com/NVlabs/sds-complete"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=kLIieSS2P3
|
@inproceedings{
salehkalaibar2023on,
title={On the choice of Perception Loss Function for Learned Video Compression},
author={Sadaf Salehkalaibar and Truong Buu Phan and Jun Chen and Wei Yu and Ashish J Khisti},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kLIieSS2P3}
}
|
We study causal, low-latency, sequential video compression when the output is subjected to both a mean squared-error (MSE) distortion loss as well as a perception loss to target realism. Motivated by prior approaches, we consider two different perception loss functions (PLFs). The first, PLF-JD, considers the joint distribution (JD) of all the video frames up to the current one, while the second metric, PLF-FMD, considers the framewise marginal distributions (FMD) between the source and reconstruction. Using information theoretic analysis and deep-learning based experiments, we demonstrate that the choice of PLF can have a significant effect on the reconstruction, especially at low-bit rates. In particular, while the reconstruction based on PLF-JD can better preserve the temporal correlation across frames, it also imposes a significant penalty in distortion compared to PLF-FMD and further makes it more difficult to recover from errors made in the earlier output frames. Although the choice of PLF decisively affects reconstruction quality, we also demonstrate that it may not be essential to commit to a particular PLF during encoding and the choice of PLF can be delegated to the decoder. In particular, encoded representations generated by training a system to minimize the MSE (without requiring either PLF) can be {\em near universal} and can generate close to optimal reconstructions for either choice of PLF at the decoder. We validate our results using (one-shot) information-theoretic analysis, detailed study of the rate-distortion-perception tradeoff of the Gauss-Markov source model as well as deep-learning based experiments on moving MNIST and KTH datasets.
|
On the choice of Perception Loss Function for Learned Video Compression
|
[
"Sadaf Salehkalaibar",
"Truong Buu Phan",
"Jun Chen",
"Wei Yu",
"Ashish J Khisti"
] |
Conference
|
poster
|
2305.19301
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kKXJkiniOx
|
@inproceedings{
duan2023condaformer,
title={ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding},
author={Lunhao Duan and Shanshan Zhao and Nan Xue and Mingming Gong and Gui-Song Xia and Dacheng Tao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kKXJkiniOx}
}
|
Transformers have been recently explored for 3D point cloud understanding with impressive progress achieved. A large number of points, over 0.1 million, make the global self-attention infeasible for point cloud data. Thus, most methods propose to apply the transformer in a local region, e.g., spherical or cubic window. However, it still contains a large number of Query-Key pairs, which requires high computational costs. In addition, previous methods usually learn the query, key, and value using a linear projection without modeling the local 3D geometric structure. In this paper, we attempt to reduce the costs and model the local geometry prior by developing a new transformer block, named ConDaFormer. Technically, ConDaFormer disassembles the cubic window into three orthogonal 2D planes, leading to fewer points when modeling the attention in a similar range. The disassembling operation is beneficial to enlarging the range of attention without increasing the computational complexity, but ignores some contexts. To provide a remedy, we develop a local structure enhancement strategy that introduces a depth-wise convolution before and after the attention. This scheme can also capture the local geometric information. Taking advantage of these designs, ConDaFormer captures both long-range contextual information and local priors. The effectiveness is demonstrated by experimental results on several 3D point cloud understanding benchmarks. Our code will be available.
|
ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding
|
[
"Lunhao Duan",
"Shanshan Zhao",
"Nan Xue",
"Mingming Gong",
"Gui-Song Xia",
"Dacheng Tao"
] |
Conference
|
poster
|
2312.11112
|
[
"https://github.com/lhduan/condaformer"
] |
https://huggingface.co/papers/2312.11112
| 0 | 0 | 0 | 6 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=kKFDMtpeDW
|
@inproceedings{
cai2023on,
title={On Learning Necessary and Sufficient Causal Graphs},
author={Hengrui Cai and Yixin Wang and Michael Jordan and Rui Song},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kKFDMtpeDW}
}
|
The causal revolution has stimulated interest in understanding complex relationships in various fields. Most of the existing methods aim to discover causal relationships among all variables within a complex large-scale graph. However, in practice, only a small subset of variables in the graph are relevant to the outcomes of interest. Consequently, causal estimation with the full causal graph---particularly given limited data---could lead to numerous *falsely discovered, spurious* variables that exhibit high correlation with, but exert no causal impact on, the target outcome. In this paper, we propose learning a class of *necessary and sufficient causal graphs (NSCG)* that exclusively comprises causally relevant variables for an outcome of interest, which we term *causal features*. The key idea is to employ *probabilities of causation* to systematically evaluate the importance of features in the causal graph, allowing us to identify a subgraph relevant to the outcome of interest. To learn NSCG from data, we develop a *necessary and sufficient causal structural learning (NSCSL)* algorithm, by establishing theoretical properties and relationships between probabilities of causation and natural causal effects of features. Across empirical studies of simulated and real data, we demonstrate that NSCSL outperforms existing algorithms and can reveal crucial yeast genes for target heritable traits of interest.
|
On Learning Necessary and Sufficient Causal Graphs
|
[
"Hengrui Cai",
"Yixin Wang",
"Michael Jordan",
"Rui Song"
] |
Conference
|
spotlight
|
2301.12389
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kJmYu3Ti2z
|
@inproceedings{
luan2023when,
title={When Do Graph Neural Networks Help with Node Classification? Investigating the Homophily Principle on Node Distinguishability},
author={Sitao Luan and Chenqing Hua and Minkai Xu and Qincheng Lu and Jiaqi Zhu and Xiao-Wen Chang and Jie Fu and Jure Leskovec and Doina Precup},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kJmYu3Ti2z}
}
|
Homophily principle, i.e., nodes with the same labels are more likely to be connected, has been believed to be the main reason for the performance superiority of Graph Neural Networks (GNNs) over Neural Networks on node classification tasks. Recent research suggests that, even in the absence of homophily, the advantage of GNNs still exists as long as nodes from the same class share similar neighborhood patterns. However, this argument only considers intra-class Node Distinguishability (ND) but neglects inter-class ND, which provides incomplete understanding of homophily on GNNs. In this paper, we first demonstrate such deficiency with examples and argue that an ideal situation for ND is to have smaller intra-class ND than inter-class ND. To formulate this idea and study ND deeply, we propose Contextual Stochastic Block Model for Homophily (CSBM-H) and define two metrics, Probabilistic Bayes Error (PBE) and negative generalized Jeffreys divergence, to quantify ND. With the metrics, we visualize and analyze how graph filters, node degree distributions and class variances influence ND, and investigate the combined effect of intra- and inter-class ND. Besides, we discovered the mid-homophily pitfall, which occurs widely in graph datasets. Furthermore, we verified that, in real-work tasks, the superiority of GNNs is indeed closely related to both intra- and inter-class ND regardless of homophily levels. Grounded in this observation, we propose a new hypothesis-testing based performance metric beyond homophily, which is non-linear, feature-based and can provide statistical threshold value for GNNs' the superiority. Experiments indicate that it is significantly more effective than the existing homophily metrics on revealing the advantage and disadvantage of graph-aware modes on both synthetic and benchmark real-world datasets.
|
When Do Graph Neural Networks Help with Node Classification? Investigating the Homophily Principle on Node Distinguishability
|
[
"Sitao Luan",
"Chenqing Hua",
"Minkai Xu",
"Qincheng Lu",
"Jiaqi Zhu",
"Xiao-Wen Chang",
"Jie Fu",
"Jure Leskovec",
"Doina Precup"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=kJIibP5bq2
|
@inproceedings{
ng2023on,
title={On the Identifiability of Sparse {ICA} without Assuming Non-Gaussianity},
author={Ignavier Ng and Yujia Zheng and Xinshuai Dong and Kun Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kJIibP5bq2}
}
|
Independent component analysis (ICA) is a fundamental statistical tool used to reveal hidden generative processes from observed data. However, traditional ICA approaches struggle with the rotational invariance inherent in Gaussian distributions, often necessitating the assumption of non-Gaussianity in the underlying sources. This may limit their applicability in broader contexts. To accommodate Gaussian sources, we develop an identifiability theory that relies on second-order statistics without imposing further preconditions on the distribution of sources, by introducing novel assumptions on the connective structure from sources to observed variables. Different from recent work that focuses on potentially restrictive connective structures, our proposed assumption of structural variability is both considerably less restrictive and provably necessary. Furthermore, we propose two estimation methods based on second-order statistics and sparsity constraint. Experimental results are provided to validate our identifiability theory and estimation methods.
|
On the Identifiability of Sparse ICA without Assuming Non-Gaussianity
|
[
"Ignavier Ng",
"Yujia Zheng",
"Xinshuai Dong",
"Kun Zhang"
] |
Conference
|
poster
|
2408.10353
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kDQwossJuI
|
@inproceedings{
le2023limits,
title={Limits, approximation and size transferability for {GNN}s on sparse graphs via graphops},
author={Thien Le and Stefanie Jegelka},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kDQwossJuI}
}
|
Can graph neural networks generalize to graphs that are different from the graphs they were trained on, e.g., in size? In this work, we study this question from a theoretical perspective. While recent work established such transferability and approximation results via graph limits, e.g., via graphons, these only apply nontrivially to dense graphs. To include frequently encountered sparse graphs such as bounded-degree or power law graphs, we take a perspective of taking limits of operators derived from graphs, such as the aggregation operation that makes up GNNs. This leads to the recently introduced limit notion of graphops (Backhausz and Szegedy, 2022). We demonstrate how the operator perspective allows us to develop quantitative bounds on the distance between a finite GNN and its limit on an infinite graph, as well as the distance between the GNN on graphs of different sizes that share structural properties, under a regularity assumption verified for various graph sequences. Our results hold for dense and sparse graphs, and various notions of graph limits.
|
Limits, approximation and size transferability for GNNs on sparse graphs via graphops
|
[
"Thien Le",
"Stefanie Jegelka"
] |
Conference
|
poster
|
2306.04495
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kChEBODIx9
|
@inproceedings{
gao2023can,
title={Can Pre-Trained Text-to-Image Models Generate Visual Goals for Reinforcement Learning?},
author={Jialu Gao and Kaizhe Hu and Guowei Xu and Huazhe Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kChEBODIx9}
}
|
Pre-trained text-to-image generative models can produce diverse, semantically rich, and realistic images from natural language descriptions. Compared with language, images usually convey information with more details and less ambiguity. In this study, we propose Learning from the Void (LfVoid), a method that leverages the power of pre-trained text-to-image models and advanced image editing techniques to guide robot learning. Given natural language instructions, LfVoid can edit the original observations to obtain goal images, such as "wiping" a stain off a table. Subsequently, LfVoid trains an ensembled goal discriminator on the generated image to provide reward signals for a reinforcement learning agent, guiding it to achieve the goal. The ability of LfVoid to learn with zero in-domain training on expert demonstrations or true goal observations (the void) is attributed to the utilization of knowledge from web-scale generative models. We evaluate LfVoid across three simulated tasks and validate its feasibility in the corresponding real-world scenarios. In addition, we offer insights into the key considerations for the effective integration of visual generative models into robot learning workflows. We posit that our work represents an initial step towards the broader application of pre-trained visual generative models in the robotics field. Our project page: https://lfvoid-rl.github.io/.
|
Can Pre-Trained Text-to-Image Models Generate Visual Goals for Reinforcement Learning?
|
[
"Jialu Gao",
"Kaizhe Hu",
"Guowei Xu",
"Huazhe Xu"
] |
Conference
|
poster
|
2307.07837
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kCCD8d2aEu
|
@inproceedings{
watson2023coherent,
title={Coherent Soft Imitation Learning},
author={Joe Watson and Sandy Huang and Nicolas Heess},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kCCD8d2aEu}
}
|
Imitation learning methods seek to learn from an expert either through behavioral cloning (BC) for the policy or inverse reinforcement learning (IRL) for the reward.
Such methods enable agents to learn complex tasks from humans that are difficult to capture with hand-designed reward functions.
Choosing between BC or IRL for imitation depends on the quality and state-action coverage of the demonstrations, as well as additional access to the Markov decision process.
Hybrid strategies that combine BC and IRL are rare, as initial policy optimization against inaccurate rewards diminishes the benefit of pretraining the policy with BC.
Our work derives an imitation method that captures the strengths of both BC and IRL.
In the entropy-regularized (`soft') reinforcement learning setting, we show that the behavioral-cloned policy can be used as both a shaped reward and a critic hypothesis space by inverting the regularized policy update.
This coherency facilitates fine-tuning cloned policies using the reward estimate and additional interactions with the environment.
This approach conveniently achieves imitation learning through initial behavioral cloning and subsequent refinement via RL with online or offline data sources.
The simplicity of the approach enables graceful scaling to high-dimensional and vision-based tasks, with stable learning and minimal hyperparameter tuning, in contrast to adversarial approaches.
For the open-source implementation and simulation results, see https://joemwatson.github.io/csil/.
|
Coherent Soft Imitation Learning
|
[
"Joe Watson",
"Sandy Huang",
"Nicolas Heess"
] |
Conference
|
spotlight
|
2305.16498
|
[
"https://github.com/google-deepmind/csil"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=kBBsj9KRgh
|
@inproceedings{
ye2023same,
title={{SAME}: Uncovering {GNN} Black Box with Structure-aware Shapley-based Multipiece Explanations},
author={Ziyuan Ye and Rihan Huang and Qilin Wu and Quanying Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kBBsj9KRgh}
}
|
Post-hoc explanation techniques on graph neural networks (GNNs) provide economical solutions for opening the black-box graph models without model retraining. Many GNN explanation variants have achieved state-of-the-art explaining results on a diverse set of benchmarks, while they rarely provide theoretical analysis for their inherent properties and explanatory capability. In this work, we propose $\underline{\text{S}}$tructure-$\underline{\text{A}}$ware Shapley-based $\underline{\text{M}}$ultipiece $\underline{\text{E}}$xplanation (SAME) method to address the structure-aware feature interactions challenges for GNNs explanation. Specifically, SAME leverages an expansion-based Monte Carlo tree search to explore the multi-grained structure-aware connected substructure. Afterward, the explanation results are encouraged to be informative of the graph properties by optimizing the combination of distinct single substructures. With the consideration of fair feature interactions in the process of investigating multiple connected important substructures, the explanation provided by SAME has the potential to be as explainable as the theoretically optimal explanation obtained by the Shapley value within polynomial time. Extensive experiments on real-world and synthetic benchmarks show that SAME improves the previous state-of-the-art fidelity performance by 12.9\% on BBBP, 7.01\% on MUTAG, 42.3\% on Graph-SST2, 38.9\% on Graph-SST5, 11.3\% on BA-2Motifs and 18.2\% on BA-Shapes under the same testing condition. Code is available at https://github.com/same2023neurips/same.
|
SAME: Uncovering GNN Black Box with Structure-aware Shapley-based Multipiece Explanations
|
[
"Ziyuan Ye",
"Rihan Huang",
"Qilin Wu",
"Quanying Liu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=kAU6Cdq1gV
|
@inproceedings{
jackson2023discovering,
title={Discovering General Reinforcement Learning Algorithms with Adversarial Environment Design},
author={Matthew Thomas Jackson and Minqi Jiang and Jack Parker-Holder and Risto Vuorio and Chris Lu and Gregory Farquhar and Shimon Whiteson and Jakob Nicolaus Foerster},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kAU6Cdq1gV}
}
|
The past decade has seen vast progress in deep reinforcement learning (RL) on the back of algorithms manually designed by human researchers. Recently, it has been shown that it is possible to meta-learn update rules, with the hope of discovering algorithms that can perform well on a wide range of RL tasks. Despite impressive initial results from algorithms such as Learned Policy Gradient (LPG), there remains a generalization gap when these algorithms are applied to unseen environments. In this work, we examine how characteristics of the meta-training distribution impact the generalization performance of these algorithms. Motivated by this analysis and building on ideas from Unsupervised Environment Design (UED), we propose a novel approach for automatically generating curricula to maximize the regret of a meta-learned optimizer, in addition to a novel approximation of regret, which we name algorithmic regret (AR). The result is our method, General RL Optimizers Obtained Via Environment Design (GROOVE). In a series of experiments, we show that GROOVE achieves superior generalization to LPG, and evaluate AR against baseline metrics from UED, identifying it as a critical component of environment design in this setting. We believe this approach is a step towards the discovery of truly general RL algorithms, capable of solving a wide range of real-world environments.
|
Discovering General Reinforcement Learning Algorithms with Adversarial Environment Design
|
[
"Matthew Thomas Jackson",
"Minqi Jiang",
"Jack Parker-Holder",
"Risto Vuorio",
"Chris Lu",
"Gregory Farquhar",
"Shimon Whiteson",
"Jakob Nicolaus Foerster"
] |
Conference
|
poster
|
2310.02782
|
[
"https://github.com/EmptyJackson/groove"
] |
https://huggingface.co/papers/2310.02782
| 1 | 0 | 0 | 8 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=k9zSU3pdi4
|
@inproceedings{
feng2023open,
title={Open Compound Domain Adaptation with Object Style Compensation for Semantic Segmentation},
author={Tingliang Feng and Hao Shi and Xueyang Liu and Wei Feng and Liang Wan and Yanlin Zhou and Di Lin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=k9zSU3pdi4}
}
|
Many methods of semantic image segmentation have borrowed the success of open compound domain adaptation. They minimize the style gap between the images of source and target domains, more easily predicting the accurate pseudo annotations for target domain's images that train segmentation network. The existing methods globally adapt the scene style of the images, whereas the object styles of different categories or instances are adapted improperly. This paper proposes the Object Style Compensation, where we construct the Object-Level Discrepancy Memory with multiple sets of discrepancy features. The discrepancy features in a set capture the style changes of the same category's object instances adapted from target to source domains. We learn the discrepancy features from the images of source and target domains, storing the discrepancy features in memory. With this memory, we select appropriate discrepancy features for compensating the style information of the object instances of various categories, adapting the object styles to a unified style of source domain. Our method enables a more accurate computation of the pseudo annotations for target domain's images, thus yielding state-of-the-art results on different datasets.
|
Open Compound Domain Adaptation with Object Style Compensation for Semantic Segmentation
|
[
"Tingliang Feng",
"Hao Shi",
"Xueyang Liu",
"Wei Feng",
"Liang Wan",
"Yanlin Zhou",
"Di Lin"
] |
Conference
|
poster
|
2309.16127
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=k8U8ZijXHh
|
@inproceedings{
ding2023pdf,
title={{PDF}: Point Diffusion Implicit Function for Large-scale Scene Neural Representation},
author={Yuhan Ding and Fukun Yin and Jiayuan Fan and Hui Li and Xin Chen and Wen Liu and Chongshan Lu and Gang YU and Tao Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=k8U8ZijXHh}
}
|
Recent advances in implicit neural representations have achieved impressive results by sampling and fusing individual points along sampling rays in the sampling space. However, due to the explosively growing sampling space, finely representing and synthesizing detailed textures remains a challenge for unbounded large-scale outdoor scenes. To alleviate the dilemma of using individual points to perceive the entire colossal space, we explore learning the surface distribution of the scene to provide structural priors and reduce the samplable space and propose a Point Diffusion implicit Function, PDF, for large-scale scene neural representation. The core of our method is a large-scale point cloud super-resolution diffusion module that enhances the sparse point cloud reconstructed from several training images into a dense point cloud as an explicit prior. Then in the rendering stage, only sampling points with prior points within the sampling radius are retained. That is, the sampling space is reduced from the unbounded space to the scene surface. Meanwhile, to fill in the background of the scene that cannot be provided by point clouds, the region sampling based on Mip-NeRF 360 is employed to model the background representation. Expensive experiments have demonstrated the effectiveness of our method for large-scale scene novel view synthesis, which outperforms relevant state-of-the-art baselines.
|
PDF: Point Diffusion Implicit Function for Large-scale Scene Neural Representation
|
[
"Yuhan Ding",
"Fukun Yin",
"Jiayuan Fan",
"Hui Li",
"Xin Chen",
"Wen Liu",
"Chongshan Lu",
"Gang YU",
"Tao Chen"
] |
Conference
|
poster
|
2311.01773
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=k6yNi6DEqK
|
@inproceedings{
hai2023ltdln,
title={L2T-{DLN}: Learning to Teach with Dynamic Loss Network},
author={Zhaoyang Hai and Liyuan Pan and Xiabi Liu and Zhengzheng Liu and Mirna Yunita},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=k6yNi6DEqK}
}
|
With the concept of teaching being introduced to the machine learning community, a teacher model start using dynamic loss functions to teach the training of a student model. The dynamic intends to set adaptive loss functions to different phases of student model learning. In existing works, the teacher model 1) merely determines the loss function based on the present states of the student model, e.g., disregards the experience of the teacher; 2) only utilizes the states of the student model, e.g., training iteration number and loss/accuracy from training/validation sets, while ignoring the states of the loss function. In this paper, we first formulate the loss adjustment as a temporal task by designing a teacher model with memory units, and, therefore, enables the student learning to be guided by the experience of the teacher model. Then, with a Dynamic Loss Network, we can additionally use the states of the loss to assist the teacher learning in enhancing the interactions between the teacher and the student model.
Extensive experiments demonstrate our approach can enhance student learning and improve the performance of various deep models on real-world tasks, including classification, objective detection, and semantic segmentation scenario.
|
L2T-DLN: Learning to Teach with Dynamic Loss Network
|
[
"Zhaoyang Hai",
"Liyuan Pan",
"Xiabi Liu",
"Zhengzheng Liu",
"Mirna Yunita"
] |
Conference
|
poster
|
2310.19313
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=k4ZCORSFEd
|
@inproceedings{
assadi2023streaming,
title={Streaming Algorithms and Lower Bounds for Estimating Correlation Clustering Cost},
author={Sepehr Assadi and Vihan Shah and Chen Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=k4ZCORSFEd}
}
|
Correlation clustering is a fundamental optimization problem at the intersection of machine learning and theoretical computer science.
Motivated by applications to big data processing, recent years have witnessed a flurry of results on this problem in the streaming model.
In this model, the algorithm needs to process the input $n$-vertex graph by making one or few passes over the stream of its edges and using a limited memory, much smaller than the input size.
All previous work on streaming correlation clustering have focused on semi-streaming algorithms with $\Omega(n)$ memory, whereas in this work, we study streaming algorithms with much smaller memory requirement of only $\text{polylog}{(n)}$ bits. This stringent memory requirement is in the same spirit of classical streaming algorithms that instead of recovering a full solution to the problem---which can be prohibitively large with such small memory as is the case in our problem---, aimed to learn certain statistical properties of their inputs. In our case, this translates to determining the ``(correlation) clusterability'' of input graphs, or more precisely, estimating the cost of the optimal correlation clustering solution.
As our main result, we present two novel algorithms that in only $\text{polylog}{(n)}$ space are able to estimate the optimal correlation clustering cost up to some constant multiplicative factor plus some extra additive error. One of the algorithms outputs a $3$-multiplicative approximation plus $o(n^2)$ additive approximation, and the other one improves the additive error further down at the cost of increasing the multiplicative factor to some large constant. We then present new lower bounds that justify this mix of both multiplicative and additive error approximation in our algorithms.
|
Streaming Algorithms and Lower Bounds for Estimating Correlation Clustering Cost
|
[
"Sepehr Assadi",
"Vihan Shah",
"Chen Wang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=k2UVKezeWn
|
@inproceedings{
linhart2023lcst,
title={L-C2{ST}: Local Diagnostics for Posterior Approximations in Simulation-Based Inference},
author={Julia Linhart and Alexandre Gramfort and Pedro L. C. Rodrigues},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=k2UVKezeWn}
}
|
Many recent works in simulation-based inference (SBI) rely on deep generative models to approximate complex, high-dimensional posterior distributions. However, evaluating whether or not these approximations can be trusted remains a challenge. Most approaches evaluate the posterior estimator only in expectation over the observation space. This limits their interpretability and is not sufficient to identify for which observations the approximation can be trusted or should be improved. Building upon the well-known classifier two-sample test (C2ST), we introduce $\ell$-C2ST, a new method that allows for a local evaluation of the posterior estimator at any given observation. It offers theoretically grounded and easy to interpret -- e.g. graphical -- diagnostics, and unlike C2ST, does not require access to samples from the true posterior. In the case of normalizing flow-based posterior estimators, $\ell$-C2ST can be specialized to offer better statistical power, while being computationally more efficient. On standard SBI benchmarks, $\ell$-C2ST provides comparable results to C2ST and outperforms alternative local approaches such as coverage tests based on highest predictive density (HPD). We further highlight the importance of local evaluation and the benefit of interpretability of $\ell$-C2ST on a challenging application from computational neuroscience.
|
L-C2ST: Local Diagnostics for Posterior Approximations in Simulation-Based Inference
|
[
"Julia Linhart",
"Alexandre Gramfort",
"Pedro L. C. Rodrigues"
] |
Conference
|
poster
|
2306.03580
|
[
"https://github.com/julialinhart/lc2st"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=k1Xy5zCNOJ
|
@inproceedings{
zhang2023lookaround,
title={Lookaround Optimizer: \$k\$ steps around, 1 step average},
author={Jiangtao Zhang and Shunyu Liu and Jie Song and Tongtian Zhu and Zhengqi Xu and Mingli Song},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=k1Xy5zCNOJ}
}
|
Weight Average (WA) is an active research topic due to its simplicity in ensembling deep networks and the effectiveness in promoting generalization. Existing weight average approaches, however, are often carried out along only one training trajectory in a post-hoc manner (i.e., the weights are averaged after the entire training process is finished), which significantly degrades the diversity between networks and thus impairs the effectiveness. In this paper, inspired by weight average, we propose Lookaround, a straightforward yet effective SGD-based optimizer leading to flatter minima with better generalization. Specifically, Lookaround iterates two steps during the whole training period: the around step and the average step. In each iteration, 1) the around step starts from a common point and trains multiple networks simultaneously, each on transformed data by a different data augmentation, and 2) the average step averages these trained networks to get the averaged network, which serves as the starting point for the next iteration. The around step improves the functionality diversity while the average step guarantees the weight locality of these networks during the whole training, which is essential for WA to work. We theoretically explain the superiority of Lookaround by convergence analysis, and make extensive experiments to evaluate Lookaround on popular benchmarks including CIFAR and ImageNet with both CNNs and ViTs, demonstrating clear superiority over state-of-the-arts. Our code is available at https://github.com/Ardcy/Lookaround.
|
Lookaround Optimizer: k steps around, 1 step average
|
[
"Jiangtao Zhang",
"Shunyu Liu",
"Jie Song",
"Tongtian Zhu",
"Zhengqi Xu",
"Mingli Song"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.