bibtex_url | proceedings | bibtext | abstract | title | authors | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | paper_page_exists_pre_conf | Models | Datasets | Spaces |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
https://openreview.net/forum?id=Z1Aj59LoZD
|
@inproceedings{
ergen2023path,
title={Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel Re{LU} Networks},
author={Tolga Ergen and Mert Pilanci},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Z1Aj59LoZD}
}
|
Understanding the fundamental principles behind the success of deep neural networks is one of the most important open questions in the current literature. To this end, we study the training problem of deep neural networks and introduce an analytic approach to unveil hidden convexity in the optimization landscape. We consider a deep parallel ReLU network architecture, which also includes standard deep networks and ResNets as its special cases. We then show that pathwise regularized training problems can be represented as an exact convex optimization problem. We further prove that the equivalent convex problem is regularized via a group sparsity inducing norm. Thus, a path regularized parallel ReLU network can be viewed as a parsimonious convex model in high dimensions. More importantly, since the original training problem may not be trainable in polynomial-time, we propose an approximate algorithm with a fully polynomial-time complexity in all data dimensions. Then, we prove strong global optimality guarantees for this algorithm. We also provide experiments corroborating our theory.
|
Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks
|
[
"Tolga Ergen",
"Mert Pilanci"
] |
Conference
|
poster
|
2110.09548
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=Z16jo3d6OD
|
@inproceedings{
xiao2023a,
title={A Unified Framework for Rank-based Loss Minimization},
author={Rufeng Xiao and Yuze Ge and Rujun Jiang and Yifan Yan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Z16jo3d6OD}
}
|
The empirical loss, commonly referred to as the average loss, is extensively utilized for training machine learning models. However, in order to address the diverse performance requirements of machine learning models, the use of the rank-based loss is prevalent, replacing the empirical loss in many cases. The rank-based loss comprises a weighted sum of sorted individual losses, encompassing both convex losses like the spectral risk, which includes the empirical risk and conditional value-at-risk, and nonconvex losses such as the human-aligned risk and the sum of the ranked range loss. In this paper, we introduce a unified framework for the optimization of the rank-based loss through the utilization of a proximal alternating direction method of multipliers. We demonstrate the convergence and convergence rate of the proposed algorithm under mild conditions. Experiments conducted on synthetic and real datasets illustrate the effectiveness and efficiency of the proposed algorithm.
|
A Unified Framework for Rank-based Loss Minimization
|
[
"Rufeng Xiao",
"Yuze Ge",
"Rujun Jiang",
"Yifan Yan"
] |
Conference
|
poster
|
2310.17237
|
[
"https://github.com/rufengxiao/admm-for-rank-based-loss"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=Yx8Sw2H5Q7
|
@inproceedings{
zikelic2023compositional,
title={Compositional Policy Learning in Stochastic Control Systems with Formal Guarantees},
author={{\DJ}or{\dj}e {\v{Z}}ikeli{\'c} and Mathias Lechner and Abhinav Verma and Krishnendu Chatterjee and Thomas A Henzinger},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Yx8Sw2H5Q7}
}
|
Reinforcement learning has shown promising results in learning neural network policies for complicated control tasks. However, the lack of formal guarantees about the behavior of such policies remains an impediment to their deployment. We propose a novel method for learning a composition of neural network policies in stochastic environments, along with a formal certificate which guarantees that a specification over the policy's behavior is satisfied with the desired probability. Unlike prior work on verifiable RL, our approach leverages the compositional nature of logical specifications provided in SpectRL, to learn over graphs of probabilistic reach-avoid specifications. The formal guarantees are provided by learning neural network policies together with reach-avoid supermartingales (RASM) for the graph’s sub-tasks and then composing them into a global policy. We also derive a tighter lower bound compared to previous work on the probability of reach-avoidance implied by a RASM, which is required to find a compositional policy with an acceptable probabilistic threshold for complex tasks with multiple edge policies. We implement a prototype of our approach and evaluate it on a Stochastic Nine Rooms environment.
|
Compositional Policy Learning in Stochastic Control Systems with Formal Guarantees
|
[
"Đorđe Žikelić",
"Mathias Lechner",
"Abhinav Verma",
"Krishnendu Chatterjee",
"Thomas A Henzinger"
] |
Conference
|
poster
|
2312.01456
|
[
"https://github.com/mlech26l/neural_martingales"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YwgA3avHrP
|
@inproceedings{
zhou2023text,
title={Text Promptable Surgical Instrument Segmentation with Vision-Language Models},
author={Zijian Zhou and Oluwatosin Alabi and Meng Wei and Tom Vercauteren and Miaojing Shi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YwgA3avHrP}
}
|
In this paper, we propose a novel text promptable surgical instrument segmentation approach to overcome challenges associated with diversity and differentiation of surgical instruments in minimally invasive surgeries. We redefine the task as text promptable, thereby enabling a more nuanced comprehension of surgical instruments and adaptability to new instrument types. Inspired by recent advancements in vision-language models, we leverage pretrained image and text encoders as our model backbone and design a text promptable mask decoder consisting of attention- and convolution-based prompting schemes for surgical instrument segmentation prediction. Our model leverages multiple text prompts for each surgical instrument through a new mixture of prompts mechanism, resulting in enhanced segmentation performance. Additionally, we introduce a hard instrument area reinforcement module to improve image feature comprehension and segmentation precision. Extensive experiments on several surgical instrument segmentation datasets demonstrate our model's superior performance and promising generalization capability. To our knowledge, this is the first implementation of a promptable approach to surgical instrument segmentation, offering significant potential for practical application in the field of robotic-assisted surgery. Code is available at https://github.com/franciszzj/TP-SIS.
|
Text Promptable Surgical Instrument Segmentation with Vision-Language Models
|
[
"Zijian Zhou",
"Oluwatosin Alabi",
"Meng Wei",
"Tom Vercauteren",
"Miaojing Shi"
] |
Conference
|
poster
|
2306.09244
|
[
""
] |
https://huggingface.co/papers/2306.09244
| 1 | 1 | 0 | 5 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=Yvpenkym8A
|
@inproceedings{
zhang2023integrationfree,
title={Integration-free Training for Spatio-temporal Multimodal Covariate Deep Kernel Point Processes},
author={YIXUAN ZHANG and Quyu Kong and Feng Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Yvpenkym8A}
}
|
In this study, we propose a novel deep spatio-temporal point process model, Deep Kernel Mixture Point Processes (DKMPP), that incorporates multimodal covariate information. DKMPP is an enhanced version of Deep Mixture Point Processes (DMPP), which uses a more flexible deep kernel to model complex relationships between events and covariate data, improving the model's expressiveness. To address the intractable training procedure of DKMPP due to the non-integrable deep kernel, we utilize an integration-free method based on score matching, and further improve efficiency by adopting a scalable denoising score matching method. Our experiments demonstrate that DKMPP and its corresponding score-based estimators outperform baseline models, showcasing the advantages of incorporating covariate information, utilizing a deep kernel, and employing score-based estimators.
|
Integration-free Training for Spatio-temporal Multimodal Covariate Deep Kernel Point Processes
|
[
"YIXUAN ZHANG",
"Quyu Kong",
"Feng Zhou"
] |
Conference
|
poster
|
2310.05485
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YvO5yTVv5Y
|
@inproceedings{
zhang2023online,
title={Online Map Vectorization for Autonomous Driving: A Rasterization Perspective},
author={Gongjie Zhang and Jiahao Lin and Shuang Wu and Yilin Song and Zhipeng Luo and Yang Xue and Shijian Lu and Zuoguan Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YvO5yTVv5Y}
}
|
High-definition (HD) vectorized map is essential for autonomous driving, providing detailed and precise environmental information for advanced perception and planning. However, current map vectorization methods often exhibit deviations, and the existing evaluation metric for map vectorization lacks sufficient sensitivity to detect these deviations. To address these limitations, we propose integrating the philosophy of rasterization into map vectorization. Specifically, we introduce a new rasterization-based evaluation metric, which has superior sensitivity and is better suited to real-world autonomous driving scenarios. Furthermore, we propose MapVR (Map Vectorization via Rasterization), a novel framework that applies differentiable rasterization to vectorized outputs and then performs precise and geometry-aware supervision on rasterized HD maps. Notably, MapVR designs tailored rasterization strategies for various geometric shapes, enabling effective adaptation to a wide range of map elements. Experiments show that incorporating rasterization into map vectorization greatly enhances performance with no extra computational cost during inference, leading to more accurate map perception and ultimately promoting safer autonomous driving. Codes are available at https://github.com/ZhangGongjie/MapVR. A standalone map vectorization evaluation toolkit is available at https://github.com/jiahaoLjh/MapVectorizationEvalToolkit.
|
Online Map Vectorization for Autonomous Driving: A Rasterization Perspective
|
[
"Gongjie Zhang",
"Jiahao Lin",
"Shuang Wu",
"Yilin Song",
"Zhipeng Luo",
"Yang Xue",
"Shijian Lu",
"Zuoguan Wang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=YsZTDcIQwQ
|
@inproceedings{
lin2023diversifying,
title={Diversifying Spatial-Temporal Perception for Video Domain Generalization},
author={Kun-Yu Lin and Jia-Run Du and Yipeng Gao and Jiaming Zhou and Wei-Shi Zheng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YsZTDcIQwQ}
}
|
Video domain generalization aims to learn generalizable video classification models for unseen target domains by training in a source domain.
A critical challenge of video domain generalization is to defend against the heavy reliance on domain-specific cues extracted from the source domain when recognizing target videos. To this end, we propose to perceive diverse spatial-temporal cues in videos, aiming to discover potential domain-invariant cues in addition to domain-specific cues. We contribute a novel model named Spatial-Temporal Diversification Network (STDN), which improves the diversity from both space and time dimensions of video data. First, our STDN proposes to discover various types of spatial cues within individual frames by spatial grouping. Then, our STDN proposes to explicitly model spatial-temporal dependencies between video contents at multiple space-time scales by spatial-temporal relation modeling. Extensive experiments on three benchmarks of different types demonstrate the effectiveness and versatility of our approach.
|
Diversifying Spatial-Temporal Perception for Video Domain Generalization
|
[
"Kun-Yu Lin",
"Jia-Run Du",
"Yipeng Gao",
"Jiaming Zhou",
"Wei-Shi Zheng"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=YsYKv95jy9
|
@inproceedings{
yu2023deep,
title={Deep Fractional Fourier Transform},
author={Hu Yu and Jie Huang and Lingzhi Li and Man Zhou and Feng Zhao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YsYKv95jy9}
}
|
Existing deep learning-based computer vision methods usually operate in the spatial and frequency domains, which are two orthogonal \textbf{individual} perspectives for image processing.
In this paper, we introduce a new spatial-frequency analysis tool, Fractional Fourier Transform (FRFT), to provide comprehensive \textbf{unified} spatial-frequency perspectives.
The FRFT is a unified continuous spatial-frequency transform that simultaneously reflects an image's spatial and frequency representations, making it optimal for processing non-stationary image signals.
We explore the properties of the FRFT for image processing and present a fast implementation of the 2D FRFT, which facilitates its widespread use.
Based on these explorations, we introduce a simple yet effective operator, Multi-order FRactional Fourier Convolution (MFRFC), which exhibits the remarkable merits of processing images from more perspectives in the spatial-frequency plane. Our proposed MFRFC is a general and basic operator that can be easily integrated into various tasks for performance improvement.
We experimentally evaluate the MFRFC on various computer vision tasks, including object detection, image classification, guided super-resolution, denoising, dehazing, deraining, and low-light enhancement. Our proposed MFRFC consistently outperforms baseline methods by significant margins across all tasks.
|
Deep Fractional Fourier Transform
|
[
"Hu Yu",
"Jie Huang",
"Lingzhi Li",
"Man Zhou",
"Feng Zhao"
] |
Conference
|
spotlight
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Yq6GKgN3RC
|
@inproceedings{
crawshaw2023federated,
title={Federated Learning with Client Subsampling, Data Heterogeneity, and Unbounded Smoothness: A New Algorithm and Lower Bounds},
author={Michael Crawshaw and Yajie Bao and Mingrui Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Yq6GKgN3RC}
}
|
We study the problem of Federated Learning (FL) under client subsampling and data heterogeneity with an objective function that has potentially unbounded smoothness. This problem is motivated by empirical evidence that the class of relaxed smooth functions, where the Lipschitz constant of the gradient scales linearly with the gradient norm, closely resembles the loss functions of certain neural networks such as recurrent neural networks (RNNs) with possibly exploding gradient. We introduce EPISODE++, the first algorithm to solve this problem. It maintains historical statistics for each client to construct control variates and decide clipping behavior for sampled clients in the current round. We prove that EPISODE++ achieves linear speedup in the number of participating clients, reduced communication rounds, and resilience to data heterogeneity. Our upper bound proof relies on novel techniques of recursively bounding the client updates under unbounded smoothness and client subsampling, together with a refined high probability analysis. In addition, we prove a lower bound showing that the convergence rate of a special case of clipped minibatch SGD (without randomness in the stochastic gradient and with randomness in client subsampling) suffers from an explicit dependence on the maximum gradient norm of the objective in a sublevel set, which may be large. This effectively demonstrates that applying gradient clipping to minibatch SGD in our setting does not eliminate the problem of exploding gradients. Our lower bound is based on new constructions of hard instances tailored to client subsampling and a novel analysis of the trajectory of the algorithm in the presence of clipping. Lastly, we provide an experimental evaluation of EPISODE++ when training RNNs on federated text classification tasks, demonstrating that EPISODE++ outperforms strong baselines in FL. The code is available at https://github.com/MingruiLiu-ML-Lab/episode_plusplus.
|
Federated Learning with Client Subsampling, Data Heterogeneity, and Unbounded Smoothness: A New Algorithm and Lower Bounds
|
[
"Michael Crawshaw",
"Yajie Bao",
"Mingrui Liu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Ypbke6biDm
|
@inproceedings{
edelman2023pareto,
title={Pareto Frontiers in Deep Feature Learning: Data, Compute, Width, and Luck},
author={Benjamin L. Edelman and Surbhi Goel and Sham M. Kakade and eran malach and Cyril Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Ypbke6biDm}
}
|
In modern deep learning, algorithmic choices (such as width, depth, and learning rate) are known to modulate nuanced resource tradeoffs. This work investigates how these complexities necessarily arise for feature learning in the presence of computational-statistical gaps. We begin by considering offline sparse parity learning, a supervised classification problem which admits a statistical query lower bound for gradient-based training of a multilayer perceptron. This lower bound can be interpreted as a *multi-resource tradeoff frontier*:
successful learning can only occur if one is sufficiently rich (large model), knowledgeable (large dataset), patient (many training iterations), or lucky (many random guesses). We show, theoretically and experimentally, that sparse initialization and increasing network width yield significant improvements in sample efficiency in this setting. Here, width plays the role of parallel search: it amplifies the probability of finding "lottery ticket" neurons, which learn sparse features more sample-efficiently. Finally, we show that the synthetic sparse parity task can be useful as a proxy for real problems requiring axis-aligned feature learning. We demonstrate improved sample efficiency on tabular classification benchmarks by using wide, sparsely-initialized MLP models; these networks sometimes outperform tuned random forests.
|
Pareto Frontiers in Deep Feature Learning: Data, Compute, Width, and Luck
|
[
"Benjamin L. Edelman",
"Surbhi Goel",
"Sham M. Kakade",
"eran malach",
"Cyril Zhang"
] |
Conference
|
spotlight
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=YoghyvSG0H
|
@inproceedings{
ho2023diffusionssd,
title={Diffusion-{SS}3D: Diffusion Model for Semi-supervised 3D Object Detection},
author={Cheng-Ju Ho and Chen-Hsuan Tai and Yen-Yu Lin and Ming-Hsuan Yang and Yi-Hsuan Tsai},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YoghyvSG0H}
}
|
Semi-supervised object detection is crucial for 3D scene understanding, efficiently addressing the limitation of acquiring large-scale 3D bounding box annotations. Existing methods typically employ a teacher-student framework with pseudo-labeling to leverage unlabeled point clouds. However, producing reliable pseudo-labels in a diverse 3D space still remains challenging. In this work, we propose Diffusion-SS3D, a new perspective of enhancing the quality of pseudo-labels via the diffusion model for semi-supervised 3D object detection. Specifically, we include noises to produce corrupted 3D object size and class label distributions, and then utilize the diffusion model as a denoising process to obtain bounding box outputs. Moreover, we integrate the diffusion model into the teacher-student framework, so that the denoised bounding boxes can be used to improve pseudo-label generation, as well as the entire semi-supervised learning process. We conduct experiments on the ScanNet and SUN RGB-D benchmark datasets to demonstrate that our approach achieves state-of-the-art performance against existing methods. We also present extensive analysis to understand how our diffusion model design affects performance in semi-supervised learning. The source code will be available at https://github.com/luluho1208/Diffusion-SS3D.
|
Diffusion-SS3D: Diffusion Model for Semi-supervised 3D Object Detection
|
[
"Cheng-Ju Ho",
"Chen-Hsuan Tai",
"Yen-Yu Lin",
"Ming-Hsuan Yang",
"Yi-Hsuan Tsai"
] |
Conference
|
poster
|
2312.02966
|
[
"https://github.com/luluho1208/diffusion-ss3d"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YmEDnMynuO
|
@inproceedings{
li2023graphadapter,
title={GraphAdapter: Tuning Vision-Language Models With Dual Knowledge Graph},
author={Xin Li and Dongze Lian and Zhihe Lu and Jiawang Bai and Zhibo Chen and Xinchao Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YmEDnMynuO}
}
|
Adapter-style efficient transfer learning (ETL) has shown excellent performance in the tuning of vision-language models (VLMs) under the low-data regime, where only a few additional parameters are introduced to excavate the task-specific knowledge based on the general and powerful representation of VLMs. However, most adapter-style works face two limitations: (i) modeling task-specific knowledge with a single modality only; and (ii) overlooking the exploitation of the inter-class relationships in downstream tasks, thereby leading to sub-optimal solutions. To mitigate that, we propose an effective adapter-style tuning strategy, dubbed GraphAdapter, which performs the textual adapter by explicitly modeling the dual-modality structure knowledge (i.e., the correlation of different semantics/classes in textual and visual modalities) with a dual knowledge graph. In particular, the dual knowledge graph is established with two sub-graphs, i.e., a textual knowledge sub-graph, and a visual knowledge sub-graph, where the nodes and edges represent the semantics/classes and their correlations in two modalities, respectively. This enables the textual feature of each prompt to leverage the task-specific structure knowledge from both textual and visual modalities, yielding a more effective classifier for downstream tasks. Extensive experimental results on 11 benchmark datasets reveal that our GraphAdapter significantly outperforms the previous adapter-based methods.
|
GraphAdapter: Tuning Vision-Language Models With Dual Knowledge Graph
|
[
"Xin Li",
"Dongze Lian",
"Zhihe Lu",
"Jiawang Bai",
"Zhibo Chen",
"Xinchao Wang"
] |
Conference
|
poster
|
2309.13625
|
[
"https://github.com/lixinustc/graphadapter"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YkBDJWerKg
|
@inproceedings{
lifshitz2023steve,
title={{STEVE}-1: A Generative Model for Text-to-Behavior in Minecraft},
author={Shalev Lifshitz and Keiran Paster and Harris Chan and Jimmy Ba and Sheila A. McIlraith},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YkBDJWerKg}
}
|
Constructing AI models that respond to text instructions is challenging, especially for sequential decision-making tasks. This work introduces a methodology, inspired by unCLIP, for instruction-tuning generative models of behavior without relying on a large dataset of instruction-labeled trajectories. Using this methodology, we create an instruction-tuned Video Pretraining (VPT) model called STEVE-1, which can follow short-horizon open-ended text and visual instructions in Minecraft. STEVE-1 is trained in two steps: adapting the pretrained VPT model to follow commands in MineCLIP's latent space, then training a prior to predict latent codes from text. This allows us to finetune VPT through self-supervised behavioral cloning and hindsight relabeling, reducing the need for costly human text annotations, and all for only $60 of compute. By leveraging pretrained models like VPT and MineCLIP and employing best practices from text-conditioned image generation, STEVE-1 sets a new bar for open-ended instruction following in Minecraft with low-level controls (mouse and keyboard) and raw pixel inputs, far outperforming previous baselines and robustly completing 12 of 13 tasks in our early-game evaluation suite. We provide experimental evidence highlighting key factors for downstream performance, including pretraining, classifier-free guidance, and data scaling. All resources, including our model weights, training scripts, and evaluation tools are made available for further research.
|
STEVE-1: A Generative Model for Text-to-Behavior in Minecraft
|
[
"Shalev Lifshitz",
"Keiran Paster",
"Harris Chan",
"Jimmy Ba",
"Sheila A. McIlraith"
] |
Conference
|
spotlight
|
2306.00937
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=Yj3lFEyfnl
|
@inproceedings{
kwak2023boosting,
title={Boosting Learning for {LDPC} Codes to Improve the Error-Floor Performance},
author={Hee-Youl Kwak and Dae-Young Yun and Yongjune Kim and Sang-Hyo Kim and Jong-Seon No},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Yj3lFEyfnl}
}
|
Low-density parity-check (LDPC) codes have been successfully commercialized in communication systems due to their strong error correction capabilities and simple decoding process. However, the error-floor phenomenon of LDPC codes, in which the error rate stops decreasing rapidly at a certain level, presents challenges for achieving extremely low error rates and deploying LDPC codes in scenarios demanding ultra-high reliability. In this work, we propose training methods for neural min-sum (NMS) decoders to eliminate the error-floor effect. First, by leveraging the boosting learning technique of ensemble networks, we divide the decoding network into two neural decoders and train the post decoder to be specialized for uncorrected words that the first decoder fails to correct. Secondly, to address the vanishing gradient issue in training, we introduce a block-wise training schedule that locally trains a block of weights while retraining the preceding block. Lastly, we show that assigning different weights to unsatisfied check nodes effectively lowers the error-floor with a minimal number of weights. By applying these training methods to standard LDPC codes, we achieve the best error-floor performance compared to other decoding methods. The proposed NMS decoder, optimized solely through novel training methods without additional modules, can be integrated into existing LDPC decoders without incurring extra hardware costs. The source code is available at https://github.com/ghy1228/LDPC_Error_Floor.
|
Boosting Learning for LDPC Codes to Improve the Error-Floor Performance
|
[
"Hee-Youl Kwak",
"Dae-Young Yun",
"Yongjune Kim",
"Sang-Hyo Kim",
"Jong-Seon No"
] |
Conference
|
poster
|
2310.07194
|
[
"https://github.com/ghy1228/ldpc_error_floor"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YiwMpyMdPX
|
@inproceedings{
fan2023evaluating,
title={Evaluating Neuron Interpretation Methods of {NLP} Models},
author={Yimin Fan and Fahim Dalvi and Nadir Durrani and Hassan Sajjad},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YiwMpyMdPX}
}
|
Neuron interpretation offers valuable insights into how knowledge is structured within a deep neural network model. While a number of neuron interpretation methods have been proposed in the literature, the field lacks a comprehensive comparison among these methods. This gap hampers progress due to the absence of standardized metrics and benchmarks. The commonly used evaluation metric has limitations, and creating ground truth annotations for neurons is impractical. Addressing these challenges, we propose an evaluation framework based on voting theory. Our hypothesis posits that neurons consistently identified by different methods carry more significant information. We rigorously assess our framework across a diverse array of neuron interpretation methods. Notable findings include: i) despite the theoretical differences among the methods, neuron ranking methods share over 60% of their rankings when identifying salient neurons, ii) the neuron interpretation methods are most sensitive to the last layer representations, iii) Probeless neuron ranking emerges as the most consistent method.
|
Evaluating Neuron Interpretation Methods of NLP Models
|
[
"Yimin Fan",
"Fahim Dalvi",
"Nadir Durrani",
"Hassan Sajjad"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=YifKp5b15e
|
@inproceedings{
diakonikolas2023nearoptimal,
title={Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise},
author={Ilias Diakonikolas and Jelena Diakonikolas and Daniel Kane and Puqian Wang and Nikos Zarifis},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YifKp5b15e}
}
|
We study the problem of learning general (i.e., not necessarily homogeneous) halfspaces with Random Classification Noise under the Gaussian distribution. We establish nearly-matching algorithmic and Statistical Query (SQ) lower bound results, revealing a surprising information-computation gap for this basic problem. Specifically, the sample complexity of this learning problem is $\widetilde{\Theta}(d/\epsilon)$, where $d$ is the dimension and $\epsilon$ is the excess error. Our positive result is a computationally efficient learning algorithm with sample complexity $\tilde{O}(d/\epsilon + d/(\max(p, \epsilon))^2)$, where $p$ quantifies the bias of the target halfspace. On the lower bound side, we show that any efficient SQ algorithm (or low-degree test) for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max(p, \epsilon))^2)$. Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
|
Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise
|
[
"Ilias Diakonikolas",
"Jelena Diakonikolas",
"Daniel Kane",
"Puqian Wang",
"Nikos Zarifis"
] |
Conference
|
poster
|
2307.08438
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YiRX7nQ77Q
|
@inproceedings{
kassraie2023anytime,
title={Anytime Model Selection in Linear Bandits},
author={Parnian Kassraie and Nicolas Emmenegger and Andreas Krause and Aldo Pacchiano},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YiRX7nQ77Q}
}
|
Model selection in the context of bandit optimization is a challenging problem, as it requires balancing exploration and exploitation not only for action selection, but also for model selection. One natural approach is to rely on online learning algorithms that treat different models as experts. Existing methods, however, scale poorly ($\mathrm{poly}M$) with the number of models $M$ in terms of their regret.
Our key insight is that, for model selection in linear bandits, we can emulate full-information feedback to the online learner with a favorable bias-variance trade-off. This allows us to develop ALEXP, which has an exponentially improved ($\log M$) dependence on $M$ for its regret.
ALEXP has anytime guarantees on its regret, and neither requires knowledge of the horizon $n$, nor relies on an initial purely exploratory stage.
Our approach utilizes a novel time-uniform analysis of the Lasso, establishing a new connection between online learning and high-dimensional statistics.
|
Anytime Model Selection in Linear Bandits
|
[
"Parnian Kassraie",
"Nicolas Emmenegger",
"Andreas Krause",
"Aldo Pacchiano"
] |
Conference
|
poster
|
2307.12897
|
[
"https://github.com/lasgroup/alexp"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YhAZqWhOnS
|
@inproceedings{
ntavelis2023autodecoding,
title={Autodecoding Latent 3D Diffusion Models},
author={Evangelos Ntavelis and Aliaksandr Siarohin and Kyle Olszewski and Chaoyang Wang and Luc Van Gool and Sergey Tulyakov},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YhAZqWhOnS}
}
|
Diffusion-based methods have shown impressive visual results in the text-to-image domain. They first learn a latent space using an autoencoder, then run a denoising process on the bottleneck to generate new samples. However, learning an autoencoder requires substantial data in the target domain. Such data is scarce for 3D generation, prohibiting the learning of large-scale diffusion models for 3D synthesis. We present a novel approach to the generation of static and articulated 3D assets that has a 3D autodecoder at its core. The 3D autodecoder framework embeds properties learned from the target dataset in the latent space, which can then be decoded into a volumetric representation for rendering view-consistent appearance and geometry. We then identify the appropriate intermediate volumetric latent space, and introduce robust normalization and de-normalization operations to learn a 3D diffusion from 2D images or monocular videos of rigid or articulated objects. Our approach is flexible enough to use either existing camera supervision or no camera information at all -- instead efficiently learning it during training. Our evaluations demonstrate that our generation results outperform state-of-the-art alternatives on various benchmark datasets and metrics, including multi-view image datasets of synthetic objects, real in-the-wild videos of moving people, and a large-scale, real video dataset of static objects.
|
Autodecoding Latent 3D Diffusion Models
|
[
"Evangelos Ntavelis",
"Aliaksandr Siarohin",
"Kyle Olszewski",
"Chaoyang Wang",
"Luc Van Gool",
"Sergey Tulyakov"
] |
Conference
|
poster
|
2307.05445
|
[
"https://github.com/snap-research/3dvader"
] |
https://huggingface.co/papers/2307.05445
| 3 | 13 | 0 | 6 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=YeP8osxOht
|
@inproceedings{
banihashem2023bandit,
title={Bandit Social Learning under Myopic Behavior},
author={Kiarash Banihashem and MohammadTaghi Hajiaghayi and Suho Shin and Aleksandrs Slivkins},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YeP8osxOht}
}
|
We study social learning dynamics motivated by reviews on online platforms. The
agents collectively follow a simple multi-armed bandit protocol, but each agent
acts myopically, without regard to exploration. We allow a wide range of myopic
behaviors that are consistent with (parameterized) confidence intervals for the arms’
expected rewards. We derive stark exploration failures for any such behavior, and
provide matching positive results. As a special case, we obtain the first general
results on failure of the greedy algorithm in bandits, thus providing a theoretical
foundation for why bandit algorithms should explore.
|
Bandit Social Learning under Myopic Behavior
|
[
"Kiarash Banihashem",
"MohammadTaghi Hajiaghayi",
"Suho Shin",
"Aleksandrs Slivkins"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Ydxnan4P2G
|
@inproceedings{
dukler2023your,
title={Your representations are in the network: composable and parallel adaptation for large scale models},
author={Yonatan Dukler and Alessandro Achille and Hao Yang and Varsha Vivek and Luca Zancato and Benjamin Bowman and Avinash Ravichandran and Charless Fowlkes and Ashwin Swaminathan and Stefano Soatto},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Ydxnan4P2G}
}
|
We present a framework for transfer learning that efficiently adapts a large base-model by learning lightweight cross-attention modules attached to its intermediate activations.
We name our approach InCA (Introspective-Cross-Attention) and show that it can efficiently survey a network’s representations and identify strong-performing adapter models for a downstream task.
During training, InCA enables training numerous adapters efficiently and in parallel, isolated from the frozen base model. On the ViT-L/16 architecture, our experiments show that a single adapter, 1.3% of the full model, is able to reach full fine-tuning accuracy on average across 11 challenging downstream classification tasks.
Compared with other forms of parameter-efficient adaptation, the isolated nature of the InCA adaptation is computationally desirable for large-scale models. For instance, we adapt ViT-G/14 (1.8B+ parameters) quickly with 20+ adapters in parallel on a single V100 GPU (76% GPU memory reduction) and exhaustively identify its most useful representations.
We further demonstrate how the adapters learned by InCA can be incrementally modified or combined for flexible learning scenarios and our approach achieves state of the art performance on the ImageNet-to-Sketch multi-task benchmark.
|
Your representations are in the network: composable and parallel adaptation for large scale models
|
[
"Yonatan Dukler",
"Alessandro Achille",
"Hao Yang",
"Varsha Vivek",
"Luca Zancato",
"Benjamin Bowman",
"Avinash Ravichandran",
"Charless Fowlkes",
"Ashwin Swaminathan",
"Stefano Soatto"
] |
Conference
|
poster
|
2303.04105
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YdfcKb4Wif
|
@inproceedings{
fu2023learning,
title={Learning Trajectories are Generalization Indicators},
author={Jingwen Fu and Zhizheng Zhang and Dacheng Yin and Yan Lu and Nanning Zheng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YdfcKb4Wif}
}
|
This paper explores the connection between learning trajectories of Deep Neural Networks (DNNs) and their generalization capabilities when optimized using (stochastic) gradient descent algorithms.
Instead of concentrating solely on the generalization error of the DNN post-training, we present a novel perspective for analyzing generalization error by investigating the contribution of each update step to the change in generalization error. This perspective enables a more direct comprehension of how the learning trajectory influences generalization error. Building upon this analysis, we propose a new generalization bound that incorporates more extensive trajectory information.
Our proposed generalization bound depends on the complexity of the learning trajectory and the ratio between the bias and diversity of the training set. Experimental observations reveal that our method effectively captures the generalization error throughout the training process. Furthermore, our approach can also track changes in generalization error when adjustments are made to learning rates and label noise levels. These results demonstrate that learning trajectory information is a valuable indicator of a model's generalization capabilities.
|
Learning Trajectories are Generalization Indicators
|
[
"Jingwen Fu",
"Zhizheng Zhang",
"Dacheng Yin",
"Yan Lu",
"Nanning Zheng"
] |
Conference
|
poster
|
2304.12579
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YcmGuwdLoU
|
@inproceedings{
zhang2023realtime,
title={Real-Time Motion Prediction via Heterogeneous Polyline Transformer with Relative Pose Encoding},
author={Zhejun Zhang and Alexander Liniger and Christos Sakaridis and Fisher Yu and Luc Van Gool},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YcmGuwdLoU}
}
|
The real-world deployment of an autonomous driving system requires its components to run on-board and in real-time, including the motion prediction module that predicts the future trajectories of surrounding traffic participants. Existing agent-centric methods have demonstrated outstanding performance on public benchmarks. However, they suffer from high computational overhead and poor scalability as the number of agents to be predicted increases. To address this problem, we introduce the K-nearest neighbor attention with relative pose encoding (KNARPE), a novel attention mechanism allowing the pairwise-relative representation to be used by Transformers. Then, based on KNARPE we present the Heterogeneous Polyline Transformer with Relative pose encoding (HPTR), a hierarchical framework enabling asynchronous token updates during online inference. By sharing contexts among agents and reusing the unchanged contexts, our approach is as efficient as scene-centric methods, while performing on par with state-of-the-art agent-centric methods. Experiments on Waymo and Argoverse-2 datasets show that HPTR achieves superior performance among end-to-end methods that do not apply expensive post-processing or model ensembling. The code is available at https://github.com/zhejz/HPTR.
|
Real-Time Motion Prediction via Heterogeneous Polyline Transformer with Relative Pose Encoding
|
[
"Zhejun Zhang",
"Alexander Liniger",
"Christos Sakaridis",
"Fisher Yu",
"Luc Van Gool"
] |
Conference
|
poster
|
2310.12970
|
[
"https://github.com/zhejz/hptr"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=Yc9bqbnrbs
|
@inproceedings{
zhang2023fast,
title={Fast and Regret Optimal Best Arm Identification: Fundamental Limits and Low-Complexity Algorithms},
author={Qining Zhang and Lei Ying},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Yc9bqbnrbs}
}
|
This paper considers a stochastic Multi-Armed Bandit (MAB) problem with dual objectives: (i) quick identification and commitment to the optimal arm, and (ii) reward maximization throughout a sequence of $T$ consecutive rounds. Though each objective has been individually well-studied, i.e., best arm identification for (i) and regret minimization for (ii), the simultaneous realization of both objectives remains an open problem, despite its practical importance. This paper introduces \emph{Regret Optimal Best Arm Identification} (ROBAI) which aims to achieve these dual objectives. To solve ROBAI with both pre-determined stopping time and adaptive stopping time requirements, we present an algorithm called EOCP and its variants respectively, which not only achieve asymptotically optimal regret in both Gaussian and general bandits, but also commit to the optimal arm in $\mathcal{O}(\log T)$ rounds with pre-determined stopping time and $\mathcal{O}(\log^2 T)$ rounds with adaptive stopping time. We further characterize lower bounds on the commitment time (equivalent to the sample complexity) of ROBAI, showing that EOCP and its variants are sample optimal with pre-determined stopping time, and almost sample optimal with adaptive stopping time. Numerical results confirm our theoretical analysis and reveal an interesting ``over-exploration'' phenomenon exhibited by classic UCB algorithms, such that EOCP has smaller regret even though it stops exploration much earlier than UCB, i.e., $\mathcal{O}(\log T)$ versus $\mathcal{O}(T)$, which suggests over-exploration is unnecessary and potentially harmful to system performance.
|
Fast and Regret Optimal Best Arm Identification: Fundamental Limits and Low-Complexity Algorithms
|
[
"Qining Zhang",
"Lei Ying"
] |
Conference
|
poster
|
2309.00591
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YbYQ0JEQ80
|
@inproceedings{
qin2023bimatting,
title={BiMatting: Efficient Video Matting via Binarization},
author={Haotong Qin and Lei Ke and Xudong Ma and Martin Danelljan and Yu-Wing Tai and Chi-Keung Tang and Xianglong Liu and Fisher Yu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YbYQ0JEQ80}
}
|
Real-time video matting on edge devices faces significant computational resource constraints, limiting the widespread use of video matting in applications such as online conferences and short-form video production. Binarization is a powerful compression approach that greatly reduces computation and memory consumption by using 1-bit parameters and bitwise operations. However, binarization of the video matting model is not a straightforward process, and our empirical analysis has revealed two primary bottlenecks: severe representation degradation of the encoder and massive redundant computations of the decoder. To address these issues, we propose BiMatting, an accurate and efficient video matting model using binarization. Specifically, we construct shrinkable and dense topologies of the binarized encoder block to enhance the extracted representation. We sparsify the binarized units to reduce the low-information decoding computation. Through extensive experiments, we demonstrate that BiMatting outperforms other binarized video matting models, including state-of-the-art (SOTA) binarization methods, by a significant margin. Our approach even performs comparably to the full-precision counterpart in visual quality. Furthermore, BiMatting achieves remarkable savings of 12.4$\times$ and 21.6$\times$ in computation and storage, respectively, showcasing its potential and advantages in real-world resource-constrained scenarios. Our code and models are released at https://github.com/htqin/BiMatting .
|
BiMatting: Efficient Video Matting via Binarization
|
[
"Haotong Qin",
"Lei Ke",
"Xudong Ma",
"Martin Danelljan",
"Yu-Wing Tai",
"Chi-Keung Tang",
"Xianglong Liu",
"Fisher Yu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Yacmpz84TH
|
@inproceedings{
schick2023toolformer,
title={Toolformer: Language Models Can Teach Themselves to Use Tools},
author={Timo Schick and Jane Dwivedi-Yu and Roberto Dessi and Roberta Raileanu and Maria Lomeli and Eric Hambro and Luke Zettlemoyer and Nicola Cancedda and Thomas Scialom},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Yacmpz84TH}
}
|
Language models (LMs) exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller specialized models excel. In this paper, we show that LMs can teach themselves to *use external tools* via simple APIs and achieve the best of both worlds. We introduce *Toolformer*, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. This is done in a self-supervised way, requiring nothing more than a handful of demonstrations for each API. We incorporate a range of tools, including a calculator, a Q&A system, a search engine, a translation system, and a calendar. Toolformer achieves substantially improved zero-shot performance across a variety of downstream tasks, often competitive with much larger models, without sacrificing its core language modeling abilities.
|
Toolformer: Language Models Can Teach Themselves to Use Tools
|
[
"Timo Schick",
"Jane Dwivedi-Yu",
"Roberto Dessi",
"Roberta Raileanu",
"Maria Lomeli",
"Eric Hambro",
"Luke Zettlemoyer",
"Nicola Cancedda",
"Thomas Scialom"
] |
Conference
|
oral
|
2302.04761
|
[
""
] |
https://huggingface.co/papers/2302.04761
| 3 | 11 | 3 | 8 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=YZSLDEE0mw
|
@inproceedings{
sun2023contrast,
title={Contrast, Attend and Diffuse to Decode High-Resolution Images from Brain Activities},
author={Jingyuan Sun and Mingxiao Li and Yunhao Zhang and Marie-Francine Moens and Zijiao Chen and Shaonan Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YZSLDEE0mw}
}
|
Decoding visual stimuli from neural responses recorded by functional Magnetic Resonance Imaging (fMRI) presents an intriguing intersection between cognitive neuroscience and machine learning, promising advancements in understanding human visual perception. However, the task is challenging due to the noisy nature of fMRI signals and the intricate pattern of brain visual representations. To mitigate these challenges, we introduce a two-phase fMRI representation learning framework. The first phase pre-trains an fMRI feature learner with a proposed Double-contrastive Mask Auto-encoder to learn denoised representations. The second phase tunes the feature learner to attend to neural activation patterns most informative for visual reconstruction with guidance from an image auto-encoder. The optimized fMRI feature learner then conditions a latent diffusion model to reconstruct image stimuli from brain activities. Experimental results demonstrate our model's superiority in generating high-resolution and semantically accurate images, substantially exceeding previous state-of-the-art methods by 39.34% in the 50-way-top-1 semantic classification accuracy. The code implementation is available at https://github.com/soinx0629/vis_dec_neurips/.
|
Contrast, Attend and Diffuse to Decode High-Resolution Images from Brain Activities
|
[
"Jingyuan Sun",
"Mingxiao Li",
"Zijiao Chen",
"Yunhao Zhang",
"Shaonan Wang",
"Marie-Francine Moens"
] |
Conference
|
poster
|
2305.17214
|
[
"https://github.com/soinx0629/vis_dec_neurips"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YZGWhs1H7F
|
@inproceedings{
li2023gan,
title={{GAN} You See Me? Enhanced Data Reconstruction Attacks against Split Inference},
author={Ziang Li and Mengda Yang and Yaxin Liu and Juan Wang and Hongxin Hu and Wenzhe Yi and Xiaoyang Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YZGWhs1H7F}
}
|
Split Inference (SI) is an emerging deep learning paradigm that addresses computational constraints on edge devices and preserves data privacy through collaborative edge-cloud approaches. However, SI is vulnerable to Data Reconstruction Attacks (DRA), which aim to reconstruct users' private prediction instances. Existing attack methods suffer from various limitations. Optimization-based DRAs do not leverage public data effectively, while Learning-based DRAs depend heavily on auxiliary data quantity and distribution similarity. Consequently, these approaches yield unsatisfactory attack results and are sensitive to defense mechanisms. To overcome these challenges, we propose a GAN-based LAtent Space Search attack (GLASS) that harnesses abundant prior knowledge from public data using advanced StyleGAN technologies. Additionally, we introduce GLASS++ to enhance reconstruction stability. Our approach represents the first GAN-based DRA against SI, and extensive evaluation across different split points and adversary setups demonstrates its state-of-the-art performance. Moreover, we thoroughly examine seven defense mechanisms, highlighting our method's capability to reveal private information even in the presence of these defenses.
|
GAN You See Me? Enhanced Data Reconstruction Attacks against Split Inference
|
[
"Ziang Li",
"Mengda Yang",
"Yaxin Liu",
"Juan Wang",
"Hongxin Hu",
"Wenzhe Yi",
"Xiaoyang Xu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=YZ7ip645Ra
|
@inproceedings{
mao2023structured,
title={Structured Prediction with Stronger Consistency Guarantees},
author={Anqi Mao and Mehryar Mohri and Yutao Zhong},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YZ7ip645Ra}
}
|
We present an extensive study of surrogate losses for structured prediction supported by *$H$-consistency bounds*. These are recently introduced guarantees that are more relevant to learning than Bayes-consistency, since they are not asymptotic and since they take into account the hypothesis set $H$ used. We first show that no non-trivial $H$-consistency bound can be derived for widely used surrogate structured prediction losses. We then define several new families of surrogate losses, including *structured comp-sum losses* and *structured constrained losses*, for which we prove $H$-consistency bounds and thus Bayes-consistency. These loss functions readily lead to new structured prediction algorithms with stronger theoretical guarantees, based on their minimization. We describe efficient algorithms for minimizing several of these surrogate losses, including a new *structured logistic loss*.
|
Structured Prediction with Stronger Consistency Guarantees
|
[
"Anqi Mao",
"Mehryar Mohri",
"Yutao Zhong"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=YWsPN0EMZr
|
@inproceedings{
dwaraknath2023fixing,
title={Fixing the {NTK}: From Neural Network Linearizations to Exact Convex Programs},
author={Rajat Vadiraj Dwaraknath and Tolga Ergen and Mert Pilanci},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YWsPN0EMZr}
}
|
Recently, theoretical analyses of deep neural networks have broadly focused on two directions: 1) Providing insight into neural network training by SGD in the limit of infinite hidden-layer width and infinitesimally small learning rate (also known as gradient flow) via the Neural Tangent Kernel (NTK), and 2) Globally optimizing the regularized training objective via cone-constrained convex reformulations of ReLU networks. The latter research direction also yielded an alternative formulation of the ReLU network, called a gated ReLU network, that is globally optimizable via efficient unconstrained convex programs. In this work, we interpret the convex program for this gated ReLU network as a Multiple Kernel Learning (MKL) model with a weighted data masking feature map and establish a connection to the NTK. Specifically, we show that for a particular choice of mask weights that do not depend on the learning targets, this kernel is equivalent to the NTK of the gated ReLU network on the training data. A consequence of this lack of dependence on the targets is that the NTK cannot perform better than the optimal MKL kernel on the training set. By using iterative reweighting, we improve the weights induced by the NTK to obtain the optimal MKL kernel which is equivalent to the solution of the exact convex reformulation of the gated ReLU network. We also provide several numerical simulations corroborating our theory. Additionally, we provide an analysis of the prediction error of the resulting optimal kernel via consistency results for the group lasso.
|
Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs
|
[
"Rajat Vadiraj Dwaraknath",
"Tolga Ergen",
"Mert Pilanci"
] |
Conference
|
poster
|
2309.15096
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YWSOpYjyG4
|
@inproceedings{
ouyang-zhang2023predicting,
title={Predicting a Protein's Stability under a Million Mutations},
author={Jeffrey Ouyang-Zhang and Daniel Jesus Diaz and Adam Klivans and Philipp Kraehenbuehl},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YWSOpYjyG4}
}
|
Stabilizing proteins is a foundational step in protein engineering. However, the evolutionary pressure on all extant proteins makes it challenging to identify the scarce mutations that will improve thermodynamic stability.
Deep learning has recently emerged as a powerful tool for identifying promising mutations.
Existing approaches, however, are computationally expensive, as the number of model inferences scales with the number of mutations queried.
Our main contribution is a simple, parallel decoding algorithm.
Mutate Everything is capable of predicting the effect of all single and double mutations in one forward pass.
It is even versatile enough to predict higher-order mutations with minimal computational overhead.
We build Mutate Everything on top of ESM2 and AlphaFold, neither of which were trained to predict thermodynamic stability.
We trained on the Mega-Scale cDNA proteolysis dataset and achieved state-of-the-art performance for single and higher-order mutations on the S669, ProTherm, and ProteinGym datasets.
Our code is available at https://github.com/jozhang97/MutateEverything.
|
Predicting a Protein's Stability under a Million Mutations
|
[
"Jeffrey Ouyang-Zhang",
"Daniel Jesus Diaz",
"Adam Klivans",
"Philipp Kraehenbuehl"
] |
Conference
|
poster
|
2310.12979
|
[
"https://github.com/jozhang97/mutateeverything"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YVMc3KiWBQ
|
@inproceedings{
qiao2023offline,
title={Offline Reinforcement Learning with Differential Privacy},
author={Dan Qiao and Yu-Xiang Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YVMc3KiWBQ}
}
|
The offline reinforcement learning (RL) problem is often motivated by the need to learn data-driven decision policies in financial, legal and healthcare applications. However, the learned policy could retain sensitive information of individuals in the training data (e.g., treatment and outcome of patients), thus susceptible to various privacy risks. We design offline RL algorithms with differential privacy guarantees which provably prevent such risks. These algorithms also enjoy strong instance-dependent learning bounds under both tabular and linear Markov Decision Process (MDP) settings. Our theory and simulation suggest that the privacy guarantee comes at (almost) no drop in utility compared to the non-private counterpart for a medium-size dataset.
|
Offline Reinforcement Learning with Differential Privacy
|
[
"Dan Qiao",
"Yu-Xiang Wang"
] |
Conference
|
poster
|
2206.00810
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YV1MYtj2AR
|
@inproceedings{
yang2023movie,
title={MoVie: Visual Model-Based Policy Adaptation for View Generalization},
author={Sizhe Yang and Yanjie Ze and Huazhe Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YV1MYtj2AR}
}
|
Visual Reinforcement Learning (RL) agents trained on limited views face significant challenges in generalizing their learned abilities to unseen views. This inherent difficulty is known as the problem of $\textit{view generalization}$. In this work, we systematically categorize this fundamental problem into four distinct and highly challenging scenarios that closely resemble real-world situations. Subsequently, we propose a straightforward yet effective approach to enable successful adaptation of visual $\textbf{Mo}$del-based policies for $\textbf{Vie}$w generalization ($\textbf{MoVie}$) during test time, without any need for explicit reward signals and any modification during training time. Our method demonstrates substantial advancements across all four scenarios encompassing a total of $\textbf{18}$ tasks sourced from DMControl, xArm, and Adroit, with a relative improvement of $\mathbf{33}$%, $\mathbf{86}$%, and $\mathbf{152}$% respectively. The superior results highlight the immense potential of our approach for real-world robotics applications. Code and videos are available at https://yangsizhe.github.io/MoVie/.
|
MoVie: Visual Model-Based Policy Adaptation for View Generalization
|
[
"Sizhe Yang",
"Yanjie Ze",
"Huazhe Xu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=YSMLVffl5u
|
@inproceedings{
khwaja2023celle,
title={{CELLE}-2: Translating Proteins to Pictures and Back with a Bidirectional Text-to-Image Transformer},
author={Emaad Khwaja and Yun S. Song and Aaron Agarunov and Bo Huang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YSMLVffl5u}
}
|
We present CELL-E 2, a novel bidirectional transformer that can generate images depicting protein subcellular localization from the amino acid sequences (and vice versa). Protein localization is a challenging problem that requires integrating sequence and image information, which most existing methods ignore. CELL-E 2 extends the work of CELL-E, not only capturing the spatial complexity of protein localization and producing probability estimates of localization atop a nucleus image, but also generating sequences from images, enabling de novo protein design. We train and finetune CELL-E 2 on two large-scale datasets of human proteins. We also demonstrate how to use CELL-E 2 to create hundreds of novel nuclear localization signals (NLS). Results and interactive demos are featured at https://bohuanglab.github.io/CELL-E_2/.
|
CELLE-2: Translating Proteins to Pictures and Back with a Bidirectional Text-to-Image Transformer
|
[
"Emaad Khwaja",
"Yun S. Song",
"Aaron Agarunov",
"Bo Huang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=YSFQRVkkl0
|
@inproceedings{
sui2023implicit,
title={Implicit Regularization in Over-Parameterized Support Vector Machine},
author={Yang Sui and Xin HE and Yang Bai},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YSFQRVkkl0}
}
|
In this paper, we design a regularization-free algorithm for high-dimensional support vector machines (SVMs) by integrating over-parameterization with Nesterov's smoothing method, and provide theoretical guarantees for the induced implicit regularization phenomenon. In particular, we construct an over-parameterized hinge loss function and estimate the true parameters by leveraging regularization-free gradient descent on this loss function. The utilization of Nesterov's method enhances the computational efficiency of our algorithm, especially in terms of determining the stopping criterion and reducing computational complexity. With appropriate choices of initialization, step size, and smoothness parameter, we demonstrate that unregularized gradient descent achieves a near-oracle statistical convergence rate. Additionally, we verify our theoretical findings through a variety of numerical experiments and compare the proposed method with explicit regularization. Our results illustrate the advantages of employing implicit regularization via gradient descent in conjunction with over-parameterization in sparse SVMs.
|
Implicit Regularization in Over-Parameterized Support Vector Machine
|
[
"Yang Sui",
"Xin HE",
"Yang Bai"
] |
Conference
|
poster
|
2310.17124
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YQA28p7qNz
|
@inproceedings{
hong2023dllm,
title={3D-{LLM}: Injecting the 3D World into Large Language Models},
author={Yining Hong and Haoyu Zhen and Peihao Chen and Shuhong Zheng and Yilun Du and Zhenfang Chen and Chuang Gan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YQA28p7qNz}
}
|
Large language models (LLMs) and Vision-Language Models (VLMs) have been proved to excel at multiple tasks, such as commonsense reasoning. Powerful as these models can be, they are not grounded in the 3D physical world, which involves richer concepts such as spatial relationships, affordances, physics, layout, and so on. In this work, we propose to inject the 3D world into large language models, and introduce a whole new family of 3D-LLMs. Specifically, 3D-LLMs can take 3D point clouds and their features as input and perform a diverse set of 3D-related tasks, including captioning, dense captioning, 3D question answering, task decomposition, 3D grounding, 3D-assisted dialog, navigation, and so on. Using three types of prompting mechanisms that we design, we are able to collect over 300k 3D-language data covering these tasks. To efficiently train 3D-LLMs, we first utilize a 3D feature extractor that obtains 3D features from rendered multi-view images. Then, we use 2D VLMs as our backbones to train our 3D-LLMs. By introducing a 3D localization mechanism, 3D-LLMs could better capture 3D spatial information. Experiments on ScanQA show that our model outperforms state-of-the-art baselines by a large margin (\textit{e.g.}, the BLEU-1 score surpasses state-of-the-art score by 9\%). Furthermore, experiments on our held-in datasets for 3D captioning, task composition, and 3D-assisted dialogue show that our model outperforms 2D VLMs. Qualitative examples also show that our model could perform more tasks beyond the scope of existing LLMs and VLMs. Our model and data will be publicly available.
|
3D-LLM: Injecting the 3D World into Large Language Models
|
[
"Yining Hong",
"Haoyu Zhen",
"Peihao Chen",
"Shuhong Zheng",
"Yilun Du",
"Zhenfang Chen",
"Chuang Gan"
] |
Conference
|
spotlight
|
2307.12981
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YPQg2RTFD8
|
@inproceedings{
liu2023harnessing,
title={Harnessing Hard Mixed Samples with Decoupled Regularizer},
author={Zicheng Liu and Siyuan Li and Ge Wang and Lirong Wu and Cheng Tan and Stan Z. Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YPQg2RTFD8}
}
|
Mixup is an efficient data augmentation approach that improves the generalization of neural networks by smoothing the decision boundary with mixed data. Recently, dynamic mixup methods have improved previous \textit{static} policies effectively (e.g., linear interpolation) by maximizing target-related salient regions in mixed samples, but excessive additional time costs are not acceptable. These additional computational overheads mainly come from optimizing the mixed samples according to the mixed labels. However, we found that the extra optimizing step may be redundant because label-mismatched mixed samples are informative hard mixed samples for deep models to localize discriminative features. In this paper, we thus are not trying to propose a more complicated dynamic mixup policy but rather an efficient mixup objective function with decoupled regularizer, named decoupled mixup (DM). The primary effect is that DM can adaptively utilize those hard mixed samples to mine discriminative features without losing the original smoothness of mixup. As a result, DM enables static mixup methods to achieve comparable or even exceed the performance of dynamic methods without any extra computation. This also leads to an interesting objective design problem for mixup training that we need to focus on both smoothing the decision boundaries and identifying discriminative features. Extensive experiments on supervised and semi-supervised learning benchmarks across seven datasets validate the effectiveness of DM.
|
Harnessing Hard Mixed Samples with Decoupled Regularizer
|
[
"Zicheng Liu",
"Siyuan Li",
"Ge Wang",
"Lirong Wu",
"Cheng Tan",
"Stan Z. Li"
] |
Conference
|
poster
|
2203.10761
|
[
"https://github.com/Westlake-AI/openmixup"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YPHIrNKI0d
|
@inproceedings{
lyon2023spatioangular,
title={Spatio-Angular Convolutions for Super-resolution in Diffusion {MRI}},
author={Matthew Lyon and Paul Armitage and Mauricio A {\'A}lvarez},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YPHIrNKI0d}
}
|
Diffusion MRI (dMRI) is a widely used imaging modality, but requires long scanning times to acquire high resolution datasets. By leveraging the unique geometry present within this domain, we present a novel approach to dMRI angular super-resolution that extends upon the parametric continuous convolution (PCConv) framework. We introduce several additions to the operation including a Fourier feature mapping, 'global' co-ordinates, and domain specific context. Using this framework, we build a fully parametric continuous convolution network (PCCNN) and compare against existing models. We demonstrate the PCCNN performs competitively while using significantly fewer parameters. Moreover, we show that this formulation generalises well to clinically relevant downstream analyses such as fixel-based analysis, and neurite orientation dispersion and density imaging.
|
Spatio-Angular Convolutions for Super-resolution in Diffusion MRI
|
[
"Matthew Lyon",
"Paul Armitage",
"Mauricio A Álvarez"
] |
Conference
|
poster
|
2306.00854
|
[
"https://github.com/m-lyon/dmri-pcconv"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YOZaej0ZC7
|
@inproceedings{
teo2023on,
title={On Measuring Fairness in Generative Models},
author={Christopher T.H Teo and Milad Abdollahzadeh and Ngai-man Cheung},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YOZaej0ZC7}
}
|
Recently, there has been increased interest in fair generative models. In this work, we conduct, for the first time, an in-depth study on fairness measurement, a critical component in gauging progress on fair generative models. We make three contributions. First, we conduct a study that reveals that the existing fairness measurement framework has considerable measurement errors, even when highly accurate sensitive attribute (SA) classifiers are used. These findings cast doubts on previously reported fairness improvements. Second, to address this issue, we propose CLassifier Error-Aware Measurement (CLEAM), a new framework which uses a statistical model to account for inaccuracies in SA classifiers. Our proposed CLEAM reduces measurement errors significantly, e.g., 4.98%→0.62% for StyleGAN2 w.r.t. Gender. Additionally, CLEAM achieves this with minimal additional overhead. Third, we utilize CLEAM to measure fairness in important text-to-image generator and GANs, revealing considerable biases in these models that raise concerns about their applications. Code and more resources: https://sutd-visual-computing-group.github.io/CLEAM/.
|
On Measuring Fairness in Generative Models
|
[
"Christopher T.H Teo",
"Milad Abdollahzadeh",
"Ngai-man Cheung"
] |
Conference
|
poster
|
2310.19297
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YMMlHBSQdC
|
@inproceedings{
srinivas2023which,
title={Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness},
author={Suraj Srinivas and Sebastian Bordt and Himabindu Lakkaraju},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YMMlHBSQdC}
}
|
One of the remarkable properties of robust computer vision models is that their input-gradients are often aligned with human perception, referred to in the literature as perceptually-aligned gradients (PAGs). Despite only being trained for classification, PAGs cause robust models to have rudimentary generative capabilities, including image generation, denoising, and in-painting. However, the underlying mechanisms behind these phenomena remain unknown. In this work, we provide a first explanation of PAGs via \emph{off-manifold robustness}, which states that models must be more robust off- the data manifold than they are on-manifold. We first demonstrate theoretically that off-manifold robustness leads input gradients to lie approximately on the data manifold, explaining their perceptual alignment. We then show that Bayes optimal models satisfy off-manifold robustness, and confirm the same empirically for robust models trained via gradient norm regularization, randomized smoothing, and adversarial training with projected gradient descent. Quantifying the perceptual alignment of model gradients via their similarity with the gradients of generative models, we show that off-manifold robustness correlates well with perceptual alignment. Finally, based on the levels of on- and off-manifold robustness, we identify three different regimes of robustness that affect both perceptual alignment and model accuracy: weak robustness, bayes-aligned robustness, and excessive robustness. Code is available at https://github.com/tml-tuebingen/pags.
|
Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
|
[
"Suraj Srinivas",
"Sebastian Bordt",
"Himabindu Lakkaraju"
] |
Conference
|
spotlight
|
2305.19101
|
[
"https://github.com/tml-tuebingen/pags"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YLOJ4aKAka
|
@inproceedings{
wu2023connecting,
title={Connecting Pre-trained Language Model and Downstream Task via Properties of Representation},
author={Chenwei Wu and Holden Lee and Rong Ge},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YLOJ4aKAka}
}
|
Recently, researchers have found that representations learned by large-scale pre-trained language models are useful in various downstream tasks. However, there is little theoretical understanding of how pre-training performance is related to downstream task performance. In this paper, we analyze how this performance transfer depends on the properties of the downstream task and the structure of the representations. We consider a log-linear model where a word can be predicted from its context through a network having softmax as its last layer. We show that even if the downstream task is highly structured and depends on a simple function of the hidden representation, there are still cases when a low pre-training loss cannot guarantee good performance on the downstream task. On the other hand, we propose and empirically validate the existence of an ``anchor vector'' in the representation space, and show that this assumption, together with properties of the downstream task, guarantees performance transfer.
|
Connecting Pre-trained Language Model and Downstream Task via Properties of Representation
|
[
"Chenwei Wu",
"Holden Lee",
"Rong Ge"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=YJDz4F2AZu
|
@inproceedings{
chen2023contiformer,
title={ContiFormer: Continuous-Time Transformer for Irregular Time Series Modeling},
author={Yuqi Chen and Kan Ren and Yansen Wang and Yuchen Fang and Weiwei Sun and Dongsheng Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YJDz4F2AZu}
}
|
Modeling continuous-time dynamics on irregular time series is critical to account for data evolution and correlations that occur continuously. Traditional methods including recurrent neural networks or Transformer models leverage inductive bias via powerful neural architectures to capture complex patterns. However, due to their discrete characteristic, they have limitations in generalizing to continuous-time data paradigms. Though neural ordinary differential equations (Neural ODEs) and their variants have shown promising results in dealing with irregular time series, they often fail to capture the intricate correlations within these sequences. It is challenging yet demanding to concurrently model the relationship between input data points and capture the dynamic changes of the continuous-time system. To tackle this problem, we propose ContiFormer that extends the relation modeling of vanilla Transformer to the continuous-time domain, which explicitly incorporates the modeling abilities of continuous dynamics of Neural ODEs with the attention mechanism of Transformers. We mathematically characterize the expressive power of ContiFormer and illustrate that, by curated designs of function hypothesis, many Transformer variants specialized in irregular time series modeling can be covered as a special case of ContiFormer. A wide range of experiments on both synthetic and real-world datasets have illustrated the superior modeling capacities and prediction performance of ContiFormer on irregular time series data. The project link is https://seqml.github.io/contiformer/.
|
ContiFormer: Continuous-Time Transformer for Irregular Time Series Modeling
|
[
"Yuqi Chen",
"Kan Ren",
"Yansen Wang",
"Yuchen Fang",
"Weiwei Sun",
"Dongsheng Li"
] |
Conference
|
poster
|
2402.10635
|
[
"https://github.com/microsoft/SeqML/tree/main/ContiFormer"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YI4bn6aAmz
|
@inproceedings{
dey2023conformal,
title={Conformal Prediction Sets for Ordinal Classification},
author={PRASENJIT DEY and Srujana Merugu and Sivaramakrishnan R Kaveri},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YI4bn6aAmz}
}
|
Ordinal classification (OC), i.e., labeling instances along classes with a natural ordering, is common in multiple applications such as size or budget based recommendations and disease severity labeling. Often in practical scenarios, it is desirable to obtain a small set of likely classes with a guaranteed high chance of including the true class. Recent works on conformal prediction (CP) address this problem for the classification setting with non-ordered labels but the resulting prediction sets (PS) are often non-contiguous and unsuitable for ordinal classification. In this work, we propose a framework to adapt existing CP methods to generate contiguous sets with guaranteed coverage and minimal cardinality. Our framework employs a novel non-parametric approach for modeling unimodal distributions. Empirical results on both synthetic and real-world datasets demonstrate our method outperforms SOTA baselines by 4% on Accuracy@K and 8% on PS size.
|
Conformal Prediction Sets for Ordinal Classification
|
[
"PRASENJIT DEY",
"Srujana Merugu",
"Sivaramakrishnan R Kaveri"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=YFW6MVGVTn
|
@inproceedings{
ni2023nice,
title={{NICE}: NoIse-modulated Consistency rEgularization for Data-Efficient {GAN}s},
author={Yao Ni and Piotr Koniusz},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YFW6MVGVTn}
}
|
Generative Adversarial Networks (GANs) are powerful tools for image synthesis. However, they require access to vast amounts of training data, which is often costly and prohibitive. Limited data affects GANs, leading to discriminator overfitting and training instability. In this paper, we present a novel approach called NoIse-modulated Consistency rEgularization (NICE) to overcome these challenges. To this end, we introduce an adaptive multiplicative noise into the discriminator to modulate its latent features. We demonstrate the effectiveness of such a modulation in preventing discriminator overfitting by adaptively reducing the Rademacher complexity of the discriminator. However, this modulation leads to an unintended consequence of increased gradient norm, which can undermine the stability of GAN training. To mitigate this undesirable effect, we impose a constraint on the discriminator, ensuring its consistency for the same inputs under different noise modulations. The constraint effectively penalizes the first and second-order gradients of latent features, enhancing GAN stability. Experimental evidence aligns with our theoretical analysis, demonstrating the reduction of generalization error and gradient penalization of NICE. This substantiates the efficacy of NICE in reducing discriminator overfitting and improving stability of GAN training. NICE achieves state-of-the-art results on CIFAR-10, CIFAR-100, ImageNet and FFHQ datasets when trained with limited data, as well as in low-shot generation tasks.
|
NICE: NoIse-modulated Consistency rEgularization for Data-Efficient GANs
|
[
"Yao Ni",
"Piotr Koniusz"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=YFSrf8aciU
|
@inproceedings{
wu2023inverse,
title={Inverse Reinforcement Learning with the Average Reward Criterion},
author={Feiyang Wu and Jingyang Ke and Anqi Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YFSrf8aciU}
}
|
We study the problem of Inverse Reinforcement Learning (IRL) with an average-reward criterion. The goal is to recover an unknown policy and a reward function when the agent only has samples of states and actions from an experienced agent. Previous IRL methods assume that the expert is trained in a discounted environment, and the discount factor is known. This work alleviates this assumption by proposing an average-reward framework with efficient learning algorithms. We develop novel stochastic first-order methods to solve the IRL problem under the average-reward setting, which requires solving an Average-reward Markov Decision Process (AMDP) as a subproblem. To solve the subproblem, we develop a Stochastic Policy Mirror Descent (SPMD) method under general state and action spaces that needs $\mathcal{O}(1/\varepsilon)$ steps of gradient computation. Equipped with SPMD, we propose the Inverse Policy Mirror Descent (IPMD) method for solving the IRL problem with a $\mathcal{O}(1/\varepsilon^2)$ complexity. To the best of our knowledge, the aforementioned complexity results are new in IRL with the average reward criterion. Finally, we corroborate our analysis with numerical experiments using the MuJoCo benchmark and additional control tasks.
|
Inverse Reinforcement Learning with the Average Reward Criterion
|
[
"Feiyang Wu",
"Jingyang Ke",
"Anqi Wu"
] |
Conference
|
poster
|
2305.14608
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YEtstXIpP3
|
@inproceedings{
russo2023modelfree,
title={Model-Free Active Exploration in Reinforcement Learning},
author={Alessio Russo and Alexandre Proutiere},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YEtstXIpP3}
}
|
We study the problem of exploration in Reinforcement Learning and present a novel model-free solution. We adopt an information-theoretical viewpoint and start from the instance-specific lower bound of the number of samples that have to be collected to identify a nearly-optimal policy. Deriving this lower bound along with the optimal exploration strategy entails solving an intricate optimization problem and requires a model of the system. In turn, most existing sample optimal exploration algorithms rely on estimating the model. We derive an approximation of the instance-specific lower bound that only involves quantities that can be inferred using model-free approaches. Leveraging this approximation, we devise an ensemble-based model-free exploration strategy applicable to both tabular and continuous Markov decision processes. Numerical results demonstrate that our strategy is able to identify efficient policies faster than state-of-the-art exploration approaches.
|
Model-Free Active Exploration in Reinforcement Learning
|
[
"Alessio Russo",
"Alexandre Proutiere"
] |
Conference
|
poster
|
2407.00801
|
[
"https://github.com/rssalessio/modelfreeactivateexplorationrl"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=YE04aRkeZb
|
@inproceedings{
nabli2023textbfatextbfcid,
title={$\textbf{A}^2\textbf{CiD}^2$: Accelerating Asynchronous Communication in Decentralized Deep Learning},
author={Adel Nabli and Eugene Belilovsky and Edouard Oyallon},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YE04aRkeZb}
}
|
Distributed training of Deep Learning models has been critical to many recent successes in the field. Current standard methods primarily rely on synchronous centralized algorithms which induce major communication bottlenecks and synchronization locks at scale. Decentralized asynchronous algorithms are emerging as a potential alternative but their practical applicability still lags. In order to mitigate the increase in communication cost that naturally comes with scaling the number of workers, we introduce a principled asynchronous, randomized, gossip-based optimization algorithm which works thanks to a continuous local momentum named $\textbf{A}^2\textbf{CiD}^2$. Our method allows each worker to continuously process mini-batches without stopping, and run a peer-to-peer averaging routine in parallel, reducing idle time. In addition to inducing a significant communication acceleration at no cost other than adding a local momentum variable, minimal adaptation is required to incorporate $\textbf{A}^2\textbf{CiD}^2$ to standard asynchronous approaches. Our theoretical analysis proves accelerated rates compared to previous asynchronous decentralized baselines and we empirically show that using our $\textbf{A}^2\textbf{CiD}^2$ momentum significantly decrease communication costs in poorly connected networks. In particular, we show consistent improvement on the ImageNet dataset using up to 64 asynchronous workers (A100 GPUs) and various communication network topologies.
|
A^2CiD^2: Accelerating Asynchronous Communication in Decentralized Deep Learning
|
[
"Adel Nabli",
"Eugene Belilovsky",
"Edouard Oyallon"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=YDCpf85eXc
|
@inproceedings{
tsirtsis2023finding,
title={Finding Counterfactually Optimal Action Sequences in Continuous State Spaces},
author={Stratis Tsirtsis and Manuel Gomez Rodriguez},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=YDCpf85eXc}
}
|
Whenever a clinician reflects on the efficacy of a sequence of treatment decisions for a patient, they may try to identify critical time steps where, had they made different decisions, the patient's health would have improved. While recent methods at the intersection of causal inference and reinforcement learning promise to aid human experts, as the clinician above, to *retrospectively* analyze sequential decision making processes, they have focused on environments with finitely many discrete states. However, in many practical applications, the state of the environment is inherently continuous in nature. In this paper, we aim to fill this gap. We start by formally characterizing a sequence of discrete actions and continuous states using finite horizon Markov decision processes and a broad class of bijective structural causal models. Building upon this characterization, we formalize the problem of finding counterfactually optimal action sequences and show that, in general, we cannot expect to solve it in polynomial time. Then, we develop a search method based on the A* algorithm that, under a natural form of Lipschitz continuity of the environment’s dynamics, is guaranteed to return the optimal solution to the problem. Experiments on real clinical data show that our method is very efficient in practice, and it has the potential to offer interesting insights for sequential decision making tasks.
|
Finding Counterfactually Optimal Action Sequences in Continuous State Spaces
|
[
"Stratis Tsirtsis",
"Manuel Gomez Rodriguez"
] |
Conference
|
poster
|
2306.03929
|
[
"https://github.com/networks-learning/counterfactual-continuous-mdp"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=Y8p3ThNDmK
|
@inproceedings{
chen2023a,
title={A Unified Algorithm Framework for Unsupervised Discovery of Skills based on Determinantal Point Process},
author={Jiayu Chen and Vaneet Aggarwal and Tian Lan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Y8p3ThNDmK}
}
|
Learning rich skills under the option framework without supervision of external rewards is at the frontier of reinforcement learning research. Existing works mainly fall into two distinctive categories: variational option discovery that maximizes the diversity of the options through a mutual information loss (while ignoring coverage) and Laplacian-based methods that focus on improving the coverage of options by increasing connectivity of the state space (while ignoring diversity). In this paper, we show that diversity and coverage in unsupervised option discovery can indeed be unified under the same mathematical framework. To be specific, we explicitly quantify the diversity and coverage of the learned options through a novel use of Determinantal Point Process (DPP) and optimize these objectives to discover options with both superior diversity and coverage. Our proposed algorithm, ODPP, has undergone extensive evaluation on challenging tasks created with Mujoco and Atari. The results demonstrate that our algorithm outperforms state-of-the-art baselines in both diversity- and coverage-driven categories.
|
A Unified Algorithm Framework for Unsupervised Discovery of Skills based on Determinantal Point Process
|
[
"Jiayu Chen",
"Vaneet Aggarwal",
"Tian Lan"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Y6IGTNMdLT
|
@inproceedings{
xu2023model,
title={Model Shapley: Equitable Model Valuation with Black-box Access},
author={Xinyi Xu and Thanh Lam and Chuan-Sheng Foo and Bryan Kian Hsiang Low},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Y6IGTNMdLT}
}
|
Valuation methods of data and machine learning (ML) models are essential to the establishment of AI marketplaces. Importantly, certain practical considerations (e.g., operational constraints, legal restrictions) favor the use of model valuation over data valuation. Also, existing marketplaces that involve trading of pre-trained ML models call for an equitable model valuation method to price them. In particular, we investigate the black-box access setting which allows querying a model (to observe predictions) without disclosing model-specific information (e.g., architecture and parameters). By exploiting a Dirichlet abstraction of a model’s predictions, we propose a novel and equitable model valuation method called model Shapley. We also leverage a Lipschitz continuity of model Shapley to design a learning approach for predicting the model Shapley values (MSVs) of many vendors’ models (e.g., 150) in a large-scale marketplace. We perform extensive empirical validation on the effectiveness of model Shapley using various real-world datasets and heterogeneous model types.
|
Model Shapley: Equitable Model Valuation with Black-box Access
|
[
"Xinyi Xu",
"Thanh Lam",
"Chuan-Sheng Foo",
"Bryan Kian Hsiang Low"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Y44NurSDjq
|
@inproceedings{
dai2023quantum,
title={Quantum Bayesian Optimization},
author={Zhongxiang Dai and Gregory Kang Ruey Lau and Arun Verma and Yao Shu and Bryan Kian Hsiang Low and Patrick Jaillet},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Y44NurSDjq}
}
|
Kernelized bandits, also known as Bayesian optimization (BO), has been a prevalent method for optimizing complicated black-box reward functions. Various BO algorithms have been theoretically shown to enjoy upper bounds on their cumulative regret which are sub-linear in the number $T$ of iterations, and a regret lower bound of $\Omega(\sqrt{T})$ has been derived which represents the unavoidable regrets for any classical BO algorithm. Recent works on quantum bandits have shown that with the aid of quantum computing, it is possible to achieve tighter regret upper bounds better than their corresponding classical lower bounds. However, these works are restricted to either multi-armed or linear bandits, and are hence not able to solve sophisticated real-world problems with non-linear reward functions. To this end, we introduce the quantum-Gaussian process-upper confidence bound (Q-GP-UCB) algorithm. To the best of our knowledge, our Q-GP-UCB is the first BO algorithm able to achieve a regret upper bound of $\mathcal{O}(\text{poly}\log T)$, which is significantly smaller than its regret lower bound of $\Omega(\sqrt{T})$ in the classical setting. Moreover, thanks to our novel analysis of the confidence ellipsoid, our Q-GP-UCB with the linear kernel achieves a smaller regret than the quantum linear UCB algorithm from the previous work. We use simulations, as well as an experiment using a real quantum computer, to verify that the theoretical quantum speedup achieved by our Q-GP-UCB is also potentially relevant in practice.
|
Quantum Bayesian Optimization
|
[
"Zhongxiang Dai",
"Gregory Kang Ruey Lau",
"Arun Verma",
"Yao Shu",
"Bryan Kian Hsiang Low",
"Patrick Jaillet"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Y3g1PV5R9l
|
@inproceedings{
he2023ptqd,
title={{PTQD}: Accurate Post-Training Quantization for Diffusion Models},
author={Yefei He and Luping Liu and Jing Liu and Weijia Wu and Hong Zhou and Bohan Zhuang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Y3g1PV5R9l}
}
|
Diffusion models have recently dominated image synthesis and other related generative tasks. However, the iterative denoising process is expensive in computations at inference time, making diffusion models less practical for low-latency and scalable real-world applications.
Post-training quantization of diffusion models can significantly reduce the model size and accelerate the sampling process without requiring any re-training. Nonetheless, applying existing post-training quantization methods directly to low-bit diffusion models can significantly impair the quality of generated samples. Specifically, for each denoising step, quantization noise leads to deviations in the estimated mean and mismatches with the predetermined variance schedule. Moreover, as the sampling process proceeds, the quantization noise may accumulate, resulting in a low signal-to-noise ratio (SNR) during the later denoising steps. To address these challenges, we propose a unified formulation for the quantization noise and diffusion perturbed noise in the quantized denoising process.
Specifically, we first disentangle the quantization noise into its correlated and residual uncorrelated parts regarding its full-precision counterpart. The correlated part can be easily corrected by estimating the correlation coefficient. For the uncorrelated part, we subtract the bias from the quantized results to correct the mean deviation and calibrate the denoising variance schedule to absorb the excess variance resulting from quantization. Moreover, we introduce a mixed-precision scheme for selecting the optimal bitwidth for each denoising step, which prioritizes lower bitwidths to expedite early denoising steps, while ensuring that higher bitwidths maintain a high signal-to-noise ratio (SNR) in the later steps. Extensive experiments demonstrate that our method outperforms previous post-training quantized diffusion models in generating high-quality samples, with only a $0.06$ increase in FID score compared to full-precision LDM-4 on ImageNet $256\times256$, while saving $19.9\times$ bit operations. Code is available at [https://github.com/ziplab/PTQD](https://github.com/ziplab/PTQD).
|
PTQD: Accurate Post-Training Quantization for Diffusion Models
|
[
"Yefei He",
"Luping Liu",
"Jing Liu",
"Weijia Wu",
"Hong Zhou",
"Bohan Zhuang"
] |
Conference
|
poster
|
2305.10657
|
[
"https://github.com/ziplab/ptqd"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=Y3NjoeO4Q1
|
@inproceedings{
kawana2023detection,
title={Detection Based Part-level Articulated Object Reconstruction from Single {RGBD} Image},
author={Yuki Kawana and Tatsuya Harada},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Y3NjoeO4Q1}
}
|
We propose an end-to-end trainable, cross-category method for reconstructing multiple man-made articulated objects from a single RGBD image, focusing on part-level shape reconstruction and pose and kinematics estimation. We depart from previous works that rely on learning instance-level latent space, focusing on man-made articulated objects with predefined part counts. Instead, we propose a novel alternative approach that employs part-level representation, representing instances as combinations of detected parts. While our detect-then-group approach effectively handles instances with diverse part structures and various part counts, it faces issues of false positives, varying part sizes and scales, and an increasing model size due to end-to-end training. To address these challenges, we propose 1) test-time kinematics-aware part fusion to improve detection performance while suppressing false positives, 2) anisotropic scale normalization for part shape learning to accommodate various part sizes and scales, and 3) a balancing strategy for cross-refinement between feature space and output space to improve part detection while maintaining model size. Evaluation on both synthetic and real data demonstrates that our method successfully reconstructs variously structured multiple instances that previous works cannot handle, and outperforms prior works in shape reconstruction and kinematics estimation.
|
Detection Based Part-level Articulated Object Reconstruction from Single RGBD Image
|
[
"Yuki Kawana",
"Tatsuya Harada"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Y2hnMZvVDm
|
@inproceedings{
mahankali2023beyond,
title={Beyond {NTK} with Vanilla Gradient Descent: A Mean-Field Analysis of Neural Networks with Polynomial Width, Samples, and Time},
author={Arvind Venkat Mahankali and Jeff Z. HaoChen and Kefan Dong and Margalit Glasgow and Tengyu Ma},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Y2hnMZvVDm}
}
|
Despite recent theoretical progress on the non-convex optimization of two-layer neural networks, it is still an open question whether gradient descent on neural networks without unnatural modifications can achieve better sample complexity than kernel methods. This paper provides a clean mean-field analysis of projected gradient flow on polynomial-width two-layer neural networks. Different from prior works, our analysis does not require unnatural modifications of the optimization algorithm. We prove that with sample size $n = O(d^{3.1})$ where $d$ is the dimension of the inputs, the network trained with projected gradient flow converges in polynomial time to a non-trivial error that is not achievable by kernel methods using $n \ll d^4$ samples, hence demonstrating a clear separation between unmodified gradient descent and NTK. As a corollary, we show that projected gradient descent with a positive learning rate and a polynomial number of iterations converges to low error with the same sample complexity.
|
Beyond NTK with Vanilla Gradient Descent: A Mean-Field Analysis of Neural Networks with Polynomial Width, Samples, and Time
|
[
"Arvind Venkat Mahankali",
"Jeff Z. HaoChen",
"Kefan Dong",
"Margalit Glasgow",
"Tengyu Ma"
] |
Conference
|
poster
|
2306.16361
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=Y2VQWfi7Vc
|
@inproceedings{
sangalli2023expert,
title={Expert load matters: operating networks at high accuracy and low manual effort},
author={Sara Sangalli and Ertunc Erdil and Ender Konukoglu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Y2VQWfi7Vc}
}
|
In human-AI collaboration systems for critical applications, in order to ensure minimal error, users should set an operating point based on model confidence to determine when the decision should be delegated to human experts.
Samples for which model confidence is lower than the operating point would be manually analysed by experts to avoid mistakes.
Such systems can become truly useful only if they consider two aspects: models should be confident only for samples for which they are accurate, and the number of samples delegated to experts should be minimized.
The latter aspect is especially crucial for applications where available expert time is limited and expensive, such as healthcare.
The trade-off between the model accuracy and the number of samples delegated to experts can be represented by a curve that is similar to an ROC curve, which we refer to as confidence operating characteristic (COC) curve.
In this paper, we argue that deep neural networks should be trained by taking into account both accuracy and expert load and, to that end, propose a new complementary loss function for classification that maximizes the area under this COC curve.
This promotes simultaneously the increase in network accuracy and the reduction in number of samples delegated to humans.
We perform experiments on multiple computer vision and medical image datasets for classification.
Our results demonstrate that the proposed loss improves classification accuracy, delegates fewer decisions to experts, achieves better out-of-distribution sample detection, and attains calibration performance on par with existing loss functions.
|
Expert load matters: operating networks at high accuracy and low manual effort
|
[
"Sara Sangalli",
"Ertunc Erdil",
"Ender Konukoglu"
] |
Conference
|
poster
|
2308.05035
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=Y1sJJW3pID
|
@inproceedings{
pedramfar2023a,
title={A Unified Approach for Maximizing Continuous {DR}-submodular Functions},
author={Mohammad Pedramfar and Christopher John Quinn and Vaneet Aggarwal},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Y1sJJW3pID}
}
|
This paper presents a unified approach for maximizing continuous DR-submodular functions that encompasses a range of settings and oracle access types. Our approach includes a Frank-Wolfe type offline algorithm for both monotone and non-monotone functions, with different restrictions on the general convex set. We consider settings where the oracle provides access to either the gradient of the function or only the function value, and where the oracle access is either deterministic or stochastic. We determine the number of required oracle accesses in all cases. Our approach gives new/improved results for nine out of the sixteen considered cases, avoids computationally expensive projections in three cases, with the proposed framework matching performance of state-of-the-art approaches in the remaining four cases. Notably, our approach for the stochastic function value-based oracle enables the first regret bounds with bandit feedback for stochastic DR-submodular functions.
|
A Unified Approach for Maximizing Continuous DR-submodular Functions
|
[
"Mohammad Pedramfar",
"Christopher John Quinn",
"Vaneet Aggarwal"
] |
Conference
|
poster
|
2305.16671
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=Y18r0xWkSh
|
@inproceedings{
rosa2023posterior,
title={Posterior Contraction Rates for Mat\'ern Gaussian Processes on Riemannian Manifolds},
author={Paul Rosa and Viacheslav Borovitskiy and Alexander Terenin and Judith Rousseau},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Y18r0xWkSh}
}
|
Gaussian processes are used in many machine learning applications that rely on uncertainty quantification. Recently, computational tools for working with these models in geometric settings, such as when inputs lie on a Riemannian manifold, have been developed. This raises the question: can these intrinsic models be shown theoretically to lead to better performance, compared to simply embedding all relevant quantities into $\mathbb{R}^d$ and using the restriction of an ordinary Euclidean Gaussian process? To study this, we prove optimal contraction rates for intrinsic Matérn Gaussian processes defined on compact Riemannian manifolds. We also prove analogous rates for extrinsic processes using trace and extension theorems between manifold and ambient Sobolev spaces: somewhat surprisingly, the rates obtained turn out to coincide with those of the intrinsic processes, provided that their smoothness parameters are matched appropriately. We illustrate these rates empirically on a number of examples, which, mirroring prior work, show that intrinsic processes can achieve better performance in practice. Therefore, our work shows that finer-grained analyses are needed to distinguish between different levels of data-efficiency of geometric Gaussian processes, particularly in settings which involve small data set sizes and non-asymptotic behavior.
|
Posterior Contraction Rates for Matérn Gaussian Processes on Riemannian Manifolds
|
[
"Paul Rosa",
"Viacheslav Borovitskiy",
"Alexander Terenin",
"Judith Rousseau"
] |
Conference
|
spotlight
|
2309.10918
|
[
"https://github.com/aterenin/geometric_asymptotics"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=Y17N9B0vXn
|
@inproceedings{
tian2023towards,
title={Towards Higher Ranks via Adversarial Weight Pruning},
author={Yuchuan Tian and Hanting Chen and Tianyu Guo and Chao Xu and Yunhe Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Y17N9B0vXn}
}
|
Convolutional Neural Networks (CNNs) are hard to deploy on edge devices due to their high computation and storage complexity. As a common practice for model compression, network pruning consists of two major categories: unstructured and structured pruning, where unstructured pruning consistently performs better. However, unstructured pruning presents a structured pattern at high pruning rates, which limits its performance. To this end, we propose a Rank-based PruninG (RPG) method to maintain the ranks of sparse weights in an adversarial manner. In each step, we minimize the low-rank approximation error for the weight matrices using singular value decomposition, and maximize their distance by pushing the weight matrices away from their low-rank approximation. This rank-based optimization objective guides sparse weights towards a high-rank topology. The proposed method is conducted in a gradual pruning fashion to stabilize the change of rank during training. Experimental results on various datasets and different tasks demonstrate the effectiveness of our algorithm at high sparsity. The proposed RPG outperforms the state of the art by 1.13\% top-1 accuracy on ImageNet in ResNet-50 with 98\% sparsity. The codes are available at https://github.com/huawei-noah/Efficient-Computing/tree/master/Pruning/RPG and https://gitee.com/mindspore/models/tree/master/research/cv/RPG.
|
Towards Higher Ranks via Adversarial Weight Pruning
|
[
"Yuchuan Tian",
"Hanting Chen",
"Tianyu Guo",
"Chao Xu",
"Yunhe Wang"
] |
Conference
|
poster
|
2311.17493
|
[
"https://github.com/huawei-noah/Efficient-Computing"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XzTM9gVRT4
|
@inproceedings{
lee2023minimax,
title={Minimax Risks and Optimal Procedures for Estimation under Functional Local Differential Privacy},
author={Bonwoo Lee and Jeongyoun Ahn and Cheolwoo Park},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XzTM9gVRT4}
}
|
As concerns about data privacy continue to grow, differential privacy (DP) has emerged as a fundamental concept that aims to guarantee privacy by ensuring individuals' indistinguishability in data analysis. Local differential privacy (LDP) is a rigorous type of DP that requires individual data to be privatized before being sent to the collector, thus removing the need for a trusted third party to collect data. Among the numerous (L)DP-based approaches, functional DP has gained considerable attention in the DP community because it connects DP to statistical decision-making by formulating it as a hypothesis-testing problem and also exhibits Gaussian-related properties. However, the utility of privatized data is generally lower than that of non-private data, prompting research into optimal mechanisms that maximize the statistical utility for given privacy constraints. In this study, we investigate how functional LDP preserves the statistical utility by analyzing minimax risks of univariate mean estimation as well as nonparametric density estimation. We leverage the contraction property of functional LDP mechanisms and classical information-theoretical bounds to derive private minimax lower bounds. Our theoretical study reveals that it is possible to establish an interpretable, continuous balance between the statistical utility and privacy level, which has not been achieved under the $\epsilon$-LDP framework. Furthermore, we suggest minimax optimal mechanisms based on Gaussian LDP (a type of functional LDP) that achieve the minimax upper bounds and show via a numerical study that they are superior to the counterparts derived under $\epsilon$-LDP. The theoretical and empirical findings of this work suggest that Gaussian LDP should be considered a reliable standard for LDP.
|
Minimax Risks and Optimal Procedures for Estimation under Functional Local Differential Privacy
|
[
"Bonwoo Lee",
"Jeongyoun Ahn",
"Cheolwoo Park"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Xyj46OxEhK
|
@inproceedings{
chang2023look,
title={Look Ma, No Hands! Agent-Environment Factorization of Egocentric Videos},
author={Matthew Chang and Aditya Prakash and Saurabh Gupta},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Xyj46OxEhK}
}
|
The analysis and use of egocentric videos for robotics tasks is made challenging by occlusion and the visual mismatch between the human hand and a robot end-effector. Past work views the human hand as a nuisance and removes it from the scene. However, the hand also provides a valuable signal for learning. In this work, we propose to extract a factored representation of the scene that separates the agent (human hand) and the environment. This alleviates both occlusion and mismatch while preserving the signal, thereby easing the design of models for downstream robotics tasks. At the heart of this factorization is our proposed Video Inpainting via Diffusion Model (VIDM) that leverages both a prior on real-world images (through a large-scale pre-trained diffusion model) and the appearance of the object in earlier frames of the video (through attention). Our experiments demonstrate the effectiveness of VIDM at improving the in-painting quality in egocentric videos and the power of our factored representation for numerous tasks: object detection, 3D reconstruction of manipulated objects, and learning of reward functions, policies, and affordances from videos.
|
Look Ma, No Hands! Agent-Environment Factorization of Egocentric Videos
|
[
"Matthew Chang",
"Aditya Prakash",
"Saurabh Gupta"
] |
Conference
|
poster
|
2305.16301
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XyAP8ScqLV
|
@inproceedings{
yang2023an,
title={An Empirical Study Towards Prompt-Tuning for Graph Contrastive Pre-Training in Recommendations},
author={Haoran Yang and Xiangyu Zhao and Yicong Li and Hongxu Chen and Guandong Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XyAP8ScqLV}
}
|
Graph contrastive learning (GCL) has emerged as a potent technology for numerous graph learning tasks. It has been successfully applied to real-world recommender systems, where the contrastive loss and the downstream recommendation objectives are always combined to form the overall objective function. Such a strategy is inconsistent with the original GCL paradigm, where graph embeddings are pre-trained without involving downstream training objectives. In this paper, we innovatively propose a prompt-enhanced framework for GCL-based recommender systems, namely CPTPP, which can fully leverage the advantages of the original GCL protocol through prompt tuning. Specifically, we first summarise user profiles in graph recommender systems to automatically generate personalized user prompts. These prompts will then be combined with pre-trained user embeddings to conduct prompt-tuning in downstream tasks, thereby narrowing the gap between pre-training and downstream training targets. Extensive experiments on three benchmark datasets validate the effectiveness of CPTPP against state-of-the-art baselines. A further visualization experiment demonstrates that user embeddings generated by CPTPP have a more uniform distribution, indicating a better capacity to model the diversity of user preferences.
The implementation code is available online to ease reproducibility: https://anonymous.4open.science/r/CPTPP-F8F4
|
An Empirical Study Towards Prompt-Tuning for Graph Contrastive Pre-Training in Recommendations
|
[
"Haoran Yang",
"Xiangyu Zhao",
"Yicong Li",
"Hongxu Chen",
"Guandong Xu"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Xy7DoWSNZX
|
@inproceedings{
kuznedelev2023cap,
title={{CAP}: Correlation-Aware Pruning for Highly-Accurate Sparse Vision Models},
author={Denis Kuznedelev and Eldar Kurtic and Elias Frantar and Dan Alistarh},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Xy7DoWSNZX}
}
|
Driven by significant improvements in architectural design and training pipelines, computer vision has recently experienced dramatic progress in terms of accuracy on classic benchmarks such as ImageNet.
These highly-accurate models are challenging to deploy, as they appear harder to compress using standard techniques such as pruning.
We address this issue by introducing the Correlation-Aware Pruner (CAP), a new unstructured pruning framework which significantly pushes the compressibility limits for state-of-the-art architectures.
Our method is based on two technical advancements: a new theoretically-justified pruner, which can handle complex weight correlations accurately and efficiently during the pruning process itself, and an efficient finetuning procedure for post-compression recovery.
We validate our approach via extensive experiments on several modern vision models such as Vision Transformers (ViT), modern CNNs, and ViT-CNN hybrids, showing for the first time that these can be pruned to high sparsity levels (e.g. $\geq 75$%) with low impact on accuracy ($\leq 1$% relative drop).
Our approach is also compatible with structured pruning and quantization, and can lead to practical speedups of 1.5 to 2.4x without accuracy loss. To further showcase CAP's accuracy and scalability, we use it to show for the first time that extremely-accurate large vision models, trained via self-supervised techniques, can also be pruned to moderate sparsities, with negligible accuracy loss.
|
CAP: Correlation-Aware Pruning for Highly-Accurate Sparse Vision Models
|
[
"Denis Kuznedelev",
"Eldar Kurtic",
"Elias Frantar",
"Dan Alistarh"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Xxllzjt6T5
|
@inproceedings{
liu2023resync,
title={ReSync: Riemannian Subgradient-based Robust Rotation Synchronization},
author={Huikang Liu and Xiao Li and Anthony Man-Cho So},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Xxllzjt6T5}
}
|
This work presents ReSync, a Riemannian subgradient-based algorithm for solving the robust rotation synchronization problem, which arises in various engineering applications. ReSync solves a least-unsquared minimization formulation over the rotation group, which is nonsmooth and nonconvex, and aims at recovering the underlying rotations directly. We provide strong theoretical guarantees for ReSync under the random corruption setting. Specifically, we first show that the initialization procedure of ReSync yields a proper initial point that lies in a local region around the ground-truth rotations. We next establish the weak sharpness property of the aforementioned formulation and then utilize this property to derive the local linear convergence of ReSync to the ground-truth rotations. By combining these guarantees, we conclude that ReSync converges linearly to the ground-truth rotations under appropriate conditions. Experimental results demonstrate the effectiveness of ReSync.
|
ReSync: Riemannian Subgradient-based Robust Rotation Synchronization
|
[
"Huikang Liu",
"Xiao Li",
"Anthony Man-Cho So"
] |
Conference
|
poster
|
2305.15136
|
[
"https://github.com/huikang2019/resync"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XvfEYqEbIb
|
@inproceedings{
jiang2023nonrigid,
title={Non-Rigid Shape Registration via Deep Functional Maps Prior},
author={Puhua Jiang and Mingze Sun and Ruqi Huang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XvfEYqEbIb}
}
|
In this paper, we propose a learning-based framework for non-rigid shape registration without correspondence supervision. Traditional shape registration techniques typically rely on correspondences induced by extrinsic proximity, and can therefore fail in the presence of large intrinsic deformations. Spectral mapping methods overcome this challenge by embedding shapes into geometric or learned high-dimensional spaces, where shapes are easier to align. However, due to the dependency on abstract, non-linear embedding schemes, the latter can be vulnerable to perturbed or alien input. In light of this, our framework takes the best of both worlds. Namely, we deform the source mesh towards the target point cloud, guided by correspondences induced by high-dimensional embeddings learned from deep functional maps (DFM). In particular, the correspondences are dynamically updated according to the intermediate registrations and filtered by a consistency prior, which prominently robustifies the overall pipeline. Moreover, in order to alleviate the requirement of extrinsically aligned input, we train an orientation regressor on a set of aligned synthetic shapes independent of the training shapes for DFM. Empirical results show that, with as few as dozens of training shapes of limited variability, our pipeline not only achieves state-of-the-art results on several benchmarks of non-rigid point cloud matching, but also delivers high-quality correspondences between unseen challenging shape pairs that undergo both significant extrinsic and intrinsic deformations, in which case neither traditional registration methods nor intrinsic methods work. The code is available at https://github.com/rqhuang88/DFR.
|
Non-Rigid Shape Registration via Deep Functional Maps Prior
|
[
"Puhua Jiang",
"Mingze Sun",
"Ruqi Huang"
] |
Conference
|
poster
|
2311.04494
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XvGQ6F3sG8
|
@inproceedings{
yang2023selfsupervised,
title={Self-supervised Graph Neural Networks via Low-Rank Decomposition},
author={Liang Yang and Runjie Shi and Qiuliang Zhang and Bingxin Niu and Zhen Wang and Xiaochun Cao and Chuan Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XvGQ6F3sG8}
}
|
Self-supervised learning is introduced to train graph neural networks (GNNs) by employing propagation-based GNNs designed for semi-supervised learning tasks. Unfortunately, this common choice tends to cause two serious issues. Firstly, the global parameters leave the model unable to capture local properties. Secondly, it is difficult to handle networks beyond homophily without label information.
This paper breaks with the common choice of employing propagation-based GNNs, which aggregate representations of nodes belonging to different classes and thus tend to lose discriminative information. If the propagation in each ego-network occurs only between nodes from the same class, the obtained representation matrix should exhibit a low-rank structure. To meet this requirement, this paper proposes Low-Rank Decomposition-based GNNs (LRD-GNN-Matrix) by applying low-rank decomposition to the attribute matrix.
Furthermore, to incorporate long-distance information, Low-Rank Tensor Decomposition-based GNN (LRD-GNN-Tensor) is proposed by constructing the node attribute tensor from selected similar ego-networks and performing Low-Rank Tensor Decomposition. The employed tensor nuclear norm facilitates the capture of the long-distance relationship between original and selected similar ego-networks. Extensive experiments demonstrate the superior performance and the robustness of LRD-GNNs.
|
Self-supervised Graph Neural Networks via Low-Rank Decomposition
|
[
"Liang Yang",
"Runjie Shi",
"Qiuliang Zhang",
"Bingxin Niu",
"Zhen Wang",
"Xiaochun Cao",
"Chuan Wang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Xu8aG5Q8M3
|
@inproceedings{
feng2023layoutgpt,
title={Layout{GPT}: Compositional Visual Planning and Generation with Large Language Models},
author={Weixi Feng and Wanrong Zhu and Tsu-Jui Fu and Varun Jampani and Arjun Reddy Akula and Xuehai He and S Basu and Xin Eric Wang and William Yang Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Xu8aG5Q8M3}
}
|
Attaining a high degree of user controllability in visual generation often requires intricate, fine-grained inputs like layouts. However, such inputs impose a substantial burden on users when compared to simple text inputs. To address the issue, we study how Large Language Models (LLMs) can serve as visual planners by generating layouts from text conditions, and thus collaborate with visual generative models. We propose LayoutGPT, a method to compose in-context visual demonstrations in style sheet language to enhance the visual planning skills of LLMs. We show that LayoutGPT can generate plausible layouts in multiple domains, ranging from 2D images to 3D indoor scenes. LayoutGPT also shows superior performance in converting challenging language concepts like numerical and spatial relations to layout arrangements for faithful text-to-image generation. When combined with a downstream image generation model, LayoutGPT outperforms text-to-image models/systems by 20-40\% and achieves performance comparable to human users in designing visual layouts for numerical and spatial correctness. Lastly, LayoutGPT achieves comparable performance to supervised methods in 3D indoor scene synthesis, demonstrating its effectiveness and potential in multiple visual domains.
|
LayoutGPT: Compositional Visual Planning and Generation with Large Language Models
|
[
"Weixi Feng",
"Wanrong Zhu",
"Tsu-Jui Fu",
"Varun Jampani",
"Arjun Reddy Akula",
"Xuehai He",
"S Basu",
"Xin Eric Wang",
"William Yang Wang"
] |
Conference
|
poster
|
2305.15393
|
[
"https://github.com/weixi-feng/layoutgpt"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=Xs6Xwc0Glj
|
@inproceedings{
voronov2023is,
title={Is This Loss Informative? Faster Text-to-Image Customization by Tracking Objective Dynamics},
author={Anton Voronov and Mikhail Khoroshikh and Artem Babenko and Max Ryabinin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Xs6Xwc0Glj}
}
|
Text-to-image generation models represent the next step of evolution in image synthesis, offering a natural way to achieve flexible yet fine-grained control over the result.
One emerging area of research is the fast adaptation of large text-to-image models to smaller datasets or new visual concepts.
However, many efficient methods of adaptation have a long training time, which limits their practical applications, slows down experiments, and consumes excessive GPU resources.
In this work, we study the training dynamics of popular text-to-image personalization methods (such as Textual Inversion or DreamBooth), aiming to speed them up.
We observe that most concepts are learned at early stages and do not improve in quality later, but standard training convergence metrics fail to indicate that.
Instead, we propose a simple drop-in early stopping criterion that only requires computing the regular training objective on a fixed set of inputs for all training iterations.
Our experiments on Stable Diffusion for 48 different concepts and three personalization methods demonstrate the competitive performance of our approach, which makes adaptation up to 8 times faster with no significant drops in quality.
|
Is This Loss Informative? Faster Text-to-Image Customization by Tracking Objective Dynamics
|
[
"Anton Voronov",
"Mikhail Khoroshikh",
"Artem Babenko",
"Max Ryabinin"
] |
Conference
|
poster
|
2302.04841
|
[
"https://github.com/yandex-research/dvar"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XrqqPDAsRE
|
@inproceedings{
wang2023a,
title={A Randomized Approach to Tight Privacy Accounting},
author={Jiachen T. Wang and Saeed Mahloujifar and Tong Wu and Ruoxi Jia and Prateek Mittal},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XrqqPDAsRE}
}
|
Bounding privacy leakage over compositions, i.e., privacy accounting, is a key challenge in differential privacy (DP). However, the privacy parameter ($\varepsilon$ or $\delta$) is often easy to estimate but hard to bound. In this paper, we propose a new differential privacy paradigm called estimate-verify-release (EVR), which tackles the challenges of providing a strict upper bound for the privacy parameter in DP compositions by converting an *estimate* of privacy parameter into a formal guarantee. The EVR paradigm first verifies whether the mechanism meets the *estimated* privacy guarantee, and then releases the query output based on the verification result. The core component of the EVR is privacy verification. We develop a randomized privacy verifier using Monte Carlo (MC) technique. Furthermore, we propose an MC-based DP accountant that outperforms existing DP accounting techniques in terms of accuracy and efficiency. MC-based DP verifier and accountant is applicable to an important and commonly used class of DP algorithms, including the famous DP-SGD. An empirical evaluation shows the proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
|
A Randomized Approach to Tight Privacy Accounting
|
[
"Jiachen T. Wang",
"Saeed Mahloujifar",
"Tong Wu",
"Ruoxi Jia",
"Prateek Mittal"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=XqcXf7ix5q
|
@inproceedings{
lee2023localityaware,
title={Locality-Aware Generalizable Implicit Neural Representation},
author={Doyup Lee and Chiheon Kim and Minsu Cho and Wook-Shin Han},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XqcXf7ix5q}
}
|
Generalizable implicit neural representation (INR) enables a single continuous function, i.e., a coordinate-based neural network, to represent multiple data instances by modulating its weights or intermediate features using latent codes. However, the expressive power of the state-of-the-art modulation is limited due to its inability to localize and capture fine-grained details of data entities such as specific pixels and rays. To address this issue, we propose a novel framework for generalizable INR that combines a transformer encoder with a locality-aware INR decoder. The transformer encoder predicts a set of latent tokens from a data instance to encode local information into each latent token. The locality-aware INR decoder extracts a modulation vector by selectively aggregating the latent tokens via cross-attention for a coordinate input and then predicts the output by progressively decoding with coarse-to-fine modulation through multiple frequency bandwidths. The selective token aggregation and the multi-band feature modulation enable us to learn locality-aware representation in spatial and spectral aspects, respectively. Our framework significantly outperforms previous generalizable INRs and validates the usefulness of the locality-aware latents for downstream tasks such as image generation.
|
Locality-Aware Generalizable Implicit Neural Representation
|
[
"Doyup Lee",
"Chiheon Kim",
"Minsu Cho",
"Wook-Shin Han"
] |
Conference
|
poster
|
2310.05624
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=Xq2s5yxzd2
|
@inproceedings{
chen2023multiprompt,
title={Multi-Prompt Alignment for Multi-Source Unsupervised Domain Adaptation},
author={Haoran Chen and Xintong Han and Zuxuan Wu and Yu-Gang Jiang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Xq2s5yxzd2}
}
|
Most existing methods for unsupervised domain adaptation (UDA) rely on a shared network to extract domain-invariant features. However, when facing multiple source domains, optimizing such a network involves updating the parameters of the entire network, making it both computationally expensive and challenging, particularly when coupled with min-max objectives. Inspired by recent advances in prompt learning that adapts high-capacity models for downstream tasks in a computationally economic way, we introduce Multi-Prompt Alignment (MPA), a simple yet efficient framework for multi-source UDA. Given a source and target domain pair, MPA first trains an individual prompt to minimize the domain gap through a contrastive loss. Then, MPA denoises the learned prompts through an auto-encoding process and aligns them by maximizing the agreement of all the reconstructed prompts. Moreover, we show that the resulting subspace acquired from the auto-encoding process can easily generalize to a streamlined set of target domains, making our method more efficient for practical usage. Extensive experiments show that MPA achieves state-of-the-art results on three popular datasets with an impressive average accuracy of 54.1% on DomainNet.
|
Multi-Prompt Alignment for Multi-Source Unsupervised Domain Adaptation
|
[
"Haoran Chen",
"Xintong Han",
"Zuxuan Wu",
"Yu-Gang Jiang"
] |
Conference
|
poster
|
2209.15210
|
[
"https://github.com/haoranchen/multi-prompt-alignment-for-msuda"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XpmJNP8BVA
|
@inproceedings{
seo2023regularized,
title={Regularized Behavior Cloning for Blocking the Leakage of Past Action Information},
author={Seokin Seo and HyeongJoo Hwang and Hongseok Yang and Kee-Eung Kim},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XpmJNP8BVA}
}
|
For partially observable environments, imitation learning with observation histories (ILOH) assumes that control-relevant information is sufficiently captured in the observation histories for imitating the expert actions. In the offline setting where the agent is required to learn to imitate without interaction with the environment, behavior cloning (BC) has been shown to be a simple yet effective method for imitation learning. However, when the information about the actions executed in the past timesteps leaks into the observation histories, ILOH via BC often ends up imitating its own past actions. In this paper, we address this catastrophic failure by proposing a principled regularization for BC, which we name Past Action Leakage Regularization (PALR). The main idea behind our approach is to leverage the classical notion of conditional independence to mitigate the leakage. We compare different instances of our framework with natural choices of conditional independence metric and its estimator. The result of our comparison advocates the use of a particular kernel-based estimator for the conditional independence metric. We conduct an extensive set of experiments on benchmark datasets in order to assess the effectiveness of our regularization method. The experimental results show that our method significantly outperforms prior related approaches, highlighting its potential to successfully imitate expert actions when the past action information leaks into the observation histories.
|
Regularized Behavior Cloning for Blocking the Leakage of Past Action Information
|
[
"Seokin Seo",
"HyeongJoo Hwang",
"Hongseok Yang",
"Kee-Eung Kim"
] |
Conference
|
spotlight
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Xp68yXQiRk
|
@inproceedings{
diakonikolas2023sq,
title={{SQ} Lower Bounds for Non-Gaussian Component Analysis with Weaker Assumptions},
author={Ilias Diakonikolas and Daniel Kane and Lisheng Ren and Yuxin Sun},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Xp68yXQiRk}
}
|
We study the complexity of Non-Gaussian Component Analysis (NGCA) in the Statistical Query (SQ) model.
Prior work developed a methodology to prove SQ lower bounds for NGCA that have been applicable to a wide range of contexts.
In particular, it was known that for any univariate distribution $A$ satisfying certain conditions,
distinguishing between a standard multivariate Gaussian and a distribution that behaves like $A$ in a random hidden direction and like a standard Gaussian in the orthogonal complement, is SQ-hard.
The required conditions were that (1) $A$ matches many low-order moments with a standard Gaussian,
and (2) the chi-squared norm of $A$ with respect to the standard Gaussian is finite.
While the moment-matching condition is clearly necessary for hardness, the chi-squared condition was only required for technical reasons.
In this work, we establish that the latter condition is indeed not necessary.
In particular, we prove near-optimal SQ lower bounds for NGCA under the moment-matching condition only.
|
SQ Lower Bounds for Non-Gaussian Component Analysis with Weaker Assumptions
|
[
"Ilias Diakonikolas",
"Daniel Kane",
"Lisheng Ren",
"Yuxin Sun"
] |
Conference
|
poster
|
2403.04744
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XmpthbaJql
|
@inproceedings{
jiang2023restuning,
title={Res-Tuning: A Flexible and Efficient Tuning Paradigm via Unbinding Tuner from Backbone},
author={Zeyinzi Jiang and Chaojie Mao and Ziyuan Huang and Ao Ma and Yiliang Lv and Yujun Shen and Deli Zhao and Jingren Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XmpthbaJql}
}
|
Parameter-efficient tuning has become a trend in transferring large-scale foundation models to downstream applications. Existing methods typically embed some light-weight tuners into the backbone, where both the design and the learning of the tuners are highly dependent on the base model. This work offers a new tuning paradigm, dubbed Res-Tuning, which intentionally unbinds tuners from the backbone. With both theoretical and empirical evidence, we show that popular tuning approaches have their equivalent counterparts under our unbinding formulation, and hence can be integrated into our framework effortlessly. Thanks to the structural disentanglement, we manage to free the design of tuners from the network architecture, facilitating flexible combination of various tuning strategies. We further propose a memory-efficient variant of Res-Tuning, where the bypass (i.e., formed by a sequence of tuners) is effectively detached from the main branch, such that the gradients are back-propagated only to the tuners but not to the backbone. Such a detachment also allows one-time backbone forward for multi-task inference. Extensive experiments on both discriminative and generative tasks demonstrate the superiority of our method over existing alternatives from the perspectives of efficacy and efficiency. Project page: https://res-tuning.github.io/.
|
Res-Tuning: A Flexible and Efficient Tuning Paradigm via Unbinding Tuner from Backbone
|
[
"Zeyinzi Jiang",
"Chaojie Mao",
"Ziyuan Huang",
"Ao Ma",
"Yiliang Lv",
"Yujun Shen",
"Deli Zhao",
"Jingren Zhou"
] |
Conference
|
poster
|
2310.19859
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XmN7ZNbUAe
|
@inproceedings{
borzunov2023distributed,
title={Distributed Inference and Fine-tuning of Large Language Models Over The Internet},
author={Alexander Borzunov and Max Ryabinin and Artem Chumachenko and Dmitry Baranchuk and Tim Dettmers and Younes Belkada and Pavel Samygin and Colin Raffel},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XmN7ZNbUAe}
}
|
Large language models (LLMs) are useful in many NLP tasks and become more capable with size, with the best open-source models having over 50 billion parameters. However, using these 50B+ models requires high-end hardware, making them inaccessible to most researchers. In this work, we investigate methods for cost-efficient inference and fine-tuning of LLMs, comparing local and distributed strategies. We observe that a large enough model (50B+) can run efficiently even on geodistributed devices in a consumer-grade network. This could allow running LLMs efficiently by pooling together idle compute resources of multiple research groups and volunteers. We address two open problems: (1) how to perform inference and fine-tuning reliably if any device can disconnect abruptly and (2) how to partition LLMs between devices with uneven hardware, joining and leaving at will. In order to do that, we develop special fault-tolerant inference algorithms and load-balancing protocols that automatically assign devices to maximize the total system throughput. We showcase these algorithms in Petals — a decentralized system that runs Llama 2 (70B) and BLOOM (176B) over the Internet up to $10\times$ faster than offloading for interactive generation. We evaluate the performance of our system in simulated conditions and a real-world setup spanning two continents.
|
Distributed Inference and Fine-tuning of Large Language Models Over The Internet
|
[
"Alexander Borzunov",
"Max Ryabinin",
"Artem Chumachenko",
"Dmitry Baranchuk",
"Tim Dettmers",
"Younes Belkada",
"Pavel Samygin",
"Colin Raffel"
] |
Conference
|
poster
|
2312.08361
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XlvsieCnAX
|
@inproceedings{
chanpuriya2023exact,
title={Exact Representation of Sparse Networks with Symmetric Nonnegative Embeddings},
author={Sudhanshu Chanpuriya and Ryan A. Rossi and Anup Rao and Tung Mai and Nedim Lipka and Zhao Song and Cameron N Musco},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XlvsieCnAX}
}
|
Graph models based on factorization of the adjacency matrix often fail to capture network structures related to links between dissimilar nodes (heterophily). We introduce a novel graph factorization model that leverages two nonnegative vectors per node to interpretably account for links between both similar and dissimilar nodes. We prove that our model can exactly represent any graph with low *arboricity*, a property that many real-world networks satisfy; our proof also applies to related models but has much greater scope than the closest prior bound, which is based on low *max degree*. Our factorization also has compelling properties besides expressiveness: due to its symmetric structure and nonnegativity, fitting the model inherently finds node communities, and the model's link predictions can be interpreted in terms of these communities. In experiments on real-world networks, we demonstrate our factorization's effectiveness on a variety of tasks, including community detection and link prediction.
|
Exact Representation of Sparse Networks with Symmetric Nonnegative Embeddings
|
[
"Sudhanshu Chanpuriya",
"Ryan A. Rossi",
"Anup Rao",
"Tung Mai",
"Nedim Lipka",
"Zhao Song",
"Cameron N Musco"
] |
Conference
|
poster
|
2111.03030
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XkcufOcgUc
|
@inproceedings{
zheng2023structurefree,
title={Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data},
author={Xin Zheng and Miao Zhang and Chunyang Chen and Quoc Viet Hung Nguyen and Xingquan Zhu and Shirui Pan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XkcufOcgUc}
}
|
Graph condensation, which reduces the size of a large-scale graph by synthesizing a small-scale condensed graph as its substitution, has immediate benefits for various graph learning tasks.
However, existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph, and overlook critical issues in effectiveness and generalization ability.
In this paper, we advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set without explicit graph structures, i.e., graph-free data.
Our idea is to implicitly encode topology structure information into the node attributes in the synthesized graph-free data, whose topology is reduced to an identity matrix.
Specifically, SFGC contains two collaborative components:
(1) a training trajectory meta-matching scheme for effectively synthesizing small-scale graph-free data;
(2) a graph neural feature score metric for dynamically evaluating the quality of the condensed data.
Through training trajectory meta-matching, SFGC aligns the long-term GNN learning behaviors between the large-scale graph and the condensed small-scale graph-free data, ensuring comprehensive and compact transfer of informative knowledge to the graph-free data.
Afterward, the underlying condensed graph-free data would be dynamically evaluated with the graph neural feature score, which is a closed-form metric for ensuring the excellent expressiveness of the condensed graph-free data.
Extensive experiments verify the superiority of SFGC across different condensation ratios.
|
Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data
|
[
"Xin Zheng",
"Miao Zhang",
"Chunyang Chen",
"Quoc Viet Hung Nguyen",
"Xingquan Zhu",
"Shirui Pan"
] |
Conference
|
spotlight
|
2306.02664
|
[
"https://github.com/amanda-zheng/sfgc"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XjOj3ZmWEl
|
@inproceedings{
norelli2023asif,
title={{ASIF}: Coupled Data Turns Unimodal Models to Multimodal without Training},
author={Antonio Norelli and Marco Fumero and Valentino Maiorca and Luca Moschella and Emanuele Rodol{\`a} and Francesco Locatello},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XjOj3ZmWEl}
}
|
CLIP proved that aligning visual and language spaces is key to solving many vision tasks without explicit training, but required to train image and text encoders from scratch on a huge dataset. LiT improved this by only training the text encoder and using a pre-trained vision network. In this paper, we show that a common space can be created without any training at all, using single-domain encoders (trained with or without supervision) and a much smaller amount of image-text pairs. Furthermore, our model has unique properties. Most notably, deploying a new version with updated training samples can be done in a matter of seconds. Additionally, the representations in the common space are easily interpretable as every dimension corresponds to the similarity of the input to a unique entry in the multimodal dataset. Experiments on standard zero-shot visual benchmarks demonstrate the typical transfer ability of image-text models. Overall, our method represents a simple yet surprisingly strong baseline for foundation multi-modal models, raising important questions on their data efficiency and on the role of retrieval in machine learning.
|
ASIF: Coupled Data Turns Unimodal Models to Multimodal without Training
|
[
"Antonio Norelli",
"Marco Fumero",
"Valentino Maiorca",
"Luca Moschella",
"Emanuele Rodolà",
"Francesco Locatello"
] |
Conference
|
poster
|
2210.01738
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=Xj4LJiXvlX
|
@inproceedings{
dai2023batch,
title={Batch Bayesian Optimization For Replicable Experimental Design},
author={Zhongxiang Dai and Quoc Phong Nguyen and Sebastian Shenghong Tay and Daisuke Urano and Richalynn Leong and Bryan Kian Hsiang Low and Patrick Jaillet},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Xj4LJiXvlX}
}
|
Many real-world experimental design problems (a) evaluate multiple experimental conditions in parallel and (b) replicate each condition multiple times due to large and heteroscedastic observation noise. Given a fixed total budget, this naturally induces a trade-off between evaluating more unique conditions while replicating each of them fewer times vs. evaluating fewer unique conditions and replicating each more times. Moreover, in these problems, practitioners may be risk-averse and hence prefer an input with both good average performance and small variability. To tackle both challenges, we propose the Batch Thompson Sampling for Replicable Experimental Design (BTS-RED) framework, which encompasses three algorithms. Our BTS-RED-Known and BTS-RED-Unknown algorithms, for, respectively, known and unknown noise variance, choose the number of replications adaptively rather than deterministically such that an input with a larger noise variance is replicated more times. As a result, despite the noise heteroscedasticity, both algorithms enjoy a theoretical guarantee and are asymptotically no-regret. Our Mean-Var-BTS-RED algorithm aims at risk-averse optimization and is also asymptotically no-regret. We also show the effectiveness of our algorithms in two practical real-world applications: precision agriculture and AutoML.
|
Batch Bayesian Optimization For Replicable Experimental Design
|
[
"Zhongxiang Dai",
"Quoc Phong Nguyen",
"Sebastian Shenghong Tay",
"Daisuke Urano",
"Richalynn Leong",
"Bryan Kian Hsiang Low",
"Patrick Jaillet"
] |
Conference
|
poster
|
2311.01195
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XhNlBvb4XV
|
@inproceedings{
wang2023deep,
title={Deep Insights into Noisy Pseudo Labeling on Graph Data},
author={Botao WANG and Jia Li and Yang Liu and Jiashun Cheng and Yu Rong and Wenjia Wang and Fugee Tsung},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XhNlBvb4XV}
}
|
Pseudo labeling (PL) is a widely applied strategy to enlarge the labeled dataset by self-annotating the potential samples during the training process. Several works have shown that it can improve the graph learning model performance in general. However, we notice that incorrect labels can be fatal to the graph training process. Inappropriate PL may result in performance degradation, especially on graph data where the noise can propagate. Surprisingly, the corresponding error is seldom theoretically analyzed in the literature. In this paper, we aim to give deep insights into PL on graph learning models. We first present the error analysis of the PL strategy by showing that the error is bounded by the confidence of the PL threshold and the consistency of multi-view prediction. Then, we theoretically illustrate the effect of PL on convergence properties. Based on the analysis, we propose a cautious pseudo labeling methodology in which we pseudo label the samples with the highest confidence and multi-view consistency. Finally, extensive experiments demonstrate that the proposed strategy improves the graph learning process and outperforms other PL strategies on link prediction and node classification tasks.
|
Deep Insights into Noisy Pseudo Labeling on Graph Data
|
[
"Botao WANG",
"Jia Li",
"Yang Liu",
"Jiashun Cheng",
"Yu Rong",
"Wenjia Wang",
"Fugee Tsung"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=XfYpIaKDb6
|
@inproceedings{
eldowa2023on,
title={On the Minimax Regret for Online Learning with Feedback Graphs},
author={Khaled Eldowa and Emmanuel Esposito and Tommaso Cesari and Nicol{\`o} Cesa-Bianchi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XfYpIaKDb6}
}
|
In this work, we improve on the upper and lower bounds for the regret of online learning with strongly observable undirected feedback graphs. The best known upper bound for this problem is $\mathcal{O}\bigl(\sqrt{\alpha T\ln K}\bigr)$, where $K$ is the number of actions, $\alpha$ is the independence number of the graph, and $T$ is the time horizon. The $\sqrt{\ln K}$ factor is known to be necessary when $\alpha = 1$ (the experts case). On the other hand, when $\alpha = K$ (the bandits case), the minimax rate is known to be $\Theta\bigl(\sqrt{KT}\bigr)$, and a lower bound $\Omega\bigl(\sqrt{\alpha T}\bigr)$ is known to hold for any $\alpha$. Our improved upper bound $\mathcal{O}\bigl(\sqrt{\alpha T(1+\ln(K/\alpha))}\bigr)$ holds for any $\alpha$ and matches the lower bounds for bandits and experts, while interpolating intermediate cases. To prove this result, we use FTRL with $q$-Tsallis entropy for a carefully chosen value of $q \in [1/2, 1)$ that varies with $\alpha$. The analysis of this algorithm requires a new bound on the variance term in the regret. We also show how to extend our techniques to time-varying graphs, without requiring prior knowledge of their independence numbers. Our upper bound is complemented by an improved $\Omega\bigl(\sqrt{\alpha T(\ln K)/(\ln\alpha)}\bigr)$ lower bound for all $\alpha > 1$, whose analysis relies on a novel reduction to multitask learning. This shows that a logarithmic factor is necessary as soon as $\alpha < K$.
|
On the Minimax Regret for Online Learning with Feedback Graphs
|
[
"Khaled Eldowa",
"Emmanuel Esposito",
"Tommaso Cesari",
"Nicolò Cesa-Bianchi"
] |
Conference
|
spotlight
|
2305.15383
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XfKnoW4Zef
|
@inproceedings{
pang2023towards,
title={Towards Robust and Expressive Whole-body Human Pose and Shape Estimation},
author={Hui En Pang and Zhongang Cai and Lei Yang and Qingyi Tao and Zhonghua Wu and Tianwei Zhang and Ziwei Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XfKnoW4Zef}
}
|
Whole-body pose and shape estimation aims to jointly predict different behaviors (e.g., pose, hand gesture, facial expression) of the entire human body from a monocular image. Existing methods often exhibit suboptimal performance due to the complexity of in-the-wild scenarios. We argue that the prediction accuracy of these models is significantly affected by the quality of the _bounding box_, e.g., scale, alignment. The natural discrepancy between the ideal bounding box annotations and model detection results is particularly detrimental to the performance of whole-body pose and shape estimation.
In this paper, we propose a novel framework to enhance the robustness of whole-body pose and shape estimation. Our framework incorporates three new modules to address the above challenges from three perspectives: (1) a **Localization Module** enhances the model's awareness of the subject's location and semantics within the image space; (2) a **Contrastive Feature Extraction Module** encourages the model to be invariant to robust augmentations by incorporating a contrastive loss and positive samples; (3) a **Pixel Alignment Module** ensures the reprojected mesh from the predicted camera and body model parameters are more accurate and pixel-aligned. We perform comprehensive experiments to demonstrate the effectiveness of our proposed framework on body, hands, face and whole-body benchmarks.
|
Towards Robust and Expressive Whole-body Human Pose and Shape Estimation
|
[
"Hui En Pang",
"Zhongang Cai",
"Lei Yang",
"Qingyi Tao",
"Zhonghua Wu",
"Tianwei Zhang",
"Ziwei Liu"
] |
Conference
|
poster
|
2312.08730
|
[
"https://github.com/robosmplx/robosmplx"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XetXfkYZ6i
|
@inproceedings{
venkata2023deep,
title={Deep Recurrent Optimal Stopping},
author={NIRANJAN DAMERA VENKATA and Chiranjib Bhattacharyya},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XetXfkYZ6i}
}
|
Deep neural networks (DNNs) have recently emerged as a powerful paradigm for solving Markovian optimal stopping problems. However, a ready extension of DNN-based methods to non-Markovian settings requires significant state and parameter space expansion, manifesting the curse of dimensionality. Further, efficient state-space transformations permitting Markovian approximations, such as those afforded by recurrent neural networks (RNNs), are either structurally infeasible or are confounded by the curse of non-Markovianity. Considering these issues, we introduce, for the first time, an optimal stopping policy gradient algorithm (OSPG) that can leverage RNNs effectively in non-Markovian settings by implicitly optimizing value functions without recursion, mitigating the curse of non-Markovianity. The OSPG algorithm is derived from an inference procedure on a novel Bayesian network representation of discrete-time non-Markovian optimal stopping trajectories and, as a consequence, yields an offline policy gradient algorithm that eliminates expensive Monte Carlo policy rollouts.
|
Deep Recurrent Optimal Stopping
|
[
"NIRANJAN DAMERA VENKATA",
"Chiranjib Bhattacharyya"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=XeMryhpniy
|
@inproceedings{
chen2023hierarchical,
title={Hierarchical Integration Diffusion Model for Realistic Image Deblurring},
author={Zheng Chen and Yulun Zhang and Ding Liu and Bin Xia and Jinjin Gu and Linghe Kong and Xin Yuan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XeMryhpniy}
}
|
Diffusion models (DMs) have recently been introduced in image deblurring and exhibited promising performance, particularly in terms of details reconstruction. However, the diffusion model requires a large number of inference iterations to recover the clean image from pure Gaussian noise, which consumes massive computational resources. Moreover, the distribution synthesized by the diffusion model is often misaligned with the target results, leading to restrictions in distortion-based metrics. To address the above issues, we propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring. Specifically, we perform the DM in a highly compacted latent space to generate the prior feature for the deblurring process. The deblurring process is implemented by a regression-based method to obtain better distortion accuracy. Meanwhile, the highly compact latent space ensures the efficiency of the DM. Furthermore, we design the hierarchical integration module to fuse the prior into the regression-based model from multiple scales, enabling better generalization in complex blurry scenarios. Comprehensive experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods. Code and trained models are available at https://github.com/zhengchen1999/HI-Diff.
|
Hierarchical Integration Diffusion Model for Realistic Image Deblurring
|
[
"Zheng Chen",
"Yulun Zhang",
"Ding Liu",
"Bin Xia",
"Jinjin Gu",
"Linghe Kong",
"Xin Yuan"
] |
Conference
|
spotlight
|
2305.12966
|
[
"https://github.com/zhengchen1999/hi-diff"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XddoUFpjkP
|
@inproceedings{
li2023bayesian,
title={Bayesian Learning via Q-Exponential Process},
author={Shuyi Li and Michael O'Connor and Shiwei Lan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XddoUFpjkP}
}
|
Regularization is one of the most fundamental topics in optimization, statistics and machine learning. To get sparsity in estimating a parameter $u\in\mathbb{R}^d$, an $\ell_q$ penalty term, $\Vert u\Vert_q$, is usually added to the objective function. What is the probabilistic distribution corresponding to such an $\ell_q$ penalty? What is the \emph{correct} stochastic process corresponding to $\Vert u\Vert_q$ when we model functions $u\in L^q$? This is important for statistically modeling high-dimensional objects such as images, with a penalty to preserve certain properties, e.g. edges in the image.
In this work, we generalize the $q$-exponential distribution (with density proportional to) $\exp{(- \frac{1}{2}|u|^q)}$ to a stochastic process named \emph{$Q$-exponential (Q-EP) process} that corresponds to the $L_q$ regularization of functions. The key step is to specify consistent multivariate $q$-exponential distributions by choosing from a large family of elliptic contour distributions. The work is closely related to Besov process which is usually defined in terms of series. Q-EP can be regarded as a definition of Besov process with explicit probabilistic formulation, direct control on the correlation strength, and tractable prediction formula. From the Bayesian perspective, Q-EP provides a flexible prior on functions with sharper penalty ($q<2$) than the commonly used Gaussian process (GP, $q=2$).
We compare GP, Besov and Q-EP in modeling functional data, reconstructing images and solving inverse problems and demonstrate the advantage of our proposed methodology.
|
Bayesian Learning via Q-Exponential Process
|
[
"Shuyi Li",
"Michael O'Connor",
"Shiwei Lan"
] |
Conference
|
poster
|
2210.07987
|
[
"https://github.com/lanzithinking/q-exp"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XcQzXeF7fX
|
@inproceedings{
pang2023on,
title={On Calibrating Diffusion Probabilistic Models},
author={Tianyu Pang and Cheng Lu and Chao Du and Min Lin and Shuicheng YAN and Zhijie Deng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XcQzXeF7fX}
}
|
Recently, diffusion probabilistic models (DPMs) have achieved promising results in diverse generative tasks. A typical DPM framework includes a forward process that gradually diffuses the data distribution and a reverse process that recovers the data distribution from time-dependent data scores. In this work, we observe that the stochastic reverse process of data scores is a martingale, from which concentration bounds and the optional stopping theorem for data scores can be derived. Then, we discover a simple way for calibrating an arbitrary pretrained DPM, with which the score matching loss can be reduced and the lower bounds of model likelihood can consequently be increased. We provide general calibration guidelines under various model parametrizations. Our calibration method is performed only once and the resulting models can be used repeatedly for sampling. We conduct experiments on multiple datasets to empirically validate our proposal. Our code is available at https://github.com/thudzj/Calibrated-DPMs.
|
On Calibrating Diffusion Probabilistic Models
|
[
"Tianyu Pang",
"Cheng Lu",
"Chao Du",
"Min Lin",
"Shuicheng YAN",
"Zhijie Deng"
] |
Conference
|
poster
|
2302.10688
|
[
"https://github.com/thudzj/calibrated-dpms"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XbVnNXaIQY
|
@inproceedings{
tu2023holistic,
title={Holistic Transfer: Towards Non-Disruptive Fine-Tuning with Partial Target Data},
author={Cheng-Hao Tu and Hong-You Chen and Zheda Mai and Jike Zhong and Vardaan Pahuja and Tanya Berger-Wolf and Song Gao and Charles Stewart and Yu Su and Wei-Lun Chao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XbVnNXaIQY}
}
|
We propose a learning problem involving adapting a pre-trained source model to the target domain for classifying all classes that appeared in the source data, using target data that covers only a partial label space. This problem is practical, as it is unrealistic for the target end-users to collect data for all classes prior to adaptation. However, it has received limited attention in the literature. To shed light on this issue, we construct benchmark datasets and conduct extensive experiments to uncover the inherent challenges. We found a dilemma: on the one hand, adapting to the new target domain is important for achieving better performance; on the other hand, we observe that preserving the classification accuracy of classes missing in the target adaptation data is highly challenging, let alone improving it. To tackle this, we identify two key directions: 1) disentangling domain gradients from classification gradients, and 2) preserving class relationships. We present several effective solutions that maintain the accuracy of the missing classes and enhance the overall performance, establishing solid baselines for holistic transfer of pre-trained models with partial target data.
|
Holistic Transfer: Towards Non-Disruptive Fine-Tuning with Partial Target Data
|
[
"Cheng-Hao Tu",
"Hong-You Chen",
"Zheda Mai",
"Jike Zhong",
"Vardaan Pahuja",
"Tanya Berger-Wolf",
"Song Gao",
"Charles Stewart",
"Yu Su",
"Wei-Lun Chao"
] |
Conference
|
poster
|
2311.01420
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XbInLmYLDr
|
@inproceedings{
vora2023divinet,
title={DiViNeT: 3D Reconstruction from Disparate Views using Neural Template Regularization},
author={Aditya Vora and Akshay Gadi Patil and Hao Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XbInLmYLDr}
}
|
We present a volume rendering-based neural surface reconstruction method that takes as few as three disparate RGB images as input. Our key idea is to regularize the reconstruction, which is severely ill-posed and leaves significant gaps between the sparse views, by learning a set of neural templates that act as surface priors. Our method, coined DiViNet, operates in two stages. The first stage learns the templates, in the form of 3D Gaussian functions, across different scenes, without 3D supervision. In the reconstruction stage, our predicted templates serve as anchors to help “stitch” the surfaces over sparse regions. We demonstrate that our approach is not only able to complete the surface geometry but also reconstructs surface details to a reasonable extent from few disparate input views. On the DTU and BlendedMVS datasets, our approach achieves the best reconstruction quality among existing methods in the presence of such sparse views and performs on par with, if not better than, competing methods when dense views are employed as inputs.
|
DiViNeT: 3D Reconstruction from Disparate Views using Neural Template Regularization
|
[
"Aditya Vora",
"Akshay Gadi Patil",
"Hao Zhang"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Xazhn0JoNx
|
@inproceedings{
choe2023making,
title={Making Scalable Meta Learning Practical},
author={Sang Keun Choe and Sanket Vaibhav Mehta and Hwijeen Ahn and Willie Neiswanger and Pengtao Xie and Emma Strubell and Eric Xing},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Xazhn0JoNx}
}
|
Despite its flexibility to learn diverse inductive biases in machine learning programs, meta learning (i.e.,\ learning to learn) has long been recognized to suffer from poor scalability due to its tremendous compute/memory costs, training instability, and a lack of efficient distributed training support. In this work, we focus on making scalable meta learning practical by introducing SAMA, which combines advances in both implicit differentiation algorithms and systems. Specifically, SAMA is designed to flexibly support a broad range of adaptive optimizers in the base level of meta learning programs, while reducing computational burden by avoiding explicit computation of second-order gradient information, and exploiting efficient distributed training techniques implemented for first-order gradients. Evaluated on multiple large-scale meta learning benchmarks, SAMA showcases up to 1.7/4.8x increase in throughput and 2.0/3.8x decrease in memory consumption respectively on single-/multi-GPU setups compared to other baseline meta learning algorithms. Furthermore, we show that SAMA-based data optimization leads to consistent improvements in text classification accuracy with BERT and RoBERTa large language models, and achieves state-of-the-art results in both small- and large-scale data pruning on image classification tasks, demonstrating the practical applicability of scalable meta learning across language and vision domains.
|
Making Scalable Meta Learning Practical
|
[
"Sang Keun Choe",
"Sanket Vaibhav Mehta",
"Hwijeen Ahn",
"Willie Neiswanger",
"Pengtao Xie",
"Emma Strubell",
"Eric Xing"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=Xasl21tSOf
|
@inproceedings{
yu2023provable,
title={Provable Training for Graph Contrastive Learning},
author={Yue Yu and Xiao Wang and Mengmei Zhang and Nian Liu and Chuan Shi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Xasl21tSOf}
}
|
Graph Contrastive Learning (GCL) has emerged as a popular training approach for learning node embeddings from augmented graphs without labels. Although the key principle of maximizing the similarity between positive node pairs while minimizing it between negative node pairs is well established, some fundamental problems are still unclear. Considering the complex graph structure, are some nodes consistently well-trained and following this principle even with different graph augmentations? Or are there some nodes more likely to be untrained across graph augmentations and to violate the principle? How can we distinguish these nodes and further guide the training of GCL? To answer these questions, we first present experimental evidence showing that the training of GCL is indeed imbalanced across all nodes. To address this problem, we propose the metric "node compactness", which is a lower bound on how well a node follows the GCL principle over the range of augmentations. We further derive the form of node compactness theoretically through bound propagation, which can be integrated into binary cross-entropy as a regularization. To this end, we propose PrOvable Training (POT) for GCL, which regularizes the training of GCL to encode node embeddings that follow the GCL principle better. Through extensive experiments on various benchmarks, POT consistently improves existing GCL approaches, serving as a friendly plugin.
|
Provable Training for Graph Contrastive Learning
|
[
"Yue Yu",
"Xiao Wang",
"Mengmei Zhang",
"Nian Liu",
"Chuan Shi"
] |
Conference
|
spotlight
|
2309.13944
|
[
"https://github.com/voidharuhi/pot-gcl"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XY6BnwIh4q
|
@inproceedings{
shin2023binary,
title={Binary Radiance Fields},
author={Seungjoo Shin and Jaesik Park},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XY6BnwIh4q}
}
|
In this paper, we propose \textit{binary radiance fields} (BiRF), a storage-efficient radiance field representation employing binary feature encoding in a format of either $+1$ or $-1$. This binarization strategy lets us represent the feature grid with highly compact feature encoding and a dramatic reduction in storage size. Furthermore, our 2D-3D hybrid feature grid design enhances the compactness of feature encoding, as the 3D grid captures the main components while the 2D grids capture details. In our experiments, the binary radiance field representation successfully outperforms state-of-the-art (SOTA) storage-efficient radiance field models in reconstruction performance with lower storage allocation. In particular, our model achieves impressive results in static scene reconstruction, with PSNRs of 32.03 dB on Synthetic-NeRF scenes, 34.48 dB on Synthetic-NSVF scenes, and 28.20 dB on Tanks and Temples scenes, while utilizing only 0.5 MB of storage space. We hope the proposed binary radiance field representation will make radiance fields more accessible without a storage bottleneck.
|
Binary Radiance Fields
|
[
"Seungjoo Shin",
"Jaesik Park"
] |
Conference
|
poster
|
2306.07581
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XXagS1RQH0
|
@inproceedings{
wang2023learningtorank,
title={Learning-to-Rank Meets Language: Boosting Language-Driven Ordering Alignment for Ordinal Classification},
author={Rui Wang and Pei Pei Li and Huaibo Huang and Chunshui Cao and Ran He and Zhaofeng He},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XXagS1RQH0}
}
|
We present a novel language-driven ordering alignment method for ordinal classification. The labels in ordinal classification contain additional ordering relations, making them prone to overfitting when relying solely on training data. Recent developments in pre-trained vision-language models inspire us to leverage the rich ordinal priors in human language by converting the original task into a vision-language alignment task. Consequently, we propose L2RCLIP, which fully utilizes the language priors from two perspectives. First, we introduce a complementary prompt tuning technique called RankFormer, designed to enhance the ordering relation of original rank prompts. It employs token-level attention with residual-style prompt blending in the word embedding space. Second, to further incorporate language priors, we revisit the approximate bound optimization of vanilla cross-entropy loss and restructure it within the cross-modal embedding space. Consequently, we propose a cross-modal ordinal pairwise loss to refine the CLIP feature space, where texts and images maintain both semantic alignment and ordering alignment. Extensive experiments on three ordinal classification tasks, including facial age estimation, historical color image (HCI) classification, and aesthetic assessment demonstrate its promising performance.
|
Learning-to-Rank Meets Language: Boosting Language-Driven Ordering Alignment for Ordinal Classification
|
[
"Rui Wang",
"Pei Pei Li",
"Huaibo Huang",
"Chunshui Cao",
"Ran He",
"Zhaofeng He"
] |
Conference
|
poster
|
2306.13856
|
[
"https://github.com/raywang335/l2rclip"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XXPzBhOs4f
|
@inproceedings{
boenisch2023have,
title={Have it your way: Individualized Privacy Assignment for {DP}-{SGD}},
author={Franziska Boenisch and Christopher M{\"u}hl and Adam Dziedzic and Roy Rinberg and Nicolas Papernot},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XXPzBhOs4f}
}
|
When training a machine learning model with differential privacy, one sets a privacy budget. This uniform budget represents an overall maximal privacy violation that any user is willing to face by contributing their data to the training set. We argue that this approach is limited because different users may have different privacy expectations. Thus, setting a uniform privacy budget across all points may be overly conservative for some users or, conversely, not sufficiently protective for others. In this paper, we capture these preferences through individualized privacy budgets. To demonstrate their practicality, we introduce a variant of Differentially Private Stochastic Gradient Descent (DP-SGD) which supports such individualized budgets. DP-SGD is the canonical approach to training models with differential privacy. We modify its data sampling and gradient noising mechanisms to arrive at our approach, which we call Individualized DP-SGD (IDP-SGD). Because IDP-SGD provides privacy guarantees tailored to the preferences of individual users and their data points, we empirically find it to improve privacy-utility trade-offs.
|
Have it your way: Individualized Privacy Assignment for DP-SGD
|
[
"Franziska Boenisch",
"Christopher Mühl",
"Adam Dziedzic",
"Roy Rinberg",
"Nicolas Papernot"
] |
Conference
|
poster
|
2303.17046
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XWYv4BNShP
|
@inproceedings{
maalouf2023on,
title={On the Size and Approximation Error of Distilled Datasets},
author={Alaa Maalouf and Murad Tukan and Noel Loo and Ramin Hasani and Mathias Lechner and Daniela Rus},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XWYv4BNShP}
}
|
Dataset Distillation is the task of synthesizing small datasets from large ones while still retaining comparable predictive accuracy to the original uncompressed dataset. Despite significant empirical progress in recent years, there is little understanding of the theoretical limitations/guarantees of dataset distillation, specifically, what excess risk is achieved by distillation compared to the original dataset, and how large are distilled datasets? In this work, we take a theoretical view on kernel ridge regression (KRR) based methods of dataset distillation such as Kernel Inducing Points. By transforming ridge regression in random Fourier features (RFF) space, we provide the first proof of the existence of small (size) distilled datasets and their corresponding excess risk for shift-invariant kernels. We prove that a small set of instances exists in the original input space such that its solution in the RFF space coincides with the solution of the original data. We further show that a KRR solution can be generated using this distilled set of instances which gives an approximation towards the KRR solution optimized on the full input data. The size of this set is linear in the dimension of the RFF space of the input set or alternatively near linear in the number of effective degrees of freedom, which is a function of the kernel, number of data points, and the regularization parameter $\lambda$. The error bound of this distilled set is also a function of $\lambda$. We verify our bounds analytically and empirically.
|
On the Size and Approximation Error of Distilled Datasets
|
[
"Alaa Maalouf",
"Murad Tukan",
"Noel Loo",
"Ramin Hasani",
"Mathias Lechner",
"Daniela Rus"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=XUu2GloTXb
|
@inproceedings{
lee2023implicit,
title={Implicit Contrastive Representation Learning with Guided Stop-gradient},
author={Byeongchan Lee and Sehyun Lee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XUu2GloTXb}
}
|
In self-supervised representation learning, Siamese networks are a natural architecture for learning transformation invariance by bringing representations of positive pairs closer together. However, they are prone to collapsing into a degenerate solution. To address this issue, contrastive learning uses a contrastive loss to prevent collapse by pushing representations of negative pairs away from each other. But algorithms with negative sampling are known not to be robust to a reduction in the number of negative samples. Hence, there are algorithms that do not use negative pairs. Many positive-only algorithms adopt an asymmetric network architecture, consisting of source and target encoders, as a key factor in coping with collapse. By exploiting the asymmetric architecture, we introduce a methodology to implicitly incorporate the idea of contrastive learning. As its implementation, we present a novel method, guided stop-gradient. We apply our method to the benchmark algorithms SimSiam and BYOL and show that it stabilizes training and boosts performance. We also show that the algorithms with our method work well with small batch sizes and do not collapse even when there is no predictor. The code is available in the supplementary material.
|
Implicit Contrastive Representation Learning with Guided Stop-gradient
|
[
"Byeongchan Lee",
"Sehyun Lee"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
null |
https://openreview.net/forum?id=XSCYxDp3yE
|
@inproceedings{
nguyen2023a,
title={A Bayesian Approach To Analysing Training Data Attribution In Deep Learning},
author={Elisa Nguyen and Minjoon Seo and Seong Joon Oh},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XSCYxDp3yE}
}
|
Training data attribution (TDA) techniques find influential training data for the model's prediction on the test data of interest. They approximate the impact of down- or up-weighting a particular training sample. While conceptually useful, they are hardly applicable to deep models in practice, particularly because of their sensitivity to different model initialisation. In this paper, we introduce a Bayesian perspective on the TDA task, where the learned model is treated as a Bayesian posterior and the TDA estimates as random variables. From this novel viewpoint, we observe that the influence of an individual training sample is often overshadowed by the noise stemming from model initialisation and SGD batch composition. Based on this observation, we argue that TDA can only be reliably used for explaining deep model predictions that are consistently influenced by certain training data, independent of other noise factors. Our experiments demonstrate the rarity of such noise-independent training-test data pairs but confirm their existence. We recommend that future researchers and practitioners trust TDA estimates only in such cases. Further, we find a disagreement between ground truth and estimated TDA distributions and encourage future work to study this gap. Code is provided at https://github.com/ElisaNguyen/bayesian-tda.
|
A Bayesian Approach To Analysing Training Data Attribution In Deep Learning
|
[
"Elisa Nguyen",
"Minjoon Seo",
"Seong Joon Oh"
] |
Conference
|
poster
|
2305.19765
|
[
"https://github.com/elisanguyen/bayesian-tda"
] |
https://huggingface.co/papers/2305.19765
| 2 | 0 | 0 | 3 | 1 |
[] |
[] |
[] |
null |
https://openreview.net/forum?id=XRy4YQYLe0
|
@inproceedings{
wang2023aleatoric,
title={Aleatoric and Epistemic Discrimination: Fundamental Limits of Fairness Interventions},
author={Hao Wang and Luxi He and Rui Gao and Flavio Calmon},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XRy4YQYLe0}
}
|
Machine learning (ML) models can underperform on certain population groups due to choices made during model development and bias inherent in the data. We categorize sources of discrimination in the ML pipeline into two classes: aleatoric discrimination, which is inherent in the data distribution, and epistemic discrimination, which is due to decisions made during model development. We quantify aleatoric discrimination by determining the performance limits of a model under fairness constraints, assuming perfect knowledge of the data distribution. We demonstrate how to characterize aleatoric discrimination by applying Blackwell's results on comparing statistical experiments. We then quantify epistemic discrimination as the gap between a model's accuracy when fairness constraints are applied and the limit posed by aleatoric discrimination. We apply this approach to benchmark existing fairness interventions and investigate fairness risks in data with missing values. Our results indicate that state-of-the-art fairness interventions are effective at removing epistemic discrimination on standard (overused) tabular datasets. However, when data has missing values, there is still significant room for improvement in handling aleatoric discrimination.
|
Aleatoric and Epistemic Discrimination: Fundamental Limits of Fairness Interventions
|
[
"Hao Wang",
"Luxi He",
"Rui Gao",
"Flavio Calmon"
] |
Conference
|
spotlight
|
2301.11781
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XRTxIBs2eu
|
@inproceedings{
pilault2023blockstate,
title={Block-State Transformers},
author={Jonathan Pilault and Mahan Fathi and Orhan Firat and Christopher Pal and Pierre-Luc Bacon and Ross Goroshin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XRTxIBs2eu}
}
|
State space models (SSMs) have shown impressive results on tasks that require modeling long-range dependencies and efficiently scale to long sequences owing to their subquadratic runtime complexity.
Originally designed for continuous signals, SSMs have shown superior performance on a plethora of tasks, in vision and audio; however, SSMs still lag Transformer performance in Language Modeling tasks.
In this work, we propose a hybrid layer named Block-State Transformer (*BST*), that internally combines an SSM sublayer for long-range contextualization, and a Block Transformer sublayer for short-term representation of sequences.
We study three different, and completely *parallelizable*, variants that integrate SSMs and block-wise attention.
We show that our model outperforms similar Transformer-based architectures on language modeling perplexity and generalizes to longer sequences.
In addition, the Block-State Transformer demonstrates a more than *tenfold* increase in speed at the layer level compared to the Block-Recurrent Transformer when model parallelization is employed.
|
Block-State Transformers
|
[
"Jonathan Pilault",
"Mahan Fathi",
"Orhan Firat",
"Christopher Pal",
"Pierre-Luc Bacon",
"Ross Goroshin"
] |
Conference
|
poster
|
2306.09539
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XPWEtXzlLy
|
@inproceedings{
liu2023mirror,
title={Mirror Diffusion Models for Constrained and Watermarked Generation},
author={Guan-Horng Liu and Tianrong Chen and Evangelos Theodorou and Molei Tao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XPWEtXzlLy}
}
|
Modern successes of diffusion models in learning complex, high-dimensional data distributions are attributed, in part, to their capability to construct diffusion processes with analytic transition kernels and score functions. The tractability results in a simulation-free framework with stable regression losses, from which reversed, generative processes can be learned at scale. However, when data is confined to a constrained set as opposed to a standard Euclidean space, these desirable characteristics appear to be lost based on prior attempts. In this work, we propose Mirror Diffusion Models (MDM), a new class of diffusion models that generate data on convex constrained sets without losing any tractability. This is achieved by learning diffusion processes in a dual space constructed from a mirror map, which, crucially, is a standard Euclidean space. We derive efficient computation of mirror maps for popular constrained sets, such as simplices and $\ell_2$-balls, showing significantly improved performance of MDM over existing methods. For safety and privacy purposes, we also explore constrained sets as a new mechanism to embed invisible but quantitative information (i.e., watermarks) in generated data, for which MDM serves as a compelling approach. Our work brings new algorithmic opportunities for learning tractable diffusion on complex domains.
|
Mirror Diffusion Models for Constrained and Watermarked Generation
|
[
"Guan-Horng Liu",
"Tianrong Chen",
"Evangelos Theodorou",
"Molei Tao"
] |
Conference
|
poster
|
2310.01236
|
[
"https://github.com/ghliu/mdm"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XOotfgPiUF
|
@inproceedings{
yang2023freemask,
title={FreeMask: Synthetic Images with Dense Annotations Make Stronger Segmentation Models},
author={Lihe Yang and Xiaogang Xu and Bingyi Kang and Yinghuan Shi and Hengshuang Zhao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XOotfgPiUF}
}
|
Semantic segmentation has witnessed tremendous progress due to the proposal of various advanced network architectures. However, these architectures are extremely hungry for delicate annotations to train on, and their acquisition is laborious and unaffordable. Therefore, we present FreeMask in this work, which resorts to synthetic images from generative models to ease the burden of both data collection and annotation procedures. Concretely, we first synthesize abundant training images conditioned on the semantic masks provided by realistic datasets. This yields extra well-aligned image-mask training pairs for semantic segmentation models. We surprisingly observe that, solely trained with synthetic images, we already achieve comparable performance with real ones (e.g., 48.3 vs. 48.5 mIoU on ADE20K, and 49.3 vs. 50.5 on COCO-Stuff). Then, we investigate the role of synthetic images by joint training with real images, or pre-training for real images. Meanwhile, we design a robust filtering principle to suppress incorrectly synthesized regions. In addition, we propose to treat different semantic masks unequally, prioritizing the harder ones and sampling more corresponding synthetic images for them. As a result, either jointly trained or pre-trained with our filtered and re-sampled synthesized images, segmentation models can be greatly enhanced, e.g., from 48.7 to 52.0 mIoU on ADE20K.
|
FreeMask: Synthetic Images with Dense Annotations Make Stronger Segmentation Models
|
[
"Lihe Yang",
"Xiaogang Xu",
"Bingyi Kang",
"Yinghuan Shi",
"Hengshuang Zhao"
] |
Conference
|
poster
|
2310.15160
|
[
"https://github.com/LiheYoung/FreeMask"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XOCbdqxAR2
|
@inproceedings{
asadi2023td,
title={{TD} Convergence: An Optimization Perspective},
author={Kavosh Asadi and Shoham Sabach and Yao Liu and Omer Gottesman and Rasool Fakoor},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XOCbdqxAR2}
}
|
We study the convergence behavior of the celebrated temporal-difference (TD) learning algorithm. By looking at the algorithm through the lens of optimization, we first argue that TD can be viewed as an iterative optimization algorithm where the function to be minimized changes per iteration. By carefully investigating the divergence displayed by TD on a classical counter example, we identify two forces that determine the convergent or divergent behavior of the algorithm. We next formalize our discovery in the linear TD setting with quadratic loss and prove that convergence of TD hinges on the interplay between these two forces. We extend this optimization perspective to prove convergence of TD in a much broader setting than just linear approximation and squared loss. Our results provide a theoretical explanation for the successful application of TD in reinforcement learning.
|
TD Convergence: An Optimization Perspective
|
[
"Kavosh Asadi",
"Shoham Sabach",
"Yao Liu",
"Omer Gottesman",
"Rasool Fakoor"
] |
Conference
|
poster
|
2306.17750
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
null |
https://openreview.net/forum?id=XNBeTgYcAq
|
@inproceedings{
xue2023cosnet,
title={CosNet: A Generalized Spectral Kernel Network},
author={Yanfang Xue and Pengfei Fang and Jinyue Tian and Shipeng Zhu and hui xue},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=XNBeTgYcAq}
}
|
Complex-valued representation exists inherently in time-sequential data that can be derived from the integration of harmonic waves. The non-stationary spectral kernel, realizing a complex-valued feature mapping, has shown its potential to analyze the time-varying statistical characteristics of time-sequential data as a result of modeling the frequency parameters. However, most existing spectral kernel-based methods eliminate the imaginary part, thereby limiting the representation power of the spectral kernel. To tackle this issue, we propose a generalized spectral kernel network, namely, the \underline{Co}mplex-valued \underline{s}pectral kernel \underline{Net}work (CosNet), which includes a spectral kernel mapping generalization (SKMG) module and a complex-valued spectral kernel embedding (CSKE) module. Concretely, the SKMG module is devised to generalize the spectral kernel mapping from the real number domain to the complex number domain, recovering the inherent complex-valued representation of the real-valued data. The CSKE module is then developed to combine complex-valued spectral kernels and neural networks to effectively capture long-range or periodic relations in the data. Along with CosNet, we study the effect of the complex-valued spectral kernel mapping by theoretically analyzing the bounds on the covering number and generalization error. Extensive experiments demonstrate that CosNet performs better than mainstream kernel methods and complex-valued neural networks.
|
CosNet: A Generalized Spectral Kernel Network
|
[
"Yanfang Xue",
"Pengfei Fang",
"Jinyue Tian",
"Shipeng Zhu",
"hui xue"
] |
Conference
|
poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |