Dataset schema (column, dtype, range or cardinality):

bibtex_url                  null
proceedings                 string      length 42–42
bibtext                     string      length 197–848
abstract                    string      length 303–3.45k
title                       string      length 10–159
authors                     sequence    length 1–34
id                          string      44 distinct values
arxiv_id                    string      length 0–10
GitHub                      sequence    length 1–1
paper_page                  string      899 distinct values
n_linked_authors            int64       -1 to 13
upvotes                     int64       -1 to 109
num_comments                int64       -1 to 13
n_authors                   int64       -1 to 92
Models                      sequence    length 0–100
Datasets                    sequence    length 0–19
Spaces                      sequence    length 0–100
old_Models                  sequence    length 0–100
old_Datasets                sequence    length 0–19
old_Spaces                  sequence    length 0–100
paper_page_exists_pre_conf  int64       0 to 1
type                        string      2 distinct values (poster, oral)
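The rows that follow repeat these fields in the same order, one value per line. As a minimal sketch of how such a dataset might be loaded and filtered with the Hugging Face `datasets` library (the repository id below is a placeholder, not the actual dataset path):

```python
# Minimal sketch, assuming the data is hosted as a Hugging Face dataset.
# The repository id is hypothetical; substitute the real dataset path.
from datasets import load_dataset

ds = load_dataset("some-org/neurips-2024-paper-pages", split="train")  # hypothetical repo id

# Keep rows that have an arXiv id and at least one non-empty GitHub link.
with_code = ds.filter(
    lambda row: row["arxiv_id"] != "" and any(url != "" for url in row["GitHub"])
)

# Print a few matching titles with their arXiv ids and repositories.
for row in with_code.select(range(min(3, len(with_code)))):
    print(row["title"], row["arxiv_id"], row["GitHub"][0])
```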
null
https://openreview.net/forum?id=QZtJ22aOV4
@inproceedings{ li2024safe, title={Safe Exploitative Play with Untrusted Type Beliefs}, author={Tongxin Li and Tinashe Handina and Shaolei Ren and Adam Wierman}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QZtJ22aOV4} }
The combination of Bayesian games and learning has a rich history, built around the idea of controlling a single agent in a system composed of multiple agents with unknown behaviors given a set of types, each specifying a possible behavior for the other agents. The idea is to plan an agent's own actions with respect to those types which it believes are most likely to maximize the payoff. However, the type beliefs are often learned from past actions and likely to be incorrect. With this perspective in mind, we consider an agent in a game with type predictions of other components, and investigate the impact of incorrect beliefs on the agent’s payoff. In particular, we formally define a tradeoff between risk and opportunity by comparing the payoff obtained against the optimal payoff, which is represented by a gap caused by trusting or distrusting the learned beliefs. Our main results characterize the tradeoff by establishing upper and lower bounds on the Pareto front for both normal-form and stochastic Bayesian games, with numerical results provided.
Safe Exploitative Play with Untrusted Type Beliefs
[ "Tongxin Li", "Tinashe Handina", "Shaolei Ren", "Adam Wierman" ]
NeurIPS.cc/2024/Conference
2411.07679
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
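Each `bibtext` value is a single flat @inproceedings entry. As a minimal sketch (not part of the dataset) of pulling the title and OpenReview URL out of such an entry with plain regular expressions, using the entry from the row above:

```python
# Minimal sketch: extract fields from a flat `bibtext` entry with regular expressions.
# Entries with nested braces in the title (e.g. title={{LLMDFA}: ...}) would need a
# real BibTeX parser; this only handles the simple single-brace case.
import re

bibtext = (
    "@inproceedings{ li2024safe, title={Safe Exploitative Play with Untrusted Type Beliefs}, "
    "author={Tongxin Li and Tinashe Handina and Shaolei Ren and Adam Wierman}, "
    "booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, "
    "year={2024}, url={https://openreview.net/forum?id=QZtJ22aOV4} }"
)

title = re.search(r"title=\{(.*?)\}", bibtext).group(1)
url = re.search(r"url=\{(.*?)\}", bibtext).group(1)
print(title)  # Safe Exploitative Play with Untrusted Type Beliefs
print(url)    # https://openreview.net/forum?id=QZtJ22aOV4
```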
null
https://openreview.net/forum?id=QZ2d8E8Whu
@inproceedings{ wang2024llmdfa, title={{LLMDFA}: Analyzing Dataflow in Code with Large Language Models}, author={Chengpeng Wang and Wuqi Zhang and Zian Su and Xiangzhe Xu and Xiaoheng Xie and Xiangyu Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QZ2d8E8Whu} }
Dataflow analysis is a fundamental code analysis technique that identifies dependencies between program values. Traditional approaches typically necessitate successful compilation and expert customization, hindering their applicability and usability for analyzing uncompilable programs with evolving analysis needs in real-world scenarios. This paper presents LLMDFA, an LLM-powered compilation-free and customizable dataflow analysis framework. To address hallucinations for reliable results, we decompose the problem into several subtasks and introduce a series of novel strategies. Specifically, we leverage LLMs to synthesize code that outsources delicate reasoning to external expert tools, such as using a parsing library to extract program values of interest and invoking an automated theorem prover to validate path feasibility. Additionally, we adopt few-shot chain-of-thought prompting to summarize dataflow facts in individual functions, aligning the LLMs with the program semantics of small code snippets to mitigate hallucinations. We evaluate LLMDFA on synthetic programs to detect three representative types of bugs and on real-world Android applications for customized bug detection. On average, LLMDFA achieves 87.10% precision and 80.77% recall, surpassing existing techniques with F1 score improvements of up to 0.35. We have open-sourced LLMDFA at https://github.com/chengpeng-wang/LLMDFA.
LLMDFA: Analyzing Dataflow in Code with Large Language Models
[ "Chengpeng Wang", "Wuqi Zhang", "Zian Su", "Xiangzhe Xu", "Xiaoheng Xie", "Xiangyu Zhang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QY4SpBhQZI
@inproceedings{ hsiao2024refldm, title={ReF-{LDM}: A Latent Diffusion Model for Reference-based Face Image Restoration}, author={Chi-Wei Hsiao and Yu-Lun Liu and Cheng-Kun Yang and Sheng-Po Kuo and Kevin Jou and Chia-Ping Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QY4SpBhQZI} }
While recent works on blind face image restoration have successfully produced impressive high-quality (HQ) images with abundant details from low-quality (LQ) input images, the generated content may not accurately reflect the real appearance of a person. To address this problem, incorporating well-shot personal images as additional reference inputs may be a promising strategy. Inspired by the recent success of the Latent Diffusion Model (LDM) in image generation, we propose ReF-LDM—an adaptation of LDM designed to generate HQ face images conditioned on one LQ image and multiple HQ reference images. Our LDM-based model incorporates an effective and efficient mechanism, CacheKV, for conditioning on reference images. Additionally, we design a timestep-scaled identity loss, enabling LDM to focus on learning the discriminating features of human faces. Lastly, we construct FFHQ-ref, a dataset consisting of 20,406 high-quality (HQ) face images with corresponding reference images, which can serve as both training and evaluation data for reference-based face restoration models.
ReF-LDM: A Latent Diffusion Model for Reference-based Face Image Restoration
[ "Chi-Wei Hsiao", "Yu-Lun Liu", "Cheng-Kun Yang", "Sheng-Po Kuo", "Kevin Jou", "Chia-Ping Chen" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QXkFC7D6p4
@inproceedings{ ma2024fedgtst, title={Fed{GTST}: Boosting Global Transferability of Federated Models via Statistics Tuning}, author={Evelyn Ma and Chao Pan and S. Rasoul Etesami and Han Zhao and Olgica Milenkovic}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QXkFC7D6p4} }
The performance of Transfer Learning (TL) significantly depends on effective pretraining, which not only requires extensive amounts of data but also substantial computational resources. As a result, in practice, it is challenging to successfully perform TL at the level of individual model developers. Federated Learning (FL) addresses these challenges by enabling collaboration among individual clients through an indirect expansion of the available dataset, distribution of the computation burden across different entities, and privacy-preserving communication mechanisms. Despite several attempts to devise effective transferable FL approaches, several important issues remain unsolved. First, existing methods in this setting primarily focus on optimizing transferability within their local client domains, thereby ignoring transferability over the global learning domain. Second, most approaches focus on analyzing indirect transferability metrics, which does not allow for accurate assessment of the final target loss and extent of transferability. To address these issues, we introduce two important FL features into the model. The first boosts transferability via an exchange protocol between the clients and the server that includes information about cross-client Jacobian (gradient) norms. The second feature promotes an increase of the average of the Jacobians of the clients at the server side, which is subsequently used as a local regularizer that reduces the cross-client Jacobian variance. A rigorous analysis of our transferable federated algorithm, termed FedGTST (Federated Global Transferability via Statistics Tuning), reveals that increasing the averaged Jacobian norm across clients and reducing its variance ensures tight control of the target loss. This insight leads to the first known upper bound on the target loss of transferable federated learning in terms of the source loss and source-target domain discrepancy. Extensive experimental results on datasets including MNIST → MNIST-M and CIFAR10 → SVHN suggest that FedGTST significantly outperforms other relevant baselines, such as FedSR. For example, on the second source-target dataset pair, we improve the accuracy of FedSR by 9.8% and that of FedIIR by 7.6% when the backbone used is LeNet.
FedGTST: Boosting Global Transferability of Federated Models via Statistics Tuning
[ "Evelyn Ma", "Chao Pan", "S. Rasoul Etesami", "Han Zhao", "Olgica Milenkovic" ]
NeurIPS.cc/2024/Conference
2410.13045
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QXQY58xU25
@inproceedings{ solko-breslin2024dataefficient, title={Data-Efficient Learning with Neural Programs}, author={Alaia Solko-Breslin and Seewon Choi and Ziyang Li and Neelay Velingker and Rajeev Alur and Mayur Naik and Eric Wong}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QXQY58xU25} }
Many computational tasks can be naturally expressed as a composition of a DNN followed by a program written in a traditional programming language or an API call to an LLM. We call such composites "neural programs" and focus on the problem of learning the DNN parameters when the training data consist of end-to-end input-output labels for the composite. When the program is written in a differentiable logic programming language, techniques from neurosymbolic learning are applicable, but in general, the learning for neural programs requires estimating the gradients of black-box components. We present an algorithm for learning neural programs, called ISED, that only relies on input-output samples of black-box components. For evaluation, we introduce new benchmarks that involve calls to modern LLMs such as GPT-4 and also consider benchmarks from the neurosymbolic learning literature. Our evaluation shows that for the latter benchmarks, ISED has comparable performance to state-of-the-art neurosymbolic frameworks. For the former, we use adaptations of prior work on gradient approximations of black-box components as a baseline, and show that ISED achieves comparable accuracy but in a more data- and sample-efficient manner.
Data-Efficient Learning with Neural Programs
[ "Alaia Solko-Breslin", "Seewon Choi", "Ziyang Li", "Neelay Velingker", "Rajeev Alur", "Mayur Naik", "Eric Wong" ]
NeurIPS.cc/2024/Conference
2406.06246
[ "https://github.com/alaiasolkobreslin/ised" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QWsLks8LCO
@inproceedings{ liu2024grounded, title={Grounded Answers for Multi-agent Decision-making Problem through Generative World Model}, author={Zeyang Liu and Xinrui Yang and Shiguang Sun and Long Qian and Lipeng Wan and Xingyu Chen and Xuguang Lan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QWsLks8LCO} }
Recent progress in generative models has stimulated significant innovations in many fields, such as image generation and chatbots. Despite their success, these models often produce sketchy and misleading solutions for complex multi-agent decision-making problems because they lack the trial-and-error experience and reasoning that humans have. To address this limitation, we explore a paradigm that integrates a language-guided simulator into the multi-agent reinforcement learning pipeline to enhance the generated answer. The simulator is a world model that separately learns dynamics and reward, where the dynamics model comprises an image tokenizer as well as a causal transformer to generate interaction transitions autoregressively, and the reward model is a bidirectional transformer learned by maximizing the likelihood of trajectories in the expert demonstrations under language guidance. Given an image of the current state and the task description, we use the world model to train the joint policy and produce the image sequence as the answer by running the converged policy on the dynamics model. The empirical results demonstrate that this framework can improve the answers for multi-agent decision-making problems by showing superior performance on the training and unseen tasks of the StarCraft Multi-Agent Challenge benchmark. In particular, it can generate consistent interaction sequences and explainable reward functions at interaction states, opening the path for training generative models of the future.
Grounded Answers for Multi-agent Decision-making Problem through Generative World Model
[ "Zeyang Liu", "Xinrui Yang", "Shiguang Sun", "Long Qian", "Lipeng Wan", "Xingyu Chen", "Xuguang Lan" ]
NeurIPS.cc/2024/Conference
2410.02664
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QVtwpT5Dmg
@inproceedings{ mu2024rule, title={Rule Based Rewards for Language Model Safety}, author={Tong Mu and Alec Helyar and Johannes Heidecke and Joshua Achiam and Andrea Vallone and Ian D Kivlichan and Molly Lin and Alex Beutel and John Schulman and Lilian Weng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QVtwpT5Dmg} }
Reinforcement learning based fine-tuning of large language models (LLMs) on human preferences has been shown to enhance both their capabilities and safety behavior. However, in cases related to safety, without precise instructions to human annotators, the data collected may cause the model to become overly cautious, or to respond in an undesirable style, such as being judgmental. Additionally, as model capabilities and usage patterns evolve, there may be a costly need to add or relabel data to modify safety behavior. We propose a novel preference modeling approach that utilizes AI feedback and only requires a small amount of human data. Our method, Rule Based Rewards (RBR), uses a collection of rules for desired or undesired behaviors (e.g. refusals should not be judgmental) along with an LLM grader. In contrast to prior methods using AI feedback, our method uses fine-grained, composable, LLM-graded few-shot prompts as rewards directly in RL training, resulting in greater control, accuracy and ease of updating. We show that RBRs are an effective training method, achieving an F1 score of 97.1, compared to a human-feedback baseline of 91.7, resulting in much higher safety-behavior accuracy by better balancing usefulness and safety.
Rule Based Rewards for Language Model Safety
[ "Tong Mu", "Alec Helyar", "Johannes Heidecke", "Joshua Achiam", "Andrea Vallone", "Ian D Kivlichan", "Molly Lin", "Alex Beutel", "John Schulman", "Lilian Weng" ]
NeurIPS.cc/2024/Conference
2411.01111
[ "https://github.com/openai/safety-rbr-code-and-data" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QVSP1uk7b5
@inproceedings{ gu2024tetrahedron, title={Tetrahedron Splatting for 3D Generation}, author={Chun Gu and Zeyu Yang and Zijie Pan and Xiatian Zhu and Li Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QVSP1uk7b5} }
3D representation is essential to the significant advance of 3D generation with 2D diffusion priors. As a flexible representation, NeRF was adopted first for 3D representation. With density-based volumetric rendering, however, it suffers from both intensive computational overhead and inaccurate mesh extraction. Using a signed distance field and Marching Tetrahedra, DMTet allows for precise mesh extraction and real-time rendering but is limited in handling large topological changes in meshes, leading to optimization challenges. Alternatively, 3D Gaussian Splatting (3DGS) is favored in both training and rendering efficiency while falling short in mesh extraction. In this work, we introduce a novel 3D representation, Tetrahedron Splatting (TeT-Splatting), that supports easy convergence during optimization, precise mesh extraction, and real-time rendering simultaneously. This is achieved by integrating surface-based volumetric rendering within a structured tetrahedral grid while preserving the desired ability of precise mesh extraction, and by a tile-based differentiable tetrahedron rasterizer. Furthermore, we incorporate eikonal and normal consistency regularization terms for the signed distance field to improve generation quality and stability. Critically, our representation can be trained without mesh extraction, making the optimization process converge more easily. Our TeT-Splatting can be readily integrated into existing 3D generation pipelines, along with polygonal mesh for texture optimization. Extensive experiments show that our TeT-Splatting strikes a superior tradeoff among convergence speed, rendering efficiency, and mesh quality as compared to previous alternatives under varying 3D generation settings.
Tetrahedron Splatting for 3D Generation
[ "Chun Gu", "Zeyu Yang", "Zijie Pan", "Xiatian Zhu", "Li Zhang" ]
NeurIPS.cc/2024/Conference
2406.01579
[ "https://github.com/fudan-zvg/tet-splatting" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=QVG7j29Sta
@inproceedings{ dutta2024accuracy, title={Accuracy is Not All You Need}, author={Abhinav Dutta and Sanjeev Krishnan and Nipun Kwatra and Ramachandran Ramjee}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QVG7j29Sta} }
When Large Language Models (LLMs) are compressed using techniques such as quantization, the predominant way to demonstrate the validity of such techniques is by measuring the model's accuracy on various benchmarks. If the accuracies of the baseline model and the compressed model are close, it is assumed that there was negligible degradation in quality. However, even when the accuracies of the baseline and compressed models are similar, we observe the phenomenon of flips, wherein answers change from correct to incorrect and vice versa in proportion. We conduct a detailed study of metrics across multiple compression techniques, models and datasets, demonstrating that the behavior of compressed models as visible to end-users is often significantly different from the baseline model, even when accuracy is similar. We further evaluate compressed models qualitatively and quantitatively using MT-Bench and show that compressed models exhibiting high flips are worse than baseline models in this free-form generative task. Thus, we argue that accuracy and perplexity are necessary but not sufficient for evaluating compressed models, since these metrics hide large underlying changes that have not been observed by previous work. Hence, compression techniques should also be evaluated using distance metrics. We propose two such distance metrics, KL-Divergence and flips, and show that they are well correlated.
Accuracy is Not All You Need
[ "Abhinav Dutta", "Sanjeev Krishnan", "Nipun Kwatra", "Ramachandran Ramjee" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QUYLbzwtTV
@inproceedings{ jain2024bias, title={Bias in Motion: Theoretical Insights into the Dynamics of Bias in {SGD} Training}, author={Anchit Jain and Rozhin Nobahari and Aristide Baratin and Stefano Sarao Mannelli}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QUYLbzwtTV} }
Machine learning systems often acquire biases by leveraging undesired features in the data, impacting accuracy variably across different sub-populations of the data. However, our current understanding of bias formation mostly focuses on the initial and final stages of learning, leaving a gap in knowledge regarding the transient dynamics. To address this gap, this paper explores the evolution of bias in a teacher-student setup that models different data sub-populations with a Gaussian-mixture model. We provide an analytical description of the stochastic gradient descent dynamics of a linear classifier in this setup, which we prove to be exact in high dimension. Notably, our analysis identifies different properties of the sub-populations that drive bias at different timescales and hence shows a shifting preference of our classifier during training. By applying our general solution to fairness and robustness, we delineate how and when heterogeneous data and spurious features can generate and amplify bias. We empirically validate our results in more complex scenarios by training deeper networks on synthetic and real data, i.e. using CIFAR10, MNIST, and CelebA datasets.
Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD Training
[ "Anchit Jain", "Rozhin Nobahari", "Aristide Baratin", "Stefano Sarao Mannelli" ]
NeurIPS.cc/2024/Conference
2405.18296
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QQSyNX5s83
@inproceedings{ lu2024dndgs, title={{DN}-4{DGS}: Denoised Deformable Network with Temporal-Spatial Aggregation for Dynamic Scene Rendering}, author={Jiahao Lu and Jiacheng Deng and Ruijie Zhu and Yanzhe Liang and Wenfei Yang and Xu Zhou and Tianzhu Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QQSyNX5s83} }
Dynamic scene rendering is an intriguing yet challenging problem. Although current methods based on NeRF have achieved satisfactory performance, they still cannot reach real-time levels. Recently, 3D Gaussian Splatting (3DGS) has garnered researchers' attention due to its outstanding rendering quality and real-time speed. Therefore, a new paradigm has been proposed: defining canonical 3D Gaussians and deforming them to individual frames via deformable fields. However, the coordinates of the canonical 3D Gaussians are filled with noise, which can be transferred into the deformable fields, and there is currently no method that adequately considers the aggregation of 4D information. Therefore, we propose Denoised Deformable Network with Temporal-Spatial Aggregation for Dynamic Scene Rendering (DN-4DGS). Specifically, a Noise Suppression Strategy is introduced to change the distribution of the coordinates of the canonical 3D Gaussians and suppress noise. Additionally, a Decoupled Temporal-Spatial Aggregation Module is designed to aggregate information from adjacent points and frames. Extensive experiments on various real-world datasets demonstrate that our method achieves state-of-the-art rendering quality at a real-time level. Code is available at https://github.com/peoplelu/DN-4DGS.
DN-4DGS: Denoised Deformable Network with Temporal-Spatial Aggregation for Dynamic Scene Rendering
[ "Jiahao Lu", "Jiacheng Deng", "Ruijie Zhu", "Yanzhe Liang", "Wenfei Yang", "Xu Zhou", "Tianzhu Zhang" ]
NeurIPS.cc/2024/Conference
2410.13607
[ "https://github.com/peoplelu/dn-4dgs" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QQSGwpmDfU
@inproceedings{ shi2024learning, title={Learning Commonality, Divergence and Variety for Unsupervised Visible-Infrared Person Re-identification}, author={Jiangming Shi and Xiangbo Yin and Yachao Zhang and zhizhong zhang and Yuan Xie and Yanyun Qu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QQSGwpmDfU} }
Unsupervised visible-infrared person re-identification (USVI-ReID) aims to match specified persons in infrared images to visible images without annotations, and vice versa. USVI-ReID is a challenging yet underexplored task. Most existing methods address the USVI-ReID through cluster-based contrastive learning, which simply employs the cluster center to represent an individual. However, the cluster center primarily focuses on commonality, overlooking divergence and variety. To address the problem, we propose a Progressive Contrastive Learning with Hard and Dynamic Prototypes for USVI-ReID. In brief, we generate the hard prototype by selecting the sample with the maximum distance from the cluster center. We reveal that the inclusion of the hard prototype in contrastive loss helps to emphasize divergence. Additionally, instead of rigidly aligning query images to a specific prototype, we generate the dynamic prototype by randomly picking samples within a cluster. The dynamic prototype is used to encourage variety. Finally, we introduce a progressive learning strategy to gradually shift the model's attention towards divergence and variety, avoiding cluster deterioration. Extensive experiments conducted on the publicly available SYSU-MM01 and RegDB datasets validate the effectiveness of the proposed method.
Learning Commonality, Divergence and Variety for Unsupervised Visible-Infrared Person Re-identification
[ "Jiangming Shi", "Xiangbo Yin", "Yachao Zhang", "zhizhong zhang", "Yuan Xie", "Yanyun Qu" ]
NeurIPS.cc/2024/Conference
2402.19026
[ "https://github.com/shijiangming1/pclhd" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QNieOPt4fg
@inproceedings{ liu2024selectit, title={Select{IT}: Selective Instruction Tuning for {LLM}s via Uncertainty-Aware Self-Reflection}, author={Liangxin Liu and Xuebo Liu and Derek F. Wong and Dongfang Li and Ziyi Wang and Baotian Hu and Min Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QNieOPt4fg} }
Instruction tuning (IT) is crucial to tailoring large language models (LLMs) towards human-centric interactions. Recent advancements have shown that the careful selection of a small, high-quality subset of IT data can significantly enhance the performance of LLMs. Despite this, common approaches often rely on additional models or data, which increases costs and limits widespread adoption. In this work, we propose a novel approach, termed $\textit{SelectIT}$, that capitalizes on the foundational capabilities of the LLM itself. Specifically, we exploit the intrinsic uncertainty present in LLMs to more effectively select high-quality IT data, without the need for extra resources. Furthermore, we introduce a curated IT dataset, the $\textit{Selective Alpaca}$, created by applying SelectIT to the Alpaca-GPT4 dataset. Empirical results demonstrate that IT using Selective Alpaca leads to substantial model ability enhancement. The robustness of SelectIT has also been corroborated in various foundation models and domain-specific tasks. Our findings suggest that longer and more computationally intensive IT data may serve as superior sources of IT, offering valuable insights for future research in this area. Data, code, and scripts are freely available at https://github.com/Blue-Raincoat/SelectIT.
SelectIT: Selective Instruction Tuning for LLMs via Uncertainty-Aware Self-Reflection
[ "Liangxin Liu", "Xuebo Liu", "Derek F. Wong", "Dongfang Li", "Ziyi Wang", "Baotian Hu", "Min Zhang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QMaLS4VeY3
@inproceedings{ mo2024aligning, title={Aligning Audio-Visual Joint Representations with an Agentic Workflow}, author={Shentong Mo and Yibing Song}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QMaLS4VeY3} }
Visual content and its accompanying audio signals naturally form a joint representation that can improve audio-visual (AV) related applications. While studies have developed various AV representation learning frameworks, the importance of AV data alignment for achieving high-quality representations is usually underestimated. We observe that an audio signal may contain background noise interference. Also, non-synchronization may appear between the audio and video streams. Such loose data alignment limits representation quality and degrades application performance. In this paper, we propose to improve AV joint representations from a data-centric perspective by aligning audio signals to visual data. Our alignment is conducted in an agentic workflow controlled by an LLM-based assistant named AVAgent. For each input AV data pair, our AVAgent uses a multi-modal LLM to convert audio and visual data into language descriptions separately (i.e., tool use). Then, AVAgent reasons about whether the paired data is well aligned and plans to edit the audio signal if needed (i.e., planning). The audio editing is executed by predefined actions that filter noise or augment data. Moreover, we use a VLM to evaluate how well the modified audio signals match the visual content and provide feedback to AVAgent (i.e., reflection). The tool use, planning, and reflection steps operate cyclically to form an agentic workflow in which audio signals are gradually aligned to the visual content. As a result, existing methods can directly leverage the AV data aligned by our agentic workflow to improve AV joint representations. The experimental results comprehensively demonstrate the state-of-the-art performance of the proposed approach against previous baselines in diverse downstream tasks.
Aligning Audio-Visual Joint Representations with an Agentic Workflow
[ "Shentong Mo", "Yibing Song" ]
NeurIPS.cc/2024/Conference
2410.23230
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QMVydwvrx7
@inproceedings{ zhong2024ssdiff, title={{SSD}iff: Spatial-spectral Integrated Diffusion Model for Remote Sensing Pansharpening}, author={Yu Zhong and Xiao Wu and Liang-Jian Deng and Zihan Cao and Hong-Xia Dou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QMVydwvrx7} }
Pansharpening is a significant image fusion technique that merges the spatial content and spectral characteristics of remote sensing images to generate high-resolution multispectral images. Recently, denoising diffusion probabilistic models have been gradually applied to visual tasks, enhancing controllable image generation through low-rank adaptation (LoRA). In this paper, we introduce a spatial-spectral integrated diffusion model for the remote sensing pansharpening task, called SSDiff, which considers the pansharpening process as the fusion process of spatial and spectral components from the perspective of subspace decomposition. Specifically, SSDiff utilizes spatial and spectral branches to learn spatial details and spectral features separately, then employs a designed alternating projection fusion module (APFM) to accomplish the fusion. Furthermore, we propose a frequency modulation inter-branch module (FMIM) to modulate the frequency distribution between branches. The two components of SSDiff can perform favorably against the APFM when utilizing a LoRA-like branch-wise alternative fine-tuning method. It refines SSDiff to capture component-discriminating features more sufficiently. Finally, extensive experiments on four commonly used datasets, i.e., WorldView-3, WorldView-2, GaoFen-2, and QuickBird, demonstrate the superiority of SSDiff both visually and quantitatively. The code is available at https://github.com/Z-ypnos/SSdiff_main.
SSDiff: Spatial-spectral Integrated Diffusion Model for Remote Sensing Pansharpening
[ "Yu Zhong", "Xiao Wu", "Liang-Jian Deng", "Zihan Cao", "Hong-Xia Dou" ]
NeurIPS.cc/2024/Conference
2404.11537
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QLRO8o4bol
@inproceedings{ hu2024generate, title={Generate Universal Adversarial Perturbations for Few-Shot Learning}, author={Yiman Hu and Yixiong Zou and Ruixuan Li and Yuhua Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QLRO8o4bol} }
Deep networks are known to be vulnerable to adversarial examples which are deliberately designed to mislead the trained model by introducing imperceptible perturbations to input samples. Compared to traditional perturbations crafted specifically for each data point, Universal Adversarial Perturbations (UAPs) are input-agnostic and shown to be more practical in the real world. However, UAPs are typically generated in a closed-set scenario that shares the same classification task during the training and testing phases. This paper demonstrates the ineffectiveness of traditional UAPs in open-set scenarios like Few-Shot Learning (FSL). Through analysis, we identify two primary challenges that hinder the attacking process: the task shift and the semantic shift. To enhance the transferability of UAPs in FSL, we propose a unifying attacking framework addressing these two shifts. The task shift is addressed by aligning proxy tasks to the downstream tasks, while the semantic shift is handled by leveraging the generalizability of pre-trained encoders. The proposed Few-Shot Attacking FrameWork, denoted as FSAFW, can effectively generate UAPs across various FSL training paradigms and different downstream tasks. Our approach not only sets a new standard for state-of-the-art works but also significantly enhances attack performance, exceeding the baseline method by over 16\%.
Generate Universal Adversarial Perturbations for Few-Shot Learning
[ "Yiman Hu", "Yixiong Zou", "Ruixuan Li", "Yuhua Li" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QKp3nhPU41
@inproceedings{ yue2024deervla, title={DeeR-{VLA}: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution}, author={Yang Yue and Yulin Wang and Bingyi Kang and Yizeng Han and Shenzhi Wang and Shiji Song and Jiashi Feng and Gao Huang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QKp3nhPU41} }
Multimodal Large Language Models (MLLMs) have demonstrated remarkable comprehension and reasoning capabilities with complex language and visual data. These advances have spurred the vision of establishing a generalist robotic MLLM proficient in understanding complex human instructions and accomplishing various embodied tasks, whose feasibility has been recently verified~\cite{rt-2,rt-x}. However, developing MLLMs for real-world robots is challenging due to the typically limited computation and memory capacities available on robotic platforms. In contrast, the inference of MLLMs usually incorporates storing billions of parameters and performing tremendous computation, imposing significant hardware demands. In our paper, we seek to address this challenge by leveraging an intriguing observation: relatively easier situations make up the bulk of the procedure of controlling robots to fulfill diverse tasks, and they generally require far smaller models to obtain the correct robotic actions. Motivated by this observation, we propose a \emph{Dynamic Early-Exit for Robotic MLLM} (DeeR) framework that automatically adjusts the size of the activated MLLM based on each situation at hand. The approach leverages a multi-exit architecture in MLLMs, which allows the model to cease processing once a proper size of the model has been activated for a specific situation, thus avoiding further redundant computation. Additionally, we develop novel algorithms that establish early-termination criteria for DeeR, conditioned on predefined demands such as average computational cost (\emph{i.e.}, power consumption), as well as peak computational consumption (\emph{i.e.}, latency) and GPU memory usage. These enhancements ensure that DeeR operates efficiently under varying resource constraints while maintaining competitive performance. Moreover, we design a tailored training method for integrating temporal information on top of such multi-exit architectures to predict actions reasonably. On the CALVIN robot manipulation benchmark, DeeR demonstrates significant reductions in computational costs by 5.2-6.5x and GPU memory by 2x without compromising performance. Code and checkpoints are available at https://github.com/yueyang130/DeeR-VLA.
DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution
[ "Yang Yue", "Yulin Wang", "Bingyi Kang", "Yizeng Han", "Shenzhi Wang", "Shiji Song", "Jiashi Feng", "Gao Huang" ]
NeurIPS.cc/2024/Conference
2411.02359
[ "https://github.com/yueyang130/deer-vla" ]
https://huggingface.co/papers/2411.02359
1
12
2
8
[ "Yang130/DeeR-VLA" ]
[]
[]
[ "Yang130/DeeR-VLA" ]
[]
[]
1
poster
null
https://openreview.net/forum?id=QJr02BTM7J
@inproceedings{ koshizuka2024understanding, title={Understanding the Expressivity and Trainability of Fourier Neural Operator: A Mean-Field Perspective}, author={Takeshi Koshizuka and Masahiro Fujisawa and Yusuke Tanaka and Issei Sato}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QJr02BTM7J} }
In this paper, we explore the expressivity and trainability of the Fourier Neural Operator (FNO). We establish a mean-field theory for the FNO, analyzing the behavior of the random FNO from an \emph{edge of chaos} perspective. Our investigation into the expressivity of a random FNO involves examining the ordered-chaos phase transition of the network based on the weight distribution. This phase transition demonstrates characteristics unique to the FNO, induced by mode truncation, while also showcasing similarities to those of densely connected networks. Furthermore, we identify a connection between expressivity and trainability: the ordered and chaotic phases correspond to regions of vanishing and exploding gradients, respectively. This finding provides a practical prerequisite for the stable training of the FNO. Our experimental results corroborate our theoretical findings.
Understanding the Expressivity and Trainability of Fourier Neural Operator: A Mean-Field Perspective
[ "Takeshi Koshizuka", "Masahiro Fujisawa", "Yusuke Tanaka", "Issei Sato" ]
NeurIPS.cc/2024/Conference
2310.06379
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QI1ScdeQjp
@inproceedings{ huang2024upping, title={Upping the Game: How 2D U-Net Skip Connections Flip 3D Segmentation}, author={Xingru Huang and Yihao Guo and Jian Huang and Tianyun Zhang and HE HONG and Shaowei Jiang and Yaoqi Sun}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QI1ScdeQjp} }
In the present study, we introduce an innovative structure for 3D medical image segmentation that effectively integrates 2D U-Net-derived skip connections into the architecture of 3D convolutional neural networks (3D CNNs). Conventional 3D segmentation techniques predominantly depend on isotropic 3D convolutions for the extraction of volumetric features, which frequently engenders inefficiencies due to the varying information density across the three orthogonal axes in medical imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). This disparity leads to a decline in axial-slice plane feature extraction efficiency, with slice plane features being comparatively underutilized relative to features in the time-axial. To address this issue, we introduce the U-shaped Connection (uC), utilizing simplified 2D U-Net in place of standard skip connections to augment the extraction of the axial-slice plane features while concurrently preserving the volumetric context afforded by 3D convolutions. Based on uC, we further present uC 3DU-Net, an enhanced 3D U-Net backbone that integrates the uC approach to facilitate optimal axial-slice plane feature utilization. Through rigorous experimental validation on five publicly accessible datasets—FLARE2021, OIMHS, FeTA2021, AbdomenCT-1K, and BTCV, the proposed method surpasses contemporary state-of-the-art models. Notably, this performance is achieved while reducing the number of parameters and computational complexity. This investigation underscores the efficacy of incorporating 2D convolutions within the framework of 3D CNNs to overcome the intrinsic limitations of volumetric segmentation, thereby potentially expanding the frontiers of medical image analysis. Our implementation is available at https://github.com/IMOP-lab/U-Shaped-Connection.
Upping the Game: How 2D U-Net Skip Connections Flip 3D Segmentation
[ "Xingru Huang", "Yihao Guo", "Jian Huang", "Tianyun Zhang", "HE HONG", "Shaowei Jiang", "Yaoqi Sun" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QHRLFdhkLu
@inproceedings{ luohe2024reference, title={Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models}, author={Shi Luohe and Yao Yao and Zuchao Li and Lefei Zhang and hai zhao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QHRLFdhkLu} }
Large language models (LLMs) have rapidly advanced and demonstrated impressive capabilities. In-Context Learning (ICL) and Parameter-Efficient Fine-Tuning (PEFT) are currently two mainstream methods for augmenting LLMs to downstream tasks. ICL typically constructs a few-shot learning scenario, either manually or by setting up a Retrieval-Augmented Generation (RAG) system, helping models quickly grasp domain knowledge or question-answering patterns without changing model parameters. However, this approach involves trade-offs, such as slower inference speed and increased space occupancy. PEFT assists the model in adapting to tasks through minimal parameter modifications, but the training process still demands high hardware requirements, even with a small number of parameters involved. To address these challenges, we propose Reference Trustable Decoding (RTD), a paradigm that allows models to quickly adapt to new tasks without fine-tuning, maintaining low inference costs. RTD constructs a reference datastore from the provided training examples and optimizes the LLM's final vocabulary distribution by flexibly selecting suitable references based on the input, resulting in more trustable responses and enabling the model to adapt to downstream tasks at a low cost. Experimental evaluations on various LLMs using different benchmarks demonstrate that RTD establishes a new paradigm for augmenting models to downstream tasks. Furthermore, our method exhibits strong orthogonality with traditional methods, allowing for concurrent usage. Our code can be found at https://github.com/ShiLuohe/ReferenceTrustableDecoding.
Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models
[ "Shi Luohe", "Yao Yao", "Zuchao Li", "Lefei Zhang", "hai zhao" ]
NeurIPS.cc/2024/Conference
2409.20181
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QGJSXMhVaL
@inproceedings{ tang2024worldcoder, title={WorldCoder, a Model-Based {LLM} Agent: Building World Models by Writing Code and Interacting with the Environment}, author={Hao Tang and Darren Yan Key and Kevin Ellis}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QGJSXMhVaL} }
We give a model-based agent that builds a Python program representing its knowledge of the world based on its interactions with the environment. The world model tries to explain its interactions, while also being optimistic about what reward it can achieve. We define this optimism as a logical constraint between a program and a planner. We study our agent on gridworlds, and on task planning, finding our approach is more sample-efficient compared to deep RL, more compute-efficient compared to ReAct-style agents, and that it can transfer its knowledge across environments by editing its code.
WorldCoder, a Model-Based LLM Agent: Building World Models by Writing Code and Interacting with the Environment
[ "Hao Tang", "Darren Yan Key", "Kevin Ellis" ]
NeurIPS.cc/2024/Conference
2402.12275
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QFUsZvw9mx
@inproceedings{ li2024towards, title={Towards an Information Theoretic Framework of Context-Based Offline Meta-Reinforcement Learning}, author={Lanqing Li and Hai Zhang and Xinyu Zhang and Shatong Zhu and Yang YU and Junqiao Zhao and Pheng-Ann Heng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QFUsZvw9mx} }
As a marriage between offline RL and meta-RL, the advent of offline meta-reinforcement learning (OMRL) has shown great promise in enabling RL agents to multi-task and quickly adapt while acquiring knowledge safely. Among these, context-based OMRL (COMRL), a popular paradigm, aims to learn a universal policy conditioned on effective task representations. In this work, by examining several key milestones in the field of COMRL, we propose to integrate these seemingly independent methodologies into a unified framework. Most importantly, we show that the pre-existing COMRL algorithms are essentially optimizing the same mutual information objective between the task variable $M$ and its latent representation $Z$ by implementing various approximate bounds. Such theoretical insight offers ample design freedom for novel algorithms. As demonstrations, we propose a supervised and a self-supervised implementation of $I(Z; M)$, and empirically show that the corresponding optimization algorithms exhibit remarkable generalization across a broad spectrum of RL benchmarks, context shift scenarios, data qualities and deep learning architectures. This work lays the information theoretic foundation for COMRL methods, leading to a better understanding of task representation learning in the context of reinforcement learning.
Towards an Information Theoretic Framework of Context-Based Offline Meta-Reinforcement Learning
[ "Lanqing Li", "Hai Zhang", "Xinyu Zhang", "Shatong Zhu", "Yang YU", "Junqiao Zhao", "Pheng-Ann Heng" ]
NeurIPS.cc/2024/Conference
2402.02429
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=QEmsZoQ45M
@inproceedings{ maran2024local, title={Local Linearity: the Key for No-regret Reinforcement Learning in Continuous {MDP}s}, author={Davide Maran and Alberto Maria Metelli and Matteo Papini and Marcello Restelli}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QEmsZoQ45M} }
Achieving the no-regret property for Reinforcement Learning (RL) problems in continuous state and action-space environments is one of the major open problems in the field. Existing solutions either work under very specific assumptions or achieve bounds that are vacuous in some regimes. Furthermore, many structural assumptions are known to suffer from a provably unavoidable exponential dependence on the time horizon $H$ in the regret, which makes any possible solution unfeasible in practice. In this paper, we identify _local linearity_ as the feature that makes Markov Decision Processes (MDPs) both _learnable_ (sublinear regret) and _feasible_ (regret that is polynomial in $H$). We define a novel MDP representation class, namely _Locally Linearizable MDPs_, generalizing other representation classes like Linear MDPs and MDPs with low inherent Bellman error. Then, i) we introduce **Cinderella**, a no-regret algorithm for this general representation class, and ii) we show that all known learnable and feasible MDP families are representable in this class. We first show that all known feasible MDPs belong to a family that we call _Mildly Smooth MDPs_. Then, we show how any mildly smooth MDP can be represented as a Locally Linearizable MDP by an appropriate choice of representation. This way, **Cinderella** is shown to achieve state-of-the-art regret bounds for all previously known (and some new) continuous MDPs for which RL is learnable and feasible.
Local Linearity: the Key for No-regret Reinforcement Learning in Continuous MDPs
[ "Davide Maran", "Alberto Maria Metelli", "Matteo Papini", "Marcello Restelli" ]
NeurIPS.cc/2024/Conference
2410.24071
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QEaHE4TUgc
@inproceedings{ muppidi2024fast, title={Fast {TRAC}: A Parameter-Free Optimizer for Lifelong Reinforcement Learning}, author={Aneesh Muppidi and Zhiyu Zhang and Heng Yang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QEaHE4TUgc} }
A key challenge in lifelong reinforcement learning (RL) is the loss of plasticity, where previous learning progress hinders an agent's adaptation to new tasks. While regularization and resetting can help, they require precise hyperparameter selection at the outset and environment-dependent adjustments. Building on the principled theory of online convex optimization, we present a parameter-free optimizer for lifelong RL, called TRAC, which requires no tuning or prior knowledge about the distribution shifts. Extensive experiments on Procgen, Atari, and Gym Control environments show that TRAC works surprisingly well—mitigating loss of plasticity and rapidly adapting to challenging distribution shifts—despite the underlying optimization problem being nonconvex and nonstationary.
Fast TRAC: A Parameter-Free Optimizer for Lifelong Reinforcement Learning
[ "Aneesh Muppidi", "Zhiyu Zhang", "Heng Yang" ]
NeurIPS.cc/2024/Conference
2405.16642
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QEUntqKvmm
@inproceedings{ cheng2024the, title={The surprising efficiency of temporal difference learning for rare event prediction}, author={Xiaoou Cheng and Jonathan Weare}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QEUntqKvmm} }
We quantify the efficiency of temporal difference (TD) learning over the direct, or Monte Carlo (MC), estimator for policy evaluation in reinforcement learning, with an emphasis on estimation of quantities related to rare events. Policy evaluation is complicated in the rare event setting by the long timescale of the event and by the need for \emph{relative accuracy} in estimates of very small values. Specifically, we focus on least-squares TD (LSTD) prediction for finite state Markov chains, and show that LSTD can achieve relative accuracy far more efficiently than MC. We prove a central limit theorem for the LSTD estimator and upper bound the \emph{relative asymptotic variance} by simple quantities characterizing the connectivity of states relative to the transition probabilities between them. Using this bound, we show that, even when both the timescale of the rare event and the relative accuracy of the MC estimator are exponentially large in the number of states, LSTD maintains a fixed level of relative accuracy with a total number of observed transitions of the Markov chain that is only \emph{polynomially} large in the number of states.
The surprising efficiency of temporal difference learning for rare event prediction
[ "Xiaoou Cheng", "Jonathan Weare" ]
NeurIPS.cc/2024/Conference
2405.17638
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QDprhde3jb
@inproceedings{ cui2024learning, title={Learning Optimal Tax Design in Nonatomic Congestion Games}, author={Qiwen Cui and Maryam Fazel and Simon Shaolei Du}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QDprhde3jb} }
In multiplayer games, self-interested behavior among the players can harm the social welfare. Tax mechanisms are a common method to alleviate this issue and induce socially optimal behavior. In this work, we take the initial step of learning the optimal tax that can maximize social welfare with limited feedback in congestion games. We propose a new type of feedback named \emph{equilibrium feedback}, where the tax designer can only observe the Nash equilibrium after deploying a tax plan. Existing algorithms are not applicable due to the exponentially large tax function space, nonexistence of the gradient, and nonconvexity of the objective. To tackle these challenges, we design a computationally efficient algorithm that leverages several novel components: (1) a piece-wise linear tax to approximate the optimal tax; (2) extra linear terms to guarantee a strongly convex potential function; (3) an efficient subroutine to find the exploratory tax that can provide critical information about the game. The algorithm can find an $\epsilon$-optimal tax with $O(\beta F^2/\epsilon)$ sample complexity, where $\beta$ is the smoothness of the cost function and $F$ is the number of facilities.
Learning Optimal Tax Design in Nonatomic Congestion Games
[ "Qiwen Cui", "Maryam Fazel", "Simon Shaolei Du" ]
NeurIPS.cc/2024/Conference
2402.07437
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QDYts5dYgq
@inproceedings{ rubanova2024learning, title={Learning rigid-body simulators over implicit shapes for large-scale scenes and vision}, author={Yulia Rubanova and Tatiana Lopez-Guevara and Kelsey R Allen and William F Whitney and Kim Stachenfeld and Tobias Pfaff}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QDYts5dYgq} }
Simulating large scenes with many rigid objects is crucial for a variety of applications, such as robotics, engineering, film and video games. Rigid interactions are notoriously hard to model: small changes to the initial state or the simulation parameters can lead to large changes in the final state. Recently, learned simulators based on graph networks (GNNs) were developed as an alternative to hand-designed simulators like MuJoCo and Bullet. They are able to accurately capture dynamics of real objects directly from real-world observations. However, current state-of-the-art learned simulators operate on meshes and scale poorly to scenes with many objects or detailed shapes. Here we present SDF-Sim, the first learned rigid-body simulator designed for scale. We use learned signed-distance functions (SDFs) to represent the object shapes and to speed up distance computation. We design the simulator to leverage SDFs and avoid the fundamental bottleneck of the previous simulators associated with collision detection. For the first time in literature, we demonstrate that we can scale the GNN-based simulators to scenes with hundreds of objects and up to 1.1 million nodes, where mesh-based approaches run out of memory. Finally, we show that SDF-Sim can be applied to real world scenes by extracting SDFs from multi-view images.
Learning rigid-body simulators over implicit shapes for large-scale scenes and vision
[ "Yulia Rubanova", "Tatiana Lopez-Guevara", "Kelsey R Allen", "William F Whitney", "Kim Stachenfeld", "Tobias Pfaff" ]
NeurIPS.cc/2024/Conference
2405.14045
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=QDG2q5MYHV
@inproceedings{ kim2024a, title={A Gradient Accumulation Method for Dense Retriever under Memory Constraint}, author={Jaehee Kim and Yukyung Lee and Pilsung Kang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QDG2q5MYHV} }
InfoNCE loss is commonly used to train dense retrievers in information retrieval tasks. It is well known that a large batch is essential for stable and effective training with InfoNCE loss, which requires significant hardware resources. This dependency on large batches creates a bottleneck for both the application and study of dense retrievers. Recently, memory reduction methods have been broadly adopted to resolve the hardware bottleneck by decomposing the forward and backward passes or by using a memory bank. However, current methods still suffer from slow and unstable training. To address these issues, we propose Contrastive Accumulation (ContAccum), a stable and efficient memory reduction method for dense retriever training that uses a dual memory bank structure to leverage previously generated query and passage representations. Experiments on five widely used information retrieval datasets indicate that ContAccum can surpass not only existing memory reduction methods but also high-resource scenarios. Moreover, theoretical analysis and experimental results confirm that ContAccum provides more stable dual-encoder training than current memory bank utilization methods.
A Gradient Accumulation Method for Dense Retriever under Memory Constraint
[ "Jaehee Kim", "Yukyung Lee", "Pilsung Kang" ]
NeurIPS.cc/2024/Conference
2406.12356
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QCINh3O9q6
@inproceedings{ zuo2024crossvideo, title={Cross-video Identity Correlating for Person Re-identification Pre-training}, author={Jialong Zuo and Ying Nie and Hanyu Zhou and Huaxin Zhang and Haoyu Wang and Tianyu Guo and Nong Sang and Changxin Gao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QCINh3O9q6} }
Recent research has shown that pre-training on large-scale person images extracted from internet videos is an effective way to learn better representations for person re-identification. However, these studies are mostly confined to pre-training at the instance level or the single-video tracklet level. They ignore the identity-invariance in images of the same person across different videos, which is a key focus in person re-identification. To address this issue, we propose a Cross-video Identity-cOrrelating pre-traiNing (CION) framework. Defining a noise concept that comprehensively considers both intra-identity consistency and inter-identity discrimination, CION seeks the identity correlation from cross-video images by modeling it as a progressive multi-level denoising problem. Furthermore, an identity-guided self-distillation loss is proposed to implement better large-scale pre-training by mining the identity-invariance within person images. We conduct extensive experiments to verify the superiority of our CION in terms of efficiency and performance. CION achieves significantly leading performance with even fewer training samples. For example, compared with the previous state-of-the-art ISR, CION with the same ResNet50-IBN achieves a higher mAP of 93.3% and 74.3% on Market1501 and MSMT17, while only utilizing 8% of the training samples. Finally, with CION demonstrating superior model-agnostic ability, we contribute a model zoo named ReIDZoo to meet diverse research and application needs in this field. It contains a series of CION pre-trained models with spanning structures and parameters, totaling 32 models with 10 different structures, including GhostNet, ConvNext, RepViT, FastViT and so on. The code and models will be open-sourced.
Cross-video Identity Correlating for Person Re-identification Pre-training
[ "Jialong Zuo", "Ying Nie", "Hanyu Zhou", "Huaxin Zhang", "Haoyu Wang", "Tianyu Guo", "Nong Sang", "Changxin Gao" ]
NeurIPS.cc/2024/Conference
2409.18569
[ "https://github.com/zplusdragon/cion_reidzoo" ]
https://huggingface.co/papers/2409.18569
0
0
0
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=QC4e0vOanp
@inproceedings{ ramamoorthy2024leveraging, title={Leveraging partial stragglers within gradient coding}, author={Aditya Ramamoorthy and Ruoyu meng and Vrinda S Girimaji}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QC4e0vOanp} }
Within distributed learning, workers typically compute gradients on their assigned dataset chunks and send them to the parameter server (PS), which aggregates them to compute either an exact or approximate version of $\nabla L$ (gradient of the loss function $L$). However, in large-scale clusters, many workers are slower than their promised speed or even failure-prone. A gradient coding solution introduces redundancy within the assignment of chunks to the workers and uses coding theoretic ideas to allow the PS to recover $\nabla L$ (exactly or approximately), even in the presence of stragglers. Unfortunately, most existing gradient coding protocols are inefficient from a computation perspective as they coarsely classify workers as operational or failed; the potentially valuable work performed by slow workers (partial stragglers) is ignored. In this work, we present novel gradient coding protocols that judiciously leverage the work performed by partial stragglers. Our protocols are efficient from a computation and communication perspective and numerically stable. For an important class of chunk assignments, we present efficient algorithms for optimizing the relative ordering of chunks within the workers; this ordering affects the overall execution time. For exact gradient reconstruction, our protocol is around $2\times$ faster than the original class of protocols and for approximate gradient reconstruction, the mean-squared-error of our reconstructed gradient is several orders of magnitude better.
Leveraging partial stragglers within gradient coding
[ "Aditya Ramamoorthy", "Ruoyu meng", "Vrinda S Girimaji" ]
NeurIPS.cc/2024/Conference
2405.19509
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QBCxWpOt5w
@inproceedings{ cabannes2024iteration, title={Iteration Head: A Mechanistic Study of Chain-of-Thought}, author={Vivien Cabannes and Charles Arnal and Wassim Bouaziz and Xingyu Alice Yang and Francois Charton and Julia Kempe}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QBCxWpOt5w} }
Chain-of-Thought (CoT) reasoning is known to improve Large Language Models both empirically and in terms of theoretical approximation power. However, our understanding of the inner workings of CoT and the conditions under which CoT capabilities emerge remains limited. This paper helps fill this gap by demonstrating how CoT reasoning emerges in transformers in a controlled and interpretable setting. In particular, we observe the appearance of a specialized attention mechanism dedicated to iterative reasoning, which we coined "iteration heads". We track both the emergence and the precise working of these iteration heads down to the attention level, and measure the transferability of the CoT skills to which they give rise between tasks.
Iteration Head: A Mechanistic Study of Chain-of-Thought
[ "Vivien Cabannes", "Charles Arnal", "Wassim Bouaziz", "Xingyu Alice Yang", "Francois Charton", "Julia Kempe" ]
NeurIPS.cc/2024/Conference
2406.02128
[ "https://github.com/facebookresearch/pal" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QB6CvDqa6b
@inproceedings{ lin2024an, title={An Offline Adaptation Framework for Constrained Multi-Objective Reinforcement Learning}, author={Qian Lin and Zongkai Liu and Danying Mo and Chao Yu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QB6CvDqa6b} }
In recent years, significant progress has been made in multi-objective reinforcement learning (RL) research, which aims to balance multiple objectives by incorporating preferences for each objective. In most existing studies, specific preferences must be provided during deployment to indicate the desired policies explicitly. However, designing these preferences depends heavily on human prior knowledge, which is typically obtained through extensive observation of high-performing demonstrations with expected behaviors. In this work, we propose a simple yet effective offline adaptation framework for multi-objective RL problems without assuming handcrafted target preferences, but only given several demonstrations to implicitly indicate the preferences of expected policies. Additionally, we demonstrate that our framework can naturally be extended to meet constraints on safety-critical objectives by utilizing safe demonstrations, even when the safety thresholds are unknown. Empirical results on offline multi-objective and safe tasks demonstrate the capability of our framework to infer policies that align with real preferences while meeting the constraints implied by the provided demonstrations.
An Offline Adaptation Framework for Constrained Multi-Objective Reinforcement Learning
[ "Qian Lin", "Zongkai Liu", "Danying Mo", "Chao Yu" ]
NeurIPS.cc/2024/Conference
2409.09958
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QAiKLaCrKj
@inproceedings{ cui2024cherry, title={Cherry on Top: Parameter Heterogeneity and Quantization in Large Language Models}, author={Wanyun Cui and Qianle Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QAiKLaCrKj} }
This paper reveals the phenomenon of parameter heterogeneity in large language models (LLMs). We find that a small subset of ``cherry'' parameters exhibit a disproportionately large influence on model performance, while the vast majority of parameters have minimal impact. This heterogeneity is found to be prevalent across different model families, scales, and types. Motivated by this observation, we propose CherryQ, a novel quantization method that unifies the optimization of mixed-precision parameters. CherryQ identifies and preserves the critical cherry parameters in high precision while aggressively quantizing the remaining parameters to low precision. Extensive experiments demonstrate the effectiveness of CherryQ. CherryQ outperforms existing quantization approaches in terms of perplexity and downstream task performance. Notably, our 3-bit quantized Vicuna-1.5 exhibits competitive performance compared to their 16-bit counterparts. These findings highlight the potential of CherryQ for enabling efficient deployment of LLMs by taking advantage of parameter heterogeneity.
Cherry on Top: Parameter Heterogeneity and Quantization in Large Language Models
[ "Wanyun Cui", "Qianle Wang" ]
NeurIPS.cc/2024/Conference
2404.02837
[ "" ]
https://huggingface.co/papers/2404.02837
0
0
0
2
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=QAbhLBF72K
@inproceedings{ zhao2024what, title={What makes unlearning hard and what to do about it}, author={Kairan Zhao and Meghdad Kurmanji and George-Octavian B{\u{a}}rbulescu and Eleni Triantafillou and Peter Triantafillou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QAbhLBF72K} }
Machine unlearning is the problem of removing the effect of a subset of training data (the ``forget set'') from a trained model without damaging the model's utility, e.g., to comply with users' requests to delete their data, or remove mislabeled, poisoned or otherwise problematic data. With unlearning research still in its infancy, many fundamental open questions exist: Are there interpretable characteristics of forget sets that substantially affect the difficulty of the problem? How do these characteristics affect different state-of-the-art algorithms? With this paper, we present the first investigation aiming to answer these questions. We identify two key factors affecting unlearning difficulty and the performance of unlearning algorithms. Evaluation on forget sets that isolate these identified factors reveals previously-unknown behaviours of state-of-the-art algorithms that don't materialize on random forget sets. Based on our insights, we develop a framework coined Refined-Unlearning Meta-algorithm (RUM) that encompasses: (i) refining the forget set into homogenized subsets, according to different characteristics; and (ii) a meta-algorithm that employs existing algorithms to unlearn each subset and finally delivers a model that has unlearned the overall forget set. We find that RUM substantially improves top-performing unlearning algorithms. Overall, we view our work as an important step in (i) deepening our scientific understanding of unlearning and (ii) revealing new pathways to improving the state-of-the-art.
What makes unlearning hard and what to do about it
[ "Kairan Zhao", "Meghdad Kurmanji", "George-Octavian Bărbulescu", "Eleni Triantafillou", "Peter Triantafillou" ]
NeurIPS.cc/2024/Conference
2406.01257
[ "https://github.com/kairanzhao/rum" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=QAEnr5j172
@inproceedings{ hu2024fashionrr, title={FashionR2R: Texture-preserving Rendered-to-Real Image Translation with Diffusion Models}, author={Rui Hu and Qian He and Gaofeng He and Jiedong Zhuang and Huang Chen and Huafeng Liu and Huamin Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=QAEnr5j172} }
Modeling and producing lifelike clothed human images has attracted attention from researchers in different areas for decades, owing to the complexity of its highly articulated and structured content. Rendering algorithms decompose and simulate the imaging process of a camera, but are limited by the accuracy of the modeled variables and the efficiency of computation. Generative models can produce impressively vivid human images, but still lack controllability and editability. This paper studies photorealism enhancement of rendered images, leveraging generative power from diffusion models on the controlled basis of rendering. We introduce a novel framework to translate rendered images into their realistic counterparts, which consists of two stages: Domain Knowledge Injection (DKI) and Realistic Image Generation (RIG). In DKI, we adopt positive (real) domain finetuning and negative (rendered) domain embedding to inject knowledge into a pretrained Text-to-image (T2I) diffusion model. In RIG, we generate the realistic image corresponding to the input rendered image, with a Texture-preserving Attention Control (TAC) to preserve fine-grained clothing textures, exploiting the decoupled features encoded in the UNet structure. Additionally, we introduce the SynFashion dataset, featuring high-quality digital clothing images with diverse textures. Extensive experimental results demonstrate the superiority and effectiveness of our method in rendered-to-real image translation.
FashionR2R: Texture-preserving Rendered-to-Real Image Translation with Diffusion Models
[ "Rui Hu", "Qian He", "Gaofeng He", "Jiedong Zhuang", "Huang Chen", "Huafeng Liu", "Huamin Wang" ]
NeurIPS.cc/2024/Conference
2410.14429
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Q8yfhrBBD8
@inproceedings{ zhu2024bridgeif, title={Bridge-{IF}: Learning Inverse Protein Folding with Markov Bridges}, author={Yiheng Zhu and Jialu Wu and Qiuyi Li and Jiahuan Yan and Mingze Yin and Wei Wu and Mingyang Li and Jieping Ye and Zheng Wang and Jian Wu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Q8yfhrBBD8} }
Inverse protein folding is a fundamental task in computational protein design, which aims to design protein sequences that fold into the desired backbone structures. While the development of machine learning algorithms for this task has seen significant success, the prevailing approaches, which predominantly employ a discriminative formulation, frequently encounter the error accumulation issue and often fail to capture the extensive variety of plausible sequences. To fill these gaps, we propose Bridge-IF, a generative diffusion bridge model for inverse folding, which is designed to learn the probabilistic dependency between the distributions of backbone structures and protein sequences. Specifically, we harness an expressive structure encoder to propose a discrete, informative prior derived from structures, and establish a Markov bridge to connect this prior with native sequences. During the inference stage, Bridge-IF progressively refines the prior sequence, culminating in a more plausible design. Moreover, we introduce a reparameterization perspective on Markov bridge models, from which we derive a simplified loss function that facilitates more effective training. We also modulate protein language models (PLMs) with structural conditions to precisely approximate the Markov bridge process, thereby significantly enhancing generation performance while maintaining parameter-efficient training. Extensive experiments on well-established benchmarks demonstrate that Bridge-IF predominantly surpasses existing baselines in sequence recovery and excels in the design of plausible proteins with high foldability. The code is available at https://github.com/violet-sto/Bridge-IF.
Bridge-IF: Learning Inverse Protein Folding with Markov Bridges
[ "Yiheng Zhu", "Jialu Wu", "Qiuyi Li", "Jiahuan Yan", "Mingze Yin", "Wei Wu", "Mingyang Li", "Jieping Ye", "Zheng Wang", "Jian Wu" ]
NeurIPS.cc/2024/Conference
2411.02120
[ "https://github.com/violet-sto/bridge-if" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Q8Z04XhDdL
@inproceedings{ zhu2024moe, title={MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks}, author={Xingkui Zhu and Yiran Guan and Dingkang Liang and Yuchao Chen and Yuliang Liu and Xiang Bai}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Q8Z04XhDdL} }
The sparsely activated mixture of experts (MoE) model presents an effective alternative to densely activated (dense) models, combining improved accuracy with computational efficiency. However, training MoE models from scratch requires extensive data and computational resources, a challenge that limits their widespread adoption. To address this, we introduce MoE Jetpack, a framework designed to fine-tune the abundant and easily accessible dense checkpoints into MoE models. MoE Jetpack incorporates two key techniques: (1) **checkpoint recycling**, which initializes MoE models with dense checkpoints to accelerate convergence and enhance accuracy, minimizing the need for extensive pre-training; (2) the **hyperspherical adaptive MoE (SpheroMoE) layer**, which optimizes the MoE architecture to enhance fine-tuning performance and efficiency. Experimental results indicate that MoE Jetpack doubles the convergence speed and enhances accuracy by 2.8% on ImageNet-1K. On smaller datasets, it achieves up to 8-fold faster convergence and over 30% accuracy gains, highlighting its efficiency. The code is available at https://github.com/Adlith/MoE-Jetpack.
MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks
[ "Xingkui Zhu", "Yiran Guan", "Dingkang Liang", "Yuchao Chen", "Yuliang Liu", "Xiang Bai" ]
NeurIPS.cc/2024/Conference
2406.04801
[ "https://github.com/adlith/moe-jetpack" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Q7s8mFWqsx
@inproceedings{ he2024learning, title={Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training}, author={Haoran He and Chenjia Bai and Ling Pan and Weinan Zhang and Bin Zhao and Xuelong Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Q7s8mFWqsx} }
Learning a generalist embodied agent capable of completing multiple tasks poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets. In contrast, a vast amount of human videos exist, capturing intricate tasks and interactions with the physical world. Promising prospects arise for utilizing actionless human videos for pre-training and transferring the knowledge to facilitate robot policy learning through limited robot demonstrations. However, it remains a challenge due to the domain gap between humans and robots. Moreover, it is difficult to extract useful information representing the dynamic world from human videos, because of its noisy and multimodal data structure. In this paper, we introduce a novel framework to tackle these challenges, which leverages a unified discrete diffusion to combine generative pre-training on human videos and policy fine-tuning on a small number of action-labeled robot videos. We start by compressing both human and robot videos into unified video tokens. In the pre-training stage, we employ a discrete diffusion model with a mask-and-replace diffusion strategy to predict future video tokens in the latent space. In the fine-tuning stage, we harness the imagined future videos to guide low-level action learning with a limited set of robot data. Experiments demonstrate that our method generates high-fidelity future videos for planning and enhances the fine-tuned policies compared to previous state-of-the-art approaches with superior performance.
Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training
[ "Haoran He", "Chenjia Bai", "Ling Pan", "Weinan Zhang", "Bin Zhao", "Xuelong Li" ]
NeurIPS.cc/2024/Conference
2402.14407
[ "" ]
https://huggingface.co/papers/2402.14407
0
1
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=Q74JVgKCP6
@inproceedings{ glaser2024nearoptimality, title={Near-Optimality of Contrastive Divergence Algorithms}, author={Pierre Glaser and Kevin Han Huang and Arthur Gretton}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Q74JVgKCP6} }
We provide a non-asymptotic analysis of the contrastive divergence (CD) algorithm, a training method for unnormalized models. While prior work has established that (for exponential family distributions) the CD iterates asymptotically converge at an $O(n^{-1 / 3})$ rate to the true parameter of the data distribution, we show that CD can achieve the parametric rate $O(n^{-1 / 2})$. Our analysis provides results for various data batching schemes, including fully online and minibatch. We additionally show that CD is near-optimal, in the sense that its asymptotic variance is close to the Cramér-Rao lower bound.
Near-Optimality of Contrastive Divergence Algorithms
[ "Pierre Glaser", "Kevin Han Huang", "Arthur Gretton" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Q5e3ftQ3q3
@inproceedings{ hou2024almost, title={Almost Minimax Optimal Best Arm Identification in Piecewise Stationary Linear Bandits}, author={Yunlong Hou and Vincent Y. F. Tan and Zixin Zhong}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Q5e3ftQ3q3} }
We propose a novel piecewise stationary linear bandit (PSLB) model, where the environment randomly samples a context from an unknown probability distribution at each changepoint, and the quality of an arm is measured by its return averaged over all contexts. The contexts and their distribution, as well as the changepoints are unknown to the agent. We design Piecewise-Stationary $\varepsilon$-Best Arm Identification$^+$ (PS$\varepsilon$BAI$^+$), an algorithm that is guaranteed to identify an $\varepsilon$-optimal arm with probability $\ge 1-\delta$ and with a minimal number of samples. PS$\varepsilon$BAI$^+$ consists of two subroutines, PS$\varepsilon$BAI and Naïve $\varepsilon$-BAI (N$\varepsilon$BAI), which are executed in parallel. PS$\varepsilon$BAI actively detects changepoints and aligns contexts to facilitate the arm identification process. When PS$\varepsilon$BAI and N$\varepsilon$BAI are utilized judiciously in parallel, PS$\varepsilon$BAI$^+$ is shown to have a finite expected sample complexity. By proving a lower bound, we show the expected sample complexity of PS$\varepsilon$BAI$^+$ is optimal up to a logarithmic factor. We compare PS$\varepsilon$BAI$^+$ to baseline algorithms using numerical experiments which demonstrate its efficiency. Both our analytical and numerical results corroborate that the efficacy of PS$\varepsilon$BAI$^+$ is due to the delicate change detection and context alignment procedures embedded in PS$\varepsilon$BAI.
Almost Minimax Optimal Best Arm Identification in Piecewise Stationary Linear Bandits
[ "Yunlong Hou", "Vincent Y. F. Tan", "Zixin Zhong" ]
NeurIPS.cc/2024/Conference
2410.07638
[ "https://github.com/Y-Hou/BAI-in-PSLB" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Q5RYn6jagC
@inproceedings{ campbell2024understanding, title={Understanding the Limits of Vision Language Models Through the Lens of the Binding Problem}, author={Declan Iain Campbell and Sunayana Rane and Tyler Giallanza and C. Nicol{\`o} De Sabbata and Kia Ghods and Amogh Joshi and Alexander Ku and Steven M Frankland and Thomas L. Griffiths and Jonathan D. Cohen and Taylor Whittington Webb}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Q5RYn6jagC} }
Recent work has documented striking heterogeneity in the performance of state-of-the-art vision language models (VLMs), including both multimodal language models and text-to-image models. These models are able to describe and generate a diverse array of complex, naturalistic images, yet they exhibit surprising failures on basic multi-object reasoning tasks -- such as counting, localization, and simple forms of visual analogy -- that humans perform with near perfect accuracy. To better understand this puzzling pattern of successes and failures, we turn to theoretical accounts of the binding problem in cognitive science and neuroscience, a fundamental problem that arises when a shared set of representational resources must be used to represent distinct entities (e.g., to represent multiple objects in an image), necessitating the use of serial processing to avoid interference. We find that many of the puzzling failures of state-of-the-art VLMs can be explained as arising due to the binding problem, and that these failure modes are strikingly similar to the limitations exhibited by rapid, feedforward processing in the human brain.
Understanding the Limits of Vision Language Models Through the Lens of the Binding Problem
[ "Declan Iain Campbell", "Sunayana Rane", "Tyler Giallanza", "C. Nicolò De Sabbata", "Kia Ghods", "Amogh Joshi", "Alexander Ku", "Steven M Frankland", "Thomas L. Griffiths", "Jonathan D. Cohen", "Taylor Whittington Webb" ]
NeurIPS.cc/2024/Conference
2411.00238
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Q4QUCN2ioc
@inproceedings{ jin2024an, title={An End-To-End Graph Attention Network Hashing for Cross-Modal Retrieval}, author={Huilong Jin and Yingxue Zhang and Lei Shi and Shuang Zhang and Feifei Kou and Jiapeng Yang and Chuangying Zhu and Jia Luo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Q4QUCN2ioc} }
Due to its low storage cost and fast search speed, cross-modal retrieval based on hashing has attracted widespread attention and is widely used in real-world applications of social media search. However, most existing hashing methods are limited by incomplete feature representations and semantic associations, which greatly restricts their performance and applicability in practical applications. To deal with this challenge, in this paper, we propose an end-to-end graph attention network hashing (EGATH) for cross-modal retrieval, which can not only capture direct semantic associations between images and texts but also match semantic content between different modalities. We adopt contrastive language-image pretraining (CLIP) combined with a Transformer to improve understanding and generalization of semantic consistency across different data modalities. A classifier based on a graph attention network is applied to obtain predicted labels to enhance cross-modal feature representation. We construct hash codes using an optimization strategy and loss function to preserve the semantic information and compactness of the hash codes. Comprehensive experiments on the NUS-WIDE, MIRFlickr25K, and MS-COCO benchmark datasets show that our EGATH significantly outperforms several state-of-the-art methods.
An End-To-End Graph Attention Network Hashing for Cross-Modal Retrieval
[ "Huilong Jin", "Yingxue Zhang", "Lei Shi", "Shuang Zhang", "Feifei Kou", "Jiapeng Yang", "Chuangying Zhu", "Jia Luo" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Q4NWfStqVf
@inproceedings{ lee2024nearly, title={Nearly Minimax Optimal Regret for Multinomial Logistic Bandit}, author={Joongkyu Lee and Min-hwan Oh}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Q4NWfStqVf} }
In this paper, we study the contextual multinomial logit (MNL) bandit problem in which a learning agent sequentially selects an assortment based on contextual information, and user feedback follows an MNL choice model. There has been a significant discrepancy between lower and upper regret bounds, particularly regarding the maximum assortment size $K$. Additionally, the variation in reward structures between these bounds complicates the quest for optimality. Under uniform rewards, where all items have the same expected reward, we establish a regret lower bound of $\Omega(d\sqrt{\smash[b]{T/K}})$ and propose a constant-time algorithm, OFU-MNL+, that achieves a matching upper bound of $\tilde{\mathcal{O}}(d\sqrt{\smash[b]{T/K}})$. We also provide instance-dependent minimax regret bounds under uniform rewards. Under non-uniform rewards, we prove a lower bound of $\Omega(d\sqrt{T})$ and an upper bound of $\tilde{\mathcal{O}}(d\sqrt{T})$, also achievable by OFU-MNL+. Our empirical studies support these theoretical findings. To the best of our knowledge, this is the first work in the contextual MNL bandit literature to prove minimax optimality --- for either uniform or non-uniform reward setting --- and to propose a computationally efficient algorithm that achieves this optimality up to logarithmic factors.
Nearly Minimax Optimal Regret for Multinomial Logistic Bandit
[ "Joongkyu Lee", "Min-hwan Oh" ]
NeurIPS.cc/2024/Conference
2405.09831
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Q0KwoyZlSo
@inproceedings{ joshi2024on, title={On the Complexity of Learning Sparse Functions with Statistical and Gradient Queries}, author={Nirmit Joshi and Theodor Misiakiewicz and Nathan Srebro}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Q0KwoyZlSo} }
The goal of this paper is to investigate the complexity of gradient algorithms when learning sparse functions (juntas). We introduce a type of Statistical Queries ($\mathsf{SQ}$), which we call Differentiable Learning Queries ($\mathsf{DLQ}$), to model gradient queries on a specified loss with respect to an arbitrary model. We provide a tight characterization of the query complexity of $\mathsf{DLQ}$ for learning the support of a sparse function over generic product distributions. This complexity crucially depends on the loss function. For the squared loss, $\mathsf{DLQ}$ matches the complexity of Correlation Statistical Queries $(\mathsf{CSQ})$—potentially much worse than $\mathsf{SQ}$. But for other simple loss functions, including the $\ell_1$ loss, $\mathsf{DLQ}$ always achieves the same complexity as $\mathsf{SQ}$. We also provide evidence that $\mathsf{DLQ}$ can indeed capture learning with (stochastic) gradient descent by showing it correctly describes the complexity of learning with a two-layer neural network in the mean field regime and linear scaling.
On the Complexity of Learning Sparse Functions with Statistical and Gradient Queries
[ "Nirmit Joshi", "Theodor Misiakiewicz", "Nathan Srebro" ]
NeurIPS.cc/2024/Conference
2407.05622
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PzG7xVlYqm
@inproceedings{ roy2024on, title={On the Computational Complexity of Private High-dimensional Model Selection}, author={Saptarshi Roy and Zehua Wang and Ambuj Tewari}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PzG7xVlYqm} }
We consider the problem of model selection in a high-dimensional sparse linear regression model under privacy constraints. We propose a differentially private (DP) best subset selection method with strong statistical utility properties by adopting the well-known exponential mechanism for selecting the best model. To achieve computational expediency, we propose an efficient Metropolis-Hastings algorithm and under certain regularity conditions, we establish that it enjoys polynomial mixing time to its stationary distribution. As a result, we also establish both approximate differential privacy and statistical utility for the estimates of the mixed Metropolis-Hastings chain. Finally, we perform some illustrative experiments on simulated data showing that our algorithm can quickly identify active features under reasonable privacy budget constraints.
On the Computational Complexity of Private High-dimensional Model Selection
[ "Saptarshi Roy", "Zehua Wang", "Ambuj Tewari" ]
NeurIPS.cc/2024/Conference
2310.07852
[ "https://github.com/roysaptaumich/dp-bss" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PyTkA6HkzX
@inproceedings{ straitouri2024controlling, title={Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets}, author={Eleni Straitouri and Suhas Thejaswi and Manuel Gomez Rodriguez}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PyTkA6HkzX} }
Decision support systems based on prediction sets help humans solve multiclass classification tasks by narrowing down the set of potential label values to a subset of them, namely a prediction set, and asking them to always predict label values from the prediction sets. While this type of system has been proven to be effective at improving the average accuracy of the predictions made by humans, by restricting human agency, such systems may cause harm---a human who has succeeded at predicting the ground-truth label of an instance on their own may have failed had they used these systems. In this paper, our goal is to control how frequently a decision support system based on prediction sets may cause harm, by design. To this end, we start by characterizing the above notion of harm using the theoretical framework of structural causal models. Then, we show that, under a natural, albeit unverifiable, monotonicity assumption, we can estimate how frequently a system may cause harm using only predictions made by humans on their own. Further, we also show that, under a weaker monotonicity assumption, which can be verified experimentally, we can bound how frequently a system may cause harm again using only predictions made by humans on their own. Building upon these assumptions, we introduce a computational framework to design decision support systems based on prediction sets that are guaranteed to cause harm less frequently than a user-specified value using conformal risk control. We validate our framework using real human predictions from two different human subject studies and show that, in decision support systems based on prediction sets, there is a trade-off between accuracy and counterfactual harm.
Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets
[ "Eleni Straitouri", "Suhas Thejaswi", "Manuel Gomez Rodriguez" ]
NeurIPS.cc/2024/Conference
2406.06671
[ "https://github.com/Networks-Learning/controlling-counterfactual-harm-prediction-sets" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Px1hQM72iX
@inproceedings{ wu2024densitybased, title={Density-based User Representation using Gaussian Process Regression for Multi-interest Personalized Retrieval}, author={Haolun Wu and Ofer Meshi and Masrour Zoghi and Fernando Diaz and Xue Liu and Craig Boutilier and MARYAM KARIMZADEHGAN}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Px1hQM72iX} }
Accurate modeling of the diverse and dynamic interests of users remains a significant challenge in the design of personalized recommender systems. Existing user modeling methods, like single-point and multi-point representations, have limitations w.r.t.\ accuracy, diversity, and adaptability. To overcome these deficiencies, we introduce density-based user representations (DURs), a novel method that leverages Gaussian process regression (GPR) for effective multi-interest recommendation and retrieval. Our approach, GPR4DUR, exploits DURs to capture user interest variability without manual tuning, incorporates uncertainty-awareness, and scales well to large numbers of users. Experiments using real-world offline datasets confirm the adaptability and efficiency of GPR4DUR, while online experiments with simulated users demonstrate its ability to address the exploration-exploitation trade-off by effectively utilizing model uncertainty.
Density-based User Representation using Gaussian Process Regression for Multi-interest Personalized Retrieval
[ "Haolun Wu", "Ofer Meshi", "Masrour Zoghi", "Fernando Diaz", "Xue Liu", "Craig Boutilier", "MARYAM KARIMZADEHGAN" ]
NeurIPS.cc/2024/Conference
2310.20091
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Pwl9n4zlf5
@inproceedings{ chen2024automanual, title={AutoManual: Generating Instruction Manuals by {LLM} Agents via Interactive Environmental Learning}, author={Minghao Chen and Yihang Li and Yanting Yang and Shiyu Yu and Binbin Lin and Xiaofei He}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Pwl9n4zlf5} }
Large Language Models (LLM) based agents have shown promise in autonomously completing tasks across various domains, e.g., robotics, games, and web navigation. However, these agents typically require elaborate design and expert prompts to solve tasks in specific domains, which limits their adaptability. We introduce AutoManual, a framework enabling LLM agents to autonomously build their understanding through interaction and adapt to new environments. AutoManual categorizes environmental knowledge into diverse rules and optimizes them in an online fashion by two agents: 1) The Planner codes actionable plans based on current rules for interacting with the environment. 2) The Builder updates the rules through a well-structured rule system that facilitates online rule management and essential detail retention. To mitigate hallucinations in managing rules, we introduce a *case-conditioned prompting* strategy for the Builder. Finally, the Formulator agent compiles these rules into a comprehensive manual. The self-generated manual can not only improve the adaptability but also guide the planning of smaller LLMs while being human-readable. Given only one simple demonstration, AutoManual significantly improves task success rates, achieving 97.4\% with GPT-4-turbo and 86.2\% with GPT-3.5-turbo on ALFWorld benchmark tasks. The code is available at https://github.com/minghchen/automanual.
AutoManual: Generating Instruction Manuals by LLM Agents via Interactive Environmental Learning
[ "Minghao Chen", "Yihang Li", "Yanting Yang", "Shiyu Yu", "Binbin Lin", "Xiaofei He" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PvoxbjcRPT
@inproceedings{ zhu2024madiff, title={{MAD}iff: Offline Multi-agent Learning with Diffusion Models}, author={Zhengbang Zhu and Minghuan Liu and Liyuan Mao and Bingyi Kang and Minkai Xu and Yong Yu and Stefano Ermon and Weinan Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PvoxbjcRPT} }
Offline reinforcement learning (RL) aims to learn policies from pre-existing datasets without further interactions, making it a challenging task. Q-learning algorithms struggle with extrapolation errors in offline settings, while supervised learning methods are constrained by model expressiveness. Recently, diffusion models (DMs) have shown promise in overcoming these limitations in single-agent learning, but their application in multi-agent scenarios remains unclear. Generating trajectories for each agent with independent DMs may impede coordination, while concatenating all agents’ information can lead to low sample efficiency. Accordingly, we propose MADiff, which is realized with an attention-based diffusion model to model the complex coordination among behaviors of multiple agents. To our knowledge, MADiff is the first diffusion-based multi-agent learning framework, functioning as both a decentralized policy and a centralized controller. During decentralized executions, MADiff simultaneously performs teammate modeling, and the centralized controller can also be applied in multi-agent trajectory predictions. Our experiments demonstrate that MADiff outperforms baseline algorithms across various multi-agent learning tasks, highlighting its effectiveness in modeling complex multi-agent interactions.
MADiff: Offline Multi-agent Learning with Diffusion Models
[ "Zhengbang Zhu", "Minghuan Liu", "Liyuan Mao", "Bingyi Kang", "Minkai Xu", "Yong Yu", "Stefano Ermon", "Weinan Zhang" ]
NeurIPS.cc/2024/Conference
2305.17330
[ "https://github.com/zbzhu99/madiff" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PukaVAwYBo
@inproceedings{ ren2024learning, title={Learning and Transferring Sparse Contextual Bigrams with Linear Transformers}, author={Yunwei Ren and Zixuan Wang and Jason D. Lee}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PukaVAwYBo} }
Transformers have achieved significant success in natural language modeling because of their exceptional capabilities to combine contextual information and global knowledge, yet their theoretical basis remains unclear. In this paper, we first propose Sparse Contextual Bigram (SCB), a natural extension to the classical bigram model, where the generation of the next token depends on a sparse set of earlier positions determined by the last token. We investigate the training dynamics and sample complexity of learning SCB using a one-layer linear transformer with a gradient-based algorithm. We show that when trained from scratch, the training process can be split into an initial sample-intensive stage where the correlation is boosted from zero to a nontrivial value, followed by a more sample-efficient stage of further improvement. Additionally, we prove that, provided a nontrivial correlation between the downstream and pretraining tasks, finetuning from a pretrained model allows us to bypass the initial sample-intensive stage. We also empirically demonstrate that our algorithm can outperform SGD in our setting.
Learning and Transferring Sparse Contextual Bigrams with Linear Transformers
[ "Yunwei Ren", "Zixuan Wang", "Jason D. Lee" ]
NeurIPS.cc/2024/Conference
2410.23438
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PuXYI4HOQU
@inproceedings{ khanh2024fundamental, title={Fundamental Convergence Analysis of Sharpness-Aware Minimization}, author={Pham Duy Khanh and Hoang-Chau Luong and Boris Mordukhovich and Dat Ba Tran}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PuXYI4HOQU} }
The paper investigates the fundamental convergence properties of Sharpness-Aware Minimization (SAM), a recently proposed gradient-based optimization method (Foret et al., 2021) that significantly improves the generalization of deep neural networks. The convergence properties including the stationarity of accumulation points, the convergence of the sequence of gradients to the origin, the sequence of function values to the optimal value, and the sequence of iterates to the optimal solution are established for the method. The universality of the provided convergence analysis based on inexact gradient descent frameworks (Khanh et al., 2023b) allows its extensions to the normalized versions of SAM such as F-SAM (Li et al. 2024), VaSSO (Li & Giannakis, 2023), RSAM (Liu et al., 2022), and to the unnormalized versions of SAM such as USAM (Andriushchenko & Flammarion, 2022). Numerical experiments are conducted on classification tasks using deep learning models to confirm the practical aspects of our analysis.
Fundamental Convergence Analysis of Sharpness-Aware Minimization
[ "Pham Duy Khanh", "Hoang-Chau Luong", "Boris Mordukhovich", "Dat Ba Tran" ]
NeurIPS.cc/2024/Conference
2401.08060
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PtD4aZPzcR
@inproceedings{ aamand2024statisticalcomputational, title={Statistical-Computational Trade-offs for Density Estimation}, author={Anders Aamand and Alexandr Andoni and Justin Y. Chen and Piotr Indyk and Shyam Narayanan and Sandeep Silwal and Haike Xu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PtD4aZPzcR} }
We study the density estimation problem defined as follows: given $k$ distributions $p_1, \ldots, p_k$ over a discrete domain $[n]$, as well as a collection of samples chosen from a "query" distribution $q$ over $[n]$, output $p_i$ that is "close" to $q$. Recently Aamand et al. gave the first and only known result that achieves sublinear bounds in both the sampling complexity and the query time while preserving polynomial data structure space. However, their improvement over linear samples and time is only by subpolynomial factors. Our main result is a lower bound showing that, for a broad class of data structures, their bounds cannot be significantly improved. In particular, if an algorithm uses $O(n/\log^c k)$ samples for some constant $c>0$ and polynomial space, then the query time of the data structure must be at least $k^{1-O(1)/\log \log k}$, i.e., close to linear in the number of distributions $k$. This is a novel statistical-computational trade-off for density estimation, demonstrating that any data structure must use close to a linear number of samples or take close to linear query time. The lower bound holds even in the realizable case where $q=p_i$ for some $i$, and when the distributions are flat (specifically, all distributions are uniform over half of the domain $[n]$). We also give a simple data structure for our lower bound instance with asymptotically matching upper bounds. Experiments show that the data structure is quite efficient in practice.
Statistical-Computational Trade-offs for Density Estimation
[ "Anders Aamand", "Alexandr Andoni", "Justin Y. Chen", "Piotr Indyk", "Shyam Narayanan", "Sandeep Silwal", "Haike Xu" ]
NeurIPS.cc/2024/Conference
2410.23087
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PsPR4NOiRC
@inproceedings{ yang2024generative, title={Generative Hierarchical Materials Search}, author={Sherry Yang and Simon Batzner and Ruiqi Gao and Muratahan Aykol and Alexander L Gaunt and Brendan McMorrow and Danilo Jimenez Rezende and Dale Schuurmans and Igor Mordatch and Ekin Dogus Cubuk}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PsPR4NOiRC} }
Generative models trained at scale can now produce novel text, video, and more recently, scientific data such as crystal structures. The ultimate goal for materials discovery, however, goes beyond generation: we desire a fully automated system that proposes, generates, and verifies crystal structures given a high-level user instruction. In this work, we formulate end-to-end language-to-structure generation as a multi-objective optimization problem, and propose Generative Hierarchical Materials Search (GenMS) for controllable generation of crystal structures. GenMS consists of (1) a language model that takes high-level natural language as input and generates intermediate textual information about a crystal (e.g., chemical formulae), and (2) a diffusion model that takes intermediate information as input and generates low-level continuous value crystal structures. GenMS additionally uses a graph neural network to predict properties (e.g., formation energy) from the generated crystal structures. During inference, GenMS leverages all three components to conduct a forward tree search over the space of possible structures. Experiments show that GenMS outperforms other alternatives both in satisfying user request and in generating low-energy structures. GenMS is able to generate complex structures such as double perovskites (or elpasolites), layered structures, and spinels, solely from natural language input.
Generative Hierarchical Materials Search
[ "Sherry Yang", "Simon Batzner", "Ruiqi Gao", "Muratahan Aykol", "Alexander L Gaunt", "Brendan McMorrow", "Danilo Jimenez Rezende", "Dale Schuurmans", "Igor Mordatch", "Ekin Dogus Cubuk" ]
NeurIPS.cc/2024/Conference
2409.06762
[ "" ]
https://huggingface.co/papers/2409.06762
2
6
4
10
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=Prw98p1nV0
@inproceedings{ xu2024sharpnessaware, title={Sharpness-Aware Minimization Activates the Interactive Teaching's Understanding and Optimization}, author={Mingwei Xu and Xiaofeng Cao and Ivor Tsang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Prw98p1nV0} }
Teaching is a potentially effective approach for understanding interactions among multiple intelligences. Previous explorations have convincingly shown that teaching presents additional opportunities for observation and demonstration within the learning model, such as data distillation and selection. However, the underlying optimization principles and convergence of interactive teaching lack theoretical analysis, and in this regard co-teaching serves as a notable prototype. In this paper, we discuss its role as a reduction of the larger loss landscape derived from Sharpness-Aware Minimization (SAM). Then, we classify it as an iterative parameter estimation process using Expectation-Maximization. The convergence of this typical interactive teaching is achieved by continuously optimizing a variational lower bound on the log marginal likelihood. This lower bound represents the expected value of the log posterior distribution of the latent variables under a scaled, factorized variational distribution. To further enhance interactive teaching's performance, we incorporate SAM's strong generalization information into interactive teaching, referred to as Sharpness Reduction Interactive Teaching (SRIT). This integration can be viewed as a novel sequential optimization process. Finally, we validate the performance of our approach through multiple experiments.
Sharpness-Aware Minimization Activates the Interactive Teaching's Understanding and Optimization
[ "Mingwei Xu", "Xiaofeng Cao", "Ivor Tsang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PquRXu9pQ6
@inproceedings{ zhang2024extending, title={Extending Multi-modal Contrastive Representations}, author={Ziang Zhang and Zehan Wang and Luping Liu and Rongjie Huang and Xize Cheng and Zhenhui Ye and Wang Lin and Huadai Liu and Haifeng Huang and Yang Zhao and Tao Jin and Siqi Zheng and Zhou Zhao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PquRXu9pQ6} }
Multi-modal contrastive representation (MCR) of more than three modalities is critical in multi-modal learning. Although recent methods showcase impressive achievements, the high dependence on large-scale, high-quality paired data and the expensive training costs limit their further development. Inspired by recent C-MCR, this paper proposes $\textbf{Ex}$tending $\textbf{M}$ultimodal $\textbf{C}$ontrastive $\textbf{R}$epresentation (Ex-MCR), a training-efficient and paired-data-free method to build unified contrastive representation for many modalities. Since C-MCR is designed to learn a new latent space for the two non-overlapping modalities and projects them onto this space, a significant amount of information from their original spaces is lost in the projection process. To address this issue, Ex-MCR proposes to extend one modality's space into the other's, rather than mapping both modalities onto a completely new space. This method effectively preserves semantic alignment in the original space. Experimentally, we extend pre-trained audio-text and 3D-image representations to the existing vision-text space. Without using paired data, Ex-MCR achieves comparable performance to advanced methods on a series of audio-image-text and 3D-image-text tasks and achieves superior performance when used in parallel with data-driven methods. Moreover, semantic alignment also emerges between the extended modalities (e.g., audio and 3D).
Extending Multi-modal Contrastive Representations
[ "Ziang Zhang", "Zehan Wang", "Luping Liu", "Rongjie Huang", "Xize Cheng", "Zhenhui Ye", "Wang Lin", "Huadai Liu", "Haifeng Huang", "Yang Zhao", "Tao Jin", "Siqi Zheng", "Zhou Zhao" ]
NeurIPS.cc/2024/Conference
2310.08884
[ "https://github.com/mcr-peft/ex-mcr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PqlKliEXyJ
@inproceedings{ zhu2024lodloc, title={LoD-Loc: Aerial Visual Localization using LoD 3D Map with Neural Wireframe Alignment}, author={Juelin Zhu and Shen Yan and Long Wang and zhang shengYue and Yu Liu and Maojun Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PqlKliEXyJ} }
We propose a new method named LoD-Loc for visual localization in the air. Unlike existing localization algorithms, LoD-Loc does not rely on complex 3D representations and can estimate the pose of an Unmanned Aerial Vehicle (UAV) using a Level-of-Detail (LoD) 3D map. LoD-Loc mainly achieves this goal by aligning the wireframe derived from the LoD projected model with that predicted by the neural network. Specifically, given a coarse pose provided by the UAV sensor, LoD-Loc hierarchically builds a cost volume for uniformly sampled pose hypotheses to describe pose probability distribution and select a pose with maximum probability. Each cost within this volume measures the degree of line alignment between projected and predicted wireframes. LoD-Loc also devises a 6-DoF pose optimization algorithm to refine the previous result with a differentiable Gaussian-Newton method. As no public dataset exists for the studied problem, we collect two datasets with map levels of LoD3.0 and LoD2.0, along with real RGB queries and ground-truth pose annotations. We benchmark our method and demonstrate that LoD-Loc achieves excellent performance, even surpassing current state-of-the-art methods that use textured 3D models for localization. The code and dataset will be made available upon publication.
LoD-Loc: Aerial Visual Localization using LoD 3D Map with Neural Wireframe Alignment
[ "Juelin Zhu", "Shen Yan", "Long Wang", "zhang shengYue", "Yu Liu", "Maojun Zhang" ]
NeurIPS.cc/2024/Conference
2410.12269
[ "https://github.com/VictorZoo/LoD-Loc" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Pox8jNQOo5
@inproceedings{ yu2024secondorder, title={Second-order forward-mode optimization of recurrent neural networks for neuroscience}, author={Youjing Yu and Rui Xia and Qingxi Ma and M{\'a}t{\'e} Lengyel and Guillaume Hennequin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Pox8jNQOo5} }
A common source of anxiety for the computational neuroscience student is the question “will my recurrent neural network (RNN) model finally learn that task?”. Unlike in machine learning where any architectural modification of an RNN (e.g. GRU or LSTM) is acceptable if it speeds up training, the RNN models trained as _models of brain dynamics_ are subject to plausibility constraints that fundamentally exclude the usual machine learning hacks. The “vanilla” RNNs commonly used in computational neuroscience find themselves plagued by ill-conditioned loss surfaces that complicate training and significantly hinder our capacity to investigate the brain dynamics underlying complex tasks. Moreover, some tasks may require very long time horizons which backpropagation cannot handle given typical GPU memory limits. Here, we develop SOFO, a second-order optimizer that efficiently navigates loss surfaces whilst _not_ requiring backpropagation. By relying instead on easily parallelized batched forward-mode differentiation, SOFO enjoys constant memory cost in time. Moreover, unlike most second-order optimizers which involve inherently sequential operations, SOFO's effective use of GPU parallelism yields a per-iteration wallclock time essentially on par with first-order gradient-based optimizers. We show vastly superior performance compared to Adam on a number of RNN tasks, including a difficult double-reaching motor task and the learning of an adaptive Kalman filter algorithm trained over a long horizon.
Second-order forward-mode optimization of recurrent neural networks for neuroscience
[ "Youjing Yu", "Rui Xia", "Qingxi Ma", "Máté Lengyel", "Guillaume Hennequin" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=Pojt9RWIjJ
@inproceedings{ zhang2024from, title={From Transparent to Opaque: Rethinking Neural Implicit Surfaces with $\alpha$-NeuS}, author={Haoran Zhang and Junkai Deng and Xuhui Chen and Fei Hou and Wencheng Wang and Hong Qin and Chen Qian and Ying He}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Pojt9RWIjJ} }
Traditional 3D shape reconstruction techniques from multi-view images, such as structure from motion and multi-view stereo, primarily focus on opaque surfaces. Similarly, recent advances in neural radiance fields and its variants also primarily address opaque objects, encountering difficulties with the complex lighting effects caused by transparent materials. This paper introduces $\alpha$-NeuS, a new method for simultaneously reconstructing thin transparent objects and opaque objects based on neural implicit surfaces (NeuS). Our method leverages the observation that transparent surfaces induce local extreme values in the learned distance fields during neural volumetric rendering, contrasting with opaque surfaces that align with zero level sets. Traditional iso-surfacing algorithms such as marching cubes, which rely on fixed iso-values, are ill-suited for this data. We address this by taking the absolute value of the distance field and developing an optimization method that extracts level sets corresponding to both non-negative local minima and zero iso-values. We prove that the reconstructed surfaces are unbiased for both transparent and opaque objects. To validate our approach, we construct a benchmark that includes both real-world and synthetic scenes, demonstrating its practical utility and effectiveness. Our data and code are publicly available at https://github.com/728388808/alpha-NeuS.
From Transparent to Opaque: Rethinking Neural Implicit Surfaces with α-NeuS
[ "Haoran Zhang", "Junkai Deng", "Xuhui Chen", "Fei Hou", "Wencheng Wang", "Hong Qin", "Chen Qian", "Ying He" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PoCs4jq7cV
@inproceedings{ eysenbach2024inference, title={Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference}, author={Benjamin Eysenbach and Vivek Myers and Russ Salakhutdinov and Sergey Levine}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PoCs4jq7cV} }
Given time series data, how can we answer questions like ``what will happen in the future?'' and ``how did we get here?'' These sorts of probabilistic inference questions are challenging when observations are high-dimensional. In this paper, we show how these questions can have compact, closed form solutions in terms of learned representations. The key idea is to apply a variant of contrastive learning to time series data. Prior work already shows that the representations learned by contrastive learning encode a probability ratio. By extending prior work to show that the marginal distribution over representations is Gaussian, we can then prove that joint distribution of representations is also Gaussian. Taken together, these results show that representations learned via temporal contrastive learning follow a Gauss-Markov chain, a graphical model where inference (e.g., prediction, planning) over representations corresponds to inverting a low-dimensional matrix. In one special case, inferring intermediate representations will be equivalent to interpolating between the learned representations. We validate our theory using numerical simulations on tasks up to 46-dimensions.
Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference
[ "Benjamin Eysenbach", "Vivek Myers", "Russ Salakhutdinov", "Sergey Levine" ]
NeurIPS.cc/2024/Conference
2403.04082
[ "https://github.com/vivekmyers/contrastive_planning" ]
https://huggingface.co/papers/2403.04082
0
0
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=Po7iQKKT5b
@inproceedings{ tangemann2024object, title={Object segmentation from common fate: Motion energy processing enables human-like zero-shot generalization to random dot stimuli}, author={Matthias Tangemann and Matthias Kuemmerer and Matthias Bethge}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Po7iQKKT5b} }
Humans excel at detecting and segmenting moving objects according to the {\it Gestalt} principle of “common fate”. Remarkably, previous works have shown that human perception generalizes this principle in a zero-shot fashion to unseen textures or random dots. In this work, we seek to better understand the computational basis for this capability by evaluating a broad range of optical flow models and a neuroscience inspired motion energy model for zero-shot figure-ground segmentation of random dot stimuli. Specifically, we use the extensively validated motion energy model proposed by Simoncelli and Heeger in 1998 which is fitted to neural recordings in cortex area MT. We find that a cross section of 40 deep optical flow models trained on different datasets struggle to estimate motion patterns in random dot videos, resulting in poor figure-ground segmentation performance. Conversely, the neuroscience-inspired model significantly outperforms all optical flow models on this task. For a direct comparison to human perception, we conduct a psychophysical study using a shape identification task as a proxy to measure human segmentation performance. All state-of-the-art optical flow models fall short of human performance, but only the motion energy model matches human capability. This neuroscience-inspired model successfully addresses the lack of human-like zero-shot generalization to random dot stimuli in current computer vision models, and thus establishes a compelling link between the Gestalt psychology of human object perception and cortical motion processing in the brain. Code, models and datasets are available at https://github.com/mtangemann/motion_energy_segmentation
Object segmentation from common fate: Motion energy processing enables human-like zero-shot generalization to random dot stimuli
[ "Matthias Tangemann", "Matthias Kuemmerer", "Matthias Bethge" ]
NeurIPS.cc/2024/Conference
2411.01505
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Pnv8C0bU9t
@inproceedings{ loeschcke2024loqt, title={Lo{QT}: Low-Rank Adapters for Quantized Pretraining}, author={Sebastian Bugge Loeschcke and Mads Toftrup and Michael Kastoryano and Serge Belongie and V{\'e}steinn Sn{\ae}bjarnarson}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Pnv8C0bU9t} }
Despite advances using low-rank adapters and quantization, pretraining of large models on consumer hardware has not been possible without model sharding, offloading during training, or per-layer gradient updates. To address these limitations, we propose Low-Rank Adapters for Quantized Training (LoQT), a method for efficiently training quantized models. LoQT uses gradient-based tensor factorization to initialize low-rank trainable weight matrices that are periodically merged into quantized full-rank weight matrices. Our approach is suitable for both pretraining and fine-tuning models. We demonstrate this for language modeling and downstream task adaptation, finding that LoQT enables efficient training of models up to 7B parameters on a 24GB GPU. We also demonstrate the feasibility of training a 13B model using per-layer gradient updates on the same hardware.
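A minimal sketch of the merge-into-quantized-weights idea is below. The toy round-to-nearest quantizer, the rank, and the adapter initialization are placeholders (the paper initializes the low-rank factors from gradient information and uses proper low-bit quantizers); the point is only to show trainable low-rank factors periodically folded back into a frozen quantized matrix.

```python
import torch

def quantize(w, bits=4):
    # Simple symmetric round-to-nearest quantization (stand-in for a real low-bit quantizer).
    scale = w.abs().max() / (2 ** (bits - 1) - 1)
    q = torch.round(w / scale).clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q, scale

def dequantize(q, scale):
    return q * scale

class LoQTLinearSketch(torch.nn.Module):
    def __init__(self, w, rank=16, bits=4):
        super().__init__()
        self.q, self.scale = quantize(w, bits)                 # frozen quantized weight
        out_f, in_f = w.shape
        self.A = torch.nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(out_f, rank))  # trainable low-rank factors
        self.bits = bits

    def forward(self, x):
        w = dequantize(self.q, self.scale)
        return x @ (w + self.B @ self.A).T

    @torch.no_grad()
    def merge_and_reset(self):
        # Periodically fold the low-rank update into the quantized full-rank weight.
        w = dequantize(self.q, self.scale) + self.B @ self.A
        self.q, self.scale = quantize(w, self.bits)
        self.B.zero_()                                         # restart the adapter
```

Only `A` and `B` receive gradients between merges, which is what keeps optimizer state and trainable-parameter memory small.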
LoQT: Low-Rank Adapters for Quantized Pretraining
[ "Sebastian Bugge Loeschcke", "Mads Toftrup", "Michael Kastoryano", "Serge Belongie", "Vésteinn Snæbjarnarson" ]
NeurIPS.cc/2024/Conference
2405.16528
[ "https://github.com/sebulo/LoQT" ]
https://huggingface.co/papers/2405.16528
2
3
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=PnlCHQrM69
@inproceedings{ ding2024semcoder, title={SemCoder: Training Code Language Models with Comprehensive Semantics Reasoning}, author={Yangruibo Ding and Jinjun Peng and Marcus J. Min and Gail Kaiser and Junfeng Yang and Baishakhi Ray}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PnlCHQrM69} }
Code Large Language Models (Code LLMs) have excelled at tasks like code completion but often miss deeper semantics such as execution effects and dynamic states. This paper aims to bridge the gap between Code LLMs' reliance on static text data and the need for semantic understanding for complex tasks like debugging and program repair. We introduce a novel strategy, _monologue reasoning_, to train Code LLMs to reason comprehensive semantics, encompassing high-level functional descriptions, local execution effects of individual statements, and overall input/output behavior, thereby linking static code text with dynamic execution states. We begin by collecting PyX, a clean Python corpus of fully executable code samples with functional descriptions and test cases. We propose training Code LLMs not only to write code but also to understand code semantics by reasoning about key properties, constraints, and execution behaviors using natural language, mimicking human verbal debugging, i.e., rubber-duck debugging. This approach led to the development of SemCoder, a Code LLM with only 6.7B parameters, which shows competitive performance with GPT-3.5-turbo on code generation and execution reasoning tasks. SemCoder achieves 79.3% on HumanEval (GPT-3.5-turbo: 76.8%), 63.6% on CRUXEval-I (GPT-3.5-turbo: 50.3%), and 63.9% on CRUXEval-O (GPT-3.5-turbo: 59.0%). We also study the effectiveness of SemCoder's monologue-style execution reasoning compared to concrete scratchpad reasoning, showing that our approach integrates semantics from multiple dimensions more smoothly. Finally, we demonstrate the potential of applying learned semantics to improve Code LLMs' debugging and self-refining capabilities. Our data, code, and models are available at: https://github.com/ARiSE-Lab/SemCoder.
SemCoder: Training Code Language Models with Comprehensive Semantics Reasoning
[ "Yangruibo Ding", "Jinjun Peng", "Marcus J. Min", "Gail Kaiser", "Junfeng Yang", "Baishakhi Ray" ]
NeurIPS.cc/2024/Conference
2406.01006
[ "https://github.com/arise-lab/semcoder" ]
https://huggingface.co/papers/2406.01006
0
0
1
6
[ "semcoder/semcoder_1030", "semcoder/semcoder_s_1030" ]
[]
[]
[ "semcoder/semcoder_1030", "semcoder/semcoder_s_1030" ]
[]
[]
1
poster
null
https://openreview.net/forum?id=PmLty7tODm
@inproceedings{ kadra2024interpretable, title={Interpretable Mesomorphic Networks for Tabular Data}, author={Arlind Kadra and Sebastian Pineda Arango and Josif Grabocka}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PmLty7tODm} }
Even though neural networks have long been deployed in applications involving tabular data, existing neural architectures are still not explainable by design. In this paper, we propose a new class of interpretable neural networks for tabular data that are both deep and linear at the same time (i.e. mesomorphic). We optimize deep hypernetworks to generate explainable linear models on a per-instance basis. As a result, our models retain the accuracy of black-box deep networks while offering free-lunch explainability for tabular data by design. Through extensive experiments, we demonstrate that our explainable deep networks have comparable performance to state-of-the-art classifiers on tabular data and outperform existing methods that are explainable by design.
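A minimal sketch of the per-instance linear idea (a deep hypernetwork emitting weights that multiply the raw features) might look like the following; the layer sizes and single-output scoring head are assumptions, not the paper's exact architecture.

```python
import torch

class MesomorphicSketch(torch.nn.Module):
    """Per-instance linear model whose weights come from a deep hypernetwork:
    the prediction stays a linear function of the input features, so the
    generated weights act as instance-level feature attributions."""

    def __init__(self, n_features, hidden=128):
        super().__init__()
        self.hyper = torch.nn.Sequential(
            torch.nn.Linear(n_features, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, n_features + 1))   # per-instance weights + bias

    def forward(self, x):
        wb = self.hyper(x)
        w, b = wb[:, :-1], wb[:, -1]
        return (w * x).sum(dim=1) + b                   # interpretable linear scoring
```

Because the score is `w(x) · x + b(x)`, reading off `w(x)` gives a local explanation for that instance without any post-hoc attribution method.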
Interpretable Mesomorphic Networks for Tabular Data
[ "Arlind Kadra", "Sebastian Pineda Arango", "Josif Grabocka" ]
NeurIPS.cc/2024/Conference
2305.13072
[ "https://github.com/arlindkadra/imn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PhsYFyTeHr
@inproceedings{ ni2024enat, title={{ENAT}: Rethinking Spatial-temporal Interactions in Token-based Image Synthesis}, author={Zanlin Ni and Yulin Wang and Renping Zhou and Yizeng Han and Jiayi Guo and Zhiyuan Liu and Yuan Yao and Gao Huang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PhsYFyTeHr} }
Recently, token-based generation approaches have demonstrated their effectiveness in synthesizing visual content. As a representative example, non-autoregressive Transformers (NATs) can generate decent-quality images in just a few steps. NATs perform generation in a progressive manner, where the latent tokens of a resulting image are incrementally revealed step-by-step. At each step, the unrevealed image regions are padded with [MASK] tokens and inferred by NAT, with the most reliable predictions preserved as newly revealed, visible tokens. In this paper, we delve into understanding the mechanisms behind the effectiveness of NATs and uncover two important interaction patterns that naturally emerge from NAT’s paradigm: Spatially (within a step), although [MASK] and visible tokens are processed uniformly by NATs, the interactions between them are highly asymmetric. Specifically, [MASK] tokens mainly gather information for decoding. On the contrary, visible tokens tend to primarily provide information, and their deep representations can be built only upon themselves. Temporally (across steps), the interactions between adjacent generation steps mostly concentrate on updating the representations of a few critical tokens, while the computation for the majority of tokens is generally repetitive. Driven by these findings, we propose EfficientNAT (ENAT), a NAT model that explicitly encourages these critical interactions inherent in NATs. At the spatial level, we disentangle the computations of visible and [MASK] tokens by encoding visible tokens independently, while decoding [MASK] tokens conditioned on the fully encoded visible tokens. At the temporal level, we prioritize the computation of the critical tokens at each step, while maximally reusing previously computed token representations to supplement necessary information. ENAT improves the performance of NATs notably with significantly reduced computational cost. Experiments on ImageNet-$256^2$ & $512^2$ and MS-COCO validate the effectiveness of ENAT. Code and pre-trained models will be released at https://github.com/LeapLabTHU/ENAT.
ENAT: Rethinking Spatial-temporal Interactions in Token-based Image Synthesis
[ "Zanlin Ni", "Yulin Wang", "Renping Zhou", "Yizeng Han", "Jiayi Guo", "Zhiyuan Liu", "Yuan Yao", "Gao Huang" ]
NeurIPS.cc/2024/Conference
2411.06959
[ "https://github.com/leaplabthu/enat" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PhjnK9KWOx
@inproceedings{ yang2024psl, title={{PSL}: Rethinking and Improving Softmax Loss from Pairwise Perspective for Recommendation}, author={Weiqin Yang and Jiawei Chen and Xin Xin and Sheng Zhou and Binbin Hu and Yan Feng and Chun Chen and Can Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PhjnK9KWOx} }
Softmax Loss (SL) is widely applied in recommender systems (RS) and has demonstrated effectiveness. This work analyzes SL from a pairwise perspective, revealing two significant limitations: 1) the relationship between SL and conventional ranking metrics like DCG is not sufficiently tight; 2) SL is highly sensitive to false negative instances. Our analysis indicates that these limitations are primarily due to the use of the exponential function. To address these issues, this work extends SL to a new family of loss functions, termed Pairwise Softmax Loss (PSL), which replaces the exponential function in SL with other appropriate activation functions. While the revision is minimal, we highlight three merits of PSL: 1) it serves as a tighter surrogate for DCG with suitable activation functions; 2) it better balances data contributions; and 3) it acts as a specific BPR loss enhanced by Distributionally Robust Optimization (DRO). We further validate the effectiveness and robustness of PSL through empirical experiments. The code is available at https://github.com/Tiny-Snow/IR-Benchmark.
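To make the pairwise rewrite concrete, the sketch below expresses a sampled softmax-style loss over pairwise score gaps and swaps the exponential for another activation. The specific alternative activations and offsets here are illustrative guesses, not necessarily the exact family studied in the paper.

```python
import torch

def pairwise_softmax_loss(pos_scores, neg_scores, act="tanh", tau=1.0):
    """pos_scores: (batch,), neg_scores: (batch, n_neg) sampled negative item scores."""
    d = (neg_scores - pos_scores.unsqueeze(1)) / tau   # pairwise score gaps s_neg - s_pos
    if act == "exp":            # recovers the usual sampled softmax loss
        surrogate = torch.exp(d)
    elif act == "tanh":         # a bounded alternative, less sensitive to false negatives
        surrogate = torch.tanh(d) + 1.0
    elif act == "relu":
        surrogate = torch.relu(d + 1.0)
    else:
        raise ValueError(act)
    return torch.log1p(surrogate.sum(dim=1)).mean()
```

With `act="exp"` the expression equals the softmax loss over the positive and sampled negatives, which is what makes the "replace the exponential" family a drop-in change in a recommender training loop.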
PSL: Rethinking and Improving Softmax Loss from Pairwise Perspective for Recommendation
[ "Weiqin Yang", "Jiawei Chen", "Xin Xin", "Sheng Zhou", "Binbin Hu", "Yan Feng", "Chun Chen", "Can Wang" ]
NeurIPS.cc/2024/Conference
2411.00163
[ "https://github.com/tiny-snow/ir-benchmark" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PhLlE8UOEv
@inproceedings{ bruna2024provable, title={Provable Posterior Sampling with Denoising Oracles via Tilted Transport}, author={Joan Bruna and Jiequn Han}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PhLlE8UOEv} }
Score-based diffusion models have significantly advanced high-dimensional data generation across various domains, by learning a denoising oracle (or score) from datasets. From a Bayesian perspective, they offer a realistic modeling of data priors and facilitate solving inverse problems through posterior sampling. Although many heuristic methods have been developed recently for this purpose, they lack the quantitative guarantees needed in many scientific applications. This work addresses the topic from two perspectives. We first present a hardness result indicating that a generic method leveraging the prior denoising oracle for posterior sampling becomes infeasible as soon as the measurement operator is mildly ill-conditioned. We next develop the *tilted transport* technique, which leverages the quadratic structure of the log-likelihood in linear inverse problems in combination with the prior denoising oracle to exactly transform the original posterior sampling problem into a new one that is provably easier to sample from. We quantify the conditions under which the boosted posterior is strongly log-concave, highlighting how task difficulty depends on the condition number of the measurement matrix and the signal-to-noise ratio. The resulting general scheme is shown to match the best-known sampling methods for Ising models, and is further validated on high-dimensional Gaussian mixture models.
Provable Posterior Sampling with Denoising Oracles via Tilted Transport
[ "Joan Bruna", "Jiequn Han" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PgTHgLUFi3
@inproceedings{ korzhenkov2024on, title={On Sampling Strategies for Spectral Model Sharding}, author={Denis Korzhenkov and Christos Louizos}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PgTHgLUFi3} }
The problem of heterogeneous clients in federated learning has recently drawn a lot of attention. Spectral model sharding, i.e., partitioning the model parameters into low-rank matrices based on the singular value decomposition, has been one of the proposed solutions for more efficient on-device training in such settings. In this work we present two sampling strategies for such sharding, obtained as solutions to specific optimization problems. The first produces unbiased estimators of the original weights, while the second aims to minimize the squared approximation error. We discuss how both of these estimators can be incorporated in the federated learning loop and practical considerations that arise during local training. Empirically, we demonstrate that both of these methods can lead to improved performance in various commonly used datasets.
On Sampling Strategies for Spectral Model Sharding
[ "Denis Korzhenkov", "Christos Louizos" ]
NeurIPS.cc/2024/Conference
2410.24106
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PfOeAKxx6i
@inproceedings{ kogkalidis2024algebraic, title={Algebraic Positional Encodings}, author={Konstantinos Kogkalidis and Jean-Philippe Bernardy and Vikas Garg}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PfOeAKxx6i} }
We introduce a novel positional encoding strategy for Transformer-style models, addressing the shortcomings of existing, often ad hoc, approaches. Our framework provides a flexible mapping from the algebraic specification of a domain to an interpretation as orthogonal operators. This design preserves the algebraic characteristics of the source domain, ensuring that the model upholds the desired structural properties. Our scheme can accommodate various structures, including sequences, grids and trees, as well as their compositions. We conduct a series of experiments to demonstrate the practical applicability of our approach. Results suggest performance on par with or surpassing the current state-of-the-art, without hyperparameter optimizations or ``task search'' of any kind. Code is available through https://aalto-quml.github.io/ape/.
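For the sequence case, the flavor of "positions as orthogonal operators" can be illustrated with the toy sketch below: each position is the n-th power of a block-diagonal rotation, so composing offsets corresponds to multiplying operators (`op(m) @ op(n) == op(m + n)` up to numerical error). This only sketches the sequential instance; grids and trees need the more general algebraic construction described in the paper.

```python
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def position_operator(n, thetas):
    """Position n as an orthogonal operator: the n-th power of a block-diagonal
    rotation matrix (one 2x2 block per frequency)."""
    d = 2 * len(thetas)
    out = np.zeros((d, d))
    for i, t in enumerate(thetas):
        out[2 * i:2 * i + 2, 2 * i:2 * i + 2] = np.linalg.matrix_power(rotation(t), n)
    return out

# Composition of positions behaves like addition of offsets (a group homomorphism).
thetas = [0.5, 0.05]
assert np.allclose(position_operator(2, thetas) @ position_operator(3, thetas),
                   position_operator(5, thetas))
```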
Algebraic Positional Encodings
[ "Konstantinos Kogkalidis", "Jean-Philippe Bernardy", "Vikas Garg" ]
NeurIPS.cc/2024/Conference
2312.16045
[ "https://github.com/konstantinoskokos/unitarype" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=Pf7kdIjHRf
@inproceedings{ wang2024scaling, title={Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers}, author={Lirui Wang and Xinlei Chen and Jialiang Zhao and Kaiming He}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Pf7kdIjHRf} }
One of the roadblocks for training generalist robotic models today is heterogeneity. Previous robot learning methods often collect data to train with one specific embodiment for one task, which is expensive and prone to overfitting. This work studies the problem of learning policy representations through heterogeneous pre-training on robot data across different embodiments and tasks at scale. We propose Heterogeneous Pre-trained Transformers (HPT), which pre-train a large, shareable trunk of a policy neural network to learn a task and embodiment agnostic shared representation. This general architecture aligns the specific proprioception and vision inputs from distinct embodiments to a short sequence of tokens and then processes such tokens to map to control robots for different tasks. Leveraging the recent large-scale multi-embodiment real-world robotic datasets as well as simulation, deployed robots, and human video datasets, we investigate pre-training policies across heterogeneity. We conduct experiments to investigate the scaling behaviors of training objectives, to the extent of 52 datasets. HPTs outperform several baselines and enhance the fine-tuned policy performance by over 20% on unseen tasks in multiple simulator benchmarks and real-world settings. See the project website (liruiw.github.io/hpt) for code and videos.
Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers
[ "Lirui Wang", "Xinlei Chen", "Jialiang Zhao", "Kaiming He" ]
NeurIPS.cc/2024/Conference
2409.20537
[ "https://github.com/liruiw/lerobot" ]
https://huggingface.co/papers/2409.20537
2
12
2
4
[ "liruiw/hpt-base" ]
[]
[]
[ "liruiw/hpt-base" ]
[]
[]
1
oral
null
https://openreview.net/forum?id=Pezt0xttae
@inproceedings{ jia2024dapperfl, title={Dapper{FL}: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices}, author={Yongzhe Jia and Xuyun Zhang and Hongsheng Hu and Kim-Kwang Raymond Choo and Lianyong Qi and Xiaolong Xu and Amin Beheshti and Wanchun Dou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Pezt0xttae} }
Federated learning (FL) has emerged as a prominent machine learning paradigm in edge computing environments, enabling edge devices to collaboratively optimize a global model without sharing their private data. However, existing FL frameworks suffer from efficacy deterioration due to the system heterogeneity inherent in edge computing, especially in the presence of domain shifts across local data. In this paper, we propose a heterogeneous FL framework DapperFL, to enhance model performance across multiple domains. In DapperFL, we introduce a dedicated Model Fusion Pruning (MFP) module to produce personalized compact local models for clients to address the system heterogeneity challenges. The MFP module prunes local models with fused knowledge obtained from both local and remaining domains, ensuring robustness to domain shifts. Additionally, we design a Domain Adaptive Regularization (DAR) module to further improve the overall performance of DapperFL. The DAR module employs regularization generated by the pruned model, aiming to learn robust representations across domains. Furthermore, we introduce a specific aggregation algorithm for aggregating heterogeneous local models with tailored architectures and weights. We implement DapperFL on a real-world FL platform with heterogeneous clients. Experimental results on benchmark datasets with multiple domains demonstrate that DapperFL outperforms several state-of-the-art FL frameworks by up to 2.28%, while significantly achieving model volume reductions ranging from 20% to 80%. Our code is available at: https://github.com/jyzgh/DapperFL.
DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices
[ "Yongzhe Jia", "Xuyun Zhang", "Hongsheng Hu", "Kim-Kwang Raymond Choo", "Lianyong Qi", "Xiaolong Xu", "Amin Beheshti", "Wanchun Dou" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=PcyioHOmjq
@inproceedings{ wen2024what, title={What Makes {CLIP} More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights}, author={Xin Wen and Bingchen Zhao and Yilun Chen and Jiangmiao Pang and XIAOJUAN QI}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PcyioHOmjq} }
Severe data imbalance naturally exists among web-scale vision-language datasets. Despite this, we find CLIP pre-trained thereupon exhibits notable robustness to the data imbalance compared to supervised learning, and demonstrates significant effectiveness in learning generalizable representations. With an aim to investigate the reasons behind this finding, we conduct controlled experiments to study various underlying factors, and reveal that CLIP's pretext task forms a dynamic classification problem wherein only a subset of classes is present in training. This isolates the bias from dominant classes and implicitly balances the learning signal. Furthermore, the robustness and discriminability of CLIP improve with more descriptive language supervision, larger data scale, and broader open-world concepts, which are inaccessible to supervised learning. Our study not only uncovers the mechanisms behind CLIP's generalizability beyond data imbalance but also provides transferable insights for the research community. The findings are validated in both supervised and self-supervised learning, enabling models trained on imbalanced data to achieve CLIP-level performance on diverse recognition tasks. Code and data are available at: https://github.com/CVMI-Lab/clip-beyond-tail.
What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights
[ "Xin Wen", "Bingchen Zhao", "Yilun Chen", "Jiangmiao Pang", "XIAOJUAN QI" ]
NeurIPS.cc/2024/Conference
2405.21070
[ "https://github.com/cvmi-lab/clip-beyond-tail" ]
https://huggingface.co/papers/2405.21070
1
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=Pc9LLjTL5f
@inproceedings{ boubdir2024elo, title={Elo Uncovered: Robustness and Best Practices in Language Model Evaluation}, author={Meriem Boubdir and Edward Kim and Beyza Ermis and Sara Hooker and Marzieh Fadaee}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Pc9LLjTL5f} }
In Natural Language Processing (NLP), the Elo rating system, originally designed for ranking players in dynamic games such as chess, is increasingly being used to evaluate Large Language Models (LLMs) through "A vs B" paired comparisons. However, while popular, the system's suitability for assessing entities with constant skill levels, such as LLMs, remains relatively unexplored. We study two fundamental axioms that evaluation methods should adhere to: reliability and transitivity. We conduct an extensive evaluation of Elo behavior across simulated and real-world scenarios, demonstrating that individual Elo computations can exhibit significant volatility. We show that both axioms are not always satisfied, raising questions about the reliability of current comparative evaluations of LLMs. If the current use of Elo scores is intended to substitute the costly head-to-head comparison of LLMs, it is crucial to ensure the ranking is as robust as possible. Guided by the axioms, our findings offer concrete guidelines for enhancing the reliability of LLM evaluation methods, suggesting a need for reassessment of existing comparative approaches.
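For reference, the underlying update being stress-tested is the standard Elo rule below. The study's point is that, for fixed-skill systems such as LLMs, the final ranking can depend on the order of comparisons and on the K-factor, so it is worth recomputing ratings over many permutations of the same match sequence.

```python
import random

def elo_update(r_a, r_b, score_a, k=32):
    """One Elo update for an 'A vs B' comparison; score_a is 1 (A wins), 0.5 (tie), or 0."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    r_a += k * (score_a - expected_a)
    r_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a, r_b

def final_ratings(matches, k=32):
    """matches: list of (model_a, model_b, score_a) tuples, processed in order."""
    ratings = {}
    for a, b, score_a in matches:
        ra, rb = ratings.get(a, 1000.0), ratings.get(b, 1000.0)
        ratings[a], ratings[b] = elo_update(ra, rb, score_a, k)
    return ratings

# Sensitivity check sketch: shuffle the same outcomes and compare the induced rankings.
# random.shuffle(matches); final_ratings(matches)
```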
Elo Uncovered: Robustness and Best Practices in Language Model Evaluation
[ "Meriem Boubdir", "Edward Kim", "Beyza Ermis", "Sara Hooker", "Marzieh Fadaee" ]
NeurIPS.cc/2024/Conference
2311.17295
[ "" ]
https://huggingface.co/papers/2311.17295
4
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=PaqJ71zf1M
@inproceedings{ zhou2024continuous, title={Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition}, author={Zi-Hao Zhou and Siyuan Fang and Zi-Jing Zhou and Tong Wei and Yuanyu Wan and Min-Ling Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PaqJ71zf1M} }
Long-tailed semi-supervised learning (LTSSL) poses a significant challenge in training models with limited labeled data exhibiting a long-tailed label distribution. Current state-of-the-art LTSSL approaches heavily rely on high-quality pseudo-labels for large-scale unlabeled data. However, these methods often neglect the impact of representations learned by the neural network and struggle with real-world unlabeled data, which typically follows a different distribution than labeled data. This paper introduces a novel probabilistic framework that unifies various recent proposals in long-tail learning. Our framework derives the class-balanced contrastive loss through Gaussian kernel density estimation. We introduce a continuous contrastive learning method, CCL, extending our framework to unlabeled data using *reliable* and *smoothed* pseudo-labels. By progressively estimating the underlying label distribution and optimizing its alignment with model predictions, we tackle the diverse distribution of unlabeled data in real-world scenarios. Extensive experiments across multiple datasets with varying unlabeled data distributions demonstrate that CCL consistently outperforms prior state-of-the-art methods, achieving over 4% improvement on the ImageNet-127 dataset. The supplementary material includes the source code for reproducibility.
Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition
[ "Zi-Hao Zhou", "Siyuan Fang", "Zi-Jing Zhou", "Tong Wei", "Yuanyu Wan", "Min-Ling Zhang" ]
NeurIPS.cc/2024/Conference
2410.06109
[ "https://github.com/zhouzihao11/ccl" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PacBluO5m7
@inproceedings{ zhang2024knowgpt, title={Know{GPT}: Knowledge Graph based Prompting for Large Language Models}, author={Qinggang Zhang and Junnan Dong and Hao Chen and Daochen Zha and Zailiang Yu and Xiao Huang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PacBluO5m7} }
Large Language Models (LLMs) have demonstrated remarkable capabilities in many real-world applications. Nonetheless, LLMs are often criticized for their tendency to produce hallucinations, wherein the models fabricate incorrect statements on tasks beyond their knowledge and perception. To alleviate this issue, graph retrieval-augmented generation (GraphRAG) has been extensively explored which leverages the factual knowledge in knowledge graphs (KGs) to ground the LLM's responses in established facts and principles. However, most state-of-the-art LLMs are closed-source, making it challenging to develop a prompting framework that can efficiently and effectively integrate KGs into LLMs with hard prompts only. Generally, existing KG-enhanced LLMs usually suffer from three critical issues, including huge search space, high API costs, and laborious prompt engineering, that impede their widespread application in practice. To this end, we introduce a novel **Know**ledge **Gr**aph based **P**romp**T**ing framework, namely **KnowGPT**, to enhance LLMs with domain knowledge. KnowGPT contains a knowledge extraction module to extract the most informative knowledge from KGs, and a context-aware prompt construction module to automatically convert extracted knowledge into effective prompts. Experiments on three benchmarks demonstrate that KnowGPT significantly outperforms all competitors. Notably, KnowGPT achieves a 92.6% accuracy on OpenbookQA leaderboard, comparable to human-level performance.
KnowGPT: Knowledge Graph based Prompting for Large Language Models
[ "Qinggang Zhang", "Junnan Dong", "Hao Chen", "Daochen Zha", "Zailiang Yu", "Xiao Huang" ]
NeurIPS.cc/2024/Conference
2312.06185
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Pa8jsrdOnU
@inproceedings{ cho2024hollowed, title={Hollowed Net for On-Device Personalization of Text-to-Image Diffusion Models}, author={Wonguk Cho and Seokeon Choi and Debasmit Das and Matthias Reisser and Taesup Kim and Sungrack Yun and Fatih Porikli}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=Pa8jsrdOnU} }
Recent advancements in text-to-image diffusion models have enabled the personalization of these models to generate custom images from textual prompts. This paper presents an efficient LoRA-based personalization approach for on-device subject-driven generation, where pre-trained diffusion models are fine-tuned with user-specific data on resource-constrained devices. Our method, termed Hollowed Net, enhances memory efficiency during fine-tuning by modifying the architecture of a diffusion U-Net to temporarily remove a fraction of its deep layers, creating a hollowed structure. This approach directly addresses on-device memory constraints and substantially reduces GPU memory requirements for training, in contrast to previous methods that primarily focus on minimizing training steps and reducing the number of parameters to update. Additionally, the personalized Hollowed Net can be transferred back into the original U-Net, enabling inference without additional memory overhead. Quantitative and qualitative analyses demonstrate that our approach not only reduces training memory to levels as low as those required for inference but also maintains or improves personalization performance compared to existing methods.
Hollowed Net for On-Device Personalization of Text-to-Image Diffusion Models
[ "Wonguk Cho", "Seokeon Choi", "Debasmit Das", "Matthias Reisser", "Taesup Kim", "Sungrack Yun", "Fatih Porikli" ]
NeurIPS.cc/2024/Conference
2411.01179
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PZCiWtQjAw
@inproceedings{ pian2024continual, title={Continual Audio-Visual Sound Separation}, author={Weiguo Pian and Yiyang Nan and Shijian Deng and Shentong Mo and Yunhui Guo and Yapeng Tian}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PZCiWtQjAw} }
In this paper, we introduce a novel continual audio-visual sound separation task, aiming to continuously separate sound sources for new classes while preserving performance on previously learned classes, with the aid of visual guidance. This problem is crucial for practical visually guided auditory perception as it can significantly enhance the adaptability and robustness of audio-visual sound separation models, making them more applicable for real-world scenarios where encountering new sound sources is commonplace. The task is inherently challenging as our models must not only effectively utilize information from both modalities in current tasks but also preserve their cross-modal association in old tasks to mitigate catastrophic forgetting during audio-visual continual learning. To address these challenges, we propose a novel approach named ContAV-Sep ($\textbf{Cont}$inual $\textbf{A}$udio-$\textbf{V}$isual Sound $\textbf{Sep}$aration). ContAV-Sep presents a novel Cross-modal Similarity Distillation Constraint (CrossSDC) to uphold the cross-modal semantic similarity through incremental tasks and retain previously acquired knowledge of semantic similarity in old models, mitigating the risk of catastrophic forgetting. The CrossSDC can seamlessly integrate into the training process of different audio-visual sound separation frameworks. Experiments demonstrate that ContAV-Sep can effectively mitigate catastrophic forgetting and achieve significantly better performance compared to other continual learning baselines for audio-visual sound separation. Code is available at: https://github.com/weiguoPian/ContAV-Sep_NeurIPS2024.
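A generic form of cross-modal similarity distillation (keeping the new model's audio-visual similarity structure close to that of the frozen old model) could look like the sketch below; it captures the general idea behind a constraint like CrossSDC but is not its exact formulation.

```python
import torch
import torch.nn.functional as F

def cross_modal_similarity_distillation(a_new, v_new, a_old, v_old, tau=0.1):
    """a_*: (batch, d) audio embeddings, v_*: (batch, d) visual embeddings.
    Penalize divergence between old and new audio-visual similarity distributions."""
    s_new = F.normalize(a_new, dim=-1) @ F.normalize(v_new, dim=-1).T / tau
    s_old = F.normalize(a_old, dim=-1) @ F.normalize(v_old, dim=-1).T / tau
    return F.kl_div(F.log_softmax(s_new, dim=-1),
                    F.softmax(s_old.detach(), dim=-1),
                    reduction="batchmean")
```

Added to the separation objective on new classes, a term of this shape discourages the fine-tuned model from forgetting how old audio sources relate to their visual counterparts.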
Continual Audio-Visual Sound Separation
[ "Weiguo Pian", "Yiyang Nan", "Shijian Deng", "Shentong Mo", "Yunhui Guo", "Yapeng Tian" ]
NeurIPS.cc/2024/Conference
2411.02860
[ "https://github.com/weiguopian/contav-sep_neurips2024" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PXGY9Fz8vC
@inproceedings{ chang2024whos, title={Who{\textquoteright}s Gaming the System? A Causally-Motivated Approach for Detecting Strategic Adaptation}, author={Trenton Chang and Lindsay Warrenburg and Sae-Hwan Park and Ravi B Parikh and Maggie Makar and Jenna Wiens}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PXGY9Fz8vC} }
In many settings, machine learning models may be used to inform decisions that impact individuals or entities who interact with the model. Such entities, or *agents,* may *game* model decisions by manipulating their inputs to the model to obtain better outcomes and maximize some utility. We consider a multi-agent setting where the goal is to identify the “worst offenders:” agents that are gaming most aggressively. However, identifying such agents is difficult without knowledge of their utility function. Thus, we introduce a framework in which each agent’s tendency to game is parameterized via a scalar. We show that this gaming parameter is only partially identifiable. By recasting the problem as a causal effect estimation problem where different agents represent different “treatments,” we prove that a ranking of all agents by their gaming parameters is identifiable. We present empirical results in a synthetic data study validating the usage of causal effect estimation for gaming detection and show in a case study of diagnosis coding behavior in the U.S. that our approach highlights features associated with gaming.
Who’s Gaming the System? A Causally-Motivated Approach for Detecting Strategic Adaptation
[ "Trenton Chang", "Lindsay Warrenburg", "Sae-Hwan Park", "Ravi B Parikh", "Maggie Makar", "Jenna Wiens" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PWzB2V2b6R
@inproceedings{ zhao2024does, title={Does Video-Text Pretraining Help Open-Vocabulary Online Action Detection?}, author={Qingsong Zhao and Yi Wang and Jilan Xu and Yinan He and Zifan Song and Limin Wang and Yu Qiao and Cairong Zhao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PWzB2V2b6R} }
Video understanding relies on accurate action detection for temporal analysis. However, existing mainstream methods have limitations in real-world applications due to their offline and closed-set evaluation approaches, as well as their dependence on manual annotations. To address these challenges and enable real-time action understanding in open-world scenarios, we propose OV-OAD, a zero-shot online action detector that leverages vision-language models and learns solely from text supervision. By introducing an object-centered decoder unit into a Transformer-based model, we aggregate frames with similar semantics using video-text correspondence. Extensive experiments on four action detection benchmarks demonstrate that OV-OAD outperforms other advanced zero-shot methods. Specifically, it achieves 37.5\% mean average precision on THUMOS’14 and 73.8\% calibrated average precision on TVSeries. This research establishes a robust baseline for zero-shot transfer in online action detection, enabling scalable solutions for open-world temporal understanding. The code will be available for download at \url{https://github.com/OpenGVLab/OV-OAD}.
Does Video-Text Pretraining Help Open-Vocabulary Online Action Detection?
[ "Qingsong Zhao", "Yi Wang", "Jilan Xu", "Yinan He", "Zifan Song", "Limin Wang", "Yu Qiao", "Cairong Zhao" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PWkjxjgGLP
@inproceedings{ park2024hierarchical, title={Hierarchical Visual Feature Aggregation for {OCR}-Free Document Understanding}, author={Jaeyoo Park and Jin Young Choi and Jeonghyung Park and Bohyung Han}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PWkjxjgGLP} }
We present a novel OCR-free document understanding framework based on pretrained Multimodal Large Language Models (MLLMs). Our approach employs multi-scale visual features to effectively handle various font sizes within document images. To address the increasing costs of considering the multi-scale visual inputs for MLLMs, we propose the Hierarchical Visual Feature Aggregation (HVFA) module, designed to reduce the number of input tokens to LLMs. Leveraging a feature pyramid with cross-attentive pooling, our approach effectively manages the trade-off between information loss and efficiency without being affected by varying document image sizes. Furthermore, we introduce a novel instruction tuning task, which facilitates the model's text-reading capability by learning to predict the relative positions of input text, eventually minimizing the risk of truncated text caused by the limited capacity of LLMs. Comprehensive experiments validate the effectiveness of our approach, demonstrating superior performance in various document understanding tasks.
Hierarchical Visual Feature Aggregation for OCR-Free Document Understanding
[ "Jaeyoo Park", "Jin Young Choi", "Jeonghyung Park", "Bohyung Han" ]
NeurIPS.cc/2024/Conference
2411.05254
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PVgAeMm3MW
@inproceedings{ zhang2024sfv, title={{SF}-V: Single Forward Video Generation Model}, author={Zhixing Zhang and Yanyu Li and Yushu Wu and yanwu xu and Anil Kag and Ivan Skorokhodov and Willi Menapace and Aliaksandr Siarohin and Junli Cao and Dimitris N. Metaxas and Sergey Tulyakov and Jian Ren}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PVgAeMm3MW} }
Diffusion-based video generation models have demonstrated remarkable success in obtaining high-fidelity videos through the iterative denoising process. However, these models require multiple denoising steps during sampling, resulting in high computational costs. In this work, we propose a novel approach to obtain single-step video generation models by leveraging adversarial training to fine-tune pre-trained video diffusion models. We show that, through the adversarial training, the multi-step video diffusion model, i.e., Stable Video Diffusion (SVD), can be trained to perform a single forward pass to synthesize high-quality videos, capturing both temporal and spatial dependencies in the video data. Extensive experiments demonstrate that our method achieves competitive generation quality of synthesized videos with significantly reduced computational overhead for the denoising process (i.e., around $23\times$ speedup compared with SVD and $6\times$ speedup compared with existing works, with even better generation quality), paving the way for real-time video synthesis and editing.
SF-V: Single Forward Video Generation Model
[ "Zhixing Zhang", "Yanyu Li", "Yushu Wu", "yanwu xu", "Anil Kag", "Ivan Skorokhodov", "Willi Menapace", "Aliaksandr Siarohin", "Junli Cao", "Dimitris N. Metaxas", "Sergey Tulyakov", "Jian Ren" ]
NeurIPS.cc/2024/Conference
2406.04324
[ "https://github.com/snap-research/SF-V" ]
https://huggingface.co/papers/2406.04324
7
23
2
12
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=PTxRRUEpHq
@inproceedings{ nie2024gradient, title={Gradient Methods for Online {DR}-Submodular Maximization with Stochastic Long-Term Constraints}, author={Guanyu Nie and Vaneet Aggarwal and Christopher John Quinn}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PTxRRUEpHq} }
In this paper, we consider the problem of online monotone DR-submodular maximization subject to long-term stochastic constraints. Specifically, at each round $t\in [T]$, after committing an action $\mathbf{x}_t$, a random reward $f_t(\mathbf{x}_t)$ and an unbiased gradient estimate of the point $\widetilde{\nabla}f_t(\mathbf{x}_t)$ (semi-bandit feedback) are revealed. Meanwhile, a budget of $g_t(\mathbf{x}_t)$, which is linear and stochastic, is consumed of its total allotted budget $B_T$. We propose a gradient ascent based algorithm that achieves $\frac{1}{2}$-regret of $\mathcal{O}(\sqrt{T})$ with $\mathcal{O}(T^{3/4})$ constraint violation with high probability. Moreover, when first-order full-information feedback is available, we propose an algorithm that achieves $(1-1/e)$-regret of $\mathcal{O}(\sqrt{T})$ with $\mathcal{O}(T^{3/4})$ constraint violation. These algorithms significantly improve over the state-of-the-art in terms of query complexity.
Gradient Methods for Online DR-Submodular Maximization with Stochastic Long-Term Constraints
[ "Guanyu Nie", "Vaneet Aggarwal", "Christopher John Quinn" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PThi9hf9UT
@inproceedings{ letizia2024mutual, title={Mutual Information Estimation via \$f\$-Divergence and Data Derangements}, author={Nunzio Alexandro Letizia and Nicola Novello and Andrea M Tonello}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PThi9hf9UT} }
Estimating mutual information accurately is pivotal across diverse applications, from machine learning to communications and biology, enabling us to gain insights into the inner mechanisms of complex systems. Yet, dealing with high-dimensional data presents a formidable challenge, due to its size and the presence of intricate relationships. Recently proposed neural methods employing variational lower bounds on the mutual information have gained prominence. However, these approaches suffer from either high bias or high variance, as the sample size and the structure of the loss function directly influence the training process. In this paper, we propose a novel class of discriminative mutual information estimators based on the variational representation of the $f$-divergence. We investigate the impact of the permutation function used to obtain the marginal training samples and present a novel architectural solution based on derangements. The proposed estimator is flexible since it exhibits an excellent bias/variance trade-off. The comparison with state-of-the-art neural estimators, through extensive experimentation within established reference scenarios, shows that our approach offers higher accuracy and lower complexity.
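The derangement trick itself is easy to state in code: the "marginal" batch is built by permuting the y samples with a permutation that has no fixed points, so no joint pair (x_i, y_i) leaks into the product-of-marginals term. The sketch below pairs it with the familiar Donsker-Varadhan bound as a stand-in objective; the paper's estimators are based on more general f-divergence variational representations.

```python
import math
import torch

def derangement(n):
    """Sample a permutation of range(n) with no fixed points (rejection sampling, n >= 2)."""
    while True:
        p = torch.randperm(n)
        if bool((p != torch.arange(n)).all()):
            return p

def dv_mi_lower_bound(critic, x, y):
    """Donsker-Varadhan bound: E_joint[T] - log E_marginal[exp(T)].
    critic(x, y) returns a (batch,) tensor of scores."""
    t_joint = critic(x, y)                         # scores on paired (x_i, y_i)
    t_marg = critic(x, y[derangement(len(y))])     # scores on deranged (x_i, y_j) pairs
    return t_joint.mean() - (torch.logsumexp(t_marg, dim=0) - math.log(t_marg.numel()))
```

Using a plain random permutation instead of a derangement would occasionally score true joint pairs as "marginal" samples, which is exactly the source of bias the derangement construction removes.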
Mutual Information Estimation via f-Divergence and Data Derangements
[ "Nunzio Alexandro Letizia", "Nicola Novello", "Andrea M Tonello" ]
NeurIPS.cc/2024/Conference
[ "https://github.com/tonellolab/fdime" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PSubtZAitM
@inproceedings{ jung2024efficient, title={Efficient Policy Evaluation Across Multiple Different Experimental Datasets}, author={Yonghan Jung and Alexis Bellot}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PSubtZAitM} }
Artificial intelligence systems are trained combining various observational and experimental datasets from different source sites, and are increasingly used to reason about the effectiveness of candidate policies. One common assumption in this context is that the data in source and target sites (where the candidate policy is due to be deployed) come from the same distribution. This assumption is often violated in practice, causing challenges for generalization, transportability, or external validity. Despite recent advances for determining the identifiability of the effectiveness of policies in a target domain, there are still challenges for the accurate estimation of effects from finite samples. In this paper, we develop novel graphical criteria and estimators for evaluating the effectiveness of policies (e.g., conditional, stochastic) by combining data from multiple experimental studies. Asymptotic error analysis of our estimators provides fast convergence guarantee. We empirically verified the robustness of estimators through simulations.
Efficient Policy Evaluation Across Multiple Different Experimental Datasets
[ "Yonghan Jung", "Alexis Bellot" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PSVkinBs4u
@inproceedings{ wang2024infusing, title={Infusing Self-Consistency into Density Functional Theory Hamiltonian Prediction via Deep Equilibrium Models}, author={Zun Wang and Chang Liu and Nianlong Zou and He Zhang and Xinran Wei and Lin Huang and Lijun Wu and Bin Shao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PSVkinBs4u} }
In this study, we introduce a unified neural network architecture, the Deep Equilibrium Density Functional Theory Hamiltonian (DEQH) model, which incorporates Deep Equilibrium Models (DEQs) for predicting Density Functional Theory (DFT) Hamiltonians. The DEQH model inherently captures the self-consistent nature of the Hamiltonian, a critical aspect often overlooked by traditional machine learning approaches for Hamiltonian prediction. By employing DEQ within our model architecture, we circumvent the need for DFT calculations during the training phase to introduce the Hamiltonian's self-consistency, thus addressing computational bottlenecks associated with large or complex systems. We propose a versatile framework that combines DEQ with off-the-shelf machine learning models for predicting Hamiltonians. When benchmarked on the MD17 and QH9 datasets, DEQHNet, an instantiation of the DEQH framework, has demonstrated a significant improvement in prediction accuracy. Beyond a predictor, the DEQH model is a Hamiltonian solver, in the sense that it uses the fixed-point solving capability of the deep equilibrium model to iteratively solve for the Hamiltonian. Ablation studies of DEQHNet further elucidate the network's effectiveness, offering insights into the potential of DEQ-integrated networks for Hamiltonian learning. We open source our implementation at https://github.com/Zun-Wang/DEQHNet.
Infusing Self-Consistency into Density Functional Theory Hamiltonian Prediction via Deep Equilibrium Models
[ "Zun Wang", "Chang Liu", "Nianlong Zou", "He Zhang", "Xinran Wei", "Lin Huang", "Lijun Wu", "Bin Shao" ]
NeurIPS.cc/2024/Conference
2406.03794
[ "https://github.com/zun-wang/deqhnet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PSPtj26Lbp
@inproceedings{ ren2024lgm, title={L4{GM}: Large 4D Gaussian Reconstruction Model}, author={Jiawei Ren and Kevin Xie and Ashkan Mirzaei and hanxue liang and Xiaohui Zeng and Karsten Kreis and Ziwei Liu and Antonio Torralba and Sanja Fidler and Seung Wook Kim and Huan Ling}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PSPtj26Lbp} }
We present L4GM, the first 4D Large Reconstruction Model that produces animated objects from a single-view video input -- in a single feed-forward pass that takes only a second. Key to our success is a novel dataset of multiview videos containing curated, rendered animated objects from Objaverse. This dataset depicts 44K diverse objects with 110K animations rendered in 48 viewpoints, resulting in 12M videos with a total of 300M frames. We keep our L4GM simple for scalability and build directly on top of LGM, a pretrained 3D Large Reconstruction Model that outputs 3D Gaussian ellipsoids from multiview image input. L4GM outputs a per-frame 3D Gaussian splat representation from video frames sampled at a low fps and then upsamples the representation to a higher fps to achieve temporal smoothness. We add temporal self-attention layers to the base LGM to help it learn consistency across time, and utilize a per-timestep multiview rendering loss to train the model. The representation is upsampled to a higher framerate by training an interpolation model which produces intermediate 3D Gaussian representations. We showcase that L4GM that is only trained on synthetic data generalizes well on in-the-wild videos, producing high quality animated 3D assets.
L4GM: Large 4D Gaussian Reconstruction Model
[ "Jiawei Ren", "Kevin Xie", "Ashkan Mirzaei", "hanxue liang", "Xiaohui Zeng", "Karsten Kreis", "Ziwei Liu", "Antonio Torralba", "Sanja Fidler", "Seung Wook Kim", "Huan Ling" ]
NeurIPS.cc/2024/Conference
2406.10324
[ "" ]
https://huggingface.co/papers/2406.10324
4
13
1
11
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=PSMBefUZa2
@inproceedings{ heidari2024reinforcement, title={Reinforcement Learning Guided Semi-Supervised Learning}, author={Marzi Heidari and Hanping Zhang and Yuhong Guo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PSMBefUZa2} }
In recent years, semi-supervised learning (SSL) has gained significant attention due to its ability to leverage both labeled and unlabeled data to improve model performance, especially when labeled data is scarce. However, most current SSL methods rely on heuristics or predefined rules for generating pseudo-labels and leveraging unlabeled data. They are limited to exploiting loss functions and regularization methods within the standard norm. In this paper, we propose a novel Reinforcement Learning (RL) Guided SSL method, RLGSSL, that formulates SSL as a one-armed bandit problem and deploys an innovative RL loss based on weighted reward to adaptively guide the learning process of the prediction model. RLGSSL incorporates a carefully designed reward function that balances the use of labeled and unlabeled data to enhance generalization performance. A semi-supervised teacher-student framework is further deployed to increase the learning stability. We demonstrate the effectiveness of RLGSSL through extensive experiments on several benchmark datasets and show that our approach achieves consistent superior performance compared to state-of-the-art SSL methods.
Reinforcement Learning Guided Semi-Supervised Learning
[ "Marzi Heidari", "Hanping Zhang", "Yuhong Guo" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PSLH5q7PFo
@inproceedings{ bergstr{\"o}m2024active, title={Active preference learning for ordering items in- and out-of-sample}, author={Herman Bergstr{\"o}m and Emil Carlsson and Devdatt Dubhashi and Fredrik D. Johansson}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PSLH5q7PFo} }
Learning an ordering of items based on pairwise comparisons is useful when items are difficult to rate consistently on an absolute scale, for example, when annotators have to make subjective assessments. When exhaustive comparison is infeasible, actively sampling item pairs can reduce the number of annotations necessary for learning an accurate ordering. However, many algorithms ignore shared structure between items, limiting their sample efficiency and precluding generalization to new items. It is also common to disregard how noise in comparisons varies between item pairs, despite it being informative of item similarity. In this work, we study active preference learning for ordering items with contextual attributes, both in- and out-of-sample. We give an upper bound on the expected ordering error of a logistic preference model as a function of which items have been compared. Next, we propose an active learning strategy that samples items to minimize this bound by accounting for aleatoric and epistemic uncertainty in comparisons. We evaluate the resulting algorithm, and a variant aimed at reducing model misspecification, in multiple realistic ordering tasks with comparisons made by human annotators. Our results demonstrate superior sample efficiency and generalization compared to non-contextual ranking approaches and active preference learning baselines.
Active preference learning for ordering items in- and out-of-sample
[ "Herman Bergström", "Emil Carlsson", "Devdatt Dubhashi", "Fredrik D. Johansson" ]
NeurIPS.cc/2024/Conference
2405.03059
[ "https://github.com/healthy-ai/guro" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PRBsEz8rnV
@inproceedings{ simoncini2024no, title={No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations}, author={Walter Simoncini and Andrei Bursuc and Spyros Gidaris and Yuki M Asano}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PRBsEz8rnV} }
This paper introduces FUNGI, **F**eatures from **UN**supervised **G**rad**I**ents, a method to enhance the features of transformer encoders by leveraging self-supervised gradients. Our method is simple: given any pretrained model, we first compute gradients from various self-supervised objectives for each input. These gradients are projected to a lower dimension and then concatenated with the model's output embedding. The resulting features are evaluated on k-nearest neighbor classification over 11 datasets from vision, 5 from natural language processing, and 2 from audio. Across backbones spanning various sizes and pretraining strategies, FUNGI features provide consistent performance improvements over the embeddings. We also show that using FUNGI features can benefit linear classification, clustering and image retrieval, and that they significantly improve the retrieval-based in-context scene understanding abilities of pretrained models, for example improving upon DINO by +17% for semantic segmentation - without any training. Code is available at https://github.com/WalterSimoncini/fungivision.
No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations
[ "Walter Simoncini", "Andrei Bursuc", "Spyros Gidaris", "Yuki M Asano" ]
NeurIPS.cc/2024/Conference
2407.10964
[ "https://github.com/waltersimoncini/fungivision" ]
https://huggingface.co/papers/2407.10964
1
0
0
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=PRAsjrmXXK
@inproceedings{ ramesh2024group, title={Group Robust Preference Optimization in Reward-free {RLHF}}, author={Shyam Sundhar Ramesh and Yifan Hu and Iason Chaimalas and Viraj Mehta and Pier Giuseppe Sessa and Haitham Bou Ammar and Ilija Bogunovic}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PRAsjrmXXK} }
Adapting large language models (LLMs) for specific tasks usually involves fine-tuning through reinforcement learning with human feedback (RLHF) on preference data. While these data often come from diverse labelers' groups (e.g., different demographics, ethnicities, company teams, etc.), traditional RLHF approaches adopt a "one-size-fits-all" approach, i.e., they indiscriminately assume and optimize a single preference model, thus not being robust to unique characteristics and needs of the various groups. To address this limitation, we propose a novel Group Robust Preference Optimization (GRPO) method to align LLMs to individual groups' preferences robustly. Our approach builds upon reward-free direct preference optimization methods, but unlike previous approaches, it seeks a robust policy which maximizes the worst-case group performance. To achieve this, GRPO adaptively and sequentially weights the importance of different groups, prioritizing groups with worse cumulative loss. We theoretically study the feasibility of GRPO and analyze its convergence for the log-linear policy class. By fine-tuning LLMs with GRPO using diverse group-based global opinion data, we significantly improved performance for the worst-performing groups, reduced loss imbalances across groups, and improved probability accuracies compared to non-robust baselines.
Group Robust Preference Optimization in Reward-free RLHF
[ "Shyam Sundhar Ramesh", "Yifan Hu", "Iason Chaimalas", "Viraj Mehta", "Pier Giuseppe Sessa", "Haitham Bou Ammar", "Ilija Bogunovic" ]
NeurIPS.cc/2024/Conference
2405.20304
[ "https://github.com/rsshyam/Group-robust-preference-optimization-bandits" ]
https://huggingface.co/papers/2405.20304
0
0
0
7
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=PQt6Vg2X5u
@inproceedings{ wu2024recursive, title={Recursive {PAC}-Bayes: A Frequentist Approach to Sequential Prior Updates with No Information Loss}, author={Yi-Shan Wu and Yijie Zhang and Badr-Eddine Ch{\'e}rief-Abdellatif and Yevgeny Seldin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PQt6Vg2X5u} }
PAC-Bayesian analysis is a frequentist framework for incorporating prior knowledge into learning. It was inspired by Bayesian learning, which allows sequential data processing and naturally turns posteriors from one processing step into priors for the next. However, despite two and a half decades of research, the ability to update priors sequentially without losing confidence information along the way remained elusive for PAC-Bayes. While PAC-Bayes allows construction of data-informed priors, the final confidence intervals depend only on the number of points that were not used for the construction of the prior, whereas confidence information in the prior, which is related to the number of points used to construct the prior, is lost. This limits the possibility and benefit of sequential prior updates, because the final bounds depend only on the size of the final batch. We present a novel and, in retrospect, surprisingly simple and powerful PAC-Bayesian procedure that allows sequential prior updates with no information loss. The procedure is based on a novel decomposition of the expected loss of randomized classifiers. The decomposition rewrites the loss of the posterior as an excess loss relative to a downscaled loss of the prior plus the downscaled loss of the prior, which is bounded recursively. As a side result, we also present a generalization of the split-kl and PAC-Bayes-split-kl inequalities to discrete random variables, which we use for bounding the excess losses, and which can be of independent interest. In empirical evaluation the new procedure significantly outperforms state-of-the-art.
Recursive PAC-Bayes: A Frequentist Approach to Sequential Prior Updates with No Information Loss
[ "Yi-Shan Wu", "Yijie Zhang", "Badr-Eddine Chérief-Abdellatif", "Yevgeny Seldin" ]
NeurIPS.cc/2024/Conference
2405.14681
[ "https://github.com/pyijiezhang/rpb" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=PPdJPIO3mV
@inproceedings{ tran2024accelerating, title={Accelerating Transformers with Spectrum-Preserving Token Merging}, author={Hoai-Chau Tran and Duy Minh Ho Nguyen and Manh-Duy Nguyen and TrungTin Nguyen and Ngan Hoang Le and Pengtao Xie and Daniel Sonntag and James Zou and Binh T. Nguyen and Mathias Niepert}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PPdJPIO3mV} }
Increasing the throughput of the Transformer architecture, a foundational component used in numerous state-of-the-art models for vision and language tasks (e.g., GPT, LLaVa), is an important problem in machine learning. One recent and effective strategy is to merge token representations within Transformer models, aiming to reduce computational and memory requirements while maintaining accuracy. Prior work has proposed algorithms based on Bipartite Soft Matching (BSM), which divides tokens into distinct sets and merges the top $k$ similar tokens. However, these methods have significant drawbacks, such as sensitivity to token-splitting strategies and damage to informative tokens in later layers. This paper presents a novel paradigm called PiToMe, which prioritizes the preservation of informative tokens using an additional metric termed the \textit{energy score}. This score identifies large clusters of similar tokens as high-energy, indicating potential candidates for merging, while smaller (unique and isolated) clusters are considered as low-energy and preserved. Experimental findings demonstrate that PiToMe saves 40-60\% of the FLOPs of the base models while exhibiting superior off-the-shelf performance on image classification (0.5\% average performance drop of ViT-MAEH compared to 2.6\% for baselines), image-text retrieval (0.3\% average performance drop of CLIP on Flickr30k compared to 4.5\% for others), and analogously in visual question answering with LLaVa-7B. Furthermore, PiToMe is theoretically shown to preserve intrinsic spectral properties to the original token space under mild conditions.
Accelerating Transformers with Spectrum-Preserving Token Merging
[ "Hoai-Chau Tran", "Duy Minh Ho Nguyen", "Manh-Duy Nguyen", "TrungTin Nguyen", "Ngan Hoang Le", "Pengtao Xie", "Daniel Sonntag", "James Zou", "Binh T. Nguyen", "Mathias Niepert" ]
NeurIPS.cc/2024/Conference
2405.16148
[ "https://github.com/hchautran/PiToMe" ]
https://huggingface.co/papers/2405.16148
0
0
0
10
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=PLbFid00aU
@inproceedings{ munn2024the, title={The Impact of Geometric Complexity on Neural Collapse in Transfer Learning}, author={Michael Munn and Benoit Dherin and Javier Gonzalvo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PLbFid00aU} }
Many of the recent advances in computer vision and language models can be attributed to the success of transfer learning via the pre-training of large foundation models. However, a theoretical framework which explains this empirical success is incomplete and remains an active area of research. Flatness of the loss surface and neural collapse have recently emerged as useful pre-training metrics which shed light on the implicit biases underlying pre-training. In this paper, we explore the geometric complexity of a model's learned representations as a fundamental mechanism that relates these two concepts. We show through experiments and theory that mechanisms which affect the geometric complexity of the pre-trained network also influence the neural collapse. Furthermore, we show how this effect of the geometric complexity generalizes to the neural collapse of new classes as well, thus encouraging better performance on downstream tasks, particularly in the few-shot setting.
The Impact of Geometric Complexity on Neural Collapse in Transfer Learning
[ "Michael Munn", "Benoit Dherin", "Javier Gonzalvo" ]
NeurIPS.cc/2024/Conference
2405.15706
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PKcCHncbzg
@inproceedings{ jiahaoli2024relationship, title={Relationship Prompt Learning is Enough for Open-Vocabulary Semantic Segmentation}, author={Jiahaoli and Yang Lu and Yuan Xie and Yanyun Qu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PKcCHncbzg} }
Open-vocabulary semantic segmentation (OVSS) aims to segment unseen classes without corresponding labels. Existing Vision-Language Model (VLM)-based methods leverage VLM's rich knowledge to enhance additional explicit segmentation-specific networks, yielding competitive results, but at the cost of extensive training. To reduce the cost, we attempt to enable VLM to directly produce the segmentation results without any segmentation-specific networks. Prompt learning offers a direct and parameter-efficient approach, yet it falls short in guiding VLM for pixel-level visual classification. Therefore, we propose the ${\bf R}$elationship ${\bf P}$rompt ${\bf M}$odule (${\bf RPM}$), which generates the relationship prompt that directs VLM to extract pixel-level semantic embeddings suitable for OVSS. Moreover, RPM integrates with VLM to construct the ${\bf R}$elationship ${\bf P}$rompt ${\bf N}$etwork (${\bf RPN}$), achieving OVSS without any segmentation-specific networks. RPN attains state-of-the-art performance with merely about ${\bf 3M}$ trainable parameters (2\% of total parameters).
Relationship Prompt Learning is Enough for Open-Vocabulary Semantic Segmentation
[ "Jiahaoli", "Yang Lu", "Yuan Xie", "Yanyun Qu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PK8xOCBQRO
@inproceedings{ jalan2024transfer, title={Transfer Learning for Latent Variable Network Models}, author={Akhil Jalan and Arya Mazumdar and Soumendu Sundar Mukherjee and Purnamrita Sarkar}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PK8xOCBQRO} }
We study transfer learning for estimation in latent variable network models. In our setting, the conditional edge probability matrices given the latent variables are represented by $P$ for the source and $Q$ for the target. We wish to estimate $Q$ given two kinds of data: (1) edge data from a subgraph induced by an $o(1)$ fraction of the nodes of $Q$, and (2) edge data from all of $P$. If the source $P$ has no relation to the target $Q$, the estimation error must be $\Omega(1)$. However, we show that if the latent variables are shared, then vanishing error is possible. We give an efficient algorithm that utilizes the ordering of a suitably defined graph distance. Our algorithm achieves $o(1)$ error and does not assume a parametric form on the source or target networks. Next, for the specific case of Stochastic Block Models we prove a minimax lower bound and show that a simple algorithm achieves this rate. Finally, we empirically demonstrate our algorithm's use on real-world and simulated graph transfer problems.
Transfer Learning for Latent Variable Network Models
[ "Akhil Jalan", "Arya Mazumdar", "Soumendu Sundar Mukherjee", "Purnamrita Sarkar" ]
NeurIPS.cc/2024/Conference
2406.03437
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PI0CDY6nmo
@inproceedings{ zhou2024towards, title={Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits}, author={Julien Zhou and Pierre Gaillard and Thibaud Rahier and Houssam Zenati and Julyan Arbel}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PI0CDY6nmo} }
We address the problem of stochastic combinatorial semi-bandits, where a player selects among $P$ actions from the power set of a set containing $d$ base items. Adaptivity to the problem's structure is essential in order to obtain optimal regret upper bounds. As estimating the coefficients of a covariance matrix can be manageable in practice, leveraging them should improve the regret. We design ``optimistic'' covariance-adaptive algorithms relying on online estimations of the covariance structure, called OLS-UCB-C and COS-V (only the variances for the latter). They both yield improved gap-free regret. Although COS-V can be slightly suboptimal, it improves on computational complexity by taking inspiration from Thompson Sampling approaches. It is the first sampling-based algorithm satisfying a $\sqrt{T}$ gap-free regret (up to poly-logs). We also show that in some cases, our approach efficiently leverages the semi-bandit feedback and outperforms bandit feedback approaches, not only in exponential regimes where $P\gg d$ but also when $P\leq d$, which is not covered by existing analyses.
Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits
[ "Julien Zhou", "Pierre Gaillard", "Thibaud Rahier", "Houssam Zenati", "Julyan Arbel" ]
NeurIPS.cc/2024/Conference
2402.15171
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PH7sdEanXP
@inproceedings{ lin2024scaling, title={Scaling Laws in Linear Regression: Compute, Parameters, and Data}, author={Licong Lin and Jingfeng Wu and Sham M. Kakade and Peter Bartlett and Jason D. Lee}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PH7sdEanXP} }
Empirically, large-scale deep learning models often satisfy a neural scaling law: the test error of the trained model improves polynomially as the model size and data size grow. However, conventional wisdom suggests the test error consists of approximation, bias, and variance errors, where the variance error increases with model size. This disagrees with the general form of neural scaling laws, which predict that increasing model size monotonically improves performance. We study the theory of scaling laws in an infinite dimensional linear regression setup. Specifically, we consider a model with $M$ parameters as a linear function of sketched covariates. The model is trained by one-pass stochastic gradient descent (SGD) using $N$ data. Assuming the optimal parameter satisfies a Gaussian prior and the data covariance matrix has a power-law spectrum of degree $a>1$, we show that the reducible part of the test error is $\Theta(M^{-(a-1)} + N^{-(a-1)/a})$. The variance error, which increases with $M$, is dominated by the other errors due to the implicit regularization of SGD, thus disappearing from the bound. Our theory is consistent with the empirical neural scaling laws and verified by numerical simulation.
Scaling Laws in Linear Regression: Compute, Parameters, and Data
[ "Licong Lin", "Jingfeng Wu", "Sham M. Kakade", "Peter Bartlett", "Jason D. Lee" ]
NeurIPS.cc/2024/Conference
2406.08466
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PGOuBHYdbr
@inproceedings{ zhang2024thompson, title={Thompson Sampling For Combinatorial Bandits: Polynomial Regret and Mismatched Sampling Paradox}, author={Raymond Zhang and Richard Combes}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PGOuBHYdbr} }
We consider Thompson Sampling (TS) for linear combinatorial semi-bandits and subgaussian rewards. We propose the first known TS whose finite-time regret does not scale exponentially with the dimension of the problem. We further show the mismatched sampling paradox: A learner who knows the rewards distributions and samples from the correct posterior distribution can perform exponentially worse than a learner who does not know the rewards and simply samples from a well-chosen Gaussian posterior. The code used to generate the experiments is available at https://github.com/RaymZhang/CTS-Mismatched-Paradox
Thompson Sampling For Combinatorial Bandits: Polynomial Regret and Mismatched Sampling Paradox
[ "Raymond Zhang", "Richard Combes" ]
NeurIPS.cc/2024/Conference
2410.05441
[ "https://github.com/raymzhang/cts-mismatched-paradox" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=PEEqnXlSCk
@inproceedings{ jia2024sdpbit, title={{SDP}4Bit: Toward 4-bit Communication Quantization in Sharded Data Parallelism for {LLM} Training}, author={Jinda Jia and Cong Xie and Hanlin Lu and Daoce Wang and Hao Feng and Chengming Zhang and Baixi Sun and Haibin Lin and Zhi Zhang and Xin Liu and Dingwen Tao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PEEqnXlSCk} }
Recent years have witnessed a clear trend towards language models with an ever-increasing number of parameters, as well as the growing training overhead and memory usage. Distributed training, particularly through Sharded Data Parallelism (ShardedDP) which partitions optimizer states among workers, has emerged as a crucial technique to mitigate training time and memory usage. Yet, a major challenge in the scalability of ShardedDP is the intensive communication of weights and gradients. While compression techniques can alleviate this issue, they often result in worse accuracy. Driven by this limitation, we propose SDP4Bit (Toward 4Bit Communication Quantization in Sharded Data Parallelism for LLM Training), which effectively reduces the communication of weights and gradients to nearly 4 bits via two novel techniques: quantization on weight differences, and two-level gradient smooth quantization. Furthermore, SDP4Bit presents an algorithm-system co-design with runtime optimization to minimize the computation overhead of compression. In addition to the theoretical guarantees of convergence, we empirically evaluate the accuracy of SDP4Bit on the pre-training of GPT models with up to 6.7 billion parameters, and the results demonstrate a negligible impact on training loss. Furthermore, speed experiments show that SDP4Bit achieves up to 4.08× speedup in end-to-end throughput on a scale of 128 GPUs.
SDP4Bit: Toward 4-bit Communication Quantization in Sharded Data Parallelism for LLM Training
[ "Jinda Jia", "Cong Xie", "Hanlin Lu", "Daoce Wang", "Hao Feng", "Chengming Zhang", "Baixi Sun", "Haibin Lin", "Zhi Zhang", "Xin Liu", "Dingwen Tao" ]
NeurIPS.cc/2024/Conference
2410.15526
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PCgnTiGC9K
@inproceedings{ wang2024credal, title={Credal Deep Ensembles for Uncertainty Quantification}, author={Kaizheng Wang and Fabio Cuzzolin and Shireen Kudukkil Manchingal and Keivan Shariatmadar and David Moens and Hans Hallez}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PCgnTiGC9K} }
This paper introduces an innovative approach to classification called Credal Deep Ensembles (CreDEs), namely, ensembles of novel Credal-Set Neural Networks (CreNets). CreNets are trained to predict a lower and an upper probability bound for each class, which, in turn, determine a convex set of probabilities (credal set) on the class set. The training employs a loss inspired by distributionally robust optimization which simulates the potential divergence of the test distribution from the training distribution, in such a way that the width of the predicted probability interval reflects the epistemic uncertainty about the future data distribution. Ensembles can be constructed by training multiple CreNets, each associated with a different random seed, and averaging the outputted intervals. Extensive experiments are conducted on various out-of-distribution (OOD) detection benchmarks (CIFAR10/100 vs SVHN/Tiny-ImageNet, CIFAR10 vs CIFAR10-C, ImageNet vs ImageNet-O) and using different network architectures (ResNet50, VGG16, and ViT Base). Compared to Deep Ensemble baselines, CreDEs demonstrate higher test accuracy, lower expected calibration error, and significantly improved epistemic uncertainty estimation.
Credal Deep Ensembles for Uncertainty Quantification
[ "Kaizheng Wang", "Fabio Cuzzolin", "Shireen Kudukkil Manchingal", "Keivan Shariatmadar", "David Moens", "Hans Hallez" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PAu0W5YAKC
@inproceedings{ yan2024linear, title={Linear Causal Bandits: Unknown Graph and Soft Interventions}, author={Zirui Yan and Ali Tajer}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=PAu0W5YAKC} }
Designing causal bandit algorithms depends on two central categories of assumptions: (i) the extent of information about the underlying causal graphs and (ii) the extent of information about interventional statistical models. There have been extensive recent advances in dispensing with assumptions on either category. These include assuming known graphs but unknown interventional distributions, and the converse setting of assuming unknown graphs but access to restrictive hard/$\operatorname{do}$ interventions, which removes the stochasticity and ancestral dependencies. Nevertheless, the problem in its general form, i.e., _unknown_ graph and _unknown_ stochastic intervention models, remains open. This paper addresses this problem and establishes that in a graph with $N$ nodes, maximum in-degree $d$ and maximum causal path length $L$, after $T$ interaction rounds the regret upper bound scales as $\tilde{\mathcal{O}}((cd)^{L-\frac{1}{2}}\sqrt{T} + d + RN)$ where $c>1$ is a constant and $R$ is a measure of intervention power. A universal minimax lower bound is also established, which scales as $\Omega(d^{L-\frac{3}{2}}\sqrt{T})$. Importantly, the graph size $N$ has a diminishing effect on the regret as $T$ grows. These bounds have matching behavior in $T$, exponential dependence on $L$, and polynomial dependence on $d$ (with the gap $d$). On the algorithmic aspect, the paper presents a novel way of designing a computationally efficient CB algorithm, addressing a challenge that the existing CB algorithms using soft interventions face.
Linear Causal Bandits: Unknown Graph and Soft Interventions
[ "Zirui Yan", "Ali Tajer" ]
NeurIPS.cc/2024/Conference
2411.02383
[ "https://github.com/ZiruiYan/Linear-Causal-Bandit-Unknown-Graph" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster