Dataset columns (name: dtype, range):
bibtex_url: null
proceedings: stringlengths, 42 to 42
bibtext: stringlengths, 197 to 848
abstract: stringlengths, 303 to 3.45k
title: stringlengths, 10 to 159
authors: sequencelengths, 1 to 34
id: stringclasses, 44 values
arxiv_id: stringlengths, 0 to 10
GitHub: sequencelengths, 1 to 1
paper_page: stringclasses, 899 values
n_linked_authors: int64, -1 to 13
upvotes: int64, -1 to 109
num_comments: int64, -1 to 13
n_authors: int64, -1 to 92
Models: sequencelengths, 0 to 100
Datasets: sequencelengths, 0 to 19
Spaces: sequencelengths, 0 to 100
old_Models: sequencelengths, 0 to 100
old_Datasets: sequencelengths, 0 to 19
old_Spaces: sequencelengths, 0 to 100
paper_page_exists_pre_conf: int64, 0 to 1
type: stringclasses, 2 values
null
https://openreview.net/forum?id=fwXj1c6faX
@inproceedings{ liu2023zeroshot, title={Zero-shot Cross-task Preference Alignment for Offline {RL} via Optimal Transport}, author={Runze Liu and Yali Du and Fengshuo Bai and Jiafei Lyu and Xiu Li}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=fwXj1c6faX} }
In preference-based Reinforcement Learning (PbRL), aligning rewards with human intentions often necessitates a substantial volume of human-provided labels. Furthermore, the expensive preference data from prior tasks often lacks reusability for subsequent tasks, resulting in repetitive labeling for each new task. In this paper, we propose a novel zero-shot cross-task preference-based RL algorithm that leverages labeled preference data from source tasks to infer labels for target tasks, eliminating the requirement for human queries. Our approach utilizes Gromov-Wasserstein distance to align trajectory distributions between source and target tasks. The solved optimal transport matrix serves as a correspondence between trajectories of two tasks, making it possible to identify corresponding trajectory pairs between tasks and transfer the preference labels. However, direct learning from these inferred labels might introduce noisy or inaccurate reward functions. To this end, we introduce Robust Preference Transformer, which considers both reward mean and uncertainty by modeling rewards as Gaussian distributions. Through extensive empirical validation on robotic manipulation tasks from Meta-World and Robomimic, our approach exhibits strong capabilities of transferring preferences between tasks in a zero-shot way and learns reward functions from noisy labels robustly. Notably, our approach significantly surpasses existing methods in limited-data scenarios. The videos of our method are available on the website: https://sites.google.com/view/pot-rpt.
Zero-shot Cross-task Preference Alignment for Offline RL via Optimal Transport
[ "Runze Liu", "Yali Du", "Fengshuo Bai", "Jiafei Lyu", "Xiu Li" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
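The entry above aligns trajectory distributions across tasks with a Gromov-Wasserstein coupling and transfers preference labels along it. As a rough, hedged illustration only (not the authors' code), the sketch below uses the POT library on placeholder trajectory features; the label-transfer rule via the highest-mass match is an assumption for demonstration.

```python
# Illustrative sketch (not the paper's implementation): align two trajectory
# sets with a Gromov-Wasserstein coupling and transfer labels along it.
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
src = rng.normal(size=(50, 16))           # placeholder source-task trajectory features
tgt = rng.normal(size=(60, 16))           # placeholder target-task trajectory features
src_labels = rng.integers(0, 2, size=50)  # placeholder preference labels

# Intra-task pairwise distance matrices (GW compares geometries, not raw points).
C1 = ot.dist(src, src)
C2 = ot.dist(tgt, tgt)
p, q = ot.unif(len(src)), ot.unif(len(tgt))

# Gromov-Wasserstein coupling between the two trajectory distributions.
T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')

# Transfer to each target trajectory the label of its highest-mass source match.
transferred = src_labels[T.argmax(axis=0)]
print(transferred[:10])
```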
null
https://openreview.net/forum?id=eUZZHoTknJ
@inproceedings{ le2023accelerating, title={Accelerating Motion Planning via Optimal Transport}, author={An Le and Georgia Chalvatzaki and Armin Biess and Jan Peters}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=eUZZHoTknJ} }
Motion planning is still an open problem for many disciplines, e.g., robotics, autonomous driving, due to the need for high computational resources that hinders real-time, efficient decision-making. A class of methods striving to provide smooth solutions is gradient-based trajectory optimization. However, those methods usually suffer from bad local minima, while for many settings, they may be inapplicable due to the absence of easy-to-access gradients of the optimization objectives. In response to these issues, we introduce Motion Planning via Optimal Transport (MPOT)---a \textit{gradient-free} method that optimizes a batch of smooth trajectories over highly nonlinear costs, even for high-dimensional tasks, while imposing smoothness through a Gaussian Process dynamics prior via the planning-as-inference perspective. To facilitate batch trajectory optimization, we introduce an original zero-order and highly-parallelizable update rule---the Sinkhorn Step, which uses the regular polytope family for its search directions. Each regular polytope, centered on trajectory waypoints, serves as a local cost-probing neighborhood, acting as a \textit{trust region} where the Sinkhorn Step ``transports'' local waypoints toward low-cost regions. We theoretically show that the Sinkhorn Step guides the optimizing parameters toward local minima regions of non-convex objective functions. We then show the efficiency of MPOT in a range of problems from low-dimensional point-mass navigation to high-dimensional whole-body robot motion planning, evincing its superiority compared to popular motion planners, paving the way for new applications of optimal transport in motion planning.
Accelerating Motion Planning via Optimal Transport
[ "An Le", "Georgia Chalvatzaki", "Armin Biess", "Jan Peters" ]
Workshop/OTML
2309.15970
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=aJbyq2lmQc
@inproceedings{ nietert2023outlierrobust, title={Outlier-Robust Wasserstein {DRO}}, author={Sloan Nietert and Ziv Goldfeld and Soroosh Shafieezadeh-Abadeh}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=aJbyq2lmQc} }
Distributionally robust optimization (DRO) is an effective approach for data-driven decision-making in the presence of uncertainty. Geometric uncertainty due to sampling or localized perturbations of data points is captured by Wasserstein DRO (WDRO), which seeks to learn a model that performs uniformly well over a Wasserstein ball centered around the observed data distribution. However, WDRO fails to account for non-geometric perturbations such as adversarial outliers, which can greatly distort the Wasserstein distance measurement and impede the learned model. We address this gap by proposing a novel outlier-robust WDRO framework for decision-making under both geometric (Wasserstein) perturbations and non-geometric (total variation (TV)) contamination that allows an $\varepsilon$-fraction of data to be arbitrarily corrupted. We design an uncertainty set using a certain robust Wasserstein ball that accounts for both perturbation types. We derive minimax optimal excess risk bounds for this procedure that explicitly capture the Wasserstein and TV risks. We prove a strong duality result that enables efficient computation of our outlier-robust WDRO problem. When the loss function depends only on low-dimensional features of the data, we eliminate certain dimension dependencies from the risk bounds that are unavoidable in the general setting. Finally, we present experiments validating our theory on standard regression and classification tasks.
Outlier-Robust Wasserstein DRO
[ "Sloan Nietert", "Ziv Goldfeld", "Soroosh Shafieezadeh-Abadeh" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ZbioTIO6y6
@inproceedings{ mroueh2023towards, title={Towards a Statistical Theory of Learning to Learn In-context with Transformers}, author={Youssef Mroueh}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=ZbioTIO6y6} }
Classical learning theory focuses on supervised learning of functions via empirical risk minimization where labeled examples for a particular task are represented by the data distribution experienced by the model during training. Recently, in-context learning emerged as a paradigm shift in large pre-trained models. When conditioned on a few labeled examples of tasks potentially unseen during training, the model infers the task at hand and makes predictions on new points. Learning to learn in-context, on the other hand, aims at training models in a meta-learning setup that generalize to new unseen tasks from only a few shots of labeled examples. We present in this paper a statistical learning framework for the problem of in-context meta learning and define a function class that enables it. The meta-learner is abstracted as a function defined on the cross product of the probability space (representing context) and the data space. The data distribution is sampled from a meta distribution on tasks. Thanks to the regularity we assume on the function class in the Wasserstein geometry, we leverage tools from optimal transport in order to study the generalization of the meta learner to unseen tasks. Finally, we show that encoder transformers exhibit this type of regularity and leverage our theory to analyze their generalization properties.
Towards a Statistical Theory of Learning to Learn In-context with Transformers
[ "Youssef Mroueh" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=YI873MmwQJ
@inproceedings{ polo2023estimating, title={Estimating Fr\'echet bounds for validating programmatic weak supervision}, author={Felipe Maia Polo and Mikhail Yurochkin and Moulinath Banerjee and Subha Maity and Yuekai Sun}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=YI873MmwQJ} }
We develop methods for estimating Fréchet bounds on (possibly high-dimensional) distribution classes in which some variables are continuous-valued. We establish the statistical correctness of the computed bounds under uncertainty in the marginal constraints and demonstrate the usefulness of our algorithms by evaluating the performance of machine learning (ML) models trained with programmatic weak supervision (PWS). PWS is a framework for principled learning from weak supervision inputs (e.g., crowdsourced labels, knowledge bases, pre-trained models on related tasks, etc.), and it has achieved remarkable success in many areas of science and engineering. Unfortunately, it is generally difficult to validate the performance of ML models trained with PWS due to the absence of labeled data. Our algorithms address this issue by estimating sharp lower and upper bounds for performance metrics such as accuracy/recall/precision drawing connections to tools from computational optimal transport.
Estimating Fréchet bounds for validating programmatic weak supervision
[ "Felipe Maia Polo", "Mikhail Yurochkin", "Moulinath Banerjee", "Subha Maity", "Yuekai Sun" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=XWKy9bjNOA
@inproceedings{ chen2023semidefinite, title={Semidefinite Relaxations of the Gromov-Wasserstein Distance}, author={Junyu Chen and Binh Nguyen and Yong Sheng Soh}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=XWKy9bjNOA} }
The Gromov-Wasserstein distance (GW) is an extension of the optimal transport problem that allows one to match objects between incomparable spaces. At its core, the GW distance is specified as the solution of a non-convex quadratically constrained quadratic program, which is not known to be tractable to solve. In particular, existing solvers are only able to find local optimizers. In this work, we propose a semi-definite programming (SDP) relaxation of the GW distance. Our approach provides the ability to compute the optimality gap of any transport map from the global optimal solution. Our initial numerical experiments suggest that our proposed relaxation is strong in that it frequently computes the global optimal solution, together with a proof of global optimality.
Semidefinite Relaxations of the Gromov-Wasserstein Distance
[ "Junyu Chen", "Binh Nguyen", "Yong Sheng Soh" ]
Workshop/OTML
2312.14572
[ "https://github.com/tbng/gwsdp" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WWk68S38Cn
@inproceedings{ xu2023normalizing, title={Normalizing flow neural networks by {JKO} scheme}, author={Chen Xu and Xiuyuan Cheng and Yao Xie}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=WWk68S38Cn} }
Normalizing flow is a class of deep generative models for efficient sampling and likelihood estimation, which achieves attractive performance, particularly in high dimensions. The flow is often implemented using a sequence of invertible residual blocks. Existing works adopt special network architectures and regularization of flow trajectories. In this paper, we develop a neural ODE flow network called JKO-iFlow, inspired by the Jordan-Kinderlehrer-Otto (JKO) scheme, which unfolds the discrete-time dynamics of the Wasserstein gradient flow. The proposed method stacks residual blocks one after another, allowing efficient block-wise training of the residual blocks, avoiding sampling SDE trajectories and score matching or variational learning, thus reducing the memory load and difficulty in end-to-end training. We also develop adaptive time reparameterization of the flow network with a progressive refinement of the induced trajectory in probability space to improve the model accuracy further. Experiments with synthetic and real data show that the proposed JKO-iFlow network achieves competitive performance compared with existing flow and diffusion models at a significantly reduced computational and memory cost.
Normalizing flow neural networks by JKO scheme
[ "Chen Xu", "Xiuyuan Cheng", "Yao Xie" ]
Workshop/OTML
2212.14424
[ "https://github.com/hamrel-cxu/jko-iflow" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ToLEG6EEaU
@inproceedings{ xiong2023symotflow, title={Sy{MOT}-Flow: Learning optimal transport flow for two arbitrary distributions with maximum mean discrepancy}, author={Zhe Xiong and Qiaoqiao Ding and Xiaoqun Zhang}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=ToLEG6EEaU} }
Finding a transformation between two unknown probability distributions from samples is crucial for modeling complex data distributions and performing tasks such as density estimation, sample generation, and statistical inference. One powerful framework for such transformations is normalizing flow, which transforms an unknown distribution into a standard normal distribution using an invertible network. In this paper, we introduce a novel model called SyMOT-Flow that trains an invertible transformation by minimizing the symmetric maximum mean discrepancy between samples from two unknown distributions, and we incorporate an optimal transport cost as regularization to obtain a short-distance and interpretable transformation. The resulting transformation leads to more stable and accurate sample generation. We establish several theoretical results for the proposed model and demonstrate its effectiveness with low-dimensional illustrative examples as well as high-dimensional generative samples obtained through the forward and reverse flows.
SyMOT-Flow: Learning optimal transport flow for two arbitrary distributions with maximum mean discrepancy
[ "Zhe Xiong", "Qiaoqiao Ding", "Xiaoqun Zhang" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ReUeclFikK
@inproceedings{ xu2023computing, title={Computing high-dimensional optimal transport by flow neural networks}, author={Chen Xu and Xiuyuan Cheng and Yao Xie}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=ReUeclFikK} }
Flow-based models are widely used in generative tasks, including normalizing flow, where a neural network transports from a data distribution $P$ to a normal distribution. This work develops a flow-based model that transports from $P$ to an arbitrary $Q$ where both distributions are only accessible via finite samples. We propose to learn the dynamic optimal transport between $P$ and $Q$ by training a flow neural network. The model is trained to find an invertible transport map between $P$ and $Q$ optimally by minimizing the transport cost. The trained optimal transport flow allows for performing many downstream tasks, including infinitesimal density ratio estimation and distribution interpolation in the latent space for generative models. The effectiveness of the proposed model on high-dimensional data is empirically demonstrated in mutual information estimation, energy-based generative models, and image-to-image translation.
Computing high-dimensional optimal transport by flow neural networks
[ "Chen Xu", "Xiuyuan Cheng", "Yao Xie" ]
Workshop/OTML
2305.11857
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=RYPNAOWANk
@inproceedings{ zhang2023duality, title={Duality and Sample Complexity for the Gromov-Wasserstein Distance}, author={Zhengxin Zhang and Ziv Goldfeld and Youssef Mroueh and Bharath Sriperumbudur}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=RYPNAOWANk} }
The Gromov-Wasserstein (GW) distance, rooted in optimal transport (OT) theory, quantifies dissimilarity between metric measure spaces and provides a framework for aligning heterogeneous datasets. While computational aspects of the GW problem have been widely studied, a duality theory and fundamental statistical questions concerning empirical convergence rates remained obscure. This work closes these gaps for the quadratic GW distance over Euclidean spaces of different dimensions $d_x$ and $d_y$. We derive a dual form that represents the GW distance in terms of the well-understood OT problem. This enables employing proof techniques from statistical OT based on regularity analysis of dual potentials and empirical process theory, using which we establish the first GW empirical convergence rates. The derived two-sample rate is $n^{-2/\max\{\min\{d_x,d_y\},4\}}$ (up to a log factor when $\min\{d_x,d_y\}=4$), which matches the corresponding rates for OT. We also provide matching lower bounds, thus establishing sharpness of the derived rates. Lastly, the duality is leveraged to shed new light on the open problem of the one-dimensional GW distance between uniform distributions on $n$ points, illuminating why the identity and anti-identity permutations may not be optimal. Our results serve as a first step towards a comprehensive statistical theory as well as computational advancements for GW distances, based on the discovered dual formulations.
Duality and Sample Complexity for the Gromov-Wasserstein Distance
[ "Zhengxin Zhang", "Ziv Goldfeld", "Youssef Mroueh", "Bharath Sriperumbudur" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=RXYURNZzfs
@inproceedings{ liu2023ptlp, title={{PTLP}: Partial Transport \$L{\textasciicircum}p\$ Distances}, author={Xinran Liu and Yikun Bai and Huy Tran and Zhanqi Zhu and Matthew Thorpe and Soheil Kolouri}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=RXYURNZzfs} }
Optimal transport and its related problems, including optimal partial transport, have proven to be valuable tools in machine learning for computing meaningful distances between probability or positive measures. This success has led to a growing interest in defining transport-based distances that allow for comparing signed measures and, more generally, multi-channeled signals. Transport $L^p$ distances are notable extensions of the optimal transport framework to signed and possibly multi-channeled signals. In this paper, we introduce partial transport $L^p$ distances as a new family of metrics for comparing generic signals, benefiting from the robustness of partial transport distances. We provide theoretical background such as the existence of optimal plans and the behavior of the distance in various limits. Furthermore, we introduce the sliced variation of these distances, which allows for faster comparison of generic signals. Finally, we demonstrate the application of the proposed distances in signal class separability and nearest neighbor classification.
PTLP: Partial Transport L^p Distances
[ "Xinran Liu", "Yikun Bai", "Huy Tran", "Zhanqi Zhu", "Matthew Thorpe", "Soheil Kolouri" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Puw2kj7Wc7
@inproceedings{ le2023optimal, title={Optimal Transport for Measures with Noisy Tree Metric}, author={Tam Le and Truyen Nguyen and Kenji Fukumizu}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=Puw2kj7Wc7} }
We study the optimal transport (OT) problem for probability measures supported on a tree metric space. It is known that such an OT problem (i.e., tree-Wasserstein (TW)) admits a closed-form expression, but depends fundamentally on the underlying tree structure over supports of input measures. In practice, the given tree structure may, however, be perturbed due to noisy or adversarial measurements. In order to mitigate this issue, we follow the max-min robust OT approach which considers the maximal possible distances between two input measures over an uncertainty set of tree metrics. In general, this approach is hard to compute, even for measures supported in $1$-dimensional space, due to its non-convexity and non-smoothness which hinders its practical applications, especially for large-scale settings. In this work, we propose \emph{novel uncertainty sets of tree metrics} from the lens of edge deletion/addition which covers a diversity of tree structures in an elegant framework. Consequently, by building upon the proposed uncertainty sets, and leveraging the tree structure over supports, we show that the max-min robust OT also admits a closed-form expression for fast computation, as does its standard OT counterpart (i.e., TW). Furthermore, we demonstrate that the max-min robust OT satisfies the metric property and is negative definite. We then exploit its negative definiteness to propose \emph{positive definite kernels} and test them in several simulations on various real-world datasets on document classification and topological data analysis for measures with noisy tree metric.
Optimal Transport for Measures with Noisy Tree Metric
[ "Tam Le", "Truyen Nguyen", "Kenji Fukumizu" ]
Workshop/OTML
2310.13653
[ "https://github.com/lttam/robustot-noisytreemetric" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
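For context on the closed form mentioned in the abstract above: the standard tree-Wasserstein distance sums, over edges, the edge weight times the absolute difference of mass in the subtree below that edge. A minimal sketch on a toy tree follows (placeholder inputs; this is the plain TW formula, not the paper's max-min robust variant).

```python
# Tree-Wasserstein sketch: TW(mu, nu) = sum_e w_e * |(mu - nu)(subtree below e)|.
# Toy tree and measures are placeholders; not the paper's robust variant.

def tree_wasserstein(children, weights, mu, nu, root=0):
    """children: node -> list of children; weights[c] = weight of the edge into c."""
    total = 0.0

    def subtree_mass(node):
        nonlocal total
        m = mu.get(node, 0.0) - nu.get(node, 0.0)
        for c in children.get(node, []):
            diff = subtree_mass(c)
            total += weights[c] * abs(diff)   # contribution of the edge above c
            m += diff
        return m

    subtree_mass(root)
    return total

children = {0: [1, 2], 1: [3, 4]}           # toy rooted tree
weights = {1: 1.0, 2: 2.0, 3: 0.5, 4: 0.5}  # edge weights, keyed by child node
mu = {3: 0.5, 4: 0.5}
nu = {2: 1.0}
print(tree_wasserstein(children, weights, mu, nu))  # 3.5
```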
null
https://openreview.net/forum?id=PkoKaLNvGW
@inproceedings{ kwegyir-aggrey2023repairing, title={Repairing Regressors for Fair Binary Classification at Any Decision Threshold}, author={Kweku Kwegyir-Aggrey and Jessica Dai and A. Feder Cooper and John Dickerson and Keegan Hines and Suresh Venkatasubramanian}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=PkoKaLNvGW} }
We study the problem of post-processing a supervised machine-learned regressor to maximize fair binary classification at all decision thresholds. By decreasing the statistical distance between each group's score distributions, we show that we can increase fair performance across all thresholds at once, and that we can do so without a large decrease in accuracy. To this end, we introduce a formal measure of Distributional Parity, which captures the degree of similarity in the distributions of classifications for different protected groups. Our main result is to put forward a novel post-processing algorithm based on optimal transport, which provably maximizes Distributional Parity, thereby attaining common notions of group fairness like Equalized Odds or Equal Opportunity at all thresholds. We demonstrate on two fairness benchmarks that our technique works well empirically, while also outperforming and generalizing similar techniques from related work.
Repairing Regressors for Fair Binary Classification at Any Decision Threshold
[ "Kweku Kwegyir-Aggrey", "Jessica Dai", "A. Feder Cooper", "John Dickerson", "Keegan Hines", "Suresh Venkatasubramanian" ]
Workshop/OTML
2203.07490
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PMGGrKTIii
@inproceedings{ akbari2023causal, title={Causal Discovery via Monotone Triangular Transport Maps}, author={Sina Akbari and Luca Ganassali and Negar Kiyavash}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=PMGGrKTIii} }
We study the problem of causal structure learning from data using transport maps. Specifically, we first provide a constraint-based method which builds upon lower-triangular monotone parametric transport maps to design conditional independence tests which are agnostic to the noise distribution. We provide an algorithm for causal discovery up to Markov Equivalence for general structural equations and noise distributions, which allows for settings with latent variables. Our approach also extends to score-based causal discovery by providing a novel means for defining scores. This allows us to uniquely recover the causal graph under additional identifiability and structural assumptions, such as additive noise or post-nonlinear models. We provide experimental results to compare the proposed approach with the state of the art on both synthetic and real-world datasets.
Causal Discovery via Monotone Triangular Transport Maps
[ "Sina Akbari", "Luca Ganassali", "Negar Kiyavash" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=OyDMxFt35U
@inproceedings{ singh2023applications, title={Applications of Optimal Transport Distances in Unsupervised Auto{ML}}, author={Prabhant Singh and Joaquin Vanschoren}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=OyDMxFt35U} }
In this work, we explore the utility of Optimal Transport-based dataset similarity to find similar \textit{unlabeled tabular} datasets, especially in the context of automated machine learning (AutoML) on unsupervised tasks. Since unsupervised tasks don't have a ground truth that optimization techniques can optimize towards, but often do have historical information on which pipelines work best, we propose to meta-learn over prior tasks to transfer useful pipelines to new tasks. Our intuition behind this work is that pipelines that worked well on datasets with a \textit{similar underlying data distribution} will work well on new datasets. We use Optimal Transport distances to find this similarity between unlabeled tabular datasets and recommend machine learning pipelines on two downstream unsupervised tasks: Outlier Detection and Clustering. We obtain very promising results against existing baselines and state-of-the-art methods.
Applications of Optimal Transport Distances in Unsupervised AutoML
[ "Prabhant Singh", "Joaquin Vanschoren" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
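The pipeline-recommendation idea above hinges on an OT distance between unlabeled tabular datasets. A minimal, hedged sketch of such a distance with POT (uniform weights, squared-Euclidean cost, synthetic placeholder data; not the authors' AutoML system):

```python
# Minimal sketch: exact OT cost between two unlabeled tabular datasets
# (synthetic placeholder data; not the authors' AutoML pipeline).
import numpy as np
import ot

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))             # dataset A, rows = samples
Y = rng.normal(loc=0.5, size=(150, 8))    # dataset B

M = ot.dist(X, Y)                         # pairwise squared-Euclidean ground cost
a, b = ot.unif(len(X)), ot.unif(len(Y))
cost = ot.emd2(a, b, M)                   # exact OT cost, usable as a similarity score
print(f"OT cost between datasets: {cost:.3f}")
```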
null
https://openreview.net/forum?id=Ne4WjToIb6
@inproceedings{ tamir2023dataconditional, title={Data-Conditional Diffusion Bridges}, author={Ella Tamir and Martin Trapp and Arno Solin}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=Ne4WjToIb6} }
The dynamic Schrödinger bridge problem provides an appealing setting for solving constrained time-series data generation tasks posed as an iteration over optimal transport problems. Recent works have demonstrated state-of-the-art results but are limited to learning bridges with only initial and terminal constraints. Our work extends this paradigm by proposing the Iterative Smoothing Bridge (ISB). We integrate Bayesian filtering and optimal control into learning the diffusion process, enabling constrained stochastic processes governed by sparse observations at intermediate stages and terminal constraints, and assess the effectiveness of ISB on a single-cell embryo RNA data set.
Data-Conditional Diffusion Bridges
[ "Ella Tamir", "Martin Trapp", "Arno Solin" ]
Workshop/OTML
[ "https://github.com/aaltoml/iterative-smoothing-bridge" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=MJ1qZBwHHI
@inproceedings{ lu2023characterizing, title={Characterizing Out-of-Distribution Error via Optimal Transport}, author={Yuzhe Lu and Yilong Qin and Runtian Zhai and Andrew Shen and Ketong Chen and Zhenlin Wang and Soheil Kolouri and Simon Stepputtis and Joseph Campbell and Katia Sycara}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=MJ1qZBwHHI} }
Out-of-distribution (OOD) data poses serious challenges in deployed machine learning models, so methods of predicting a model's performance on OOD data without labels are important for machine learning safety. While a number of methods have been proposed by prior work, they often underestimate the actual error, sometimes by a large margin, which greatly impacts their applicability to real tasks. In this work, we identify pseudo-label shift, or the difference between the predicted and true OOD label distributions, as a key indicator of this underestimation. Based on this observation, we introduce a novel method for estimating model performance by leveraging optimal transport theory, Confidence Optimal Transport (COT), and show that it provably provides more robust error estimates in the presence of pseudo-label shift. Additionally, we introduce an empirically-motivated variant of COT, Confidence Optimal Transport with Thresholding (COTT), which applies thresholding to the individual transport costs and further improves the accuracy of COT's error estimates. We evaluate COT and COTT on a variety of standard benchmarks that induce various types of distribution shift -- synthetic, novel subpopulation, and natural -- and show that our approaches significantly outperform existing state-of-the-art methods with up to 3x lower prediction error.
Characterizing Out-of-Distribution Error via Optimal Transport
[ "Yuzhe Lu", "Yilong Qin", "Runtian Zhai", "Andrew Shen", "Ketong Chen", "Zhenlin Wang", "Soheil Kolouri", "Simon Stepputtis", "Joseph Campbell", "Katia Sycara" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=LXGhneskDs
@inproceedings{ alfonso2023a, title={A generative flow model for conditional sampling via optimal transport}, author={Jason Alfonso and Ricardo Baptista and Anupam Bhakta and Noam Gal and Alfin Hou and Vasilisa Lyubimova and Daniel Pocklington and Josef Sajonz and Giulio Trigila and Ryan Tsai}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=LXGhneskDs} }
Sampling conditional distributions is a fundamental task for Bayesian inference and density estimation. Generative models characterize conditionals by learning a transport map that pushes forward a reference (e.g., a standard Gaussian) to the target distribution. While these approaches can successfully describe many non-Gaussian problems, their performance is often limited by parametric bias and the reliability of gradient-based (adversarial) optimizers to learn the map. This work proposes a non-parametric generative model that adaptively maps reference samples to the target. The model uses block-triangular transport maps, whose components characterize conditionals of the target distribution. These maps arise from solving an optimal transport problem with a weighted $L^2$ cost function, thereby extending the data-driven approach in [Trigila and Tabak, 2016] for conditional sampling. The proposed approach is demonstrated on a low-dimensional example and a parameter inference problem involving nonlinear ODEs.
A generative flow model for conditional sampling via optimal transport
[ "Jason Alfonso", "Ricardo Baptista", "Anupam Bhakta", "Noam Gal", "Alfin Hou", "Vasilisa Lyubimova", "Daniel Pocklington", "Josef Sajonz", "Giulio Trigila", "Ryan Tsai" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=L7c0hEWO2m
@inproceedings{ baheri2023understanding, title={Understanding Reward Ambiguity Through Optimal Transport Theory in Inverse Reinforcement Learning}, author={Ali Baheri}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=L7c0hEWO2m} }
In inverse reinforcement learning (IRL), the central objective is to infer underlying reward functions from observed expert behaviors in a way that not only explains the given data but also generalizes to unseen scenarios, ensuring robustness against reward ambiguity—where multiple reward functions can equally explain the same expert behaviors. While significant strides have been made in addressing this issue, current methods often struggle with high-dimensional problems and lack a geometric foundation. This paper harnesses optimal transport (OT) theory to provide a fresh perspective on these challenges. By utilizing the Wasserstein distance from OT, we establish a geometric framework that allows for quantifying reward ambiguity and identifying a central representation or centroid of reward functions. These insights pave the way for robust IRL methodologies anchored in geometric interpretations, offering a structured approach to tackle reward ambiguity in high-dimensional settings.
Understanding Reward Ambiguity Through Optimal Transport Theory in Inverse Reinforcement Learning
[ "Ali Baheri" ]
Workshop/OTML
2310.12055
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Kwrz4fYD2E
@inproceedings{ assel2023optimal, title={Optimal Transport with Adaptive Regularisation}, author={Hugues Van Assel and Titouan Vayer and R{\'e}mi Flamary and Nicolas Courty}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=Kwrz4fYD2E} }
Regularising the primal formulation of optimal transport (OT) with a strictly convex term leads to enhanced numerical complexity and a denser transport plan. Many formulations impose a global constraint on the transport plan, for instance by relying on entropic regularisation. As it is more expensive to diffuse mass for outlier points compared to central ones, this typically results in a significant imbalance in the way mass is spread across the points. This can be detrimental for some applications where a minimum of smoothing is required per point. To remedy this, we introduce OT with Adaptive RegularIsation (OTARI), a new formulation of OT that imposes constraints on the mass going into and/or out of each point. We then showcase the benefits of this approach for domain adaptation.
Optimal Transport with Adaptive Regularisation
[ "Hugues Van Assel", "Titouan Vayer", "Rémi Flamary", "Nicolas Courty" ]
Workshop/OTML
2310.02925
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
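For contrast with the per-point adaptive regularisation proposed above, the sketch below runs standard entropic OT with a single global regulariser via POT's Sinkhorn solver on synthetic data; OTARI's per-point constraints are not implemented here.

```python
# Standard entropic OT with one global regulariser (baseline for comparison;
# OTARI's adaptive, per-point regularisation is not implemented here).
import numpy as np
import ot

rng = np.random.default_rng(0)
xs = rng.normal(size=(40, 2))             # source samples
xt = rng.normal(loc=2.0, size=(40, 2))    # target samples
a, b = ot.unif(40), ot.unif(40)
M = ot.dist(xs, xt)

G = ot.sinkhorn(a, b, M, reg=0.1)         # entropic plan, global epsilon = 0.1
print(G.sum(axis=1)[:5])                  # rows sum to the source marginal a
```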
null
https://openreview.net/forum?id=DW3a9czPGx
@inproceedings{ mariella2023quantum, title={Quantum Theory and Application of Contextual Optimal Transport}, author={Nicola Mariella and Jannis Born and Albert Akhriev and Francesco Tacchino and Christa Zoufal and Eugene Koskin and Ivano Tavernelli and Stefan Woerner and Marianna Rapsomaniki and Sergiy Zhuk}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=DW3a9czPGx} }
Optimal Transport (OT) has fueled machine learning (ML) applications across various domains. In cases where paired data measurements ($\mu$, $\nu$) are coupled to a context variable $p_i$ , one may aspire to learn a global transportation map, parameterized through the context to facilitate prediction of target states even from unseen context. Existing approaches for this task leverage Brenier’s theorem and utilize Neural OT. Here, we follow a radically different approach inspired by quantum computing principles to develop a Quantum formulation for learning transportation plans parameterized by a context variable. This is achieved through exploiting a natural link between doubly stochastic matrices and unitary operators. The latter can be directly related to recent results in quantum learning theory suggesting intrinsic advantages in modelling constrained problems with quantum methods. We verify our methodology on synthetic data, emulating the task of predicting single-cell perturbation responses parameterized through drug dosage as context. Our experimental comparisons to a baseline reveal that our method can capture dose-induced variations in cell distributions, even to some extent when extrapolating to dosages outside the interval seen during training. In summary, this work assesses the feasibility of learning to predict contextualized transportation plans through a novel quantum computing approach.
Quantum Theory and Application of Contextual Optimal Transport
[ "Nicola Mariella", "Jannis Born", "Albert Akhriev", "Francesco Tacchino", "Christa Zoufal", "Eugene Koskin", "Ivano Tavernelli", "Stefan Woerner", "Marianna Rapsomaniki", "Sergiy Zhuk" ]
Workshop/OTML
2402.14991
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=CmgB5fGWbN
@inproceedings{ assel2023interpolating, title={Interpolating between Clustering and Dimensionality Reduction with Gromov-Wasserstein}, author={Hugues Van Assel and C{\'e}dric Vincent-Cuaz and Titouan Vayer and R{\'e}mi Flamary and Nicolas Courty}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=CmgB5fGWbN} }
We present a versatile adaptation of existing dimensionality reduction (DR) objectives, enabling the simultaneous reduction of both sample and feature sizes. Correspondences between input and embedding samples are computed through a semi-relaxed Gromov-Wasserstein optimal transport (OT) problem. When the embedding sample size matches that of the input, our model recovers classical popular DR models. When the embedding's dimensionality is unconstrained, we show that the OT plan delivers a competitive hard clustering. We emphasize the importance of intermediate stages that blend DR and clustering for summarizing real data and apply our method to visualize datasets of images.
Interpolating between Clustering and Dimensionality Reduction with Gromov-Wasserstein
[ "Hugues Van Assel", "Cédric Vincent-Cuaz", "Titouan Vayer", "Rémi Flamary", "Nicolas Courty" ]
Workshop/OTML
2310.03398
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Bd4DTPzOGO
@inproceedings{ brekelmans2023on, title={On Schr\"odinger Bridge Matching and Expectation Maximization}, author={Rob Brekelmans and Kirill Neklyudov}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=Bd4DTPzOGO} }
In this work, we analyze methods for solving the Schrödinger Bridge problem from the perspective of alternating KL divergence minimization. While existing methods such as Iterative Proportional or Markovian Fitting require exact updates due to each iteration optimizing the same argument in the KL divergence, we justify a joint optimization of a single KL divergence objective from the perspective of information geometry. As in the variational EM algorithm, this allows for partial, stochastic gradient updates to decrease a unified objective. We highlight connections with related bridge-matching, flow-matching, and few-step generative modeling approaches, where various parameterizations of the coupling distributions are contextualized from the perspective of marginal-preserving inference.
On Schrödinger Bridge Matching and Expectation Maximization
[ "Rob Brekelmans", "Kirill Neklyudov" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=AusgpiFekp
@inproceedings{ zhu2023optimal, title={Optimal transport for vector Gaussian mixture models}, author={Jiening Zhu and Kaiming Xu and Allen Tannenbaum}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=AusgpiFekp} }
Vector-valued Gaussian mixtures form an important special subset of vector-valued distributions. In general, vector-valued distributions constitute natural representations for physical entities, which can mutate or transit among alternative manifestations distributed in a given space. A key example is color imagery. In this note, we vectorize the Gaussian mixture model and study several different optimal mass transport related problems associated to such models. The benefits of using vector Gaussian mixture for optimal mass transport include computational efficiency and the ability to preserve structure.
Optimal transport for vector Gaussian mixture models
[ "Jiening Zhu", "Kaiming Xu", "Allen Tannenbaum" ]
Workshop/OTML
2012.09226
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
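As background for the Gaussian-mixture transport above, the 2-Wasserstein distance between two Gaussians has a well-known closed form (a mean term plus a Bures covariance term). A short sketch with placeholder parameters; the paper's vector-valued mixture construction is not reproduced.

```python
# Closed-form W2 between Gaussians: background only, placeholder parameters.
import numpy as np
from scipy.linalg import sqrtm

def w2_squared_gaussian(m1, S1, m2, S2):
    """W2^2(N(m1,S1), N(m2,S2)) = ||m1-m2||^2 + tr(S1 + S2 - 2 (S2^1/2 S1 S2^1/2)^1/2)."""
    cross = sqrtm(sqrtm(S2) @ S1 @ sqrtm(S2))
    bures = np.trace(S1 + S2 - 2 * np.real(cross))
    return float(np.sum((m1 - m2) ** 2) + bures)

m1, S1 = np.zeros(2), np.eye(2)
m2, S2 = np.ones(2), 2 * np.eye(2)
print(w2_squared_gaussian(m1, S1, m2, S2))   # 2 + (6 - 4*sqrt(2)) ≈ 2.343
```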
null
https://openreview.net/forum?id=9xW9OgdJfs
@inproceedings{ min2023unsupervised, title={Unsupervised Learning Permutations for {TSP} using Gumbel-Sinkhorn Operator}, author={Yimeng Min and Carla Gomes}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=9xW9OgdJfs} }
The Machine Learning community has recently shown a growing interest in Optimal Transport (OT). Methods that leverage entropy regularization based on OT have proven especially effective for various tasks, including ranking, sorting, and solving jigsaw puzzles. In our study, we broaden the application of entropy regularization methods to address the NP-hard Travelling Salesman Problem (TSP). We first formulate TSP as identifying the permutation of a Hamiltonian Cycle with the shortest length. Following this, we establish the permutation representation using the Gumbel-Sinkhorn operator with entropic regularization. Our findings indicate a balance between entropy and generalization. We further discuss how to generalize across different hardnesses.
Unsupervised Learning Permutations for TSP using Gumbel-Sinkhorn Operator
[ "Yimeng Min", "Carla Gomes" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
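To make the Gumbel-Sinkhorn operator above concrete, a small NumPy sketch of the operator itself: add Gumbel noise, divide by a temperature, then alternate row and column normalisation in log space. Shapes and scores are placeholders; the TSP model and training loop are omitted.

```python
# Gumbel-Sinkhorn sketch: noisy scores -> approximately doubly stochastic matrix.
# Placeholder scores; the paper's TSP encoder and training loop are omitted.
import numpy as np

def gumbel_sinkhorn(log_alpha, tau=1.0, n_iters=20, rng=None):
    rng = rng or np.random.default_rng(0)
    gumbel = -np.log(-np.log(rng.uniform(size=log_alpha.shape)))
    log_p = (log_alpha + gumbel) / tau
    for _ in range(n_iters):   # Sinkhorn normalisation in log space
        log_p -= np.logaddexp.reduce(log_p, axis=1, keepdims=True)   # rows
        log_p -= np.logaddexp.reduce(log_p, axis=0, keepdims=True)   # columns
    return np.exp(log_p)

scores = np.random.default_rng(1).normal(size=(5, 5))
P = gumbel_sinkhorn(scores, tau=0.5)
print(P.sum(axis=0), P.sum(axis=1))   # both close to vectors of ones
```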
null
https://openreview.net/forum?id=9P02CcVkGU
@inproceedings{ maurais2023adaptive, title={Adaptive Algorithms for Continuous-Time Transport: Homotopy-Driven Sampling and a New Interacting Particle System}, author={Aimee Maurais and Youssef Marzouk}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=9P02CcVkGU} }
We propose a new dynamic algorithm which transports samples from a reference distribution to a target distribution in unit time, given access to the target-to-reference density ratio. Our approach is to seek a sequence of transport maps that push forward the reference along a path given by a geometric mixture of the two densities. We take the maps to be simply parameterized, local, sample-driven optimal transport maps which we identify by approximately solving a root-finding problem formulated using importance weights. When feature functions for the maps are taken to be kernels, we obtain a novel interacting particle system from which we derive finite-particle and mean-field ODEs. In discrete time, we introduce an adaptive algorithm for simulating this interacting particle system which adjusts the ODE time steps based on the quality of the transport, automatically uncovering a good "schedule" for traversing the geometric mixture of densities.
Adaptive Algorithms for Continuous-Time Transport: Homotopy-Driven Sampling and a New Interacting Particle System
[ "Aimee Maurais", "Youssef Marzouk" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=6Muz6c1wBc
@inproceedings{ agarwal2023fast, title={Fast and Accurate Cost-Scaling Algorithm for the Semi-Discrete Optimal Transport}, author={Pankaj Agarwal and Sharath Raghvendra and Pouyan Shirzadian and Keegan Yao}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=6Muz6c1wBc} }
Given a continuous probability distribution $\mu$ and a discrete distribution $\nu$ in the $d$-dimensional space, the semi-discrete Optimal Transport (OT) problem asks for computing a minimum-cost plan to transport mass from $\mu$ to $\nu$. In this paper, given any parameter $\varepsilon>0$, we present an algorithm that computes a semi-discrete transport plan $\tilde\tau$ with cost $\textcent(\tilde\tau) \le \textcent(\tau^*)+\varepsilon$ in $n^{O(d)}\log\frac{\mathrm{D}}{\varepsilon}$ time; here, $\tau^*$ is the optimal transport plan, $\mathrm{D}$ is the diameter of the supports of $\mu$ and $\nu$, and we assume we have access to an oracle that outputs the mass of $\mu$ inside a constant-complexity region in $O(1)$ time. Our algorithm works for several ground distances including the $L_p$-norm and the squared-Euclidean distance.
Fast and Accurate Cost-Scaling Algorithm for the Semi-Discrete Optimal Transport
[ "Pankaj Agarwal", "Sharath Raghvendra", "Pouyan Shirzadian", "Keegan Yao" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=5NGwDptjPx
@inproceedings{ neklyudov2023a, title={A Computational Framework for Solving Wasserstein Lagrangian Flows}, author={Kirill Neklyudov and Rob Brekelmans and Alexander Tong and Lazar Atanackovic and qiang liu and Alireza Makhzani}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=5NGwDptjPx} }
The dynamical formulation of optimal transport can be extended through various choices of the underlying geometry (kinetic energy), and the regularization of density paths (potential energy). These combinations yield different variational problems (Lagrangians), encompassing many variations of the optimal transport problem such as the Schrödinger bridge, unbalanced optimal transport, and optimal transport with physical constraints, among others. In general, the optimal density path is unknown, and solving these variational problems can be computationally challenging. Leveraging the dual formulation of the Lagrangians, we propose a novel deep learning based framework approaching all of these problems from a unified perspective. Our method does not require simulating or backpropagating through the trajectories of the learned dynamics, and does not need access to optimal couplings. We showcase the versatility of the proposed framework by outperforming previous approaches for single-cell trajectory inference, where incorporating prior knowledge into the dynamics is crucial for correct predictions.
A Computational Framework for Solving Wasserstein Lagrangian Flows
[ "Kirill Neklyudov", "Rob Brekelmans", "Alexander Tong", "Lazar Atanackovic", "qiang liu", "Alireza Makhzani" ]
Workshop/OTML
2310.10649
[ "https://github.com/necludov/wl-mechanics" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=237y5Bc0p8
@inproceedings{ das2023provably, title={Provably Fast Finite Particle Variants of {SVGD} via Virtual Particle Stochastic Approximation}, author={Aniket Das and Dheeraj Nagaraj}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=237y5Bc0p8} }
SVGD is a popular particle-based variational inference algorithm with well studied mean-field dynamics. However, its finite-particle behavior is far less understood. Our work introduces the notion of *virtual particles* to develop novel stochastic approximations of mean-field SVGD dynamics in the space of probability measures, that are exactly realizable using finite particles. As a result, we design two computationally efficient variants of SVGD (VP-SVGD and GB-SVGD) with provably fast finite-particle convergence rates. Our algorithms are specific random-batch approximations of SVGD which are computationally more efficient than ordinary SVGD. We show that the $n$ output particles of VP-SVGD and GB-SVGD, run for $T$ steps with batchsize $K$, are as good as i.i.d. samples from a measure whose Kernel Stein Discrepancy to the target is at most $O(\tfrac{d^{1/3}}{(KT)^{1/6}})$ under standard assumptions. We prove similar results under a mild growth condition on the score function, which is weaker than the assumptions of prior works. Our convergence rates for the empirical measure (of the particles output by VP-SVGD and GB-SVGD) to the target distribution enjoy a **double exponential improvement** over the best known finite-particle analysis of SVGD. Furthermore, our results give the **first known polynomial oracle complexity in dimension**, completely eliminating the curse of dimensionality exhibited by previously known finite-particle rates.
Provably Fast Finite Particle Variants of SVGD via Virtual Particle Stochastic Approximation
[ "Aniket Das", "Dheeraj Nagaraj" ]
Workshop/OTML
2305.17558
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
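As background for the SVGD variants above, a compact NumPy sketch of the vanilla SVGD update with an RBF kernel on a standard-normal target; the paper's virtual-particle and random-batch constructions are not reproduced.

```python
# Vanilla SVGD with an RBF kernel (background only; VP-SVGD / GB-SVGD are not
# implemented here). Target is a standard normal, whose score is -x.
import numpy as np

def svgd_step(x, score, step=0.1, bandwidth=1.0):
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]                  # diff[i, j] = x_i - x_j
    K = np.exp(-(diff ** 2).sum(-1) / (2 * bandwidth ** 2))
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) * score(x_j) + grad_{x_j} k(x_j, x_i) ]
    repulsion = (diff * K[:, :, None]).sum(axis=1) / bandwidth ** 2
    phi = (K @ score + repulsion) / n
    return x + step * phi

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, size=(100, 2))     # particles start far from the target
for _ in range(200):
    x = svgd_step(x, score=-x)             # score of N(0, I)
print(x.mean(axis=0))                      # drifts toward the origin
```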
null
https://openreview.net/forum?id=1Yi0XQviNO
@inproceedings{ hong2023fourierbased, title={Fourier-Based Bounds for Wasserstein Distances and Their Implications in Computational Inversion}, author={Wanli Hong and Vladimir A Kobzar and Kui Ren}, booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning}, year={2023}, url={https://openreview.net/forum?id=1Yi0XQviNO} }
Computational inverse problems entail fitting a mathematical model to data. These problems are often solved numerically, by minimizing the mismatch between the model and the data using an appropriate metric. We focus on the case when this metric is the Wasserstein-$p$ ($W_p$) distance between probability measures as well as its generalizations by Piccoli et al., for unbalanced measures, including the Kantorovich-Rubinstein norm. The recent work of Niles-Weed and Berthet established that $W_p$ is bounded from below and above by weighted $\ell_p$ norms of the wavelet coefficients of the mismatch, among other things, relying on the fluid dynamics formulation of $W_p$. Building on this research, we establish lower and upper bounds on $W_p$ on the hypercube and flat torus in terms of weighted $\ell_{q}$ norms of the Fourier coefficients of the mismatch. In this setting, for measures uniformly bounded above, the lower bound increases as $p$ increases. Based on that fact, in our setting, the lower bound resolves the open problem posed by Steinerberger to prove the existence of a Fourier-based lower bound on $W_p$ that grows with $p$. When $W_p$ is used as the mismatch metric in computational inversion, these bounds allow us to analyze the effects of stopping early the computational minimization of the mismatch on the resolution of frequencies, and the dependence of the resolution on $p$. Since the $W_p$ distance is used in a broad range of other problems in mathematics and computational sciences, we expect that our bounds will also be of interest beyond inverse problems.
Fourier-Based Bounds for Wasserstein Distances and Their Implications in Computational Inversion
[ "Wanli Hong", "Vladimir A Kobzar", "Kui Ren" ]
Workshop/OTML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xqvB784PCv
@inproceedings{ kang2023the, title={The Fair Value of Data Under Heterogeneous Privacy Constraints in Federated Learning}, author={Justin Kang and Kannan Ramchandran and Ramtin Pedarsani}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=xqvB784PCv} }
Modern data aggregation often involves a platform collecting data from a network of users with various privacy options. Platforms must solve the problem of how to allocate incentives to users to convince them to share their data. This paper puts forth an idea for a fair amount to compensate users for their data at a given privacy level based on an axiomatic definition of fairness, along the lines of the celebrated Shapley value. To the best of our knowledge, these are the first fairness concepts for data that explicitly consider privacy constraints. We also formulate a heterogeneous federated learning problem for the platform with privacy level options for users. By studying this problem, we investigate the amount of compensation users receive under fair allocations with different privacy levels, amounts of data, and degrees of heterogeneity. We also discuss what happens when the platform is forced to design fair incentives. Under certain conditions we find that when privacy sensitivity is low, the platform will set incentives to ensure that it collects all the data with the lowest privacy options. When the privacy sensitivity is above a given threshold, the platform will provide no incentives to users. Between these two extremes, the platform will set the incentives so that some fraction of the users choose the higher privacy option and the rest choose the lower privacy option.
The Fair Value of Data Under Heterogeneous Privacy Constraints in Federated Learning
[ "Justin Singh Kang", "Kannan Ramchandran", "Ramtin Pedarsani" ]
Workshop/Federated_Learning
2301.13336
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=sPtEDSVD4K
@inproceedings{ recasens2023beyond, title={Beyond Parameter Averaging in Model Aggregation}, author={Pol G. Recasens and Jordi Torres and Josep Berral and S{\o}ren Hauberg and Pablo Moreno-Mu{\~n}oz}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=sPtEDSVD4K} }
The success of foundation models is strongly linked to scale, which has reinforced the interest in federated learning. With the prohibitive cost of training a large language model (LLM) in mind, little attention has been paid to reusing pre-trained models in collaborative training settings. Self-supervision has also played an important role in this success, but its emphasis has been primarily on data. This paper leverages Bayesian principles to bring self-supervision into the model aggregation toolbox. It introduces self-supervised Fisher merging, a framework that successfully merges models in parameter space without re-visiting data, opening a new door in model reusability. Experimental results build the foundation of our method on tractable linear models, and highlight its potential for aggregating neural networks.
Beyond Parameter Averaging in Model Aggregation
[ "Pol G. Recasens", "Jordi Torres", "Josep Lluis Berral", "Søren Hauberg", "Pablo Moreno-Muñoz" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
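As a generic illustration of the Fisher-weighted averaging that underlies Fisher merging (not the paper's self-supervised variant), a short PyTorch sketch with placeholder models and placeholder diagonal Fisher estimates:

```python
# Diagonal Fisher-weighted parameter averaging (generic sketch; the paper's
# self-supervised Fisher merging is not reproduced). All inputs are placeholders.
import torch

def fisher_merge(state_dicts, fishers, eps=1e-8):
    """theta_merged = sum_i F_i * theta_i / sum_i F_i, elementwise per parameter."""
    merged = {}
    for name in state_dicts[0]:
        num = sum(f[name] * sd[name] for sd, f in zip(state_dicts, fishers))
        den = sum(f[name] for f in fishers) + eps
        merged[name] = num / den
    return merged

m1, m2 = torch.nn.Linear(4, 2), torch.nn.Linear(4, 2)    # two models, same architecture
# Placeholder diagonal Fisher estimates (in practice: averaged squared gradients).
f1 = {k: torch.ones_like(v) for k, v in m1.state_dict().items()}
f2 = {k: 2 * torch.ones_like(v) for k, v in m2.state_dict().items()}

merged = fisher_merge([m1.state_dict(), m2.state_dict()], [f1, f2])
target = torch.nn.Linear(4, 2)
target.load_state_dict(merged)
```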
null
https://openreview.net/forum?id=s7hquGszME
@inproceedings{ zhang2023dpzero, title={{DPZ}ero: Dimension-Independent and Differentially Private Zeroth-Order Optimization}, author={Liang Zhang and Kiran Thekumparampil and Sewoong Oh and Niao He}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=s7hquGszME} }
The widespread practice of fine-tuning pretrained large language models (LLMs) on domain-specific data faces two major challenges in memory and privacy. First, as the size of LLMs continues to grow, encompassing billions of parameters, the memory demands of gradient-based training methods via backpropagation become prohibitively high. Second, given the tendency of LLMs to memorize and disclose sensitive training data, the privacy of fine-tuning data must be respected. To this end, we explore the potential of zeroth-order methods in differentially private optimization for fine-tuning LLMs. Zeroth-order methods, which rely solely on forward passes, substantially reduce memory consumption during training. However, directly combining them with standard differential privacy mechanisms poses dimension-dependent complexity. To bridge the gap, we introduce DPZero, a novel differentially private zeroth-order algorithm with nearly dimension-independent rates. Our theoretical analysis reveals that its complexity hinges primarily on the problem's intrinsic dimension and exhibits only a logarithmic dependence on the ambient dimension. This renders DPZero a highly practical option for real-world LLM deployments.
DPZero: Dimension-Independent and Differentially Private Zeroth-Order Optimization
[ "Liang Zhang", "Kiran Koshy Thekumparampil", "Sewoong Oh", "Niao He" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
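To make the zeroth-order idea above concrete, a loose sketch of a two-point zeroth-order step in which only a clipped, noised scalar is used to form the update; this is in the spirit of DPZero but is not its algorithm, carries no formal privacy accounting, and uses a toy quadratic loss.

```python
# Two-point zeroth-order step with a clipped, noised scalar (loose sketch only;
# toy loss, no formal differential-privacy accounting).
import numpy as np

def zo_step(theta, loss_fn, lr=0.1, mu=1e-3, clip=1.0, sigma=0.2, rng=None):
    rng = rng or np.random.default_rng(0)
    u = rng.normal(size=theta.shape)                              # random direction
    delta = (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu)
    delta = np.clip(delta, -clip, clip) + sigma * rng.normal()    # privatise the scalar
    return theta - lr * delta * u                                 # descend along u

loss = lambda t: np.sum((t - 3.0) ** 2)      # toy quadratic with minimum at 3
theta = np.zeros(10)
rng = np.random.default_rng(1)
for _ in range(500):
    theta = zo_step(theta, loss, rng=rng)
print(theta.mean())                          # roughly approaches 3 despite the noise
```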
null
https://openreview.net/forum?id=qwnOt7FFSD
@inproceedings{ agarwal2023an, title={An Empirical Evaluation of Federated Contextual Bandit Algorithms}, author={Alekh Agarwal and Hugh McMahan and Zheng Xu}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=qwnOt7FFSD} }
Fine-tuning (foundation) models with user feedback can be important for improving task-specific performance, as fine-grained supervision is generally unavailable. While the adoption of federated learning increases for learning from sensitive data local to user devices, it is unclear if learning can be done using implicit signals generated as users interact with the applications. We approach such problems with the framework of federated contextual bandits, and develop variants of prominent contextual bandit algorithms from the centralized setting for the federated setting. We carefully evaluate these algorithms in a range of scenarios simulated using publicly available datasets. Our simulations model typical setups encountered in the real world, such as various misalignments between an initial pre-trained model and the subsequent user interactions due to non-stationarity in the data and/or heterogeneity across clients. Our experiments reveal the surprising effectiveness of the simple and commonly used softmax heuristic in balancing the well-known exploration-exploitation tradeoff across the breadth of our settings.
An Empirical Evaluation of Federated Contextual Bandit Algorithms
[ "Alekh Agarwal", "Hugh McMahan", "Zheng Xu" ]
Workshop/Federated_Learning
2303.10218
[ "https://github.com/google-research/federated" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qQ2qXFu05s
@inproceedings{ chung2023parameter, title={Parameter Averaging Laws for Multitask Language Models}, author={Woojin Chung and Hyowon Cho and James Thorne and Se-Young Yun}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=qQ2qXFu05s} }
Parameter-averaging, a method for combining multiple models into a single one, has emerged as a promising approach to enhance performance without requiring additional space or retraining. Nonetheless, the conditions for successful parameter-averaging remain undefined, calling for further research to characterize them. In this study, we empirically investigate the influential factors for successful parameter-averaging and reveal \emph{positive correlations between representation power and the performance gain of parameter-averaging}. Specifically, we evaluate how computational budget, data diversity and vocabulary size contribute to representation power, and their influence on the success of parameter-averaging. Our results demonstrate that parameter-averaging improves the generalization ability for both in-domain and out-of-domain data. Additionally, to reduce the computational cost of parameter-averaging, we introduce \textit{partial averaging}, which assumes arbitrary participation of a subset of contributors. We observe that partial averaging outperforms fine-tuning for models with sufficient representation power. Furthermore, we find that the impact of data heterogeneity, which arises from different data distributions of contributors, reduces as the representation power of the model increases. These findings provide valuable insights into the principles governing parameter-averaging and its potential for enhancing model performance.
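For readers unfamiliar with the setup, here is a minimal sketch of parameter averaging and of "partial averaging" over a subset of contributors, assuming models are given as dictionaries of NumPy arrays with matching shapes; the toy shapes and uniform weights are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def average_parameters(models):
    """Uniformly average a list of models given as {name: ndarray} dicts."""
    keys = models[0].keys()
    return {k: np.mean([m[k] for m in models], axis=0) for k in keys}

# Three toy "models" sharing the same parameter names and shapes.
rng = np.random.default_rng(0)
models = [{"w": rng.normal(size=(4, 2)), "b": rng.normal(size=2)} for _ in range(3)]

merged = average_parameters(models)       # full parameter averaging
partial = average_parameters(models[:2])  # "partial" averaging over a subset of contributors
```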
Parameter Averaging Laws for Multitask Language Models
[ "Woojin Chung", "Hyowon Cho", "James Thorne", "Se-Young Yun" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ozN92d7CHX
@inproceedings{ azam2023federated, title={Federated Learning for Speech Recognition: Revisiting Current Trends Towards Large-Scale {ASR}}, author={Sheikh Shams Azam and Martin Pelikan and Vitaly Feldman and Kunal Talwar and Jan Silovsky and Tatiana Likhomanenko}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=ozN92d7CHX} }
While automatic speech recognition (ASR) has witnessed remarkable achievements in recent years, it has not garnered a widespread focus within the federated learning (FL) and differential privacy (DP) communities. Meanwhile, ASR is also a well-suited benchmark for FL and DP as there is (i) a natural data split across users by using speaker information; (ii) heterogeneous data across speakers close to practical settings; (iii) interplay between acoustic and language modeling; (iv) and it is a sequence-to-sequence task. Recent production-ready state-of-the-art models in ASR include *large* conformer and transformer models, the optimization of which is known to pose challenges even for central training. While the main trends and benchmarks in FL and DP focus on *small* models, we show the necessity of disentangling optimization and model size: the behavior of FL and DP for *large* models is different from the one for *small* models. We speculate that FL and DP are harder for *small* models due to harder optimization problems even in central training. In this paper, we analyze the key FL parameters (optimizers, training from scratch or a seed model pre-trained centrally, cohort size, data heterogeneity) and propose *first* benchmark of *FL with DP* in the context of *large* models in ASR. We examine the applicability of prior results and present an overview of observed departures from the trends in prior works and from training different ASR models. Through this work, we provide researchers and practitioners in the fields of FL and DP with valuable insights into the fundamental differences that may arise when applying FL and DP research to large-scale ASR training.
Federated Learning for Speech Recognition: Revisiting Current Trends Towards Large-Scale ASR
[ "Sheikh Shams Azam", "Martin Pelikan", "Vitaly Feldman", "Kunal Talwar", "Jan Silovsky", "Tatiana Likhomanenko" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=le0Emy9SqA
@inproceedings{ zhu2023consensus, title={Consensus Optimization at Representation: Improving Personalized Federated Learning via Data-Centric Regularization}, author={Heng Zhu and Arya Mazumdar}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=le0Emy9SqA} }
Federated learning is a large-scale machine learning training paradigm where data is distributed across clients, and can be highly heterogeneous from one client to another. To ensure personalization in client models, and at the same time to ensure that the local models have enough commonality (i.e., prevent ``client-drift''), it has been recently proposed to cast the federated learning problem as a consensus optimization problem, where local models are trained on local data, but are forced to be similar via a regularization term. In this paper we propose an improved federated learning algorithm, where we ensure consensus optimization at the representation part of each local client, and not on whole local models. This algorithm naturally takes into account that today's deep networks are often partitioned into a feature extraction part (representation) and a prediction part. Our algorithm ensures greater flexibility compared to previous works on exact shared representation in highly heterogeneous settings, as it has been seen that the representation part can differ substantially with data distribution. We validate its good performance experimentally on standard datasets.
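A minimal sketch of consensus regularization applied only to the representation block, assuming a two-block parameter split (representation and head) and a quadratic proximal penalty with weight `mu`; this illustrates the general idea, not the paper's exact algorithm.

```python
import numpy as np

def local_step(theta_rep, theta_head, grad_rep, grad_head, global_rep, lr=0.1, mu=1.0):
    """One local gradient step in which only the representation block is pulled
    toward the current global representation; the prediction head stays personal."""
    theta_rep = theta_rep - lr * (grad_rep + mu * (theta_rep - global_rep))
    theta_head = theta_head - lr * grad_head
    return theta_rep, theta_head

# Toy usage with random parameters and gradients of matching shapes.
rng = np.random.default_rng(0)
rep, head, global_rep = rng.normal(size=8), rng.normal(size=3), np.zeros(8)
rep, head = local_step(rep, head, grad_rep=0.1 * rep, grad_head=0.1 * head, global_rep=global_rep)
```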
Consensus Optimization at Representation: Improving Personalized Federated Learning via Data-Centric Regularization
[ "Heng Zhu", "Arya Mazumdar" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ldN6QdyukS
@inproceedings{ zhang2023augmenting, title={Augmenting Federated Learning with Pretrained Transformers}, author={Xuechen Zhang and Mingchen Li and Xiangyu Chang and Jiasi Chen and Amit Roy-Chowdhury and Ananda Suresh and Samet Oymak}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=ldN6QdyukS} }
The explosive growth and diversity of machine learning applications motivate a fundamental rethinking of learning with mobile and edge devices. How can we address *diverse/disparate client goals* and learn with *scarce heterogeneous data*? While federated learning (FL) aims to address these issues, it has several bottlenecks and challenges hindering a unified solution. On the other hand, large transformer models have been shown to work across a variety of tasks often achieving remarkable few-shot adaptation. This raises the question: Can FL clients use a single general-purpose model -- rather than custom models for each task -- while obeying *device and network constraints*? In this work, we investigate pretrained transformers (PTF) to achieve these on-device learning goals and thoroughly explore the roles of model size and modularity, where the latter refers to adaptation through modules such as prompts or adapters. We demonstrate that: **(1) Larger scale** shrinks the accuracy gaps between alternative approaches and improves heterogeneity robustness. Crucially, scale allows clients to run *more local SGD epochs* which substantially ($\times 4$) reduces the number of communication rounds. At the extreme, clients can achieve respectable accuracy fully-locally reducing the need for collaboration. **(2) Modularity** enables $>$100$\times$ less communication in bits. Surprisingly, it also boosts the generalization capability of local adaptation methods and the robustness of smaller PTFs. To explain these benefits, we show that scale and modularity can synergistically mitigate the *representation shift* during FL. Finally, to harness multitasking capabilities of modern PTFs, we propose FedYolo: A new FL approach that assigns both dedicated and shared modules to FL tasks to manage their interference. Our extensive experiments demonstrate FedYolo's value and the power of scale and modularity for multitasking.
Augmenting Federated Learning with Pretrained Transformers
[ "Xuechen Zhang", "Mingchen Li", "Xiangyu Chang", "Jiasi Chen", "Amit Roy-Chowdhury", "Ananda Suresh", "Samet Oymak" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=lcElZPvMFp
@inproceedings{ li2023exploring, title={Exploring User-level Gradient Inversion with a Diffusion Prior}, author={Zhuohang Li and Andrew Lowy and Jing Liu and Toshiaki Koike-Akino and Bradley A. Malin and Kieran Parsons and Ye Wang}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=lcElZPvMFp} }
We explore user-level gradient inversion as a new attack surface in distributed learning. We first investigate existing attacks on their ability to make inferences about private information beyond training data reconstruction. Motivated by the low reconstruction quality of existing methods, we propose a novel gradient inversion attack that applies a denoising diffusion model as a strong image prior in order to enhance recovery in the large batch setting. Unlike traditional attacks, which aim to reconstruct individual samples and suffer at large batch and image sizes, our approach instead aims to recover a representative image that captures the sensitive shared semantic information corresponding to the underlying user. Our experiments with face images demonstrate the ability of our methods to recover realistic facial images along with private user attributes.
Exploring User-level Gradient Inversion with a Diffusion Prior
[ "Zhuohang Li", "Andrew Lowy", "Jing Liu", "Toshiaki Koike-Akino", "Bradley A. Malin", "Kieran Parsons", "Ye Wang" ]
Workshop/Federated_Learning
2409.07291
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=iKQC652XIk
@inproceedings{ zhong2023making, title={Making Batch Normalization Great in Federated Deep Learning}, author={Jike Zhong and Hong-You Chen and Wei-Lun Chao}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=iKQC652XIk} }
Batch Normalization (BN) is commonly used in modern deep foundation models to improve stability and speed up convergence in centralized training. In federated learning (FL) with non-IID decentralized data, previous works observed that training with BN could hinder performance due to the mismatch of the BN statistics between training and testing. Group Normalization (GN) is thus more often used in FL as an alternative to BN. In this paper, we identify a more fundamental issue of BN in FL that makes BN inferior even with high-frequency communication between clients and servers. We then propose a frustratingly simple treatment, which significantly improves BN and makes it outperform GN across a wide range of FL settings. Along with this study, we also reveal an unreasonable behavior of BN in FL. We find it quite robust in the low-frequency communication regime where FL is commonly believed to degrade drastically. We hope that our study could serve as a valuable reference for future practical usage and theoretical analysis in FL.
Making Batch Normalization Great in Federated Deep Learning
[ "Jike Zhong", "Hong-You Chen", "Wei-Lun Chao" ]
Workshop/Federated_Learning
2303.06530
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=gACRiXPGmM
@inproceedings{ wu2023leveraging, title={Leveraging Foundation Models to Improve Lightweight Clients in Federated Learning}, author={Xidong Wu and Wan-Yi Lin and Devin Willmott and Filipe Condessa and Yufei Huang and Zhenzhen Li and Madan Ganesh}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=gACRiXPGmM} }
Federated Learning (FL) is a distributed training paradigm that enables clients scattered across the world to cooperatively learn a global model without divulging confidential data. However, FL faces a significant challenge in the form of heterogeneous data distributions among clients, which leads to a reduction in performance and robustness. A recent approach to mitigating the impact of heterogeneous data distributions is through the use of foundation models, which offer better performance at the cost of larger computational overheads and slower inference speeds. We introduce foundation model distillation to assist in the federated training of lightweight client models and increase their performance under heterogeneous data settings while keeping inference costs low. Our results show improvement in the global model performance on a balanced testing set, which contains rarely observed samples, even under extreme non-IID client data distributions. We conduct a thorough evaluation of our framework with different foundation model backbones on CIFAR10, with varying degrees of heterogeneous data distributions ranging from class-specific data partitions across clients to dirichlet data sampling, parameterized by values between 0.01 and 1.0.
Leveraging Foundation Models to Improve Lightweight Clients in Federated Learning
[ "Xidong Wu", "Wan-Yi Lin", "Devin Willmott", "Filipe Condessa", "Yufei Huang", "Zhenzhen Li", "Madan Ganesh" ]
Workshop/Federated_Learning
2311.08479
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=dsWg7n6zoo
@inproceedings{ halbe2023hepco, title={He{PC}o: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning}, author={Shaunak Halbe and James Smith and Junjiao Tian and Zsolt Kira}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=dsWg7n6zoo} }
In this paper, we focus on the important yet understudied problem of Continual Federated Learning (CFL), where a server communicates with a set of clients to incrementally learn new concepts over time without sharing or storing any data. The complexity of this problem is compounded by challenges from both the Continual and Federated Learning perspectives. Specifically, models trained in a CFL setup suffer from catastrophic forgetting which is exacerbated by data heterogeneity across clients. Existing attempts at this problem tend to impose large overheads on clients and communication channels or require access to stored data which renders them unsuitable for real-world use due to privacy. We study this problem in the context of Foundation Models and showcase their effectiveness in mitigating forgetting while minimizing overhead costs and without requiring access to any stored data. We achieve this by leveraging a prompting based approach (such that only prompts and classifier heads have to be communicated) and proposing a novel and lightweight generation and distillation scheme to aggregate client models at the server. We formulate this problem for image classification and establish strong baselines for comparison, conduct experiments on CIFAR-100 as well as challenging, large-scale datasets like ImageNet-R and DomainNet. Our approach outperforms both existing methods and our own baselines by more than 7\% while significantly reducing communication and client-level computation costs.
HePCo: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning
[ "Shaunak Halbe", "James Seale Smith", "Junjiao Tian", "Zsolt Kira" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=YqqWQP8POe
@inproceedings{ li2023marina, title={{MARINA} Meets Matrix Stepsizes: Variance Reduced Distributed Non-Convex Optimization}, author={Hanmin Li and Avetik Karagulyan and Peter Richt{\'a}rik}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=YqqWQP8POe} }
Matrix-stepsized gradient descent algorithms have been demonstrated to exhibit superior efficiency in non-convex optimization compared to their scalar counterparts. The det-CGD algorithm, as introduced by [LKR23], leverages matrix stepsizes to perform compressed gradient descent for non-convex objectives and matrix-smooth problems in a federated manner. The authors establish the algorithm's convergence to a neighborhood of the weighted stationarity point under a convex condition for the symmetric and positive-definite stepsize matrix. In this paper, we propose a variance-reduced version of the det-CGD algorithm, incorporating the MARINA method. Notably, we establish, both theoretically and empirically, that det-MARINA outperforms both MARINA and the distributed MARINA algorithms.
MARINA Meets Matrix Stepsizes: Variance Reduced Distributed Non-Convex Optimization
[ "Hanmin Li", "Avetik Karagulyan", "Peter Richtárik" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=XSfsvBoc8M
@inproceedings{ setlur2023private, title={Private and Personalized Histogram Estimation in a Federated Setting}, author={Amrith Setlur and Vitaly Feldman and Kunal Talwar}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=XSfsvBoc8M} }
Personalized federated learning (PFL) aims at learning personalized models for users in a federated setup. We focus on the problem of privately estimating histograms (in the KL metric) for each user in the network. Conventionally, for more general problems, learning a global model jointly via federated averaging and then fine-tuning locally for each user has been a winning strategy. But this can be suboptimal if the user distribution observes diverse subpopulations, as one might expect with user vocabularies. To tackle this, we study an alternative PFL technique: clustering-based personalization that first identifies diverse subpopulations when present, enabling users to collaborate more closely with others from the same subpopulation. We motivate our algorithm via a stylized generative process, a mixture of Dirichlets, and propose initialization/pre-processing techniques that reduce the iteration complexity of clustering. This enables the application of privacy mechanisms at each step of our iterative procedure, making the algorithm user-level differentially private without a severe drop in utility due to added noise. Finally, we present empirical results on Reddit user data where we compare our method with other well-known PFL approaches applied to private histogram estimation.
Private and Personalized Histogram Estimation in a Federated Setting
[ "Amrith Setlur", "Vitaly Feldman", "Kunal Talwar" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=XJhL1XlefX
@inproceedings{ chu2023focus, title={{FOCUS}: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data}, author={Wenda Chu and Chulin Xie and Boxin Wang and Linyi Li and Lang Yin and Arash Nourian and Han Zhao and Bo Li}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=XJhL1XlefX} }
Federated learning (FL) allows agents to jointly train a global model without sharing their local data to protect the privacy of local agents. However, due to the heterogeneous nature of local data, existing definitions of fairness in the context of FL are prone to noisy agents in the network. For instance, existing work usually considers accuracy parity as the fairness metric for different agents, which is not robust under the heterogeneous setting, since it will enforce agents with high-quality data to achieve similar accuracy to those who contribute low-quality data and may discourage the agents with high-quality data from participating in FL. In this work, we propose a formal FL fairness definition, fairness via agent-awareness (FAA), which takes the heterogeneity of different agents into account by measuring the data quality with approximated Bayes optimal error. Under FAA, the performance of agents with high-quality data will not be sacrificed just due to the existence of large numbers of agents with low-quality data. In addition, we propose a fair FL training algorithm leveraging agent clustering (FOCUS) to achieve fairness in FL, as measured by FAA and other fairness metrics. Theoretically, we prove the convergence and optimality of FOCUS under mild conditions for both linear and general convex loss functions with bounded smoothness. We also prove that FOCUS always achieves higher fairness in terms of FAA compared with standard FedAvg under both linear and general convex loss functions. Empirically, we show that on four FL datasets, including synthetic data, images, and texts, FOCUS achieves significantly higher fairness in terms of FAA and other fairness metrics, while maintaining competitive prediction accuracy compared with FedAvg and four state-of-the-art fair FL algorithms.
FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data
[ "Wenda Chu", "Chulin Xie", "Boxin Wang", "Linyi Li", "Lang Yin", "Arash Nourian", "Han Zhao", "Bo Li" ]
Workshop/Federated_Learning
2207.10265
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=WYLhRgBAFH
@inproceedings{ lee2023fedsol, title={FedSoL: Bridging Global Alignment and Local Generality in Federated Learning}, author={Gihun Lee and Minchan Jeong and SangMook Kim and Jaehoon Oh and Se-Young Yun}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=WYLhRgBAFH} }
While FL enables learning a model with data privacy, it often suffers from significant performance degradation when client data distributions are heterogeneous. Many previous FL algorithms have addressed this issue by introducing various proximal restrictions. These restrictions aim to encourage global alignment by constraining the deviation of local learning from the global objective. However, they inherently limit local learning by interfering with the original local objectives. Recently, an alternative approach has emerged to improve local learning generality. By obtaining local models within a smooth loss landscape, this approach mitigates conflicts among different local objectives of the clients. Yet, it does not ensure stable global alignment, as local learning does not take the global objective into account. In this study, we propose Federated Stability on Learning (FedSoL), which combines both the concepts of global alignment and local generality. In FedSoL, the local learning seeks a parameter region robust against proximal perturbations. This strategy introduces an implicit proximal restriction effect in local learning while maintaining the original local objective for parameter update.
FedSoL: Bridging Global Alignment and Local Generality in Federated Learning
[ "Gihun Lee", "Minchan Jeong", "SangMook Kim", "Jaehoon Oh", "Se-Young Yun" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=TaDiklyVps
@inproceedings{ zhang2023towards, title={Towards Building the Federated{GPT}: Federated Instruction Tuning}, author={Jianyi Zhang and Saeed Vahidian and Martin Kuo and Chunyuan Li and Ruiyi Zhang and Tong Yu and Guoyin Wang and Yiran Chen}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=TaDiklyVps} }
While "instruction-tuned" generative large language models (LLMs) have demonstrated an impressive ability to generalize to new tasks, the training phases heavily rely on large amounts of diverse and high-quality instruction data (such as ChatGPT and GPT-4). Unfortunately, acquiring high-quality data, especially when it comes to human-written data, can pose significant challenges both in terms of cost and accessibility. Moreover, concerns related to privacy can further limit access to such data, making the process of obtaining it a complex and nuanced undertaking. To tackle this issue, our study introduces a new approach called \textbf{Fed}erated \textbf{I}nstruction \textbf{T}uning (FedIT), which leverages federated learning (FL) as the learning framework for the instruction tuning of LLMs. This marks the first exploration of FL-based instruction tuning for LLMs. This is especially important since text data is predominantly generated by end users. For example, collecting extensive amounts of everyday user conversations can be a useful approach to improving the generalizability of LLMs, allowing them to generate authentic and natural responses. Therefore, it is imperative to design and adapt FL approaches to effectively leverage these users' diverse instructions stored on local devices while mitigating concerns related to data sensitivity and the cost of data transmission. In this study, we leverage extensive qualitative analysis, including the prevalent GPT-4 auto-evaluation, to illustrate how our FedIT framework enhances the performance of LLMs. Utilizing diverse instruction sets on the client side, FedIT outperforms centralized training with only limited local instructions.
Towards Building the FederatedGPT: Federated Instruction Tuning
[ "Jianyi Zhang", "Saeed Vahidian", "Martin Kuo", "Chunyuan Li", "Ruiyi Zhang", "Tong Yu", "Guoyin Wang", "Yiran Chen" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=SvJx4a75QZ
@inproceedings{ condat2023tamuna, title={{TAMUNA}: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation}, author={Laurent Condat and Ivan Agarsk{\'y} and Grigory Malinovsky and Peter Richt{\'a}rik}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=SvJx4a75QZ} }
In federated learning, a large number of users collaborate to learn a global model. They alternate local computations and communication with a distant server. Communication, which can be slow and costly, is the main bottleneck in this setting. In addition to communication-efficiency, a robust algorithm should allow for partial participation, the desirable feature that not all clients need to participate to every round of the training process. To reduce the communication load and therefore accelerate distributed gradient descent, two strategies are popular: 1) communicate less frequently; that is, perform several iterations of local computations between the communication rounds; and 2) communicate compressed information instead of full-dimensional vectors. We propose TAMUNA, the first algorithm for distributed optimization and federated learning, which harnesses these two strategies jointly and allows for partial participation. TAMUNA converges linearly to an exact solution in the strongly convex setting, with a doubly accelerated rate: it provably benefits from the two acceleration mechanisms provided by local training and compression, namely a better dependency on the condition number of the functions and on the model dimension, respectively.
TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation
[ "Laurent Condat", "Ivan Agarský", "Grigory Malinovsky", "Peter Richtárik" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PuYD0fh5aq
@inproceedings{ jin2023fedmlhe, title={Fed{ML}-{HE}: An Efficient Homomorphic-Encryption-Based Privacy-Preserving Federated Learning System}, author={Weizhao Jin and Yuhang Yao and Shanshan Han and Carlee Joe-Wong and Srivatsan Ravi and Salman Avestimehr and Chaoyang He}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=PuYD0fh5aq} }
Federated Learning trains machine learning models on distributed devices by aggregating local model updates instead of local data. However, privacy concerns arise as the aggregated local models on the server may reveal sensitive personal information by inversion attacks. Privacy-preserving methods, such as homomorphic encryption (HE), then become necessary for FL training. Despite HE's privacy advantages, its applications suffer from impractical overheads, especially for foundation models. In this paper, we present FedML-HE, the first practical federated learning system with efficient HE-based secure model aggregation. FedML-HE proposes to selectively encrypt sensitive parameters, significantly reducing both computation and communication overheads during training while providing customizable privacy preservation. Our optimized system demonstrates considerable overhead reduction, particularly for large foundation models (e.g., $\sim$10x reduction for ResNet-50, and up to $\sim$40x reduction for BERT), demonstrating the potential for scalable HE-based FL deployment.
FedML-HE: An Efficient Homomorphic-Encryption-Based Privacy-Preserving Federated Learning System
[ "Weizhao Jin", "Yuhang Yao", "Shanshan Han", "Carlee Joe-Wong", "Srivatsan Ravi", "Salman Avestimehr", "Chaoyang He" ]
Workshop/Federated_Learning
2303.10837
[ "https://github.com/FedML-AI/FedML" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=PmahoyE89G
@inproceedings{ makkuva2023laser, title={{LASER}: Linear Compression in Wireless Distributed Optimization}, author={Ashok Makkuva and Marco Bondaschi and Thijs Vogels and Martin Jaggi and Hyeji Kim and Michael Gastpar}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=PmahoyE89G} }
Data-parallel SGD is the de facto algorithm for distributed optimization, especially for large scale machine learning. Despite its merits, communication bottleneck is one of its persistent issues. Most compression schemes to alleviate this either assume noiseless communication links, or fail to achieve good performance on practical tasks. In this paper, we close this gap and introduce $\bf{LASER}$: ${\bf L}$ine${\bf A}$r Compre${\bf S}$sion in Wir${\bf E}$less Dist${\bf R}$ibuted Optimization. LASER capitalizes on the inherent low-rank structure of gradients and transmits them efficiently over the noisy channels. Whilst enjoying theoretical guarantees similar to those of the classical SGD, \textsc{LASER} shows consistent gains over baselines on a variety of practical benchmarks. In particular, it outperforms the state-of-the-art compression schemes on challenging computer vision and GPT language modeling tasks. On the latter, we obtain $50$-$64$ % improvement in perplexity over our baselines for noisy channels.
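As a rough illustration of the low-rank gradient compression that LASER builds on (not its exact transmission scheme over noisy channels), the sketch below keeps only the top singular components of a gradient matrix; the matrix size and rank are illustrative assumptions.

```python
import numpy as np

def low_rank_compress(grad, rank=4):
    """Compress a gradient matrix by keeping its top-`rank` singular components."""
    U, s, Vt = np.linalg.svd(grad, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank]       # what a worker would transmit

def decompress(U, s, Vt):
    """Server-side reconstruction of the rank-limited gradient."""
    return (U * s) @ Vt

G = np.random.default_rng(1).normal(size=(256, 128))
factors = low_rank_compress(G, rank=4)
G_hat = decompress(*factors)
ratio = sum(f.size for f in factors) / G.size     # fraction of entries actually sent
```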
LASER: Linear Compression in Wireless Distributed Optimization
[ "Ashok Vardhan Makkuva", "Marco Bondaschi", "Thijs Vogels", "Martin Jaggi", "Hyeji Kim", "Michael Gastpar" ]
Workshop/Federated_Learning
2310.13033
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=Pj6BPHZy56
@inproceedings{ hartmann2023mofld, title={{MOFL}/D: A Federated Multi-objective Learning Framework with Decomposition}, author={Maria Hartmann and Gr{\'e}goire Danoy and Mohammed Alswaitti and Pascal Bouvry}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=Pj6BPHZy56} }
Multi-objective learning problems occur in all aspects of life and have been studied for decades, including in the field of machine learning. Many such problems also exist in distributed settings, where data cannot easily be shared. In recent years, joint machine learning has been made possible in such settings through the development of the Federated Learning (FL) paradigm. However, there is as of now very little research on the general problem of extending the FL concept to multi-objective learning, limiting such problems to non-cooperative individual learning. We address this gap by presenting a general framework for multi-objective FL, based on decomposition (MOFL/D). Our framework addresses the a posteriori type of multi-objective problem, where user preferences are not known during the optimisation process, allowing multiple participants to jointly find a set of solutions, each optimised for some distribution of preferences. We present an instantiation of the framework and validate it through experiments on a set of multi-objective benchmarking problems that are extended from well-known single-objective benchmarks.
MOFL/D: A Federated Multi-objective Learning Framework with Decomposition
[ "Maria Hartmann", "Grégoire Danoy", "Mohammed Alswaitti", "Pascal Bouvry" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=OoEIUohfcp
@inproceedings{ papadopoulos2023absolute, title={Absolute Variation Distance: an Inversion Attack Evaluation Metric for Federated Learning}, author={Georgios Papadopoulos and yash satsangi and Shaltiel Eloul and Marco Pistoia}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=OoEIUohfcp} }
Federated Learning (FL) has emerged as a pivotal approach for training models on decentralized data sources by sharing only model gradients. However, the shared gradients in FL are susceptible to inversion attacks which can expose sensitive information. While several defense and attack strategies have been proposed, their effectiveness is often evaluated using metrics that may not necessarily reflect the success rate of an attack or information retrieval, especially in the context of multidimensional data such as images. Traditional metrics like the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Mean Squared Error (MSE) are typically used as lightweight metrics, but they rely only on pixel-wise comparison and fail to consider the semantic context of the recovered data. This paper introduces the Absolute Variation Distance (AVD), a lightweight metric derived from total variation, to assess data recovery and information leakage in FL. Unlike traditional metrics, AVD offers a continuous measure for extracting information in noisy images and aligns closely with human perception. Our results, combined with a user experience survey, demonstrate that AVD provides a more accurate and consistent measure of data recovery. It also matches the accuracy of the more costly and complex neural-network-based metric, the Learned Perceptual Image Patch Similarity (LPIPS). Hence it offers an effective tool for automatic evaluation of data security in federated settings and a reliable way of studying defense and inversion attack strategies in FL.
Absolute Variation Distance: an Inversion Attack Evaluation Metric for Federated Learning
[ "Georgios Papadopoulos", "yash satsangi", "Shaltiel Eloul", "Marco Pistoia" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=LiSj1GRVhL
@inproceedings{ fan{\`\i}2023fedr, title={Fed3R: Recursive Ridge Regression for Federated Learning with strong pre-trained models}, author={Eros Fan{\`\i} and Raffaello Camoriano and Barbara Caputo and Marco Ciccone}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=LiSj1GRVhL} }
Current Federated Learning (FL) methods often struggle with high statistical heterogeneity across clients' data, resulting in client drift due to biased local solutions. This issue is particularly pronounced in the final classification layer, negatively impacting convergence speed and accuracy throughout model aggregation. To overcome these challenges, we introduce Federated Recursive Ridge Regression (Fed3R). Our method replaces the softmax classifier with a ridge regression-based one computed in closed form, ensuring robustness to statistical heterogeneity and drastically reducing convergence time and communication costs. When the feature extractor is fixed, the incremental formulation of Fed3R is equivalent to the exact centralized solution. Thus, Fed3R enables higher-capacity pre-trained feature extractors with better predictive performance that are incompatible with previous FL techniques, since no backpropagation through the feature extractor is required and only a few rounds are needed to converge. We propose Fed3R in three variants, with Fed3R-RF significantly enhancing performance to levels akin to centralized training while remaining competitive regarding the total communication costs.
Fed3R: Recursive Ridge Regression for Federated Learning with strong pre-trained models
[ "Eros Fanì", "Raffaello Camoriano", "Barbara Caputo", "Marco Ciccone" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=JmrHzzDiyI
@inproceedings{ andrew2023oneshot, title={One-shot Empirical Privacy Estimation for Federated Learning}, author={Galen Andrew and Peter Kairouz and Sewoong Oh and Alina Oprea and Hugh McMahan and Vinith Suriyakumar}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=JmrHzzDiyI} }
Privacy estimation techniques for differentially private (DP) algorithms are useful for comparing against analytical bounds, or to empirically measure privacy loss in settings where known analytical bounds are not tight. However, existing privacy auditing techniques usually make strong assumptions on the adversary (e.g., knowledge of intermediate model iterates or the training data distribution), are tailored to specific tasks, model architectures, or DP algorithm, and/or require retraining the model many times (typically on the order of thousands). These shortcomings make deploying such techniques at scale difficult in practice, especially in federated settings where model training can take days or weeks. In this work, we present a novel ``one-shot'' approach that can systematically address these challenges, allowing efficient auditing or estimation of the privacy loss of a model during the same, single training run used to fit model parameters, and without requiring any *a priori* knowledge about the model architecture, task, or DP training algorithm. We show that our method provides provably correct estimates for the privacy loss under the Gaussian mechanism, and we demonstrate its performance on well-established FL benchmark datasets under several adversarial threat models.
One-shot Empirical Privacy Estimation for Federated Learning
[ "Galen Andrew", "Peter Kairouz", "Sewoong Oh", "Alina Oprea", "Hugh McMahan", "Vinith Suriyakumar" ]
Workshop/Federated_Learning
2302.03098
[ "https://github.com/google-research/federated" ]
https://huggingface.co/papers/2302.03098
0
0
0
6
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=HyRwexERAo
@inproceedings{ zhao2023breaking, title={Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages}, author={Wanru Zhao and Yihong Chen and Royson Lee and Xinchi Qiu and Yan Gao and Hongxiang Fan and Nicholas Donald Lane}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=HyRwexERAo} }
Pretrained large language models (LLMs) have emerged as a cornerstone in modern natural language processing, with their utility expanding to various applications and languages. However, the fine-tuning of multilingual LLMs, particularly for low-resource languages, is fraught with challenges stemming from data-sharing restrictions (the physical border) and from the inherent linguistic differences (the linguistic border). These barriers hinder users of various languages, especially those in low-resource regions, from fully benefiting from the advantages of LLMs. To address these challenges, we propose the Federated Prompt Tuning Paradigm for multilingual scenarios, which utilizes parameter-efficient fine-tuning while adhering to privacy restrictions. We have designed a comprehensive set of experiments and analyzed them using a novel notion of language distance to underscore the strengths of this paradigm: Even under computational constraints, our method not only bolsters data efficiency but also facilitates mutual enhancements across languages, particularly benefiting low-resource ones. Compared to traditional local cross-lingual transfer tuning methods, our approach achieves 6.9\% higher accuracy, reduces the training parameters by over 99\%, and demonstrates stronger cross-lingual generalization. Such findings underscore the potential of our approach to promote social equality, ensure user privacy, and champion linguistic diversity.
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
[ "Wanru Zhao", "Yihong Chen", "Royson Lee", "Xinchi Qiu", "Yan Gao", "Hongxiang Fan", "Nicholas Donald Lane" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=HiPe4SjZMs
@inproceedings{ joseph2023learning, title={Learning Optimizers for Local {SGD}}, author={Charles-{\'E}tienne Joseph and Benjamin Th{\'e}rien and Abhinav Moudgil and Boris Knyazev and Eugene Belilovsky}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=HiPe4SjZMs} }
Communication-efficient variants of SGD, specifically local SGD, have received a great deal of interest in recent years. These approaches compute multiple gradient steps locally, that is, on each worker, before averaging model parameters, helping relieve the critical communication bottleneck in distributed deep learning training. Although many variants of these approaches have been proposed, they can sometimes lag behind state-of-the-art optimizers for deep learning. In this work, we incorporate local optimizers that compute multiple updates into a learned optimization framework, allowing us to meta-learn potentially more efficient local SGD algorithms. Our results demonstrate that local learned optimizers can substantially outperform local SGD and its sophisticated variants while maintaining their communication efficiency. We show that the learned optimizers can generalize to new datasets and architectures, demonstrating the potential of learned optimizers for improving communication-efficient distributed learning.
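A minimal sketch of plain local SGD with periodic parameter averaging, the baseline that the learned optimizers in this paper replace; the per-worker quadratic objectives, step size, and round counts are illustrative assumptions, and the meta-learned optimizer itself is not shown.

```python
import numpy as np

def local_sgd(num_workers=4, local_steps=8, rounds=20, lr=0.1, dim=10, seed=0):
    """Vanilla local SGD on per-worker quadratics f_i(w) = ||w - c_i||^2 / 2."""
    rng = np.random.default_rng(seed)
    centers = rng.normal(size=(num_workers, dim))   # heterogeneous local optima
    w_global = np.zeros(dim)
    for _ in range(rounds):
        local_models = []
        for i in range(num_workers):
            w = w_global.copy()
            for _ in range(local_steps):            # local updates, no communication
                w -= lr * (w - centers[i])          # gradient of the local quadratic
            local_models.append(w)
        w_global = np.mean(local_models, axis=0)    # one communication round
    return w_global

w = local_sgd()  # converges toward the mean of the local optima
```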
Learning Optimizers for Local SGD
[ "Charles-Étienne Joseph", "Benjamin Thérien", "Abhinav Moudgil", "Boris Knyazev", "Eugene Belilovsky" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=H0inHCV05c
@inproceedings{ li2023beyond, title={Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning}, author={Jianwei Li and Sheng Liu and Qi Lei}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=H0inHCV05c} }
Federated learning (FL) emphasizes decentralized training by storing data locally and transmitting only model updates, underlining user privacy. However, a line of work on privacy attacks undermines user privacy by extracting sensitive data from large language models during FL. Yet, these attack techniques face distinct hurdles: some work chiefly with limited batch sizes (e.g., a batch size of 1), and others can be easily defended against or are transparently detectable. This paper introduces an innovative approach that is challenging to detect and defend against, significantly enhancing the recovery rate of text in various batch-size settings. Building on fundamental gradient matching and domain prior knowledge, we enhance the recovery by tapping into the input of the Pooler layer of language models, offering additional feature-level guidance that effectively assists optimization-based attacks. We benchmark our method using text classification tasks on datasets such as CoLA, SST, and Rotten Tomatoes. Across different batch sizes and models, our approach consistently outperforms previous state-of-the-art results.
Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning
[ "Jianwei Li", "Sheng Liu", "Qi Lei" ]
Workshop/Federated_Learning
2312.05720
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=FakNykU4PF
@inproceedings{ bornstein2023realfm, title={Real{FM}: A Realistic Mechanism to Incentivize Data Contribution and Device Participation}, author={Marco Bornstein and Amrit Bedi and Anit Kumar Sahu and Furqan Khan and Furong Huang}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=FakNykU4PF} }
Edge device participation in federated learning (FL) has typically been studied under the lens of device-server communication (e.g., device dropout) and assumes an undying desire from edge devices to participate in FL. As a result, current FL frameworks are flawed when implemented in real-world settings, with many encountering the free-rider problem. In a step to push FL towards realistic settings, we propose RealFM: the first truly federated mechanism which (1) realistically models device utility, (2) incentivizes data contribution and device participation, and (3) provably removes the free-rider phenomenon. RealFM does not require data sharing and allows for a non-linear relationship between model accuracy and utility, which improves the utility gained by the server and participating devices compared to non-participating devices as well as devices participating in other FL mechanisms. On real-world data, RealFM improves device and server utility, as well as data contribution, by up to 3 orders of magnitude and $7\times$, respectively, compared to baseline mechanisms.
RealFM: A Realistic Mechanism to Incentivize Data Contribution and Device Participation
[ "Marco Bornstein", "Amrit Bedi", "Anit Kumar Sahu", "Furqan Khan", "Furong Huang" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=EmV9sGpZ7q
@inproceedings{ cho2023heterogeneous, title={Heterogeneous Lo{RA} for Federated Fine-tuning of On-device Foundation Models}, author={Yae Jee Cho and Luyang Liu and Zheng Xu and Aldi Fahrezi and Matt Barnes and Gauri Joshi}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=EmV9sGpZ7q} }
Foundation models (FMs) with massive parameter spaces, pretrained on a large amount of (public) data, perform remarkably well on various downstream tasks with just a few samples for fine-tuning. However, direct fine-tuning of standard FMs often becomes difficult due to their massive size, especially for scenarios where FMs are adapted on private data distributed across resource-limited devices. As such, only those FMs with a relatively small parameter size may be capable of on-device fine-tuning. We call these smaller FMs *on-device FMs (ODFMs)*. In our work, we investigate parameter-efficient federated fine-tuning of ODFMs (XXS PaLM2) for downstream tasks on devices using low-rank approximations (LoRAs), where we investigate multi-session chat data from real clients as the downstream task of interest. We first examine federated fine-tuning with homogeneous LoRA ranks across clients, and show that higher ranks can lead to overfitting despite their faster learning speed, whilst lower ranks do not overfit but converge more slowly in training. Based on these observations, we propose heterogeneous LoRA, where we deploy *heterogeneous ranks* across clients, aggregate the heterogeneous LoRA modules through zero-padding, and redistribute the LoRA modules heterogeneously through truncation. Our proposed heterogeneous LoRA is simple yet effective. It achieves the best of both worlds by combining the advantages of high-rank and low-rank LoRAs. This allows us to achieve the best performance with the fewest communication rounds, while also avoiding the problem of overfitting.
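A minimal sketch of the zero-padding aggregation and truncation-based redistribution of heterogeneous-rank LoRA factors described above, assuming each client holds a pair of factors (A, B) for a single layer; the dimensions, ranks, and uniform averaging weights are illustrative assumptions.

```python
import numpy as np

def pad_rank(A, B, r_max):
    """Zero-pad LoRA factors A (d_out x r) and B (r x d_in) to a common rank r_max."""
    d_out, r = A.shape
    d_in = B.shape[1]
    A_pad = np.zeros((d_out, r_max))
    B_pad = np.zeros((r_max, d_in))
    A_pad[:, :r] = A
    B_pad[:r, :] = B
    return A_pad, B_pad

def aggregate_heterogeneous_lora(client_factors):
    """Average zero-padded (A, B) pairs coming from clients with different ranks."""
    r_max = max(A.shape[1] for A, _ in client_factors)
    padded = [pad_rank(A, B, r_max) for A, B in client_factors]
    A_avg = np.mean([A for A, _ in padded], axis=0)
    B_avg = np.mean([B for _, B in padded], axis=0)
    return A_avg, B_avg

def redistribute(A_avg, B_avg, r_client):
    """Truncate the aggregated factors back down to a client's local rank."""
    return A_avg[:, :r_client], B_avg[:r_client, :]

# Toy usage: three clients with ranks 2, 4, and 8 for a 16x32 weight matrix.
rng = np.random.default_rng(0)
d_out, d_in = 16, 32
clients = [(rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))) for r in (2, 4, 8)]
A_global, B_global = aggregate_heterogeneous_lora(clients)
A_small, B_small = redistribute(A_global, B_global, r_client=2)
```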
Heterogeneous LoRA for Federated Fine-tuning of On-device Foundation Models
[ "Yae Jee Cho", "Luyang Liu", "Zheng Xu", "Aldi Fahrezi", "Matt Barnes", "Gauri Joshi" ]
Workshop/Federated_Learning
2401.06432
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ESCL5T3EgV
@inproceedings{ jiang2023fdapt, title={{FDAPT}: Federated Domain-adaptive Pre-training for Language Models}, author={Lekang Jiang and Filip Svoboda and Nicholas Donald Lane}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=ESCL5T3EgV} }
Foundation models (FMs) have shown prominent success in a wide range of tasks [Bommasani et al., 2021]. Their applicability to specific domain-task pairings relies on the availability of, both, high-quality data and significant computational resources. These challenges are not new to the field and, indeed, Federated Learning (FL) has been shown to be a promising solution in similar setups [Yu et al., 2023, Zhuang et al., 2023]. This paper tackles the specific case of Domain-adaptive Pre-training (DAPT), a key step in the application of FMs. We conduct the first comprehensive empirical study to evaluate the performance of Federated Domain-adaptive Pre-training (FDAPT). We demonstrate that FDAPT can maintain competitive downstream task performance to the centralized baseline in both IID and non-IID situations. Finally, we propose a novel algorithm, Frozen Federated Domain-adaptive Pre-training (FFDAPT). FFDAPT improves the computational efficiency by 12.1% on average and exhibits similar downstream task performance to vanilla FDAPT, with general performance fluctuations remaining less than 1%.
FDAPT: Federated Domain-adaptive Pre-training for Language Models
[ "Lekang Jiang", "Filip Svoboda", "Nicholas Donald Lane" ]
Workshop/Federated_Learning
2307.06933
[ "https://github.com/scylj1/FDAPT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=BrcHuO2BVc
@inproceedings{ li2023backdoor, title={Backdoor Threats from Compromised Foundation Models to Federated Learning}, author={Xi Li and Songhe Wang and Chen Wu and Hao Zhou and Jiaqi Wang}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=BrcHuO2BVc} }
Federated learning (FL) represents a novel paradigm for machine learning, addressing critical issues related to data privacy and security, yet suffering from data insufficiency and imbalance. The emergence of foundation models (FMs) provides a promising solution to the problems with FL. For instance, FMs could serve as teacher models or good starting points for FL. However, the integration of FMs in FL presents a new challenge, exposing the FL systems to potential threats. This paper investigates the robustness of FL incorporating FMs by assessing their susceptibility to backdoor attacks. Contrary to classic backdoor attacks against FL, the proposed attack (1) does not require the attacker to be fully involved in the FL process; (2) poses a significant risk in practical FL scenarios; (3) is able to evade existing robust FL frameworks and FL backdoor defenses; (4) underscores the need for research on the robustness of FL systems integrated with FMs. The effectiveness of the proposed attack is demonstrated by extensive experiments with various well-known models and benchmark datasets encompassing both text and image classification domains.
Backdoor Threats from Compromised Foundation Models to Federated Learning
[ "Xi Li", "Songhe Wang", "Chen Wu", "Hao Zhou", "Jiaqi Wang" ]
Workshop/Federated_Learning
2311.00144
[ "https://github.com/lixi1994/backdoor_fm_fl" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=AbrnDOw8R9
@inproceedings{ choquette-choo2023correlated, title={Correlated Noise Provably Beats Independent Noise for Differentially Private Learning}, author={Christopher Choquette-Choo and Krishnamurthy Dvijotham and Krishna Pillutla and Arun Ganesh and Thomas Steinke and Abhradeep Guha Thakurta}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=AbrnDOw8R9} }
Differentially private learning algorithms inject noise into the learning process; the most common private learning algorithm, DP-SGD, adds independent Gaussian noise in each iteration. Motivated by the practical considerations in federated learning, recent work on matrix factorization mechanisms has shown empirically that introducing correlations in the noise can greatly improve their utility. We characterize the asymptotic objective suboptimality for any choice of the correlation function, giving precise analytical bounds for linear regression. We show, using these bounds, how correlated noise provably improves upon vanilla DP-SGD as a function of problem parameters such as the effective dimension and condition number. Moreover, our analytical expression for the near-optimal correlation function circumvents the cubic complexity of the semi-definite program used to optimize the noise correlation in prior work. We validate these theoretical results with experiments on private deep learning in both centralized and federated settings. Our work matches or outperforms prior work while being efficient both in terms of computation and memory.
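A toy illustration of the difference between independent and correlated noise injection (not the paper's near-optimal correlation function): here each step's noise is a fixed linear combination of the current and previous i.i.d. Gaussian draws, the simplest form of anti-correlated noise. The correlation coefficient `c` and noise scale are illustrative assumptions.

```python
import numpy as np

def correlated_noise(num_steps, dim, sigma=1.0, c=0.5, seed=0):
    """Toy correlated noise: step t injects z_t - c * z_{t-1} instead of an
    independent draw, so injected errors partially cancel across iterations."""
    rng = np.random.default_rng(seed)
    z = sigma * rng.standard_normal((num_steps + 1, dim))
    return z[1:] - c * z[:-1]

noise = correlated_noise(num_steps=100, dim=10)
# In a DP-SGD-style loop one would add noise[t] to the clipped gradient at step t.
```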
Correlated Noise Provably Beats Independent Noise for Differentially Private Learning
[ "Christopher A. Choquette-Choo", "Krishnamurthy Dj Dvijotham", "Krishna Pillutla", "Arun Ganesh", "Thomas Steinke", "Abhradeep Guha Thakurta" ]
Workshop/Federated_Learning
2310.06771
[ "" ]
https://huggingface.co/papers/2310.06771
1
0
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=8zduZGpzZl
@inproceedings{ qiu2023textdriven, title={Text-driven Prompt Generation for Vision-Language Models in Federated Learning}, author={Chen Qiu and Xingyu Li and Chaithanya Kumar Mummadi and Madan Ganesh and Zhenzhen Li and Lu Peng and Wan-Yi Lin}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=8zduZGpzZl} }
Prompt learning for vision-language models, e.g., CoOp, has shown great success in adapting CLIP to different downstream tasks, making it a promising solution for federated learning due to computational reasons. Existing prompt learning techniques replace hand-crafted text prompts with learned vectors that offer improvements on seen classes, but struggle to generalize to unseen classes. Our work addresses this challenge by proposing Federated Text-driven Prompt Generation (FedTPG), which learns a unified prompt generation network across multiple remote clients in a scalable manner. The prompt generation network is conditioned on task-related text input and is thus context-aware, making it suitable for generalizing to both seen and unseen classes. Our comprehensive empirical evaluations on nine diverse image classification datasets show that our method is superior to existing federated prompt learning methods, achieving overall better generalization on both seen and unseen classes as well as on unseen datasets.
Text-driven Prompt Generation for Vision-Language Models in Federated Learning
[ "Chen Qiu", "Xingyu Li", "Chaithanya Kumar Mummadi", "Madan Ravi Ganesh", "Zhenzhen Li", "Lu Peng", "Wan-Yi Lin" ]
Workshop/Federated_Learning
2310.06123
[ "" ]
https://huggingface.co/papers/2310.06123
0
0
0
7
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=5JsO2DClwk
@inproceedings{ collins2023profit, title={Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning}, author={Liam Collins and Shanshan Wu and Sewoong Oh and Khe Chai Sim}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=5JsO2DClwk} }
In many applications of federated learning (FL), clients desire models that are personalized using their local data, yet are also robust in the sense that they retain general global knowledge. However, the presence of data heterogeneity across clients induces a fundamental trade-off between personalization (i.e., adaptation to a local distribution) and robustness (i.e., not forgetting previously learned general knowledge). It is critical to understand how to navigate this personalization vs robustness trade-off when designing federated systems, which are increasingly moving towards a paradigm of fine-tuning large foundation models. Due to limited computational and communication capabilities in most federated settings, this foundation model fine-tuning must be done using parameter-efficient fine-tuning (PEFT) approaches. While some recent work has studied federated approaches to PEFT, the personalization vs robustness trade-off of federated PEFT has been largely unexplored. In this work, we take a step towards bridging this gap by benchmarking fundamental FL algorithms -- FedAvg and FedSGD plus personalization (via client local fine-tuning) -- applied to one of the most ubiquitous PEFT approaches to large language models (LLMs) -- prompt tuning -- in a multitude of hyperparameter settings under varying levels of data heterogeneity. Our results show that federated-trained prompts can be surprisingly robust when using a small learning rate with many local epochs for personalization, especially when using an adaptive optimizer as the client optimizer during federated training. We also demonstrate that simple approaches such as adding regularization and interpolating two prompts are effective in improving the personalization vs robustness trade-off in computation-limited settings with few local updates allowed for personalization.
Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning
[ "Liam Collins", "Shanshan Wu", "Sewoong Oh", "Khe Chai Sim" ]
Workshop/Federated_Learning
2310.04627
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=4uyyLG4KCH
@inproceedings{ kandpal2023user, title={User Inference Attacks on Large Language Models}, author={Nikhil Kandpal and Krishna Pillutla and Alina Oprea and Peter Kairouz and Christopher Choquette-Choo and Zheng Xu}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=4uyyLG4KCH} }
We study the privacy implications of fine-tuning large language models (LLMs) on user-stratified (i.e. federated) data. We define a realistic threat model, called user inference, wherein an attacker infers whether or not a user's data was used for fine-tuning. We implement attacks for this threat model that require only a small set of samples from a user (possibly different from the samples used for training) and black-box access to the fine-tuned LLM. We find that LLMs are susceptible to user inference attacks across a variety of fine-tuning datasets, with outlier users (i.e., those with data distributions sufficiently different from other users) and users who contribute large quantities of data being most susceptible. Finally, we find that mitigation interventions in the training algorithm, such as batch or per-example gradient clipping and early stopping, fail to prevent user inference, while limiting the number of fine-tuning samples from a single user can reduce attack effectiveness (albeit at the cost of reducing the total amount of fine-tuning data).
User Inference Attacks on Large Language Models
[ "Nikhil Kandpal", "Krishna Pillutla", "Alina Oprea", "Peter Kairouz", "Christopher A. Choquette-Choo", "Zheng Xu" ]
Workshop/Federated_Learning
2310.09266
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=4apX9Kcxie
@inproceedings{ kim2023fedfn, title={Fed{FN}: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning}, author={Seongyoon Kim and Gihun Lee and Jaehoon Oh and Se-Young Yun}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=4apX9Kcxie} }
Federated Learning (FL) is a collaborative method for training models while preserving data privacy in decentralized settings. However, FL encounters challenges related to data heterogeneity, which can result in performance degradation. In our study, we observe that as data heterogeneity increases, feature representation in the FedAVG model deteriorates more significantly compared to classifier weight. Additionally, we observe that as data heterogeneity increases, the gap between higher feature norms for observed classes, obtained from local models, and feature norms of unobserved classes widens, in contrast to the behavior of classifier weight norms. This widening gap extends to encompass the feature norm disparities between local and the global models. To address these issues, we introduce Federated Averaging with Feature Normalization Update (FedFN), a straightforward learning method. We demonstrate the superior performance of FedFN through extensive experiments, even when applied to pretrained ResNet18. Subsequently, we confirm the applicability of FedFN to foundation models.
FedFN: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning
[ "Seongyoon Kim", "Gihun Lee", "Jaehoon Oh", "Se-Young Yun" ]
Workshop/Federated_Learning
2311.13267
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=1ww9tjEQVL
@inproceedings{ mclaughlin2023fedlda, title={Fed{LDA}: Personalized Federated Learning Through Collaborative Linear Discriminant Analysis}, author={Connor Mclaughlin and Lili Su}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=1ww9tjEQVL} }
Data heterogeneity poses a significant challenge to federated learning. Observing the universality of neural networks in approximating the ground-truth, one emerging perspective is to train personalized models via learning a shared representation coupled with customized classifiers for each client. To the best of our knowledge, except for the concurrent work FedPAC, individual classifiers in most existing works only utilize local datasets, which may result in poor generalization. In this work, we propose FedLDA, which enables federation in training classifiers by performing collaborative Linear Discriminant Analysis (LDA) on top of the latent shared representation. Our algorithm design is motivated by the observation that upon network initialization the extracted features are highly Gaussian, and client LDA models may benefit from distributed estimation of the Gaussian parameters. To support the high-dimension, low-sample scenario often encountered in PFL, we utilize a momentum update of the Gaussian parameters and employ $\ell_1$ regularization of local covariances. Our numerical results show that, surprisingly, in contrast to multiple state-of-the-art methods, our FedLDA is capable of maintaining the initial Gaussianity. More importantly, through empirical study, we demonstrate that our FedLDA method leads to better generalization than state-of-the-art algorithms. Compared with FedPAC, our method is communication-efficient and does not require the availability of a validation dataset.
FedLDA: Personalized Federated Learning Through Collaborative Linear Discriminant Analysis
[ "Connor Mclaughlin", "Lili Su" ]
Workshop/Federated_Learning
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=06quMTmtRV
@inproceedings{ babakniya2023slora, title={{SL}o{RA}: Federated Parameter Efficient Fine-Tuning of Language Models}, author={Sara Babakniya and Ahmed Elkordy and Yahya Ezzeldin and Qingfeng Liu and Kee-Bong Song and MOSTAFA EL-Khamy and Salman Avestimehr}, booktitle={International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023}, year={2023}, url={https://openreview.net/forum?id=06quMTmtRV} }
Fine-tuning pre-trained models has gained significant success in delivering SOTA results across various NLP tasks. In the absence of centralized data, Federated Learning (FL) helps the model to benefit from clients' private data for fine-tuning. However, due to the limited communication, computation, and storage capabilities of edge devices and the huge sizes of popular pre-trained models, efficient fine-tuning is crucial. This work explores the opportunities and challenges of applying parameter-efficient fine-tuning (PEFT) methods in FL for language tasks. Specifically, our investigations reveal that with increasing data heterogeneity across users, the gap between fully fine-tuning the model and employing PEFT methods widens. To bridge this performance gap, we propose a method, SLoRA, which overcomes the key limitations of LoRA in highly heterogeneous data scenarios through a novel data-driven initialization technique. Our experimental results demonstrate that SLoRA achieves performance comparable to full fine-tuning, with significantly sparse updates of $\sim 1\%$ density, while reducing training time by up to $90\%$.
SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models
[ "Sara Babakniya", "Ahmed Roushdy Elkordy", "Yahya H. Ezzeldin", "Qingfeng Liu", "Kee-Bong Song", "MOSTAFA EL-Khamy", "Salman Avestimehr" ]
Workshop/Federated_Learning
2308.06522
[ "" ]
https://huggingface.co/papers/2308.06522
0
0
0
7
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=zrgqy65sQw
@inproceedings{ zollo2023prompt, title={Prompt Risk Control: A Rigorous Framework for Responsible Deployment of Large Language Models}, author={Thomas Zollo and Todd Morrill and Zhun Deng and Jake Snell and Toniann Pitassi and Richard Zemel}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=zrgqy65sQw} }
The recent explosion in the capabilities of large language models has led to a wave of interest in how best to prompt the model to perform a given task. While it may be tempting to choose a prompt based on average empirical results on a validation set, this can lead to a deployment where unexpectedly poor responses are generated. To mitigate this prospect, we propose a lightweight framework, Prompt Risk Control, for selecting a prompt based on rigorous upper bounds on families of informative risk measures. We provide and compare different methods for producing bounds on a diverse set of metrics measuring quantities such as worst-case response and disparities in generation quality across the population of users. In addition, we extend the underlying statistical bounding techniques to accommodate the possibility of distribution shifts in deployment. Experiments on applications such as chatbots, medical question summarization, and code generation highlight how such a framework can reduce the risk of the worst outcomes.
Prompt Risk Control: A Rigorous Framework for Responsible Deployment of Large Language Models
[ "Thomas Zollo", "Todd Morrill", "Zhun Deng", "Jake Snell", "Toniann Pitassi", "Richard Zemel" ]
Workshop/SoLaR
2311.13628
[ "https://github.com/thomaspzollo/prompt_risk" ]
https://huggingface.co/papers/2311.13628
0
0
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=zNgdomlg4k
@inproceedings{ rateike2023weakly, title={Weakly Supervised Detection of Hallucinations in {LLM} Activations}, author={Miriam Rateike and Celia Cintas and John Wamburu and Tanya Akumu and Skyler Speakman}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=zNgdomlg4k} }
We propose an auditing method to identify whether a large language model (LLM) encodes patterns such as hallucinations in its internal states, which may propagate to downstream tasks. We introduce a weakly supervised auditing technique using a subset scanning approach to detect anomalous patterns in LLM activations from pre-trained models. Importantly, our method does not need knowledge of the type of patterns \emph{a-priori}. Instead, it relies on a reference dataset devoid of anomalies during testing. Further, our approach enables the identification of pivotal nodes responsible for encoding these patterns, which may offer crucial insights for fine-tuning specific sub-networks for bias mitigation. We introduce two new scanning methods to handle LLM activations for anomalous sentences that may deviate from the expected distribution in either direction. Our results confirm prior findings of BERT's limited internal capacity for encoding hallucinations, while OPT appears capable of encoding hallucination information internally. Importantly, our scanning approach, without prior exposure to false statements, performs comparably to a fully supervised out-of-distribution classifier.
Weakly Supervised Detection of Hallucinations in LLM Activations
[ "Miriam Rateike", "Celia Cintas", "John Wamburu", "Tanya Akumu", "Skyler Speakman" ]
Workshop/SoLaR
2312.02798
[ "https://github.com/Trusted-AI/adversarial-robustness-toolbox" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zKDSfGhCoK
@inproceedings{ dorner2023do, title={Do Personality Tests Generalize to Large Language Models?}, author={Florian Dorner and Tom S{\"u}hr and Samira Samadi and Augustin Kelava}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=zKDSfGhCoK} }
With large language models (LLMs) appearing to behave increasingly human-like in text-based interactions, it has become popular to attempt to evaluate various properties of these models using tests originally designed for humans. While re-using existing tests is a resource-efficient way to evaluate LLMs, careful adjustments are usually required to ensure that test results are even valid across human sub-populations. Thus, it is not clear to what extent different tests’ validity generalizes to LLMs. In this work, we provide evidence that LLMs’ responses to personality tests systematically deviate from typical human responses, implying that these results cannot be interpreted in the same way as human test results. Concretely, reverse-coded items (e.g. “I am introverted” vs “I am extraverted”) are often both answered affirmatively by LLMs. In addition, variation across different prompts designed to “steer” LLMs to simulate particular personality types does not follow the clear separation into five independent personality factors from human samples. In light of these results, we believe it is important to pay more attention to tests’ validity for LLMs before drawing strong conclusions about potentially ill-defined concepts like LLMs’ “personality”.
Do Personality Tests Generalize to Large Language Models?
[ "Florian Dorner", "Tom Sühr", "Samira Samadi", "Augustin Kelava" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zBbpEfbuTm
@inproceedings{ wang2023mope, title={MoPe: Model Perturbation-based Privacy Attacks on Language Models}, author={Jason Wang and Jeffrey Wang and Marvin Li and Seth Neel}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=zBbpEfbuTm} }
Recent work has shown that Large Language Models (LLMs) can unintentionally leak sensitive information present in their training data. In this paper, we present MoPe ($\textbf{Mo}$del $\textbf{Pe}$rturbations), a new method to identify with high confidence if a given text is in the training data of a pre-trained language model, given white-box access to the model's parameters. MoPe adds noise to the model in parameter space and measures the drop in the log-likelihood for a given point $x$, a statistic we show approximates the trace of the Hessian matrix with respect to model parameters. We compare MoPe to existing state-of-the-art loss-based attacks and other attacks based on second-order curvature information (such as the trace of the Hessian with respect to the model input). Across language models ranging in size from $70$M to $12$B parameters, we show that MoPe is more effective than existing attacks. We also find that the loss of a point alone is insufficient to determine extractability---there are training points we can recover using our methods that have average loss. This casts some doubt on prior work that uses the loss of a point as evidence of memorization or "unlearning."
MoPe: Model Perturbation-based Privacy Attacks on Language Models
[ "Jason Wang", "Jeffrey Wang", "Marvin Li", "Seth Neel" ]
Workshop/SoLaR
2310.14369
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yOP4RvPpeK
@inproceedings{ nicks2023language, title={Language Model Detectors Are Easily Optimized Against}, author={Charlotte Nicks and Eric Mitchell and Rafael Rafailov and Archit Sharma and Christopher Manning and Chelsea Finn and Stefano Ermon}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=yOP4RvPpeK} }
The fluency and general applicability of large language models (LLMs) has motivated significant interest in detecting whether a piece of text was written by a language model. While both academic and commercial detectors have been deployed in some settings, particularly education, other research has highlighted the fragility of these systems. In this paper, we demonstrate a data-efficient attack that fine-tunes language models to confuse existing detectors, leveraging recent developments in reinforcement learning of language models. We use the 'human-ness' score (often just a log probability) of various open-source and commercial detectors as a reward function for reinforcement learning, subject to a KL-divergence constraint that the resulting model does not differ significantly from the original. For a 7B parameter Llama-2 model, fine-tuning for under a day reduces the AUROC of the OpenAI RoBERTa-Large detector from 0.84 to 0.62, while perplexity on OpenWebText increases from 8.7 to only 9.0; with a larger perplexity budget, we reduce AUROC to 0.30 (worse than random), with a perplexity increase to 9.9. Similar to traditional adversarial attacks, we find that this increase in `detector evasion' generalizes to other detectors not used during training. In light of our empirical results, we advise against continued reliance on LLM-generated text detectors.
Language Model Detectors Are Easily Optimized Against
[ "Charlotte Nicks", "Eric Mitchell", "Rafael Rafailov", "Archit Sharma", "Christopher Manning", "Chelsea Finn", "Stefano Ermon" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=x3Ltqz1UFg
@inproceedings{ shah2023scalable, title={Scalable and Transferable Black-Box Jailbreaks for Language Models via Persona Modulation}, author={Rusheb Shah and Quentin Feuillade Montixi and Soroush Pour and Arush Tagade and Javier Rando}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=x3Ltqz1UFg} }
Despite efforts to align large language models to produce harmless responses, they are still vulnerable to jailbreak prompts that elicit unrestricted behaviour. In this work, we investigate persona modulation as a black-box jailbreaking method to steer a target model to take on personalities that are willing to comply with harmful instructions. Rather than manually crafting prompts for each persona, we automate the generation of jailbreaks using a language model assistant. We demonstrate a range of harmful completions made possible by persona modulation, including detailed instructions for synthesising methamphetamine, building a bomb, and laundering money. These automated attacks achieve a harmful completion rate of 42.5% in GPT-4, which is 185 times larger than before modulation (0.23%). These prompts also transfer to Claude 2 and Vicuna with harmful completion rates of 61.0% and 35.9%, respectively. Our work reveals yet another vulnerability in commercial large language models and highlights the need for more comprehensive safeguards.
Scalable and Transferable Black-Box Jailbreaks for Language Models via Persona Modulation
[ "Rusheb Shah", "Quentin Feuillade Montixi", "Soroush Pour", "Arush Tagade", "Javier Rando" ]
Workshop/SoLaR
2311.03348
[ "" ]
https://huggingface.co/papers/2311.03348
2
1
0
6
[]
[ "JailbreakBench/JBB-Behaviors", "walledai/JailbreakBench" ]
[]
[]
[ "JailbreakBench/JBB-Behaviors", "walledai/JailbreakBench" ]
[]
1
poster
null
https://openreview.net/forum?id=ww41yviaql
@inproceedings{ choi2023flexmodel, title={FlexModel: A Framework for Interpretability of Distributed Large Language Models}, author={Matthew Choi and Muhammad Adil Asif and John Willes and D. Emerson}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=ww41yviaql} }
With the rise of Large Language Models (LLMs) characterized by billions of parameters, the hardware prerequisites for their training and deployment have seen a corresponding increase. Although existing tools facilitate model parallelization and distributed training, deeper model interactions, crucial for interpretability and responsible AI techniques, demand thorough knowledge of distributed computing. This complexity often hampers researchers with machine learning expertise but limited distributed computing background. Addressing this challenge, we present FlexModel, a software package crafted to offer a streamlined interface for engaging with large models across multi-GPU and multi-node configurations. FlexModel is compatible with existing technological frameworks and encapsulates PyTorch models. Its HookFunctions facilitate simple interaction with distributed model internals, bridging the gap between distributed and single-device model handling paradigms. Our work's primary contribution, FlexModel, democratizes model interactions, and we validate it in two large-scale experimental contexts: Transformer Induction Head Isolation and the TunedLens implementation. FlexModel enhances accessibility and promotes more inclusive research in the domain of large-scale neural networks. The package is found at https://github.com/VectorInstitute/flex_model.
FlexModel: A Framework for Interpretability of Distributed Large Language Models
[ "Matthew Choi", "Muhammad Adil Asif", "John Willes", "D. Emerson" ]
Workshop/SoLaR
2312.03140
[ "https://github.com/vectorinstitute/flex_model" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=wKe6jE065x
@inproceedings{ yao2023large, title={Large Language Model Unlearning}, author={Yuanshun Yao and Xiaojun Xu and Yang Liu}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=wKe6jE065x} }
We study how to perform unlearning in large language models (LLMs), which can forget an LLM's harmful behaviors learned in its pretraining stage or remove the effect of training samples that need to be deleted per user requests. It highlights the application of aligning LLMs with human preferences. Compared to the standard RLHF (RL from human feedback) solution for aligning LLMs, unlearning has three benefits. (1) It only requires negative examples, which are cheaper to collect than high-quality (i.e. positive) examples in RLHF that require human effort. (2) It is less computationally expensive; the cost is comparable to fine-tuning. (3) It is more effective when we know which training samples cause the misbehavior. To the best of our knowledge, our work is the first to explore LLM unlearning, as well as to set up the settings, goals, and evaluations in LLM unlearning. Our empirical results suggest unlearning is a promising direction for LLM alignment.
Large Language Model Unlearning
[ "Yuanshun Yao", "Xiaojun Xu", "Yang Liu" ]
Workshop/SoLaR
2310.10683
[ "https://github.com/kevinyaobytedance/llm_unlearn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=vRPnLsWQNh
@inproceedings{ kadhe2023fairsisa, title={Fair{SISA}: Ensemble Post-Processing to Improve Fairness of Unlearning in {LLM}s}, author={Swanand Kadhe and Anisa Halimi and Ambrish Rawat and Nathalie Baracaldo}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=vRPnLsWQNh} }
Training large language models (LLMs) is a costly endeavour in terms of time and computational resources. The large amount of training data used during the unsupervised pre-training phase makes it difficult to verify all data and, unfortunately, undesirable data may be ingested during training. Re-training from scratch is impractical and has led to the creation of the \textit{unlearning} discipline, where models are modified to ``unlearn'' undesirable information without retraining. However, any modification can alter the behaviour of LLMs, especially on key dimensions such as \textit{fairness}. This is the first work that examines this interplay between unlearning and fairness for LLMs. In particular, we focus on a popular unlearning framework known as SISA [Bourtoule et al., 2021], which creates an ensemble of models trained on disjoint shards. We evaluate the performance-fairness trade-off for SISA, and empirically demonstrate that SISA can indeed reduce fairness in LLMs. To remedy this, we propose post-processing bias mitigation techniques for ensemble models produced by SISA. Through experimental results, we demonstrate the efficacy of our post-processing framework called \textit{FairSISA}.
FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs
[ "Swanand Kadhe", "Anisa Halimi", "Ambrish Rawat", "Nathalie Baracaldo" ]
Workshop/SoLaR
2312.07420
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=v1WL01lgp8
@inproceedings{ tian2023efficient, title={Efficient Evaluation of Bias in Large Language Models through Prompt Tuning}, author={Jacob-Junqi Tian and D. Emerson and Deval Pandya and Laleh Seyyed-Kalantari and Faiza Khattak}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=v1WL01lgp8} }
Prompting large language models (LLMs) has gained substantial popularity as pre-trained LLMs are capable of performing downstream tasks without requiring large quantities of labelled data. It is, therefore, natural that prompting is also used to evaluate biases exhibited by these models. However, achieving good task-specific performance often requires manual prompt optimization. In this paper, we explore the use of soft-prompt tuning to quantify the biases of LLMs such as OPT and LLaMA. These models are trained on real-world data with potential implicit biases toward certain groups. Since LLMs are increasingly used across many industries and applications, it is crucial to accurately and efficiently identify such biases and their practical implications. In this paper, we use soft-prompt tuning to evaluate model bias across several sensitive attributes through the lens of group fairness (bias). In addition to improved task performance, using soft-prompt tuning provides the advantage of avoiding potential injection of human bias through manually designed prompts. Probing with prompt-tuning reveals important bias patterns, including disparities across age and sexuality.
Efficient Evaluation of Bias in Large Language Models through Prompt Tuning
[ "Jacob-Junqi Tian", "D. Emerson", "Deval Pandya", "Laleh Seyyed-Kalantari", "Faiza Khattak" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=tko8Ln5roY
@inproceedings{ pochinkov2023dissecting, title={Dissecting Large Language Models}, author={Nicky Pochinkov and Nandi Schoots}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=tko8Ln5roY} }
Understanding and shaping the behaviour of Large Language Models (LLMs) is increasingly important as applications become more powerful and more frequently adopted. This paper introduces a machine unlearning method specifically designed for LLMs. We introduce a selective pruning method for LLMs that removes neurons based on their relative importance on a targeted capability compared to overall network performance. This approach is a compute- and data-efficient method for identifying and removing neurons that enable specific behaviours. Our findings reveal that both feed-forward and attention neurons in LLMs are specialized; that is, for specific tasks, certain neurons are more crucial than others.
Dissecting Large Language Models
[ "Nicky Pochinkov", "Nandi Schoots" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=t6ill8O1bq
@inproceedings{ fry2023comparing, title={Comparing Optimization Targets for Contrast-Consistent Search}, author={Hugo Fry and Seamus Fallows and Jamie Wright and Ian Fan and Nandi Schoots}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=t6ill8O1bq} }
We investigate the optimization target of contrast-consistent search (CCS), which aims to recover the internal representations of truth of a large language model. We present a new loss function that we call the Midpoint-Displacement (MD) loss function. We demonstrate that for a certain hyper-parameter value this MD loss function leads to a prober with very similar weights to CCS. We further show that this hyper-parameter is not optimal and that with a better hyper-parameter the MD loss function tentatively attains a higher test accuracy than CCS.
Comparing Optimization Targets for Contrast-Consistent Search
[ "Hugo Fry", "Seamus Fallows", "Jamie Wright", "Ian Fan", "Nandi Schoots" ]
Workshop/SoLaR
2311.00488
[ "https://github.com/ash-ai-safety-hub/g3-nandi" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rOiymxm8tQ
@inproceedings{ zhu2023autodan, title={Auto{DAN}: Automatic and Interpretable Adversarial Attacks on Large Language Models}, author={Sicheng Zhu and Ruiyi Zhang and Bang An and Gang Wu and Joe Barrow and Zichao Wang and Furong Huang and Ani Nenkova and Tong Sun}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=rOiymxm8tQ} }
Large Language Models (LLMs) exhibit broad utility in diverse applications but remain vulnerable to jailbreak attacks, including hand-crafted and automated adversarial attacks, which can compromise their safety measures. However, recent work suggests that patching LLMs against these attacks is possible: manual jailbreak attacks are human-readable but often limited and public, making them easy to block, while automated adversarial attacks generate gibberish prompts that can be detected using perplexity-based filters. In this paper, we propose an interpretable adversarial attack, \texttt{AutoDAN}, that combines the strengths of both types of attacks. It automatically generates attack prompts that bypass perplexity-based filters while maintaining a high attack success rate like manual jailbreak attacks. These prompts are interpretable, exhibiting strategies commonly used in manual jailbreak attacks. Moreover, these interpretable prompts transfer better than their non-readable counterparts, especially when using limited data or a single proxy model. Beyond eliciting harmful content, we also customize the objective of \texttt{AutoDAN} to leak system prompts, demonstrating its versatility. Our work underscores the seemingly intrinsic vulnerability of LLMs to interpretable adversarial attacks.
AutoDAN: Automatic and Interpretable Adversarial Attacks on Large Language Models
[ "Sicheng Zhu", "Ruiyi Zhang", "Bang An", "Gang Wu", "Joe Barrow", "Zichao Wang", "Furong Huang", "Ani Nenkova", "Tong Sun" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=pn83r8V2sv
@inproceedings{ yong2023lowresource, title={Low-Resource Languages Jailbreak {GPT}-4}, author={Zheng Xin Yong and Cristina Menghini and Stephen Bach}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=pn83r8V2sv} }
AI safety training and red-teaming of large language models (LLMs) are measures to mitigate the generation of unsafe content. Our work exposes the inherent cross-lingual vulnerability of these safety mechanisms, resulting from the linguistic inequality of safety training data, by successfully circumventing GPT-4's safeguard through translating unsafe English inputs into low-resource languages. On the AdvBench benchmark, GPT-4 engages with the unsafe translated inputs and provides actionable items that can get the users towards their harmful goals 79% of the time, which is on par with or even surpassing state-of-the-art jailbreaking attacks. Other high-/mid-resource languages have significantly lower attack success rates, which suggests that the cross-lingual vulnerability mainly applies to low-resource languages. Previously, limited training on low-resource languages primarily affected speakers of those languages, causing technological disparities. However, our work highlights a crucial shift: this deficiency now poses a risk to all LLMs users. Publicly available translation APIs enable anyone to exploit LLMs' safety vulnerabilities. Therefore, our work calls for more holistic red-teaming efforts to develop robust multilingual safeguards with wide language coverage.
Low-Resource Languages Jailbreak GPT-4
[ "Zheng Xin Yong", "Cristina Menghini", "Stephen Bach" ]
Workshop/SoLaR
2310.02446
[ "" ]
https://huggingface.co/papers/2310.02446
1
1
0
3
[]
[]
[ "TrustSafeAI/GradientCuff-Jailbreak-Defense" ]
[]
[]
[ "TrustSafeAI/GradientCuff-Jailbreak-Defense" ]
1
oral
null
https://openreview.net/forum?id=pJmgsDJXe0
@inproceedings{ ezell2023postdeployment, title={Post-Deployment Regulatory Oversight for General-Purpose Large Language Models}, author={Carson Ezell and Abraham Loeb}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=pJmgsDJXe0} }
The development and deployment of increasingly capable, general-purpose large language models (LLMs) has led to a wide array of risks and harms from automation that are correlated across sectors and use cases. Effective regulation and oversight of general-purpose AI (GPAI) requires the ability to monitor, investigate, and respond to risks and harms that appear across use cases, as well as hold upstream developers accountable for downstream harms that result from their decisions and practices. We argue that existing processes for sector-specific AI oversight in the U.S. should be complemented by post-deployment oversight to address risks and harms specifically from GPAI usage. We examine oversight processes implemented by other federal agencies as precedents for the GPAI oversight activities that a regulatory agency can conduct. The post-deployment oversight function of a regulatory agency can complement other GPAI-related regulatory functions that federal regulatory agencies may perform which are discussed elsewhere in the literature, including pre-deployment licensing or model evaluations for LLMs.
Post-Deployment Regulatory Oversight for General-Purpose Large Language Models
[ "Carson Ezell", "Abraham Loeb" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oss9uaPFfB
@inproceedings{ liu2023trustworthy, title={Trustworthy {LLM}s: a Survey and Guideline for Evaluating Large Language Models' Alignment}, author={Yang Liu and Yuanshun Yao and Jean-Francois Ton and Xiaoying Zhang and Ruocheng Guo and Hao Cheng and Yegor Klochkov and Muhammad Faaiz Taufiq and Hang Li}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=oss9uaPFfB} }
Ensuring alignment has become a critical task before deploying large language models (LLMs) in real-world applications. A major challenge faced by practitioners is the lack of clear guidance on evaluating whether LLM outputs align with social norms, values, and regulations. This obstacle hinders the systematic iteration and deployment of LLMs. To address this issue, this paper presents a comprehensive survey of key dimensions that are crucial to consider when assessing LLM trustworthiness. The survey covers 7 major categories of LLM trustworthiness: reliability, safety, fairness, resistance to misuse, explainability and reasoning, adherence to social norms, and robustness. Each major category is further divided into several sub-categories, resulting in a total of 29 sub-categories. Additionally, a subset of 8 sub-categories is selected for further investigation, where corresponding measurement studies are designed and conducted on several widely-used LLMs. The measurement results indicate that, in general, more aligned models tend to perform better in terms of overall trustworthiness. However, the effectiveness of alignment varies across the different trustworthiness categories considered. This highlights the importance of conducting more fine-grained analyses, testing, and making continuous improvements on LLM alignment. By shedding light on these key dimensions of LLM trustworthiness, this paper aims to provide valuable insights and guidance to practitioners in the field. Understanding and addressing these concerns will be crucial in achieving reliable and ethically sound deployment of LLMs in various applications.
Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment
[ "Yang Liu", "Yuanshun Yao", "Jean-Francois Ton", "Xiaoying Zhang", "Ruocheng Guo", "Hao Cheng", "Yegor Klochkov", "Muhammad Faaiz Taufiq", "Hang Li" ]
Workshop/SoLaR
2308.05374
[ "https://github.com/kevinyaobytedance/llm_eval" ]
https://huggingface.co/papers/2308.05374
2
27
2
8
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=mVhOKo62Q2
@inproceedings{ wang2023are, title={Are Large Language Models Really Robust to Word-Level Perturbations?}, author={Haoyu Wang and Guozheng Ma and Cong Yu and Ning Gui and Linrui Zhang and Zhiqi Huang and Suwei Ma and Yongzhe Chang and Sen Zhang and Li Shen and Xueqian Wang and Peilin Zhao and Dacheng Tao}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=mVhOKo62Q2} }
The swift advancement in the scales and capabilities of Large Language Models (LLMs) positions them as promising tools for a variety of downstream tasks. In addition to the pursuit of better performance and the avoidance of violent feedback on certain prompts, ensuring the responsibility of LLMs has drawn much attention to their robustness. However, existing evaluation methods mostly rely on traditional question answering datasets with predefined supervised labels, which do not align with the superior generation capabilities of contemporary LLMs. To address this issue, we propose a novel rational evaluation approach that leverages pre-trained reward models as diagnostic tools to evaluate the longer conversations generated from more challenging open questions by LLMs, which we refer to as the $R$eward Model for $R$easonable $R$obustness $Eval$uation ($TREvaL$). Longer conversations manifest the comprehensive grasp of language models in terms of their proficiency in understanding questions, a capability not entirely encompassed by individual words or letters, which may exhibit oversimplification and inherent biases. Our extensive empirical experiments demonstrate that TREvaL provides an innovative method for evaluating the robustness of LLMs. Furthermore, our results demonstrate that LLMs frequently exhibit vulnerability to word-level perturbations that are commonplace in daily language usage. Notably, we are surprised to discover that robustness tends to decrease as fine-tuning (SFT and RLHF) is conducted.
Are Large Language Models Really Robust to Word-Level Perturbations?
[ "Haoyu Wang", "Guozheng Ma", "Cong Yu", "Ning Gui", "Linrui Zhang", "Zhiqi Huang", "Suwei Ma", "Yongzhe Chang", "Sen Zhang", "Li Shen", "Xueqian Wang", "Peilin Zhao", "Dacheng Tao" ]
Workshop/SoLaR
2309.11166
[ "https://github.com/harry-mic/treval" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=m6xyTie61H
@inproceedings{ pfau2023eliciting, title={Eliciting Language Model Behaviors using Reverse Language Models}, author={Jacob Pfau and Alex Infanger and Abhay Sheshadri and Ayush Panda and Julian Michael and Curtis Huebner}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=m6xyTie61H} }
Despite advances in fine-tuning methods, language models (LMs) continue to output toxic and harmful responses on worst-case inputs, including adversarial attacks and jailbreaks. We train an LM on tokens in reverse order---a \textit{reverse LM}---as a tool for identifying such worst-case inputs. By prompting a reverse LM with a problematic string, we can sample prefixes that are likely to precede the problematic suffix. We test our reverse LM by using it to guide beam search for prefixes that have high probability of generating toxic statements when input to a forwards LM. Our 160m parameter reverse LM outperforms the existing state-of-the-art adversarial attack method, GCG, when measuring the probability of toxic continuations from the Pythia-160m LM. We also find that the prefixes generated by our reverse LM for the Pythia model are more likely to transfer to other models, eliciting toxic responses also from Llama 2 when compared to GCG-generated attacks.
Eliciting Language Model Behaviors using Reverse Language Models
[ "Jacob Pfau", "Alex Infanger", "Abhay Sheshadri", "Ayush Panda", "Julian Michael", "Curtis Huebner" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=jo57H1CpD8
@inproceedings{ mudgal2023controlled, title={Controlled Decoding from Language Models}, author={Sidharth Mudgal and Jong Lee and Harish Ganapathy and YaGuang Li and Tao Wang and Yanping Huang and Zhifeng Chen and Heng-Tze Cheng and Michael Collins and Jilin Chen and Alex Beutel and Ahmad Beirami}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=jo57H1CpD8} }
We propose controlled decoding (CD), a novel off-policy reinforcement learning method to control the autoregressive generation from language models towards high reward outcomes. CD solves an off-policy reinforcement learning problem through a value function for the reward, which we call a prefix scorer. The prefix scorer is used at inference time to steer the generation towards higher reward outcomes. We show that the prefix scorer may be trained on (possibly) off-policy data to predict the expected reward when decoding is continued from a partially decoded response. We empirically demonstrate that CD is effective as a control mechanism on the Reddit conversations corpus. We also show that the modularity of the design of CD makes it possible to control for multiple rewards, effectively solving a multi-objective reinforcement learning problem with no additional complexity. Finally, we show that CD can be applied in a novel blockwise fashion at inference time, again without the need for any training-time changes, essentially bridging the gap between the popular sequence-level best-of-k strategy and token-level reinforcement learning. This makes CD a promising approach for alignment of language models.
Controlled Decoding from Language Models
[ "Sidharth Mudgal", "Jong Lee", "Harish Ganapathy", "YaGuang Li", "Tao Wang", "Yanping Huang", "Zhifeng Chen", "Heng-Tze Cheng", "Michael Collins", "Jilin Chen", "Alex Beutel", "Ahmad Beirami" ]
Workshop/SoLaR
2310.17022
[ "" ]
https://huggingface.co/papers/2310.17022
3
14
2
13
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=izC60DLg29
@inproceedings{ lee2023the, title={The Effect of Group Status on the Variability of Group Representations in {LLM}-generated Text}, author={Messi Lee and Jacob Montgomery and Calvin Lai}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=izC60DLg29} }
Large Language Models (LLMs) have become pervasive in everyday life, yet their inner workings remain opaque. While scholarly efforts have demonstrated LLMs’ propensity to reproduce biases in their training data, they have primarily focused on the association of social groups with stereotypic attributes. In this paper, we extend this line of inquiry to investigate a bias akin to the social-psychological phenomenon where socially dominant groups are perceived to be less homogeneous than socially subordinate groups as it is reproduced by LLMs. We had ChatGPT, a state-of-the-art LLM, generate a diversity of texts about intersectional group identities and compared text homogeneity. We consistently find that LLMs portray African, Asian, and Hispanic Americans as more homogeneous than White Americans. They also portray women as more homogeneous than men, but these differences are small. Finally, we find that the effect of gender differs across racial/ethnic groups such that the effect of gender is consistent within African and Hispanic Americans but not within Asian and White Americans. We speculate possible sources of this bias in LLMs and posit that the bias has the potential to amplify biases in future LLM training and to reinforce stereotypes.
The Effect of Group Status on the Variability of Group Representations in LLM-generated Text
[ "Messi Lee", "Jacob Montgomery", "Calvin Lai" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=hCZGKGZ802
@inproceedings{ yang2023learning, title={Learning Inner Monologue and Its Utilization in Vision-Language Challenges}, author={Diji Yang and Kezhen Chen and Jinmeng Rao and Xiaoyuan Guo and Yawen Zhang and Jie Yang and Yi Zhang}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=hCZGKGZ802} }
Inner monologue is an essential phenomenon for reasoning and insight mining in human cognition. In this work, we propose a novel approach for AI systems to simulate inner monologue. Specifically, we consider the communications between components in an LLM-centric system as inner monologues, and demonstrate inner monologue reasoning ability can be learned by supervised learning and reinforcement learning, and then be utilized to solve different complex vision-language problems in different domains. Driven by the power of Large Language Models (LLMs), two prominent methods for vision-language tasks have emerged: (1) the hybrid integration between LLMs and Vision-Language Models (VLMs), where visual inputs are firstly converted into language descriptions by VLMs, serving as inputs for LLMs to generate final answer(s); (2) visual feature alignment in language space, where visual inputs are encoded as embeddings and projected to LLMs' language space via further supervised fine-tuning. The first approach provides light training costs and interpretability but is hard to be optimized in an end-to-end fashion. The second approach presents decent performance, but feature alignment usually requires large amounts of training data and lacks interpretability. With inner monologue simulation, our approach achieves competitive performance with less training data and promising interpretability when compared with state-of-the-art models on two popular tasks.
Learning Inner Monologue and Its Utilization in Vision-Language Challenges
[ "Diji Yang", "Kezhen Chen", "Jinmeng Rao", "Xiaoyuan Guo", "Yawen Zhang", "Jie Yang", "Yi Zhang" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ee0kxTFS9a
@inproceedings{ cruz2023reinforcement, title={Reinforcement Learning Fine-tuning of Language Models is Biased Towards More Extractable Features}, author={Diogo Cruz and Edoardo Pona and Alex Holness-Tofts and Elias Schmied and V{\'\i}ctor Abia Alonso and Charlie Griffin and Bogdan-Ionut Cirstea}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=ee0kxTFS9a} }
Many capable large language models (LLMs) are developed via self-supervised pre-training followed by a reinforcement-learning fine-tuning phase, often based on human or AI feedback. During this stage, models may be guided by their inductive biases to rely on simpler features which may be easier to extract, at a cost to robustness and generalisation. We investigate whether principles governing inductive biases in the supervised fine-tuning of LLMs also apply when the fine-tuning process uses reinforcement learning. Following Lovering et al. (2021), we test two hypotheses: that features more $\textit{extractable}$ after pre-training are more likely to be utilised by the final policy, and that the evidence for/against a feature predicts whether it will be utilised. Through controlled experiments on synthetic and natural language tasks, we find statistically significant correlations which constitute strong evidence for these hypotheses.
Reinforcement Learning Fine-tuning of Language Models is Biased Towards More Extractable Features
[ "Diogo Cruz", "Edoardo Pona", "Alex Holness-Tofts", "Elias Schmied", "Víctor Abia Alonso", "Charlie Griffin", "Bogdan-Ionut Cirstea" ]
Workshop/SoLaR
2311.04046
[ "https://github.com/edoardopona/predicting-inductive-biases-rl" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=bak7hB0Zv9
@inproceedings{ kulveit2023predictive, title={Predictive Minds: {LLM}s As Atypical Active Inference Agents}, author={Jan Kulveit}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=bak7hB0Zv9} }
Large language models (LLMs) like GPT are often conceptualized as passive predictors, simulators, or even 'stochastic parrots'. We instead conceptualize LLMs by drawing on the theory of active inference originating in cognitive science and neuroscience. We examine similarities and differences between traditional active inference systems and LLMs, leading to the conclusion that, currently, LLMs lack a tight feedback loop between acting in the world and perceiving the impacts of their actions, but otherwise fit in the active inference paradigm. We list reasons why this loop may soon be closed, and possible consequences of this including enhanced model self-awareness and the drive to minimize prediction error by changing the world.
Predictive Minds: LLMs As Atypical Active Inference Agents
[ "Jan Kulveit" ]
Workshop/SoLaR
2311.10215
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ZDeEYmKYrR
@inproceedings{ dong2023probing, title={Probing Explicit and Implicit Gender Bias through {LLM} Conditional Text Generation}, author={Xiangjue Dong and Yibo Wang and Philip Yu and James Caverlee}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=ZDeEYmKYrR} }
Large Language Models (LLMs) can generate biased and toxic responses. Yet most prior work on LLM gender bias evaluation requires predefined gender-related phrases or gender stereotypes, which are challenging to be comprehensively collected and are limited to explicit bias evaluation. In addition, we believe that instances devoid of gender-related language or explicit stereotypes can still induce gender bias in LLMs. Thus, in this work, we propose a conditional text generation mechanism without the need for predefined gender phrases and stereotypes. This approach employs three types of inputs generated through three distinct strategies to probe LLMs, aiming to show evidence of explicit and implicit gender biases in LLMs. We also utilize explicit and implicit evaluation metrics to evaluate gender bias in LLMs under different strategies. Our experiments demonstrate that an increased model size does not consistently lead to enhanced fairness and all tested LLMs exhibit explicit and/or implicit gender bias, even when explicit gender stereotypes are absent in the inputs.
Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation
[ "Xiangjue Dong", "Yibo Wang", "Philip Yu", "James Caverlee" ]
Workshop/SoLaR
2311.00306
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Y6M3uaQwLz
@inproceedings{ wang2023a, title={A Simple Test of Expected Utility Theory with {GPT}}, author={Mengxin Wang}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=Y6M3uaQwLz} }
This paper tests GPT (specifically, GPT-3.5 with the model variant text-davinci-003) with one of the most classic behavioral choice experiments -- the Allais paradox, to understand the mechanism behind GPT's choices. The Allais paradox is well-known for exposing the irrationality of human choices. Our result shows that, like humans, GPT also falls into the trap of the Allais paradox by violating the independence axiom of the expected utility theory, indicating that its choices are irrational. However, GPT violates the independence axiom in the opposite direction compared to human subjects. Specifically, human subjects tend to be more risk-seeking in the event of an opportunity gain, while GPT displays more risk aversion. This observation implies that GPT's choices structurally differ from those of humans under this context, which might serve as a caveat for developers using LLM to generate human-like data or assist human decision-making.
A Simple Test of Expected Utility Theory with GPT
[ "Mengxin Wang" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=Wu975eZkO3
@inproceedings{ wu2023towards, title={Towards Auditing Large Language Models: Improving Text-based Stereotype Detection}, author={Zekun Wu and Sahan Bulathwela and Adriano Koshiyama}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=Wu975eZkO3} }
Large Language Models (LLMs) have made significant advances in the recent past, becoming more mainstream in Artificial Intelligence (AI)-enabled human-facing applications. However, LLMs often generate stereotypical output drawn from their training data, amplifying societal biases and raising ethical concerns. This work introduces i) the Multi-Grain Stereotype Dataset, which includes 52,751 instances of gender, race, profession and religion stereotypic text, and ii) a novel stereotype classifier for English text. We design several experiments to rigorously test the proposed model trained on the novel dataset. Our experiments show that training the model in a multi-class setting can outperform the one-vs-all binary counterpart. Consistent feature importance signals from different eXplainable AI tools demonstrate that the new model exploits relevant text features. We utilise the newly created model to assess the stereotypic behaviour of the popular GPT family of models and observe a reduction of bias over time. In summary, our work establishes a robust and practical framework for auditing and evaluating stereotypic bias in LLMs.
Towards Auditing Large Language Models: Improving Text-based Stereotype Detection
[ "Zekun Wu", "Sahan Bulathwela", "Adriano Koshiyama" ]
Workshop/SoLaR
2311.14126
[ "" ]
https://huggingface.co/papers/2311.14126
0
0
0
3
[]
[ "PriyaPatel/Bias_identification" ]
[]
[]
[ "PriyaPatel/Bias_identification" ]
[]
1
poster
null
https://openreview.net/forum?id=WnR5BCX8GS
@inproceedings{ mukobi2023welfare, title={Welfare Diplomacy: Benchmarking Language Model Cooperation}, author={Gabriel Mukobi and Hannah Erlebach and Niklas Lauffer and Lewis Hammond and Alan Chan and Jesse Clifton}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=WnR5BCX8GS} }
The growing capabilities and increasingly widespread deployment of AI systems necessitate robust benchmarks for measuring their cooperative capabilities. Unfortunately, most multi-agent benchmarks are either zero-sum or purely cooperative, providing limited opportunities for such measurements. We introduce a general-sum variant of the zero-sum board game Diplomacy—called Welfare Diplomacy—in which players must balance investing in military conquest and domestic welfare. We argue that Welfare Diplomacy facilitates both a clearer assessment of and stronger training incentives for cooperative capabilities. Our contributions are: (1) proposing the Welfare Diplomacy rules and implementing them via an open-source Diplomacy engine; (2) constructing baseline agents using zero-shot prompted language models; and (3) conducting experiments where we find that baselines using state-of-the-art models attain high social welfare but are exploitable. Our work aims to promote societal safety by aiding researchers in developing and assessing multi-agent AI systems. Code to evaluate Welfare Diplomacy and reproduce our experiments is available at https://anonymous.4open.science/r/welfare-diplomacy-72AC.
Welfare Diplomacy: Benchmarking Language Model Cooperation
[ "Gabriel Mukobi", "Hannah Erlebach", "Niklas Lauffer", "Lewis Hammond", "Alan Chan", "Jesse Clifton" ]
Workshop/SoLaR
2310.08901
[ "https://github.com/mukobi/welfare-diplomacy" ]
https://huggingface.co/papers/2310.08901
1
0
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=WcGXAxhC81
@inproceedings{ cui2023a, title={A Divide-Conquer-Reasoning Approach to Consistency Evaluation and Improvement in Blackbox Large Language Models}, author={Wendi Cui and Jiaxin Zhang and Zhuohang Li and Damien Lopez and Kamalika Das and Bradley Malin and Sricharan Kumar}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=WcGXAxhC81} }
Evaluating the quality and variability of text generated by Large Language Models (LLMs) poses a significant, yet unresolved research challenge. Traditional evaluation methods, such as ROUGE and BERTScore, which measure token similarity, often fail to capture holistic semantic equivalence. This results in a low correlation with human judgments and intuition, which is especially problematic in high-stakes applications like healthcare and finance where reliability, safety, and robust decision-making are highly critical. This work proposes an automated framework for evaluating the consistency of LLM-generated texts using a divide-and-conquer strategy. Unlike existing LLM-based evaluators that operate at the paragraph level, our method employs a divide-and-conquer evaluator (DCE) that breaks down the comparison between two generated responses into individual sentences, each evaluated based on predefined criteria. To facilitate this approach, we introduce an automatic metric converter (AMC) that translates the output from DCE into an interpretable numeric score. Beyond the consistency evaluation, we further present a reason-assisted improver (RAI) that leverages the analytical reasons and explanations identified by DCE to generate new responses aimed at reducing these inconsistencies. Through comprehensive and systematic empirical analysis, we show that our approach outperforms state-of-the-art methods by a large margin (e.g., +19.3% and +24.3% on the SummEval dataset) in evaluating the consistency of LLM generation across multiple benchmarks in semantic, factual, and summarization consistency tasks. Our approach also reduces output inconsistencies by nearly 90%, showing promise for effective hallucination mitigation and reduction.
A Divide-Conquer-Reasoning Approach to Consistency Evaluation and Improvement in Blackbox Large Language Models
[ "Wendi Cui", "Jiaxin Zhang", "Zhuohang Li", "Damien Lopez", "Kamalika Das", "Bradley Malin", "Sricharan Kumar" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=VMOMKhAm7q
@inproceedings{ go2023compositional, title={Compositional preference models for alignment with scalable oversight}, author={Dongyoung Go and Tomasz Korbak and Germ{\`a}n Kruszewski and Jos Rozen and Marc Dymetman}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=VMOMKhAm7q} }
As language models (LMs) become more capable, it is increasingly important to align them with human preferences. However, the dominant paradigm for training Preference Models (PMs) for that purpose suffers from fundamental limitations, such as lack of transparency and scalability, along with susceptibility to overfitting the preference dataset. We propose Compositional Preference Models (CPMs), a novel PM framework that decomposes one global preference assessment into several interpretable features, obtains scalar scores for these features from a prompted LM, and aggregates these scores using a logistic regression classifier. CPMs allow one to control which properties of the preference data are used to train the preference model and to build it based on features that are believed to underlie the human preference judgement. Our experiments show that CPMs not only improve interpretability and are more robust to overoptimization than standard PMs, but also that best-of-$n$ samples obtained using CPMs tend to be preferred over samples obtained using conventional PMs. Overall, our approach demonstrates the benefits of endowing PMs with priors about which features determine human preferences while relying on LM capabilities to extract those features in a scalable and interpretable way.
Compositional preference models for alignment with scalable oversight
[ "Dongyoung Go", "Tomasz Korbak", "Germàn Kruszewski", "Jos Rozen", "Marc Dymetman" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=V1740FqidS
@inproceedings{ liu2023investigating, title={Investigating the Fairness of Large Language Models for Predictions on Tabular Data}, author={Yanchen Liu and Srishti Gautam and Jiaqi Ma and Himabindu Lakkaraju}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=V1740FqidS} }
Recent literature has suggested the potential of using large language models (LLMs) to make predictions for tabular tasks. However, LLMs have been shown to exhibit harmful social biases that reflect the stereotypes and inequalities present in society. Given this, as well as the widespread use of tabular data in many high-stakes applications, it is imperative to explore the following questions: what sources of information do LLMs draw upon when making predictions for tabular tasks; whether and to what extent are LLM predictions for tabular tasks influenced by social biases and stereotypes; and what are the consequential implications for fairness? Through a series of experiments, we delve into these questions and show that LLMs tend to inherit social biases from their training data, which significantly impact their fairness in tabular prediction tasks. Furthermore, our investigations show that in the context of bias mitigation, though in-context learning and fine-tuning have a moderate effect, the fairness metric gap between different subgroups is still larger than that in traditional machine learning models, such as Random Forest and shallow Neural Networks. This observation emphasizes that the social biases are inherent within the LLMs themselves and inherited from their pre-training corpus, not only from the downstream task datasets. In addition, we demonstrate that label-flipping of in-context examples can significantly reduce biases, further highlighting the presence of inherent bias within LLMs.
Investigating the Fairness of Large Language Models for Predictions on Tabular Data
[ "Yanchen Liu", "Srishti Gautam", "Jiaqi Ma", "Himabindu Lakkaraju" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=RDyvhOgFvQ
@inproceedings{ campbell2023localizing, title={Localizing Lying in Llama: Understanding Instructed Dishonesty on True-False Questions Through Prompting, Probing, and Patching}, author={James Campbell and Phillip Guo and Richard Ren}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=RDyvhOgFvQ} }
Large language models (LLMs) demonstrate significant knowledge through their outputs, though it is often unclear whether undesirable outputs are due to a lack of knowledge or dishonesty. In this paper, we conduct an extensive study of intentional dishonesty in Llama-2-70b-chat by engineering prompts that instruct it to lie and then use mechanistic interpretability approaches to localize where in the network this lying behavior occurs. We consistently find five layers in the model that are highly important for lying using three independent methodologies (probing, patching, and concept erasure). We then successfully perform causal interventions on only 46 attention heads (or less than 1% of all heads in the network), causing the lying model to act honestly. These interventions work robustly across four prompts and six dataset splits. We hope our work can help understand and thus prevent lying behavior in LLMs.
Localizing Lying in Llama: Understanding Instructed Dishonesty on True-False Questions Through Prompting, Probing, and Patching
[ "James Campbell", "Phillip Guo", "Richard Ren" ]
Workshop/SoLaR
2311.15131
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=R8XHBFJOl5
@inproceedings{ kandpal2023user, title={User Inference Attacks on {LLM}s}, author={Nikhil Kandpal and Krishna Pillutla and Alina Oprea and Peter Kairouz and Christopher Choquette-Choo and Zheng Xu}, booktitle={Socially Responsible Language Modelling Research}, year={2023}, url={https://openreview.net/forum?id=R8XHBFJOl5} }
We study the privacy implications of fine-tuning large language models (LLMs) on user-stratified data. We define a realistic threat model, called user inference, wherein an attacker infers whether or not a user's data was used for fine-tuning. We implement attacks for this threat model that require only a small set of samples from a user (possibly different from the samples used for training) and black-box access to the fine-tuned LLM. We find that LLMs are susceptible to user inference attacks across a variety of fine-tuning datasets, with outlier users (i.e., those with data distributions sufficiently different from other users) and users who contribute large quantities of data being most susceptible. Finally, we find that mitigation interventions in the training algorithm, such as batch or per-example gradient clipping and early stopping, fail to prevent user inference, while limiting the number of fine-tuning samples from a single user can reduce attack effectiveness (albeit at the cost of reducing the total amount of fine-tuning data).
User Inference Attacks on LLMs
[ "Nikhil Kandpal", "Krishna Pillutla", "Alina Oprea", "Peter Kairouz", "Christopher Choquette-Choo", "Zheng Xu" ]
Workshop/SoLaR
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster