| Column | Type | Stats |
| --- | --- | --- |
| bibtex_url | null | |
| proceedings | string | lengths 42–42 |
| bibtext | string | lengths 197–848 |
| abstract | string | lengths 303–3.45k |
| title | string | lengths 10–159 |
| authors | sequence | lengths 1–34, contains nulls (⌀) |
| id | string | 44 classes |
| arxiv_id | string | lengths 0–10 |
| GitHub | sequence | lengths 1–1 |
| paper_page | string | 899 classes |
| n_linked_authors | int64 | -1 to 13 |
| upvotes | int64 | -1 to 109 |
| num_comments | int64 | -1 to 13 |
| n_authors | int64 | -1 to 92 |
| Models | sequence | lengths 0–100 |
| Datasets | sequence | lengths 0–19 |
| Spaces | sequence | lengths 0–100 |
| old_Models | sequence | lengths 0–100 |
| old_Datasets | sequence | lengths 0–19 |
| old_Spaces | sequence | lengths 0–100 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
| type | string | 2 classes |
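
The records that follow conform to this schema. As a minimal sketch of how such a table might be loaded and queried with the Hugging Face `datasets` library — the repository id below is a placeholder, and only the column names from the schema above are assumed:

```python
from datasets import load_dataset

# Hypothetical repository id -- substitute the actual dataset path,
# or point load_dataset at a local CSV/Parquet export of this table.
ds = load_dataset("someuser/neurips-2023-m3l-papers", split="train")

# Keep rows whose Hugging Face paper page existed before the conference
# and that list at least one non-empty GitHub URL.
linked = ds.filter(
    lambda row: row["paper_page_exists_pre_conf"] == 1
    and any(url.strip() for url in row["GitHub"])
)

for row in linked:
    print(row["title"], row["arxiv_id"], row["upvotes"])
```

Note that, in the rows below, records without a linked paper page carry -1 sentinels in the count columns (n_linked_authors, upvotes, num_comments, n_authors), so filtering on `paper_page_exists_pre_conf` also skips those sentinel values.
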
bibtex_url | proceedings | bibtext | abstract | title | authors | id | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=PYZ2lNVxgz | @inproceedings{
keles2023on,
title={On the Computational Complexity of Inverting Generative Models},
author={Feyza Duman Keles and Chinmay Hegde},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=PYZ2lNVxgz}
} | The objective of generative model inversion is to identify a size-$n$ latent vector that produces a generative model output that closely matches a given target. This operation is a core computational primitive in numerous modern applications involving computer vision and NLP. However, the problem is known to be computationally challenging and NP-hard in the worst case. This paper aims to provide a fine-grained view of the landscape of computational hardness for this problem. We establish several new hardness lower bounds for both exact and approximate model inversion. In exact inversion, the goal is to determine whether a target is contained within the range of a given generative model. Under the strong exponential time hypothesis (SETH), we demonstrate that the computational complexity of exact inversion is lower bounded by $\Omega(2^n)$ via a reduction from $k$-SAT; this is a strengthening of known results. For the more practically relevant problem of approximate inversion, the goal is to determine whether a point in the model range is close to a given target with respect to the $\ell_p$-norm. When $p$ is a positive odd integer, under SETH, we provide an $\Omega(2^n)$ complexity lower bound via a reduction from the closest vectors problem (CVP). Finally, when $p$ is even, under the exponential time hypothesis (ETH), we provide a lower bound of $2^{\Omega (n)}$ via a reduction from Half-Clique and Vertex-Cover. | On the Computational Complexity of Inverting Generative Models | [
"Feyza Duman Keles",
"Chinmay Hegde"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=M5SPJzYsWF | @inproceedings{
xu2023flowbased,
title={Flow-based Distributionally Robust Optimization},
author={Chen Xu and Jonghyeok Lee and Xiuyuan Cheng and Yao Xie},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=M5SPJzYsWF}
} | Flow-based models establish a continuous-time invertible transport map between a data distribution and a pre-specified target distribution, such as the standard Gaussian in normalizing flow. In this work, we move beyond the constraint of known target distributions. We specifically aim to find the worst-case distribution in distributionally robust optimization (DRO), which is an infinite-dimensional problem that becomes particularly challenging in high-dimensional settings. To this end, we introduce a computational tool called FlowDRO. Specifically, we reformulate the difficult task of identifying the worst-case distribution within a Wasserstein-2 uncertainty set into a more manageable form, i.e., training the parameters of a corresponding flow-based neural network. Notably, the proposed FlowDRO is applicable to general risk functions and data distributions in DRO. We demonstrate the effectiveness of the proposed approach in various high-dimensional problems that can be viewed as DRO, including adversarial attacks and differential privacy. | Flow-based Distributionally Robust Optimization | [
"Chen Xu",
"Jonghyeok Lee",
"Xiuyuan Cheng",
"Yao Xie"
] | Workshop/M3L | 2310.19253 | [
"https://github.com/hamrel-cxu/flowdro"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=LyF1LmzXtU | @inproceedings{
lin2023transformers,
title={Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining},
author={Licong Lin and Yu Bai and Song Mei},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=LyF1LmzXtU}
} | Large transformer models pretrained on offline reinforcement learning datasets have demonstrated remarkable in-context reinforcement learning (ICRL) capabilities, where they can make good decisions when prompted with interaction trajectories from unseen environments. However, when and how transformers can be trained to perform ICRL have not been theoretically well-understood. In particular, it is unclear which reinforcement-learning algorithms transformers can perform in context, and how distribution mismatch in offline training data affects the learned algorithms.
This paper provides a theoretical framework that analyzes supervised pretraining for ICRL. This includes two recently proposed training methods --- algorithm distillation and decision-pretrained transformers. First, assuming model realizability, we prove the supervised-pretrained transformer will imitate the conditional expectation of the expert algorithm given the observed trajectory. The generalization error will scale with model capacity and a distribution divergence factor between the expert and offline algorithms. Second, we show transformers with ReLU attention can efficiently approximate near-optimal online reinforcement learning algorithms like LinUCB and Thompson sampling for stochastic linear bandits, and UCB-VI for tabular Markov decision processes. This provides the first quantitative analysis of the ICRL capabilities of transformers pretrained from offline trajectories. | Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining | [
"Licong Lin",
"Yu Bai",
"Song Mei"
] | Workshop/M3L | 2310.08566 | [
"https://github.com/licong-lin/in-context-rl"
] | https://huggingface.co/papers/2310.08566 | 2 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=LaFMLwI3rM | @inproceedings{
guo2023how,
title={How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations},
author={Tianyu Guo and Wei Hu and Song Mei and Huan Wang and Caiming Xiong and Silvio Savarese and Yu Bai},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=LaFMLwI3rM}
} | While large language models based on the transformer architecture have demonstrated remarkable in-context learning (ICL) capabilities, understandings of such capabilities are still in an early stage, where existing theory and mechanistic understanding focus mostly on simple scenarios such as learning simple function classes. This paper takes initial steps on understanding ICL in more complex scenarios, by studying learning with \emph{representations}. Concretely, we construct synthetic in-context learning problems with a compositional structure, where the label depends on the input through a possibly complex but \emph{fixed} representation function, composed with a linear function that \emph{differs} in each instance. By construction, the optimal ICL algorithm first transforms the inputs by the representation function, and then performs linear ICL on top of the transformed dataset. We show theoretically the existence of transformers that approximately implement such algorithms with mild depth and size. Empirically, we find trained transformers consistently achieve near-optimal ICL performance in this setting, and exhibit the desired dissection where lower layers transforms the dataset and upper layers perform linear ICL. Through extensive probing and a new pasting experiment, we further reveal several mechanisms within the trained transformers, such as concrete copying behaviors on both the inputs and the representations, linear ICL capability of the upper layers alone, and a post-ICL representation selection mechanism in a harder mixture setting. These observed mechanisms align well with our theory and may shed light on how transformers perform ICL in more realistic scenarios. | How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations | [
"Tianyu Guo",
"Wei Hu",
"Song Mei",
"Huan Wang",
"Caiming Xiong",
"Silvio Savarese",
"Yu Bai"
] | Workshop/M3L | 2310.10616 | [
""
] | https://huggingface.co/papers/2310.10616 | 0 | 1 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=KzR07JhgtW | @inproceedings{
laidlaw2023a,
title={A Theoretical Explanation of Deep {RL} Performance in Stochastic Environments},
author={Cassidy Laidlaw and Banghua Zhu and Stuart Russell and Anca Dragan},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=KzR07JhgtW}
} | Reinforcement learning (RL) theory has largely focused on proving minimax sample complexity bounds. These require _strategic_ exploration algorithms that use relatively limited function classes for representing the policy or value function. Our goal is to explain why deep RL algorithms often perform well in practice, despite using _random_ exploration and much more expressive function classes like neural networks. Our work arrives at an explanation by showing that many stochastic MDPs can be solved by performing only a few steps of value iteration on the random policy's Q function and then acting greedily. When this is true, we find that it is possible to separate the _exploration_ and _learning_ components of RL, making it much easier to analyze. We introduce a new RL algorithm, SQIRL, that iteratively learns a near-optimal policy by exploring randomly to collect rollouts and then performing a limited number of steps of fitted-Q iteration over those rollouts. We find that any regression algorithm that satisfies basic in-distribution generalization properties can be used in SQIRL to efficiently solve common MDPs. This can explain why deep RL works with complex function approximators like neural networks, since it is empirically established that neural networks generalize well in-distribution. Furthermore, SQIRL explains why random exploration works well in practice, since we show many environments can be solved by effectively estimating the random policy's Q-function and then applying zero or a few steps of value iteration. We leverage SQIRL to derive instance-dependent sample complexity bounds for RL that are exponential only in an "effective horizon" of lookahead—which is typically much smaller than the full horizon—and on the complexity of the class used for function approximation. Empirically, we also find that SQIRL performance strongly correlates with PPO and DQN performance in a variety of stochastic environments, supporting that our theoretical analysis is predictive of practical performance. | A Theoretical Explanation of Deep RL Performance in Stochastic Environments | [
"Cassidy Laidlaw",
"Banghua Zhu",
"Stuart Russell",
"Anca Dragan"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Kx4gLWx2ze | @inproceedings{
mei2023deep,
title={Deep Networks as Denoising Algorithms: Sample-Efficient Learning of Diffusion Models in High-Dimensional Graphical Models},
author={Song Mei and Yuchen Wu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=Kx4gLWx2ze}
} | We investigate the efficiency of deep neural networks for approximating scoring functions in diffusion-based generative modeling. While existing approximation theories leverage the smoothness of score functions, they suffer from the curse of dimensionality for intrinsically high-dimensional data. This limitation is pronounced in graphical models such as Markov random fields, where the approximation efficiency of score functions remains unestablished.
To address this, we note score functions can often be well-approximated in graphical models through variational inference denoising algorithms. Furthermore, these algorithms can be efficiently represented by neural networks. We demonstrate this through examples, including Ising models, conditional Ising models, restricted Boltzmann machines, and sparse encoding models. Combined with off-the-shelf discretization error bounds for diffusion-based sampling, we provide an efficient sample complexity bound for diffusion-based generative modeling when the score function is learned by deep neural networks. | Deep Networks as Denoising Algorithms: Sample-Efficient Learning of Diffusion Models in High-Dimensional Graphical Models | [
"Song Mei",
"Yuchen Wu"
] | Workshop/M3L | 2309.11420 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=KZ47dqKtGs | @inproceedings{
sonthalia2023underparameterized,
title={Under-Parameterized Double Descent for Ridge Regularized Least Squares Denoising of Data on a Line},
author={Rishi Sonthalia and Xinyue Li and Bochao Gu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=KZ47dqKtGs}
} | In this paper, we present a simple example that provably exhibits double descent in the under-parameterized regime. For simplicity, we look at the ridge regularized least squares denoising problem with data on a line embedded in high-dimensional space. By deriving an asymptotically accurate formula for the generalization error, we observe sample-wise and parameter-wise double descent with the peak in the under-parameterized regime rather than at the interpolation point or in the over-parameterized regime. Further, the peak of the sample-wise double descent curve corresponds to a peak in the curve for the norm of the estimator, and adjusting $\mu$, the strength of the ridge regularization, shifts the location of the peak. We observe that parameter-wise double descent occurs for this model for small $\mu$. For larger values of $\mu$, we observe that the curve for the norm of the estimator has a peak but that this no longer translates to a peak in the generalization error. | Under-Parameterized Double Descent for Ridge Regularized Least Squares Denoising of Data on a Line | [
"Rishi Sonthalia",
"Xinyue Li",
"Bochao Gu"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=IfyZSIxcoM | @inproceedings{
molahasani2023continual,
title={Continual Learning for Long-Tailed Recognition: Bridging the Gap in Theory and Practice},
author={Mahdiyar Molahasani and Ali Etemad and Michael Greenspan},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=IfyZSIxcoM}
} | The Long-Tailed Recognition (LTR) problem arises in imbalanced datasets. This paper bridges the theory-practice gap in this context, providing mathematical insights into the training dynamics of LTR scenarios by proposing a theorem stating that, under strong convexity, the learner's weights trained on the full dataset are bounded by those trained only on the Head. We extend this theorem for multiple subsets and introduce a novel perspective of using Continual Learning (CL) for LTR. We sequentially learn the Head and Tail by updating the learner's weights without forgetting the Head using CL methods. We prove that CL reduces loss compared to fine-tuning on the Tail. Our experiments on MNIST-LT and standard LTR benchmarks (CIFAR100-LT, CIFAR10-LT, and ImageNet-LT) validate our theory and demonstrate the effectiveness of CL solutions. We also show the efficacy of CL on real-world data, specifically the Caltech256 dataset, outperforming state-of-the-art classifiers. Our work unifies LTR and CL and paves the way for leveraging advances in CL to tackle the LTR challenge effectively. | Continual Learning for Long-Tailed Recognition: Bridging the Gap in Theory and Practice | [
"Mahdiyar Molahasani",
"Ali Etemad",
"Michael Greenspan"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Gw77i2J8g5 | @inproceedings{
bizeul2023simvae,
title={Sim{VAE}: Narrowing the gap between Discriminative \& Generative Representation Learning},
author={Alice Bizeul and Carl Allen},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=Gw77i2J8g5}
} | Self-supervised representation learning is a powerful paradigm that leverages the relationship between semantically similar data, such as augmentations, extracts of an image or sound clip, or multiple views/modalities. Recent methods, e.g. SimCLR, CLIP and DINO, have made significant strides, yielding representations that achieve state-of-the-art results on multiple downstream tasks. A number of self-supervised discriminative approaches have been proposed, e.g. instance discrimination, latent clustering and contrastive methods.
Though often intuitive, a comprehensive theoretical understanding of their underlying mechanisms or *what* they learn eludes.
Meanwhile, generative approaches, such as variational autoencoders (VAEs), fit a specific latent variable model and have principled appeal, but lag significantly in terms of performance. We present a theoretical analysis of self-supervised discriminative methods and a graphical model that reflects the assumptions they implicitly make and unifies these methods. We show that fitting this model under an ELBO objective improves representations over previous VAE methods on several common benchmarks, narrowing the gap to discriminative methods, and can also preserve information lost by discriminative approaches. This work brings new theoretical insight to modern machine learning practice. | SimVAE: Narrowing the gap between Discriminative & Generative Representation Learning | [
"Alice Bizeul",
"Carl Allen"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FyCkPgTlXO | @inproceedings{
kosson2023rotational,
title={Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks},
author={Atli Kosson and Bettina Messmer and Martin Jaggi},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=FyCkPgTlXO}
} | Weight decay can significantly impact the optimization dynamics of deep neural networks. In certain situations the effects of weight decay and gradient updates on the magnitude of a parameter vector cancel out on average, forming a state known as equilibrium. This causes the expected rotation of the vector in each update to remain constant along with its magnitude. Importantly, equilibrium can arise independently for the weight vectors of different layers and neurons. These equilibria are highly homogeneous for some optimizer and normalization configurations, effectively balancing the average rotation—a proxy for the effective learning rate—across network components. In this work we explore the equilibrium states of multiple optimizers including AdamW and SGD with momentum, providing insights into interactions between the learning rate, weight decay, initialization, normalization and learning rate schedule. We show how rotational equilibrium can be enforced throughout training, eliminating the chaotic transient phase corresponding to the transition towards equilibrium, thus simplifying the training dynamics. Finally, we show that rotational behavior may play a key role in the effectiveness of AdamW compared to Adam with L2-regularization, the performance of different normalization layers, and the need for learning rate warmup. | Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks | [
"Atli Kosson",
"Bettina Messmer",
"Martin Jaggi"
] | Workshop/M3L | 2305.17212 | [
"https://github.com/epfml/rotational-optimizers"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FM6MtmeRxZ | @inproceedings{
lu2023benign,
title={Benign Oscillation of Stochastic Gradient Descent with Large Learning Rate},
author={Miao Lu and Beining Wu and Xiaodong Yang and Difan Zou},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=FM6MtmeRxZ}
} | In this work, we theoretically investigate the generalization properties of neural networks (NN) trained by stochastic gradient descent (SGD) with \emph{large learning rates}. Under such a training regime, our finding is that the \emph{oscillation} of the NN weights caused by SGD with large learning rates turns out to be beneficial to the generalization of the NN, potentially improving over the same NN trained by SGD with small learning rates that converges more smoothly. In view of this finding, we call such a phenomenon ``\emph{benign oscillation}". | Benign Oscillation of Stochastic Gradient Descent with Large Learning Rate | [
"Miao Lu",
"Beining Wu",
"Xiaodong Yang",
"Difan Zou"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=DYiER5LgUU | @inproceedings{
diamond2023on,
title={On Compositionality and Emergence in Physical Systems Generativie Modeling},
author={Justin Diamond},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=DYiER5LgUU}
} | The principle of compositionality plays a pivotal role in both machine learning and physical sciences but remains under-explored, particularly in the context of synthetic data derived from physical energy potentials. This study aims to bridge this gap by examining the compositional nature of synthetic datasets generated using composite energy potentials. By combining established Lennard-Jones and Morse potentials into a composite potential, we generate synthetic datasets using Markov Chain Monte Carlo (MCMC) techniques. These datasets serve as training grounds for machine learning models, specifically Neural Ordinary Differential Equations (ODEs). Our primary focus is to investigate whether the properties of the composite datasets retain the characteristics of their individual components, effectively testing the principle of compositionality. The findings not only shed light on the compositional integrity of synthetic physical datasets but also lay the groundwork for more robust and interpretable machine learning models applied to complex physical systems by using the formalism of Category Theory. | On Compositionality and Emergence in Physical Systems Generativie Modeling | [
"Justin Diamond"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ClCrg213JS | @inproceedings{
sarnthein2023escaping,
title={Escaping Random Teacher Initialization Enhances Signal Propagation and Representation},
author={Felix Sarnthein and Sidak Pal Singh and Antonio Orvieto and Thomas Hofmann},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=ClCrg213JS}
} | Recent work shows that by mimicking a random teacher network, student networks learn to produce better feature representations, even if they are initialized at the teacher. In this paper, we characterize how students escape this global optimum and investigate how this process translates into concrete properties of the representations. To that end, we first describe a simplified setup and identify very large step sizes as the main driver of this phenomenon. Then, we investigate key signal propagation and representation separability properties during the escape. Our analysis reveals a two-stage process: the network first undergoes a form of representational collapse, then steers to a parameter region that not only allows for better propagation of input signals but also gives rise to well-conditioned representations. This might relate to the edge of stability and label-independent dynamics. | Escaping Random Teacher Initialization Enhances Signal Propagation and Representation | [
"Felix Sarnthein",
"Sidak Pal Singh",
"Antonio Orvieto",
"Thomas Hofmann"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CDmerQ37Zs | @inproceedings{
merrill2023the,
title={The Expressive Power of Transformers with Chain of Thought},
author={William Merrill and Ashish Sabharwal},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=CDmerQ37Zs}
} | Recent theoretical work has identified surprisingly simple reasoning problems, such as checking if two nodes in a graph are connected or simulating finite-state machines, that are provably unsolvable by standard transformers that answer immediately after reading their input. However, in practice, transformers' reasoning can be improved by allowing them to use a "chain of thought" or "scratchpad", i.e., generate and condition on a sequence of intermediate tokens before answering. Motivated by this, we ask: *Does such intermediate generation fundamentally extend the computational power of a decoder-only transformer?* We show that the answer is *yes*, but the amount of increase depends crucially on the amount of intermediate generation. For instance, we find that transformer decoders with a logarithmic number of decoding steps (w.r.t. the input length) push the limits of standard transformers only slightly, while a linear number of decoding steps adds a clear new ability (under standard complexity conjectures): recognizing all regular languages. Our results also imply that linear steps keep transformer decoders within context-sensitive languages, and polynomial steps make them recognize exactly the class of polynomial-time solvable problems---the first exact characterization of a type of transformers in terms of standard complexity classes. Together, our results provide a nuanced framework for understanding how the length of a transformer’s chain of thought or scratchpad impacts its reasoning power. | The Expressive Power of Transformers with Chain of Thought | [
"William Merrill",
"Ashish Sabharwal"
] | Workshop/M3L | 2310.07923 | [
""
] | https://huggingface.co/papers/2310.07923 | 0 | 0 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=BMQ4i2RVbE | @inproceedings{
li2023transformers,
title={Transformers as Multi-Task Feature Selectors: Generalization Analysis of In-Context Learning},
author={Hongkang Li and Meng Wang and Songtao Lu and Hui Wan and Xiaodong Cui and Pin-Yu Chen},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=BMQ4i2RVbE}
} | Transformer-based large language models have displayed impressive capabilities in the domain of in-context learning, wherein they use multiple input-output pairs to make predictions on unlabeled test data. To lay the theoretical groundwork for in-context learning, we delve into the optimization and generalization of a single-head, one-layer Transformer in the context of multi-task learning for classification. Our investigation uncovers that lower sample complexity is associated with increased training-relevant features and reduced noise in prompts, resulting in improved learning performance. The trained model exhibits the mechanism to first attend to demonstrations of training-relevant features and then decode the corresponding label embedding. Furthermore, we delineate the necessary conditions for successful out-of-domain generalization for in-context learning, specifically regarding the relationship between training and testing prompts. | Transformers as Multi-Task Feature Selectors: Generalization Analysis of In-Context Learning | [
"Hongkang Li",
"Meng Wang",
"Songtao Lu",
"Hui Wan",
"Xiaodong Cui",
"Pin-Yu Chen"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=AxoqhdHHH1 | @inproceedings{
qin2023fit,
title={Fit Like You Sample: Sample-Efficient Score Matching From Fast Mixing Diffusions},
author={Yilong Qin and Andrej Risteski},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=AxoqhdHHH1}
} | Score matching is an approach to learning probability distributions parametrized up to a constant of proportionality (e.g. Energy-Based Models). The idea is to fit the score of the distribution (i.e. $\nabla_x \log p(x)$), rather than the likelihood, thus avoiding the need to evaluate the constant of proportionality. While there's a clear algorithmic benefit, the statistical cost can be steep: recent work by (Koehler et al '22) showed that for distributions that have poor isoperimetric properties (a large Poincaré or log-Sobolev constant), score matching is substantially statistically less efficient than maximum likelihood. However, many natural realistic distributions, e.g. multimodal distributions as simple as a mixture of two Gaussians in one dimension---have a poor Poincaré constant.
In this paper, we show a close connection between the mixing time of a broad class of Markov processes with generator $\mathcal{L}$ and stationary distribution $p$, and an appropriately chosen generalized score matching loss that tries to fit $\frac{\mathcal{O} p}{p}$. In the special case of $\mathcal{O} = \nabla_x$, and $\mathcal{L}$ being the generator of Langevin diffusion, this generalizes and recovers the results from (Koehler et al '22). This allows us to adapt techniques to speed up Markov chains to construct better score-matching losses. In particular, "preconditioning" the diffusion can be translated to an appropriate "preconditioning" of the score loss. Lifting the chain by adding a temperature like in simulated tempering can be shown to result in a Gaussian-convolution annealed score matching loss, similar to (Song-Ermon '19). Moreover, we show that if the distribution being learned is a finite mixture of Gaussians in $d$ dimensions with a shared covariance, the sample complexity of annealed score matching is polynomial in the ambient dimension, the diameter of the means, and the smallest and largest eigenvalues of the covariance---obviating the Poincaré constant-based lower bounds of the basic score matching loss shown in (Koehler et al '22). | Fit Like You Sample: Sample-Efficient Score Matching From Fast Mixing Diffusions | [
"Yilong Qin",
"Andrej Risteski"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=9qxoXqxa0N | @inproceedings{
zhao2023towards,
title={Towards the Fundamental Limits of Knowledge Transfer over Finite Domains},
author={Qingyue Zhao and Banghua Zhu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=9qxoXqxa0N}
} | We characterize the statistical efficiency of knowledge transfer through $n$ samples from a teacher to a probabilistic student classifier with input space $\mathcal{S}$ over labels $\mathcal{A}$. We show that privileged information at three progressive levels accelerates the transfer. At the first level, only samples with hard labels are known, via which the maximum likelihood estimator attains the minimax rate $\sqrt{{|\mathcal{S}||\mathcal{A}|}/{n}}$. The second level has the teacher probabilities of sampled labels available in addition, which turns out to boost the convergence rate lower bound to ${{|\mathcal{S}||\mathcal{A}|}/{n}}$. However, under this second data acquisition protocol, minimizing a naive adaptation of the cross-entropy loss results in an asymptotically biased student. We overcome this limitation and achieve the fundamental limit by using a novel empirical variant of the squared error logit loss. The third level further equips the student with the soft labels (complete logits) on $\mathcal{A}$ given every sampled input, thereby provably enables the student to enjoy a rate ${|\mathcal{S}|}/{n}$ free of $|\mathcal{A}|$. We find any Kullback-Leibler divergence minimizer to be optimal in the last case. Numerical simulations distinguish the four learners and corroborate our theory. | Towards the Fundamental Limits of Knowledge Transfer over Finite Domains | [
"Qingyue Zhao",
"Banghua Zhu"
] | Workshop/M3L | 2310.07838 | [
""
] | https://huggingface.co/papers/2310.07838 | 1 | 0 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=9BYEBbhDRk | @inproceedings{
rosenfeld2023outliers,
title={Outliers with Opposing Signals Have an Outsized Effect on Neural Network Optimization},
author={Elan Rosenfeld and Andrej Risteski},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=9BYEBbhDRk}
} | We identify a new phenomenon in neural network optimization which arises from the interaction of depth and a particular heavy-tailed structure in natural data. Our result offers intuitive explanations for several previously reported observations about network training dynamics, including a conceptually new cause for progressive sharpening and the edge of stability. It also enables new predictions of training behavior which we confirm experimentally, plus a new lens through which to theoretically study and improve modern stochastic optimization on neural nets.
Experimentally, we demonstrate the significant influence of paired groups of outliers in the training data with strong *Opposing Signals*: consistent, large magnitude features which dominate the network output and occur in both groups with similar frequency. Due to these outliers, early optimization enters a narrow valley which carefully balances the opposing groups; subsequent sharpening causes their loss to rise rapidly, oscillating between high on one group and then the other, until the overall loss spikes. We complement these experiments with a theoretical analysis of a two-layer linear network on a simple model of opposing signals. | Outliers with Opposing Signals Have an Outsized Effect on Neural Network Optimization | [
"Elan Rosenfeld",
"Andrej Risteski"
] | Workshop/M3L | 2311.04163 | [
""
] | https://huggingface.co/papers/2311.04163 | 1 | 1 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=8s8w7nwCuk | @inproceedings{
singh2023moxcohow,
title={Mo{XC}o:How I learned to stop exploring and love my local minima?},
author={Esha Singh and Shoham Sabach and Yu-Xiang Wang},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=8s8w7nwCuk}
} | Deep Neural Networks (DNNs) are well-known for their generalization capabilities despite overparameterization. This is commonly attributed to the optimizer’s ability to find “good” solutions within high-dimensional loss landscapes. However, widely employed adaptive optimizers, such as ADAM, may suffer from subpar generalization. This paper presents an innovative methodology, $\textit{MoXCo}$, to address these concerns by designing adaptive optimizers that not only expedite exploration with faster convergence speeds but also ensure the avoidance of over-exploitation in specific parameter regimes, ultimately leading to convergence to good solutions. | MoXCo:How I learned to stop exploring and love my local minima? | [
"Esha Singh",
"Shoham Sabach",
"Yu-Xiang Wang"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=89fKHVtsMR | @inproceedings{
y{\"u}ksel2023firstorder,
title={First-order {ANIL} provably learns representations despite overparametrisation},
author={O{\u{g}}uz Y{\"u}ksel and Etienne Boursier and Nicolas Flammarion},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=89fKHVtsMR}
} | Meta-learning methods leverage data from previous tasks to learn a new task in a sample-efficient manner. In particular, model-agnostic methods look for initialisation points from which gradient descent quickly adapts to any new task.
Although it has been empirically suggested that such methods learn shared representations during pretraining, there is limited theoretical evidence of such behavior. In this direction, this work shows that, in the limit of infinite tasks, first-order ANIL with a linear two-layer network successfully learns linear shared representations. This result even holds under _overparametrisation_; having a width larger than the dimension of the shared representations results in an asymptotically low-rank solution.
"Oğuz Yüksel",
"Etienne Boursier",
"Nicolas Flammarion"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=7iibkkg0WI | @inproceedings{
gomes2023a,
title={A Data-Driven Measure of Relative Uncertainty for Misclassification Detection},
author={Eduardo Dadalto C{\^a}mara Gomes and Marco Romanelli and Georg Pichler and Pablo Piantanida},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=7iibkkg0WI}
} | Misclassification detection is an important problem in machine learning, as it allows for the identification of instances where the model's predictions are unreliable. However, conventional uncertainty measures such as Shannon entropy do not provide an effective way to infer the real uncertainty associated with the model's predictions. In this paper, we introduce a novel data-driven measure of relative uncertainty to an observer for misclassification detection. By learning patterns in the distribution of soft-predictions, our uncertainty measure can identify misclassified samples based on the predicted class probabilities. Interestingly, according to the proposed measure, soft-predictions that correspond to misclassified instances can carry a large amount of uncertainty, even though they may have low Shannon entropy. We demonstrate empirical improvements over multiple image classification tasks, outperforming state-of-the-art misclassification detection methods. | A Data-Driven Measure of Relative Uncertainty for Misclassification Detection | [
"Eduardo Dadalto Câmara Gomes",
"Marco Romanelli",
"Georg Pichler",
"Pablo Piantanida"
] | Workshop/M3L | 2306.01710 | [
"https://github.com/edadaltocg/relative-uncertainty"
] | https://huggingface.co/papers/2306.01710 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=7QRaAfbium | @inproceedings{
lotfi2023nonvacuous,
title={Non-Vacuous Generalization Bounds for Large Language Models},
author={Sanae Lotfi and Marc Finzi and Yilun Kuang and Tim Rudner and Micah Goldblum and Andrew Wilson},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=7QRaAfbium}
} | Modern language models can contain billions of parameters, raising the question of whether they can generalize beyond the training data or simply regurgitate their training corpora. We provide the first non-vacuous generalization bounds for pretrained large language models (LLMs), indicating that language models are capable of discovering regularities that generalize to unseen data. In particular, we derive a compression bound that is valid for the unbounded log-likelihood loss, and we extend the bound to handle subsampling, accelerating bound computation on massive datasets. To achieve the extreme level of compression required for non-vacuous generalization bounds, we devise SubLoRA, a low-dimensional non-linear parameterization. Using this approach, we find that larger models have better generalization bounds and are more compressible than smaller models. | Non-Vacuous Generalization Bounds for Large Language Models | [
"Sanae Lotfi",
"Marc Finzi",
"Yilun Kuang",
"Tim Rudner",
"Micah Goldblum",
"Andrew Wilson"
] | Workshop/M3L | 2312.17173 | [
"https://github.com/sanaelotfi/sublora-bounds-for-llms"
] | https://huggingface.co/papers/2312.17173 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=7OwnWAPfXu | @inproceedings{
ravichandran2023learning,
title={Learning from setbacks: the impact of adversarial initialization on generalization performance},
author={Kavya Ravichandran and Yatin Dandi and Stefani Karp and Francesca Mignacco},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=7OwnWAPfXu}
} | The loss landscape of state-of-the-art neural networks is far from simple. Understanding how optimization algorithms initialized differently navigate such high-dimensional non-convex profiles is a key problem in machine learning. [Liu et al. 2020] use pre-training on random labels to produce adversarial initializations that lead stochastic gradient descent into global minima with poor generalization. This result contrasts with other literature arguing that pre-training on random labels produces positive effects (see, e.g., [Maennel et al. (2020)]). We ask under which conditions this initialization results in solutions that generalize poorly. Our goal is to build a theoretical understanding of the properties of good solutions by isolating this phenomenon in some minimal models. To this end, we posit and study several hypotheses for why the phenomenon might arise in models of varying levels of simplicity, including representation quality and complex structure in data. | Learning from setbacks: the impact of adversarial initialization on generalization performance | [
"Kavya Ravichandran",
"Yatin Dandi",
"Stefani Karp",
"Francesca Mignacco"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=6pfCFDPhy6 | @inproceedings{
bordelon2023depthwise,
title={Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and Scaling Limit},
author={Blake Bordelon and Lorenzo Noci and Mufan Li and Boris Hanin and Cengiz Pehlevan},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=6pfCFDPhy6}
} | We study residual networks with a residual branch scale of $1/\sqrt{\text{depth}}$ in combination with the $\mu$P parameterization.
We provide experiments demonstrating that residual architectures including convolutional ResNets and Vision Transformers trained with this parameterization exhibit transfer of optimal hyperparameters across width and depth on CIFAR-10 and ImageNet.
Furthermore, using recent developments in the dynamical mean field theory (DMFT) description of neural network learning dynamics, we show that this parameterization of ResNets admits a well-defined feature learning joint infinite-width and infinite-depth limit and show convergence of finite-size network dynamics towards this limit. | Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and Scaling Limit | [
"Blake Bordelon",
"Lorenzo Noci",
"Mufan Li",
"Boris Hanin",
"Cengiz Pehlevan"
] | Workshop/M3L | 2309.16620 | [
""
] | https://huggingface.co/papers/2309.16620 | 0 | 1 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=6ZUH4KRM1o | @inproceedings{
ujv{\'a}ry2023estimating,
title={Estimating optimal {PAC}-Bayes bounds with Hamiltonian Monte Carlo},
author={Szilvia Ujv{\'a}ry and Gergely Flamich and Vincent Fortuin and Jos{\'e} Miguel Hern{\'a}ndez-Lobato},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=6ZUH4KRM1o}
} | An important yet underexplored question in the PAC-Bayes literature is how much tightness we lose by restricting the posterior family to factorized Gaussian distributions when optimizing a PAC-Bayes bound. We investigate this issue by estimating data-independent PAC-Bayes bounds using the optimal posteriors, comparing them to bounds obtained using MFVI. Concretely, we (1) sample from the optimal Gibbs posterior using Hamiltonian Monte Carlo, (2) estimate its KL divergence from the prior with thermodynamic integration, and (3) propose three methods to obtain high-probability bounds under different assumptions. Our experiments on the MNIST dataset reveal significant tightness gaps, as much as 5-6% in some cases. | Estimating optimal PAC-Bayes bounds with Hamiltonian Monte Carlo | [
"Szilvia Ujváry",
"Gergely Flamich",
"Vincent Fortuin",
"José Miguel Hernández-Lobato"
] | Workshop/M3L | 2310.20053 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=6V11PG7w5l | @inproceedings{
schaeffer2023divergence,
title={Divergence at the Interpolation Threshold: Identifying, Interpreting \& Ablating the Sources of a Deep Learning Puzzle},
author={Rylan Schaeffer and Zachary Robertson and Akhilan Boopathy and Mikail Khona and Ila Fiete and Andrey Gromov and Sanmi Koyejo},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=6V11PG7w5l}
} | Machine learning models misbehave, often in unexpected ways. One prominent misbehavior is when the test loss diverges at the interpolation threshold, perhaps best known from its distinctive appearance in double descent. While considerable theoretical effort has gone into understanding generalization of overparameterized models, less effort has been made at understanding why the test loss misbehaves at the interpolation threshold. Moreover, analytically solvable models in this area employ a range of assumptions and use complex techniques from random matrix theory, statistical mechanics, and kernel methods, making it difficult to assess when and why test error might diverge. In this work, we analytically study the simplest supervised model - ordinary linear regression - and show intuitively and rigorously when and why a divergence occurs at the interpolation threshold using basic linear algebra. We identify three interpretable factors that, when all present, cause the divergence. We demonstrate on real data that linear models' test losses diverge at the interpolation threshold and that the divergence disappears when we ablate any one of the three identified factors. | Divergence at the Interpolation Threshold: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle | [
"Rylan Schaeffer",
"Zachary Robertson",
"Akhilan Boopathy",
"Mikail Khona",
"Ila Fiete",
"Andrey Gromov",
"Sanmi Koyejo"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=6O15A3h2yl | @inproceedings{
wang2023good,
title={Good regularity creates large learning rate implicit biases: edge of stability, balancing, and catapult},
author={Yuqing Wang and Zhenghao Xu and Tuo Zhao and Molei Tao},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=6O15A3h2yl}
} | Large learning rates, when applied to gradient descent for nonconvex optimization, yield various implicit biases including edge of stability (Cohen et al., 2021), balancing (Wang et al., 2022), and catapult (Lewkowycz et al., 2020). These phenomena cannot be well explained by classical optimization theory. Significant theoretical progress has been made to understand these implicit biases, but it remains unclear for which objective functions they occur. This paper provides an initial step in answering this question, showing that these implicit biases are different tips of the same iceberg. Specifically, they occur when the optimization objective function has certain regularity. This regularity, together with gradient descent using a large learning rate that favors flatter regions, results in these nontrivial dynamical behaviors. To demonstrate this claim, we develop new global convergence theory under large learning rates for two examples of nonconvex functions without global smoothness, departing from typical assumptions in traditional analyses. We also discuss the implications for training neural networks, where different losses and activations can affect regularity and lead to highly varied training dynamics. | Good regularity creates large learning rate implicit biases: edge of stability, balancing, and catapult | [
"Yuqing Wang",
"Zhenghao Xu",
"Tuo Zhao",
"Molei Tao"
] | Workshop/M3L | 2310.17087 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=5QyvPPnrGf | @inproceedings{
dong2023toward,
title={Toward Student-oriented Teacher Network Training for Knowledge Distillation},
author={Chengyu Dong and Liyuan Liu and Jingbo Shang},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=5QyvPPnrGf}
} | How to conduct teacher training for knowledge distillation is still an open problem. It has been widely observed that a best-performing teacher does not necessarily yield the best-performing student, suggesting a fundamental discrepancy between the current teacher training practice and the ideal teacher training strategy. To fill this gap, we explore the feasibility of training a teacher that is oriented toward student performance with empirical risk minimization (ERM). Our analyses are inspired by the recent findings that the effectiveness of knowledge distillation hinges on the teacher’s capability to approximate the true label distribution of training inputs. We theoretically establish that ERM minimizer can approximate the true label distribution of training data as long as the feature extractor of the learner network is Lipschitz continuous and is robust to feature transformations. In light of our theory, we propose a teacher training method SoTeacher which incorporates Lipschitz regularization and consistency regularization into ERM. Experiments on benchmark datasets using various knowledge distillation algorithms and teacher-student pairs confirm that SoTeacher can improve student accuracy consistently. | Toward Student-oriented Teacher Network Training for Knowledge Distillation | [
"Chengyu Dong",
"Liyuan Liu",
"Jingbo Shang"
] | Workshop/M3L | 2206.06661 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=5CRGXIjjJA | @inproceedings{
bair2023adaptive,
title={Adaptive Sharpness-Aware Pruning for Robust Sparse Networks},
author={Anna Bair and Hongxu Yin and Maying Shen and Pavlo Molchanov and Jose M. Alvarez},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=5CRGXIjjJA}
} | Robustness and compactness are two essential attributes of deep learning models that are deployed in the real world.
The goals of robustness and compactness may seem to be at odds, since robustness requires generalization across domains, while the process of compression exploits specificity in one domain.
We introduce \textit{Adaptive Sharpness-Aware Pruning (AdaSAP)}, which unifies these goals through the lens of network sharpness.
The AdaSAP method produces sparse networks that are robust to input variations which are \textit{unseen at training time}.
We achieve this by strategically incorporating weight perturbations in order to optimize the loss landscape. This allows the model to be both primed for pruning and regularized for improved robustness.
AdaSAP improves the robust accuracy of pruned models on classification and detection over recent methods by up to +6\% on OOD datasets, over a wide range of compression ratios, pruning criteria, and architectures. | Adaptive Sharpness-Aware Pruning for Robust Sparse Networks | [
"Anna Bair",
"Hongxu Yin",
"Maying Shen",
"Pavlo Molchanov",
"Jose M. Alvarez"
] | Workshop/M3L | 2306.14306 | [
""
] | https://huggingface.co/papers/2306.14306 | 2 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=4pPnQqUMLS | @inproceedings{
yaras2023invariant,
title={Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Matrix Factorizations},
author={Can Yaras and Peng Wang and Wei Hu and Zhihui Zhu and Laura Balzano and Qing Qu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=4pPnQqUMLS}
} | An extensively studied phenomenon of the past few years in training deep networks is the implicit bias of gradient descent towards parsimonious solutions. In this work, we further investigate this phenomenon by narrowing our focus to deep matrix factorization, where we reveal surprising low-dimensional structures in the learning dynamics when the target matrix is low-rank. Specifically, we show that the evolution of gradient descent starting from arbitrary orthogonal initialization only affects a minimal portion of singular vector spaces across all weight matrices. In other words, the learning process happens only within a small invariant subspace of each weight matrix, despite the fact that all parameters are updated throughout training. From this, we provide rigorous justification for low-rank training in a specific, yet practical setting. In particular, we demonstrate that we can construct compressed factorizations that are equivalent to full-width, deep factorizations throughout training for solving low-rank matrix completion problems efficiently. | Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Matrix Factorizations | [
"Can Yaras",
"Peng Wang",
"Wei Hu",
"Zhihui Zhu",
"Laura Balzano",
"Qing Qu"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=4927EDtTI6 | @inproceedings{
nitanda2023how,
title={How Structured Data Guides Feature Learning: A Case Study of the Parity Problem},
author={Atsushi Nitanda and Kazusato Oko and Taiji Suzuki and Denny Wu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=4927EDtTI6}
} | Recent works have shown that neural networks optimized by gradient-based methods can adapt to sparse or low-dimensional target functions through feature learning; an often studied target is classification of the sparse parity function on the unit hypercube. However, such an isotropic data setting does not capture the anisotropy and low intrinsic dimensionality exhibited in realistic datasets. In this work, we address this shortcoming by studying how feature learning interacts with structured (anisotropic) input data: we consider the classification of sparse parity on a high-dimensional orthotope where the feature coordinates have varying magnitudes. Specifically, we analyze the learning complexity of the mean-field Langevin dynamics (MFLD), which describes the noisy gradient descent update on a two-layer neural network, and show that the statistical complexity (i.e. sample size) and computational complexity (i.e. network width) of MFLD can both be improved when prominent directions of the anisotropic input data align with the support of the target function. Moreover, we demonstrate the benefit of feature learning by establishing a kernel lower bound on the classification error, which applies to neural networks in the lazy regime. | How Structured Data Guides Feature Learning: A Case Study of the Parity Problem | [
"Atsushi Nitanda",
"Kazusato Oko",
"Taiji Suzuki",
"Denny Wu"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3vkeYFEZxW | @inproceedings{
bhattamishra2023the,
title={The Next Symbol Prediction Problem: {PAC}-learning and its relation to Language Models},
author={Satwik Bhattamishra and Phil Blunsom and Varun Kanade},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=3vkeYFEZxW}
} | The *next symbol prediction* (NSP) problem has been widely used to empirically evaluate the performance of neural sequence models on formal language tasks. We formalize the setting so as to make it amenable to PAC-learning analysis. In the NSP setting, a learning algorithm receives valid sequences (positive examples) from the underlying language, along with rich labels indicating, for every prefix, whether the prefix is in the language and what symbols could appear subsequently that lead to an accepting string. In the conventional classification setting where learning occurs with only positive and negative examples, the problem of learning regular languages or even subclasses represented by acyclic DFAs is known to be computationally hard based on cryptographic assumptions. In contrast, our main result shows that regular languages are efficiently PAC-learnable in the next symbol prediction setting. Further, we provide a more efficient learning algorithm for the case where the target DFA is known to be acyclic. Given the rich labels required in the NSP setting, one may wonder whether this setting is applicable to non-artificial tasks. We explain how language models can act as a source of such labeled data, and consequently, our algorithm can be applied to fit a finite-state model (DFA) that learns the (truncated) support of the language model. | The Next Symbol Prediction Problem: PAC-learning and its relation to Language Models | [
"Satwik Bhattamishra",
"Phil Blunsom",
"Varun Kanade"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3vBtKsiHEC | @inproceedings{
d'angelo2023why,
title={Why Do We Need Weight Decay for Overparameterized Deep Networks?},
author={Francesco D'Angelo and Aditya Varre and Maksym Andriushchenko and Nicolas Flammarion},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=3vBtKsiHEC}
} | Weight decay is a broadly used technique for training state-of-the-art deep networks. Despite its widespread usage, its role remains poorly understood. In this work, we highlight that the role of weight decay in modern deep learning is different from its regularization effect studied in classical learning theory. For overparameterized deep networks, we show how weight decay modifies the optimization dynamics enhancing the ever-present implicit regularization of SGD via loss stabilization. | Why Do We Need Weight Decay for Overparameterized Deep Networks? | [
"Francesco D'Angelo",
"Aditya Varre",
"Maksym Andriushchenko",
"Nicolas Flammarion"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3YcBch3ejt | @inproceedings{
cohen2023the,
title={The Double-Edged Sword: Perception and Uncertainty in Inverse Problems},
author={Regev Cohen and Ehud Rivlin and Daniel Freedman},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=3YcBch3ejt}
} | Inverse problems pose significant challenges due to their inherent ambiguity in mapping observed data back to its original state. While recent advances have yielded impressive results in restoring degraded data, attaining high perceptual quality comes at the cost of increased hallucinations. This paper investigates this phenomenon to reveal a fundamental tradeoff between perception and uncertainty in solving inverse problems. Using error entropy as a measure of uncertainty, we demonstrate that higher perceptual quality in restoration algorithms is accompanied by a surge in uncertainty. Leveraging Rényi divergence as a perception metric, we derive bounds for this tradeoff, allowing for categorization of different inverse methods based on their performance. Additionally, we connect estimation distortion with uncertainty, offering novel insights into the traditional perception-distortion tradeoff. Our work provides a rigorous framework for analyzing uncertainty in the context of solving inverse problems, highlighting its interplay with perception and distortion, while underscoring the limitations of current approaches to achieving both high perceptual quality and low uncertainty simultaneously. | The Double-Edged Sword: Perception and Uncertainty in Inverse Problems | [
"Regev Cohen",
"Ehud Rivlin",
"Daniel Freedman"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3WBUUAgPxy | @inproceedings{
wang2023nearinterpolators,
title={Near-Interpolators: Fast Norm Growth and Tempered Near-Overfitting},
author={Yutong Wang and Rishi Sonthalia and Wei Hu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=3WBUUAgPxy}
} | We study linear regression when the input data population
covariance matrix has eigenvalues $\lambda_i \sim i^{-\alpha}$ where $\alpha > 1$.
Under a generic random matrix theory assumption, we prove
that any near-interpolator, i.e., ${\beta}$ whose training error is below the noise floor, must have its squared $\ell_2$-norm growing super-linearly with the number of samples $n$:
$\|{\beta}\|_{2}^{2} = \Omega(n^{\alpha})$. This implies that existing norm-based generalization bounds increase as the number of samples increases, matching the empirical observations from prior work.
On the other hand, such near-interpolators when properly tuned achieve good generalization, where the test errors approach arbitrarily close to the noise floor.
Our work demonstrates that existing norm-based generalization bounds are vacuous for explaining
the generalization capability of \emph{any} near-interpolators.
Moreover, we show that the trade-off between train and test accuracy is better when the norm growth exponent is smaller. | Near-Interpolators: Fast Norm Growth and Tempered Near-Overfitting | [
"Yutong Wang",
"Rishi Sonthalia",
"Wei Hu"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3BqXz5sAEN | @inproceedings{
tian2023on,
title={On robust overfitting: adversarial training induced distribution matters},
author={Runzhi Tian and Yongyi Mao},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=3BqXz5sAEN}
} | Robust overfitting has been observed to arise in adversarial training. We hypothesize that this phenomenon may be related to the evolution of the data distribution along the training trajectory. To investigate this, we select a set of checkpoints in adversarial training and perform standard training on distributions induced by adversarial perturbation w.r.t. the checkpoints. We observe that the obtained models become increasingly harder to generalize when robust overfitting occurs, thereby validating the hypothesis. We show the hardness of generalization on the induced distributions is related to a certain local property of the perturbation operator at each checkpoint. The connection between the local property and the generalization on the induced distribution is proved by establishing an upper bound on the generalization error. Other interesting phenomena related to the adversarial training trajectory are also observed. | On robust overfitting: adversarial training induced distribution matters | [
"Runzhi Tian",
"Yongyi Mao"
] | Workshop/M3L | 2311.16526 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=25g5u1t6nC | @inproceedings{
yau2023are,
title={Are Graph Neural Networks Optimal Approximation Algorithms?},
author={Morris Yau and Eric Lu and Nikolaos Karalias and Jessica Xu and Stefanie Jegelka},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=25g5u1t6nC}
} | In this work we design graph neural network architectures that capture optimal approximation algorithms for a large class of combinatorial optimization problems using powerful algorithmic tools from semidefinite programming (SDP).
Concretely, we prove that polynomial-sized message passing algorithms can represent the most powerful polynomial time algorithms for Max Constraint Satisfaction Problems assuming the Unique Games Conjecture. We leverage this result to construct efficient graph neural network architectures, OptGNN, that obtain high-quality approximate solutions on landmark combinatorial optimization problems such as Max Cut and Minimum Vertex Cover. Finally, we take advantage of OptGNN's ability to capture convex relaxations to design an algorithm for producing dual certificates of optimality (bounds on the optimal solution) from the learned embeddings of OptGNN. | Are Graph Neural Networks Optimal Approximation Algorithms? | [
"Morris Yau",
"Eric Lu",
"Nikolaos Karalias",
"Jessica Xu",
"Stefanie Jegelka"
] | Workshop/M3L | 2310.00526 | [
"https://github.com/penlu/bespoke-gnn4do"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=1UGbUnQZv0 | @inproceedings{
tian2023joma,
title={Jo{MA}: Demystifying Multilayer Transformers via {JO}int Dynamics of {MLP} and Attention},
author={Yuandong Tian and Yiping Wang and Zhenyu Zhang and Beidi Chen and Simon Du},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=1UGbUnQZv0}
} | We propose Joint MLP/Attention (JoMA) dynamics, a novel mathematical framework to understand the training procedure of multilayer Transformer architectures. This is achieved by integrating out the self-attention layer in Transformers, producing a modified dynamics of MLP layers only. JoMA removes unrealistic assumptions in previous analyses (e.g., lack of residual connections), and predicts that the attention first becomes sparse (to learn salient tokens), then dense (to learn less salient tokens) in the presence of nonlinear activations, while in the linear case, it is consistent with existing works. We leverage JoMA to qualitatively explain how tokens are combined to form hierarchies in multilayer Transformers, when the input tokens are generated by a latent hierarchical generative model. Experiments on models trained on real-world datasets (Wikitext2/Wikitext103) and on various pre-trained models (OPT, Pythia) verify our theoretical findings. | JoMA: Demystifying Multilayer Transformers via JOint Dynamics of MLP and Attention | [
"Yuandong Tian",
"Yiping Wang",
"Zhenyu Zhang",
"Beidi Chen",
"Simon Du"
] | Workshop/M3L | 2310.00535 | [
"https://github.com/facebookresearch/luckmatters"
] | https://huggingface.co/papers/2310.00535 | 2 | 2 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=yx3Hkx5ved | @inproceedings{
liu2023improved,
title={Improved Baselines with Visual Instruction Tuning},
author={Haotian Liu and Chunyuan Li and Yuheng Li and Yong Jae Lee},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=yx3Hkx5ved}
} | Large multimodal models (LMM) have recently shown encouraging progress with visual instruction tuning. In this note, we show that the fully-connected vision-language cross-modal connector in LLaVA is surprisingly powerful and data-efficient. With simple modifications to LLaVA, namely, using CLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA data with simple response formatting prompts, we establish stronger baselines that achieve state-of-the-art across 11 benchmarks. Our final 13B checkpoint uses merely 1.2M publicly available data, and finishes full training in ~1 day on a single 8-A100 node. We hope this can make state-of-the-art LMM research more accessible. Code and model will be publicly available. | Improved Baselines with Visual Instruction Tuning | [
"Haotian Liu",
"Chunyuan Li",
"Yuheng Li",
"Yong Jae Lee"
] | Workshop/Instruction | 2310.03744 | [
""
] | https://huggingface.co/papers/2310.03744 | 3 | 37 | 8 | 4 | [
"llava-hf/llava-v1.6-mistral-7b-hf",
"llava-hf/llava-v1.6-34b-hf",
"stabilityai/japanese-stable-vlm",
"Intel/llava-gemma-2b",
"cyberagent/llava-calm2-siglip",
"llava-hf/llava-v1.6-vicuna-13b-hf",
"llava-hf/llava-v1.6-vicuna-7b-hf",
"Intel/llava-llama-3-8b",
"SpursgoZmy/table-llava-v1.5-7b",
"Intel/llava-gemma-7b",
"jingyaogong/minimind-v-v1",
"SpursgoZmy/table-llava-v1.5-13b",
"multitensor/model1",
"circulus/TinyHawk-v1",
"saurabh-straive/llava_100k_finetuned",
"Straive/llava-1.5-13b-lora-100k-8-mar",
"GDinesh/llava-1-5",
"saurabh-straive/llava-1-5",
"Straive/llava-v1.6-34b-hf",
"adinath/tablellava",
"rroset/llava-v1.6-mistral-7b-rrm",
"starriver030515/LLaVA",
"chart-misinformation-detection/llava-1.6-mistral-7b-snoopy-1.0-post-finetune-full-folder",
"chart-misinformation-detection/hf-llava-next-finetune-blp4k",
"csuhan/LLaVA_EF",
"mylesgoose/Llama-3.1-Minitron-4B-Llava-Nvidia-siglip-ov",
"jingyaogong/minimind-v-v1-small",
"palpit/mydataset",
"zliubot/table-llava-v1.5-7b"
] | [
"lmms-lab/M4-Instruct-Data",
"X2FD/LVIS-Instruct4V-mix730k",
"swap-uniba/WILD_IT",
"jiahaonie/MMRel",
"jiahaonie/README.md"
] | [
"merve/llava-next",
"merve/compare_VLMs",
"DiscloseAI/pllava-7b-demo",
"anand004/Multimodal-PDF-Chatbot",
"dwb2023/omniscience",
"catastropiyush/llava-next-Plot-Extract",
"dwb2023/hf_extractor",
"dwb2023/model_explorer2",
"DiscloseAI/pllava-13b-demo",
"PeepDaSlan9/stabilityai-japanese-stable-vlm",
"mrqadeer/llava-hf-llava-v1.6-mistral-7b-hf",
"dwb2023/model_explorer4",
"Raghuan/pdf_bot_new",
"larry1129/WooWoof_AI",
"yashasgupta/Image_Caption",
"vitvit/llava-hf-llava-v1.6-34b-hf",
"arkerekon/llava-hf-llava-v1.6-34b-hf",
"JayGhiya/image-text-llava-13b-v1.6",
"realaer/src",
"stmackcat/llava-next",
"Johnyquest7/llava-next-test",
"fradeai/llava-hf-llava-v1.6-mistral-7b-hf",
"arkerekon/llava-hf-llava-v1.6-mistral-7b-hf",
"ThinkAI-Morocco/llava-next",
"anand004/Multimodal-PDF-RAG",
"kh-CHEUNG/EIL-Demo",
"AIEArt/llava-chatbot",
"dkondic/ImageChatbot",
"chart-misinformation-detection/Llava-Next-BLP4k",
"Raghuan/PDF_chatbot",
"whan12/llava-next",
"Spencer525/MMrapd",
"ojilethegenius/llava-next",
"ovidiuionita/cro-audit",
"Benfou21/Mutli_vector_RAG",
"BOUDELLAH/llava",
"ermu2001/pllava-34b-demo",
"fradeai/llava-hf-llava-v1.6-vicuna-7b-hf",
"AI-Secure/MMDT-radar",
"ManishThota/pllava-34b-demo",
"diabolic6045/japanese-stable-vlm-demo",
"Aranya31/ft_LLaVA-Med"
] | [
"llava-hf/llava-v1.6-mistral-7b-hf",
"llava-hf/llava-v1.6-34b-hf",
"stabilityai/japanese-stable-vlm",
"Intel/llava-gemma-2b",
"cyberagent/llava-calm2-siglip",
"llava-hf/llava-v1.6-vicuna-13b-hf",
"llava-hf/llava-v1.6-vicuna-7b-hf",
"Intel/llava-llama-3-8b",
"SpursgoZmy/table-llava-v1.5-7b",
"Intel/llava-gemma-7b",
"jingyaogong/minimind-v-v1",
"SpursgoZmy/table-llava-v1.5-13b",
"multitensor/model1",
"circulus/TinyHawk-v1",
"saurabh-straive/llava_100k_finetuned",
"Straive/llava-1.5-13b-lora-100k-8-mar",
"GDinesh/llava-1-5",
"saurabh-straive/llava-1-5",
"Straive/llava-v1.6-34b-hf",
"adinath/tablellava",
"rroset/llava-v1.6-mistral-7b-rrm",
"starriver030515/LLaVA",
"chart-misinformation-detection/llava-1.6-mistral-7b-snoopy-1.0-post-finetune-full-folder",
"chart-misinformation-detection/hf-llava-next-finetune-blp4k",
"csuhan/LLaVA_EF",
"mylesgoose/Llama-3.1-Minitron-4B-Llava-Nvidia-siglip-ov",
"jingyaogong/minimind-v-v1-small",
"palpit/mydataset",
"zliubot/table-llava-v1.5-7b"
] | [
"lmms-lab/M4-Instruct-Data",
"X2FD/LVIS-Instruct4V-mix730k",
"swap-uniba/WILD_IT",
"jiahaonie/MMRel",
"jiahaonie/README.md"
] | [
"merve/llava-next",
"merve/compare_VLMs",
"DiscloseAI/pllava-7b-demo",
"anand004/Multimodal-PDF-Chatbot",
"dwb2023/omniscience",
"catastropiyush/llava-next-Plot-Extract",
"dwb2023/hf_extractor",
"dwb2023/model_explorer2",
"DiscloseAI/pllava-13b-demo",
"PeepDaSlan9/stabilityai-japanese-stable-vlm",
"mrqadeer/llava-hf-llava-v1.6-mistral-7b-hf",
"dwb2023/model_explorer4",
"Raghuan/pdf_bot_new",
"larry1129/WooWoof_AI",
"yashasgupta/Image_Caption",
"vitvit/llava-hf-llava-v1.6-34b-hf",
"arkerekon/llava-hf-llava-v1.6-34b-hf",
"JayGhiya/image-text-llava-13b-v1.6",
"realaer/src",
"stmackcat/llava-next",
"Johnyquest7/llava-next-test",
"fradeai/llava-hf-llava-v1.6-mistral-7b-hf",
"arkerekon/llava-hf-llava-v1.6-mistral-7b-hf",
"ThinkAI-Morocco/llava-next",
"anand004/Multimodal-PDF-RAG",
"kh-CHEUNG/EIL-Demo",
"AIEArt/llava-chatbot",
"dkondic/ImageChatbot",
"chart-misinformation-detection/Llava-Next-BLP4k",
"Raghuan/PDF_chatbot",
"whan12/llava-next",
"Spencer525/MMrapd",
"ojilethegenius/llava-next",
"ovidiuionita/cro-audit",
"Benfou21/Mutli_vector_RAG",
"BOUDELLAH/llava",
"ermu2001/pllava-34b-demo",
"fradeai/llava-hf-llava-v1.6-vicuna-7b-hf",
"AI-Secure/MMDT-radar",
"ManishThota/pllava-34b-demo",
"diabolic6045/japanese-stable-vlm-demo",
"Aranya31/ft_LLaVA-Med"
] | 1 | poster |
null | https://openreview.net/forum?id=yi8KGilFFk | @inproceedings{
chen2023can,
title={Can {LLM}-Generated Misinformation Be Detected?},
author={Canyu Chen and Kai Shu},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=yi8KGilFFk}
} | The advent of Large Language Models (LLMs) has made a transformative impact. However, the potential that LLMs such as ChatGPT can be exploited to generate misinformation has posed a serious concern to online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation. Then we categorize and validate the potential real-world methods for generating misinformation with LLMs. Then, through extensive empirical investigation, we discover that LLM-generated misinformation can be harder to detect for humans and detectors compared to human-written misinformation with the same semantics, which suggests it can have more deceptive styles and potentially cause more harm. We also discuss the implications of our discovery on combating misinformation in the age of LLMs and the countermeasures. | Can LLM-Generated Misinformation Be Detected? | [
"Canyu Chen",
"Kai Shu"
] | Workshop/Instruction | 2309.13788 | [
"https://github.com/llm-misinformation/llm-misinformation"
] | https://huggingface.co/papers/2309.13788 | 1 | 0 | 1 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=yeQvl1q8Vy | @inproceedings{
kim2023prometheus,
title={Prometheus: Inducing Evaluation Capability in Language Models},
author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=yeQvl1q8Vy}
} | Recently, using a powerful proprietary Large Language Model (LLM) (e.g., GPT-4) as an evaluator for long-form responses has become the de facto standard. However, for practitioners with large-scale evaluation tasks and custom criteria in consideration (e.g., child-readability), using proprietary LLMs as an evaluator is unreliable due to the closed-source nature, uncontrolled versioning, and prohibitive costs. In this work, we propose PROMETHEUS, a fully open-source LLM that is on par with GPT-4’s evaluation capabilities when the appropriate reference materials (reference answer, score rubric) are provided. We first construct the FEEDBACK COLLECTION, a new dataset that consists of 1K fine-grained score rubrics, 20K instructions, and 100K responses and language feedback generated by GPT-4. Using the FEEDBACK COLLECTION, we train PROMETHEUS, a 13B evaluator LLM that can assess any given long-form text based on a customized score rubric provided by the user. Experimental results show that PROMETHEUS scores a Pearson correlation of 0.897 with human evaluators when evaluating 45 customized score rubrics, which is on par with GPT-4 (0.882), and greatly outperforms ChatGPT (0.392). Furthermore, measuring correlation with GPT-4 with 1222 customized score rubrics across four benchmarks (MT Bench, Vicuna Bench, Feedback Bench, Flask Eval) shows similar trends, bolstering PROMETHEUS’s capability as an evaluator LLM. Lastly, PROMETHEUS achieves the highest accuracy on two human preference benchmarks (HHH Alignment & MT Bench Human Judgment) compared to open-sourced reward models explicitly trained on human preference datasets, highlighting its potential as a universal reward model. We open-source our code, dataset, and model at https://github.com/kaistAI/Prometheus. | Prometheus: Inducing Evaluation Capability in Language Models | [
"Seungone Kim",
"Jamin Shin",
"Yejin Cho",
"Joel Jang",
"Shayne Longpre",
"Hwaran Lee",
"Sangdoo Yun",
"Seongjin Shin",
"Sungdong Kim",
"James Thorne",
"Minjoon Seo"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ydY0xXNhwi | @inproceedings{
aw2023instructiontuned,
title={Instruction-tuned {LLM}s with World Knowledge are More Aligned to the Human Brain},
author={Khai Loong Aw and Syrielle Montariol and Badr AlKhamissi and Martin Schrimpf and Antoine Bosselut},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=ydY0xXNhwi}
} | Instruction-tuning is a widely adopted method of finetuning that enables large language models (LLMs) to generate output that more closely resembles human responses to natural language queries, in many cases leading to human-level performance on diverse testbeds. However, it remains unclear whether instruction-tuning truly makes LLMs more similar to how humans process language. We investigate the effect of instruction-tuning on LLM-human similarity in two ways: (1) brain alignment, the similarity of LLM internal representations to neural activity in the human language system, and (2) behavioral alignment, the similarity of LLM and human behavior on a reading task. We assess 25 vanilla and instruction-tuned LLMs across three datasets involving humans reading naturalistic stories and sentences, and discover that instruction-tuning generally enhances brain alignment by an average of 6%, but does not have a similar effect on behavioral alignment. To identify the factors underlying LLM-brain alignment, we compute the correlation between the brain alignment of LLMs and various model properties, such as model size, performance ability on problem-solving benchmarks, and ability on benchmarks requiring world knowledge spanning various domains. Notably, we find a strong positive correlation between brain alignment and model size (r = 0.95), as well as performance on tasks requiring world knowledge (r = 0.81). Our results demonstrate that instruction-tuning LLMs improves both world knowledge representations and human brain alignment, suggesting that mechanisms that encode world knowledge in LLMs also improve representational alignment to the human brain. | Instruction-tuned LLMs with World Knowledge are More Aligned to the Human Brain | [
"Khai Loong Aw",
"Syrielle Montariol",
"Badr AlKhamissi",
"Martin Schrimpf",
"Antoine Bosselut"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xulyCXgIWH | @inproceedings{
liu2023ring,
title={Ring Attention with Blockwise Transformers for Near-Infinite Context},
author={Hao Liu and Matei Zaharia and Pieter Abbeel},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=xulyCXgIWH}
} | Transformers have emerged as the architecture of choice for many state-of-the-art AI models, showcasing exceptional performance across a wide range of AI applications. However, the memory demands imposed by Transformers limit their ability to handle long sequences, thereby creating challenges for tasks involving extended sequences or long-term dependencies. We present a distinct approach, Ring Attention, which leverages blockwise computation of self-attention to distribute long sequences across multiple devices while overlapping the communication of key-value blocks with the computation of blockwise attention. Ring Attention enables training and inference of sequences that are up to device count times longer than those of prior memory-efficient Transformers, effectively eliminating the memory constraints imposed by individual devices.
Extensive experiments on language modeling tasks demonstrate the effectiveness of Ring Attention in allowing large sequence input size and improving performance. | Ring Attention with Blockwise Transformers for Near-Infinite Context | [
"Hao Liu",
"Matei Zaharia",
"Pieter Abbeel"
] | Workshop/Instruction | 2310.01889 | [
"https://github.com/forhaoliu/ringattention"
] | https://huggingface.co/papers/2310.01889 | 0 | 10 | 3 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=xaqoZZqkPU | @inproceedings{
li2023reflectiontuning,
title={Reflection-Tuning: Recycling Data for Better Instruction-Tuning},
author={Ming Li and Lichang Chen and Jiuhai Chen and Shwai He and Tianyi Zhou},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=xaqoZZqkPU}
} | Recent advancements in Large Language Models (LLMs) have expanded the horizons of natural language understanding and generation. Notably, the output control and alignment with the input of LLMs can be refined through instruction tuning. However, as highlighted in several studies, low-quality data in the training set are usually detrimental to instruction tuning, resulting in inconsistent or even misleading LLM outputs. We propose a novel method, termed ``reflection-tuning,'' which addresses the problem by self-improvement and judging capabilities of LLMs. This approach utilizes an oracle LLM to recycle the original training data by introspecting and enhancing the quality of instructions and responses in the data. Extensive experiments on widely used evaluation benchmarks show that LLMs trained with our recycled data outperform those trained with existing datasets in various benchmarks. Codes, data, and models are available at https://github.com/tianyi-lab/Reflection_Tuning. | Reflection-Tuning: Recycling Data for Better Instruction-Tuning | [
"Ming Li",
"Lichang Chen",
"Jiuhai Chen",
"Shwai He",
"Tianyi Zhou"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=x6MlSOzbmC | @inproceedings{
ge2023supervised,
title={Supervised Fine-Tuning of Large Language Models on Human Demonstrations Through the Lens of Memorization},
author={Yubin Ge and Devamanyu Hazarika and Yang Liu and Mahdi Namazifar},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=x6MlSOzbmC}
} | In recent years, the field of natural language processing (NLP) has witnessed remarkable advancements driven by the development of large language models (LLMs). Various techniques, such as instruction tuning, have emerged as crucial approaches, enhancing LLMs' adaptability to new tasks guided by instructional prompts. Meanwhile, the phenomenon of memorization within LLMs has garnered considerable attention. In this work, we delve into memorization within LLMs during supervised fine-tuning on human demonstrations and find a distinct pattern marked by initial memorization growth followed by stabilization, with different degrees of memorization observed across various tasks. An intriguing observation is that the increase in validation perplexity, typically indicative of overfitting, does not result in lower generation quality. We probe deeper by examining the entropy derived from the LLM's output probabilities, uncovering a consistent trend of decreasing entropy throughout training under both nucleus sampling and teacher forcing scenarios. This implies growing confidence within the LLM in generating output, while such output may deviate from the expected ground truth. Building upon our investigation, we propose a novel Memorization-Based Curriculum (MBC) learning approach. We leverage likelihood as a proxy for measuring memorization and employ it to construct a data distribution for sampling instances with replacement during supervised fine-tuning, emphasizing data with lower degrees of memorization. Evaluations using GPT-4 as a judge demonstrate the effectiveness of MBC in fine-tuning LLMs on human demonstrations. | Supervised Fine-Tuning of Large Language Models on Human Demonstrations Through the Lens of Memorization | [
"Yubin Ge",
"Devamanyu Hazarika",
"Yang Liu",
"Mahdi Namazifar"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wq4OaU8tfE | @inproceedings{
wen2023grounding,
title={Grounding Code Generation with Input-Output Specifications},
author={Yeming Wen and Pengcheng Yin and Kensen Shi and Henryk Michalewski and Swarat Chaudhuri and Alex Polozov},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=wq4OaU8tfE}
} | Large language models (LLMs) have demonstrated significant potential in code generation. However, the code generated by these models occasionally deviates from the user's intended outcome, resulting in executable but incorrect code. To mitigate this issue, we propose Gift4Code, a novel approach for the instruction fine-tuning of LLMs specifically tailored for code generation. Our method leverages synthetic data produced by the LLM itself and utilizes execution-derived feedback as a key learning signal. This feedback, in the form of program input-output specifications, is provided to the LLM to facilitate fine-tuning. We evaluated our approach on two challenging data science benchmarks, Arcade and DS-1000. Our results suggest that the method enhances the LLM's alignment with user intentions, reducing the incidence of executable but incorrect outputs. | Grounding Code Generation with Input-Output Specifications | [
"Yeming Wen",
"Pengcheng Yin",
"Kensen Shi",
"Henryk Michalewski",
"Swarat Chaudhuri",
"Alex Polozov"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wcSx5VjTPP | @inproceedings{
lu2023instag,
title={\#InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models},
author={Keming Lu and Hongyi Yuan and Zheng Yuan and Runji Lin and Junyang Lin and Chuanqi Tan and Chang Zhou and Jingren Zhou},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=wcSx5VjTPP}
} | Pre-trained large language models (LLMs) can understand and align with human instructions by supervised fine-tuning (SFT). It is commonly believed that diverse and complex SFT data are of the essence to enable good instruction-following abilities. However, such diversity and complexity are obscure and lack quantitative analyses. In this work, we propose InsTag, an open-set instruction tagging method, to identify semantics and intentions of human instructions by tags that provide access to definitions and quantified analyses of instruction diversity and complexity. We obtain 6.6K fine-grained tags to describe instructions from popular open-sourced SFT datasets comprehensively. We find that the abilities of aligned LLMs benefit from more diverse and complex instructions in SFT data. Based on this observation, we propose a data sampling procedure based on InsTag, and select 6K diverse and complex samples from open-source datasets for SFT. The resulting models, TagLM, outperform open-source models based on considerably larger SFT data evaluated by MT-Bench, echoing the importance of instruction diversity and complexity and the effectiveness of InsTag. InsTag has robust potential to be extended to more applications beyond the data selection as it provides an effective way to analyze the distribution of instructions. | #InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models | [
"Keming Lu",
"Hongyi Yuan",
"Zheng Yuan",
"Runji Lin",
"Junyang Lin",
"Chuanqi Tan",
"Chang Zhou",
"Jingren Zhou"
] | Workshop/Instruction | 2308.07074 | [
"https://github.com/ofa-sys/instag"
] | https://huggingface.co/papers/2308.07074 | 1 | 0 | 0 | 8 | [
"OFA-Sys/InsTagger",
"zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"RichardErkhov/zake7749_-_gemma-2-2b-it-chinese-kyara-dpo-4bits",
"RichardErkhov/zake7749_-_gemma-2-2b-it-chinese-kyara-dpo-8bits"
] | [] | [] | [
"OFA-Sys/InsTagger",
"zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"RichardErkhov/zake7749_-_gemma-2-2b-it-chinese-kyara-dpo-4bits",
"RichardErkhov/zake7749_-_gemma-2-2b-it-chinese-kyara-dpo-8bits"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=wYLASkYFJU | @inproceedings{
lai2023training,
title={Training Speech Recognition Models to Follow Instructions},
author={Cheng-I Lai and Zhiyun Lu and Liangliang Cao and Ruoming Pang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=wYLASkYFJU}
} | Conventional end-to-end Automatic Speech Recognition (ASR) models primarily focus on exact transcription tasks, lacking flexibility for nuanced user interactions. In this paper, we train a speech recognition model to follow a diverse set of free-form text instructions for a multitude of speech recognition tasks -- ranging from simple transcript manipulation to summarization. We emphasize that even without pre-trained LLMs or speech modules, a Listen-Attend-Spell model trained from scratch on Librispeech understands and executes instructions with high fidelity. This preliminary findings highlight the potential of instruction-following training to advance speech foundation models. | Training Speech Recognition Models to Follow Instructions | [
"Cheng-I Lai",
"Zhiyun Lu",
"Liangliang Cao",
"Ruoming Pang"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=vDXvtytjdX | @inproceedings{
zhang2023enhanced,
title={Enhanced Visual Instruction Tuning for Text-Rich Image Understanding},
author={Yanzhe Zhang and Ruiyi Zhang and Jiuxiang Gu and Yufan Zhou and Nedim Lipka and Diyi Yang and Tong Sun},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=vDXvtytjdX}
} | Instruction tuning enhances the capability of Large Language Models (LLMs) to interact with humans. Furthermore, recent instruction-following datasets include images as visual input, collecting responses for image-based instructions. However, current visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first used publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Furthermore, we prompt text-only GPT-4 with recognized text and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multimodal instruction-following data, our model, LLaVAR, substantially improves the capability of the LLaVA model on text-based VQA datasets (up to 20% accuracy improvement). The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction skills (e.g., reasoning, writing, and elaboration) with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available. | Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | [
"Yanzhe Zhang",
"Ruiyi Zhang",
"Jiuxiang Gu",
"Yufan Zhou",
"Nedim Lipka",
"Diyi Yang",
"Tong Sun"
] | Workshop/Instruction | [
"https://github.com/SALT-NLP/LLaVAR"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=umtG8Hs32R | @inproceedings{
sakhinana2023crossmodal,
title={Cross-Modal Learning for Chemistry Property Prediction: Large Language Models Meet Graph Machine Learning},
author={Sagar Sakhinana and Venkataramana Runkana},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=umtG8Hs32R}
} | In the field of chemistry, the objective is to create novel molecules with desired properties, facilitating accurate property predictions for applications such as material design and drug screening. However, existing graph deep learning methods face limitations that curb their expressive power. To address this, we explore the integration of vast molecular domain knowledge from Large Language Models
(LLMs) with the complementary strengths of Graph Neural Networks (GNNs) to enhance performance in property prediction tasks. We introduce a Multi-Modal Fusion (MMF) framework that synergistically harnesses the analytical prowess of GNNs and the linguistic generative and predictive abilities of LLMs, thereby improving accuracy and robustness in predicting molecular properties. Our framework
combines the effectiveness of GNNs in modeling graph-structured data with the zero-shot and few-shot learning capabilities of LLMs, enabling improved predictions while reducing the risk of overfitting. Furthermore, our approach effectively addresses distributional shifts, a common challenge in real-world applications, and showcases the efficacy of learning cross-modal representations, surpassing
state-of-the-art baselines on benchmark datasets for property prediction tasks. | Cross-Modal Learning for Chemistry Property Prediction: Large Language Models Meet Graph Machine Learning | [
"Sagar Sakhinana",
"Venkataramana Runkana"
] | Workshop/Instruction | 2408.14964 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=twgpHJmvJy | @inproceedings{
lin2023use,
title={Use Your {INSTINCT}: {INST}ruction optimization usIng Neural bandits Coupled with Transformers},
author={Xiaoqiang Lin and Zhaoxuan Wu and Zhongxiang Dai and Wenyang Hu and Yao Shu and See-Kiong Ng and Patrick Jaillet and Bryan Kian Hsiang Low},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=twgpHJmvJy}
} | Large language models (LLMs) have shown remarkable instruction-following capabilities and achieved impressive performances in various applications. However, the performances of LLMs depend heavily on the instructions given to them, which are typically manually tuned with substantial human efforts. Recent work has used the query-efficient Bayesian optimization (BO) algorithm to automatically optimize the instructions given to black-box LLMs. However, BO usually falls short when optimizing highly sophisticated (e.g., high-dimensional) objective functions, such as the functions mapping an instruction to the performance of an LLM. This is mainly due to the limited expressive power of the Gaussian process (GP) model which is used by BO as a surrogate to model the objective function. Meanwhile, it has been repeatedly shown that neural networks (NNs), especially pre-trained transformers, possess strong expressive power and can model highly complex functions. So, we adopt a neural bandit algorithm which replaces the GP in BO by an NN surrogate to optimize instructions for black-box LLMs. More importantly, the neural bandit algorithm allows us to naturally couple the NN surrogate with the hidden representation learned by a pre-trained transformer (i.e., an open-source LLM), which significantly boosts its performance. These motivate us to propose our INSTruction optimization usIng Neural bandits Coupled with Transformers (INSTINCT) algorithm. We perform instruction optimization for ChatGPT and use extensive experiments to show that our INSTINCT consistently outperforms the existing methods in different tasks, such as in various instruction induction tasks and the task of improving the zero-shot chain-of-thought instruction. | Use Your INSTINCT: INSTruction optimization usIng Neural bandits Coupled with Transformers | [
"Xiaoqiang Lin",
"Zhaoxuan Wu",
"Zhongxiang Dai",
"Wenyang Hu",
"Yao Shu",
"See-Kiong Ng",
"Patrick Jaillet",
"Bryan Kian Hsiang Low"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=tw2w8rgWMV | @inproceedings{
nayak2023learning,
title={Learning to Generate Instructions to Adapt Language Models to New Tasks},
author={Nihal Nayak and Yiyang Nan and Avi Trost and Stephen Bach},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=tw2w8rgWMV}
} | We present Bonito, the first open-source model for conditional task generation: the problem of converting an unannotated corpus into a collection of tasks for instruction tuning. Our goal is to enable efficient task adaptation of instruction tuned language models on users' specialized, private data without relying on proprietary API-access-only models like GPT-4. We create Bonito by remixing existing, general-purpose instruction tuning data into a new training mixture for conditional task generation. Bonito learns to generate new tasks conditioned on the text and desired task type. The generated instructions in the specialized domain can be used to further train language models. We demonstrate that this procedure leads to improved performance on extractive question answering and yes-no question answering: across four datasets, each in a different domain, Bonito improves the F1 score of FLAN-T5 Small by an average of 14.5% and FLAN-T5 Base by an average of 4.4%. We also find that Bonito improves FLAN-T5 Large on two out of four datasets but shows a slight negative transfer on the other two datasets. Overall, these results show a promising direction for adapting instruction tuned language models to new tasks without using proprietary models. | Learning to Generate Instructions to Adapt Language Models to New Tasks | [
"Nihal Nayak",
"Yiyang Nan",
"Avi Trost",
"Stephen Bach"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=tdqZUxKfIj | @inproceedings{
mitchell2023an,
title={An Emulator for Fine-tuning Large Language Models using Small Language Models},
author={Eric Mitchell and Rafael Rafailov and Archit Sharma and Chelsea Finn and Christopher Manning},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=tdqZUxKfIj}
} | Widely used language models (LMs) are typically built by scaling up a two-stage training pipeline: a pre-training stage that uses a very large, diverse dataset of text and a fine-tuning (sometimes, 'alignment') stage using more targeted examples of specific behaviors and/or human preferences. While it has been hypothesized that knowledge and skills come from pre-training, and fine-tuning mostly filters this knowledge and skillset, this intuition has not been rigorously tested. In this paper, we test this hypothesis with a novel methodology for scaling these two stages independently, essentially asking, *What would happen if we combined the knowledge learned by a large model during pre-training with the knowledge learned by a small model during fine-tuning (or vice versa)?* Using a reinforcement learning-based framework derived from recent developments in learning from human preferences, we introduce *emulated fine-tuning (EFT)*, a principled and practical method for sampling from a distribution that approximates the result of pre-training and fine-tuning at different scales. Our experiments with EFT show that scaling up fine-tuning tends to improve helpfulness, while scaling up pre-training tends to improve factuality. Further, we show that EFT enables test-time adjustment of competing behavioral factors like helpfulness and harmlessness without additional training. Finally, we find that a special case of emulated fine-tuning, which we call LM *up-scaling*, avoids resource-intensive fine-tuning of large pre-trained models by ensembling small fine-tuned models with large pre-trained models, essentially 'emulating' the result of fine-tuning the large pre-trained model. Up-scaling consistently improves helpfulness and factuality of widely used pre-trained models like Llama, Llama-2, and Falcon, without additional hyperparameters or training. | An Emulator for Fine-tuning Large Language Models using Small Language Models | [
"Eric Mitchell",
"Rafael Rafailov",
"Archit Sharma",
"Chelsea Finn",
"Christopher Manning"
] | Workshop/Instruction | 2310.12962 | [
""
] | https://huggingface.co/papers/2310.12962 | 4 | 14 | 1 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=tC2lDRhHNT | @inproceedings{
zeng2023evaluating,
title={Evaluating Large Language Models at Evaluating Instruction Following},
author={Zhiyuan Zeng and Jiatong Yu and Tianyu Gao and Yu Meng and Tanya Goyal and Danqi Chen},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=tC2lDRhHNT}
} | As research in large language models (LLMs) continues to accelerate, LLM-based evaluation has emerged as a scalable and cost-effective alternative to human evaluations for comparing the ever increasing list of models. This paper investigates the efficacy of these "LLM evaluators", particularly in using them to assess instruction following, a metric that gauges how closely generated text adheres to the given instruction. We introduce a challenging meta-evaluation benchmark, LLMBar, designed to test the ability of an LLM evaluator in discerning instruction-following outputs. The authors manually curated 419 pairs of outputs, one adhering to instructions while the other diverging, yet may possess deceptive qualities that mislead an LLM evaluator, e.g., a more engaging tone. Contrary to existing meta-evaluation, we discover that different evaluators (i.e., combinations of LLMs and prompts) exhibit distinct performance on LLMBar and even the highest-scoring ones have substantial room for improvement. We also present a novel suite of prompting strategies that further close the gap between LLM and human evaluators. With LLMBar, we hope to offer more insight into LLM evaluators and foster future research in developing better instruction-following models. | Evaluating Large Language Models at Evaluating Instruction Following | [
"Zhiyuan Zeng",
"Jiatong Yu",
"Tianyu Gao",
"Yu Meng",
"Tanya Goyal",
"Danqi Chen"
] | Workshop/Instruction | 2310.07641 | [
"https://github.com/princeton-nlp/llmbar"
] | https://huggingface.co/papers/2310.07641 | 1 | 0 | 0 | 6 | [] | [
"allenai/reward-bench",
"bay-calibration-llm-evaluators/llmbar-annotated-latest",
"bay-calibration-llm-evaluators/mtbench-annotated-latest"
] | [
"allenai/reward-bench",
"AtlaAI/judge-arena"
] | [] | [
"allenai/reward-bench",
"bay-calibration-llm-evaluators/llmbar-annotated-latest",
"bay-calibration-llm-evaluators/mtbench-annotated-latest"
] | [
"allenai/reward-bench",
"AtlaAI/judge-arena"
] | 1 | poster |
null | https://openreview.net/forum?id=rz6u0qOVth | @inproceedings{
li2023instructionfollowing,
title={Instruction-following Evaluation through Verbalizer Manipulation},
author={Shiyang Li and Jun Yan and Hai Wang and Zheng Tang and Xiang Ren and Vijay Srinivasan and Hongxia Jin},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=rz6u0qOVth}
} | While instruction-tuned models have shown remarkable success in various natural language processing tasks, accurately evaluating their ability to follow instructions remains challenging. Existing benchmarks primarily focus on common instructions that align well with what the model learned during training. However, proficiency in responding to these instructions does not necessarily imply strong ability in instruction following. In this paper, we propose a novel instruction-following evaluation protocol called verbalizer manipulation. It instructs the model to verbalize the task label with words aligning with model priors to different extents, adopting verbalizers ranging from highly aligned (e.g., outputting "positive" for positive sentiment) to minimally aligned (e.g., outputting "negative" for positive sentiment). Verbalizer manipulation can be seamlessly integrated with any classification benchmark to examine the model's reliance on priors and its ability to override them to accurately follow the instructions. We conduct a comprehensive evaluation of four major model families across nine datasets, employing twelve sets of verbalizers for each of them. We observe that the instruction-following abilities of models, across different families and scales, are significantly distinguished by their performance on less natural verbalizers. Even the strongest GPT-4 model struggles to perform better than random guessing on the most challenging verbalizer, emphasizing the need for continued advancements to improve their instruction-following abilities. | Instruction-following Evaluation through Verbalizer Manipulation | [
"Shiyang Li",
"Jun Yan",
"Hai Wang",
"Zheng Tang",
"Xiang Ren",
"Vijay Srinivasan",
"Hongxia Jin"
] | Workshop/Instruction | 2307.10558 | [
""
] | https://huggingface.co/papers/2307.10558 | 2 | 3 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=rxEmiOEIFL | @inproceedings{
zheng2023delve,
title={Delve into {PPO}: Implementation Matters for Stable {RLHF}},
author={Rui Zheng and Shihan Dou and Songyang Gao and Yuan Hua and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Yuhao Zhou and Limao Xiong and Lu Chen and Zhiheng Xi and Nuo Xu and Wenbin Lai and Minghao Zhu and Haoran Huang and Tao Gui and Qi Zhang and Xuanjing Huang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=rxEmiOEIFL}
} | Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Its primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include reward models to measure human preferences, Proximal Policy Optimization (PPO) to optimize policy model outputs, and process supervision to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with huge trial and error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF has still been a puzzle.
In this paper, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints being the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. Beyond additional qualitative results, we even find that LLMs successfully trained by our algorithm can often better understand the deep meaning of the queries, and its responses are more able to hit people's souls directly. | Delve into PPO: Implementation Matters for Stable RLHF | [
"Rui Zheng",
"Shihan Dou",
"Songyang Gao",
"Yuan Hua",
"Wei Shen",
"Binghai Wang",
"Yan Liu",
"Senjie Jin",
"Yuhao Zhou",
"Limao Xiong",
"Lu Chen",
"Zhiheng Xi",
"Nuo Xu",
"Wenbin Lai",
"Minghao Zhu",
"Haoran Huang",
"Tao Gui",
"Qi Zhang",
"Xuanjing Huang"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=qN9T4cmTEw | @inproceedings{
song2023nlpbench,
title={{NLPB}ench: Evaluating Large Language Models on Solving {NLP} Problems},
author={Linxin Song and Jieyu Zhang and Lechao Cheng and Pengyuan Zhou and Tianyi Zhou and Irene Li},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=qN9T4cmTEw}
} | Recent developments in large language models (LLMs) have shown promise in enhancing the capabilities of natural language processing (NLP). Despite these successes, there remains a dearth of research dedicated to the NLP problem-solving abilities of LLMs. To fill the gap in this area, we present a unique benchmarking dataset, NLPBench, comprising 378 college-level NLP questions spanning various NLP topics sourced from some universities' prior final exams. NLPBench includes questions with context, in which multiple sub-questions share the same public information, and diverse question types, including multiple choice, short answer, and math. Our evaluation, centered on LLMs such as GPT-3.5/4, PaLM-2, and LLAMA-2, incorporates advanced prompting strategies like chain-of-thought (CoT) and tree-of-thought (ToT). Our study reveals that the effectiveness of the advanced prompting strategies can be inconsistent, occasionally damaging LLM performance, especially in smaller models like the LLAMA-2 (13b). Furthermore, our manual assessment illuminated specific shortcomings in LLMs' scientific problem-solving skills, with weaknesses in logical decomposition and reasoning notably affecting results. | NLPBench: Evaluating Large Language Models on Solving NLP Problems | [
"Linxin Song",
"Jieyu Zhang",
"Lechao Cheng",
"Pengyuan Zhou",
"Tianyi Zhou",
"Irene Li"
] | Workshop/Instruction | 2309.15630 | [
"https://github.com/linxins97/nlpbench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=psJibeRV0T | @inproceedings{
hu2023evoke,
title={Evoke: Evoking Critical Thinking Abilities in {LLM}s via Reviewer-Author Prompt Editing},
author={Xinyu Hu and Pengfei Tang and Simiao Zuo and Zihan Wang and Bowen Song and Qiang Lou and Jian Jiao and Denis Charles},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=psJibeRV0T}
} | Large language models (LLMs) have made impressive progress in natural language processing. These models rely on proper human instructions (or prompts) to generate suitable responses. However, the potential of LLMs is not fully harnessed by commonly-used prompting methods: many human-in-the-loop algorithms employ ad-hoc procedures for prompt selection, while auto prompt generation approaches essentially search all possible prompts randomly and inefficiently. We propose Evoke, an automatic prompt refinement framework. In Evoke, there are two instances of the same LLM: one acts as a reviewer (LLM-Reviewer) and scores the current prompt; the other acts as an author (LLM-Author) and edits the prompt by considering the edit history and the reviewer's feedback. Such an author-reviewer feedback loop ensures that the prompt is refined in each iteration. We further integrate a data selection approach into Evoke, where only the hard samples are exposed to the LLM. The hard samples are more important because the LLM can develop a deeper understanding of the tasks out of them, while the model may already know how to solve the easier cases. Experimental results show that Evoke significantly outperforms existing methods. For instance, in the challenging task of logical fallacy detection, Evoke scores above 80, while all other baseline methods struggle to reach 20. | Evoke: Evoking Critical Thinking Abilities in LLMs via Reviewer-Author Prompt Editing | [
"Xinyu Hu",
"Pengfei Tang",
"Simiao Zuo",
"Zihan Wang",
"Bowen Song",
"Qiang Lou",
"Jian Jiao",
"Denis Charles"
] | Workshop/Instruction | 2310.13855 | [
""
] | https://huggingface.co/papers/2310.13855 | 0 | 1 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
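The Evoke record above describes an author-reviewer loop between two instances of the same LLM, plus selection of hard samples. The following is a minimal sketch of that loop's control flow, assuming a generic text-in/text-out `llm` callable and hypothetical prompt templates; it is illustrative, not the authors' implementation.

```python
from typing import Callable, List, Tuple

def evoke_refine(llm: Callable[[str], str], seed_prompt: str,
                 hard_examples: List[str], n_rounds: int = 3) -> str:
    """Author-reviewer loop: the reviewer critiques the current task prompt on
    hard examples; the author rewrites it using the feedback and edit history."""
    history: List[Tuple[str, str]] = []
    prompt = seed_prompt
    for _ in range(n_rounds):
        feedback = llm("You are a reviewer. Score this task prompt (0-10) and list its "
                       f"weaknesses on the examples.\nPrompt: {prompt}\nExamples:\n"
                       + "\n".join(hard_examples))
        history.append((prompt, feedback))
        edit_log = "\n".join(f"PROMPT: {p}\nFEEDBACK: {f}" for p, f in history)
        prompt = llm("You are an author. Rewrite the latest prompt to address the reviewer "
                     f"feedback, considering the edit history.\n{edit_log}\nReturn only the new prompt.")
    return prompt

# Toy stand-in "LLM" so the sketch runs end to end.
toy_llm = lambda text: ("Score: 4. Too vague." if text.startswith("You are a reviewer")
                        else "Name the logical fallacy, then justify your choice step by step.")
print(evoke_refine(toy_llm, "Detect the logical fallacy.", ["Ad hominem example ..."], n_rounds=2))
```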
null | https://openreview.net/forum?id=pFHeZzl5ft | @inproceedings{
lin2023urial,
title={{URIAL}: Tuning-Free Instruction Learning and Alignment for Untuned {LLM}s},
author={Bill Yuchen Lin and Abhilasha Ravichander and Ximing Lu and Nouha Dziri and Melanie Sclar and Khyathi Chandu and Chandra Bhagavatula and Yejin Choi},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=pFHeZzl5ft}
} | Large language models (LLMs) have shown significant improvements due to alignment tuning, that is, supervised fine-tuning (SFT) on instruction data and reinforcement learning from human feedback (RLHF).
This raises questions about what is precisely learned during the alignment tuning process.
We investigate the effects of alignment tuning through the lens of token distribution shift between untuned LLMs and their aligned counterparts (e.g., Llama-2 versus Llama-2-Chat).
Our findings reveal that most distribution changes lie in stylistic tokens (e.g., transitional words, discourse markers), suggesting that LLMs primarily learn the language style of AI assistants during alignment tuning, while most of the useful knowledge has already been acquired by untuned LLMs. Thus, we pose the question: Is it necessary to update model weights to attain LLM alignment?
Based on these insights, we propose an alternative tuning-free method for instruction learning and alignment for untuned LLMs, URIAL, which achieves effective alignment solely through in-context learning (ICL) with as few as three curated, stylistic examples and a system prompt.
We also introduce a dataset named just-eval-instruct, which consists of 1,000 examples collected from 9 existing instruction datasets such as those used by AlpacaEval.
Our multi-aspect evaluation demonstrates that URIAL can achieve highly satisfactory performance, sometimes equaling or surpassing SFT+RLHF counterparts, especially when the untuned LLM is sufficiently pre-trained.
This implies that fine-tuning may not always be as crucial as previously assumed for LLM alignment, and lightweight alignment methods like URIAL hold promise for efficiently tailoring LLM behavior without fine-tuning. | URIAL: Tuning-Free Instruction Learning and Alignment for Untuned LLMs | [
"Bill Yuchen Lin",
"Abhilasha Ravichander",
"Ximing Lu",
"Nouha Dziri",
"Melanie Sclar",
"Khyathi Chandu",
"Chandra Bhagavatula",
"Yejin Choi"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
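The URIAL record above attributes alignment largely to style and achieves it with in-context learning alone: a system prompt plus a handful of curated stylistic examples prepended to the query for an untuned base model. Below is a hedged sketch of assembling such a prompt; the markers, wording, and examples are placeholders, not the released URIAL prompt.

```python
# Placeholder system prompt and stylistic (query, answer) pairs -- URIAL reports
# that as few as three curated examples can suffice for a well-pre-trained base LLM.
SYSTEM_PROMPT = ("You are a helpful, honest assistant. Answer accurately, structure your "
                 "response clearly, and decline unsafe requests politely.")
STYLISTIC_EXAMPLES = [
    ("What is the boiling point of water?",
     "At sea-level pressure, water boils at 100 °C (212 °F); the boiling point drops at altitude."),
    ("How do I write a polite follow-up email?",
     "Keep it short: thank them, restate your question in one sentence, and suggest a next step."),
    ("Tell me how to pick a lock.",
     "I can't help with bypassing locks you don't own, but a locksmith can help if you're locked out."),
]

def build_urial_prompt(user_query: str) -> str:
    parts = [SYSTEM_PROMPT, ""]
    for query, answer in STYLISTIC_EXAMPLES:
        parts += [f"# Query:\n{query}", f"# Answer:\n{answer}", ""]
    parts += [f"# Query:\n{user_query}", "# Answer:"]
    return "\n".join(parts)   # fed to an *untuned* base LLM for completion

print(build_urial_prompt("Explain what instruction tuning does."))
```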
null | https://openreview.net/forum?id=magEgFpK1y | @inproceedings{
saito2023verbosity,
title={Verbosity Bias in Preference Labeling by Large Language Models},
author={Keita Saito and Akifumi Wachi and Koki Wataoka and Youhei Akimoto},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=magEgFpK1y}
} | In recent years, Large Language Models (LLMs) have witnessed a remarkable surge in prevalence, altering the landscape of natural language processing and machine learning. One key factor in improving the performance of LLMs is alignment with humans, achieved through Reinforcement Learning from Human Feedback (RLHF), as is the case for many LLMs such as GPT-4 and Bard. In addition, recent studies are investigating the replacement of human feedback with feedback from other LLMs, named Reinforcement Learning from AI Feedback (RLAIF). We examine the biases that come along with evaluating LLMs with other LLMs and take a closer look at verbosity bias – a bias where LLMs sometimes prefer more verbose answers even when the answers are of similar quality. We see that in our problem setting, GPT-4 prefers longer answers more than humans do. We also propose a metric to measure this bias. | Verbosity Bias in Preference Labeling by Large Language Models | [
"Keita Saito",
"Akifumi Wachi",
"Koki Wataoka",
"Youhei Akimoto"
] | Workshop/Instruction | 2310.10076 | [
""
] | https://huggingface.co/papers/2310.10076 | 0 | 2 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
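The verbosity-bias record above proposes a metric for an LLM judge's tendency to favor longer answers. The exact definition is in the paper; the sketch below is only one plausible proxy: the rate at which a judge picks the longer of two responses, which can then be compared between an LLM judge and human raters on the same pairs.

```python
# Illustrative verbosity-bias proxy, not necessarily the paper's metric.
from typing import List, Tuple

def longer_preference_rate(pairs: List[Tuple[str, str]], choices: List[int]) -> float:
    """pairs[i] = (response_a, response_b); choices[i] = 0 if a is preferred, 1 if b."""
    hits, total = 0, 0
    for (a, b), c in zip(pairs, choices):
        if len(a.split()) == len(b.split()):
            continue                          # skip length ties
        longer = 0 if len(a.split()) > len(b.split()) else 1
        hits += int(c == longer)
        total += 1
    return hits / total if total else float("nan")

pairs = [("Short answer.", "A much longer, more elaborate answer with extra caveats."),
         ("Detailed multi-sentence reply with examples and caveats.", "Brief reply.")]
print(longer_preference_rate(pairs, choices=[1, 0]))   # 1.0: this judge always prefers the longer one
```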
null | https://openreview.net/forum?id=kqtB2AueFm | @inproceedings{
byun2023automatic,
title={Automatic Construction of a Korean Toxic Instruction Dataset for Ethical Tuning of Large Language Models},
author={SungJoo Byun and Dongjun Jang and Hyemi Jo and Hyopil Shin},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=kqtB2AueFm}
} | $\textit{\textbf{Caution}: this paper may include material that could be offensive or distressing.} $
The advent of Large Language Models (LLMs) necessitates the development of training approaches that mitigate the generation of unethical language and aptly manage toxic user queries. Given the challenges related to human labor and the scarcity of data, we present KoTox, comprising 39K unethical instruction-output pairs. This collection of automatically generated toxic instructions refines the training of LLMs and establishes a foundational framework for improving LLMs' ethical awareness and response to various toxic inputs, promoting more secure and responsible interactions in Natural Language Processing (NLP) applications. | Automatic Construction of a Korean Toxic Instruction Dataset for Ethical Tuning of Large Language Models | [
"SungJoo Byun",
"Dongjun Jang",
"Hyemi Jo",
"Hyopil Shin"
] | Workshop/Instruction | 2311.18215 | [
""
] | https://huggingface.co/papers/2311.18215 | 1 | 0 | 0 | 4 | [] | [
"SungJoo/KoTox"
] | [] | [] | [
"SungJoo/KoTox"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=kEK08VdSO5 | @inproceedings{
tian2023finetuning,
title={Fine-tuning Language Models for Factuality},
author={Katherine Tian and Eric Mitchell and Huaxiu Yao and Christopher Manning and Chelsea Finn},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=kEK08VdSO5}
} | The fluency and creativity of large pre-trained language models (LLMs) have led to their widespread use, sometimes even as a replacement for traditional search engines. However, language models are prone to making convincing but factually inaccurate claims, often referred to as 'hallucinations', which can harmfully perpetuate myths and misconceptions. Further, manual fact-checking of model responses is a time-consuming process, making human factuality labels expensive to acquire. In this work, we leverage two key recent innovations in NLP to fine-tune language models to be more factual without human labeling, targeting more open-ended generation settings than past work. First, several recent works have proposed methods for scoring the factuality of open-ended text derived from consistency with an external knowledge base or simply a large model's confidence scores. Second, the Direct Preference Optimization algorithm enables straightforward fine-tuning of language models on objectives other than supervised imitation, using a preference ranking over possible model responses. We show that learning from preference rankings generated by either automated criterion significantly improves the factuality of Llama-2 on held-out topics (percent of generated claims that are correct) compared with existing RLHF procedures or decoding strategies targeted at factuality, showing over 50% and 20--30% error reduction for biographies and medical questions respectively. | Fine-tuning Language Models for Factuality | [
"Katherine Tian",
"Eric Mitchell",
"Huaxiu Yao",
"Christopher Manning",
"Chelsea Finn"
] | Workshop/Instruction | 2311.08401 | [
""
] | https://huggingface.co/papers/2311.08401 | 4 | 28 | 2 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
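The factuality-tuning record above builds preference pairs (a more-factual response preferred over a less-factual one, scored automatically) and optimizes them with Direct Preference Optimization. Below is the standard DPO loss on precomputed sequence log-probabilities, a sketch of the training objective rather than the authors' code; beta and the toy numbers are assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """All inputs are tensors of per-example summed log-probabilities.
    *_w = preferred (more factual) response, *_l = dispreferred response."""
    margin = (policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()

# Toy numbers: the policy already slightly favors the factual responses.
loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-13.0, -11.0]),
                torch.tensor([-12.5, -10.0]), torch.tensor([-12.8, -10.5]))
print(float(loss))
```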
null | https://openreview.net/forum?id=jbNjgmE0OP | @inproceedings{
asai2023selfrag,
title={Self-{RAG}: Self-reflective Retrieval Augmented Generation},
author={Akari Asai and Zeqiu Wu and Yizhong Wang and Avirup Sil and Hannaneh Hajishirzi},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=jbNjgmE0OP}
} | Scaling up language models (LMs) or instruction tuning has shown limited effects on improving factuality of LM outputs. Retrieval-Augmented Generation (RAG), an ad hoc approach that augments Language Models (LMs) with retrieval, decreases hallucination issues of large LMs. However, indiscriminately retrieving and incorporating a fixed number of retrieved passages, regardless of whether retrieval is necessary, or passages are relevant, diminishes instruction-following LM versatility or can lead to unhelpful response generation.
In this work, we introduce a new framework called **Self-Reflective Retrieval-Augmented Generation (Self-RAG)** that enhances an LM's quality and factuality through retrieval and self-reflection. Our framework trains a single arbitrary LM to learn to adaptively retrieve passages on-demand, and generate and reflect on retrieved passages and its own generations using special tokens, called *reflection* tokens, on diverse instruction-tuning data with interleaving retrieved passages and reflection tokens. Generating reflection tokens makes the LM controllable during the inference phase, enabling it to tailor its behavior to diverse task requirements. Experiments show that Self-RAG (7B and 13B parameters) significantly outperforms state-of-the-art pre-trained and instruction-following LLMs and retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG outperforms ChatGPT and retrieval-augmented Llama2-chat on open-domain QA, fact verification and reasoning tasks, and it shows significant gains in factuality scores and citation accuracy for long-form generations relative to these models. | Self-RAG: Self-reflective Retrieval Augmented Generation | [
"Akari Asai",
"Zeqiu Wu",
"Yizhong Wang",
"Avirup Sil",
"Hannaneh Hajishirzi"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=fhSTeAAVb6 | @inproceedings{
min2023factscore,
title={{FA}ctScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation},
author={Sewon Min and Kalpesh Krishna and Xinxi Lyu and Mike Lewis and Wen-tau Yih and Pang Wei Koh and Mohit Iyyer and Luke Zettlemoyer and Hannaneh Hajishirzi},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=fhSTeAAVb6}
} | Evaluating the factuality of long-form text generated by large language models (LMs) is non-trivial because (1) generations often contain a mixture of supported and unsupported pieces of information, making binary judgments of quality inadequate, and (2) human evaluation is time-consuming and costly. In this paper, we introduce FActScore (Factual precision in Atomicity Score), a new evaluation that breaks a generation into a series of atomic facts and computes the percentage of atomic facts supported by a reliable knowledge source. We conduct an extensive human evaluation to obtain FActScores of people biographies generated by several state-of-the-art commercial LMs -- InstructGPT, ChatGPT, and the retrieval-augmented PerplexityAI -- and report new analysis demonstrating the need for such a fine-grained score (e.g., ChatGPT only achieves 58%). Since human evaluation is costly, we also introduce an automated model that estimates FActScore, using retrieval and a strong language model, with less than a 2% error rate. Finally, we use this automated metric to evaluate 6,500 generations from a new set of 13 recent LMs that would have cost $26K if evaluated by humans, with various findings: GPT-4 and ChatGPT are more factual than public models, and Vicuna and Alpaca are some of the best public models. | FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation | [
"Sewon Min",
"Kalpesh Krishna",
"Xinxi Lyu",
"Mike Lewis",
"Wen-tau Yih",
"Pang Wei Koh",
"Mohit Iyyer",
"Luke Zettlemoyer",
"Hannaneh Hajishirzi"
] | Workshop/Instruction | 2305.14251 | [
"https://github.com/shmsw25/factscore"
] | https://huggingface.co/papers/2305.14251 | 1 | 2 | 0 | 9 | [
"kalpeshk2011/instruct-llama-7b-wdiff"
] | [] | [] | [
"kalpeshk2011/instruct-llama-7b-wdiff"
] | [] | [] | 1 | poster |
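FActScore, per the record above, is the fraction of atomic facts in a generation that a knowledge source supports. The aggregation below mirrors that definition; the fact extractor and support checker are toy stand-ins (the real pipeline uses an LLM plus retrieval over a corpus such as Wikipedia).

```python
from typing import Callable, List

def factscore(generation: str,
              extract_facts: Callable[[str], List[str]],
              is_supported: Callable[[str], bool]) -> float:
    """Fraction of extracted atomic facts that the knowledge source supports."""
    facts = extract_facts(generation)
    if not facts:
        return 0.0
    return sum(is_supported(fact) for fact in facts) / len(facts)

# Toy stand-ins so the sketch runs: sentence-level "facts", exact-match "support".
kb = {"marie curie won two nobel prizes", "marie curie was born in warsaw"}
split_sentences = lambda text: [s.strip().lower() for s in text.split(".") if s.strip()]
supported = lambda fact: fact in kb
print(factscore("Marie Curie was born in Warsaw. Marie Curie won three Nobel Prizes.",
                split_sentences, supported))   # 0.5: one of two facts is supported
```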
null | https://openreview.net/forum?id=fT3gT2QKOb | @inproceedings{
sakhinana2023hierarchical,
title={Hierarchical Network Fusion for Multi-Modal Electron Micrograph Representation Learning with Foundational Large Language Models},
author={Sagar Sakhinana and Sannidhi Geethan and Venkataramana Runkana},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=fT3gT2QKOb}
} | Characterizing materials with electron micrographs is a crucial task in fields such as semiconductors and quantum materials. The complex hierarchical structure of micrographs often poses challenges for traditional classification methods. In this study, we propose an innovative backbone architecture for analyzing electron micrographs. We create multi-modal representations of the micrographs by tokenizing them into patch sequences and, additionally, representing them as vision graphs, commonly referred to as patch attributed graphs. We introduce the Hierarchical Network Fusion (HNF), a multi-layered network architecture that facilitates information exchange between the multi-modal representations and knowledge integration across different patch resolutions. Furthermore, we leverage large language models (LLMs) to generate detailed technical descriptions of nanomaterials as auxiliary information to assist in the downstream task. We utilize a cross-modal attention mechanism for knowledge fusion across cross-domain representations (both image-based and linguistic insights) to predict the nanomaterial category. This multi-faceted approach promises a more comprehensive and accurate representation and classification of micrographs for nanomaterial identification. Our framework outperforms traditional methods, overcoming challenges posed by distributional shifts, and facilitating high-throughput screening. | Hierarchical Network Fusion for Multi-Modal Electron Micrograph Representation Learning with Foundational Large Language Models | [
"Sagar Sakhinana",
"Sannidhi Geethan",
"Venkataramana Runkana"
] | Workshop/Instruction | 2408.13661 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=fHtLiX36r6 | @inproceedings{
sharma2023exploring,
title={Exploring and Improving the Spatial Reasoning Abilities of Large Language Models},
author={Manasi Sharma},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=fHtLiX36r6}
} | Large Language Models (LLMs) represent formidable tools for sequence modeling, boasting an innate capacity for general pattern recognition. Nevertheless, their broader spatial reasoning capabilities remain insufficiently explored. In this paper, we investigate the zero-shot performance of LLMs when confronted with a limited dataset comprising 3D robotic trajectory data and associated tasks, such as directional and motion labeling. Additionally, we introduce a novel prefix-based prompting mechanism, which yields a 30% improvement on the 3D trajectory data and an increase of up to 16% on SpartQA tasks when contrasted with the conventional vanilla prompt baseline (with gains over Chain-of-Thought prompting as well). The experimentation with 3D trajectory data offers an intriguing glimpse into the manner in which LLMs engage with numerical and spatial information, thus laying a solid foundation for the identification of target areas for future enhancements. | Exploring and Improving the Spatial Reasoning Abilities of Large Language Models | [
"Manasi Sharma"
] | Workshop/Instruction | 2312.01054 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eBu11K7THc | @inproceedings{
lee2023investigating,
title={Investigating the Effects of Zero-Shot Chain-of-Thought on Empathetic Dialogue Generation},
author={Young-Jun Lee and Dokyong Lee and Jihui Im and Joo Won Sung and Ho-Jin Choi},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=eBu11K7THc}
} | This study investigates the effectiveness of the Zero-shot Chain-of-Thought (CoT) approach, specifically "Let's think step by step.", in boosting the empathetic reasoning capabilities of Large Language Models (LLMs). Our experiments, however, reveal that Zero-shot CoT does not sufficiently enhance the empathetic reasoning of LLMs as compared to Zero-shot In-Context Learning (ICL), according to a variety of performance metrics. Importantly, we discovered that the perspective-taking prompting method, or "Let's put {speaker} into {interlocutor}'s shoes.", surpasses the performance of Zero-shot CoT, especially in terms of emotion and intent accuracy, with improvements of 21% and 7%, respectively. The source code will be released after publication. | Investigating the Effects of Zero-Shot Chain-of-Thought on Empathetic Dialogue Generation | [
"Young-Jun Lee",
"Dokyong Lee",
"Jihui Im",
"Joo Won Sung",
"Ho-Jin Choi"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=czQZ4aLUb6 | @inproceedings{
zhou2023analyzing,
title={Analyzing and Mitigating Object Hallucination in Large Vision-Language Models},
author={Yiyang Zhou and Chenhang Cui and Jaehong Yoon and Linjun Zhang and Zhun Deng and Chelsea Finn and Mohit Bansal and Huaxiu Yao},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=czQZ4aLUb6}
} | Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | [
"Yiyang Zhou",
"Chenhang Cui",
"Jaehong Yoon",
"Linjun Zhang",
"Zhun Deng",
"Chelsea Finn",
"Mohit Bansal",
"Huaxiu Yao"
] | Workshop/Instruction | 2310.00754 | [
"https://github.com/yiyangzhou/lure"
] | https://huggingface.co/papers/2310.00754 | 1 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=byNSabn9cA | @inproceedings{
lei2023chain,
title={Chain of Natural Language Inference for Reducing Large Language Model Hallucinations},
author={Deren Lei and Yaxi Li and Mengya Hu and Mingyu Wang and Xi Yun},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=byNSabn9cA}
} | Large language models (LLMs) can generate fluent natural language texts when given relevant documents as background context. This ability has attracted considerable interest in developing industry applications of LLMs. However, LLMs are prone to generate hallucinations that are not supported by the provided sources. In this paper, we propose a hierarchical framework to detect and mitigate such ungrounded hallucination. Our framework uses Chain of Natural Language Inference (CoNLI) for hallucination detection and hallucination reduction via post-editing. Our approach achieves state-of-the-art performance on hallucination detection and enhances text quality through rewrite, using LLMs without any fine-tuning or domain-specific prompt engineering. We show that this simple plug-and-play framework can serve as an effective choice for hallucination detection and reduction, achieving competitive performance across various contexts. | Chain of Natural Language Inference for Reducing Large Language Model Hallucinations | [
"Deren Lei",
"Yaxi Li",
"Mengya Hu",
"Mingyu Wang",
"Xi Yun"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=bH64KCBzqS | @inproceedings{
zhang2023chainofthought,
title={Chain-of-Thought Reasoning is a Policy Improvement Operator},
author={Hugh Zhang and David Parkes},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=bH64KCBzqS}
} | Large language models have astounded the world with fascinating new capabilities. However, they currently lack the ability to teach themselves new skills, relying instead on large amounts of human-generated training data. We introduce SECToR (Self-Education via Chain-of-Thought Reasoning), a proof-of-concept demonstration that language models can teach themselves new skills using chain-of-thought reasoning. During the self-learning loop, SECToR asks models to solve addition problems using chain-of-thought reasoning before training the next version of the model to solve those same problems directly without using such reasoning. This process often results in an improved model which can, when again augmented with chain-of-thought reasoning, solve even harder problems than the original model, allowing the self-learning loop to continue. Language models trained via SECToR autonomously learn to add up to 29-digit numbers without access to any ground truth examples beyond an initial supervised fine-tuning phase consisting only of numbers with 6 or fewer digits. Our central hypothesis is that chain-of-thought reasoning can act as a policy improvement operator, similarly to how Monte-Carlo Tree Search is used in AlphaZero \citep{silver2017mastering}. We hope that this research can lead to new directions in which language models can learn to teach themselves without the need for human demonstrations. | Chain-of-Thought Reasoning is a Policy Improvement Operator | [
"Hugh Zhang",
"David Parkes"
] | Workshop/Instruction | 2309.08589 | [
""
] | https://huggingface.co/papers/2309.08589 | 1 | 1 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
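The SECToR record above alternates between solving harder problems with chain-of-thought and training the next model to answer them directly. The sketch below shows only the data flow of one such round, with stand-in callables for the model, the trainer, and the problem set; it is not the paper's training pipeline.

```python
from typing import Callable, Dict, List

def sector_round(answer_with_cot: Callable[[str], str],
                 train_direct: Callable[[Dict[str, str]], Callable[[str], str]],
                 problems: List[str]) -> Callable[[str], str]:
    # 1) Harvest answers the current model reaches *with* chain-of-thought reasoning.
    distilled = {p: answer_with_cot(p) for p in problems}
    # 2) Fine-tune the next model to produce those answers directly, without reasoning.
    return train_direct(distilled)

# Toy instantiation: "CoT" evaluates additions; "training" memorizes the pairs.
problems = ["12345+67890", "999+1"]
cot = lambda p: str(sum(int(x) for x in p.split("+")))
train = lambda pairs: (lambda q: pairs.get(q, "unknown"))
next_model = sector_round(cot, train, problems)
print(next_model("999+1"))   # '1000', now answered "directly" by the distilled model
```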
null | https://openreview.net/forum?id=b519y1V7fX | @inproceedings{
wang2023beyond,
title={Beyond Reverse {KL}: Generalizing Direct Preference Optimization with Diverse Divergence Constraints},
author={Chaoqi Wang and Yibo Jiang and Chenghao Yang and Han Liu and Yuxin Chen},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=b519y1V7fX}
} | The increasing capabilities of large language models (LLMs) raise opportunities for artificial general intelligence but concurrently amplify safety concerns, such as potential misuse of AI systems, necessitating effective AI alignment. Reinforcement Learning from Human Feedback (RLHF) has emerged as a promising pathway towards AI alignment but brings forth challenges due to its complexity and dependence on a separate reward model. Direct Preference Optimization (DPO) has been proposed as an alternative; and it remains equivalent to RLHF under the reverse KL regularization constraint. This paper presents $f$-DPO, a generalized approach to DPO by incorporating diverse divergence constraints. We show that under certain $f$-divergences, including Jensen-Shannon divergence, forward KL divergences and $\alpha$-divergences, the complex relationship between the reward and optimal policy can also be simplified by addressing the Karush–Kuhn–Tucker conditions. This eliminates the need for estimating the normalizing constant in the Bradley-Terry model and enables a tractable mapping between the reward function and the optimal policy. Our approach optimizes LLMs to align with human preferences in a more efficient and supervised manner under a broad set of divergence constraints. Empirically, adopting these divergences ensures a balance between alignment performance and generation diversity. Importantly, our $f$-DPO outperforms PPO-based methods in divergence efficiency, and divergence constraints directly influence expected calibration error (ECE). | Beyond Reverse KL: Generalizing Direct Preference Optimization with Diverse Divergence Constraints | [
"Chaoqi Wang",
"Yibo Jiang",
"Chenghao Yang",
"Han Liu",
"Yuxin Chen"
] | Workshop/Instruction | 2309.16240 | [
""
] | https://huggingface.co/papers/2309.16240 | 2 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
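The f-DPO record above generalizes DPO from the reverse-KL constraint to other f-divergences. As a hedged sketch (constants and full derivations are in the paper), the pairwise loss below swaps the log policy/reference ratio of standard DPO for the derivative f' of other divergence generators, evaluated on the probability ratios of the preferred and dispreferred responses.

```python
import torch
import torch.nn.functional as F

# Divergence generators' derivatives f'(x); the per-example "reward" is taken to be
# proportional to f'(pi/pi_ref), so swapping f' changes the pairwise loss.
F_PRIME = {
    "reverse_kl": lambda r: torch.log(r),                 # f(x) = x log x (standard DPO, up to a constant)
    "forward_kl": lambda r: -1.0 / r,                     # f(x) = -log x
    "jsd":        lambda r: torch.log(2 * r / (1 + r)),   # Jensen-Shannon generator
}

def f_dpo_loss(ratio_w, ratio_l, divergence="reverse_kl", beta=0.1):
    """ratio_* = pi(y|x) / pi_ref(y|x) for the preferred (w) / dispreferred (l) response."""
    fp = F_PRIME[divergence]
    return -F.logsigmoid(beta * (fp(ratio_w) - fp(ratio_l))).mean()

rw = torch.tensor([1.4, 1.1]); rl = torch.tensor([0.8, 0.9])
for name in F_PRIME:
    print(name, float(f_dpo_loss(rw, rl, name)))
```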
null | https://openreview.net/forum?id=apEdj9baZx | @inproceedings{
sun2023interactive,
title={Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks},
author={Lingfeng Sun and Devesh Jha and Chiori Hori and Siddarth Jain and Radu Corcodel and Xinghao Zhu and Masayoshi Tomizuka and Diego Romeres},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=apEdj9baZx}
} | Designing robotic agents to perform open vocabulary tasks has been the long-standing goal in robotics and AI. Recently, Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks. However, planning for these tasks in the presence of uncertainties is challenging as it requires "chain-of-thought" reasoning, aggregating information from the environment, updating state estimates, and generating actions based on the updated state estimates. In this paper, we present an interactive planning technique for partially observable tasks using LLMs. In the proposed method, an LLM is used to collect missing information from the environment using a robot and infer the state of the underlying problem from collected observations while guiding the robot to perform the required actions. We also use a fine-tuned Llama 2 model via self-instruct and compare its performance against a pre-trained LLM like GPT-4. Results are demonstrated on several tasks in simulation as well as real-world environments. | Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks | [
"Lingfeng Sun",
"Devesh K. Jha",
"Chiori Hori",
"Siddarth Jain",
"Radu Corcodel",
"Xinghao Zhu",
"Masayoshi Tomizuka",
"Diego Romeres"
] | Workshop/Instruction | 2312.06876 | [
""
] | https://huggingface.co/papers/2312.06876 | 2 | 1 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=YA56eOURrG | @inproceedings{
zhang2023tell,
title={Tell Your Model Where to Attend: Post-hoc Attention Steering for {LLM}s},
author={Qingru Zhang and Chandan Singh and Liyuan Liu and Xiaodong Liu and Bin Yu and Jianfeng Gao and Tuo Zhao},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=YA56eOURrG}
} | In human-written articles, we often leverage the subtleties of text style, such as bold and italics, to guide the attention of readers. These textual emphases are vital for the readers to grasp the conveyed information. When interacting with large language models (LLMs), we have a similar need -- steering the model to pay closer attention to user-specified information, e.g., an instruction. Existing methods, however, are constrained to process plain text and do not support such a mechanism. This motivates us to introduce PASTA -- Post-hoc Attention STeering Approach, a method that allows LLMs to read text with user-specified emphasis marks. To this end, PASTA identifies a small subset of attention heads and applies precise attention reweighting on them, directing the model attention to user-specified parts. Like prompting, PASTA is applied at inference time and does not require changing any model parameters. Experiments demonstrate that PASTA can substantially enhance an LLM's ability to follow user instructions or integrate new knowledge from user inputs, leading to a significant performance improvement on a variety of tasks, e.g., an average accuracy improvement of 22% for LLAMA-7B. Our code is publicly available at https://github.com/QingruZhang/PASTA . | Tell Your Model Where to Attend: Post-hoc Attention Steering for LLMs | [
"Qingru Zhang",
"Chandan Singh",
"Liyuan Liu",
"Xiaodong Liu",
"Bin Yu",
"Jianfeng Gao",
"Tuo Zhao"
] | Workshop/Instruction | 2311.02262 | [
"https://github.com/qingruzhang/pasta"
] | https://huggingface.co/papers/2311.02262 | 6 | 10 | 2 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
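PASTA, per the record above, reweights attention at inference time in a small set of selected heads so that user-emphasized spans receive more probability mass. The function below sketches that reweighting on a single post-softmax attention tensor; the scaling coefficient, the choice of heads, and how the hook attaches to a real model follow the paper and its released code.

```python
import torch

def steer_attention(attn, emphasized_idx, alpha=0.01):
    """attn: (heads, q_len, k_len) post-softmax weights for the steered heads.
    Downscale non-emphasized key positions by alpha, then renormalize each row."""
    scale = torch.full(attn.shape[-1:], alpha)
    scale[emphasized_idx] = 1.0
    steered = attn * scale                       # broadcast over heads and query positions
    return steered / steered.sum(dim=-1, keepdim=True)

attn = torch.softmax(torch.randn(2, 4, 6), dim=-1)   # 2 heads, 4 queries, 6 keys
print(steer_attention(attn, emphasized_idx=torch.tensor([1, 2]))[0, 0])
```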
null | https://openreview.net/forum?id=V09d7AMh15 | @inproceedings{
jiang2023identifying,
title={Identifying and Mitigating Vulnerabilities in {LLM}-Integrated Applications},
author={Fengqing Jiang and Zhangchen Xu and Luyao Niu and Boxin Wang and Jinyuan Jia and Bo Li and Radha Poovendran},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=V09d7AMh15}
} | The remarkable instruction following capabilities of large language models (LLMs) allow them to be increasingly deployed as the service backend for LLM-integrated applications such as code completion and AI-powered search. Compared with the traditional usage of LLMs where users directly send queries to an LLM, LLM-integrated applications serve as middleware to refine users’ queries with domain-specific knowledge to better inform LLMs and enhance the responses. Despite numerous opportunities and benefits, blindly following instructions given to LLMs exposes LLM-integrated applications to new attack surfaces. Understanding, minimizing, and eliminating the emerging attack surfaces is a new area of research. In this work, we consider a setup where the user and LLM interact via an LLM-integrated application in the middle. We focus on the communication rounds that begin with user’s queries and end with LLM-integrated application returning responses to the queries, powered by LLMs at the service backend. For this query-response protocol, we identify potential high-risk vulnerabilities that can originate from the malicious application developer or from an outsider threat initiator that is able to control the database access, manipulate and poison data that are high-risk for the user. Successful exploits of the identified vulnerabilities result in the users receiving responses tailored to the intent of a threat initiator (e.g., biased preferences for certain products). We assess such threats against LLM-integrated applications empowered by OpenAI GPT-3.5 and GPT-4. Our empirical results show that the threats can effectively bypass the restrictions and moderation policies of OpenAI, resulting in users receiving responses that contain bias, toxic content, privacy risk, and disinformation. To mitigate those threats, we identify and define four key properties, namely integrity, source identification, attack detectability, and utility preservation, that need to be satisfied by a safe LLM-integrated application. Based on these properties, we develop a lightweight, threat-agnostic defense that mitigates both insider and outsider threats. Our evaluations demonstrate the efficacy of our defense. | Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications | [
"Fengqing Jiang",
"Zhangchen Xu",
"Luyao Niu",
"Boxin Wang",
"Jinyuan Jia",
"Bo Li",
"Radha Poovendran"
] | Workshop/Instruction | 2311.16153 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=UWymGURI75 | @inproceedings{
toyer2023tensor,
title={Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game},
author={Sam Toyer and Olivia Watkins and Ethan Mendes and Justin Svegliato and Luke Bailey and Tiffany Wang and Isaac Ong and Karim Elmaaroufi and Pieter Abbeel and Trevor Darrell and Alan Ritter and Stuart Russell},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=UWymGURI75}
} | While Large Language Models (LLMs) are increasingly being used in real-world applications, they remain vulnerable to prompt injection attacks: malicious third party prompts that subvert the intent of the system designer. To help researchers study this problem, we present a dataset of over 126,000 prompt injection attacks and 46,000 prompt-based "defenses" against prompt injection, all created by players of an online game called Tensor Trust. To the best of our knowledge, this is currently the largest dataset of human-generated adversarial examples for instruction-following LLMs. The attacks in our dataset have a lot of easily interpretable structure, and shed light on the weaknesses of LLMs. We also use the dataset to create a benchmark for resistance to two types of prompt injection, which we refer to as prompt extraction and prompt hijacking. Our benchmark results show that many models are vulnerable to the attack strategies in the Tensor Trust dataset. Furthermore, we show that some attack strategies from the dataset generalize to deployed LLM-based applications, even though they have a very different set of constraints to the game. We release all data and source code at https://tensortrust.ai/paper. | Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game | [
"Sam Toyer",
"Olivia Watkins",
"Ethan Mendes",
"Justin Svegliato",
"Luke Bailey",
"Tiffany Wang",
"Isaac Ong",
"Karim Elmaaroufi",
"Pieter Abbeel",
"Trevor Darrell",
"Alan Ritter",
"Stuart Russell"
] | Workshop/Instruction | 2311.01011 | [
""
] | https://huggingface.co/papers/2311.01011 | 1 | 0 | 0 | 12 | [] | [
"qxcv/tensor-trust"
] | [
"latticeflow/compl-ai-board"
] | [] | [
"qxcv/tensor-trust"
] | [
"latticeflow/compl-ai-board"
] | 1 | poster |
null | https://openreview.net/forum?id=Rye1neGGUd | @inproceedings{
ostapenko2023a,
title={A Case Study of Instruction Tuning with Mixture of Parameter-Efficient Experts},
author={Oleksiy Ostapenko and Lucas Caccia and Zhan Su and Nicolas Le Roux and Laurent Charlin and Alessandro Sordoni},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=Rye1neGGUd}
} | We study the applicability of mixture of parameter-efficient experts (MoPEs) for instruction-tuning large decoder-only language models. Recent literature indicates that MoPEs might enhance performance in specific multi-task instruction-following datasets. In this paper, we extend these previous results and study the applicability of MoPEs in settings previously overlooked: a) with open-domain instruction-following datasets; b) with recent decoder-only models; and c) with downstream out-of-distribution test sets. We build on top of LLaMA1-13B/-7B and LLaMA2-13B. We study different variants of learned routing, namely per-example routing ([PE]) and a more expensive per-token ([PT]) routing. Overall, we are unable to substantiate the strong performance gains observed in related studies in our setting. We observe occasional enhancements of LLAMA2 fine-tuned on the Open Platypus dataset in 0-shot SNI evaluation and TruthfulQA evaluation after fine-tuning on a subset of Flan. We shed some light on the inner workings of MoPEs by comparing different routing strategies. We find that [PE] routing tends to collapse at downstream evaluation time, reducing the importance of the router's application.
We plan to publicly release our code. | A Case Study of Instruction Tuning with Mixture of Parameter-Efficient Experts | [
"Oleksiy Ostapenko",
"Lucas Caccia",
"Zhan Su",
"Nicolas Le Roux",
"Laurent Charlin",
"Alessandro Sordoni"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RJyfNSoyDC | @inproceedings{
zhai2023investigating,
title={Investigating the Catastrophic Forgetting in Multimodal Large Language Models},
author={Yuexiang Zhai and Shengbang Tong and Xiao Li and Mu Cai and Qing Qu and Yong Jae Lee and Yi Ma},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=RJyfNSoyDC}
} | Following the success of GPT4, there has been a surge in interest in multimodal large language model (MLLM) research. This line of research focuses on developing general-purpose LLMs through fine-tuning pre-trained LLMs and vision models. However, catastrophic forgetting, a notorious phenomenon where the fine-tuned model fails to retain similar performance compared to the pre-trained model, still remains an inherent problem in multimodal LLMs (MLLM). In this paper, we introduce EMT: Evaluating MulTimodality for evaluating the catastrophic forgetting in MLLMs, by treating each MLLM as an image classifier. We first apply EMT to evaluate several open-source fine-tuned MLLMs and we discover that almost all evaluated MLLMs fail to retain the same performance levels as their vision encoders on standard image classification tasks. Moreover, we continue fine-tuning LLaVA, an MLLM and utilize EMT to assess performance throughout the fine-tuning. Interestingly, our results suggest that early-stage fine-tuning on an image dataset improves performance across other image datasets, by enhancing the alignment of text and visual features. However, as fine-tuning proceeds, the MLLMs begin to hallucinate, resulting in a significant loss of generalizability, even when the image encoder remains frozen. Our results suggest that MLLMs have yet to demonstrate performance on par with their vision models on standard image classification tasks and the current MLLM fine-tuning procedure still has room for improvement. | Investigating the Catastrophic Forgetting in Multimodal Large Language Models | [
"Yuexiang Zhai",
"Shengbang Tong",
"Xiao Li",
"Mu Cai",
"Qing Qu",
"Yong Jae Lee",
"Yi Ma"
] | Workshop/Instruction | 2309.10313 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=QxtL4Q1enz | @inproceedings{
jha2023limit,
title={{LIMIT}: Less Is More for Instruction Tuning Across Evaluation Paradigms},
author={Aditi Jha and Sam Havens and Jeremy Dohmann and Alexander Trott and Jacob Portes},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=QxtL4Q1enz}
} | Large Language Models are traditionally finetuned on large instruction datasets. However recent studies suggest that small, high-quality datasets can suffice for general purpose instruction following. This lack of consensus surrounding finetuning best practices is in part due to rapidly diverging approaches to LLM evaluation. In this study, we ask whether a small amount of diverse finetuning samples can improve performance on both traditional perplexity-based NLP benchmarks, and on open-ended, model-based evaluation. We finetune open-source MPT-7B and MPT-30B models on finetuning datasets of various sizes ranging from 1k to 60k samples. We find that subsets of 1k-6k instruction finetuning samples are sufficient to achieve good performance on both (1) traditional NLP benchmarks and (2) model-based evaluation. Finally, we show that mixing textbook-style and open-ended QA finetuning datasets optimizes performance on both evaluation paradigms. | LIMIT: Less Is More for Instruction Tuning Across Evaluation Paradigms | [
"Aditi Jha",
"Sam Havens",
"Jeremy Dohmann",
"Alexander Trott",
"Jacob Portes"
] | Workshop/Instruction | 2311.13133 | [
""
] | https://huggingface.co/papers/2311.13133 | 2 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=QkdRqpClab | @inproceedings{
pan2023lets,
title={Let's Reinforce Step by Step},
author={Sarah Pan and Vladislav Lialin and Sherin Muckatira and Anna Rumshisky},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=QkdRqpClab}
} | While recent advances have boosted LM proficiency in linguistic benchmarks, LMs consistently struggle to reason correctly on complex tasks like mathematics. We turn to Reinforcement Learning from Human Feedback (RLHF) as a method with which to shape model reasoning processes. In particular, we explore two reward schemes, outcome-supervised reward models (ORMs) and process-supervised reward models (PRMs), to optimize for logical reasoning. Our results show that the fine-grained reward provided by PRM-based methods enhances accuracy on simple mathematical reasoning (GSM8K) while, unexpectedly, reducing performance in complex tasks (MATH). Furthermore, we show the critical role reward aggregation functions play in model performance. Providing promising avenues for future research, our study underscores the need for further exploration into fine-grained reward modeling for more reliable language models. | Let's Reinforce Step by Step | [
"Sarah Pan",
"Vladislav Lialin",
"Sherin Muckatira",
"Anna Rumshisky"
] | Workshop/Instruction | 2311.05821 | [
""
] | https://huggingface.co/papers/2311.05821 | 1 | 1 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=QUwcQFgA5a | @inproceedings{
lee2023dialogcc,
title={Dialog{CC}: An Automated Pipeline for Creating High-Quality Multi-modal Dialogue Datasets},
author={Young-Jun Lee and Byungsoo Ko and Han-Gyu Kim and Jonghwan Hyeon and Ho-Jin Choi},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=QUwcQFgA5a}
} | As sharing images is a crucial part of instant messaging, there has been active research on learning image-text multi-modal dialogue models.
However, training a well-generalized multi-modal dialogue model remains challenging due to the low quality and limited diversity of images per dialogue in existing multi-modal dialogue datasets.
In this paper, we propose an automated pipeline to construct a multi-modal dialogue dataset, ensuring both dialogue quality and image diversity without requiring any human effort.
In our pipeline, to guarantee the coherence between images and dialogue, we prompt GPT-4 to infer potential image-sharing moments - specifically, the utterance, speaker, rationale, and image description.
Furthermore, we leverage CLIP similarity to maintain consistency among the multiple images aligned to the utterance.
Through this pipeline, we introduce DialogCC, a high-quality and diverse multi-modal dialogue dataset that surpasses existing datasets in terms of quality and diversity in human evaluation.
Our comprehensive experiments highlight that when multi-modal dialogue models are trained using our dataset, their generalization performance on unseen dialogue datasets is significantly enhanced. We will release the source code and dataset following publication. | DialogCC: An Automated Pipeline for Creating High-Quality Multi-modal Dialogue Datasets | [
"Young-Jun Lee",
"Byungsoo Ko",
"Han-Gyu Kim",
"Jonghwan Hyeon",
"Ho-Jin Choi"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=OQHckRYbpT | @inproceedings{
fabian2023knowledge,
title={Knowledge Augmented Instruction Tuning for Zero-shot Animal Species Recognition},
author={Zalan Fabian and Zhongqi Miao and Chunyuan Li and Yuanhan Zhang and Ziwei Liu and Andres Hernandez and Pablo Arbelaez and Andr{\'e}s Link and Andr{\'e}s Montes-Rojas and Rafael Escucha and Laura Siabatto and Rahul Dodhia and Juan Lavista Ferres},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=OQHckRYbpT}
} | Due to deteriorating environmental conditions and increasing human activity, conservation efforts directed towards wildlife are crucial. Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe. Supervised learning techniques have been successfully deployed to analyze such imagery; however, training such models requires annotations from experts. Reducing the reliance on costly labelled data therefore has immense potential in developing large-scale wildlife tracking solutions with markedly less human labor. In this work, we propose a novel zero-shot species classification framework that leverages multimodal foundation models. In particular, we instruction tune vision-language models to generate detailed visual descriptions of camera trap images using similar terminology to experts. Then, we match the generated caption to an external knowledge base of descriptions in order to determine the species in a zero-shot manner. We investigate techniques to build instruction tuning datasets for detailed animal description generation and propose a novel knowledge augmentation technique to enhance caption quality. We demonstrate the performance of our proposed method on a new camera trap dataset collected in the Magdalena Medio region of Colombia. | Knowledge Augmented Instruction Tuning for Zero-shot Animal Species Recognition | [
"Zalan Fabian",
"Zhongqi Miao",
"Chunyuan Li",
"Yuanhan Zhang",
"Ziwei Liu",
"Andres Hernandez",
"Pablo Arbelaez",
"Andrés Link",
"Andrés Montes-Rojas",
"Rafael Escucha",
"Laura Siabatto",
"Rahul Dodhia",
"Juan Lavista Ferres"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=NiQYQEPUsA | @inproceedings{
coste2023reward,
title={Reward Model Ensembles Help Mitigate Overoptimization},
author={Thomas Coste and Usman Anwar and Robert Kirk and David Krueger},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=NiQYQEPUsA}
} | Reinforcement learning from human feedback (RLHF) is a standard approach for fine-tuning large language models to follow instructions. As part of this process, learned reward models are used to approximately model human preferences. However, as imperfect representations of the "true" reward, these learned reward models are susceptible to overoptimization. Gao et al. studied this phenomenon in a synthetic human feedback setup with a significantly larger "gold" reward model acting as the true reward (instead of humans) and showed that overoptimization remains a persistent problem regardless of the size of the proxy reward model and training data used. Using a similar setup, we conduct a systematic study to evaluate the efficacy of using ensemble-based conservative optimization objectives, specifically worst-case optimization (WCO) and uncertainty-weighted optimization (UWO), for mitigating reward model overoptimization when using two optimization methods: (a) best-of-n sampling (BoN) (b) proximal policy optimization (PPO). We additionally extend the setup of Gao et al. to include 25% label noise to better mirror real-world conditions. Both with and without label noise, we find that conservative optimization practically eliminates overoptimization and improves performance by up to 70% for BoN sampling. For PPO, ensemble-based conservative optimization always reduces overoptimization and outperforms single reward model optimization. Moreover, combining it with a small KL penalty successfully prevents overoptimization at no performance cost. Overall, our results demonstrate that ensemble-based conservative optimization can effectively counter overoptimization. | Reward Model Ensembles Help Mitigate Overoptimization | [
"Thomas Coste",
"Usman Anwar",
"Robert Kirk",
"David Krueger"
] | Workshop/Instruction | 2310.02743 | [
"https://github.com/tlc4418/llm_optimization"
] | https://huggingface.co/papers/2310.02743 | 2 | 1 | 0 | 4 | [
"tlc4418/pythia_1.4b_sft_policy"
] | [
"tlc4418/1.4b-policy_preference_data_gold_labelled",
"tlc4418/gold_labelled_gens",
"SJTUwanyi/rm_pref"
] | [] | [
"tlc4418/pythia_1.4b_sft_policy"
] | [
"tlc4418/1.4b-policy_preference_data_gold_labelled",
"tlc4418/gold_labelled_gens",
"SJTUwanyi/rm_pref"
] | [] | 1 | poster |
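The reward-ensemble record above studies two conservative objectives: worst-case optimization (WCO), which optimizes the minimum reward across ensemble members, and uncertainty-weighted optimization (UWO), which penalizes disagreement across the ensemble. A small tensor sketch of both aggregations follows; the penalty coefficient is an illustrative assumption.

```python
import torch

def wco(rewards):
    """rewards: (ensemble_size, batch) -> per-sample worst-case reward."""
    return rewards.min(dim=0).values

def uwo(rewards, coef=0.5):
    """Mean reward minus a variance penalty for ensemble disagreement."""
    return rewards.mean(dim=0) - coef * rewards.var(dim=0)

rewards = torch.tensor([[1.0, 2.5, 0.3],
                        [1.2, 0.4, 0.5],
                        [0.9, 2.8, 0.4]])   # 3 reward models scoring 3 samples
print(wco(rewards), uwo(rewards))
```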
null | https://openreview.net/forum?id=Md6RUrGz67 | @inproceedings{
srinivasan2023nexusraven,
title={NexusRaven: a commercially-permissive Language Model for function calling},
author={Venkat Krishna Srinivasan and Zhen Dong and Banghua Zhu and Brian Yu and Hanzi Mao and Damon Mosk-Aoyama and Kurt Keutzer and Jiantao Jiao and Jian Zhang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=Md6RUrGz67}
} | The rise of open-source, commercially permissive large language models (LLMs) is revolutionizing generative AI, presenting organizations with enhanced control, minimized data risks, and cost benefits compared to proprietary models. However, in the field of tool use and function-calling LLMs, many open-source models, such as Gorilla and ToolLLAMA, are dependent on proprietary LLMs like GPT-4 for high-quality training data, which often faces legal restrictions for competitive commercial applications. In this paper, we introduce NexusRaven-13B, an open-source LLM designed for function calls. Originating from the CodeLLAMA-13B lineage, NexusRaven-13B employs a unique data curation via multi-step refinement, ensuring high-quality training data without relying on GPT-4 distillation. NexusRaven-13B matches GPT-3.5 in zero-shot function-calling accuracy. When combined with our second core technique, demonstration retrieval augmentation, its performance significantly surpasses GPT-4. The code, model, and demo will be available after the review process. | NexusRaven: a commercially-permissive Language Model for function calling | [
"Venkat Krishna Srinivasan",
"Zhen Dong",
"Banghua Zhu",
"Brian Yu",
"Hanzi Mao",
"Damon Mosk-Aoyama",
"Kurt Keutzer",
"Jiantao Jiao",
"Jian Zhang"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=LywifFNXV5 | @inproceedings{
li2023how,
title={How Long Can Context Length of Open-Source {LLM}s truly Promise?},
author={Dacheng Li and Rulin Shao and Anze Xie and Ying Sheng and Lianmin Zheng and Joseph Gonzalez and Ion Stoica and Xuezhe Ma and Hao Zhang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=LywifFNXV5}
} | Large language models (LLMs) with long-context instruction-following ability have unlocked new potential, such as supporting long interactive chat sessions. In this paper, we introduce a test suite, LongEval, which enables us to evaluate the long-range retrieval ability of LLMs at various context lengths. We use LongEval to evaluate open-sourced LLMs, and surprisingly, we find many of them fail to achieve their promised context length. In addition, we present a recipe to fine-tune a long-context chatbot based on LLaMA models, and introduce LongChat models that support conversations of up to 16,384 tokens. We have released our code at https://github.com/DachengLi1/LongChat. | How Long Can Context Length of Open-Source LLMs truly Promise? | [
"Dacheng Li",
"Rulin Shao",
"Anze Xie",
"Ying Sheng",
"Lianmin Zheng",
"Joseph Gonzalez",
"Ion Stoica",
"Xuezhe Ma",
"Hao Zhang"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=LqISMaceov | @inproceedings{
chang2023learning,
title={Learning to Generate Better Than Your {LLM}},
author={Jonathan Chang and Kiant{\'e} Brantley and Rajkumar Ramamurthy and Dipendra Misra and Wen Sun},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=LqISMaceov}
} | Reinforcement learning (RL) has emerged as a powerful paradigm for fine-tuning Large Language Models (LLMs) for text generation. In particular, recent LLMs such as ChatGPT and GPT4 can engage in fluent conversations with users after finetuning with RL. Capitalizing on key properties of text generation, we seek to investigate RL algorithms beyond general purpose algorithms like Proximal Policy Optimization (PPO). In particular, we extend RL algorithms to allow them to interact with a dynamic black-box guide LLM and propose RL with guided feedback (RLGF), a suite of RL algorithms for LLM fine-tuning. We provide two ways for the guide LLM to interact with the LLM to be optimized for maximizing rewards. The guide LLM can generate text which serves as additional starting states for the RL optimization procedure. The guide LLM can also be used to complete the partial sentences generated by the LLM that is being optimized, treating the guide LLM as an expert to imitate and surpass eventually. We experiment on the IMDB positive sentiment, CommonGen, and TL;DR summarization tasks. We show that our RL algorithms achieve higher performance than supervised learning (SL) and the RL baseline PPO, demonstrating the benefit of interaction with the guide LLM. On both CommonGen and TL;DR, we not only outperform our SL baselines but also improve upon PPO across a variety of metrics beyond the one we optimized for. Our code can be found at https://github.com/Cornell-RL/tril. | Learning to Generate Better Than Your LLM | [
"Jonathan Chang",
"Kianté Brantley",
"Rajkumar Ramamurthy",
"Dipendra Misra",
"Wen Sun"
] | Workshop/Instruction | 2306.11816 | [
"https://github.com/cornell-rl/tril"
] | https://huggingface.co/papers/2306.11816 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=KLPLCXo4aD | @inproceedings{
li2023from,
title={From Classification to Generation: Insights into Crosslingual Retrieval Augmented {ICL}},
author={Xiaoqian Li and Ercong Nie and Sheng Liang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=KLPLCXo4aD}
} | The remarkable ability of Large Language Models (LLMs) to understand and follow instructions has sometimes been limited by their in-context learning (ICL) performance in low-resource languages. To address this, we introduce a novel approach that leverages cross-lingual retrieval-augmented in-context learning (CREA-ICL). By extracting semantically similar prompts from high-resource languages, we aim to bolster the zero-shot performance of multilingual pretrained language models (MPLMs) across diverse tasks. Though our approach yields steady improvements in classification tasks, it faces challenges in generation tasks, with Bangla serving as a key case study. Our evaluation offers insights into the performance dynamics of retrieval-augmented in-context learning across both classification and generation domains. | From Classification to Generation: Insights into Crosslingual Retrieval Augmented ICL | [
"Xiaoqian Li",
"Ercong Nie",
"Sheng Liang"
] | Workshop/Instruction | 2311.06595 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=KGBNAJCuPf | @inproceedings{
wang2023reward,
title={Reward Model Aggregation},
author={Zihao Wang and Chirag Nagpal and Alexander D'Amour and Victor Veitch and Sanmi Koyejo},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=KGBNAJCuPf}
} | Aligning language models requires guiding outputs towards desired properties using reward models. This paper tackles the challenge of combining multiple reward models for diverse objectives. We introduce methods for aggregating these rewards using logical operations. Experiments confirm our methods beat traditional aggregation techniques and underscore the significance of proper reference values. | Reward Model Aggregation | [
"Zihao Wang",
"Chirag Nagpal",
"Alexander D'Amour",
"Victor Veitch",
"Sanmi Koyejo"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=IqJ3CU3flr | @inproceedings{
kim2023distort,
title={Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions},
author={Taehyeon Kim and Joonkee Kim and Gihun Lee and Se-Young Yun},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=IqJ3CU3flr}
} | While instruction-tuned language models have demonstrated impressive zero-shot generalization, these models often struggle to generate accurate responses when faced with instructions that fall outside their training set. This paper presents Instructive Decoding (ID), a simple yet effective approach that augments the efficacy of instruction-tuned models. Specifically, ID adjusts the logits for next-token prediction in a contrastive manner, utilizing predictions generated from a manipulated version of the original instruction, referred to as a noisy instruction. This noisy instruction aims to elicit responses that could diverge from the intended instruction yet remain plausible. We conduct experiments across a spectrum of such noisy instructions, ranging from those that insert semantic noise via random words to others like 'opposite' that elicit deviated responses. Our approach achieves considerable performance gains across various instruction-tuned models and tasks without necessitating any additional parameter updates. Notably, utilizing 'opposite' as the noisy instruction in ID, which shows the maximum divergence from the original instruction, consistently produces the most significant performance gains across multiple models and tasks. | Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions | [
"Taehyeon Kim",
"Joonkee Kim",
"Gihun Lee",
"Se-Young Yun"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=IPJqprsrNX | @inproceedings{
jin2023dataefficient,
title={Data-Efficient Alignment of Large Language Models with Human Feedback Through Natural Language},
author={Di Jin and Shikib Mehri and Devamanyu Hazarika and Aishwarya Padmakumar and SUNGJIN LEE and Yang Liu and Mahdi Namazifar},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=IPJqprsrNX}
} | Learning from human feedback is a prominent technique to align the output of large language models (LLMs) with human expectations. Reinforcement learning from human feedback (RLHF) leverages human preference signals that are in the form of ranking of response pairs to perform this alignment. However, human preference on LLM outputs can come in much richer forms including natural language, which may provide detailed feedback on strengths and weaknesses of a given response. In this work we investigate data efficiency of modeling human feedback that is in natural language. Specifically, we fine-tune an open-source LLM, e.g., Falcon-40B-Instruct, on a relatively small amount (1000 records or even less) of human feedback in natural language in the form of critiques and revisions of responses. We show that this model is able to improve the quality of responses from even some of the strongest LLMs such as ChatGPT, BARD, and Vicuna, through critique and revision of those responses. For instance, through one iteration of revision of ChatGPT responses, the revised responses have 56.6% win rate over the original ones, and this win rate can be further improved to 65.9% after applying the revision for five iterations. | Data-Efficient Alignment of Large Language Models with Human Feedback Through Natural Language | [
"Di Jin",
"Shikib Mehri",
"Devamanyu Hazarika",
"Aishwarya Padmakumar",
"SUNGJIN LEE",
"Yang Liu",
"Mahdi Namazifar"
] | Workshop/Instruction | 2311.14543 | [
""
] | https://huggingface.co/papers/2311.14543 | 0 | 1 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=HgSViBZd1A | @inproceedings{
grzywinski2023releasing,
title={Releasing the {CR}a{QA}n (Coreference Resolution in Question-Answering): An open-source dataset and dataset creation methodology using instruction-following models},
author={Rob Grzywinski and Joshua D'Arcy and Robert Naidoff and Ashish Shukla and Alex Browne and Ren Gibbons and Brinnae Bent},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=HgSViBZd1A}
} | Instruction-following language models demand robust methodologies for information retrieval to augment instructions for question-answering applications. A primary challenge is the resolution of coreferences in the context of chunking strategies for long documents. The critical barrier to experimenting with coreference handling is a lack of open-source datasets, specifically in question-answering tasks that require coreference resolution. In this work we present our Coreference Resolution in Question-Answering (CRaQAn) dataset, an open-source dataset that caters to the nuanced information retrieval requirements of coreference resolution in question-answering tasks by providing over 250 question-answer pairs containing coreferences. To create this dataset, we developed a novel approach for producing high-quality datasets using an instruction-following model (GPT-4) and a Recursive Criticism and Improvement Loop. | Releasing the CRaQAn (Coreference Resolution in Question-Answering): An open-source dataset and dataset creation methodology using instruction-following models | [
"Rob Grzywinski",
"Joshua D'Arcy",
"Robert Naidoff",
"Ashish Shukla",
"Alex Browne",
"Ren Gibbons",
"Brinnae Bent"
] | Workshop/Instruction | 2311.16338 | [
""
] | https://huggingface.co/papers/2311.16338 | 0 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=HVduJbHSSO | @inproceedings{
hu2023ciem,
title={{CIEM}: Contrastive Instruction Evaluation Method for Better Instruction Tuning},
author={Hongyu Hu and Jiyuan Zhang and Minyi Zhao and Zhenbang Sun},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=HVduJbHSSO}
} | Research on Large Vision-Language Models (LVLMs) has been significantly advanced by the success of Large Language Models (LLMs). Nevertheless, these Vision-Language Models (VLMs) suffer from the drawback of hallucination -- due to insufficient understanding of vision and language modalities, VLMs may generate incorrect perception information in downstream applications, for example, captioning a non-existent entity. To address the hallucination phenomenon, on the one hand, we introduce a $\textbf{C}$ontrastive $\textbf{I}$nstruction $\textbf{E}$valuation $\textbf{M}$ethod (CIEM), which is an automatic pipeline that leverages an annotated image-text dataset coupled with an LLM to generate factual/contrastive question-answer pairs for the evaluation of the hallucination of VLMs. On the other hand, based on CIEM, we further propose a new instruction tuning method called CIT (the abbreviation of $\textbf{C}$ontrastive $\textbf{I}$nstruction $\textbf{T}$uning) to alleviate the hallucination of VLMs by automatically producing high-quality factual/contrastive question-answer pairs and corresponding justifications for model tuning. Through extensive experiments on CIEM and CIT, we pinpoint the hallucination issues commonly present in existing VLMs, the inability of current instruction-tuning datasets to handle the hallucination phenomenon, and the superiority of CIT-tuned VLMs over both CIEM and public datasets. Please contact the authors for code and generated dataset. | CIEM: Contrastive Instruction Evaluation Method for Better Instruction Tuning | [
"Hongyu Hu",
"Jiyuan Zhang",
"Minyi Zhao",
"Zhenbang Sun"
] | Workshop/Instruction | 2309.02301 | [
""
] | https://huggingface.co/papers/2309.02301 | 0 | 1 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=GO8aPQ9Odp | @inproceedings{
siththaranjan2023understanding,
title={Understanding Hidden Context in Preference Learning: Consequences for {RLHF}},
author={Anand Siththaranjan and Cassidy Laidlaw and Dylan Hadfield-Menell},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=GO8aPQ9Odp}
} | In practice, preference learning from human feedback depends on incomplete data with hidden context. Hidden context refers to data that affects the feedback received, but which is not represented in the data used to train a preference model. This captures common issues of data collection, such as having human annotators with varied preferences, cognitive processes that result in seemingly irrational behavior, and combining data labeled according to different criteria. We prove that standard applications of preference learning, including reinforcement learning from human feedback (RLHF), implicitly aggregate over hidden contexts according to a well-known voting rule called Borda count. We show this can produce counter-intuitive results that are very different from other methods which implicitly aggregate via expected utility. Furthermore, our analysis formalizes the way that preference learning from users with diverse values tacitly implements a social choice function. A key implication of this result is that annotators have an incentive to misreport their preferences in order to influence the learned model, leading to vulnerabilities in the deployment of RLHF. As a step towards mitigating these problems, we introduce a class of methods called distributional preference learning (DPL). DPL methods estimate a distribution of possible score values for each alternative in order to better account for hidden context. Experimental results indicate that applying DPL to RLHF for LLM chatbots identifies hidden context in the data and significantly reduces subsequent jailbreak vulnerability. | Understanding Hidden Context in Preference Learning: Consequences for RLHF | [
"Anand Siththaranjan",
"Cassidy Laidlaw",
"Dylan Hadfield-Menell"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=G8ArB0aApM | @inproceedings{
shin2023past,
title={Past as a Guide: Leveraging Retrospective Learning for Python Code Completion},
author={Seungyoun Shin and Seunggyu Chang and Sungjoon Choi},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=G8ArB0aApM}
} | This work presents Past as a Guide (PaG), a simple approach for Large Language Models (LLMs) to improve their coding capabilities by integrating past history with interactive and iterative code refinements.
Specifically, inspired by human cognitive processes, the proposed method enables LLMs to utilize previous programming and debugging experiences to enhance Python code completion.
The framework enables LLMs to iteratively refine Python code based on previous execution and debugging results, optimizing learning and reasoning capabilities.
The proposed methodology achieved a 92\% pass@1 on HumanEval, demonstrating the potential to advance the field by leveraging retrospection from past experiences and interactive and iterative refinement processes without external correctness indicators. | Past as a Guide: Leveraging Retrospective Learning for Python Code Completion | [
"Seungyoun Shin",
"Seunggyu Chang",
"Sungjoon Choi"
] | Workshop/Instruction | 2311.07635 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FuOMomaQa8 | @inproceedings{
wang2023fingpt,
title={Fin{GPT}: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets},
author={Neng Wang and Hongyang Yang and Christina Wang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=FuOMomaQa8}
} | In the swiftly expanding domain of Natural Language Processing (NLP), the potential of GPT-based models for the financial sector is increasingly evident. However, the integration of these models with financial datasets presents challenges, notably in determining their adeptness and relevance. This paper introduces a distinctive approach anchored in the Instruction Tuning paradigm for open-source large language models, specifically adapted for financial contexts. Through this methodology, we capitalize on the interoperability of open-source models, ensuring a seamless and transparent integration. We begin by explaining the Instruction Tuning paradigm, highlighting its effectiveness for immediate integration. The paper presents a benchmarking scheme designed for end-to-end training and testing, employing a cost-effective progression. Firstly, we assess basic competencies and fundamental tasks, such as Named Entity Recognition (NER) and sentiment analysis to enhance specialization. Next, we delve into a comprehensive model, executing multi-task operations by amalgamating all instructional tunings to examine versatility. Finally, we explore the zero-shot capabilities by earmarking unseen tasks and incorporating novel datasets to understand adaptability in uncharted terrains. Such a paradigm fortifies the principles of openness and reproducibility, laying a robust foundation for future investigations in open-source financial large language models (FinLLMs). | FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets | [
"Neng Wang",
"Hongyang Yang",
"Christina Wang"
] | Workshop/Instruction | 2310.04793 | [
"https://github.com/ai4finance-foundation/fingpt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=EAuteBjTMw | @inproceedings{
qi2023large,
title={Large Language Models are Zero Shot Hypothesis Proposers},
author={Biqing Qi and Kaiyan Zhang and Haoxiang Li and Kai Tian and Sihang Zeng and Zhang-Ren Chen and Bowen Zhou},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=EAuteBjTMw}
} | Significant scientific discoveries have driven the progress of human civilisation. The explosion of scientific literature and data has created information barriers across disciplines that have slowed the pace of scientific discovery. Large Language Models (LLMs) hold a wealth of global and interdisciplinary knowledge that promises to break down these information barriers and foster a new wave of scientific discovery. However, the potential of LLMs for scientific discovery has not been formally explored. In this paper, we start by investigating whether LLMs can propose scientific hypotheses. To this end, we construct a dataset consisting of background knowledge and hypothesis pairs from biomedical literature. The dataset is divided into training, seen, and unseen test sets based on the publication date to control visibility. We subsequently evaluate the hypothesis generation capabilities of various top-tier instructed models in zero-shot, few-shot, and fine-tuning settings, including both closed and open-source LLMs. Additionally, we introduce an LLM-based multi-agent cooperative framework with different role designs and external tools to enhance the capabilities related to generating hypotheses. We also design four metrics through a comprehensive review to evaluate the generated hypotheses for both ChatGPT-based and human evaluations. Through experiments and analyses, we arrive at the following findings: 1) LLMs surprisingly generate untrained yet validated hypotheses from testing literature. 2) Increasing uncertainty facilitates candidate generation, potentially enhancing zero-shot hypothesis generation capabilities. These findings strongly support the potential of LLMs as catalysts for new scientific discoveries and guide further exploration. | Large Language Models are Zero Shot Hypothesis Proposers | [
"Biqing Qi",
"Kaiyan Zhang",
"Haoxiang Li",
"Kai Tian",
"Sihang Zeng",
"Zhang-Ren Chen",
"Bowen Zhou"
] | Workshop/Instruction | 2311.05965 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CjrPqvvUXL | @inproceedings{
muennighoff2023octopack,
title={OctoPack: Instruction Tuning Code Large Language Models},
author={Niklas Muennighoff and Qian Liu and Armel Zebaze and Qinkai Zheng and Binyuan Hui and Terry Yue Zhuo and Swayam Singh and Xiangru Tang and Leandro Von Werra and Shayne Longpre},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=CjrPqvvUXL}
} | Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack’s benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack. | OctoPack: Instruction Tuning Code Large Language Models | [
"Niklas Muennighoff",
"Qian Liu",
"Armel Zebaze",
"Qinkai Zheng",
"Binyuan Hui",
"Terry Yue Zhuo",
"Swayam Singh",
"Xiangru Tang",
"Leandro Von Werra",
"Shayne Longpre"
] | Workshop/Instruction | 2308.07124 | [
"https://github.com/bigcode-project/octopack"
] | https://huggingface.co/papers/2308.07124 | 9 | 28 | 1 | 10 | [
"bigcode/octocoder",
"bigcode/octogeex",
"TheBloke/Octocoder-GGML",
"TheBloke/Octocoder-GPTQ",
"bigcode/santacoderpack",
"bigcode/santacoder-ldf"
] | [
"allenai/reward-bench",
"bigcode/humanevalpack",
"bigcode/commitpack",
"bigcode/commitpackft",
"bigcode/oasst-octopack",
"agicorp/commitpackft"
] | [
"bigcode/bigcode-models-leaderboard",
"allenai/reward-bench",
"Yeyito/llm_contamination_detector",
"LLM360/de-arena",
"bigcode/OctoCoder-Demo",
"AtlaAI/judge-arena",
"21world/bigcode-models-leaderboard",
"Muennighoff/code_eval_octopack",
"Adam666Eves/bigcode-octocoder",
"Delfigore/bigcode-octocoder",
"JayIsTheLord/test",
"tsteffek/de-arena",
"bomjin/code_eval_octopack"
] | [
"bigcode/octocoder",
"bigcode/octogeex",
"TheBloke/Octocoder-GGML",
"TheBloke/Octocoder-GPTQ",
"bigcode/santacoderpack",
"bigcode/santacoder-ldf"
] | [
"allenai/reward-bench",
"bigcode/humanevalpack",
"bigcode/commitpack",
"bigcode/commitpackft",
"bigcode/oasst-octopack",
"agicorp/commitpackft"
] | [
"bigcode/bigcode-models-leaderboard",
"allenai/reward-bench",
"Yeyito/llm_contamination_detector",
"LLM360/de-arena",
"bigcode/OctoCoder-Demo",
"AtlaAI/judge-arena",
"21world/bigcode-models-leaderboard",
"Muennighoff/code_eval_octopack",
"Adam666Eves/bigcode-octocoder",
"Delfigore/bigcode-octocoder",
"JayIsTheLord/test",
"tsteffek/de-arena",
"bomjin/code_eval_octopack"
] | 1 | poster |
null | https://openreview.net/forum?id=CZJOOFgXZj | @inproceedings{
li2023approximate,
title={Approximate Clustering for Extracting Task Relationships in Multi-Instruction Tuning},
author={Dongyue Li and Jinhong Yu and Hongyang R. Zhang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=CZJOOFgXZj}
} | The development of language models involves the evaluation of a broad range of learning tasks. Recent work has shown that by using carefully designed instructions to teach a large transformer model, it can be fine-tuned on a wide range of downstream tasks. However, when the number of instructions increases, they can negatively interfere with each other if trained together. Existing works have relied on domain expertise and manual inspection to construct multi-instruction sets, which can be time-consuming and difficult to scale. To address this challenge, this paper develops a clustering algorithm to find groups of similar tasks based on a given set of task affinity scores. This is an NP-hard problem, and conventional algorithms such as spectral and Lloyd's clustering are sensitive to variations in the scale of task losses. Our algorithm instead uses a semidefinite relaxation to maximize the average density of clusters and then rounds the solution with a threshold. We adaptively build the clusters by gradually adding tasks so that the affinities only need to be computed in the existing clusters. Then, we construct an evaluation benchmark to assess task grouping algorithms with verified group structures. The evaluation set includes 63 cases, spanning multitask instruction tuning, multi-instruction tuning, and in-context learning of multiple functions. We validate our algorithm on this evaluation set by showing that it recovers the group structure found by an exhaustive search. We also show that our approach improves performance over multi-instruction and soft-prompt tuning by up to 6% on several sentence classification and structure-to-text generative tasks. | Approximate Clustering for Extracting Task Relationships in Multi-Instruction Tuning | [
"Dongyue Li",
"Jinhong Yu",
"Hongyang R. Zhang"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Bc3S2G1PxH | @inproceedings{
kirk2023understanding,
title={Understanding the Effects of {RLHF} on {LLM} Generalisation and Diversity},
author={Robert Kirk and Ishita Mediratta and Christoforos Nalmpantis and Jelena Luketina and Eric Hambro and Edward Grefenstette and Roberta Raileanu},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=Bc3S2G1PxH}
} | Large language models (LLMs) fine-tuned with reinforcement learning from human feedback (RLHF) have been used in some of the most widely deployed AI models to date, such as OpenAI’s ChatGPT or Anthropic’s Claude. While there has been significant work developing these methods, our understanding of the benefits and downsides of each stage in RLHF is still limited. To fill this gap, we present an extensive analysis of how each stage of the process (i.e. supervised fine-tuning (SFT), reward modelling, and RLHF) affects two key properties: out-of-distribution generalisation (OOD) and output diversity. OOD generalisation is crucial given the wide range of real-world scenarios in which these models are being used, while output diversity refers to the model’s ability to generate varied outputs, and is important for a variety of use cases. We perform our analysis across two base models on both summarisation and instruction following tasks, the latter being highly relevant for current LLM use cases. We find that RLHF generalises better than SFT to new inputs, particularly as the distribution shift between train and test becomes larger. However, RLHF significantly reduces output diversity compared to SFT across a variety of measures, implying a tradeoff in current LLM fine-tuning methods between generalisation and diversity. Our results provide guidance on which fine-tuning method should be used depending on the application, and show that more research is needed to improve the tradeoff between generalisation and diversity. | Understanding the Effects of RLHF on LLM Generalisation and Diversity | [
"Robert Kirk",
"Ishita Mediratta",
"Christoforos Nalmpantis",
"Jelena Luketina",
"Eric Hambro",
"Edward Grefenstette",
"Roberta Raileanu"
] | Workshop/Instruction | 2310.06452 | [
"https://github.com/facebookresearch/rlfh-gen-div"
] | https://huggingface.co/papers/2310.06452 | 1 | 2 | 0 | 7 | [] | [
"UCL-DARK/sequential-instructions",
"UCL-DARK/openai-tldr-filtered",
"UCL-DARK/openai-tldr-summarisation-preferences",
"UCL-DARK/openai-tldr-filtered-queries",
"UCL-DARK/alpaca-farm-id-test"
] | [] | [] | [
"UCL-DARK/sequential-instructions",
"UCL-DARK/openai-tldr-filtered",
"UCL-DARK/openai-tldr-summarisation-preferences",
"UCL-DARK/openai-tldr-filtered-queries",
"UCL-DARK/alpaca-farm-id-test"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=7KxUgWTZbz | @inproceedings{
zhao2023group,
title={Group Preference Optimization: Few-Shot Alignment of Large Language Models},
author={Siyan Zhao and John Dang and Aditya Grover},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=7KxUgWTZbz}
} | Many applications of large language models (LLMs), ranging from chatbots to creative writing, require nuanced subjective judgments that can differ significantly across different groups.
Existing alignment algorithms can be expensive to apply separately to each group, requiring prohibitive amounts of group-specific preference data and computation for real-world use cases.
We introduce Group Preference Optimization (GPO), an alignment framework that steers language models toward the preferences of individual groups in a few-shot manner.
In GPO, we augment the base LLM with an independent transformer module trained to predict the preferences of a group for the LLM generations.
For few-shot learning, we parameterize this module as an in-context autoregressive transformer and train it via meta-learning on several groups. We empirically validate the efficacy of GPO through rigorous evaluations using LLMs with varied sizes on three human opinion adaptation tasks. These tasks involve adapting to the preferences of US demographic groups, global countries, and individual users. Our results demonstrate that GPO not only aligns models more accurately but also requires fewer group-specific preferences, and less training and inference computing resources, outperforming existing strategies such as in-context steering and fine-tuning methods. | Group Preference Optimization: Few-Shot Alignment of Large Language Models | [
"Siyan Zhao",
"John Dang",
"Aditya Grover"
] | Workshop/Instruction | 2310.11523 | [
"https://github.com/jamqd/Group-Preference-Optimization"
] | https://huggingface.co/papers/2310.11523 | 0 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=6579t0X8X2 | @inproceedings{
lee2023platypus,
title={Platypus: Quick, Cheap, and Powerful Refinement of {LLM}s},
author={Ariel Lee and Cole Hunter and Nataniel Ruiz},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=6579t0X8X2}
} | We present **Platypus**, a family of fine-tuned and merged Large Language Models (LLMs) that achieved the strongest performance and stood at first place in HuggingFace's Open LLM Leaderboard at the time of writing. In this work we describe (1) our curated dataset **Open-Platypus**, which is a subset of other open datasets and which we release to the public, (2) our process of fine-tuning and merging LoRA modules in order to conserve the strong prior of pretrained LLMs, while bringing specific domain knowledge to the surface, and (3) our efforts in checking for test data leaks and contamination in the training data, which can inform future research. Specifically, the Platypus family achieves strong performance in quantitative LLM metrics across model sizes, topping the global Open LLM leaderboard while using just a fraction of the fine-tuning data and overall compute that are required for other state-of-the-art fine-tuned LLMs. In particular, a 13B Platypus model can be trained on a single A100 GPU using 25k questions in 5 hours. This is a testament to the quality of our Open-Platypus dataset and opens opportunities for further improvements in the field. | Platypus: Quick, Cheap, and Powerful Refinement of LLMs | [
"Ariel Lee",
"Cole Hunter",
"Nataniel Ruiz"
] | Workshop/Instruction | 2308.07317 | [
"https://github.com/arielnlee/Platypus"
] | https://huggingface.co/papers/2308.07317 | 3 | 23 | 4 | 3 | [
"Open-Orca/OpenOrca-Platypus2-13B",
"garage-bAInd/Platypus2-70B-instruct",
"TheBloke/OpenOrca-Platypus2-13B-GGML",
"fangloveskari/ORCA_LLaMA_70B_QLoRA",
"TheBloke/OpenOrca-Platypus2-13B-GPTQ",
"uni-tianyan/Uni-TianYan",
"Riiid/sheep-duck-llama-2",
"TheBloke/Platypus2-70B-Instruct-GPTQ",
"kyujinpy/KO-Platypus2-7B-ex",
"NECOUDBFM/Jellyfish-13B",
"garage-bAInd/Platypus2-70B",
"Riiid/sheep-duck-llama-2-70b-v1.1",
"garage-bAInd/Stable-Platypus2-13B",
"garage-bAInd/Platypus2-13B",
"TheBloke/OpenOrca-Platypus2-13B-GGUF",
"TheBloke/Platypus2-70B-Instruct-GGML",
"garage-bAInd/Camel-Platypus2-70B",
"TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF",
"TheBloke/Platypus2-70B-Instruct-GGUF",
"TheBloke/Platypus2-70B-GGML",
"TheBloke/Uni-TianYan-70B-GGUF",
"TheBloke/Stable-Platypus2-13B-GPTQ",
"garage-bAInd/Platypus2-7B",
"TheBloke/Sheep-Duck-Llama-2-70B-GGUF",
"TheBloke/Camel-Platypus2-13B-GGML",
"TheBloke/Platypus2-70B-GPTQ",
"TheBloke/Stable-Platypus2-13B-GGML",
"TheBloke/Platypus2-13B-GGML",
"TheBloke/Camel-Platypus2-70B-GGML",
"TheBloke/Platypus2-70B-GGUF",
"TheBloke/Camel-Platypus2-13B-GPTQ",
"TheBloke/ORCA_LLaMA_70B_QLoRA-GPTQ",
"TheBloke/OpenOrca-Platypus2-13B-AWQ",
"TheBloke/Platypus2-13B-GPTQ",
"TheBloke/Camel-Platypus2-70B-GPTQ",
"garage-bAInd/Camel-Platypus2-13B",
"waldie/ORCA_LLaMA_70B_QLoRA-2.4bpw-h6-exl2",
"TheBloke/Camel-Platypus2-70B-GGUF",
"TheBloke/Uni-TianYan-70B-GPTQ",
"garage-bAInd/Platypus-70B-adapters",
"TheBloke/Camel-Platypus2-13B-GGUF",
"TheBloke/Platypus2-13B-GGUF",
"TheBloke/Stable-Platypus2-13B-GGUF",
"TheBloke/Sheep-Duck-Llama-2-70B-GPTQ",
"TheBloke/Camel-Platypus2-13B-AWQ",
"TheBloke/Camel-Platypus2-70B-AWQ",
"TheBloke/Platypus2-13B-AWQ",
"TheBloke/Platypus2-70B-Instruct-AWQ",
"TheBloke/Platypus2-70B-AWQ",
"TheBloke/ORCA_LLaMA_70B_QLoRA-AWQ",
"TheBloke/Uni-TianYan-70B-AWQ",
"AzureBlack/Platypus2-70B-instruct-4.1bpw-6h-exl2",
"Aryanne/OpenLlama-Platypus-3B-gguf",
"RichardErkhov/garage-bAInd_-_Platypus2-70B-gguf",
"garage-bAInd/Platypus-13B-adapters",
"malhajar/Platypus2-70B-instruct-4bit-gptq",
"uukuguy/speechless-orca-platypus-coig-lite-2k-0.6e-13b",
"uukuguy/speechless-orca-platypus-coig-lite-4k-0.5e-13b",
"uukuguy/speechless-orca-platypus-coig-lite-4k-0.6e-13b",
"uni-tianyan/Uni-TianYan-V1",
"TMK04/sheep-duck-llama-2-70b-v1.1-6.0bpw-h6-exl2",
"TheBloke/Stable-Platypus2-13B-AWQ",
"garage-bAInd/Platypus-7B-adapters",
"TheBloke/Sheep-Duck-Llama-2-70B-AWQ",
"DopeorNope/COLA3-7B",
"mncai/Mistral-7B-openplatypus-1k",
"mncai/Mistral-7B-Dolphin-1k",
"RichardErkhov/garage-bAInd_-_Platypus2-7B-8bits",
"RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf",
"RichardErkhov/garage-bAInd_-_Platypus2-13B-4bits",
"RichardErkhov/garage-bAInd_-_Platypus2-13B-8bits",
"RichardErkhov/garage-bAInd_-_Platypus2-13B-gguf",
"RichardErkhov/garage-bAInd_-_Stable-Platypus2-13B-gguf",
"RichardErkhov/garage-bAInd_-_Camel-Platypus2-13B-gguf"
] | [
"garage-bAInd/Open-Platypus",
"kyujinpy/KOpen-platypus",
"kyujinpy/Open-platypus-Commercial",
"weblab-GENIAC/Open-Platypus-Japanese-masked",
"botp/Open-Platypus"
] | [
"open-llm-leaderboard/open_llm_leaderboard",
"Intel/low_bit_open_llm_leaderboard",
"BAAI/open_cn_llm_leaderboard",
"open-llm-leaderboard-old/open_llm_leaderboard",
"Open-Orca/OpenOrca-Platypus2-13B",
"gsaivinay/open_llm_leaderboard",
"EvanTHU/MotionLLM",
"GTBench/GTBench",
"Vikhrmodels/small-shlepa-lb",
"felixz/open_llm_leaderboard",
"OPTML-Group/UnlearnCanvas-Benchmark",
"bardsai/performance-llm-board",
"neubla/neubla-llm-evaluation-board",
"HemaAM/GPT_train_on_LLaMa",
"anantgupta129/LitGPT-Pythia-160M",
"Satyam-Singh/garage-bAInd-Platypus2-70B",
"barunsaha/slides-wizard",
"RaviNaik/ERA-SESSION22",
"PrarthanaTS/tsai-gpt-from-scratch",
"Sijuade/GPTNEXTWORD",
"mikeee/s3nh-garage-bAInd-Stable-Platypus2-13B-GGML",
"rodrigomasini/data_only_open_llm_leaderboard",
"Docfile/open_llm_leaderboard",
"pngwn/open_llm_leaderboard-check",
"supra-e-acc/Pythia-160M-text-generate",
"TharunSivamani/GPT-Predictor",
"aydenbottos12/uni-tianyan-Uni-TianYan",
"aichampions/open_llm_leaderboard",
"Adeco/open_llm_leaderboard",
"RashiAgarwal/TSAIGPTRedPajama",
"neuralorbs/DialogGen",
"whichway/uni-tianyan-Uni-TianYan",
"anirudh937/open_llm_leaderboard",
"smothiki/open_llm_leaderboard2",
"MadhurGarg/TSAIGPTRedPajama",
"Navyabhat/ERAV1-Session-22",
"Lixense/uni-tianyan-Uni-TianYan",
"mjalg/IFEvalTR",
"VarunSivamani/GPT-From-Scratch",
"GunaKoppula/ERA-Session-22",
"Vaish2705/ERA_S22",
"h3dabeatgod/uni-tianyan-Uni-TianYan",
"smothiki/open_llm_leaderboard",
"ToletiSri/TSAI_S22",
"mdugger/garage-bAInd-Platypus2-70B",
"mkthoma/GPT_From_Scratch",
"The-Matrix/uni-tianyan-Uni-TianYan",
"hongjong/kyujinpy-KO-Platypus2-7B-ex",
"simpx/Riiid-sheep-duck-llama-2",
"venkyyuvy/GPT_redpajama",
"Saurabh418/uni-tianyan-Uni-TianYan",
"idkukkikoooii/Riiid-sheep-duck-llama-2-70b-v1.11",
"Saurabh418/Riiid-sheep-duck-llama-2",
"Zeros0sZero/garage-bAInd-Platypus2-70B-instruct",
"PSC420/uni-tianyan-Uni-TianYan",
"dackdel/Riiid-sheep-duck-llama-2-70b-v1.1",
"MattCase/Riiid-sheep-duck-llama-2",
"loganblack0/garage-bAInd-Platypus2-70B-instruct",
"sanjanatule/GPTNext",
"Edwardmonteirobr/Riiid-sheep-duck-llama-2-70b-v1.1new",
"riolu7600/llama-2-test",
"Utopian2/garage-bAInd-Platypus2-70B-instruct",
"mikeee/codellama-13b-python-ggml",
"Edwardmonteirobr/Riiid-sheep-duck-llama-2-70b-v1.1",
"vcastro8/Riiid-sheep-duck-llama-2_sl",
"blazingbunny/garage-bAInd-Platypus2-70B-instruct",
"kbmlcoding/open_llm_leaderboard_free",
"Fayzul/Riiid-sheep-duck-llama-2-70b-v1.1",
"vcastro8/Riiid-sheep-duck-llama-2",
"PeepDaSlan9/garage-bAInd-Platypus2-70B-instruct",
"cloneQ/internLMRAG",
"monkebonk/Riiid-sheep-duck-llama-2-70b-v1.1",
"mehranandi/Riiid-sheep-duck-llama-2",
"Vexvoi/garage-bAInd-Platypus2-70B-instruct",
"hujin0929/LlamaIndex_RAG",
"Aniquel/Riiid-sheep-duck-llama-2-70b-v1.1",
"asir0z/open_llm_leaderboard",
"Ragunandha/garage-bAInd-Platypus2-70B-instruct",
"flyfive0315/internLlamaIndex",
"kyungeun/llm_tasks_chat",
"scrambled-gabs/Riiid-sheep-duck-llama-2-70b-v1.1",
"fika9903/garage-bAInd-Platypus2-70B-instruct",
"Hyperion-js/Open-Orca-OpenOrca-Platypus2-13B",
"veerza/Riiid-sheep-duck-llama-2-70b-v1.1",
"saidloyens/garage-bAInd-Platypus2-70B-instruct",
"tellview/Open-Orca-OpenOrca-Platypus2-13B",
"ziedammak/Riiid-sheep-duck-llama-2-70b-v1.1",
"AV29/garage-bAInd-Platypus2-70B-instruct",
"something01/Open-Orca-OpenOrca-Platypus2-13B",
"Debargha-1225/Riiid-sheep-duck-llama-2-70b-v1.1",
"prasaugus/garage-bAInd-Platypus2-70B-instruct",
"bburli/Open-Orca-OpenOrca-Platypus2-13B",
"piyushgrover/MiniGPT_S22",
"cclarkson125/garage-bAInd-Platypus2-70B-instruct",
"AlexFierro9/Open-Orca-OpenOrca-Platypus2-13B",
"huaijin/garage-bAInd-Camel-Platypus2-70B",
"phxdev/garage-bAInd-Platypus2-70B-instruct",
"pri7ansh/Open-Orca-OpenOrca-Platypus2-13B",
"smothiki/open_llm_leaderboard_old",
"joaopaulopresa/workshop_llm_ufg_chatbot"
] | [
"Open-Orca/OpenOrca-Platypus2-13B",
"garage-bAInd/Platypus2-70B-instruct",
"TheBloke/OpenOrca-Platypus2-13B-GGML",
"fangloveskari/ORCA_LLaMA_70B_QLoRA",
"TheBloke/OpenOrca-Platypus2-13B-GPTQ",
"uni-tianyan/Uni-TianYan",
"Riiid/sheep-duck-llama-2",
"TheBloke/Platypus2-70B-Instruct-GPTQ",
"kyujinpy/KO-Platypus2-7B-ex",
"NECOUDBFM/Jellyfish-13B",
"garage-bAInd/Platypus2-70B",
"Riiid/sheep-duck-llama-2-70b-v1.1",
"garage-bAInd/Stable-Platypus2-13B",
"garage-bAInd/Platypus2-13B",
"TheBloke/OpenOrca-Platypus2-13B-GGUF",
"TheBloke/Platypus2-70B-Instruct-GGML",
"garage-bAInd/Camel-Platypus2-70B",
"TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF",
"TheBloke/Platypus2-70B-Instruct-GGUF",
"TheBloke/Platypus2-70B-GGML",
"TheBloke/Uni-TianYan-70B-GGUF",
"TheBloke/Stable-Platypus2-13B-GPTQ",
"garage-bAInd/Platypus2-7B",
"TheBloke/Sheep-Duck-Llama-2-70B-GGUF",
"TheBloke/Camel-Platypus2-13B-GGML",
"TheBloke/Platypus2-70B-GPTQ",
"TheBloke/Stable-Platypus2-13B-GGML",
"TheBloke/Platypus2-13B-GGML",
"TheBloke/Camel-Platypus2-70B-GGML",
"TheBloke/Platypus2-70B-GGUF",
"TheBloke/Camel-Platypus2-13B-GPTQ",
"TheBloke/ORCA_LLaMA_70B_QLoRA-GPTQ",
"TheBloke/OpenOrca-Platypus2-13B-AWQ",
"TheBloke/Platypus2-13B-GPTQ",
"TheBloke/Camel-Platypus2-70B-GPTQ",
"garage-bAInd/Camel-Platypus2-13B",
"waldie/ORCA_LLaMA_70B_QLoRA-2.4bpw-h6-exl2",
"TheBloke/Camel-Platypus2-70B-GGUF",
"TheBloke/Uni-TianYan-70B-GPTQ",
"garage-bAInd/Platypus-70B-adapters",
"TheBloke/Camel-Platypus2-13B-GGUF",
"TheBloke/Platypus2-13B-GGUF",
"TheBloke/Stable-Platypus2-13B-GGUF",
"TheBloke/Sheep-Duck-Llama-2-70B-GPTQ",
"TheBloke/Camel-Platypus2-13B-AWQ",
"TheBloke/Camel-Platypus2-70B-AWQ",
"TheBloke/Platypus2-13B-AWQ",
"TheBloke/Platypus2-70B-Instruct-AWQ",
"TheBloke/Platypus2-70B-AWQ",
"TheBloke/ORCA_LLaMA_70B_QLoRA-AWQ",
"TheBloke/Uni-TianYan-70B-AWQ",
"AzureBlack/Platypus2-70B-instruct-4.1bpw-6h-exl2",
"Aryanne/OpenLlama-Platypus-3B-gguf",
"RichardErkhov/garage-bAInd_-_Platypus2-70B-gguf",
"garage-bAInd/Platypus-13B-adapters",
"malhajar/Platypus2-70B-instruct-4bit-gptq",
"uukuguy/speechless-orca-platypus-coig-lite-2k-0.6e-13b",
"uukuguy/speechless-orca-platypus-coig-lite-4k-0.5e-13b",
"uukuguy/speechless-orca-platypus-coig-lite-4k-0.6e-13b",
"uni-tianyan/Uni-TianYan-V1",
"TMK04/sheep-duck-llama-2-70b-v1.1-6.0bpw-h6-exl2",
"TheBloke/Stable-Platypus2-13B-AWQ",
"garage-bAInd/Platypus-7B-adapters",
"TheBloke/Sheep-Duck-Llama-2-70B-AWQ",
"DopeorNope/COLA3-7B",
"mncai/Mistral-7B-openplatypus-1k",
"mncai/Mistral-7B-Dolphin-1k",
"RichardErkhov/garage-bAInd_-_Platypus2-7B-8bits",
"RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf",
"RichardErkhov/garage-bAInd_-_Platypus2-13B-4bits",
"RichardErkhov/garage-bAInd_-_Platypus2-13B-8bits",
"RichardErkhov/garage-bAInd_-_Platypus2-13B-gguf",
"RichardErkhov/garage-bAInd_-_Stable-Platypus2-13B-gguf",
"RichardErkhov/garage-bAInd_-_Camel-Platypus2-13B-gguf"
] | [
"garage-bAInd/Open-Platypus",
"kyujinpy/KOpen-platypus",
"kyujinpy/Open-platypus-Commercial",
"weblab-GENIAC/Open-Platypus-Japanese-masked",
"botp/Open-Platypus"
] | [
"open-llm-leaderboard/open_llm_leaderboard",
"Intel/low_bit_open_llm_leaderboard",
"BAAI/open_cn_llm_leaderboard",
"open-llm-leaderboard-old/open_llm_leaderboard",
"Open-Orca/OpenOrca-Platypus2-13B",
"gsaivinay/open_llm_leaderboard",
"EvanTHU/MotionLLM",
"GTBench/GTBench",
"Vikhrmodels/small-shlepa-lb",
"felixz/open_llm_leaderboard",
"OPTML-Group/UnlearnCanvas-Benchmark",
"bardsai/performance-llm-board",
"neubla/neubla-llm-evaluation-board",
"HemaAM/GPT_train_on_LLaMa",
"anantgupta129/LitGPT-Pythia-160M",
"Satyam-Singh/garage-bAInd-Platypus2-70B",
"barunsaha/slides-wizard",
"RaviNaik/ERA-SESSION22",
"PrarthanaTS/tsai-gpt-from-scratch",
"Sijuade/GPTNEXTWORD",
"mikeee/s3nh-garage-bAInd-Stable-Platypus2-13B-GGML",
"rodrigomasini/data_only_open_llm_leaderboard",
"Docfile/open_llm_leaderboard",
"pngwn/open_llm_leaderboard-check",
"supra-e-acc/Pythia-160M-text-generate",
"TharunSivamani/GPT-Predictor",
"aydenbottos12/uni-tianyan-Uni-TianYan",
"aichampions/open_llm_leaderboard",
"Adeco/open_llm_leaderboard",
"RashiAgarwal/TSAIGPTRedPajama",
"neuralorbs/DialogGen",
"whichway/uni-tianyan-Uni-TianYan",
"anirudh937/open_llm_leaderboard",
"smothiki/open_llm_leaderboard2",
"MadhurGarg/TSAIGPTRedPajama",
"Navyabhat/ERAV1-Session-22",
"Lixense/uni-tianyan-Uni-TianYan",
"mjalg/IFEvalTR",
"VarunSivamani/GPT-From-Scratch",
"GunaKoppula/ERA-Session-22",
"Vaish2705/ERA_S22",
"h3dabeatgod/uni-tianyan-Uni-TianYan",
"smothiki/open_llm_leaderboard",
"ToletiSri/TSAI_S22",
"mdugger/garage-bAInd-Platypus2-70B",
"mkthoma/GPT_From_Scratch",
"The-Matrix/uni-tianyan-Uni-TianYan",
"hongjong/kyujinpy-KO-Platypus2-7B-ex",
"simpx/Riiid-sheep-duck-llama-2",
"venkyyuvy/GPT_redpajama",
"Saurabh418/uni-tianyan-Uni-TianYan",
"idkukkikoooii/Riiid-sheep-duck-llama-2-70b-v1.11",
"Saurabh418/Riiid-sheep-duck-llama-2",
"Zeros0sZero/garage-bAInd-Platypus2-70B-instruct",
"PSC420/uni-tianyan-Uni-TianYan",
"dackdel/Riiid-sheep-duck-llama-2-70b-v1.1",
"MattCase/Riiid-sheep-duck-llama-2",
"loganblack0/garage-bAInd-Platypus2-70B-instruct",
"sanjanatule/GPTNext",
"Edwardmonteirobr/Riiid-sheep-duck-llama-2-70b-v1.1new",
"riolu7600/llama-2-test",
"Utopian2/garage-bAInd-Platypus2-70B-instruct",
"mikeee/codellama-13b-python-ggml",
"Edwardmonteirobr/Riiid-sheep-duck-llama-2-70b-v1.1",
"vcastro8/Riiid-sheep-duck-llama-2_sl",
"blazingbunny/garage-bAInd-Platypus2-70B-instruct",
"kbmlcoding/open_llm_leaderboard_free",
"Fayzul/Riiid-sheep-duck-llama-2-70b-v1.1",
"vcastro8/Riiid-sheep-duck-llama-2",
"PeepDaSlan9/garage-bAInd-Platypus2-70B-instruct",
"cloneQ/internLMRAG",
"monkebonk/Riiid-sheep-duck-llama-2-70b-v1.1",
"mehranandi/Riiid-sheep-duck-llama-2",
"Vexvoi/garage-bAInd-Platypus2-70B-instruct",
"hujin0929/LlamaIndex_RAG",
"Aniquel/Riiid-sheep-duck-llama-2-70b-v1.1",
"asir0z/open_llm_leaderboard",
"Ragunandha/garage-bAInd-Platypus2-70B-instruct",
"flyfive0315/internLlamaIndex",
"kyungeun/llm_tasks_chat",
"scrambled-gabs/Riiid-sheep-duck-llama-2-70b-v1.1",
"fika9903/garage-bAInd-Platypus2-70B-instruct",
"Hyperion-js/Open-Orca-OpenOrca-Platypus2-13B",
"veerza/Riiid-sheep-duck-llama-2-70b-v1.1",
"saidloyens/garage-bAInd-Platypus2-70B-instruct",
"tellview/Open-Orca-OpenOrca-Platypus2-13B",
"ziedammak/Riiid-sheep-duck-llama-2-70b-v1.1",
"AV29/garage-bAInd-Platypus2-70B-instruct",
"something01/Open-Orca-OpenOrca-Platypus2-13B",
"Debargha-1225/Riiid-sheep-duck-llama-2-70b-v1.1",
"prasaugus/garage-bAInd-Platypus2-70B-instruct",
"bburli/Open-Orca-OpenOrca-Platypus2-13B",
"piyushgrover/MiniGPT_S22",
"cclarkson125/garage-bAInd-Platypus2-70B-instruct",
"AlexFierro9/Open-Orca-OpenOrca-Platypus2-13B",
"huaijin/garage-bAInd-Camel-Platypus2-70B",
"phxdev/garage-bAInd-Platypus2-70B-instruct",
"pri7ansh/Open-Orca-OpenOrca-Platypus2-13B",
"smothiki/open_llm_leaderboard_old",
"joaopaulopresa/workshop_llm_ufg_chatbot"
] | 1 | poster |