bibtex_url | proceedings | bibtext | abstract | title | authors | id | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=eNL8QJlWxc | @inproceedings{
guo2023lowa,
title={{LOWA}: Localize Objects in the Wild with Attributes},
author={Xiaoyuan Guo and Kezhen Chen and Jinmeng Rao and Yawen Zhang and Baochen Sun and Jie Yang},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=eNL8QJlWxc}
} | Existing open-vocabulary object detectors can struggle with uncommon or fine-grained classes, as the model and users may have different understandings of object names. Incorporating attributes such as color, shape, and size can help to reduce this inconsistency and make interactive detection more convenient and flexible. Motivated by this, we present LOWA, a new method for localizing objects with attributes effectively in the wild. To train LOWA, we propose a multi-step vision-language training strategy to learn object detection and recognition with class names as well as attribute information, which empowers users to flexibly customize text queries and extend to fine-grained detection with attribute and object information for a wider range of applications. LOWA is built on top of a two-tower vision-language architecture and consists of a standard vision transformer as the image encoder and a similar transformer as the text encoder. To learn the alignment between visual and text inputs at the instance level, we train LOWA with three training steps: object-level training, attribute-aware learning, and free-text joint training of objects and attributes. This training strategy first ensures correct object detection, then incorporates instance-level attribute information, and finally balances the object class and attribute sensitivity. We evaluate our model performance of attribute classification and attribute localization on the Open-Vocabulary Attribute Detection (OVAD) benchmark and the Visual Attributes in the Wild (VAW) dataset, and experiments indicate strong zero-shot performance. Ablation studies additionally demonstrate the effectiveness of each training step of our approach. | LOWA: Localize Objects in the Wild with Attributes | [
"Xiaoyuan Guo",
"Kezhen Chen",
"Jinmeng Rao",
"Yawen Zhang",
"Baochen Sun",
"Jie Yang"
] | Workshop/R0-FoMo | 2305.20047 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=eLmjmG39KP | @inproceedings{
chen2023understanding,
title={Understanding the Vulnerability of {CLIP} to Image Compression},
author={Cangxiong Chen and Vinay P. Namboodiri and Julian Padget},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=eLmjmG39KP}
} | CLIP is a widely used foundational vision-language model that is used for zero-shot image recognition and other image-text alignment tasks. We demonstrate that CLIP is vulnerable to changes in image quality under compression. This surprising result is further analysed using an attribution method, Integrated Gradients. Using this attribution method, we are able to better understand, both quantitatively and qualitatively, exactly how compression affects the zero-shot recognition accuracy of this model. We evaluate this extensively on CIFAR-10 and STL-10. Our work provides the basis to understand this vulnerability of CLIP and can help us develop more effective methods to improve the robustness of CLIP and other vision-language models. | Understanding the Vulnerability of CLIP to Image Compression | [
"Cangxiong Chen",
"Vinay P. Namboodiri",
"Julian Padget"
] | Workshop/R0-FoMo | 2311.14029 | [
"https://github.com/CangxiongChen/understanding_CLIP_vulnerability"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=cmOzZuiFPs | @inproceedings{
kirsch2023towards,
title={Towards General-Purpose In-Context Learning Agents},
author={Louis Kirsch and James Harrison and C. Freeman and Jascha Sohl-Dickstein and J{\"u}rgen Schmidhuber},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=cmOzZuiFPs}
} | Reinforcement Learning (RL) algorithms are usually hand-crafted, driven by the research and engineering of humans. An alternative approach is to automate this research process via meta-learning. A particularly ambitious objective is to automatically discover new RL algorithms from scratch that use in-context learning to learn-how-to-learn entirely from data while also generalizing to a wide range of environments. Those RL algorithms are implemented entirely in neural networks, by conditioning on previous experience from the environment, without any explicit optimization-based routine at meta-test time. To achieve generalization, this requires a broad task distribution of diverse and challenging environments. Our Transformer-based Generally Learning Agents (GLAs) are an important first step in this direction. Our GLAs are meta-trained using supervised learning techniques on an offline dataset with experiences from RL environments that is augmented with random projections to generate task diversity. During meta-testing our agents perform in-context meta-RL on entirely different robotic control problems such as Reacher, Cartpole, or HalfCheetah that were not in the meta-training distribution. | Towards General-Purpose In-Context Learning Agents | [
"Louis Kirsch",
"James Harrison",
"C. Freeman",
"Jascha Sohl-Dickstein",
"Jürgen Schmidhuber"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=c4BeWwaUiN | @inproceedings{
halbe2023hepco,
title={He{PC}o: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning},
author={Shaunak Halbe and James Smith and Junjiao Tian and Zsolt Kira},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=c4BeWwaUiN}
} | In this paper, we focus on the important yet understudied problem of Continual Federated Learning (CFL), where a server communicates with a set of clients to incrementally learn new concepts over time without sharing or storing any data. The complexity of this problem is compounded by challenges from both the Continual and Federated Learning perspectives. Specifically, models trained in a CFL setup suffer from catastrophic forgetting which is exacerbated by data heterogeneity across clients. Existing attempts at this problem tend to impose large overheads on clients and communication channels or require access to stored data which renders them unsuitable for real-world use due to privacy. We study this problem in the context of Foundation Models and showcase their effectiveness in mitigating forgetting while minimizing overhead costs and without requiring access to any stored data. We achieve this by leveraging a prompting based approach (such that only prompts and classifier heads have to be communicated) and proposing a novel and lightweight generation and distillation scheme to aggregate client models at the server. We formulate this problem for image classification and establish strong baselines for comparison, conduct experiments on CIFAR-100 as well as challenging, large-scale datasets like ImageNet-R and DomainNet. Our approach outperforms both existing methods and our own baselines by more than 7% while significantly reducing communication and client-level computation costs. | HePCo: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning | [
"Shaunak Halbe",
"James Smith",
"Junjiao Tian",
"Zsolt Kira"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=byI1dQkkf9 | @inproceedings{
kwon2023image,
title={Image Clustering Conditioned on Text Criteria},
author={Sehyun Kwon and Jaeseung Park and Minkyu Kim and Jaewoong Cho and Ernest K. Ryu and Kangwook Lee},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=byI1dQkkf9}
} | Classical clustering methods do not provide users with direct control of the clustering results, and the clustering results may not be consistent with the relevant criterion that a user has in mind. In this work, we present a new methodology for performing image clustering based on user-specified criteria in the form of text by leveraging modern Vision-Language Models and Large Language Models. We call our method Image Clustering Conditioned on Text Criteria (IC$|$TC), and it represents a different paradigm of image clustering. IC$|$TC requires a minimal and practical degree of human intervention and grants the user significant control over the clustering results in return. Our experiments show that IC$|$TC can effectively cluster images with various criteria, such as human action, physical location, or the person's mood, while significantly outperforming baselines. | Image Clustering Conditioned on Text Criteria | [
"Sehyun Kwon",
"Jaeseung Park",
"Minkyu Kim",
"Jaewoong Cho",
"Ernest K. Ryu",
"Kangwook Lee"
] | Workshop/R0-FoMo | 2310.18297 | [
"https://github.com/sehyunkwon/ictc"
] | https://huggingface.co/papers/2310.18297 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=aKSiwNGqx1 | @inproceedings{
ackermann2023on,
title={On the Relationship between Skill Neurons and Robustness in Prompt Tuning},
author={Leon Ackermann and Xenia Ohmer},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=aKSiwNGqx1}
} | Prompt Tuning is a popular parameter-efficient finetuning method for pre-trained large language models (PLMs). Recently, based on experiments with RoBERTa, it has been suggested that Prompt Tuning activates specific neurons in the transformer's feed-forward networks, that are highly predictive and selective for the given task. In this paper, we study the robustness of Prompt Tuning in relation to these "skill neurons", using RoBERTa and T5. We show that prompts tuned for a specific task are transferable to tasks of the same type but are not very robust to adversarial data, with higher robustness for T5 than RoBERTa. At the same time, we replicate the existence of skill neurons in RoBERTa and further show that skill neurons also seem to exist in T5. Interestingly, the skill neurons of T5 determined on non-adversarial data are also among the most predictive neurons on the adversarial data, which is not the case for RoBERTa. We conclude that higher adversarial robustness may be related to a model's ability to activate the relevant skill neurons on adversarial data. | On the Relationship between Skill Neurons and Robustness in Prompt Tuning | [
"Leon Ackermann",
"Xenia Ohmer"
] | Workshop/R0-FoMo | 2309.12263 | [
"https://github.com/leonackermann/robust-neurons"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=a3ZQVXD0Hv | @inproceedings{
xu2023latent,
title={Latent Skill Discovery for Chain-of-Thought Reasoning},
author={Zifan Xu and Haozhu Wang and Dmitriy Bespalov and Peter Stone and Yanjun Qi},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=a3ZQVXD0Hv}
} | Recent advances in Large Language Models (LLMs) have led to an emergent ability of chain-of-thought (CoT) prompting, a prompt reasoning strategy that adds intermediate rationale steps between questions and answers to construct prompts. Conditioned on these prompts, LLMs can effectively learn in context to generate rationales that lead to more accurate answers than when answering the same question directly. To design LLM prompts, one important setting, called demonstration selection, considers selecting demonstrations from an example bank. Existing methods use various heuristics for this selection, but for CoT prompting, which involves unique rationales, it is essential to base the selection upon the intrinsic skills that CoT rationales need, for instance, the skills of addition or subtraction for math word problems.
To address this requirement, we introduce a novel approach named Reasoning Skill Discovery (RSD) that uses unsupervised learning to create a latent space representation of rationales, called a reasoning skill. Simultaneously, RSD learns a reasoning policy to determine the required reasoning skill for a given question. This can then guide the selection of examples that demonstrate the required reasoning skills. Our approach offers several desirable properties: it is (1) theoretically grounded, (2) sample-efficient, requiring no LLM inference or manual prompt design, and (3) LLM-agnostic. Empirically, RSD outperforms existing methods by up to 6% in terms of the answer accuracy across multiple reasoning tasks. | Latent Skill Discovery for Chain-of-Thought Reasoning | [
"Zifan Xu",
"Haozhu Wang",
"Dmitriy Bespalov",
"Peter Stone",
"Yanjun Qi"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=YrYcoV2dAk | @inproceedings{
zhang2023visual,
title={Visual Cropping Improves Zero-Shot Question Answering of Multimodal Large Language Models},
author={Jiarui Zhang and Mahyar Khayatkhoei and Prateek Chhikara and Filip Ilievski},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=YrYcoV2dAk}
} | Multimodal Large Language Models (LLMs) have recently achieved promising zero-shot accuracy on visual question answering (VQA) -- a fundamental task affecting various downstream applications and domains. Given the great potential for the broad use of these models, it is important to investigate their limitations in dealing with different image and question properties. In this work, we investigate whether multimodal LLMs can perceive small details as well as large details in images. In particular, we show that their zero-shot accuracy in answering visual questions is very sensitive to the size of the visual subject of the question, declining up to $46\%$ with size. Furthermore, we show that this effect is causal by observing that human visual cropping can significantly mitigate their sensitivity to size. Inspired by the usefulness of human cropping, we then propose three automatic visual cropping methods as inference time mechanisms to improve the zero-shot performance of multimodal LLMs. We study their effectiveness on four popular VQA datasets, and a subset of the VQAv2 dataset tailored towards fine visual details. Our findings suggest that multimodal LLMs should be used with caution in detail-sensitive VQA applications, and that visual cropping is a promising direction to improve their zero-shot performance. | Visual Cropping Improves Zero-Shot Question Answering of Multimodal Large Language Models | [
"Jiarui Zhang",
"Mahyar Khayatkhoei",
"Prateek Chhikara",
"Filip Ilievski"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Yd2S8flZKm | @inproceedings{
tanneru2023quantifying,
title={Quantifying Uncertainty in Natural Language Explanations of Large Language Models},
author={Sree Harsha Tanneru and Chirag Agarwal and Himabindu Lakkaraju},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=Yd2S8flZKm}
} | Large Language Models (LLMs) are increasingly used as powerful tools for several high-stakes natural language processing (NLP) applications. Recent works on prompting claim to elicit intermediate reasoning steps and key tokens that serve as proxy explanations for LLM predictions. However, there is no certainty whether these explanations are reliable and reflect the LLM’s behavior. In this work, we make one of the first attempts at quantifying the uncertainty in explanations of LLMs. To this end, we propose two novel metrics --- $\textit{Verbalized Uncertainty}$ and $\textit{Probing Uncertainty}$ --- to quantify the uncertainty of generated explanations. While verbalized uncertainty involves prompting the LLM to express its confidence in its explanations, probing uncertainty leverages sample and model perturbations as a means to quantify the uncertainty. Our empirical analysis of benchmark datasets reveals that verbalized uncertainty is not a reliable estimate of explanation confidence. Further, we show that the probing uncertainty estimates are correlated with the faithfulness of an explanation, with lower uncertainty corresponding to explanations with higher faithfulness. Our study provides insights into the challenges and opportunities of quantifying uncertainty in LLM explanations, contributing to the broader discussion of the trustworthiness of foundation models. | Quantifying Uncertainty in Natural Language Explanations of Large Language Models | [
"Sree Harsha Tanneru",
"Chirag Agarwal",
"Himabindu Lakkaraju"
] | Workshop/R0-FoMo | 2311.03533 | [
"https://github.com/harsha070/uncertainty-quantification-nle"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=YMutYSbvVe | @inproceedings{
sun2023benchmarking,
title={Benchmarking Robustness of Text-Image Composed Retrieval},
author={Shitong Sun and Jindong Gu and Shaogang Gong},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=YMutYSbvVe}
} | Text-image composed retrieval aims to retrieve the target image through the composed query, which is specified in the form of an image plus some text that describes desired modifications to the input image. It has recently attracted attention due to its ability to leverage both information-rich images and concise language to precisely express the requirements for target images. However, the robustness of these approaches against real-world corruptions or further text understanding has never been studied. In this paper, we perform the first robustness study and establish three new diversified benchmarks for systematic analysis of text-image composed retrieval against natural corruptions in both vision and text and further probe textural understanding. For natural corruption analysis, we introduce two new large-scale benchmark datasets, CIRR-C and FashionIQ-C, for testing in the open domain and fashion domain, respectively, both of which apply 15 visual corruptions and 7 textural corruptions. For textural understanding analysis, we introduce a new diagnostic dataset CIRR-D by expanding the original raw data with synthetic data, which contains modified text so as to better probe textual understanding ability, including numerical variation, attribute variation, object removal, background variation, and fine-grained evaluation. The code and benchmark datasets are available at https://github.com/SunTongtongtong/Benchmark-Robustness-Text-Image-Compose-Retrieval. | Benchmarking Robustness of Text-Image Composed Retrieval | [
"Shitong Sun",
"Jindong Gu",
"Shaogang Gong"
] | Workshop/R0-FoMo | 2311.14837 | [
"https://github.com/suntongtongtong/benchmark-robustness-text-image-compose-retrieval"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=XoacWibt7b | @inproceedings{
adila2023foundation,
title={Foundation Models Can Robustify Themselves, For Free},
author={Dyah Adila and Changho Shin and Linrong Cai and Frederic Sala},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=XoacWibt7b}
} | Zero-shot inference is a powerful paradigm that enables the use of large pretrained models for downstream classification tasks without further training. However, these models are vulnerable to inherited biases that can impact their performance. The traditional solution is fine-tuning, but this undermines the key advantage of pretrained models, which is their ability to be used out-of-the-box. We propose RoboShot, a method that improves the robustness of pretrained model embeddings in a fully zero-shot fashion. First, we use language models (LMs) to obtain useful insights from task descriptions. These insights are embedded and used to remove harmful and boost useful components in embeddings---without any supervision. Theoretically, we provide a simple and tractable model for biases in zero-shot embeddings and give a result characterizing under what conditions our approach can boost performance. Empirically, we evaluate RoboShot on nine image and NLP classification tasks and show an average improvement of 15.98% over several zero-shot baselines. Additionally, we demonstrate that RoboShot is compatible with a variety of pretrained and language models. | Foundation Models Can Robustify Themselves, For Free | [
"Dyah Adila",
"Changho Shin",
"Linrong Cai",
"Frederic Sala"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=VxEr7qpxJo | @inproceedings{
albalak2023improving,
title={Improving Few-Shot Generalization by Exploring and Exploiting Auxiliary Data},
author={Alon Albalak and Colin Raffel and William Yang Wang},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=VxEr7qpxJo}
} | Few-shot learning is valuable in many real-world applications, but learning a generalizable model without overfitting to the few labeled datapoints is challenging. In this work, we focus on Few-shot Learning with Auxiliary Data (FLAD), a training paradigm that assumes access to auxiliary data during few-shot learning in hopes of improving generalization. Previous works have proposed automated methods for mixing auxiliary and target data, but these methods typically scale linearly (or worse) with the number of auxiliary datasets, limiting their practicality. In this work we relate FLAD to the explore-exploit dilemma that is central to the multi-armed bandit setting and derive algorithms whose computational complexity is independent of the number of auxiliary datasets, allowing us to scale to 100x more auxiliary datasets than prior methods. We propose two algorithms -- EXP3-FLAD and UCB1-FLAD -- and compare them with prior FLAD methods that either explore or exploit, finding that the combination of exploration and exploitation is crucial. Through extensive experimentation we find that our methods outperform all pre-existing FLAD methods by 4\% and lead to the first 3 billion parameter language models that outperform the 175 billion parameter GPT-3. | Improving Few-Shot Generalization by Exploring and Exploiting Auxiliary Data | [
"Alon Albalak",
"Colin Raffel",
"William Yang Wang"
] | Workshop/R0-FoMo | 2302.00674 | [
"https://github.com/alon-albalak/flad"
] | https://huggingface.co/papers/2302.00674 | 2 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=VU4h3siRAw | @inproceedings{
saxena2023predicting,
title={Predicting the Performance of Foundation Models via Agreement-on-the-line},
author={Rahul Saxena and Aman Mehra and Taeyoun Kim and Christina Baek and J Zico Kolter and Aditi Raghunathan},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=VU4h3siRAw}
} | Estimating out-of-distribution (OOD) performance is critical to safely deploying machine learning models. Recently, Baek et al showed that the phenomenon ``agreement-on-the-line'' can be a reliable method for predicting OOD accuracy of models in an ensemble consisting largely of CNNs trained from scratch. However, it is now increasingly common to lightly fine-tune foundation models, and it is unclear whether such fine-tuning is sufficient to produce enough diversity in models for such agreement-based methods to work properly. In this paper, we develop methods for reliably applying agreement-on-the-line-based performance estimation to fine-tuned foundation models. In particular, we first study the case of fine-tuning a single foundation model, where we extensively study how different types of randomness (linear head initialization, hyperparameter selection, data subsetting, and data shuffling) contribute to the agreement-on-the-line of the resulting model sets; we find, somewhat surprisingly, that it is typically possible to obtain strong agreement via random initialization of the linear head alone. Next, we study how multiple foundation models, pretrained on different data sets but fine-tuned on the same task, may or may not produce agreement; we show, again rather surprisingly, that the diversity of such models is already sufficient and not too disparate for them to all lie on the same agreement line. In total, these methods enable reliable and efficient estimation of OOD accuracy for fine-tuned foundation models, without leveraging any labeled OOD data. | Predicting the Performance of Foundation Models via Agreement-on-the-line | [
"Rahul Saxena",
"Aman Mehra",
"Taeyoun Kim",
"Christina Baek",
"J Zico Kolter",
"Aditi Raghunathan"
] | Workshop/R0-FoMo | 2404.01542 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=V70F9FByZp | @inproceedings{
yu2023automatic,
title={Automatic Hallucination Assessment for Aligned Large Language Models via Transferable Adversarial Attacks},
author={Xiaodong Yu and Hao Cheng and Xiaodong Liu and Dan Roth and Jianfeng Gao},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=V70F9FByZp}
} | Although remarkable progress has been achieved in preventing LLM hallucinations using instruction tuning and retrieval augmentation, it is currently difficult to measure the reliability of LLMs using available static data that is often not challenging enough and could suffer from data leakage. Inspired by adversarial machine learning, this paper aims to develop an automatic method for generating new evaluation data by appropriately modifying existing data on which LLMs behave faithfully. Specifically, this paper presents AutoDebug, an LLM-based framework for using prompt chaining to generate transferable adversarial attacks (in the form of question-answering examples). We seek to understand the extent to which these trigger hallucination behavior in LLMs. We first implement our framework using ChatGPT and evaluate the resulting two variants of a popular open-domain question-answering dataset, Natural Questions (NQ), on a collection of open-source and proprietary LLMs under various prompting settings. Our generated evaluation data is human-readable and, as we show, humans can answer these modified questions well. Nevertheless, we observe pronounced accuracy drops across multiple LLMs including GPT-4. Our experimental results confirm that LLMs are likely to hallucinate in two categories of question-answering scenarios where (1) there are conflicts between knowledge given in the prompt and their parametric knowledge, or (2) the knowledge expressed in the prompt is complex. Finally, the adversarial examples generated by the proposed method are transferable across all considered LLMs, making our approach viable for LLM-based debugging using more cost-effective LLMs. | Automatic Hallucination Assessment for Aligned Large Language Models via Transferable Adversarial Attacks | [
"Xiaodong Yu",
"Hao Cheng",
"Xiaodong Liu",
"Dan Roth",
"Jianfeng Gao"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=UcsXamgPtT | @inproceedings{
chitale2023task,
title={Task Arithmetic with Lo{RA} for Continual Learning},
author={Rajas Chitale and Ankit Vaidya and Aditya Kane and Archana Santosh Ghotkar},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=UcsXamgPtT}
} | Continual learning refers to the problem where the training data is available in sequential chunks, termed "tasks". The majority of progress in continual learning has been stunted by the problem of catastrophic forgetting, which is caused by sequential training of the model on streams of data. Moreover, it becomes computationally expensive to sequentially train large models multiple times. To mitigate both of these problems at once, we propose a novel method to continually train transformer-based vision models using low-rank adaptation and task arithmetic. Our method completely bypasses the problem of catastrophic forgetting, as well as reducing the computational requirement for training models on each task. When aided with a small memory of 10 samples per class, our method achieves performance close to full-set finetuning. We present rigorous ablations to support the prowess of our method. | Task Arithmetic with LoRA for Continual Learning | [
"Rajas Chitale",
"Ankit Vaidya",
"Aditya Kane",
"Archana Santosh Ghotkar"
] | Workshop/R0-FoMo | 2311.02428 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=SkEG9q1Rtw | @inproceedings{
zhou2023batch,
title={Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering},
author={Han Zhou and Xingchen Wan and Lev Proleev and Diana Mincu and Jilin Chen and Katherine Heller and Subhrajit Roy},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=SkEG9q1Rtw}
} | Prompting and in-context learning (ICL) have become efficient learning paradigms for large language models (LLMs). However, LLMs suffer from prompt brittleness and various bias factors in the prompt, including but not limited to the formatting, the choice of verbalizers, and the ICL examples. To address this problem that results in unexpected performance degradation, calibration methods have been developed to mitigate the effects of these biases while recovering LLM performance. In this work, we first conduct a systematic analysis of the existing calibration methods, where we both provide a unified view and reveal the failure cases. Inspired by these analyses, we propose Batch Calibration (BC), a simple yet intuitive method that controls the contextual bias from the batched input, unifies various prior approaches, and effectively addresses the aforementioned issues. BC is zero-shot, inference-only, and incurs negligible additional costs. We validate the effectiveness of BC with PaLM 2-(S, M, L) and CLIP models and demonstrate state-of-the-art performance over previous calibration baselines across more than 10 natural language understanding tasks. | Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering | [
"Han Zhou",
"Xingchen Wan",
"Lev Proleev",
"Diana Mincu",
"Jilin Chen",
"Katherine Heller",
"Subhrajit Roy"
] | Workshop/R0-FoMo | 2309.17249 | [
""
] | https://huggingface.co/papers/2309.17249 | 1 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=SJwXWwc47T | @inproceedings{
hewitt2023teaching,
title={Teaching language models with canonical examples},
author={John Hewitt and Sarah Li Chen and Percy Liang and Christopher D Manning},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=SJwXWwc47T}
} | It is easy to write a desirable or undesirable language model behavior (e.g., knowledge---The capital of Mauritius is Port Louis---or undesirable stereotypes---Researchers are always coldhearted) but it is difficult to make the model robustly generalize from these canonical examples. We formalize this task: a learning method takes a model and simple canonical examples and must produce a model that (1) generalizes to naturalistic examples, (2) stays within a bound of the original model's loss, and (3) performs well on a ``hard negative'' distribution to test overgeneralization. We build on the Backpack language model; its predictions take the form of a sparse weighted sum over a very large sense vector bank. We select and finetune a few Backpack senses per canonical example and find that this substantially outperforms other training methods. The Backpack we work with is only 170m parameters; yet, we find that it can improve much larger models: a product-of-experts ensemble between the 35x larger GPT-J-6B and the ratio of finetuned to pretrained Backpack outperforms finetuning GPT-J itself. | Teaching language models with canonical examples | [
"John Hewitt",
"Sarah Li Chen",
"Percy Liang",
"Christopher D Manning"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=S2FtwvKiiY | @inproceedings{
guo2023how,
title={How Do Large Multimodal Models Really Fare in Classical Vision Few-Shot Challenges? A Deep Dive},
author={Qing Guo and Prashan Wanigasekara and Jian Zheng and Jacob Zhiyuan Fang and Xinwei Deng and Chenyang Tao},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=S2FtwvKiiY}
} | Recent advances in multimodal foundational models have demonstrated marvelous in-context learning capabilities for diverse vision-language tasks. However, existing literature has mainly focused on few-shot learning tasks similar to their NLP counterparts. It is unclear whether these foundation models can also address classical vision challenges such as few-shot classification, which in some settings (e.g., 5-way 5-shot) necessitates sophisticated reasoning over several dozens of images -- a challenging task for learning systems.
In this work, we take a deep dive to probe the potential and limitations of existing multimodal models on this problem. Our investigation reveals that while these models under careful calibration can outperform dedicated visual models in complex narratable scenes, they can falter with more abstract visual inputs. Moreover, we also investigate curriculum learning and find out how it can mitigate the performance gap via smoothly bridging verbal and nonverbal reasoning for vision language tasks. | How Do Large Multimodal Models Really Fare in Classical Vision Few-Shot Challenges? A Deep Dive | [
"Qing Guo",
"Prashan Wanigasekara",
"Jian Zheng",
"Jacob Zhiyuan Fang",
"Xinwei Deng",
"Chenyang Tao"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RvmR9gOYXB | @inproceedings{
goyal2023think,
title={Think before you speak: Training Language Models With Pause Tokens},
author={Sachin Goyal and Ziwei Ji and Ankit Singh Rawat and Aditya Krishna Menon and Sanjiv Kumar and Vaishnavh Nagarajan},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=RvmR9gOYXB}
} | Language models generate responses by producing a series of tokens in immediate succession: the $(K+1)^{\rm th}$ token is an outcome of manipulating $K$ hidden vectors per layer, one vector per preceding token. What if instead we were to let the model manipulate say, $K+10$ hidden vectors, before it outputs the $(K+1)^{\rm th}$ token? We operationalize this idea by performing
training and inference on language models with a (learnable) $\textit{pause}$ token, a sequence of which is appended to the input prefix. We then delay extracting the model's outputs until the last pause token is seen, thereby allowing the model to process extra computation before committing to an answer. We empirically evaluate $\textit{pause-training}$ on decoder-only models of 1B and 130M parameters with causal pretraining on C4, and on downstream tasks covering reasoning, question-answering, general understanding and fact recall. Our main finding is that inference-time delays show gains when the model is both pre-trained and finetuned with delays. For the 1B model, we witness gains on eight tasks, most prominently, a gain of $18\\%$ EM score on the QA task of SQuAD, $8\\%$ on CommonSenseQA and $1\\%$ accuracy on the reasoning task of GSM8k. Our work raises a range of conceptual and practical future research questions on making delayed next-token prediction a widely applicable new paradigm. | Think before you speak: Training Language Models With Pause Tokens | [
"Sachin Goyal",
"Ziwei Ji",
"Ankit Singh Rawat",
"Aditya Krishna Menon",
"Sanjiv Kumar",
"Vaishnavh Nagarajan"
] | Workshop/R0-FoMo | 2310.02226 | [
""
] | https://huggingface.co/papers/2310.02226 | 0 | 2 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=RSGmZ7HZaA | @inproceedings{
khona2023stepwise,
title={Stepwise Inference in Transformers: Exploring a Synthetic Graph Navigation Task},
author={Mikail Khona and Maya Okawa and Rahul Ramesh and Kento Nishi and Robert P. Dick and Ekdeep Singh Lubana and Hidenori Tanaka},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=RSGmZ7HZaA}
} | Taking correct steps through elementary logical operations is the essence of logical reasoning, culminating in precise planning outcomes.
While such \emph{stepwise inference} approaches have demonstrated benefits in Large Language Models (LLMs), conducting an accurate quantitative evaluation is challenging, given their extensive scale, complexity, and lack of accessibility.
We introduce a minimal synthetic setup, where an autoregressive language model solves a navigation task on directed acyclic graphs (DAGs), taking inspiration from computational graphs and execution traces.
By implementing training with sample paths from start to goal node in a 'step-by-step' manner, we perform systematic experiments and develop novel analyses illustrating that stepwise navigation proves advantageous when the underlying graph is hierarchical and generalization necessitates the stitching of subpaths observed during pretraining.
Further, we observe a diversity-accuracy tradeoff while varying sampling temperature and a bias towards generating shorter paths.
We next elucidate how in-context chain-of-thought exemplars can steer the model's navigation.
Importantly, these exemplars can guide the model to follow a path of reasoning we provide, instead of relying on its potentially biased priors.
Together, this work showcases the utility and adaptability of this paradigm in exploring the complexities of logical reasoning and planning in LLMs. | Stepwise Inference in Transformers: Exploring a Synthetic Graph Navigation Task | [
"Mikail Khona",
"Maya Okawa",
"Rahul Ramesh",
"Kento Nishi",
"Robert P. Dick",
"Ekdeep Singh Lubana",
"Hidenori Tanaka"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=PFS4ffN9Yx | @inproceedings{
khattab2023dspy,
title={{DSP}y: Compiling Declarative Language Model Calls into Self-Improving Pipelines},
author={Omar Khattab and Arnav Singhvi and Paridhi Maheshwari and Zhiyuan Zhang and Keshav Santhanam and Sri Vardhamanan A and Saiful Haq and Ashutosh Sharma and Thomas T. Joshi and Hanna Moazam and Heather Miller and Matei Zaharia and Christopher Potts},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=PFS4ffN9Yx}
} | The ML community is rapidly exploring techniques for prompting language models (LMs), but existing LM pipelines often rely on hard-coded “prompt templates” discovered via trial and error. We introduce DSPy, a programming model that abstracts LM pipelines as imperative computation graphs where LMs are invoked through declarative modules. DSPy modules are parameterized so they can learn to apply compositions of prompting, finetuning, augmentation, and reasoning techniques. We design a compiler that will optimize any DSPy pipeline to maximize a given metric. We conduct two case studies and show that a few lines of DSPy allow GPT-3.5 and llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot prompting and pipelines with expert-created demonstrations. | DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines | [
"Omar Khattab",
"Arnav Singhvi",
"Paridhi Maheshwari",
"Zhiyuan Zhang",
"Keshav Santhanam",
"Sri Vardhamanan A",
"Saiful Haq",
"Ashutosh Sharma",
"Thomas T. Joshi",
"Hanna Moazam",
"Heather Miller",
"Matei Zaharia",
"Christopher Potts"
] | Workshop/R0-FoMo | 2310.03714 | [
"https://github.com/stanfordnlp/dspy"
] | https://huggingface.co/papers/2310.03714 | 8 | 30 | 1 | 13 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=NIeCTX8prp | @inproceedings{
ranjan2023fooling,
title={Fooling {GPT} with adversarial in-context examples for text classification},
author={Sudhanshu Ranjan and Chung-En Sun and Linbo Liu and Tsui-Wei Weng},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=NIeCTX8prp}
} | Deep learning-based methods helped solve NLP tasks more efficiently than traditional methods, and adversarial attacks for these methods have been extensively explored. However, Large Language Models (LLMs) have set up a new paradigm of few-shot prompting, which opens up the possibility for novel attacks. In this study, we show that LLMs can be vulnerable to adversarial prompts. We develop the first method to attack the few-shot examples in the text classification setup. We can degrade the model performance significantly during the test time by only slightly perturbing the examples based on optimization. Our method achieves a performance degradation of up to 50% without distorting the semantic meaning. | Fooling GPT with adversarial in-context examples for text classification | [
"Sudhanshu Ranjan",
"Chung-En Sun",
"Linbo Liu",
"Tsui-Wei Weng"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=NDNb6L5xjI | @inproceedings{
luo2023dricl,
title={Dr.{ICL}: Demonstration-Retrieved In-context Learning},
author={Man Luo and Xin Xu and Zhuyun Dai and Panupong Pasupat and Mehran Kazemi and Chitta Baral and Vaiva Imbrasaite and Vincent Y Zhao},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=NDNb6L5xjI}
} | In-context learning (ICL), which teaches a large language model (LLM) to perform a task with few-shot demonstrations rather than adjusting the model parameters, has emerged as a strong paradigm for using LLMs. While early studies primarily used a fixed or random set of demonstrations for all test queries, recent research suggests that retrieving semantically similar demonstrations to the input from a pool of available demonstrations results in better performance. This work expands the applicability of retrieval-based ICL approaches along several dimensions. We extend the success of retrieval-based ICL to instruction-finetuned LLMs as well as Chain-of-Thought (CoT) prompting. While the prior work utilizes general Large Language Models (LLMs), such as GPT-3, we find that retrieved demonstrations also enhance instruction-finetuned LLMs. This insight implies that training data, despite being exposed during the fine-tuning phase, can still be effectively used through retrieval and in-context demonstrations during testing, resulting in superior outcomes when compared to utilizing no demonstrations or selecting them at random. For CoT, when the demonstrations contain reasoning chains, we get improvements by retrieving based on such chains. Finally, we train a task-specific demonstration retriever that outperforms off-the-shelf retrievers. | Dr.ICL: Demonstration-Retrieved In-context Learning | [
"Man Luo",
"Xin Xu",
"Zhuyun Dai",
"Panupong Pasupat",
"Mehran Kazemi",
"Chitta Baral",
"Vaiva Imbrasaite",
"Vincent Y Zhao"
] | Workshop/R0-FoMo | 2305.14128 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=MpDSo3Rglq | @inproceedings{
zhang2023trained,
title={Trained Transformers Learn Linear Models In-Context},
author={Ruiqi Zhang and Spencer Frei and Peter Bartlett},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=MpDSo3Rglq}
} | Attention-based neural network sequence models such as transformers have the capacity to act as supervised learning algorithms: They can take as input a sequence of labeled examples and output predictions for unlabeled test examples. Indeed, recent work by Garg et al. has shown that when training GPT2 architectures over random instances of linear regression problems, these models' predictions mimic those of ordinary least squares. Towards understanding the mechanisms underlying this phenomenon, we investigate the dynamics of in-context learning of linear predictors for a transformer with a single linear self-attention layer trained by gradient flow. We show that despite the non-convexity of the underlying optimization problem, gradient flow with a random initialization finds a global minimum of the objective function. Moreover, when given a prompt of labeled examples from a new linear prediction task, the trained transformer achieves small prediction error on unlabeled test examples. We further characterize the behavior of the trained transformer under distribution shifts. | Trained Transformers Learn Linear Models In-Context | [
"Ruiqi Zhang",
"Spencer Frei",
"Peter Bartlett"
] | Workshop/R0-FoMo | 2306.09927 | [
""
] | https://huggingface.co/papers/2306.09927 | 1 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=M958yKkxe9 | @inproceedings{
manuvinakurike2023zeroshot,
title={Zero-shot Conversational Summarization Evaluations with small Large Language Models},
author={Ramesh Manuvinakurike and Saurav Sahay and Sangeeta Manepalli and Lama Nachman},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=M958yKkxe9}
} | Large Language Models (LLMs) exhibit powerful summarization abilities. However, their capabilities in conversational summarization remain underexplored. In this work, we evaluate LLMs (~10 billion parameters) on conversational summarization and showcase their performance on various prompts. We show that the summaries generated by the models depend on the instructions, and that the performance of LLMs varies with different instructions, sometimes resulting in a steep drop in ROUGE scores if prompts are not selected carefully. We also evaluate the models with human evaluations and discuss the limitations of the models on conversational summarization. | Zero-shot Conversational Summarization Evaluations with small Large Language Models | [
"Ramesh Manuvinakurike",
"Saurav Sahay",
"Sangeeta Manepalli",
"Lama Nachman"
] | Workshop/R0-FoMo | 2311.18041 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=LMg88bFhNJ | @inproceedings{
panwar2023incontext,
title={In-Context Learning and Bayesian Inference},
author={Madhur Panwar and Kabir Ahuja and Navin Goyal},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=LMg88bFhNJ}
} | In-context learning (ICL) is one of the surprising and useful features of large language models and subject of intense research. Recently, stylized meta-learning-like ICL setups have been devised that train transformers on sequences of input-output pairs $(x, f(x))$ using the language modeling loss. The function $f$ comes from a function class and generalization is checked by evaluation on sequences for unseen functions from the same class. One of the main discoveries in this line of research has been that for several function classes, such as linear regression, transformers successfully generalize to new functions in the class. However, it is unclear if transformers trained on multiple function classes (a setup closer to that of real-world LLMs) also exhibit this generalization. Moreover, the inductive biases of these models resulting in this generalization are not clearly understood. A model with unlimited training data and compute is a Bayesian predictor: it learns the pretraining distribution. In this paper, we empirically examine how far this Bayesian perspective can help us understand ICL. To this end, we generalize the previous meta-ICL setup to hierarchical meta-ICL setup which involves unions of multiple task families. We instantiate this setup on a diverse range of linear and nonlinear function families and find that transformers can do ICL in this setting as well. Where Bayesian inference is tractable, we find evidence that high-capacity transformers mimic the Bayesian predictor. Via the example of learning Fourier series, we also study the inductive bias for in-context learning. We find that in-context learning may or may not have simplicity bias depending on the pretraining data distribution. The Bayesian perspective provides insights into these inductive biases and how transformers perform a particular task when trained on multiple tasks. | In-Context Learning and Bayesian Inference | [
"Madhur Panwar",
"Kabir Ahuja",
"Navin Goyal"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=LBzGS2j4m4 | @inproceedings{
tsao2023autovp,
title={Auto{VP}: An Automated Visual Prompting Framework and Benchmark},
author={Hsi-Ai Tsao and Lei Hsiung and Pin-Yu Chen and Sijia Liu and Tsung-Yi Ho},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=LBzGS2j4m4}
} | Visual prompting (VP) is an emerging parameter-efficient fine-tuning approach to adapting pre-trained vision models to solve various downstream image-classification tasks. However, there has hitherto been little systematic study of the design space of VP and no clear benchmark for evaluating its performance. To bridge this gap, we propose AutoVP, an end-to-end expandable framework for automating VP design choices, along with 12 downstream image-classification tasks that can serve as a holistic VP-performance benchmark. Our design space covers 1) the joint optimization of the prompts; 2) the selection of pre-trained models, including image classifiers and text-image encoders; and 3) model output mapping strategies, including nonparametric and trainable label mapping. Our extensive experimental results show that AutoVP outperforms the best-known current VP methods by a substantial margin, having up to 6.7% improvement in accuracy; and attains a maximum performance increase of 27.5% compared to linear-probing (LP) baseline. AutoVP thus makes a two-fold contribution: serving both as an efficient tool for hyperparameter tuning on VP design choices, and as a comprehensive benchmark that can reasonably be expected to accelerate VP’s development. The source code is available at [https://github.com/IBM/AutoVP](https://github.com/IBM/AutoVP). | AutoVP: An Automated Visual Prompting Framework and Benchmark | [
"Hsi-Ai Tsao",
"Lei Hsiung",
"Pin-Yu Chen",
"Sijia Liu",
"Tsung-Yi Ho"
] | Workshop/R0-FoMo | 2310.08381 | [
"https://github.com/IBM/AutoVP"
] | https://huggingface.co/papers/2310.08381 | 1 | 1 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=KIhFggzePM | @inproceedings{
ramesh2023how,
title={How Capable Can a Transformer Become? A Study on Synthetic, Interpretable Tasks},
author={Rahul Ramesh and Mikail Khona and Robert P. Dick and Hidenori Tanaka and Ekdeep Singh Lubana},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=KIhFggzePM}
} | Transformers trained on huge text corpora exhibit a remarkable set of capabilities. Given the inherent compositional nature of language, one can expect the model to learn to compose these capabilities, potentially yielding a combinatorial explosion of what operations it can perform on an input. Motivated by the above, we aim to assess in this paper "how capable can a transformer become?". In this work, we train Transformer models on a data-generating process that involves compositions of a set of well-defined monolithic capabilities and show that: (1) Transformers generalize to exponentially or even combinatorially many functions not seen in the training data; (2) Transformers that generate the intermediate outputs of the composition are more effective at generalizing to unseen compositions; (3) The training data has a significant impact on the model's ability to compose functions (4) Attention layers in the latter half of the model seem critical to compositionality. | How Capable Can a Transformer Become? A Study on Synthetic, Interpretable Tasks | [
"Rahul Ramesh",
"Mikail Khona",
"Robert P. Dick",
"Hidenori Tanaka",
"Ekdeep Singh Lubana"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Jd8mD3SU8j | @inproceedings{
huq2023whats,
title={What{\textquoteright}s important here?: Opportunities and Challenges of {LLM} in retrieving information from Web Interface},
author={Faria Huq and Jeffrey P. Bigham and Nikolas Martelaro},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=Jd8mD3SU8j}
} | Large language models (LLMs) that have been trained on large corpora of code exhibit a remarkable ability to understand HTML code [1]. As web interfaces are mainly constructed using HTML, we designed an in-depth study to see how the code understanding ability of LLMs can be used to retrieve and locate important elements for a user-given query (i.e. task description) in a web interface. In contrast with prior works, which primarily focused on autonomous web navigation, we decompose the problem into an even more atomic operation: can LLMs find the important information in the web page for a user-given query? This decomposition enables us to scrutinize the current capabilities of LLMs and uncover the opportunities and challenges they present. Our empirical experiments show that, while the LLMs exhibit a reasonable level of competence, there is still substantial room for improvement. We hope our investigation will inspire follow-up works in overcoming the current challenges in this domain. | What’s important here?: Opportunities and Challenges of LLM in retrieving information from Web Interface | [
"Faria Huq",
"Jeffrey P. Bigham",
"Nikolas Martelaro"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=IaNJC1IRds | @inproceedings{
anand2023one,
title={One shot localization and segmentation of medical images with Foundation Models},
author={Deepa Anand and Gurunath Reddy and Vanika Singhal and Dattesh D. Shanbhag and Shriram KS and Uday Patil and Chitresh Bhushan and Kavitha Manickam and Dawei Gui and Rakesh Mullick and Avinash Gopal and Parminder Bhatia and Taha Kass-Hout},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=IaNJC1IRds}
} | Recent advances in Vision Transformers (ViT) and Stable Diffusion (SD) models with their ability to capture rich semantic features of the image have been used for image correspondence tasks on natural images. In this paper, we examine the ability of a variety of pre-trained ViT (DINO, DINOv2, SAM, CLIP) and SD models, trained exclusively on natural images, for solving the correspondence problems on medical images. While many works have made a case for in-domain training, we show that the models trained on natural images can offer good performance on medical images across different modalities (CT,MR,Ultrasound) sourced from various manufacturers, over multiple anatomical regions (brain, thorax, abdomen, extremities), and on wide variety of tasks. Further, we leverage the correspondence with respect to a template image to prompt a Segment Anything (SAM) model to arrive at single shot segmentation, achieving dice range of 62%-90% across tasks, using just one image as reference. We also show that our single-shot method outperforms the recently proposed few-shot segmentation method - UniverSeg (Dice range 47%-80%) on most of the semantic segmentation tasks(six out of seven) across medical imaging modalities. | One shot localization and segmentation of medical images with Foundation Models | [
"Deepa Anand",
"Gurunath Reddy",
"Vanika Singhal",
"Dattesh D. Shanbhag",
"Shriram KS",
"Uday Patil",
"Chitresh Bhushan",
"Kavitha Manickam",
"Dawei Gui",
"Rakesh Mullick",
"Avinash Gopal",
"Parminder Bhatia",
"Taha Kass-Hout"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=HAqPAqztEU | @inproceedings{
juneja2023a,
title={A Universal Prompt Generator for Large Language Models},
author={Gurusha Juneja and Amit Sharma},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=HAqPAqztEU}
} | LLMs are primarily reliant on high-quality and task-specific prompts. However, the prompt engineering process relies on clever heuristics and requires multiple iterations. Some recent works attempt to automate this process by improving upon human written prompts. However, creating high-quality prompts from scratch is still an unresolved challenge owing to its inherent complexity. In this work, we propose UniPrompt, a novel technique for generating high-quality human-like prompts from scratch. To do so, we identify characteristic features of human-generated prompts such as being detailed and consisting of multiple sections. Our proposed method, UniPrompt, takes as input a single sentence description of the task and generates human-like sectioned prompts using an auxiliary language model. We train the model in two stages. First, the model is finetuned on multiple tasks using a novel dataset curated using GPT-4 across over 500 tasks. Second, we align the auxiliary model to generate task-relevant (high accuracy) prompts by collecting a prompt preference dataset and optimizing the model using the Direct Preference Optimization method. Importantly, UniPrompt is task-agnostic: once trained, it can be used to generate prompts for any task. We find that UniPrompt outperforms human-generated prompts, GPT-generated prompts, and other prompt optimization techniques across diverse tasks on medicine, causality, and hate speech by up to 5.1 %, 7.2 %, and 11.1 % respectively. | A Universal Prompt Generator for Large Language Models | [
"Gurusha Juneja",
"Amit Sharma"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=FJo2lroF7R | @inproceedings{
madaan2023automix,
title={AutoMix: Mixing Models with Few-shot Self and Meta Verification},
author={Aman Madaan and Pranjal Aggarwal and Ankit Anand and Srividya Pranavi Potharaju and Swaroop Mishra and Pei Zhou and Aditya Gupta and Dheeraj Rajagopal and Yiming Yang and Shyam Upadhyay and Mausam . and Manaal Faruqui},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=FJo2lroF7R}
} | Large language models (LLMs) are now available in various sizes and configurations from cloud API providers. While this diversity offers a broad spectrum of choices, effectively leveraging the options to optimize computational cost and performance remains challenging. In this work, we present AutoMix, an approach that strategically routes queries to larger LMs, based on the approximate correctness of outputs from a smaller LM. Central to AutoMix is a few-shot self-verification mechanism, which estimates the reliability of its own outputs without requiring training. Given that verifications can be noisy, we employ a meta verifier in AutoMix to refine the accuracy of these assessments. Our experiments using LLAMA2-13B and LLAMA2-70B on five context-grounded reasoning datasets demonstrate that AutoMix surpasses established baselines, improving the incremental benefit per cost by up to 57%. | AutoMix: Mixing Models with Few-shot Self and Meta Verification | [
"Aman Madaan",
"Pranjal Aggarwal",
"Ankit Anand",
"Srividya Pranavi Potharaju",
"Swaroop Mishra",
"Pei Zhou",
"Aditya Gupta",
"Dheeraj Rajagopal",
"Yiming Yang",
"Shyam Upadhyay",
"Mausam .",
"Manaal Faruqui"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=EztQmfnMLg | @inproceedings{
lin2023coded,
title={Coded Prompts for Large Language Models},
author={Ziqian Lin and Yicong Chen and Yuchen Zeng and Kangwook Lee},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=EztQmfnMLg}
} | While Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks and various prompting techniques have been proposed, there remains room for performance enhancement. In this work, we introduce a novel dimension to prompt design -- *coded prompts* for LLM inference. Drawing inspiration from coding theory, where coded symbols communicate or store functions of multiple information symbols, we design coded prompts to process multiple inputs simultaneously. We validate this approach through experiments on two distinct tasks: identifying the maximum prime number within a range and sentence toxicity prediction. Our results indicate that coded prompts can indeed improve task performance. We believe that coded prompts will pave a new way for innovative strategies to enhance the efficiency and effectiveness of LLMs. | Coded Prompts for Large Language Models | [
"Ziqian Lin",
"Yicong Chen",
"Yuchen Zeng",
"Kangwook Lee"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=EEIPgU1oO6 | @inproceedings{
esfandiari2023deep,
title={Deep Embedded Clustering in Few-shot Representations ({DEC}i{FR})},
author={Yasaman Esfandiari and Rodolfo Valiente Romero and Amir Rahimi},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=EEIPgU1oO6}
} | Few-shot Learning has been the center of attention in the deep learning community as it can potentially address the problem of data inaccessibility. Several approaches have been proposed to learn from a few samples efficiently; nevertheless, the majority of them use a large dataset to generalize the feature representation obtained from a single or pre-defined set of backbones before adapting to novel classes. In this paper, different from prior works that use a single best-performing backbone, we present a model-agnostic framework that does not require "deciphering" which backbone is more suitable for the specific FSL task. We propose the Deep Embedded Clustering in Few-shot Representations (DECiFR) algorithm that leverages Deep Embedded Clustering (DEC) to abstract discriminative information from the best combination of features from different backbones, by simultaneously mapping and clustering feature representations using deep neural networks. Subsequently, we propose a contrastive variant of KNN to enhance the cluster separation by propagating through the samples that minimize the inter-class distance and maximize the intra-class distance.
Empirical results show that our approach not only enhances the feature embeddings but also boosts the classification accuracy, approaching or surpassing state-of-the-art performance on numerous datasets. | Deep Embedded Clustering in Few-shot Representations (DECiFR) | [
"Yasaman Esfandiari",
"Rodolfo Valiente Romero",
"Amir Rahimi"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ED7E1fUAk2 | @inproceedings{
fereydooni2023divide,
title={Divide and Conquer: Two-Level Problem Remodeling for Large-Scale Few-Shot Learning},
author={Mohamadreza Fereydooni and Hosein Hasani and Ali Razghandi and Mahdieh Soleymani Baghshah},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=ED7E1fUAk2}
} | Few-shot learning methods have achieved notable performance in recent years. However, few-shot learning in large-scale settings with hundreds of classes is still challenging.
In this paper, we tackle the problems of large-scale few-shot learning by taking advantage of pre-trained foundation models. We recast the original problem in two levels with different granularity. At the coarse-grained level, we introduce a novel object recognition approach with robustness to sub-population shifts. At the fine-grained level, generative experts are designed for few-shot learning, specialized for different superclasses.
A Bayesian schema is considered to combine coarse-grained information with fine-grained predictions in a winner-takes-all fashion.
Extensive experiments on large-scale datasets and different architectures show that the proposed method is both effective and efficient besides its simplicity and natural problem remodeling. The code is publicly available at https://github.com/mohamadreza99/divide_and_conquer. | Divide and Conquer: Two-Level Problem Remodeling for Large-Scale Few-Shot Learning | [
"Mohamadreza Fereydooni",
"Hosein Hasani",
"Ali Razghandi",
"Mahdieh Soleymani Baghshah"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CiRnwYfXuU | @inproceedings{
mehrabi2023jab,
title={{JAB}: Joint Adversarial Prompting and Belief Augmentation},
author={Ninareh Mehrabi and Palash Goyal and Anil Ramakrishna and Jwala Dhamala and Shalini Ghosh and Richard Zemel and Kai-Wei Chang and Aram Galstyan and Rahul Gupta},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=CiRnwYfXuU}
} | With the recent surge of language models in different applications, attention to safety and robustness of these models has gained significant importance. Here we introduce a joint framework in which we simultaneously probe and improve the robustness of a black-box target model via adversarial prompting and belief augmentation using iterative feedback loops. This framework utilizes an automated red teaming approach to probe the target model, along with a belief augmenter to generate instructions for the target model to improve its robustness to those adversarial probes. Importantly, the adversarial model and the belief generator leverage the feedback from past interactions to improve the effectiveness of the adversarial prompts and beliefs, respectively. In our experiments, we demonstrate that such a framework can reduce toxic content generation both in dynamic cases where an adversary directly interacts with a target model and static cases where we use a static benchmark dataset to evaluate our model. | JAB: Joint Adversarial Prompting and Belief Augmentation | [
"Ninareh Mehrabi",
"Palash Goyal",
"Anil Ramakrishna",
"Jwala Dhamala",
"Shalini Ghosh",
"Richard Zemel",
"Kai-Wei Chang",
"Aram Galstyan",
"Rahul Gupta"
] | Workshop/R0-FoMo | 2311.09473 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CaXs5JGpzd | @inproceedings{
hajali2023functionconstrained,
title={Function-constrained Program Synthesis},
author={Patrick Anthony Hajali and Ignas Budvytis},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=CaXs5JGpzd}
} | This work introduces: (1) a technique that allows pre-trained large language models (LLMs) to leverage user-provided code when solving programming tasks and (2) a method to iteratively generate modular sub-functions that can aid future code generation attempts when the initial code generated by the LLM is inadequate. Generating computer programs in general-purpose programming languages like Python poses a challenge for LLMs when restricted to using only code provided in the prompt. A naive approach is to present a chat-based LLM (e.g. GPT-4, Claude) with relevant code snippets and prompt the model to synthesize the target algorithm using the provided code. Alternatively, code-specific LLMs (e.g. GitHub Copilot, CodeLlama2) can generate code completions in real-time by drawing on all code available in the integrated development environment. However, restricting code-specific LLMs to use only in-context code is not straightforward, as the model is not explicitly instructed to use the user-generated code and users cannot highlight precisely which snippets of code the model should incorporate into its context for subsequent code-generations. Moreover, chat and code LLMs lack effective recovery methods, forcing users to iteratively re-prompt the model with modified prompts until a sufficient solution is reached.
Our method differs from traditional LLM-powered code-generation by constraining code-generation to an explicit function set and enabling recovery from failed attempts through automatically generated sub-functions. When the LLM cannot produce working code, we generate modular sub-functions to aid subsequent attempts at generating functional code. A by-product of our method is a library of reusable sub-functions that can solve related tasks (imitating a software team where efficiency scales with experience).
We also introduce a new “half-shot” evaluation paradigm that provides tighter estimates of LLMs' coding abilities compared to traditional zero-shot evaluation. Our proposed method encourages models to output solutions in a structured format, decreasing syntax errors that can be mistaken for poor coding ability. | Function-constrained Program Synthesis | [
"Patrick Anthony Hajali",
"Ignas Budvytis"
] | Workshop/R0-FoMo | 2311.15500 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CPpkklQWQW | @inproceedings{
nguyen2023on,
title={On the Out of Distribution Robustness of Foundation Models in Medical Image Segmentation},
author={Duy Minh Ho Nguyen and Tan Ngoc Pham and Nghiem Tuong Diep and Nghi Quoc Phan and Quang Pham and Vinh Tong and Binh T. Nguyen and Ngan Hoang Le and Nhat Ho and Pengtao Xie and Daniel Sonntag and Mathias Niepert},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=CPpkklQWQW}
} | Constructing a robust model that can effectively generalize to test samples under
distribution shifts remains a significant challenge in the field of medical imaging. The foundational models for vision and language, pre-trained on extensive sets of natural image and text data, have emerged as a promising approach. It showcases impressive learning abilities across different tasks with the need for only a limited amount of annotated samples. While numerous techniques have
focused on developing better fine-tuning strategies to adapt these models for specific domains, we instead examine their robustness to domain shifts in the medical image segmentation task. To this end, we compare the generalization performance to unseen domains of various pre-trained models after being fine-tuned on the same in-distribution dataset and show that foundation-based models enjoy better robustness than other architectures. From here, we further developed a new Bayesian uncertainty estimation for frozen models and used it as an indicator to characterize the model’s performance on out-of-distribution (OOD) data, proving particularly beneficial for real-world applications. Our experiments not only reveal the limitations of current indicators like accuracy on the line or agreement on the line commonly used in natural image applications but also emphasize the promise of the introduced Bayesian uncertainty. Specifically, lower uncertainty predictions
usually correspond to higher out-of-distribution (OOD) performance. | On the Out of Distribution Robustness of Foundation Models in Medical Image Segmentation | [
"Duy Minh Ho Nguyen",
"Tan Ngoc Pham",
"Nghiem Tuong Diep",
"Nghi Quoc Phan",
"Quang Pham",
"Vinh Tong",
"Binh T. Nguyen",
"Ngan Hoang Le",
"Nhat Ho",
"Pengtao Xie",
"Daniel Sonntag",
"Mathias Niepert"
] | Workshop/R0-FoMo | 2311.11096 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=AwEQ0YrW17 | @inproceedings{
dun2023sweeping,
title={Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for {LLM} Task Adaptation},
author={Chen Dun and Mirian Del Carmen Hipolito Garcia and Guoqing Zheng and Ahmed Hassan Awadallah and Anastasios Kyrillidis and Robert Sim},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=AwEQ0YrW17}
} | Large Language Models (LLMs) have the ability to solve a variety of tasks, such as text summarization and mathematical questions, just out of the box, but they are often trained with a single task in mind.
Due to high computational costs, the current trend is to use prompt instruction tuning to better adjust monolithic, pretrained LLMs for new --but often individual-- downstream tasks.
Thus, how one would expand prompt tuning to handle --concomitantly-- heterogeneous tasks and data distributions is a widely open question.
To address this gap, we suggest the use of Mixture of Prompts, or MoPs, associated with smart gating functionality: the latter --whose design is one of the contributions of this paper-- can identify relevant skills embedded in different groups of prompts and dynamically assign combined experts (i.e., collection of prompts), based on the target task.
Additionally, MoPs are empirically agnostic to any model compression technique applied --for efficiency reasons-- as well as instruction data source and task composition.
In practice, MoPs can simultaneously mitigate prompt training "interference" in multi-task, multi-source scenarios (e.g., task and data heterogeneity across sources), as well as possible implications from model approximations.
As a highlight, MoPs manage to decrease final perplexity from $\sim20\%$ up to $\sim70\%$, as compared to baselines, in the federated scenario, and from $\sim 3\%$ up to $\sim30\%$ in the centralized scenario. | Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation | [
"Chen Dun",
"Mirian Del Carmen Hipolito Garcia",
"Guoqing Zheng",
"Ahmed Hassan Awadallah",
"Anastasios Kyrillidis",
"Robert Sim"
] | Workshop/R0-FoMo | 2310.02842 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=AJiBZ1BPH5 | @inproceedings{
zhang2023zeroshot,
title={Zero-shot Improvement of Object Counting with {CLIP}},
author={Ruisu Zhang and Yicong Chen and Kangwook Lee},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=AJiBZ1BPH5}
} | We focus on the object counting limitations of vision-language models, with a particular emphasis on Contrastive Language-Image Pre-Training (CLIP) models. We assess the counting performance of CLIP using a custom dataset, which uncovers significant variations across diverse objects. To address this, we introduce a zero-shot, training-free method aimed at improving counting accuracy by manipulating the text embedding space of CLIP. Through comprehensive experiments, we demonstrate that our method not only enhances the counting capabilities of CLIP but also boosts the performance of text-to-image generative models like Stable Diffusion, particularly in generating images with precise object counts. | Zero-shot Improvement of Object Counting with CLIP | [
"Ruisu Zhang",
"Yicong Chen",
"Kangwook Lee"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=9Tze4oy4lw | @inproceedings{
albalak2023efficient,
title={Efficient Online Data Mixing For Language Model Pre-Training},
author={Alon Albalak and Liangming Pan and Colin Raffel and William Yang Wang},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=9Tze4oy4lw}
} | The data used to pretrain large language models has a decisive impact on a model’s downstream performance, which has led to a large body of work on data selection methods that aim to automatically determine the most suitable data to use for pretraining. Existing data selection methods suffer from slow and computationally expensive processes, a problem amplified by the increasing size of models and of pretraining datasets. Data mixing, on the other hand, reduces the complexity of data selection by grouping data points together and determining sampling probabilities across entire groups. However, data mixing proportions are typically fixed before training and therefore cannot adapt to changing training dynamics. To address these limitations, we develop an efficient algorithm for Online Data Mixing (ODM) that combines elements from both data selection and data mixing. Based on multi-armed bandit algorithms, our online approach optimizes the data mixing proportions during training. Remarkably, our method trains a model that reaches the final perplexity of the next best method with 19% fewer training iterations, and improves performance on the 5-shot MMLU benchmark by 1.9% relative accuracy, while adding negligible wall-clock time during pretraining. | Efficient Online Data Mixing For Language Model Pre-Training | [
"Alon Albalak",
"Liangming Pan",
"Colin Raffel",
"William Yang Wang"
] | Workshop/R0-FoMo | 2312.02406 | [
""
] | https://huggingface.co/papers/2312.02406 | 1 | 1 | 0 | 4 | [
"akswelh/NEOX"
] | [] | [] | [
"akswelh/NEOX"
] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=9Eu2NMT0Ya | @inproceedings{
jacob2023the,
title={The Consensus Game: Language Model Generation via Equilibrium Search},
author={Athul Paul Jacob and Yikang Shen and Gabriele Farina and Jacob Andreas},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=9Eu2NMT0Ya}
} | When applied to question answering and other text generation tasks, language models (LMs) may be queried generatively (by sampling answers from their output distribution) or discriminatively (by using them to score or rank a set of candidate answers). These procedures sometimes yield very different predictions. How do we reconcile mutually incompatible scoring procedures to obtain coherent LM predictions? We introduce a new, training-free, game-theoretic procedure for language model decoding. Our approach casts language model decoding as a regularized imperfect-information sequential signaling game—which we term the consensus game—in which a generator seeks to communicate an abstract correctness parameter using natural language sentences to a discriminator. We develop computational procedures for finding approximate equilibria of this game, resulting in a decoding algorithm we call equilibrium-ranking. Applied to a large number of tasks (including reading comprehension, commonsense reasoning, mathematical problem-solving, and assistive dialog), equilibrium-ranking consistently improves performance over existing LM decoding procedures. These improvements are sometimes substantial—on multiple benchmarks, we observe that applying equilibrium-ranking to LLaMA-7B outperforms the much larger LLaMA-65B and PaLM-540B models. | The Consensus Game: Language Model Generation via Equilibrium Search | [
"Athul Paul Jacob",
"Yikang Shen",
"Gabriele Farina",
"Jacob Andreas"
] | Workshop/R0-FoMo | 2310.09139 | [
""
] | https://huggingface.co/papers/2310.09139 | 2 | 12 | 3 | 4 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=8KgUJqPUOb | @inproceedings{
sakhinana2023crossmodal,
title={Cross-Modal Learning for Chemistry Property Prediction: Large Language Models Meet Graph Machine Learning},
author={Sagar Sakhinana and Venkataramana Runkana},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=8KgUJqPUOb}
} | In the field of chemistry, the objective is to create novel molecules with desired properties, facilitating accurate property predictions for applications such as material design and drug screening. However, existing graph deep learning methods face limitations that curb their expressive power. To address this, we explore the integration of vast molecular domain knowledge from Large Language Models
(LLMs) with the complementary strengths of Graph Neural Networks (GNNs) to enhance performance in property prediction tasks. We introduce a Multi-Modal Fusion (MMF) framework that synergistically harnesses the analytical prowess of GNNs and the linguistic generative and predictive abilities of LLMs, thereby improving accuracy and robustness in predicting molecular properties. Our framework
combines the effectiveness of GNNs in modeling graph-structured data with the zero-shot and few-shot learning capabilities of LLMs, enabling improved predictions while reducing the risk of overfitting. Furthermore, our approach effectively addresses distributional shifts, a common challenge in real-world applications, and showcases the efficacy of learning cross-modal representations, surpassing
state-of-the-art baselines on benchmark datasets for property prediction tasks. | Cross-Modal Learning for Chemistry Property Prediction: Large Language Models Meet Graph Machine Learning | [
"Sagar Sakhinana",
"Venkataramana Runkana"
] | Workshop/R0-FoMo | 2408.14964 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=7mEOK0EnbY | @inproceedings{
panigrahi2023trainable,
title={Trainable Transformer in Transformer},
author={Abhishek Panigrahi and Sadhika Malladi and Mengzhou Xia and Sanjeev Arora},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=7mEOK0EnbY}
} | Recent works attribute the capability of in-context learning (ICL) in large pre-trained language models to implicitly simulating and fine-tuning an internal model (e.g., linear or 2-layer MLP) during inference. However, such constructions require large memory overhead, which makes simulation of more sophisticated internal models intractable. In this work, we propose a new efficient construction, Transformer in Transformer (in short, TINT), that allows a transformer to simulate and fine-tune more complex models during inference (e.g., pre-trained language models). In particular, we introduce innovative approximation techniques that allow a TINT model with less than 2 billion parameters to simulate and fine-tune a 125 million parameter transformer model within a single forward pass. TINT accommodates many common transformer variants and its design ideas also improve the efficiency of past instantiations of simple models inside transformers. We conduct end-to-end experiments to validate the internal fine-tuning procedure of TINT on various language modeling and downstream tasks. For example, even with a limited one-step budget, we observe that TINT for an OPT-125M model improves performance by 4-16% absolute on average compared to OPT-125M. These findings suggest that large pre-trained language models are capable of performing intricate subroutines. To facilitate further work, a modular and extensible codebase for TINT will be open-sourced. | Trainable Transformer in Transformer | [
"Abhishek Panigrahi",
"Sadhika Malladi",
"Mengzhou Xia",
"Sanjeev Arora"
] | Workshop/R0-FoMo | 2307.01189 | [
"https://github.com/abhishekpanigrahi1996/transformer_in_transformer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=7jmtHtv9Ch | @inproceedings{
li2023overprompt,
title={OverPrompt: Enhancing Chat{GPT} through Efficient In-Context Learning},
author={Jiazheng Li and Runcong Zhao and Yongxin Yang and Yulan He and Lin Gui},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=7jmtHtv9Ch}
} | The remarkable performance of pre-trained large language models has revolutionised various natural language processing applications. Due to huge parameter sizes and extensive running costs, companies or organisations tend to transfer the models to the target task by zero-shot prompting techniques. However, the prohibitive costs of tokens and time have hindered their adoption in applications. We propose OverPrompt, leveraging the in-context learning capability of LLMs to handle multiple task inputs, thereby reducing token and time costs. This approach could potentially improve task performance during API queries due to better conditional distribution mapping. Evaluated across diverse classification datasets, our experiments show that OverPrompt can achieve cost-efficient zero-shot classification without causing significant detriment to task performance, and in some cases, even improving it. An ablation study conducted on various LLMs, along with an investigation into the robustness of our prompting strategy to different input ordering, offers valuable insights into the broader applicability of our method across diverse tasks. These findings also suggest a more seamless integration of our method with LLMs through an API. | OverPrompt: Enhancing ChatGPT through Efficient In-Context Learning | [
"Jiazheng Li",
"Runcong Zhao",
"Yongxin Yang",
"Yulan He",
"Lin Gui"
] | Workshop/R0-FoMo | 2305.14973 | [
"https://github.com/lijiazheng99/overprompt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=7MEIYPueMd | @inproceedings{
allen2023fewshot,
title={Fewshot learning on global multimodal embeddings for earth observation tasks},
author={Matthew Allen and Francisco Dorr and Joseph Alejandro Gallego Mejia and Laura Mart{\'\i}nez-Ferrer and Anna Jungbluth and Freddie Kalaitzis and Ra{\'u}l Ramos-Poll{\'a}n},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=7MEIYPueMd}
} | In this work we pretrain a CLIP/ViT based model using three different modalities of satellite imagery across five AOIs covering over ~10\% of Earth's total landmass, namely Sentinel 2 RGB optical imagery, Sentinel 1 SAR radar amplitude and interferometric coherence. This model uses $\sim 250$ M parameters. Then, we use the embeddings produced for each modality with a classical machine learning method to attempt different downstream tasks for earth observation related to vegetation, built up surface, croplands and permanent water. We consistently show how we reduce the need for labeled data by 99\%, so that with ~200-500 randomly selected labeled examples (around 4K-10K km$^2$) we reach performance levels analogous to those achieved with the full labeled datasets (about 150K image chips or 3M km$^2$ in each area of interest - AOI) on all modalities, AOIs and downstream tasks. This leads us to think that the model has captured significant earth features useful in a wide variety of scenarios. To enhance our model's usability in practice, its architecture allows inference in contexts with missing modalities and even missing channels within each modality. Additionally, we visually show that this embedding space, obtained with no labels, is sensitive to the different earth features represented by the labelled datasets we selected. | Fewshot learning on global multimodal embeddings for earth observation tasks | [
"Matthew Allen",
"Francisco Dorr",
"Joseph Alejandro Gallego Mejia",
"Laura Martínez-Ferrer",
"Anna Jungbluth",
"Freddie Kalaitzis",
"Raúl Ramos-Pollán"
] | Workshop/R0-FoMo | 2310.00119 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=7GHPcloiHq | @inproceedings{
khan2023selective,
title={Selective Prediction For Open-Ended Question Answering in Black-Box Vision-Language Models},
author={Zaid Khan and Yun Fu},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=7GHPcloiHq}
} | When mistakes have serious consequences, reliable use of a model requires understanding when the predictions of the model are trustworthy. One approach is selective prediction, in which a model is allowed to abstain if it is uncertain. Existing methods for selective prediction require access to model internals, retraining, or a large number of model evaluations, and cannot be used for black box models available only through an API. This is a barrier to the use of powerful commercial foundation models in risk-sensitive applications. Furthermore, existing work has largely focused on unimodal foundation models. We propose a method to improve selective prediction in a black box vision-language model by measuring consistency over the neighbors of a visual question. Although direct sampling of the neighborhood is not possible, we propose using a probing model as a proxy. We describe experiments testing the proposed method on in-distribution, out-of-distribution and adversarial questions. We find that the consistency of a vision-language model across rephrasings of a visual question can be used to identify and reject high-risk visual questions, even in out-of-distribution and adversarial settings, constituting a step towards safe use of black-box vision-language models. | Selective Prediction For Open-Ended Question Answering in Black-Box Vision-Language Models | [
"Zaid Khan",
"Yun Fu"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=7Dd8uBHo90 | @inproceedings{
zohar2023lovm,
title={{LOVM}: Language-Only Vision Model Selection},
author={Orr Zohar and Shih-Cheng Huang and Kuan-Chieh Wang and Serena Yeung},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=7Dd8uBHo90}
} | Pre-trained multi-modal vision-language models (VLMs) excel in downstream applications, especially in the few- and zero-shot settings.
However, choosing the optimal VLM for some downstream applications is challenging due to task and dataset dependencies.
Exhaustive evaluation of all VLMs is impractical and requires the collection of a labeled dataset for evaluation. As the number of open-source VLM variants increases, there is a need for an efficient model selection strategy that does not require access to a curated evaluation dataset. To address this, we introduce a novel task, LOVM: **L**anguage-**O**nly **V**ision **M**odel Selection, where methods are expected to perform both model selection and performance prediction based solely on a text description of the desired downstream application. We also present an extensive LOVM benchmark consisting of ground-truth evaluations of 23 pre-trained VLMs and 35 datasets, enabling effective ranking and performance prediction of VLMs. Our code, full paper, and dataset are available at https://github.com/orrzohar/LOVM. | LOVM: Language-Only Vision Model Selection | [
"Orr Zohar",
"Shih-Cheng Huang",
"Kuan-Chieh Wang",
"Serena Yeung"
] | Workshop/R0-FoMo | 2306.08893 | [
"https://github.com/orrzohar/lovm"
] | https://huggingface.co/papers/2306.08893 | 2 | 7 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=6adcWzmtHR | @inproceedings{
gupta2023context,
title={Context is Environment},
author={Sharut Gupta and David Lopez-Paz and Stefanie Jegelka and Kartik Ahuja},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=6adcWzmtHR}
} | Two lines of work are taking center stage in AI research. On the one hand, increasing efforts are being made to build models that generalize out-of-distribution (OOD). Unfortunately, a hard lesson so far is that no proposal convincingly outperforms a simple empirical risk minimization baseline. On the other hand, large language models (LLMs) have erupted as algorithms able to learn in-context, generalizing on-the-fly to the eclectic contextual circumstances. We argue that context is environment, and posit that in-context learning holds the key to better domain generalization. Via extensive theory and experiments, we show that paying attention to context --unlabeled examples as they arrive-- allows our proposed In-Context Risk Minimization (ICRM) algorithm to zoom-in on the test environment risk minimizer, leading to significant OOD performance improvements. From all of this, two messages are worth taking home: researchers in domain generalization should consider environment as context, and harness the adaptive power of in-context learning. Researchers in LLMs should consider context as environment, to better structure data towards generalization. | Context is Environment | [
"Sharut Gupta",
"David Lopez-Paz",
"Stefanie Jegelka",
"Kartik Ahuja"
] | Workshop/R0-FoMo | 2309.09888 | [
""
] | https://huggingface.co/papers/2309.09888 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=6FwaSOEeKD | @inproceedings{
ajith2023instructeval,
title={InstructEval: Systematic Evaluation of Instruction Selection Methods},
author={Anirudh Ajith and Mengzhou Xia and Ameet Deshpande and Karthik R Narasimhan},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=6FwaSOEeKD}
} | In-context learning (ICL) performs tasks by prompting a large language model (LLM) using an instruction and a small set of annotated examples called demonstrations. Recent work has shown that precise details of the inputs used in the ICL prompt significantly impact performance, which has incentivized instruction selection algorithms. The effect of instruction-choice however is severely underexplored, with existing analyses restricted to shallow subsets of models and tasks, limiting the generalizability of their insights. We develop InstructEval, an ICL evaluation suite to conduct a thorough assessment of these techniques. The suite includes 13 open-sourced LLMs of varying scales from four model families, and covers nine tasks across three categories. Using the suite, we evaluate the relative performance of seven popular instruction selection methods over five metrics relevant to ICL. Our experiments reveal that using curated manually-written instructions or simple instructions without any task-specific descriptions often elicits superior ICL performance overall than that of automatic instruction-induction methods, pointing to a lack of generalizability among the latter. We release our evaluation suite for benchmarking instruction selection approaches and enabling more generalizable methods in this space. | InstructEval: Systematic Evaluation of Instruction Selection Methods | [
"Anirudh Ajith",
"Mengzhou Xia",
"Ameet Deshpande",
"Karthik R Narasimhan"
] | Workshop/R0-FoMo | 2307.00259 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=5TsfEEwRsu | @inproceedings{
golovneva2023pathfinder,
title={{PATHFINDER}: Guided Search over Multi-Step Reasoning Paths},
author={Olga Golovneva and Sean O'Brien and Ramakanth Pasunuru and Tianlu Wang and Luke Zettlemoyer and Maryam Fazel-Zarandi and Asli Celikyilmaz},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=5TsfEEwRsu}
} | With recent advancements in large language models, methods like chain-of-thought prompting to elicit reasoning chains have been shown to improve results on reasoning tasks. However, tasks that require multiple steps of reasoning still pose significant challenges to state-of-the-art models. Drawing inspiration from the beam search algorithm, we propose PATHFINDER, a tree-search-based reasoning path generation approach. It enhances diverse branching and multi-hop reasoning through the integration of dynamic decoding, enabled by varying sampling methods and parameters. Using constrained reasoning, PATHFINDER integrates novel quality constraints, pruning, and exploration methods to enhance the efficiency and the quality of generation. Moreover, it includes scoring and ranking features
to improve candidate selection. Our approach outperforms competitive baselines on three complex arithmetic and commonsense reasoning tasks by 6% on average. Our model generalizes well to longer, unseen reasoning chains, reflecting similar complexities to beam search with large branching factors. | PATHFINDER: Guided Search over Multi-Step Reasoning Paths | [
"Olga Golovneva",
"Sean O'Brien",
"Ramakanth Pasunuru",
"Tianlu Wang",
"Luke Zettlemoyer",
"Maryam Fazel-Zarandi",
"Asli Celikyilmaz"
] | Workshop/R0-FoMo | 2312.05180 | [
""
] | https://huggingface.co/papers/2312.05180 | 4 | 9 | 1 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=4uiOPSvbN6 | @inproceedings{
mousavi2023enhancing,
title={Enhancing Large Language Models with Ensemble of Critics for Mitigating Toxicity and Hallucination},
author={Sajad Mousavi and Ricardo Luna Gutierrez and Desik Rengarajan and Vineet Gundecha and Ashwin Ramesh Babu and Avisek Naug and Antonio Guillen and Soumyendu Sarkar},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=4uiOPSvbN6}
} | We propose a self-correction mechanism for Large Language Models (LLMs) to mitigate issues such as toxicity and fact hallucination. This method involves refining model outputs through an ensemble of critics and the model's own feedback. Drawing inspiration from human behavior, we explore whether LLMs can emulate the self-correction process observed in humans who often engage in self-reflection and seek input from others to refine their understanding of complex topics. Our approach is model-agnostic and can be applied across various domains to enhance trustworthiness by addressing fairness, bias, and robustness concerns. We consistently observe performance improvements in LLMs for reducing toxicity and correcting factual errors. | Enhancing Large Language Models with Ensemble of Critics for Mitigating Toxicity and Hallucination | [
"Sajad Mousavi",
"Ricardo Luna Gutierrez",
"Desik Rengarajan",
"Vineet Gundecha",
"Ashwin Ramesh Babu",
"Avisek Naug",
"Antonio Guillen",
"Soumyendu Sarkar"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3MpDQ0YA7V | @inproceedings{
krasheninnikov2023meta,
title={Meta- (out-of-context) learning in neural networks},
author={Dmitrii Krasheninnikov and Egor Krasheninnikov and Bruno Mlodozeniec and David Krueger},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=3MpDQ0YA7V}
} | Brown et al. (2020) famously introduced the phenomenon of in-context learning in large language models (LLMs). We establish the existence of a phenomenon we call **meta-out-of-context learning (meta-OCL)** via carefully designed synthetic experiments with LLMs. Our results suggest that meta-OCL leads LLMs to more readily “internalize” the semantic content of text that is, *or appears to be*, broadly useful (such as true statements, or text from authoritative sources) and use it in appropriate circumstances. We further demonstrate meta-OCL in a synthetic computer vision setting, and propose two hypotheses for the emergence of meta-OCL: one relying on the way models store knowledge in their parameters, and another suggesting that the implicit gradient alignment bias of gradient-descent-based optimizers may be responsible. Finally, we reflect on what our results might imply about capabilities of future AI systems, and discuss potential risks. Our code is available at https://github.com/krasheninnikov/internalization. | Meta- (out-of-context) learning in neural networks | [
"Dmitrii Krasheninnikov",
"Egor Krasheninnikov",
"Bruno Mlodozeniec",
"David Krueger"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=2gytoWpJGf | @inproceedings{
lowe2023zeroshot,
title={Zero-shot Clustering of Embeddings with Pretrained and Self-Supervised Learnt Encoders},
author={Scott C Lowe and Joakim Bruslund Haurum and Sageev Oore and Thomas B. Moeslund and Graham W. Taylor},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=2gytoWpJGf}
} | We explore whether large pretrained models can provide a useful representation space for datasets they were not trained on, and whether these representations can be used to group novel unlabelled data into meaningful clusters. To this end, we conduct experiments using image encoders pretrained on ImageNet using either supervised or self-supervised training techniques. These encoders are deployed on image datasets that were not seen during training, and we investigate whether their embeddings can be clustered with conventional clustering algorithms. We find that it is possible to create well-defined clusters using self-supervised feature encoders, especially when using the Agglomerative Clustering method, and that it is possible to do so even for very fine-grained datasets such as NABirds. We also find indications that the Silhouette score is a good proxy of cluster quality for self-supervised feature encoders when no ground-truth is available. | Zero-shot Clustering of Embeddings with Pretrained and Self-Supervised Learnt Encoders | [
"Scott C Lowe",
"Joakim Bruslund Haurum",
"Sageev Oore",
"Thomas B. Moeslund",
"Graham W. Taylor"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=2LkTVY15SM | @inproceedings{
foster2023flexible,
title={Flexible visual prompts for in context learning in computer vision},
author={Thomas Foster and Ioana Croitoru and Robert Dorfman and Christoffer Edlund and Thomas Varsavsky and Jon Almaz{\'a}n},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=2LkTVY15SM}
} | In this work, we address in-context learning (ICL) for the task of image segmentation, introducing a novel approach that adapts a modern Video Object Segmentation (VOS) technique for visual in-context learning. This adaptation is inspired by the VOS method's ability to efficiently and flexibly learn objects from a few examples. Through evaluations across a range of support set sizes and on diverse segmentation datasets, our method consistently surpasses existing techniques. Notably, it excels with data containing classes not encountered during training. Additionally, we propose a technique for support set selection, which involves choosing the most relevant images to include in this set. By employing support set selection, the performance increases for all tested methods without the need for additional training or prompt tuning. The code can be found at https://github.com/v7labs/XMem_ICL. | Flexible visual prompts for in context learning in computer vision | [
"Thomas Foster",
"Ioana Croitoru",
"Robert Dorfman",
"Christoffer Edlund",
"Thomas Varsavsky",
"Jon Almazán"
] | Workshop/R0-FoMo | [
"https://github.com/v7labs/xmem_icl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=2J8xnFLMgF | @inproceedings{
shi2023why,
title={Why Larger Language Models Do In-context Learning Differently?},
author={Zhenmei Shi and Junyi Wei and Zhuoyan Xu and Yingyu Liang},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=2J8xnFLMgF}
} | Large language models (LLMs) have emerged as a powerful tool for many AI problems and are deeply involved in many aspects of human activity. One important emergent ability is in-context learning (ICL), where LLMs can perform well on unseen tasks based on a brief series of task examples without necessitating any adjustments to the model's parameters. Many works have tried to study ICL, and one recent, interesting, counter-intuitive observation is that different scale language models may have different ICL behaviors. Despite the tremendous success of ICL, why these different ICL behaviors arise remains a mystery. In this work, we try to answer this question. Given the limited understanding of the ICL mechanism, we study a simplified setting: a one-layer, single-head linear self-attention network pretrained on a linear regression in-context task. We characterize language model scale as the rank of the key and query matrices in attention. We show that smaller language models are more robust to noise, while larger language models are easily distracted, leading to different ICL behaviors. We also conduct ICL experiments using the LLaMA model families. The results are consistent with previous work and our analysis. | Why Larger Language Models Do In-context Learning Differently? | [
"Zhenmei Shi",
"Junyi Wei",
"Zhuoyan Xu",
"Yingyu Liang"
] | Workshop/R0-FoMo | 2405.19592 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=1fuyNbblEt | @inproceedings{
chen2023analyzing,
title={Analyzing Chat{GPT}{\textquoteright}s Behavior Shifts Over Time},
author={Lingjiao Chen and Matei Zaharia and James Zou},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=1fuyNbblEt}
} | GPT-3.5 and GPT-4 are the two most widely used large language model (LLM) services. However, when and how these models are updated over time is opaque. Here, we evaluate the March 2023 and June 2023 versions of GPT-3.5 and GPT-4 on two tasks: 1) solving math problems, and 2) generating code. We find that the performance and behavior of both GPT-3.5 and GPT-4 can vary greatly over time. For example, GPT-4 (March 2023) was reasonable at identifying prime vs. composite numbers ($84\%$ accuracy) but GPT-4 (June 2023) was poor on these same questions ($51\%$ accuracy). This is partly explained by a drop in GPT-4's amenability to follow chain-of-thought prompting. Interestingly, GPT-3.5 was much better in June than in March in this task. Both GPT-4 and GPT-3.5 had more formatting mistakes in code generation in June than in March. We provide evidence that GPT-4's ability to follow user instructions has decreased over time, which is one common factor behind the many behavior drifts. Overall, our findings show that the behavior of the ``same'' LLM service can change substantially in a relatively short amount of time, highlighting the need for continuous monitoring of LLMs. | Analyzing ChatGPT’s Behavior Shifts Over Time | [
"Lingjiao Chen",
"Matei Zaharia",
"James Zou"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=1G7n7LW3mF | @inproceedings{
kroeger2023are,
title={Are Large Language Models Post Hoc Explainers?},
author={Nicholas Kroeger and Dan Ley and Satyapriya Krishna and Chirag Agarwal and Himabindu Lakkaraju},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=1G7n7LW3mF}
} | Large Language Models (LLMs) are increasingly used as powerful tools for a plethora of natural language processing (NLP) applications. A recent innovation, in-context learning (ICL), enables LLMs to learn new tasks by supplying a few examples in the prompt during inference time, thereby eliminating the need for model fine-tuning. While LLMs have been utilized in several applications, their applicability in explaining the behavior of other models remains relatively unexplored. Despite the growing number of new explanation techniques, many require white-box access to the model and/or are computationally expensive, highlighting a need for next-generation post hoc explainers. In this work, we present the first framework to study the effectiveness of LLMs in explaining other predictive models. More specifically, we propose a novel framework encompassing multiple prompting strategies: i) Perturbation-based ICL, ii) Prediction-based ICL, iii) Instruction-based ICL, and iv) Explanation-based ICL, with varying levels of information about the underlying ML model and the local neighborhood of the test sample. We conduct extensive experiments with real-world benchmark datasets to demonstrate that LLM generated explanations perform on par with state-of-the-art post hoc explainers using their ability to leverage ICL examples and their internal knowledge in generating model explanations. On average, across four datasets and two ML models, we observe that LLMs identify the most important feature with 72.19% accuracy, indicating promising avenues for further research into LLM based explanation frameworks within explainable artificial intelligence (XAI). | Are Large Language Models Post Hoc Explainers? | [
"Nicholas Kroeger",
"Dan Ley",
"Satyapriya Krishna",
"Chirag Agarwal",
"Himabindu Lakkaraju"
] | Workshop/R0-FoMo | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=0hTtit3AAm | @inproceedings{
li2023clipav,
title={{CLIPA}-v2: Scaling {CLIP} Training with 81.1\% Zero-shot ImageNet Accuracy within a \$10,000 Budget},
author={Xianhang Li and Zeyu Wang and Cihang Xie},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=0hTtit3AAm}
} | The recent work CLIPA presents an inverse scaling law for CLIP training --- whereby the larger the image/text encoders used, the shorter the sequence length of image/text tokens that can be applied in training. This finding enables us to train high-performance CLIP models with significantly reduced computations. Building upon this work, we hereby present CLIPA-v2 with two key contributions. Technically, we find this inverse scaling law is also applicable in the finetuning stage, enabling further reduction in computational needs. Empirically, we explore CLIPA at scale, extending the experiments up to the H/14 model with approximately 13B image-text pairs seen during training.
Our results are exciting --- by only allocating a budget of $10,000, our CLIP model achieves an impressive zero-shot ImageNet accuracy of 81.1%, surpassing the prior best CLIP model (from OpenCLIP, 80.1%) by 1.0% and meanwhile reducing the computational cost by approximately $39\times$. Moreover, with an additional investment of $4,000, we can further elevate the zero-shot ImageNet accuracy to 81.8%.
By upscaling a G/14 model, we've achieved an impressive state-of-the-art zero-shot ImageNet accuracy of 83.0%, relying solely on open-source data. | CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a $10,000 Budget | [
"Xianhang Li",
"Zeyu Wang",
"Cihang Xie"
] | Workshop/R0-FoMo | [
"https://github.com/ucsc-vlaa/clipa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=0GsHDvnzHg | @inproceedings{
bendou2023inferring,
title={Inferring Latent Class Statistics from Text for Robust Visual Few-Shot Learning},
author={Yassir Bendou and Bastien Pasdeloup and Giulia Lioi and Vincent Gripon and Fabien Cardinaux and Ghouthi BOUKLI HACENE and Lukas Mauch},
booktitle={R0-FoMo:Robustness of Few-shot and Zero-shot Learning in Large Foundation Models},
year={2023},
url={https://openreview.net/forum?id=0GsHDvnzHg}
} | In the realm of few-shot learning, foundation models like CLIP have proven effective but exhibit limitations in cross-domain robustness, especially in few-shot settings. Recent works add text as an extra modality to enhance the performance of these models. Most of these approaches treat text as an auxiliary modality without fully exploring its potential to elucidate the underlying distribution of class visual features. In this paper, we present a novel approach that leverages text-derived statistics to predict the mean and covariance of the visual feature distribution for each class. This predictive framework enriches the latent space, yielding more robust and generalizable few-shot learning models. We demonstrate the efficacy of incorporating both mean and covariance statistics in improving few-shot classification performance across various datasets. Our method shows that we can use text to predict the mean and covariance of the distribution, offering promising improvements in few-shot learning scenarios. | Inferring Latent Class Statistics from Text for Robust Visual Few-Shot Learning | [
"Yassir Bendou",
"Bastien Pasdeloup",
"Giulia Lioi",
"Vincent Gripon",
"Fabien Cardinaux",
"Ghouthi BOUKLI HACENE",
"Lukas Mauch"
] | Workshop/R0-FoMo | 2311.14544 | [
"https://github.com/ybendou/fs-text2stats"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=zrw68dPsdt | @inproceedings{
hodgkinson2023a,
title={A {PAC}-Bayesian Perspective on the Interpolating Information Criterion},
author={Liam Hodgkinson and Chris van der Heide and Robert Salomone and Fred Roosta and Michael Mahoney},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=zrw68dPsdt}
} | Deep learning is renowned for its theory-practice gap, whereby principled theory typically fails to provide much beneficial guidance for implementation in practice. This has been highlighted recently by the benign overfitting phenomenon: when neural networks become sufficiently large to interpolate the dataset perfectly, model performance appears to improve with increasing model size, in apparent contradiction with the well-known bias--variance tradeoff. While such phenomena have proven challenging to theoretically study for general models, the recently proposed Interpolating Information Criterion (IIC) provides a valuable theoretical framework to examine performance for overparameterized models. Using the IIC, a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence generalization performance in the interpolating regime. From the provided bound, we quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by e.g. the combination of model, optimizer, and parameter-initialization scheme; the spectrum of the empirical neural tangent kernel; curvature of the loss landscape; and noise present in the data. | A PAC-Bayesian Perspective on the Interpolating Information Criterion | [
"Liam Hodgkinson",
"Chris van der Heide",
"Robert Salomone",
"Fred Roosta",
"Michael Mahoney"
] | Workshop/M3L | 2311.07013 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=zarvq21MVP | @inproceedings{
huang2023graph,
title={Graph Neural Networks Benefit from Structural Information Provably: A Feature Learning Perspective},
author={Wei Huang and Yuan Cao and Haonan Wang and Xin Cao and Taiji Suzuki},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=zarvq21MVP}
} | Graph neural networks (GNNs) have shown remarkable capabilities in learning from graph-structured data, outperforming traditional multilayer perceptrons (MLPs) in numerous graph applications. Despite these advantages, there has been limited theoretical exploration into why GNNs are so effective, particularly from the perspective of feature learning. This study aims to address this gap by examining the role of graph convolution in feature learning theory under a specific data generative model. We undertake a comparative analysis of the optimization and generalization between two-layer graph convolutional networks (GCNs) and their convolutional neural network (CNN) counterparts. Our findings reveal that graph convolution significantly enhances the regime of low test error over CNNs. This highlights a substantial discrepancy between GNNs and MLPs in terms of generalization capacity, a conclusion further supported by our empirical simulations on both synthetic and real-world datasets. | Graph Neural Networks Benefit from Structural Information Provably: A Feature Learning Perspective | [
"Wei Huang",
"Yuan Cao",
"Haonan Wang",
"Xin Cao",
"Taiji Suzuki"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=zaeQGiPVYY | @inproceedings{
ahn2023linear,
title={Linear attention is (maybe) all you need (to understand transformer optimization)},
author={Kwangjun Ahn and Xiang Cheng and Minhak Song and Chulhee Yun and Ali Jadbabaie and Suvrit Sra},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=zaeQGiPVYY}
} | Transformer training is notoriously difficult, requiring a careful design of optimizers and use of various heuristics. We make progress towards understanding the subtleties of training transformers by carefully studying a simple yet canonical linearized shallow transformer model. Specifically, we train linear transformers to solve regression tasks, inspired by J. von Oswald et al. (ICML 2023), and K. Ahn et al. (NeurIPS 2023). Most importantly, we observe that our proposed linearized models can reproduce several prominent aspects of transformer training dynamics. Consequently, the results obtained in this paper suggest that a simple linearized transformer model could actually be a valuable, realistic abstraction for understanding transformer optimization. | Linear attention is (maybe) all you need (to understand transformer optimization) | [
"Kwangjun Ahn",
"Xiang Cheng",
"Minhak Song",
"Chulhee Yun",
"Ali Jadbabaie",
"Suvrit Sra"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=xzJ8Xt6wy7 | @inproceedings{
phunyaphibarn2023large,
title={Large Catapults in Momentum Gradient Descent with Warmup: An Empirical Study},
author={Prin Phunyaphibarn and Junghyun Lee and Bohan Wang and Huishuai Zhang and Chulhee Yun},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=xzJ8Xt6wy7}
} | Although gradient descent with momentum is widely used in modern deep learning, a concrete understanding of its effects on the training trajectory still remains elusive. In this work, we empirically show that momentum gradient descent with a large learning rate and learning rate warmup displays large catapults, driving the iterates towards flatter minima than those found by gradient descent. We then provide empirical evidence and theoretical intuition that the large catapult is caused by momentum ``amplifying'' the self-stabilization (Damian et al., 2023). | Large Catapults in Momentum Gradient Descent with Warmup: An Empirical Study | [
"Prin Phunyaphibarn",
"Junghyun Lee",
"Bohan Wang",
"Huishuai Zhang",
"Chulhee Yun"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=xxYfmRTwyX | @inproceedings{
yang2023feature,
title={Feature Learning in Infinite-Depth Neural Networks},
author={Greg Yang and Dingli Yu and Chen Zhu and Soufiane Hayou},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=xxYfmRTwyX}
} | By classifying infinite-width neural networks and identifying the *optimal* limit, Tensor Programs IV and V demonstrated a universal way, called $\mu$P, for *widthwise hyperparameter transfer*, i.e., predicting optimal hyperparameters of wide neural networks from narrow ones. Here we investigate the analogous classification for *depthwise parametrizations* of deep residual networks (resnets). We classify depthwise parametrizations of block multiplier and learning rate by their infinite-width-then-depth limits. In resnets where each block has only one layer, we identify a unique optimal parametrization, called Depth-$\mu$P that extends $\mu$P and show empirically it admits depthwise hyperparameter transfer. We identify *feature diversity* as a crucial factor in deep networks, and Depth-$\mu$P can be characterized as maximizing both feature learning and feature diversity. Exploiting this, we find that absolute value, among all homogeneous nonlinearities, maximizes feature diversity and indeed empirically leads to significantly better performance. However, if each block is deeper (such as modern transformers), then we find fundamental limitations in all possible infinite-depth limits of such parametrizations, which we illustrate both theoretically and empirically on simple networks as well as Megatron transformer trained on Common Crawl. | Feature Learning in Infinite-Depth Neural Networks | [
"Greg Yang",
"Dingli Yu",
"Chen Zhu",
"Soufiane Hayou"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=wsgXCcqiQY | @inproceedings{
dhuliawala2023variational,
title={Variational Classification},
author={Shehzaad Dhuliawala and Mrinmaya Sachan and Carl Allen},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=wsgXCcqiQY}
} | We present *variational classification* (VC), a latent variable generalisation of neural network softmax classification under cross-entropy loss. Our approach provides a novel probabilistic interpretation of the highly familiar softmax classification model, to which it relates much as variational autoencoders relate to deterministic autoencoders. We derive a training objective based on the evidence lower bound (ELBO) that is non-trivial to optimize, and an adversarial approach to maximise it. We reveal an inherent inconsistency within softmax classification that VC addresses, while also allowing flexible choices of distributions in the latent space in place of assumptions implicit in standard softmax classifiers. Empirical evaluation demonstrates that VC maintains accuracy while improving properties such as calibration and adversarial robustness, particularly under distribution shift and low-data settings. This work brings new theoretical insight to modern machine learning practice. | Variational Classification | [
"Shehzaad Dhuliawala",
"Mrinmaya Sachan",
"Carl Allen"
] | Workshop/M3L | 2305.10406 | [
"https://github.com/shehzaadzd/variational-classification"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=vXkC6AOupO | @inproceedings{
dherin2023implicit,
title={Implicit biases in multitask and continual learning from a backward error analysis perspective},
author={Benoit Dherin},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=vXkC6AOupO}
} | Using backward error analysis, we compute implicit training biases in multitask and continual learning settings for neural networks trained with stochastic gradient descent. In particular, we derive modified losses that are implicitly minimized during training. They have three terms: the original loss, accounting for convergence, an implicit flatness regularization term proportional to the learning rate, and a last term, the conflict term, which can theoretically be detrimental to both convergence and implicit regularization.
In multitask learning, the conflict term is a well-known quantity, measuring the gradient alignment between the tasks, while in continual learning the conflict term is a new quantity in deep learning optimization, although a basic tool in differential geometry: the Lie bracket between the task gradients. | Implicit biases in multitask and continual learning from a backward error analysis perspective | [
"Benoit Dherin"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=tMCsGRtzK2 | @inproceedings{
ebrahimpour-boroojeny2023spectrum,
title={Spectrum Extraction and Clipping for Implicitly Linear Layers},
author={Ali Ebrahimpour-Boroojeny and Matus Telgarsky and Hari Sundaram},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=tMCsGRtzK2}
} | We show the effectiveness of automatic differentiation in efficiently and correctly computing and controlling the spectrum of implicitly linear operators, a rich family of layer types including all standard convolutional and dense layers. We provide the first clipping method that is correct for general convolution layers, and illuminate the representational limitation that caused correctness issues in prior work. By comparing the accuracy and performance of our methods to existing methods in various experiments, we show they lead to better generalization and adversarial robustness of the models. In addition to these advantages over the state-of-the-art methods, we show they are much faster than the alternatives. | Spectrum Extraction and Clipping for Implicitly Linear Layers | [
"Ali Ebrahimpour-Boroojeny",
"Matus Telgarsky",
"Hari Sundaram"
] | Workshop/M3L | 2402.16017 | [
"https://github.com/ali-e/fastclip"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=qxy72wUf90 | @inproceedings{
wang2023the,
title={The Noise Geometry of Stochastic Gradient Descent: A Quantitative and Analytical Characterization},
author={Mingze Wang and Lei Wu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=qxy72wUf90}
} | Empirical studies have demonstrated that the noise in stochastic gradient descent (SGD) aligns favorably with the local geometry of the loss landscape. However, theoretical and quantitative explanations for this phenomenon remain sparse. In this paper, we offer a comprehensive theoretical investigation into the aforementioned {\em noise geometry} for over-parameterized linear models (OLMs) and two-layer neural networks. We scrutinize both average and directional alignments, paying special attention to how factors like sample size and input data degeneracy affect the alignment strength. As a specific application, we leverage our noise geometry characterizations to study how SGD escapes from sharp minima, revealing that the escape direction has significant components along flat directions. This is in stark contrast to GD, which escapes only along the sharpest directions. To substantiate our theoretical findings, both synthetic and real-world experiments are provided. | The Noise Geometry of Stochastic Gradient Descent: A Quantitative and Analytical Characterization | [
"Mingze Wang",
"Lei Wu"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ovTv99C921 | @inproceedings{
alvarado2023curvaturedimension,
title={Curvature-Dimension Tradeoff for Generalization in Hyperbolic Space},
author={Nico Alvarado and Hans Lobel and Mircea Petrache},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=ovTv99C921}
} | The inclusion of task-relevant geometric embeddings in deep learning models is an important emerging direction of research, particularly when using hierarchical data. For instance, negatively curved geometries such as hyperbolic spaces are known to allow low-distortion embedding of tree-like hierarchical structures, which Euclidean spaces do not afford. Learning techniques for hyperbolic spaces, such as Hyperbolic Neural Networks (HNNs), have shown empirical accuracy improvement over classical Deep Neural Networks in tasks involving semantic or multi-scale information, such as recommender systems or molecular generation. However, no research has investigated generalization properties specific to such geometries. In this work, we introduce generalization bounds for learning tasks in hyperbolic spaces, marking the first time such bounds have been proposed. We highlight a previously unnoticed and important difference with Euclidean embedding models, namely, under embeddings into spaces of negative curvature $-\kappa<0$ and dimension $d$, only the product $\sqrt{\kappa}\ d$ influences generalization bounds. Hence, the curvature parameter of the space can be varied at fixed $d$ with the same effect on generalization as when varying $d$. | Curvature-Dimension Tradeoff for Generalization in Hyperbolic Space | [
"Nico Alvarado",
"Hans Lobel",
"Mircea Petrache"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ocN0nmbAVo | @inproceedings{
qiu2023complexity,
title={Complexity Matters: Dynamics of Feature Learning in the Presence of Spurious Correlations},
author={GuanWen Qiu and Da Kuang and Surbhi Goel},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=ocN0nmbAVo}
} | Existing research often posits spurious features as "easier" to learn than core features in neural network optimization, but the nuanced impact of their relative simplicity remains under-explored. In this paper, we propose a theoretical framework and associated synthetic dataset grounded in boolean function analysis. Our framework allows for fine-grained control on both the relative complexity (compared to core features) and correlation strength (with respect to the label) of spurious features. Experimentally, we observe that the presence of _stronger_ spurious correlations or _simpler_ spurious features leads to a slower rate of learning for the core features in networks when trained with (stochastic) gradient descent. Perhaps surprisingly, we also observe that spurious features are not forgotten even when the network has _perfectly_ learned the core features. We give theoretical justifications for these observations for the special case of learning with parity features on a one-layer hidden network. Our findings justify the success of retraining the last layer for accelerating core feature convergence and identify limitations of debiasing algorithms that exploit early learning of spurious features. We corroborate our findings through experiments on real-world vision datasets, thereby validating the practical relevance of our framework. | Complexity Matters: Dynamics of Feature Learning in the Presence of Spurious Correlations | [
"GuanWen Qiu",
"Da Kuang",
"Surbhi Goel"
] | Workshop/M3L | 2403.03375 | [
"https://github.com/NayutaQiu/Boolean_Spurious"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=oB6tknFuXF | @inproceedings{
sabanayagam2023unveiling,
title={Unveiling the Hessian's Connection to the Decision Boundary},
author={Mahalakshmi Sabanayagam and Freya Behrens and Urte Adomaityte and Anna Dawid},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=oB6tknFuXF}
} | Understanding the properties of well-generalizing minima is at the heart of deep learning research. On the one hand, the generalization of neural networks has been connected to the decision boundary complexity, which is hard to study in the high-dimensional input space. Conversely, the flatness of a minimum has become a controversial proxy for generalization. In this work, we provide the missing link between the two approaches and show that the Hessian top eigenvectors characterize the decision boundary learned by the neural network. Notably, the number of outliers in the Hessian spectrum is proportional to the complexity of the decision boundary. Based on this finding, we provide a new and straightforward approach to studying the complexity of a high-dimensional decision boundary. | Unveiling the Hessian's Connection to the Decision Boundary | [
"Mahalakshmi Sabanayagam",
"Freya Behrens",
"Urte Adomaityte",
"Anna Dawid"
] | Workshop/M3L | 2306.07104 | [
"https://github.com/shmoo137/hessian-and-decision-boundary"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=mCjgbk31w1 | @inproceedings{
zhang2023nonparametric,
title={Nonparametric Classification on Low Dimensional Manifolds using Overparameterized Convolutional Residual Networks},
author={Zixuan Zhang and Kaiqi Zhang and Minshuo Chen and Yuma Takeda and Mengdi Wang and Tuo Zhao and Yu-Xiang Wang},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=mCjgbk31w1}
} | Convolutional residual neural networks (ConvResNets), though overparameterized, can achieve remarkable prediction performance in practice, which cannot be well explained by conventional wisdom. To bridge this gap, we study the performance of ConvResNeXts, which cover ConvResNets as a special case, trained with weight decay from the perspective of nonparametric classification. Our analysis allows for infinitely many building blocks in ConvResNeXts, and shows that weight decay implicitly enforces sparsity on these blocks. Specifically, we consider a smooth target function supported on a low-dimensional manifold, then prove that ConvResNeXts can adapt to the function smoothness and low-dimensional structures and efficiently learn the function without suffering from the curse of dimensionality. Our findings partially justify the advantage of overparameterized ConvResNeXts over conventional machine learning models. | Nonparametric Classification on Low Dimensional Manifolds using Overparameterized Convolutional Residual Networks | [
"Zixuan Zhang",
"Kaiqi Zhang",
"Minshuo Chen",
"Yuma Takeda",
"Mengdi Wang",
"Tuo Zhao",
"Yu-Xiang Wang"
] | Workshop/M3L | 2307.01649 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=lIdMba8zHg | @inproceedings{
lobacheva2023large,
title={Large Learning Rates Improve Generalization: But How Large Are We Talking About?},
author={Ekaterina Lobacheva and Eduard Pokonechny and Maxim Kodryan and Dmitry Vetrov},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=lIdMba8zHg}
} | Inspired by recent research that recommends starting neural networks training with large learning rates (LRs) to achieve the best generalization, we explore this hypothesis in detail. Our study clarifies the initial LR ranges that provide optimal results for subsequent training with a small LR or weight averaging. We find that these ranges are in fact significantly narrower than generally assumed. We conduct our main experiments in a simplified setup that allows precise control of the learning rate hyperparameter and validate our key findings in a more practical setting. | Large Learning Rates Improve Generalization: But How Large Are We Talking About? | [
"Ekaterina Lobacheva",
"Eduard Pokonechny",
"Maxim Kodryan",
"Dmitry Vetrov"
] | Workshop/M3L | 2311.11303 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=kuAQRCHQNX | @inproceedings{
kosson2023understanding,
title={Understanding the Role of Noisy Statistics in the Regularization Effect of Batch Normalization},
author={Atli Kosson and Dongyang Fan and Martin Jaggi},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=kuAQRCHQNX}
} | Normalization layers have been shown to benefit the training stability and generalization of deep neural networks in various ways. For Batch Normalization (BN), the noisy statistics have been observed to have a regularization effect that depends on the batch size. Following this observation, Hoffer et al. proposed Ghost Batch Normalization (GBN), where BN is explicitly performed independently on smaller sub-batches, resulting in improved generalization in many settings. In this study, we analyze and isolate the effect of the noisy statistics by comparing BN and GBN, introducing a noise injection method. We then quantitatively assess the effects of the noise, juxtaposing it with other regularizers like dropout and examining its potential role in the generalization disparities between batch normalization and its alternatives, including layer normalization and normalization-free methods. | Understanding the Role of Noisy Statistics in the Regularization Effect of Batch Normalization | [
"Atli Kosson",
"Dongyang Fan",
"Martin Jaggi"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=kXf5CfXBbU | @inproceedings{
chen2023generalization,
title={Generalization Guarantees of Deep ResNets in the Mean-Field Regime},
author={Yihang Chen and Fanghui Liu and Yiping Lu and Grigorios Chrysos and Volkan Cevher},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=kXf5CfXBbU}
} | Despite the widespread empirical success of ResNet, the generalization ability of deep ResNet is rarely explored beyond the lazy-training regime. In this work, we investigate ResNet in the limit of infinitely deep and wide neural networks, of which the gradient flow is described by a partial differential equation in the large-neural network limit, i.e., the \emph{mean-field} regime.
To derive the generalization bounds under this setting, our analysis necessitates a shift from the conventional time-invariant Gram matrix employed in the lazy training regime to a time-variant, distribution-dependent version tailored to the mean-field regime.
To this end, we provide a lower bound on the minimum eigenvalue of the Gram matrix under the mean-field regime.
Besides, the traceability of the dynamics of the Kullback-Leibler (KL) divergence is also required under the mean-field regime.
We therefore establish the linear convergence of the empirical error and estimate the upper bound of the KL divergence over parameters distribution.
The above two results are employed to build the uniform convergence for generalization bound via Rademacher complexity.
Our results offer new insights into the generalization ability of deep ResNet beyond the lazy training regime and contribute to advancing the understanding of the fundamental properties of deep neural networks. | Generalization Guarantees of Deep ResNets in the Mean-Field Regime | [
"Yihang Chen",
"Fanghui Liu",
"Yiping Lu",
"Grigorios Chrysos",
"Volkan Cevher"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=jRqhooP4f9 | @inproceedings{
kumano2023theoretical,
title={Theoretical Explanation for Generalization from Adversarial Perturbations},
author={Soichiro Kumano and Hiroshi Kera and Toshihiko Yamasaki},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=jRqhooP4f9}
} | It is not fully understood why adversarial examples can deceive neural networks and transfer between different networks. To elucidate this, several studies hypothesized that adversarial perturbations contain data features that are imperceptible to humans but still recognizable by neural networks. Empirical evidence has shown that neural networks trained on mislabeled samples with these perturbations can generalize to natural test data. However, a theoretical understanding of this counterintuitive phenomenon is limited. In this study, assuming orthogonal training samples, we first prove that one-hidden-layer neural networks can learn natural data structures from adversarial perturbations. Our results indicate that, under mild conditions, the decision boundary from learning perturbations aligns with that from natural data, except for specific points in the input space. | Theoretical Explanation for Generalization from Adversarial Perturbations | [
"Soichiro Kumano",
"Hiroshi Kera",
"Toshihiko Yamasaki"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=iKiEzVD0DC | @inproceedings{
huang2023incontext,
title={In-Context Convergence of Transformers},
author={Yu Huang and Yuan Cheng and Yingbin Liang},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=iKiEzVD0DC}
} | Transformers have recently revolutionized many domains in modern machine learning and one salient discovery is their remarkable in-context learning capability, where models can solve an unseen task by utilizing task-specific prompts without further parameters fine-tuning. This also inspired recent theoretical studies aiming to understand the in-context learning mechanism of transformers, which however focused only on $\textbf{linear}$ transformers. In this work, we take the first step toward studying the learning dynamics of a one-layer transformer with $\textbf{softmax}$ attention trained via gradient descent in order to in-context learn linear function classes. We consider a structured data model, where each token is randomly sampled from a set of feature vectors in either balanced or imbalanced fashion. For data with balanced features, we establish the finite-time convergence guarantee with near-zero prediction error by navigating our analysis over two phases of the training dynamics of the attention map. More notably, for data with imbalanced features, we show that the learning dynamics take a stage-wise convergence process, where the transformer first converges to a near-zero prediction error for the query tokens of dominant features, and then converges later to a near-zero prediction error for the query tokens of under-represented features, respectively via one and four training phases. Our proof features new techniques for analyzing the competing strengths of two types of attention weights, the change of which determines different phases. | In-Context Convergence of Transformers | [
"Yu Huang",
"Yuan Cheng",
"Yingbin Liang"
] | Workshop/M3L | 2310.05249 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=iBDcaBLhz2 | @inproceedings{
dandi2023how,
title={How Two-Layer Neural Networks Learn, One (Giant) Step at a Time},
author={Yatin Dandi and Florent Krzakala and Bruno Loureiro and Luca Pesce and Ludovic Stephan},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=iBDcaBLhz2}
} | We investigate theoretically how the features of a $2$-layer neural network adapt to the structure of the target function through a few large batch gradient descent steps, leading to improvement in the approximation capacity with respect to the initialization.
We compare the influence of batch size and that of multiple (but finitely many) steps. For a single gradient step, a batch of size $n =\mathcal{O}(d)$ is both necessary and sufficient to align with the target function, although only a single direction can be learned. In contrast, $n=\mathcal{O}(d^2)$ is essential for neurons to specialize to multiple relevant directions of the target with a single gradient step. Even in this case, we show there might exist ``hard'' directions requiring $n=\mathcal{O}(d^\ell)$ samples to be learned, where $\ell$ is known as the leap index of the target. The picture drastically improves over multiple gradient steps: we show that a batch-size of $n =\mathcal{O}(d)$ is indeed enough to learn multiple target directions satisfying a staircase property, where more and more directions can be learned over time. Finally, we discuss how these directions allow to drastically improve the approximation capacity and generalization error over the initialization, illustrating a separation of scale between the random features/lazy regime, and the feature learning regime. Our technical analysis leverages a combination of techniques related to concentration, projection-based conditioning, and Gaussian equivalence which we believe are of independent interest. By pinning down the conditions necessary for specialization and learning, our results highlight the interaction between batch size and number of iterations, and lead to a hierarchical depiction where learning performance exhibits a stairway to accuracy over time and batch size, shedding new light on how neural nets adapt to features of the data. | How Two-Layer Neural Networks Learn, One (Giant) Step at a Time | [
"Yatin Dandi",
"Florent Krzakala",
"Bruno Loureiro",
"Luca Pesce",
"Ludovic Stephan"
] | Workshop/M3L | 2305.18270 | [
"https://github.com/lucpoisson/giantstep"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=hWDqKtIwSo | @inproceedings{
wang2023two,
title={Two Facets of {SDE} Under an Information-Theoretic Lens: Generalization of {SGD} via Training Trajectories and via Terminal States},
author={Ziqiao Wang and Yongyi Mao},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=hWDqKtIwSo}
} | Stochastic differential equations (SDEs) have been shown recently to well characterize the dynamics of training machine learning models with SGD. This provides two opportunities for better understanding the generalization behaviour of SGD through its SDE approximation. Firstly, viewing SGD as full-batch gradient descent with Gaussian gradient noise allows us to obtain trajectories-based generalization bound using the information-theoretic bound. Secondly, assuming mild conditions, we estimate the steady-state weight distribution of SDE and use the information-theoretic bound to establish terminal-state-based generalization bounds. | Two Facets of SDE Under an Information-Theoretic Lens: Generalization of SGD via Training Trajectories and via Terminal States | [
"Ziqiao Wang",
"Yongyi Mao"
] | Workshop/M3L | 2211.10691 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=hVcGX9oOeE | @inproceedings{
gong2023unraveling,
title={Unraveling the Complexities of Simplicity Bias: Mitigating and Amplifying Factors},
author={Xuchen Gong and Tianwen Fu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=hVcGX9oOeE}
} | The success of neural networks depends on their generalization ability, yet Shah et al. conclude that the inherent bias towards simplistic features, a phenomenon called *Simplicity Bias*, hurts generalization by preferring simple but noisy features to complex yet predictive ones. We aim to understand the scenarios in which simplicity bias occurs more severely and the factors that help mitigate its effects. We show that many traditional insights such as increasing training size and increasing informative feature dimensions are not as effective as balancing the modes of our data distribution, distorting the simplistic features, or even searching for a good initialization. Our empirical results reveal intriguing factors of simplicity bias, and we call for future investigations toward a more thorough understanding of simplicity bias and its interplay with related fields. | Unraveling the Complexities of Simplicity Bias: Mitigating and Amplifying Factors | [
"Xuchen Gong",
"Tianwen Fu"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=gLwzzmh79K | @inproceedings{
tarzanagh2023transformers,
title={Transformers as Support Vector Machines},
author={Davoud Ataee Tarzanagh and Yingcong Li and Christos Thrampoulidis and Samet Oymak},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=gLwzzmh79K}
} | The transformer architecture has led to revolutionary advancements in NLP. The attention layer within the transformer admits a sequence of input tokens $X$ and makes them interact through pairwise similarities computed as $\texttt{softmax}(XQK^\top X^\top)$, where $(K,Q)$ are the trainable key-query parameters. In this work, we establish a formal equivalence between the optimization geometry of self-attention and a hard-margin SVM problem that separates optimal input tokens from non-optimal tokens using linear constraints on the outer-products of token pairs. This formalism allows us to characterize the implicit bias of 1-layer transformers optimized with gradient descent: (1) Optimizing the attention layer, parameterized by $(K,Q)$, with vanishing regularization, converges in direction to an SVM solution minimizing the nuclear norm of the combined parameter $W:=KQ^\top$. Instead, directly parameterizing by $W$ minimizes a Frobenius norm SVM objective. (2) Complementing this, for $W$-parameterization, we prove the local/global directional convergence of gradient descent under suitable geometric conditions, and propose a more general SVM equivalence that predicts the implicit bias of attention with nonlinear heads/MLPs. | Transformers as Support Vector Machines | [
"Davoud Ataee Tarzanagh",
"Yingcong Li",
"Christos Thrampoulidis",
"Samet Oymak"
] | Workshop/M3L | 2308.16898 | [
"https://github.com/umich-sota/tf-as-svm"
] | https://huggingface.co/papers/2308.16898 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=eHZhjP5QdR | @inproceedings{
kim2023symmetric,
title={Symmetric Mean-field Langevin Dynamics for Distributional Minimax Problems},
author={Juno Kim and Kakei Yamamoto and Kazusato Oko and Zhuoran Yang and Taiji Suzuki},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=eHZhjP5QdR}
} | In this paper, we extend mean-field Langevin dynamics to minimax optimization over probability distributions for the first time with symmetric and provably convergent updates. We propose \emph{mean-field Langevin averaged gradient} (MFL-AG), a single-loop algorithm that implements gradient descent ascent in the distribution spaces with a novel weighted averaging, and establish average-iterate convergence to the mixed Nash equilibrium. We also study both time and particle discretization regimes and prove a new uniform-in-time propagation of chaos result which accounts for the dependency of the particle interactions on all previous distributions. Furthermore, we propose \emph{mean-field Langevin anchored best response} (MFL-ABR), a symmetric double-loop algorithm based on best response dynamics with linear last-iterate convergence. Finally, we study applications to zero-sum Markov games and conduct simulations demonstrating long-term optimality. | Symmetric Mean-field Langevin Dynamics for Distributional Minimax Problems | [
"Juno Kim",
"Kakei Yamamoto",
"Kazusato Oko",
"Zhuoran Yang",
"Taiji Suzuki"
] | Workshop/M3L | 2312.01127 | [
""
] | https://huggingface.co/papers/2312.01127 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=dq5QGXGxoJ | @inproceedings{
izzo2023a,
title={A Theoretical Study of Dataset Distillation},
author={Zachary Izzo and James Zou},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=dq5QGXGxoJ}
} | Modern machine learning models are often trained using massive amounts of data. Such large datasets come at a high cost in terms of both storage and computation, especially when the data will need to be used repeatedly (e.g., for neural architecture search or continual learning). _Dataset distillation_ (DD) describes the process of constructing a smaller ``distilled'' dataset (usually consisting of synthetic examples), such that models trained on the distilled dataset will be similar to models trained on the original dataset. In this paper, we study DD from a theoretical perspective. We show that for generalized linear models, it is possible to construct a distilled dataset with only a _single point_ which will exactly recover the model trained on the original dataset, regardless of the original number of points. We provide a specialized distillation for linear regression with size independent of the original number of points, but which perfectly reconstructs the model obtained from the original dataset with _any_ data-independent regularizer, or by combining the original dataset with any additional data. We also provide impossibility results showing that similar constructions are impossible for logistic regression, and that DD cannot be accomplished in general for kernel regression, even if the goal is only to recover a single model. | A Theoretical Study of Dataset Distillation | [
"Zachary Izzo",
"James Zou"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=dE5MEi9906 | @inproceedings{
fu2023transformers,
title={Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models},
author={Deqing Fu and Tianqi Chen and Robin Jia and Vatsal Sharan},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=dE5MEi9906}
} | Transformers are remarkably good at *in-context learning* (ICL)---learning from demonstrations without parameter updates---but how they perform ICL remains a mystery. Recent work suggests that Transformers may learn in-context by internally running Gradient Descent, a first-order optimization method. In this paper, we instead demonstrate that Transformers learn to implement higher-order optimization methods to perform ICL. Focusing on in-context linear regression, we show that Transformers learn to implement an algorithm very similar to *Iterative Newton's Method*, a higher-order optimization method, rather than Gradient Descent. Empirically, we show that predictions from successive Transformer layers closely match different iterations of Newton's Method *linearly*, with each middle layer roughly computing 3 iterations. In contrast, *exponentially* more Gradient Descent steps are needed to match an additional Transformers layer;
this suggests that Transformers have a rate of convergence comparable to that of higher-order methods such as Iterative Newton, which are exponentially faster than Gradient Descent. We also show that Transformers can learn in-context on ill-conditioned data, a setting where Gradient Descent struggles but Iterative Newton succeeds. Finally, we show theoretical results which support our empirical findings and have a close correspondence with them: we prove that Transformers can implement $k$ iterations of Newton's method with $\mathcal{O}(k)$ layers. | Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models | [
"Deqing Fu",
"Tianqi Chen",
"Robin Jia",
"Vatsal Sharan"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=c71B6zW70d | @inproceedings{
schweighofer2023introducing,
title={Introducing an Improved Information-Theoretic Measure of Predictive Uncertainty},
author={Kajetan Schweighofer and Lukas Aichberger and Mykyta Ielanskyi and Sepp Hochreiter},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=c71B6zW70d}
} | Applying a machine learning model for decision-making in the real world requires to distinguish what the model knows from what it does not. A critical factor in assessing the knowledge of a model is to quantify its predictive uncertainty. Predictive uncertainty is commonly measured by the entropy of the Bayesian model average (BMA) predictive distribution. Yet, the properness of this current measure of predictive uncertainty was recently questioned. We provide new insights regarding those limitations. Our analyses show that the current measure erroneously assumes that the BMA predictive distribution is equivalent to the predictive distribution of the true model that generated the dataset. Consequently, we introduce a theoretically grounded measure to overcome these limitations. We experimentally verify the benefits of our introduced measure of predictive uncertainty. We find that our introduced measure behaves more reasonably in controlled synthetic tasks. Moreover, our evaluations on ImageNet demonstrate that our introduced measure is advantageous in real-world applications utilizing predictive uncertainty. | Introducing an Improved Information-Theoretic Measure of Predictive Uncertainty | [
"Kajetan Schweighofer",
"Lukas Aichberger",
"Mykyta Ielanskyi",
"Sepp Hochreiter"
] | Workshop/M3L | 2311.08309 | [
""
] | https://huggingface.co/papers/2311.08309 | 1 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=aBeZ3jid9i | @inproceedings{
wibisono2023on,
title={On the Role of Unstructured Training Data in Transformers' In-Context Learning Capabilities},
author={Kevin Christian Wibisono and Yixin Wang},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=aBeZ3jid9i}
} | Transformers have exhibited impressive in-context learning (ICL) capabilities: they can generate predictions for new query inputs based on sequences of inputs and outputs (i.e., prompts) without parameter updates. Efforts to provide theoretical explanations for the emergence of these abilities have primarily focused on the structured data setting, where input-output pairings in the training data are known. This scenario can enable simplified transformers (e.g., ones comprising a single attention layer without the softmax activation) to achieve notable ICL performance. However, transformers are primarily trained on unstructured data that rarely include such input-output pairings. To better understand how ICL emerges, we propose to study transformers that are trained on unstructured data, namely data that lack prior knowledge of input-output pairings. This new setting elucidates the pivotal role of softmax attention in the robust ICL abilities of transformers, particularly those with a single attention layer. We posit that the significance of the softmax activation partially stems from the equivalence of softmax-based attention models with mixtures of experts, facilitating the implicit inference of input-output pairings in the test prompts. Additionally, a probing analysis reveals where these pairings are learned within the model. While subsequent layers predictably encode more information about these pairings, we find that even the first attention layer contains a significant amount of pairing information. | On the Role of Unstructured Training Data in Transformers' In-Context Learning Capabilities | [
"Kevin Christian Wibisono",
"Yixin Wang"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=a1JCT4NPyP | @inproceedings{
huben2023attentiononly,
title={Attention-Only Transformers and Implementing {MLP}s with Attention Heads},
author={Robert Huben and Valerie Morris},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=a1JCT4NPyP}
} | The transformer architecture is widely used in machine learning models and consists of two alternating sublayers: attention heads and MLPs. We prove that an MLP neuron can be implemented by a masked attention head with internal dimension 1 so long as the MLP's activation function comes from a restricted class including SiLU and close approximations of ReLU and GeLU. This allows one to convert an MLP-and-attention transformer into an attention-only transformer at the cost of greatly increasing the number of attention heads. | Attention-Only Transformers and Implementing MLPs with Attention Heads | [
"Robert Huben",
"Valerie Morris"
] | Workshop/M3L | 2309.08593 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ZBqB3XiN6M | @inproceedings{
bombari2023privacy,
title={Privacy at Interpolation: Precise Analysis for Random and {NTK} Features},
author={Simone Bombari and Marco Mondelli},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=ZBqB3XiN6M}
} | Deep learning models are able to memorize the training set. This makes them vulnerable to recovery attacks, raising privacy concerns for users, and many widespread algorithms such as empirical risk minimization (ERM) do not directly enforce safety guarantees. In this paper, we study the safety of ERM models when the training samples are interpolated (i.e., *at interpolation*) against a family of powerful black-box information retrieval attacks. Our analysis quantifies this safety via two separate terms: *(i)* the model *stability* with respect to individual training samples, and *(ii)* the *feature alignment* between the attacker's query and the original data. While the first term is well established in learning theory and is connected to the generalization error in classical work, the second one is, to the best of our knowledge, novel.
Our key technical result characterizes precisely the feature alignment for the two prototypical settings of random features (RF) and neural tangent kernel (NTK) regression.
This proves that privacy strengthens with an increase in generalization capability, unveiling the role of the model and of its activation function.
Numerical experiments show an agreement with our theory not only for RF/NTK models, but also for deep neural networks trained on standard datasets (MNIST, CIFAR-10). | Privacy at Interpolation: Precise Analysis for Random and NTK Features | [
"Simone Bombari",
"Marco Mondelli"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Z7UaGFmg8O | @inproceedings{
kausik2023denoising,
title={Denoising Low-Rank Data Under Distribution Shift: Double Descent and Data Augmentation},
author={Chinmaya Kausik and Kashvi Srivastava and Rishi Sonthalia},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=Z7UaGFmg8O}
} | Despite the importance of denoising in modern machine learning and ample empirical work on supervised denoising, its theoretical understanding is still relatively scarce. One concern about studying supervised denoising is that one might not always have noiseless training data from the test distribution. It is more reasonable to have access to noiseless training data from a different dataset than the test dataset. Motivated by this, we study supervised denoising and noisy-input regression under distribution shift. We add three considerations to increase the applicability of our theoretical insights to real-life data and modern machine learning. First, while most past theoretical work assumes that the data covariance matrix is full-rank and well-conditioned, empirical studies have shown that real-life data is approximately low-rank. Thus, we assume that our data matrices are low-rank. Second, we drop independence assumptions on our data. Third, the rise in computational power and dimensionality of data have made it important to study non-classical regimes of learning. Thus, we work in the non-classical proportional regime, where data dimension $d$ and number of samples $N$ grow as $d/N = c + o(1)$.
For this setting, we derive general test error expressions for both denoising and noisy-input regression, and study when overfitting the noise is benign, tempered or catastrophic. We show that the test error exhibits double descent under general distribution shift, providing insights for data augmentation and the role of noise as an implicit regularizer. We also perform experiments using real-life data, where we match the theoretical predictions with under 1\% MSE error for low-rank data. | Denoising Low-Rank Data Under Distribution Shift: Double Descent and Data Augmentation | [
"Chinmaya Kausik",
"Kashvi Srivastava",
"Rishi Sonthalia"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=XMHpZIIOXk | @inproceedings{
moniri2023a,
title={A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks},
author={Behrad Moniri and Donghwan Lee and Hamed Hassani and Edgar Dobriban},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=XMHpZIIOXk}
} | Feature learning is thought to be one of the fundamental reasons for the success of deep neural networks.
It is rigorously known that in two-layer fully-connected neural networks under certain conditions, one step of gradient descent on the first layer followed by ridge regression on the second layer can lead to feature learning, characterized by the appearance of a separated rank-one component---a spike---in the spectrum of the feature matrix.
However, with a constant gradient descent step size, this spike only carries information from the linear component of the target function and therefore learning non-linear components is impossible.
We show that with a learning rate that grows with the sample size, such training in fact introduces multiple rank-one components, each corresponding to a specific polynomial feature.
We further prove that the limiting large-dimensional and large sample training and test errors of the updated neural networks are fully characterized by these spikes.
By precisely analyzing the improvement in the loss, we demonstrate that these non-linear features can enhance learning. | A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks | [
"Behrad Moniri",
"Donghwan Lee",
"Hamed Hassani",
"Edgar Dobriban"
] | Workshop/M3L | 2310.07891 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=WvZV3JvmeR | @inproceedings{
xu2023benign,
title={Benign Overfitting and Grokking in Re{LU} Networks for {XOR} Cluster Data},
author={Zhiwei Xu and Yutong Wang and Spencer Frei and Gal Vardi and Wei Hu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=WvZV3JvmeR}
} | Neural networks trained by gradient descent (GD) have exhibited a number of surprising generalization behaviors. First, they can achieve a perfect fit to noisy training data and still generalize near-optimally, showing that overfitting can sometimes be benign. Second, they can undergo a period of classical, harmful overfitting---achieving a perfect fit to training data with near-random performance on test data---before transitioning (''grokking'') to near-optimal generalization later in training. In this work, we show that both of these phenomena provably occur in two-layer ReLU networks trained by GD on XOR cluster data where a constant fraction of the training labels are flipped. In this setting, we show that after the first step of GD, the network achieves 100\% training accuracy, perfectly fitting the noisy labels in the training data, but achieves near-random test accuracy. At a later training step, the network achieves near-optimal test accuracy while still fitting the random labels in the training data, exhibiting a ''grokking'' phenomenon. This provides the first theoretical result of benign overfitting in neural network classification when the data distribution is not linearly separable. Our proofs rely on analyzing the feature learning process under GD, which reveals that the network implements a non-generalizable linear classifier after one step and gradually learns generalizable features in later steps. | Benign Overfitting and Grokking in ReLU Networks for XOR Cluster Data | [
"Zhiwei Xu",
"Yutong Wang",
"Spencer Frei",
"Gal Vardi",
"Wei Hu"
] | Workshop/M3L | 2310.02541 | [
""
] | https://huggingface.co/papers/2310.02541 | 2 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=WooXHaAvKQ | @inproceedings{
zhou2023how,
title={How does Gradient Descent Learn Features --- A Local Analysis for Regularized Two-Layer Neural Networks},
author={Mo Zhou and Rong Ge},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=WooXHaAvKQ}
} | The ability to learn useful features is one of the major advantages of neural networks. Although recent works show that neural networks can operate in a neural tangent kernel (NTK) regime that does not allow feature learning, many works also demonstrate the potential for neural networks to go beyond the NTK regime and perform feature learning. Recently, a line of work highlighted the feature learning capabilities of the early stages of gradient-based training. In this paper, we consider another mechanism for feature learning via gradient descent through a local convergence analysis. We show that once the loss is below a certain threshold, gradient descent with a carefully regularized objective will capture ground-truth directions. Our results demonstrate that feature learning not only happens at the initial gradient steps, but can also occur towards the end of training. | How does Gradient Descent Learn Features — A Local Analysis for Regularized Two-Layer Neural Networks | [
"Mo Zhou",
"Rong Ge"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=WGWM0MzWAg | @inproceedings{
chen2023understanding,
title={Understanding Transferable Representation Learning and Zero-shot Transfer in {CLIP}},
author={Zixiang Chen and Yihe Deng and Yuanzhi Li and Quanquan Gu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=WGWM0MzWAg}
} | Multi-modal learning has become increasingly popular due to its ability to leverage information from different data sources. Recently, CLIP has emerged as an effective approach that employs vision-language contrastive pretraining to learn joint image and text representations and exhibits remarkable performance in zero-shot learning and text-guided natural image generation. Despite the huge practical success of CLIP, its theoretical understanding remains elusive. In this paper, we formally study transferable representation learning underlying CLIP and demonstrate how features from different modalities get aligned. We also analyze its zero-shot transfer performance on downstream tasks. Inspired by our analysis, we propose a new CLIP-type approach, which achieves better performance than CLIP and other state-of-the-art methods on benchmark datasets. | Understanding Transferable Representation Learning and Zero-shot Transfer in CLIP | [
"Zixiang Chen",
"Yihe Deng",
"Yuanzhi Li",
"Quanquan Gu"
] | Workshop/M3L | 2310.00927 | [
""
] | https://huggingface.co/papers/2310.00927 | 2 | 1 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=Vg6oMb7fbh | @inproceedings{
zhao2023provably,
title={Provably Efficient {CV}aR {RL} in Low-rank {MDP}s},
author={Yulai Zhao and Wenhao Zhan and Xiaoyan Hu and Ho-fung Leung and Farzan Farnia and Wen Sun and Jason Lee},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=Vg6oMb7fbh}
} | We study risk-sensitive Reinforcement Learning (RL), where we aim to maximize
the Conditional Value at Risk (CVaR) with a fixed risk tolerance $\tau$.
Prior theoretical work studying risk-sensitive RL focuses on the tabular Markov Decision Processes (MDPs) setting.
To extend CVaR RL to settings where the state space is large, function approximation must be deployed.
We study CVaR RL in low-rank MDPs with nonlinear function approximation. Low-rank MDPs assume the underlying transition kernel admits a low-rank decomposition, but unlike prior linear models, low-rank MDPs do not assume the feature or state-action representation is known.
We propose a novel Upper Confidence Bound (UCB) bonus-driven algorithm to carefully balance the interplay between exploration, exploitation, and representation learning in CVaR RL.
We prove that our algorithm achieves a sample complexity of $\tilde{O}\left(\frac{H^7 A^2 d^4}{\tau^2 \epsilon^2}\right)$ to yield an $\epsilon$-optimal CVaR, where $H$ is the length of each episode, $A$ is the capacity of the action space, and $d$ is the dimension of the representations.
Computationally, we design a novel discretized Least-Squares Value Iteration (LSVI) algorithm for the CVaR objective as the planning oracle and show that we can find the near-optimal policy in polynomial running time with a Maximum Likelihood Estimation oracle.
To our knowledge, this is the first provably efficient CVaR RL algorithm in low-rank MDPs. | Provably Efficient CVaR RL in Low-rank MDPs | [
"Yulai Zhao",
"Wenhao Zhan",
"Xiaoyan Hu",
"Ho-fung Leung",
"Farzan Farnia",
"Wen Sun",
"Jason Lee"
] | Workshop/M3L | 2311.11965 | [
""
] | https://huggingface.co/papers/2311.11965 | 1 | 0 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=StN285pphC | @inproceedings{
mehra2023analysis,
title={Analysis of Task Transferability in Large Pre-trained Classifiers},
author={Akshay Mehra and Yunbei Zhang and Jihun Hamm},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=StN285pphC}
} | Transfer learning is a cornerstone of modern machine learning, enabling models to transfer the knowledge acquired from a source task to downstream target tasks with minimal fine-tuning. However, the relationship between the source task performance and the downstream target task performance (i.e., transferability) is poorly understood. In this work, we rigorously analyze the transferability of large pre-trained models on downstream classification tasks after linear fine-tuning. We use a novel Task Transfer Analysis approach that transforms the distribution (and classifier) of the source task to produce a new distribution (and classifier) similar to that of the target task. Using this, we propose an upper bound on transferability composed of the Wasserstein distance between the transformed source and the target distributions, the conditional entropy between the label distributions of the two tasks, and the weighted loss of the source classifier on the source task. We propose an optimization problem that minimizes the proposed bound to estimate transferability. Using state-of-the-art pre-trained models, we show that the proposed upper bound accurately estimates transferability on various datasets and demonstrates the importance of high relatedness between the source and target tasks for achieving high transferability. | Analysis of Task Transferability in Large Pre-trained Classifiers | [
"Akshay Mehra",
"Yunbei Zhang",
"Jihun Hamm"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Shqnglu4En | @inproceedings{
tahmasebi2023on,
title={On Scale-Invariant Sharpness Measures},
author={Behrooz Tahmasebi and Ashkan Soleymani and Stefanie Jegelka and Patrick Jaillet},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=Shqnglu4En}
} | Recently, there has been a substantial surge of interest in the development of optimization algorithms tailored for overparameterized models. This interest centers around the objective of minimizing a concept of sharpness in conjunction with the original loss function, e.g., the Sharpness-Aware Minimization (SAM) algorithm shown to be effective in practice. Nevertheless, the majority of sharpness measures exhibit sensitivity to parameter scaling in neural networks, and they may even experience significant magnification when subjected to rescaling operations. Motivated by this issue, in this paper, we introduce a new class of scale-invariant sharpness measures, which give rise to a new class of scale-invariant sharpness-aware objective functions. Furthermore, we prove that the newly introduced objective functions are explicitly biased towards the minimization of our scale-invariant sharpness measures. | On Scale-Invariant Sharpness Measures | [
"Behrooz Tahmasebi",
"Ashkan Soleymani",
"Stefanie Jegelka",
"Patrick Jaillet"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=SU6KGZweUJ | @inproceedings{
chen2023gibbsbased,
title={Gibbs-Based Information Criteria and the Over-Parameterized Regime},
author={Haobo Chen and Yuheng Bu and Gregory Wornell},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=SU6KGZweUJ}
} | Double-descent refers to the unexpected drop in test loss of a learning algorithm beyond an interpolating threshold with over-parameterization, which is not predicted by information criteria in their classical forms due to the limitations in the standard asymptotic approach. We update these analyses using the information risk minimization framework and provide Bayesian Information Criterion (BIC) for models trained by the Gibbs algorithm. Notably, the BIC penalty term for the Gibbs algorithm corresponds to a specific information measure, i.e., KL divergence. We extend this information-theoretic analysis to over-parameterized models by characterizing the Gibbs-based BIC for the random feature model in the regime where the number of parameters $p$ and the number of samples $n$ tend to infinity, with $p/n$ fixed. Our experiments demonstrate that the Gibbs-based BIC can select the high-dimensional model and reveal the mismatch between marginal likelihood and population risk in the over-parameterized regime, providing new insights for understanding the double-descent phenomenon. | Gibbs-Based Information Criteria and the Over-Parameterized Regime | [
"Haobo Chen",
"Yuheng Bu",
"Gregory Wornell"
] | Workshop/M3L | 2306.05583 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=QPMfCLnIqf | @inproceedings{
mohamadi2023grokking,
title={Grokking modular arithmetic can be explained by margin maximization},
author={Mohamad Amin Mohamadi and Zhiyuan Li and Lei Wu and Danica Sutherland},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=QPMfCLnIqf}
} | We present a margin-based generalization theory explaining the “grokking” phenomenon (Power et al., 2022), where the model generalizes long after overfitting to arithmetic datasets. Specifically, we study two-layer quadratic networks on mod-$p$ arithmetic problems, and show that solutions with maximal margin normalized by the $\ell_\infty$ norm generalize with $\tilde O(p^{5/3})$ samples. To the best of our knowledge, this is the first sample complexity bound strictly better than a trivial $O(p^2)$ complexity for modular addition. Empirically, we find that GD on unregularized $\ell_2$ or cross-entropy loss tends to maximize the margin. In contrast, we show that kernel-based models, such as networks that are well-approximated by their neural tangent kernel, need $\Omega(p^2)$ samples to achieve non-trivial $\ell_2$ loss. Our theory suggests that grokking might be caused by overfitting in the kernel regime of early training, followed by generalization as gradient descent eventually leaves the kernel regime and maximizes the normalized margin. | Grokking modular arithmetic can be explained by margin maximization | [
"Mohamad Amin Mohamadi",
"Zhiyuan Li",
"Lei Wu",
"Danica Sutherland"
] | Workshop/M3L | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=QBOV4DqFh6 | @inproceedings{
ayed2023overparameterised,
title={Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning},
author={Fadhel Ayed and Francois Caron and Paul Jung and Juho Lee and Hoil Lee and Hongseok Yang},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=QBOV4DqFh6}
} | We consider gradient-based optimisation of wide, shallow neural networks with hidden-node outputs scaled by positive scale parameters. The scale parameters are non-identical, differing from the classical Neural Tangent Kernel (NTK) parameterisation. We prove that, for large networks, with high probability, gradient flow converges to a global minimum AND can learn features, unlike in the NTK regime. | Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling:
Global Convergence Guarantees and Feature Learning | [
"Fadhel Ayed",
"Francois Caron",
"Paul Jung",
"Juho Lee",
"Hoil Lee",
"Hongseok Yang"
] | Workshop/M3L | [
"https://github.com/anomdoubleblind/asymmetrical_scaling"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |