bibtex_url | proceedings | bibtext | abstract | title | authors | id | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=v3WOdLUnwL | @inproceedings{
anonymous2023learning,
title={Learning Bit Allocations for Z-Order Layouts in Analytic Data Systems},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=v3WOdLUnwL}
} | To improve the performance of scanning and filtering, modern analytic data systems such as Amazon Redshift and Databricks Delta Lake give users the ability to sort a table using a Z-order, which maps each row to a "Z-value" by interleaving the binary representations of the row's attributes, then sorts rows by their Z-values. These Z-order layouts essentially sort the table by multiple columns simultaneously and can achieve superior performance to single-column sort orders when the user's queries filter over multiple columns. However, the Z-orders currently used by modern systems treat all columns as equally important, which often does not result in the best performance due to the unequal impact that different columns have on query performance. In this work, we investigate the performance impact of using Z-orders that place unequal importance on columns: instead of using an equal number of bits from each column in the Z-value interleaving, we allow unequal bit allocation. We introduce a technique that automatically learns the best bit allocation for a Z-order layout on a given dataset and query workload. Z-order layouts using our learned bit allocations outperform traditional Z-order layouts by up to 1.6X in query runtime and up to 2X in rows scanned. | Learning Bit Allocations for Z-Order Layouts in Analytic Data Systems | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=u0Ncut8ru5 | @inproceedings{
anonymous2023learning,
title={Learning Distributed Protocols with Zero Knowledge},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=u0Ncut8ru5}
} | The success of AlphaGo Zero shows that a computer can learn to play a complicated board game without relying on the knowledge from human players. We observe that designing a distributed protocol is similar to playing board games to some extent: when determining the next action to take, they both want to ensure they can win even when a smart opponent tries to drive the game/protocol to the worst case. In this work, we explore whether we can apply similar techniques to learn a distributed protocol with zero knowledge. Towards this goal, we model the process in a distributed protocol as a state machine, and further rely on model checking to validate the correctness of the learned state machine. With this approach, we successfully learned a correct atomic commit protocol with three processes, and upon that, we further discuss future work. | Learning Distributed Protocols with Zero Knowledge | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=tevEeHxpZm | @inproceedings{
anonymous2023compile,
title={ComPile: A Large {IR} Dataset from Production Sources},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=tevEeHxpZm}
} | Code is increasingly becoming a core data modality of modern machine learning research, impacting not only the way we write code
with conversational agents like OpenAI's ChatGPT, Google's Bard, or Anthropic's Claude and the way we translate code from one language
into another, but also the compiler infrastructure underlying the language. While modeling approaches may vary and representations differ,
the targeted tasks often remain the same within the individual classes of models. Relying solely on the ability of modern models to extract
information from unstructured code does not take advantage of 70 years of programming language and compiler development by not utilizing the
structure inherent to programs in the data collection. This detracts from the performance of models working over a tokenized representation
of input code and precludes the use of these models in the compiler itself. To work towards the first intermediate
representation (IR) based models, we fully utilize the LLVM compiler infrastructure, shared by a number of languages, to generate
a 182B token dataset of LLVM IR. We generated this dataset from programming languages built on the shared LLVM
infrastructure, including Rust, Swift, Julia, and C/C++, by hooking into LLVM code generation either through the language's package
manager or the compiler directly to extract the dataset of intermediate representations from production grade programs.
Our dataset shows great promise for large language model training and machine-learned compiler components. | ComPile: A Large IR Dataset from Production Sources | null | Workshop/MLSys | 2309.15432 | [
""
] | https://huggingface.co/papers/2309.15432 | 3 | 0 | 0 | 9 | [] | [
"llvm-ml/ComPile"
] | [] | [] | [
"llvm-ml/ComPile"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=t7BEqidKsk | @inproceedings{
anonymous2023plpilot,
title={{PLP}ilot: Benchmark an Automated Programming Language Design Framework Enabled by Large Language Models},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=t7BEqidKsk}
} | The design of new programming languages traditionally requires expertise across syntax and semantics. Recently, large language models (LLMs) have provided unprecedented power in the code generation field, which has the potential to revolutionize the current programming language design stack, including automating writing passes and formally defining a programming language's semantics and syntax. However, there is yet no framework to leverage LLMs to support programming language design. We propose a programming language design framework enabled by large language models, which decouples every part in the programming language design process into a form acceptable to LLMs. We then propose a set of benchmarks on LLM-based programming language tasks. We evaluate this framework on eight decoupled programming language design stages, which shows great productivity improvements over manually designed languages. | PLPilot: Benchmark an Automated Programming Language Design Framework Enabled by Large Language Models | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=mHShSE7MSU | @inproceedings{
anonymous2023predicting,
title={Predicting User Experience on Laptops from Hardware Specifications},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=mHShSE7MSU}
} | Estimating the overall user experience (UX) on a device is a common challenge faced by manufacturers. Today, device makers primarily rely on microbenchmark scores, such as Geekbench, that stress test specific hardware components, such as CPU or RAM, but do not satisfactorily capture real-life consumer workloads. System designers often rely on domain-specific heuristics and extensive testing of prototypes to reach a desired UX goal, and yet there is often a mismatch between the manufacturers’ performance claims and the consumers’ experience.
We present our initial results on predicting real-life experience on laptops from their hardware specifications. We target web applications that run on Chromebooks (ChromeOS laptops) for a simple and fair aggregation of experience across applications and workloads. On 54 laptops, we track 9 UX metrics on common end-user workloads: web browsing, video playback, and audio/video calls. We focus on a subset of high-level metrics exposed by the Chrome browser that are part of the Web Vitals initiative for measuring user experience on web applications.
With a dataset of 100K UX data points, we train gradient boosted regression trees that predict the metric values from device specifications. Across our 9 metrics, we note a mean $R^2$ score (goodness-of-fit on our dataset) of 97.8% and a mean MAAPE (percentage error in prediction on unseen data) of 10.1%. | Predicting User Experience on Laptops from Hardware Specifications | null | Workshop/MLSys | 2402.08964 | [
""
] | https://huggingface.co/papers/2402.08964 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=kg2zuomCLD | @inproceedings{
anonymous2023redco,
title={Redco: A Lightweight Tool to Automate Distributed Training of {LLM}s on Any {GPU}/{TPU}s},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=kg2zuomCLD}
} | The recent progress of AI can be largely attributed to large language models (LLMs). However, their escalating memory requirements introduce challenges for machine learning (ML) researchers and engineers. Addressing this requires developers to partition a large model to distribute it across multiple GPUs or TPUs. This necessitates considerable coding and intricate configuration efforts with existing model parallel tools, such as Megatron-LM, DeepSpeed, and Alpa. These tools require users' expertise in machine learning systems (MLSys), creating a bottleneck in LLM development, particularly for developers without MLSys background. In this work, we present *Red Coast (Redco)*, a lightweight and user-friendly tool crafted to automate distributed training and inference for LLMs, as well as to simplify ML pipeline development. The design of Redco emphasizes two key aspects. First, to automate model parallelism, our study identifies two straightforward rules to generate tensor parallel strategies for any given LLM. Integrating these rules into Redco facilitates effortless distributed LLM training and inference, eliminating the need for additional coding or complex configurations. We demonstrate its effectiveness by applying Redco to a set of LLM architectures, such as GPT-J, LLaMA, T5, and OPT, up to the model size of 66B, and in multi-host settings. Second, we propose a mechanism that allows for the customization of diverse ML pipelines through the definition of merely three functions, eliminating redundant and formulaic code like multi-host related processing. This mechanism proves adaptable across a spectrum of ML algorithms, from foundational language modeling to complex algorithms like meta-learning and reinforcement learning. Consequently, Redco implementations exhibit far fewer lines of code than their official counterparts. Redco is released under Apache License 2.0 at [https://github.com/tanyuqian/redco](https://github.com/tanyuqian/redco). | Redco: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=k0FIPHpeR4 | @inproceedings{
anonymous2023acltuner,
title={{ACLT}uner: A Profiling-Driven Fast Tuning to Optimized Deep Learning Inference},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=k0FIPHpeR4}
} | Deep learning has expanded its footprint across diverse domains. The performance of these computations hinges on the interplay between deep learning compilers and inference libraries. While compilers adapt efficiently to new deep learning operations or models, their tuning processes are too time-consuming.
In contrast, inference libraries offer quick execution but with adaptability limitations.
To address these challenges, we propose ACLTuner, which optimizes execution configurations using existing inference library kernels.
ACLTuner identifies and assigns the optimal kernel through targeted device profiling.
Compared to ArmNN, AutoTVM, Ansor, ONNXRuntime, and TFLite, ACLTuner not only achieves up to 2.0x faster execution time across seven deep learning models, but also reduces the average tuning time by 95%. | ACLTuner: A Profiling-Driven Fast Tuning to Optimized Deep Learning Inference | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=jWloGZ03o9 | @inproceedings{
anonymous2023accelerating,
title={Accelerating Text-to-image Editing via Cache-enabled Sparse Diffusion Inference},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=jWloGZ03o9}
} | Due to the recent success of diffusion models, text-to-image generation is becoming increasingly popular and has achieved a wide range of applications. Among them, text-to-image editing, or continuous text-to-image generation, attracts lots of attention and can potentially improve the quality of generated images. It's common to see that users may want to slightly edit the generated image by making minor modifications to their input textual descriptions for several rounds of diffusion inference. However, such an image editing process suffers from long-standing heuristics and low inference efficiency. This means that the extent of image editing is uncontrollable, and unnecessary editing invariably leads to extra computation. To solve this problem, we introduce Fast Image Semantically Edit (FISEdit), a cache-enabled sparse diffusion model inference method for efficient text-to-image editing. Extensive empirical results show that FISEdit can be $3.4\times$ and $4.4\times$ faster than existing methods on NVIDIA TITAN RTX and A100 GPUs respectively, and even generates more satisfactory images. | Accelerating Text-to-image Editing via Cache-enabled Sparse Diffusion Inference | null | Workshop/MLSys | 2305.17423 | [
"https://github.com/hankpipi/diffusers-hetu"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=jHjfHmxLiX | @inproceedings{
anonymous2023on,
title={On a Foundation Model for Operating Systems},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=jHjfHmxLiX}
} | This paper lays down the research agenda for a domain-specific foundation model for operating systems (OSes).
Our case for a foundation model revolves around the observations that several OS components (such as CPU, memory, and network subsystems) are interrelated and that OS traces offer the ideal dataset for a foundation model to grasp the intricacies of diverse OS components and their behavior in varying environments and workloads.
We discuss a wide range of possibilities that then arise, from employing foundation models as policy agents to utilizing them as generators and predictors to assist traditional OS control algorithms.
Our hope is that this paper spurs further research into OS foundation models and creating the next generation of operating systems for the evolving computing landscape. | On a Foundation Model for Operating Systems | null | Workshop/MLSys | 2312.07813 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=jDK2fTP8mt | @inproceedings{
anonymous2023learning,
title={Learning Collaborative Information Dissemination with Graph-based Multi-Agent Reinforcement Learning},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=jDK2fTP8mt}
} | In modern communication systems, efficient and reliable information dissemination is crucial for supporting critical operations across domains like disaster response, autonomous vehicles, and sensor networks. This paper introduces a Multi-Agent Reinforcement Learning (MARL) approach as a significant step forward in achieving more decentralized, efficient, and collaborative solutions. We propose a Partially Observable Stochastic Game (POSG) formulation for information dissemination empowering each agent to decide on message forwarding independently, based on their one-hop neighborhood and the degree of connectivity of each neighbor. This constitutes a significant paradigm shift from traditional heuristics based on Multi-Point Relay (MPR) selection. Our approach harnesses Graph Convolutional Reinforcement Learning, employing Graph Attention Networks (GAT) with dynamic attention to capture essential network features. We propose two approaches, L-DGN and HL-DGN, which differ in the information that is exchanged among agents. We evaluate the performance of our decentralized approaches, by comparing them with a widely-used MPR heuristic, and we show that our trained policies are able to efficiently cover the network while bypassing the MPR set selection process. Our approach promises a first step toward supporting the resilience of real-world broadcast communication infrastructures via learned, collaborative information dissemination. | Learning Collaborative Information Dissemination with Graph-based Multi-Agent Reinforcement Learning | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=gxnmqD6tGy | @inproceedings{
anonymous2023mitigating,
title={Mitigating Tail Catastrophe in Steered Database Query Optimization with Risk-Averse Contextual Bandits},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=gxnmqD6tGy}
} | Contextual bandits with average-case statistical guarantees are inadequate in risk-averse situations because they might trade off degraded worst-case behaviour for better average performance. Designing a risk-averse contextual bandit is challenging because exploration is necessary but risk-aversion is sensitive to the entire distribution of rewards; nonetheless we exhibit the first risk-averse contextual bandit algorithm with an online regret guarantee. We apply the technique to a self-tuning software scenario in a production exascale data processing system, where worst-case outcomes should be avoided. | Mitigating Tail Catastrophe in Steered Database Query Optimization with Risk-Averse Contextual Bandits | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=dcNIY2s7Pj | @inproceedings{
anonymous2023deepref,
title={DeePref: Deep Reinforcement Learning For Video Prefetching In Content Delivery Networks},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=dcNIY2s7Pj}
} | Content Delivery Networks carry the majority of Internet traffic, and the increasing demand for video content as a major IP traffic across the Internet highlights the importance of caching and prefetching optimization algorithms. Prefetching aims to make data available in the cache before the requester places its request to reduce access time and improve Quality of Experience on the user side. Prefetching is well investigated in operating systems, compiler instructions, in-memory cache, local storage systems, high-speed networks, and cloud systems. Traditional prefetching techniques are well adapted to a particular access pattern, but fail to adapt to sudden variations or randomization in workloads. This paper explores the use of reinforcement learning to tackle the changes in user access patterns and automatically adapt over time. To this end, we propose, DeePref, a Deep Reinforcement Learning agent for online video content prefetching in Content Delivery Networks. DeePref is a prefetcher implemented on edge networks and is agnostic to hardware design, operating systems, and applications. Our results show that DeePref DRQN, using a real-world dataset, achieves a 17% increase in prefetching accuracy and a 28% increase in prefetching coverage on average compared to baseline approaches that use video content popularity as a building block to statically or dynamically make prefetching decisions. We also study possible transfer learning of statistical models from one edge network into another, where unseen user requests from unknown distribution are observed. In terms of transfer learning, the increase in prefetching accuracy and prefetching coverage are [30%, 10%], respectively. Our source code will be available on Github. | DeePref: Deep Reinforcement Learning For Video Prefetching In Content Delivery Networks | null | Workshop/MLSys | 2310.07881 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=cgakfZ6acw | @inproceedings{
anonymous2023zero,
title={Ze{RO}++: Extremely Efficient Collective Communication for Large Model Training},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=cgakfZ6acw}
} | While the Zero Redundancy Optimizer (ZeRO) excels in training large-scale models, it struggles to achieve good throughput in environments with limited bandwidth or small batches where communication becomes a major bottleneck. Inspired by the principles of fine-grained quantization in machine learning algorithms, we designed ZeRO++, an optimizer robust to quantization effects that allows for significant communication volume reduction using low-precision quantization techniques. ZeRO++ combines three communication volume reduction techniques (low-precision all-gather, data remapping, and low-precision gradient averaging) to reduce the communication volume by up to 4x, enabling up to 2.16x better throughput at 384 GPU scale. Our results also show ZeRO++ can speed up RLHF by 3.3x compared to vanilla ZeRO. To verify the convergence of ZeRO++, we test models of up to 13B parameters for pretraining with 8/6-bit all-gather and up to 30B parameters for finetuning with 4-bit or 2-bit all-gather, and demonstrate accuracy on par with the original ZeRO (aka standard training). As a byproduct, the model trained with ZeRO++ is naturally weight-quantized, which can be directly used for inference without post-training quantization or quantization-aware training. | ZeRO++: Extremely Efficient Collective Communication for Large Model Training | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Z7v6mxNVdU | @inproceedings{
anonymous2023mase,
title={{MASE}: An Efficient Representation for Software-Defined {ML} Hardware System Exploration},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=Z7v6mxNVdU}
} | Machine learning (ML) accelerators have been studied and used extensively to compute ML models with high performance and low power. However, designing such accelerators normally takes a long time and requires significant effort. Unfortunately, the pace of development of ML software models is much faster than the accelerator design cycle, leading to frequent and drastic modifications in the model architecture, thus rendering many accelerators obsolete. Existing design tools and frameworks can provide quick accelerator prototyping, but only for a limited range of models that can fit into a single hardware device, such as an FPGA. Furthermore, with the emergence of large language models, such as GPT-3, there is an increased need for hardware prototyping of these large models within a many-accelerator system to ensure the hardware can scale with the ever-growing model sizes.
The design space is often huge, involving both software and hardware optimization. To address this, we propose a novel representation named MASE IR (Machine-learning Accelerator System Exploration Intermediate Representation) that describes data types, software algorithms, and hardware design constraints. MASE IR opens up opportunities for exploring software and hardware co-optimization at scale. As an application of MASE IR, we implemented a PyTorch-based framework named MASE that automatically optimizes and maps an ML model onto an efficient hardware accelerator system. We believe MASE IR will open new research opportunities for ML system design. | MASE: An Efficient Representation for Software-Defined ML Hardware System Exploration | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=TMvtla5bOP | @inproceedings{
anonymous2023vmrl,
title={{VMR}2L: Virtual Machines Rescheduling Using Reinforcement Learning in Data Centers},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=TMvtla5bOP}
} | Modern industry-scale data centers receive thousands of virtual machine (VM) requests per minute. Due to the continual creation and release of VMs, many small resource fragments are scattered across physical machines (PMs). To handle these fragments, data centers periodically reschedule some VMs to alternative PMs. Despite the increasing importance of VM rescheduling as data centers grow in size, the problem remains understudied. We first show that, unlike most combinatorial optimization tasks, the inference time of VM rescheduling algorithms significantly influences their performance, causing many existing methods to scale poorly. Therefore, we develop a reinforcement learning system for VM rescheduling, VMR2L, which incorporates a set of customized techniques, such as a two-stage framework that accommodates diverse constraints and workload conditions as well as an effective feature extraction module. Our experiments on an industry-scale data center show that VMR2L can achieve a performance comparable to the optimal solution, but with a running time of seconds. | VMR2L: Virtual Machines Rescheduling Using Reinforcement Learning in Data Centers | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=PUQLQGtbOx | @inproceedings{
anonymous2023enhancing,
title={Enhancing {ML} model accuracy for Digital {VLSI} circuits using diffusion models: A study on synthetic data generation},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=PUQLQGtbOx}
} | Generative AI has seen remarkable growth over the past few years, with diffusion models being state-of-the-art for image generation. This study investigates the use of diffusion models in generating artificial data for electronic circuits to enhance the accuracy of subsequent machine learning models in tasks such as performance assessment, design, and testing, where training data is usually very limited. We utilize simulations in the HSPICE design environment with 22nm CMOS technology nodes to obtain representative real training data for our proposed diffusion model. Our results demonstrate that synthetic data generated by the diffusion model closely resembles real data. We validate the quality of the generated data and demonstrate that data augmentation is certainly effective in predictive analysis of VLSI design for digital circuits. | Enhancing ML model accuracy for Digital VLSI circuits using diffusion models: A study on synthetic data generation | null | Workshop/MLSys | 2310.10691 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=OFYGs7VPoG | @inproceedings{
anonymous2023cloudevalyaml,
title={CloudEval-{YAML}: A Realistic and Scalable Benchmark for Cloud Configuration Generation},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=OFYGs7VPoG}
} | Among the thriving ecosystem of cloud computing and the proliferation of Large Language Model (LLM)-based code generation tools, there is a lack of benchmarking for code generation in cloud-native applications. In response to this need, we present CloudEval-YAML, a practical benchmark for cloud configuration generation. CloudEval-YAML tackles the diversity challenge by focusing on YAML, the de facto standard of numerous cloud-native tools. We develop the CloudEval-YAML benchmark with practicality in mind: the dataset consists of hand-written problems with unit tests targeting practical scenarios. To improve practicality during evaluation, we build a scalable evaluation platform for CloudEval-YAML that achieves a 20 times speedup over a single machine. To the best of our knowledge, the CloudEval-YAML dataset is the first hand-written dataset targeting cloud-native applications. We present an in-depth evaluation of 13 LLMs, leading to a deeper understanding of the problems and LLMs, as well as effective methods to improve task performance and reduce cost. The codebase is released at https://github.com/alibaba/CloudEval-YAML. | CloudEval-YAML: A Realistic and Scalable Benchmark for Cloud Configuration Generation | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=IsfGse2SVg | @inproceedings{
anonymous2023silhouette,
title={Silhouette: Toward Performance-Conscious and Transferable {CPU} Embeddings},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=IsfGse2SVg}
} | Learned embeddings are widely used to obtain concise data representation and enable transfer learning between different data sets and tasks. In this paper, we present our approach, Silhouette, which leverages publicly available CPU performance data sets to learn CPU performance embeddings. We show how Silhouette enables transfer learning across different types of CPU and leads to a significant improvement in performance prediction accuracy for the target CPUs. | Silhouette: Toward Performance-Conscious and Transferable CPU Embeddings | null | Workshop/MLSys | 2212.08046 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=IY7M6sqCxq | @inproceedings{
anonymous2023improving,
title={Improving Large Language Model Hardware Generating Quality through Post-{LLM} Search},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=IY7M6sqCxq}
} | As large language models (LLMs) like ChatGPT exhibit unprecedented machine intelligence, they also show great performance in assisting hardware engineers to realize higher-efficiency logic design via natural language interaction. However, due to the limitations of LLMs, existing LLM-based hardware generation frameworks generate Verilog register transfer language (RTL) without considering its performance, power, and area (PPA). To overcome this challenge, we design a post-LLM search approach to **merge the design space exploration (DSE) process into the current LLM hardware generation workflow**, which enables PPA optimization. First, our framework generates prompts for the LLM, which then produces initial Verilog programs. Second, an output manager corrects and optimizes these programs before collecting them into the final design space, which is constructed as an HDL search tree. Finally, in the most important stage, the post-LLM search, our approach searches through this space to select the optimal design under the target metrics.
The evaluation shows that our approach improves the quality of the generated Verilog and explores a broader design optimization space than prior work and native LLMs alone. | Improving Large Language Model Hardware Generating Quality through Post-LLM Search | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=HTgdQIAgvJ | @inproceedings{
anonymous2023on,
title={On the Promise and Challenges of Foundation Models for Learning-based Cloud Systems Management},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=HTgdQIAgvJ}
} | Foundation models (FMs) are machine learning models that are trained broadly on large-scale data and can be adapted to a set of downstream tasks via fine-tuning, few-shot learning, or even zero-shot learning. Despite the successes of FMs in the language and vision domain, we have yet to see an attempt to develop FMs for cloud systems management (also known as cloud intelligence/AIOps). In this work, we explore the opportunities of developing FMs for cloud systems management. We propose an initial FM design (i.e., the FLASH framework) based on meta-learning and demonstrate its usage in the task of resource configuration search and workload autoscaling. Preliminary results show that FLASH achieves 52.3-90.5% less performance degradation with no adaptation and provides 5.5x faster adaptation. We conclude this paper by discussing the unique risks and challenges of developing FMs for cloud systems management. | On the Promise and Challenges of Foundation Models for Learning-based Cloud Systems Management | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=GBftREPtdK | @inproceedings{
anonymous2023parm,
title={{PARM}: Adaptive Resource Allocation for Datacenter Power Capping},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=GBftREPtdK}
} | Energy efficiency is pressing in today's cloud datacenters. Various power management strategies, such as oversubscription, power capping, and dynamic voltage and frequency scaling, have been proposed and are in use by datacenter operators to better control power consumption at any management unit (e.g., node-level or rack-level) without breaking power budgets. In addition, by gaining more control over different management units within a datacenter (or across datacenters), operators are able to shift the energy consumption either spatially or temporally to optimize carbon footprint based on the spatio-temporal patterns of carbon intensity. The drive for automation has resulted in the exploration of learning-based resource management approaches. In this work, we first systematically investigate the impact of power capping on both latency-critical datacenter workloads and learning-based resource management solutions (i.e., reinforcement learning or RL). We show that even a 20% reduction in power limit (power capping) leads to an 18% degradation in resource management effectiveness (i.e., defined by an RL reward function) which causes 50% higher application latency. We then propose PARM, an adaptive resource allocation framework that provides graceful performance-preserving transition under power capping for latency-critical workloads. Evaluation results show that PARM achieves 10.2-99.3% improvement in service-level objective (SLO) preservation under power capping while improving 3.1-5.8% utilization. | PARM: Adaptive Resource Allocation for Datacenter Power Capping | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=G9TUaacGwU | @inproceedings{
anonymous2023multiagent,
title={Multi-Agent Join},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=G9TUaacGwU}
} | Real-time performance is crucial for interactive and exploratory data analysis,
where users require quick access to subsets or progressive presentations of query
results. Delivering real-time results over large data for common relational binary
operators like join is challenging, as join algorithms often spend considerable time
scanning and attempting to join parts of relations that may not produce any results.
Existing solutions often involve repetitive preprocessing, which is costly and may
not be feasible for interactive workloads or evolving datasets. Additionally, these
solutions may support only restricted types of joins. This paper presents a novel
approach for achieving efficient progressive join processing. The scan operator of
the join learns online during query execution, identifying portions of its underlying
relation that satisfy the join condition. Additionally, an algorithm is introduced
where both scan operators collaboratively learn to optimize join execution. | Multi-Agent Join | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=C4nDgK47OJ | @inproceedings{
anonymous2023can,
title={Can Semi-Supervised Learning Improve Prediction of Deep Learning Model Resource Consumption?},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=C4nDgK47OJ}
} | With the increasing computational demands of Deep Learning (DL), predicting training characteristics like training time and memory usage is crucial for efficient hardware allocation. Traditional methods rely solely on supervised learning for such predictions. Our work integrates a semi-supervised approach for improved accuracy. We present TraPPM, which utilizes a graph autoencoder to learn representations of unlabeled DL graphs, which are then combined with supervised graph neural network training to predict the metrics. Our model significantly surpasses standard methods in prediction accuracy, with MAPE values of 9.51\% for training step time and 4.92\% for memory usage. The code and dataset are available at https://github.com/karthickai/trappm | Can Semi-Supervised Learning Improve Prediction of Deep Learning Model Resource Consumption? | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=AilSfYM8wN | @inproceedings{
anonymous2023renamer,
title={Renamer: A Transformer Architecture Invariant to Variable Renaming},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=AilSfYM8wN}
} | Many modeling tasks involve learning functions which are invariant to certain types of input transformations. We study a specific class of invariance: semantics-preserving variable renaming for models of code. We show that vanilla Transformers trained on renaming-invariant tasks do not exhibit renaming invariance. We propose Renamer, a Transformer architecture which is itself invariant to semantics-preserving variable renaming. On a CPU simulation task, Renamer reduces error by between 24.79% and 52.8% compared to a vanilla Transformer. | Renamer: A Transformer Architecture Invariant to Variable Renaming | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=7eg9DuEVn8 | @inproceedings{
anonymous2023efficient,
title={Efficient Prompt Caching for Large Language Model Inference via Embedding Similarity},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=7eg9DuEVn8}
} | Large language models (LLMs) have achieved huge success in numerous natural language processing (NLP) tasks. However, they face the challenge of significant resource consumption during inference. In this paper, we aim to improve the inference efficiency of LLMs by prompt caching, i.e., if the current prompt can be answered by the same response of a previous prompt, one can directly utilize that response without calling the LLM. Specifically, we focus on the prediction accuracy of prompt caching for single-round question-answering tasks via embedding similarity. The existing embeddings of prompts mostly focus on whether two prompts are semantically similar, which is not necessarily equivalent to whether the same response can answer them. Therefore, we propose a distillation-based method to fine-tune the existing embeddings for better caching prediction. Theoretically, we provide finite-sample guarantees for the convergence of our method under different types of loss functions. Empirically, we construct a dataset based on Kwiatkowski et al. [2019] and fine-tune the embedding from Wang et al. [2022], which improves the AUC of caching prediction from 0.85 to 0.92 within 10 minutes of training. The
resulting embedding model improves the throughput over the initial embedding
model. | Efficient Prompt Caching for Large Language Model Inference via Embedding Similarity | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=6GR8KqWCWf | @inproceedings{
anonymous2023reinforcement,
title={Reinforcement Learning for {FPGA} Placement},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=6GR8KqWCWf}
} | This paper introduces the problem of learning to place blocks in Field-Programmable Gate Arrays (FPGAs) and a preliminary learning-based method. In contrast to previous FPGA placement algorithms, we depart from simulated annealing techniques and instead employ deep reinforcement learning (deep RL) for the placement task with the objective of minimizing wirelength. To facilitate the agent's decision making, we design unique state representations including the chipboard observations and interconnections between different blocks. Additionally, we ground representation learning in the supervised task of predicting placement quality to enhance the RL policy's generalization capabilities. To the best of our knowledge, we are the first to introduce a deep RL agent for FPGA placement, with preliminary results to suggest the feasibility of our approach. We hope that this paper will attract more attention to using RL in FPGAs by electronic design automation engineers. | Reinforcement Learning for FPGA Placement | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=5Pr2AVPk6z | @inproceedings{
anonymous2023performance,
title={Performance Roulette: How Cloud Weather Affects {ML}-Based System Optimization},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=5Pr2AVPk6z}
} | As system complexity, workload diversity, and cloud computing adoption continue to grow, both operators and developers are turning to machine learning (ML)-based approaches for optimizing systems. ML-based approaches typically perform measurements to evaluate candidate system configurations to discover the most optimal configuration. However, it is widely recognized that cloud systems can be affected by "cloud weather", i.e., shifts in performance due to hardware heterogeneity, interference from co-located workloads, virtualization overheads, etc. Given these two trends, in this work we ask: how much can performance variability during training affect ML approaches applied to systems?
Using DBMS knob configuration tuning as a case study, we present two measurement studies that show how ML-based optimizers can be affected by noise. This leads to four main observable problems: (1) there exist very sensitive configurations whose performance does not transfer across machines of the same type, (2) unstable configurations during training significantly impact configuration transferability, (3) tuning in an environment with non-representative noise degrades final performance in the deployment environment, and (4) sampling noise causes a convergence slowdown. Finally, we propose a set of methods to mitigate the challenges in measurements for training ML-based system components. | Performance Roulette: How Cloud Weather Affects ML-Based System Optimization | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3UanvOb2nR | @inproceedings{
anonymous2023secrecy,
title={Secrecy and Sensitivity: Privacy-Performance Trade-Offs in Encrypted Traffic Classification},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=3UanvOb2nR}
} | As datasets and models grow in size and complexity to increase performance, the risks associated with sensitive data also grow. Differential privacy (DP) offers a framework for designing mechanisms that provide a degree of privacy that can help conceal sensitive features or information. However, different domains and applications can naturally exhibit different rates of trade-offs between privacy and performance depending on their characteristics. In contrast to well-studied areas (e.g., healthcare), one relatively unexplored domain is network traffic analysis where the data contains sensitive information on users' communications. In this paper, we apply DP to various machine learning models trained to classify between encrypted and non-encrypted packets from network traffic; we emphasize that our goal is to examine a relatively unexplored area to analyze the trade-offs between privacy and performance when the data contains both encrypted and un-encrypted observations. We show how varying model architecture and feature sets can be a relatively simple way to achieve more optimal performance-privacy trade-offs; we also compare and contextualize reasonable privacy budgets from our analysis in the network traffic domain against those in other more well-studied domains. | Secrecy and Sensitivity: Privacy-Performance Trade-Offs in Encrypted Traffic Classification | null | Workshop/MLSys | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3Jpbm4hwTR | @inproceedings{
anonymous2023llmdv,
title={{LLM}4{DV}: Using Large Language Models for Hardware Test Stimuli Generation},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=3Jpbm4hwTR}
} | Test stimuli generation has been a crucial but labour-intensive task in hardware design verification. In this paper, we revolutionize this process by harnessing the power of large language models (LLMs) and present a novel benchmarking framework, LLM4DV. This framework introduces a prompt template for interactively eliciting test stimuli from the LLM, along with four innovative prompting improvements to support the pipeline execution and further enhance its performance. We compare LLM4DV to traditional constrained-random testing (CRT), using three self-designed design-under-test (DUT) modules. Experiments demonstrate that LLM4DV excels in efficiently handling straightforward DUT scenarios, leveraging its ability to employ basic mathematical reasoning and pre-trained knowledge. While it exhibits reduced efficiency in complex task settings, it still outperforms CRT in relative terms. The proposed framework and the DUT modules used in our experiments are open-sourced. | LLM4DV: Using Large Language Models for Hardware Test Stimuli Generation | null | Workshop/MLSys | 2310.04535 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=1IFPVsOESo | @inproceedings{
anonymous2023adrec,
title={Ad-Rec: Advanced Feature Interactions to Address Covariate-Shifts in Recommendation Networks},
author={Anonymous},
booktitle={Machine Learning for Systems 2023},
year={2023},
url={https://openreview.net/forum?id=1IFPVsOESo}
} | Recommendation models enhance user experiences by utilizing input feature correlations. However, deep learning-based models encounter challenges from changing user behavior and item features, leading to data distribution shifts. Effective cross-feature learning is crucial in addressing this. We introduce Ad-Rec, an advanced network that leverages feature interaction techniques to tackle these issues. It utilizes masked transformers to learn higher-order cross-features while mitigating data distribution drift. Our approach improves model quality, accelerates convergence, and reduces training time. We demonstrate scalability of Ad-Rec and its superior model quality through extensive ablation studies. | Ad-Rec: Advanced Feature Interactions to Address Covariate-Shifts in Recommendation Networks | null | Workshop/MLSys | 2308.14902 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=z03bW0doni | @inproceedings{
min2024silo,
title={{SILO} Language Models: Isolating Legal Risk In a Nonparametric Datastore},
author={Sewon Min and Suchin Gururangan and Eric Wallace and Weijia Shi and Hannaneh Hajishirzi and Noah A. Smith and Luke Zettlemoyer},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=z03bW0doni}
} | The legality of training language models (LMs) on copyrighted or otherwise restricted data is under intense debate. However, as we show, model performance significantly degrades if trained only on low-risk text (e.g., out-of-copyright books or government documents), due to its limited size and domain coverage. We present SILO, a new language model that manages this risk-performance tradeoff during inference. SILO is built by (1) training a parametric LM on the Open License Corpus (OLC), a new corpus we curate with 228B tokens of public domain and permissively licensed text and (2) augmenting it with a more general and easily modifiable nonparametric datastore (e.g., containing copyrighted books or news) that is only queried during inference. The datastore allows use of high-risk data without training on it, supports sentence-level data attribution, and enables data producers to opt out from the model by removing content from the store. These capabilities can foster compliance with data-use regulations such as the fair use doctrine in the United States and the GDPR in the European Union. Our experiments show that the parametric LM struggles on its own with domains not covered by OLC. However, access to the datastore greatly improves out of domain performance, closing 90% of the performance gap with an LM trained on the Pile, a more diverse corpus with mostly high-risk text. We also analyze which nonparametric approach works best, where the remaining errors lie, and how performance scales with datastore size. Our results suggest that it is possible to build high quality language models while mitigating legal risk. | SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore | [
"Sewon Min",
"Suchin Gururangan",
"Eric Wallace",
"Weijia Shi",
"Hannaneh Hajishirzi",
"Noah A. Smith",
"Luke Zettlemoyer"
] | Workshop/DistShift | 2308.04430 | [
"https://github.com/kernelmachine/silo-lm"
] | https://huggingface.co/papers/2308.04430 | 3 | 9 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=wxR5vAGz6o | @inproceedings{
xiao2024the,
title={The {SVHN} Dataset Is Deceptive for Probabilistic Generative Models Due to a Distribution Mismatch},
author={Tim Z. Xiao and Johannes Zenn and Robert Bamler},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=wxR5vAGz6o}
} | The Street View House Numbers (SVHN) dataset is a popular benchmark dataset in deep learning. Originally designed for digit classification tasks, the SVHN dataset has been widely used as a benchmark for various other tasks including generative modeling. However, with this work, we aim to warn the community about an issue of the SVHN dataset as a benchmark for generative modeling tasks: we discover that the official training set and test set of the SVHN dataset are not drawn from the same distribution. We empirically show that this distribution mismatch has little impact on the classification task (which may explain why this issue has not been detected before), but it severely affects the evaluation of probabilistic generative models, such as Variational Autoencoders and diffusion models. As a workaround, we propose to mix and re-split the official training and test set when SVHN is used for tasks other than classification. We publish a new split and the corresponding indices we used to create it. | The SVHN Dataset Is Deceptive for Probabilistic Generative Models Due to a Distribution Mismatch | [
"Tim Z. Xiao",
"Johannes Zenn",
"Robert Bamler"
] | Workshop/DistShift | 2312.02168 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wkQy8mLIb9 | @inproceedings{
kotha2024understanding,
title={Understanding Catastrophic Forgetting in Language Models via Implicit Inference},
author={Suhas Kotha and Jacob Springer and Aditi Raghunathan},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=wkQy8mLIb9}
} | We lack a systematic understanding of the effects of fine-tuning (via methods such as instruction-tuning or reinforcement learning from human feedback), particularly on tasks outside the narrow fine-tuning distribution. In a simplified scenario, we demonstrate that improving performance on fine-tuning tasks comes at the expense of other pretraining capabilities. We hypothesize that models implicitly infer the task of the prompt and that fine-tuning skews this inference towards fine-tuning tasks. We find that artificially making the task look farther from the fine-tuning distribution while requiring the same capability can recover some of the pretraining capabilities on our synthetic setup. Since real fine-tuning distributions are predominantly English, we apply conjugate prompting to recover pretrained capabilities in LLMs by simply translating the prompts to different languages. This allows us to recover the in-context learning abilities lost via instruction tuning, and more concerningly, recover harmful content generation suppressed by safety fine-tuning in chatbots like ChatGPT. | Understanding Catastrophic Forgetting in Language Models via Implicit Inference | [
"Suhas Kotha",
"Jacob Springer",
"Aditi Raghunathan"
] | Workshop/DistShift | 2309.10105 | [
"https://github.com/kothasuhas/understanding-forgetting"
] | https://huggingface.co/papers/2309.10105 | 0 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=wMWU5kl21R | @inproceedings{
mehra2024predicting,
title={Predicting the Performance of Foundation Models via Agreement-on-the-Line},
author={Aman Mehra and Rahul Saxena and Taeyoun Kim and Christina Baek and J Zico Kolter and Aditi Raghunathan},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=wMWU5kl21R}
} | Estimating out-of-distribution performance is critical to safely deploy machine learning models. Recently, Baek et al. showed that the phenomenon ``agreement-on-the-line'' can be a reliable method for predicting the OOD accuracy of models in an ensemble consisting largely of CNNs trained from scratch. However, it is now increasingly common to lightly fine-tune foundation models, and it is unclear whether such fine-tuning is sufficient to produce enough diversity in model predictions for such agreement-based methods to work properly. In this paper, we develop methods for reliably applying agreement-on-the-line-based performance estimation to fine-tuned foundation models. In particular, we first study the case of fine-tuning a single foundation model, where we extensively study how different types of randomness (linear head initialization, data shuffling, and data subsetting) contribute to agreement-on-the-line of the resulting model sets. Somewhat surprisingly, we find that it is possible to obtain strong agreement via random initialization of the linear head alone. Next, we find how _multiple_ foundation models, pretrained on different data sets but fine-tuned on the same task, also observe agreement-on-the-line. Again rather surprisingly, the diversity of such models is not too disparate, and they all lie on the same agreement line. In total, these methods enable reliable and efficient estimation of OOD accuracy for fine-tuned foundation models, without leveraging any labeled OOD data. | Predicting the Performance of Foundation Models via Agreement-on-the-Line | [
"Aman Mehra",
"Rahul Saxena",
"Taeyoun Kim",
"Christina Baek",
"J Zico Kolter",
"Aditi Raghunathan"
] | Workshop/DistShift | 2404.01542 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=vTapqwaTSi | @inproceedings{
zhang2024openood,
title={Open{OOD} v1.5: Enhanced Benchmark for Out-of-Distribution Detection},
author={Jingyang Zhang and Jingkang Yang and Pengyun Wang and Haoqi Wang and Yueqian Lin and Haoran Zhang and Yiyou Sun and Xuefeng Du and Kaiyang Zhou and Wayne Zhang and Yixuan Li and Ziwei Liu and Yiran Chen and Hai Li},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=vTapqwaTSi}
} | Out-of-Distribution (OOD) detection is critical for the reliable operation of open-world intelligent systems. Despite the emergence of an increasing number of OOD detection methods, the evaluation inconsistencies present challenges for tracking the progress in this field.
OpenOOD v1 initiated the unification of the OOD detection evaluation but faced limitations in scalability and scope. In response, this paper presents OpenOOD v1.5, a significant improvement from its predecessor that ensures accurate and standardized evaluation of OOD detection methodologies at large scale. Notably, OpenOOD v1.5 extends its evaluation capabilities to large-scale datasets (ImageNet) and foundation models (e.g., CLIP and DINOv2), and expands its scope to investigate full-spectrum OOD detection which considers semantic and covariate distribution shifts at the same time. This work also contributes in-depth analysis and insights derived from comprehensive experimental results, thereby enriching the knowledge pool of OOD detection methodologies. With these enhancements, OpenOOD v1.5 aims to drive advancements and offer a more robust and comprehensive evaluation benchmark for OOD detection research. | OpenOOD v1.5: Enhanced Benchmark for Out-of-Distribution Detection | [
"Jingyang Zhang",
"Jingkang Yang",
"Pengyun Wang",
"Haoqi Wang",
"Yueqian Lin",
"Haoran Zhang",
"Yiyou Sun",
"Xuefeng Du",
"Yixuan Li",
"Ziwei Liu",
"Yiran Chen",
"Hai Li"
] | Workshop/DistShift | 2306.09301 | [
"https://github.com/jingkang50/openood"
] | https://huggingface.co/papers/2306.09301 | 3 | 0 | 0 | 14 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=oW7oQHas3m | @inproceedings{
yadlowsky2024can,
title={Can Transformer Models Generalize Via In-Context Learning Beyond Pretraining Data?},
author={Steve Yadlowsky and Lyric Doshi and Nilesh Tripuraneni},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=oW7oQHas3m}
} | Transformer models, notably large language models (LLMs), have the remarkable ability to perform in-context learning (ICL) -- to perform new tasks when prompted with unseen input-output examples without any explicit model training. In this work, we study how effectively transformers can generalize beyond their pretraining data mixture, comprised of one or multiple function classes, to identify and learn new functions in-context which are outside the pretraining distribution. To investigate this question in a controlled setting, we focus on the transformers' ability to in-context learn functions from simulated data. While these models do well at generalizing to new functions within the pretrained function class, when presented with tasks or functions which are out-of-distribution from their pretraining data, we demonstrate various failure modes of transformers. Together, our results suggest that the impressive ICL abilities of high-capacity transformer models may be more closely tied to the coverage of their pretraining data mixtures than inductive biases that create fundamental generalization capabilities. | Can Transformer Models Generalize Via In-Context Learning Beyond Pretraining Data? | [
"Steve Yadlowsky",
"Lyric Doshi",
"Nilesh Tripuraneni"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=laQAXlAKq8 | @inproceedings{
hu2024iteratively,
title={Iteratively Refined Behavior Regularization for Offline Reinforcement Learning},
author={Xiaohan Hu and Yi Ma and Chenjun Xiao and YAN ZHENG and Jianye HAO},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=laQAXlAKq8}
} | One of the fundamental challenges for offline reinforcement learning (RL) is ensuring robustness to data distribution.
Whether the data originates from a near-optimal policy or not, we anticipate that an algorithm should demonstrate its ability to learn an effective control policy that seamlessly aligns with the inherent distribution of offline data. Unfortunately, behavior regularization, a simple yet effective offline RL algorithm, tends to struggle in this regard. In this paper, we propose a new algorithm that substantially enhances behavior-regularization based on conservative policy iteration. Our key observation is that by iteratively refining the reference policy used for behavior regularization, conservative policy update guarantees gradual improvement, while also implicitly avoiding querying out-of-sample actions to prevent catastrophic learning failures. We prove that in the tabular setting this algorithm is capable of learning the optimal policy covered by the offline dataset, commonly referred to as the in-sample optimal policy. We then explore several implementation details of the algorithm when function approximations are applied. The resulting algorithm is easy to implement, requiring only a few lines of code modification to existing methods. Experimental results on the D4RL benchmark indicate that our method outperforms previous state-of-the-art baselines in most tasks, clearly demonstrating its superiority over
behavior regularization. | Iteratively Refined Behavior Regularization for Offline Reinforcement Learning | [
"Xiaohan Hu",
"Yi Ma",
"Chenjun Xiao",
"YAN ZHENG",
"Jianye HAO"
] | Workshop/DistShift | 2306.05726 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=kN5BXh9buY | @inproceedings{
rashtchian2024probing,
title={Probing the Equivariance of Image Embeddings},
author={Cyrus Rashtchian and Charles Herrmann and Chun-Sung Ferng and Ayan Chakrabarti and Dilip Krishnan and Deqing Sun and Da-Cheng Juan and Andrew Tomkins},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=kN5BXh9buY}
} | Probes are small networks that predict properties of underlying data from embeddings, and they provide a targeted way to illuminate the information in embeddings. While analysis with probes has become standard in NLP, there has been less exploration in vision. Our goal is to understand the invariance vs. equivariance of popular image embeddings (e.g., MAE, SimCLR, or CLIP) under certain distribution shifts. By doing so, we investigate what visual aspects from the raw images are encoded into the embeddings by these foundation models. Our probing is based on a systematic transformation prediction task that measures the visual content of embeddings along many axes, including neural style transfer, recoloring, icon/text overlays, noising, and blurring. Surprisingly, six embeddings (including SimCLR) encode enough non-semantic information to identify dozens of transformations. We also consider a generalization task, where we group similar transformations and hold out several for testing. Image-text models (CLIP, ALIGN) are better at recognizing new examples of style transfer than masking-based models (CAN, MAE). Our results show that embeddings from foundation models are equivariant and encode more non-semantic features than a supervised baseline. Hence, their OOD generalization abilities are not due to invariance to such distribution shifts. | Probing the Equivariance of Image Embeddings | [
"Cyrus Rashtchian",
"Charles Herrmann",
"Chun-Sung Ferng",
"Ayan Chakrabarti",
"Dilip Krishnan",
"Deqing Sun",
"Da-Cheng Juan",
"Andrew Tomkins"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=k9EfAJhFZc | @inproceedings{
shnitzer2024llm,
title={{LLM} Routing with Benchmark Datasets},
author={Tal Shnitzer and Anthony Ou and M{\'\i}rian Silva and Kate Soule and Yuekai Sun and Justin Solomon and Neil Thompson and Mikhail Yurochkin},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=k9EfAJhFZc}
} | There is a rapidly growing number of open-source Large Language Models (LLMs) and benchmark datasets to compare them. While some models dominate these benchmarks, no single model typically achieves the best accuracy in all tasks and use cases. In this work, we address the challenge of selecting the best LLM out of a collection of models for new tasks. We propose a new formulation for the problem, in which benchmark datasets are repurposed to learn a ``router'' model for this LLM selection, and we show that this problem can be reduced to a collection of binary classification tasks. We demonstrate the utility and limitations of learning model routers from various benchmark datasets. The extended version of the paper is available here: https://arxiv.org/pdf/2309.15789.pdf. | LLM Routing with Benchmark Datasets | [
"Tal Shnitzer",
"Anthony Ou",
"Mírian Silva",
"Kate Soule",
"Yuekai Sun",
"Justin Solomon",
"Neil Thompson",
"Mikhail Yurochkin"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=jQ4SZXV7Wk | @inproceedings{
mart{\'\i}nez-ferrer2024exploring,
title={Exploring Generalisability of Self-Distillation with No Labels for {SAR}-Based Vegetation Prediction},
author={Laura Mart{\'\i}nez-Ferrer and Anna Jungbluth and Joseph Alejandro Gallego Mejia and Matt Allen and Francisco Dorr and Freddie Kalaitzis and Ra{\'u}l Ramos-Poll{\'a}n},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=jQ4SZXV7Wk}
} | In this work we pre-train a DINO-ViT based model using two Synthetic Aperture Radar datasets (S1GRD or GSSIC) across three regions (China, Conus, Europe). We fine-tune the models on smaller labeled datasets to predict vegetation percentage, and empirically study the connection between the embedding space of the models and their ability to generalize across diverse geographic regions and to unseen data. For S1GRD, embedding spaces of different regions are clearly separated, while GSSIC's overlaps. Positional patterns remain during fine-tuning, and greater distances in embeddings often result in higher errors for unfamiliar regions. With this, our work increases our understanding of generalizability for self-supervised models applied to remote sensing. | Exploring Generalisability of Self-Distillation with No Labels for SAR-Based Vegetation Prediction | [
"Laura Martínez-Ferrer",
"Anna Jungbluth",
"Joseph Alejandro Gallego Mejia",
"Matt Allen",
"Francisco Dorr",
"Freddie Kalaitzis",
"Raúl Ramos-Pollán"
] | Workshop/DistShift | 2310.02048 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=iv0i39JlbP | @inproceedings{
choi2024autoft,
title={Auto{FT}: Robust Fine-Tuning by Optimizing Hyperparameters on {OOD} Data},
author={Caroline Choi and Yoonho Lee and Annie S Chen and Allan Zhou and Aditi Raghunathan and Chelsea Finn},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=iv0i39JlbP}
} | Foundation models encode a rich representation that can be adapted to a desired task by fine-tuning on task-specific data.
However, fine-tuning a model on one particular data distribution often compromises the model's original performance on other distributions.
Current methods for robust fine-tuning utilize various hand-crafted regularization techniques to constrain the fine-tuning process towards the base foundation model.
Yet, it is hard to directly specify what characteristics of the foundation model to retain during fine-tuning, as this is influenced by the complex interplay between the pre-training, fine-tuning, and evaluation distributions.
We propose AutoFT, a data-driven method for guiding foundation model adaptation: optimizing hyperparameters for fine-tuning with respect to post-adaptation performance on a small out-of-distribution (OOD) validation set.
We find that when optimizing hyperparameters for OOD generalization, it is especially beneficial to use a highly expressive hyperparameter space such as per-layer learning rates and loss weight coefficients.
Our evaluation demonstrates state-of-the-art performance on OOD distributions unseen during fine-tuning and hyperparameter optimization. | AutoFT: Robust Fine-Tuning by Optimizing Hyperparameters on OOD Data | [
"Caroline Choi",
"Yoonho Lee",
"Annie S Chen",
"Allan Zhou",
"Aditi Raghunathan",
"Chelsea Finn"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ifOMuiwsGJ | @inproceedings{
raman2024turn,
title={Turn Down the Noise: Leveraging Diffusion Models for Test-time Adaptation via Pseudo-label Ensembling},
author={Mrigank Raman and Rohan Shah and Akash Kannan and Pranit Chawla},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=ifOMuiwsGJ}
} | The goal of test-time adaptation is to adapt a source-pretrained model to a continuously changing target domain without relying on any source data. Typically, this is either done by updating the parameters of the model (model adaptation) using inputs from the target domain or by modifying the inputs themselves (input adaptation). However, methods that modify the model suffer from the issue of compounding noisy updates whereas methods that modify the input need to adapt to every new data point from scratch while also struggling with certain domain shifts. We introduce an approach that leverages a pre-trained diffusion model to project the target domain images closer to the source domain and iteratively updates the model via pseudo-label ensembling. Our method combines the advantages of model and input adaptations while mitigating their shortcomings. Our experiments on CIFAR-10C demonstrate the superiority of our approach, outperforming the strongest baseline by an average of 1.7% across 15 diverse corruptions and surpassing the strongest input adaptation baseline by an average of 18%. | Turn Down the Noise: Leveraging Diffusion Models for Test-time Adaptation via Pseudo-label Ensembling | [
"Mrigank Raman",
"Rohan Shah",
"Akash Kannan",
"Pranit Chawla"
] | Workshop/DistShift | 2311.18071 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=iRz8qi7QB8 | @inproceedings{
rannen-triki2024revisiting,
title={Revisiting Dynamic Evaluation: Online Adaptation for Large Language Models},
author={Amal Rannen-Triki and Jorg Bornschein and Razvan Pascanu and Alexandre Galashov and Michalis Titsias and Marcus Hutter and Andr{\'a}s Gy{\"o}rgy and Yee Whye Teh},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=iRz8qi7QB8}
} | We consider the problem of online finetuning the parameters of a language model at test time, also known as dynamic evaluation. While it is generally known that this approach improves the overall predictive performance, especially when considering distributional shift between training and evaluation data, we here emphasize the perspective that online-adaptation turns parameters into temporally changing states and provides a form of context-length extension with _memory in weights_, more in line with the concept of _memory_ in neuroscience.
We pay particular attention to the speed of adaptation (in terms of sample efficiency), sensitivity to overall distributional drift,
and computational overhead for performing gradient computation and parameter updates. Our empirical study provides insights on when online adaptation is particularly interesting. We highlight that with online adaptation the conceptual distinction between in-context learning and finetuning blurs: Both are methods to condition the model on previously observed tokens. | Revisiting Dynamic Evaluation: Online Adaptation for Large Language Models | [
"Amal Rannen-Triki",
"Jorg Bornschein",
"Razvan Pascanu",
"Alexandre Galashov",
"Michalis Titsias",
"Marcus Hutter",
"András György",
"Yee Whye Teh"
] | Workshop/DistShift | 2403.01518 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=i2BzYZOhaZ | @inproceedings{
roychowdhury2024tackling,
title={Tackling Concept Shift in Text Classification using Entailment-style modeling},
author={Sumegh Roychowdhury and Siva Rajesh Kasa and Karan Gupta and Prasanna Srinivasa Murthy and Alok Chandra},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=i2BzYZOhaZ}
} | Pre-trained language models (PLMs) have seen tremendous success in text classification (TC) problems in the context of Natural Language Processing (NLP). In many real-world text classification tasks, the class definitions being learned do not remain constant but rather change with time - this is known as concept shift. Most techniques for handling concept shift rely on retraining the old classifiers with the newly labelled data. However, given the amount of training data required to fine-tune large DL models for the new concepts, the associated labelling costs can be prohibitively expensive and time consuming.
In this work, we propose a reformulation, converting vanilla classification into an entailment-style problem that requires significantly less data to re-train the text classifier to adapt to new concepts. We demonstrate the effectiveness of our proposed method on both real-world and synthetic datasets, achieving absolute F1 gains of up to 7% and 40% respectively in few-shot settings. Further, upon deployment, our solution also helped save 75% of labeling costs overall. | Tackling Concept Shift in Text Classification using Entailment-style modeling | [
"Sumegh Roychowdhury",
"Siva Rajesh Kasa",
"Karan Gupta",
"Prasanna Srinivasa Murthy",
"Alok Chandra"
] | Workshop/DistShift | 2311.03320 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=fh0nxeyXDr | @inproceedings{
kim2024reliable,
title={Reliable Test-Time Adaptation via Agreement-on-the-Line},
author={Eungyeup Kim and Mingjie Sun and Aditi Raghunathan and J Zico Kolter},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=fh0nxeyXDr}
} | Test-time adaptation (TTA) methods aim to improve robustness to distribution shifts by adapting models using unlabeled data from the shifted test distribution. However, there remain unresolved challenges that undermine the reliability of TTA, which include difficulties in evaluating TTA performance, miscalibration after TTA, and unreliable hyperparameter tuning for adaptation. In this work, we make a notable and surprising observation that TTAed models strongly show the agreement-on-the-line phenomenon (Baek et al., 2022) across a wide range of distribution shifts. We find such linear trends occur consistently in a wide range of models adapted with various hyperparameters, and persist in distributions where the phenomenon fails to hold in the vanilla model (i.e., before adaptation). We leverage these observations to make TTA methods more reliable from three perspectives: (i) estimating OOD accuracy (without labeled data) to determine when TTA helps and when it hurts, (ii) calibrating TTAed models again without any labeled data, and (iii) reliably determining hyperparameters for TTA without any labeled validation data. Through extensive experiments, we demonstrate that various TTA methods can be precisely evaluated, both in terms of their improvements and degradations. Moreover, our proposed methods on unsupervised calibration and hyperparameter tuning for TTA achieve results close to the ones assuming access to ground-truth labels, in both OOD accuracy and calibration error. | Reliable Test-Time Adaptation via Agreement-on-the-Line | [
"Eungyeup Kim",
"Mingjie Sun",
"Aditi Raghunathan",
"J Zico Kolter"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ecufIfDNn0 | @inproceedings{
eisenstein2024reward,
title={Reward Model Underspecification in Language Model Alignment},
author={Jacob Eisenstein and Jonathan Berant and Chirag Nagpal and Alekh Agarwal and Ahmad Beirami and Alexander Nicholas D'Amour and Krishnamurthy Dj Dvijotham and Katherine A Heller and Stephen Robert Pfohl and Deepak Ramachandran},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=ecufIfDNn0}
} | Reward models play a key role in aligning language model applications towards human preferences. However, this setup can create a dynamic in which the policy model has the incentive to exploit errors in the reward model to achieve high reward. This means that the success of reward-based alignment depends on the ability of reward models to transfer to new distributions created by the aligned policy model. We show that reward models are \emph{underspecified}, in the sense that models that perform similarly in-distribution can yield very different rewards on policy model outputs. These differences propagate to the aligned policies, which we show to be heavily influenced by the random seed used during \emph{pretraining} of the reward model. We show that even a simple alignment strategy --- best-of-$n$ reranking --- creates a semi-adversarial dynamic between the policy and reward models, promoting outputs on which the reward models are more likely to disagree. Finally, we show that a simple ensembling strategy can help to address this issue. | Reward Model Underspecification in Language Model Alignment | [
"Jacob Eisenstein",
"Jonathan Berant",
"Chirag Nagpal",
"Alekh Agarwal",
"Ahmad Beirami",
"Alexander Nicholas D'Amour",
"Krishnamurthy Dj Dvijotham",
"Katherine A Heller",
"Stephen Robert Pfohl",
"Deepak Ramachandran"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=eBcXM9ebdP | @inproceedings{
liu2024learning,
title={Learning Causally-Aware Representations of Multi-Agent Interactions},
author={Yuejiang Liu and Ahmad Rahimi and Po-Chien Luan and Frano Raji{\v{c}} and Alexandre Alahi},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=eBcXM9ebdP}
} | Modeling spatial-temporal interactions between neighboring agents is at the heart of multi-agent problems such as motion forecasting and crowd navigation. Despite notable progress, it remains unclear to which extent modern representations can capture the causal relationships behind agent interactions. In this work, we take an in-depth look at the causal awareness of the learned representations, from computational formalism to controlled simulations to real-world practice. First, we cast doubt on the notion of non-causal robustness studied in the recent CausalAgents benchmark. We show that recent representations are already partially resilient to perturbations of non-causal agents, and yet modeling indirect causal effects involving mediator agents remains challenging. Further, we introduce a simple but effective regularization approach leveraging causal annotations of varying granularity. Through controlled experiments, we find that incorporating finer-grained causal annotations not only leads to higher degrees of causal awareness but also yields stronger out-of-distribution robustness. Finally, we extend our method to a sim-to-real causal transfer framework by means of cross-domain multi-task learning, which boosts generalization in practical settings even without real-world annotations. We hope our work provides more clarity to the challenges and opportunities of learning causally-aware representations in the multi-agent context while making a first step towards a practical solution. | Learning Causally-Aware Representations of Multi-Agent Interactions | [
"Yuejiang Liu",
"Ahmad Rahimi",
"Po-Chien Luan",
"Frano Rajič",
"Alexandre Alahi"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=dvbqXi8O30 | @inproceedings{
wang2024fusing,
title={Fusing Models with Complementary Expertise},
author={Hongyi Wang and Felipe Maia Polo and Yuekai Sun and Souvik Kundu and Eric P. Xing and Mikhail Yurochkin},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=dvbqXi8O30}
} | Training AI models that generalize across tasks and domains has long been among the open problems driving AI research. The emergence of Foundation Models made it easier to obtain expert models for a given task, but the heterogeneity of data that may be encountered at test time often means that any single expert is insufficient. We consider the Fusion of Experts (FoE) problem of fusing outputs of expert models with complementary knowledge of the data distribution and formulate it as an instance of supervised learning. Our method is applicable to both discriminative and generative tasks and leads to significant performance improvements in image and text classification, text summarization, multiple-choice QA, and automatic evaluation of generated text. We also extend our method to the "frugal" setting where it is desired to reduce the number of expert model evaluations at test time. | Fusing Models with Complementary Expertise | [
"Hongyi Wang",
"Felipe Maia Polo",
"Yuekai Sun",
"Souvik Kundu",
"Eric P. Xing",
"Mikhail Yurochkin"
] | Workshop/DistShift | 2310.01542 | [
"https://github.com/hwang595/foe-iclr2024"
] | https://huggingface.co/papers/2310.01542 | 4 | 1 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=ar9IclPk8O | @inproceedings{
yang2024on,
title={On Mitigating Shortcut Learning for Fair Chest X-ray Classification under Distribution Shift},
author={Yuzhe Yang and Haoran Zhang and Dina Katabi and Marzyeh Ghassemi},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=ar9IclPk8O}
} | As machine learning models reach human level performance on many real-world medical imaging tasks, it is crucial to consider the mechanisms they may be using to make such predictions. Prior work has demonstrated the surprising ability of deep learning models to recover demographic information from chest X-rays. This suggests that disease classification models could potentially be utilizing these demographics as shortcuts, leading to prior observed performance gaps between demographic groups. In this work, we start by investigating whether chest X-ray models indeed use demographic information as shortcuts when classifying four different diseases. Next, we apply five existing methods for tackling spurious correlations, and examine performance and fairness both for the original dataset and five external hospitals. Our results indicate that shortcut learning can be corrected to remedy in-distribution fairness gaps, though this reduction often does not transfer under domain shift. We also find trade-offs between fairness and other important metrics, raising the question of whether it is beneficial to remove such shortcuts in the first place. | On Mitigating Shortcut Learning for Fair Chest X-ray Classification under Distribution Shift | [
"Yuzhe Yang",
"Haoran Zhang",
"Dina Katabi",
"Marzyeh Ghassemi"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Zz7eZoR1VN | @inproceedings{
kaai2024are,
title={Are all classes created equal? Domain Generalization for Domain-Linked Classes},
author={Kimathi Kaai and Saad Hossain and Sirisha Rambhatla},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=Zz7eZoR1VN}
} | Domain generalization (DG) focuses on transferring domain-invariant knowledge from multiple source domains (available at train time) to an $\textit{a priori}$ unseen target domain(s). This task implicitly assumes that a class of interest is expressed in multiple source domains ($\textit{domain-shared}$), which helps break the spurious correlations between domain and class and enables domain-invariant learning. However, we observe that this results in extremely poor generalization performance for classes only expressed in a specific domain ($\textit{domain-linked}$). To this end, we develop a contrastive and fairness based algorithm -- $\texttt{FOND}$ -- to learn generalizable representations for these domain-linked classes by transferring useful representations from domain-shared classes. We perform rigorous experiments against popular baselines across benchmark datasets to demonstrate that given a sufficient number of domain-shared classes $\texttt{FOND}$ achieves SOTA results for domain-linked DG. | Are all classes created equal? Domain Generalization for Domain-Linked Classes | [
"Kimathi Kaai",
"Saad Hossain",
"Sirisha Rambhatla"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ZZNzQ810dK | @inproceedings{
pezeshki2024discovering,
title={Discovering environments with {XRM}},
author={Mohammad Pezeshki and Diane Bouchacourt and Mark Ibrahim and Nicolas Ballas and Pascal Vincent and David Lopez-Paz},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=ZZNzQ810dK}
} | Successful out-of-distribution generalization requires environment annotations. Unfortunately, these are resource-intensive to obtain, and their relevance to model performance is limited by the expectations and perceptual biases of human annotators. Therefore, to enable robust AI systems across applications, we must develop algorithms to automatically discover environments inducing broad generalization. Current proposals, which divide examples based on their training error, suffer from one fundamental problem. These methods add hyper-parameters and early-stopping criteria that are impossible to tune without a validation set with human-annotated environments, the very information subject to discovery. In this paper, we propose Cross-Risk-Minimization (XRM) to address this issue. XRM trains two twin networks, each learning from one random half of the training data, while imitating confident held-out mistakes made by its sibling. XRM provides a recipe for hyper-parameter tuning, does not require early-stopping, and can discover environments for all training and validation data. Domain generalization algorithms built on top of XRM environments achieve oracle worst-group-accuracy, solving a long-standing problem in out-of-distribution generalization. | Discovering environments with XRM | [
"Mohammad Pezeshki",
"Diane Bouchacourt",
"Mark Ibrahim",
"Nicolas Ballas",
"Pascal Vincent",
"David Lopez-Paz"
] | Workshop/DistShift | 2309.16748 | [
"https://github.com/facebookresearch/XRM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ZKtZ7KQ6G5 | @inproceedings{
fang2024data,
title={Data Filtering Networks},
author={Alex Fang and Albin Madappally Jose and Amit Jain and Ludwig Schmidt and Alexander T Toshev and Vaishaal Shankar},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=ZKtZ7KQ6G5}
} | Large training sets have become a cornerstone of machine learning and are the foundation for recent advances in language modeling and multimodal learning. While data curation for pre-training is often still ad-hoc, one common paradigm is to first collect a massive pool of data from the Web and then filter this candidate pool down to an actual training set via various heuristics. In this work, we study the problem of learning a *data filtering network* (DFN) for this second step of filtering a large uncurated dataset. Our key finding is that the quality of a network for filtering is distinct from its performance on downstream tasks: for instance, a model that performs well on ImageNet can yield worse training sets than a model with low ImageNet accuracy that is trained on a small amount of high-quality data. Based on our insights, we construct new data filtering networks that induce state-of-the-art image-text datasets. Specifically, our best performing dataset DFN-5B enables us to train state-of-the-art models for their compute budgets: among other improvements on a variety of tasks, a ViT-H trained on our dataset achieves 83.0% zero-shot transfer accuracy on ImageNet, out-performing models trained on other datasets such as LAION-2B, DataComp-1B, or OpenAI’s WIT. In order to facilitate further research in dataset design, we also release a new 2 billion example dataset DFN-2B and show that high performance data filtering networks can be trained from scratch using only publicly available data. | Data Filtering Networks | [
"Alex Fang",
"Albin Madappally Jose",
"Amit Jain",
"Ludwig Schmidt",
"Alexander T Toshev",
"Vaishaal Shankar"
] | Workshop/DistShift | 2309.17425 | [
""
] | https://huggingface.co/papers/2309.17425 | 0 | 6 | 1 | 6 | [
"apple/DFN5B-CLIP-ViT-H-14-378",
"apple/DFN5B-CLIP-ViT-H-14",
"apple/DFN2B-CLIP-ViT-L-14",
"apple/DFN2B-CLIP-ViT-B-16",
"apple/MobileCLIP-S1-OpenCLIP",
"apple/MobileCLIP-B-LT-OpenCLIP",
"apple/MobileCLIP-B-LT",
"apple/MobileCLIP-S2-OpenCLIP",
"apple/mobileclip_s0_timm",
"apple/mobileclip_b_lt_timm",
"apple/MobileCLIP-S0",
"apple/MobileCLIP-S1",
"apple/MobileCLIP-S2",
"Citaman/VeCLIP",
"apple/mobileclip_s2_timm",
"apple/mobileclip_b_timm",
"apple/mobileclip_s1_timm",
"apple/MobileCLIP-B",
"apple/MobileCLIP-B-OpenCLIP",
"TingfengLuo/apple444",
"apple/DFN2B-CLIP-ViT-L-14-39B",
"apple/DFN-public"
] | [
"apf1/datafilteringnetworks_2b"
] | [] | [
"apple/DFN5B-CLIP-ViT-H-14-378",
"apple/DFN5B-CLIP-ViT-H-14",
"apple/DFN2B-CLIP-ViT-L-14",
"apple/DFN2B-CLIP-ViT-B-16",
"apple/MobileCLIP-S1-OpenCLIP",
"apple/MobileCLIP-B-LT-OpenCLIP",
"apple/MobileCLIP-B-LT",
"apple/MobileCLIP-S2-OpenCLIP",
"apple/mobileclip_s0_timm",
"apple/mobileclip_b_lt_timm",
"apple/MobileCLIP-S0",
"apple/MobileCLIP-S1",
"apple/MobileCLIP-S2",
"Citaman/VeCLIP",
"apple/mobileclip_s2_timm",
"apple/mobileclip_b_timm",
"apple/mobileclip_s1_timm",
"apple/MobileCLIP-B",
"apple/MobileCLIP-B-OpenCLIP",
"TingfengLuo/apple444",
"apple/DFN2B-CLIP-ViT-L-14-39B",
"apple/DFN-public"
] | [
"apf1/datafilteringnetworks_2b"
] | [] | 1 | poster |
null | https://openreview.net/forum?id=Z05m9cRpRa | @inproceedings{
yu2024skillmix,
title={Skill-Mix: A Flexible and Expandable Family of Evaluations for {AI} Models},
author={Dingli Yu and Simran Kaur and Arushi Gupta and Jonah Brown-Cohen and Anirudh Goyal and Sanjeev Arora},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=Z05m9cRpRa}
} | With LLMs shifting their role from statistical modeling of language to serving as general-purpose AI agents, how should LLM evaluations change? Arguably, a key ability of an AI agent is to flexibly combine, as needed, the basic skills it has learned. This capability to combine skills plays an important role in (human) pedagogy and also in a recent paper on emergence phenomena (Arora & Goyal, 2023). A new evaluation, Skill-Mix, is introduced to measure this capability. Using a list of $N$ skills the evaluator repeatedly picks random subsets of $k$ skills and asks the LLM to produce text combining that subset of skills. Since the number of subsets grows like $N^k$, for even modest $k$ this evaluation will, with high probability, require the LLM to produce text it has not seen in the training set. The paper develops a methodology for (a) designing and administering such an evaluation, and (b) automatic grading (plus spot-checking by humans) of the results using GPT-4 as well as the open LLaMA-2 70B model.
Administering a version of Skill-Mix to popular chatbots gave results that, while generally in line with prior expectations, contained surprises. Sizeable differences exist among model capabilities ---including suspected cases of ``cramming for the leaderboard''--- that are not captured by their ranking on popular LLM leaderboards. Our methodology can flexibly change to future models and model capabilities, by expanding the set of skills being tested and increasing $k$. By publicly releasing the Skill-Mix methodology, we hope it may grow into an eco-system of open evaluations for AI capabilities, including in multi-modal settings. These may serve as more trustworthy gauges of model capabilities than current leaderboards. | Skill-Mix: A Flexible and Expandable Family of Evaluations for AI Models | [
"Dingli Yu",
"Simran Kaur",
"Arushi Gupta",
"Jonah Brown-Cohen",
"Anirudh Goyal",
"Sanjeev Arora"
] | Workshop/DistShift | 2310.17567 | [
""
] | https://huggingface.co/papers/2310.17567 | 0 | 1 | 0 | 6 | [] | [] | [
"dingliyu/skillmix"
] | [] | [] | [
"dingliyu/skillmix"
] | 1 | poster |
null | https://openreview.net/forum?id=YV3MJo1uRa | @inproceedings{
hu2024pseudocalibration,
title={Pseudo-Calibration: Improving Predictive Uncertainty Estimation in Domain Adaptation},
author={Dapeng Hu and Jian Liang and Xinchao Wang and Chuan-Sheng Foo},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=YV3MJo1uRa}
} | Unsupervised domain adaptation (UDA) improves model accuracy in an unlabeled target domain using a labeled source domain. However, UDA models often lack calibrated predictive uncertainty on target data, posing risks in safety-critical applications. In this paper, we address this under-explored challenge with Pseudo-Calibration (PseudoCal), a novel post-hoc calibration framework. In contrast to prior approaches, we consider UDA calibration as a target-domain specific unsupervised problem rather than a \emph{covariate shift} problem across domains. With a synthesized labeled pseudo-target set that captures the structure of the real target, we turn the unsupervised calibration problem into a supervised one, readily solvable with \emph{temperature scaling}. Extensive empirical evaluation across 5 diverse UDA scenarios involving 10 UDA methods, along with unsupervised fine-tuning of foundation models such as CLIP, consistently demonstrates the superior performance of PseudoCal over alternative calibration methods. Code is available at \url{https://github.com/LHXXHB/PseudoCal}. | Pseudo-Calibration: Improving Predictive Uncertainty Estimation in Domain Adaptation | [
"Dapeng Hu",
"Jian Liang",
"Xinchao Wang",
"Chuan-Sheng Foo"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=XQ1cGxdB3o | @inproceedings{
cohen-wang2024ask,
title={Ask Your Distribution Shift if Pre-Training is Right for You},
author={Benjamin Cohen-Wang and Joshua Vendrow and Aleksander Madry},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=XQ1cGxdB3o}
} | Pre-training is a widely used approach to develop models that are robust to distribution shifts. However, in practice, its effectiveness varies: fine-tuning a pre-trained model improves robustness significantly in some cases but *not at all* in others (compared to training from scratch). In this work, we seek to characterize the failure modes that pre-training *can* and *cannot* address. In particular, we focus on two possible failure modes of models under distribution shift: poor extrapolation (e.g., they cannot generalize to a different domain) and biases in the training data (e.g., they rely on spurious features). Our study suggests that, as a rule of thumb, pre-training can help mitigate poor extrapolation but not dataset biases. After providing theoretical motivation and empirical evidence for this finding, we explore an implication for developing robust models: fine-tuning on a (very) small, non-diverse but *de-biased* dataset can result in significantly more robust models than fine-tuning on a large and diverse but biased dataset. | Ask Your Distribution Shift if Pre-Training is Right for You | [
"Benjamin Cohen-Wang",
"Joshua Vendrow",
"Aleksander Madry"
] | Workshop/DistShift | 2403.00194 | [
"https://github.com/madrylab/pretraining-distribution-shift-robustness"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=VGS4RCX3OY | @inproceedings{
ge2024maximum,
title={Maximum Likelihood Estimation is All You Need for Well-Specified Covariate Shift},
author={Jiawei Ge and Shange Tang and Jianqing Fan and Cong Ma and Chi Jin},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=VGS4RCX3OY}
} | A key challenge of modern machine learning systems is to achieve Out-of-Distribution (OOD) generalization---generalizing to target data whose distribution differs from that of source data. Despite its significant importance, the fundamental question of ``what are the most effective algorithms for OOD generalization'' remains open even under the standard setting of covariate shift.
This paper addresses this fundamental question by proving that, surprisingly, classical Maximum Likelihood Estimation (MLE) purely using source data (without any modification) achieves the *minimax* optimality for covariate shift under the *well-specified* setting. This result holds for a very large class of parametric models, including but not limited to linear regression, logistic regression, and phase retrieval, and does not require any boundedness condition on the density ratio. This paper further complements the study by proving that for the *misspecified setting*, MLE can perform poorly, and the Maximum Weighted Likelihood Estimator (MWLE) emerges as minimax optimal in specific scenarios, outperforming MLE. | Maximum Likelihood Estimation is All You Need for Well-Specified Covariate Shift | [
"Jiawei Ge",
"Shange Tang",
"Jianqing Fan",
"Cong Ma",
"Chi Jin"
] | Workshop/DistShift | 2311.15961 | [
""
] | https://huggingface.co/papers/2311.15961 | 1 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=SfRa1zGxCg | @inproceedings{
hu2024simplifying,
title={Simplifying and Stabilizing Model Selection in Unsupervised Domain Adaptation},
author={Dapeng Hu and Mi Luo and Jian Liang and Chuan-Sheng Foo},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=SfRa1zGxCg}
} | Ensuring reliable model selection is crucial for unleashing the full potential of advanced unsupervised domain adaptation (UDA) methods to improve model performance in unlabeled target domains. However, existing model selection methods in UDA often struggle to maintain reliable selections across diverse UDA methods and scenarios, suffering from highly risky worst-case selections. This limitation significantly hinders their practicality and reliability for researchers and practitioners in the community. In this paper, we introduce EnsV, a novel ensemble-based approach that makes pivotal strides in reliable model selection by avoiding the selection of the worst model. EnsV is built on an off-the-shelf ensemble that is theoretically guaranteed to outperform the worst candidate model, ensuring high reliability.
Notably, EnsV relies solely on predictions of unlabeled target data without making any assumptions about domain distribution shifts, offering high simplicity and versatility for various practical UDA problems. In our experiments, we compare EnsV to 8 competitive model selection approaches. Our evaluation involves 12 UDA methods across 5 diverse UDA benchmarks and 5 popular UDA scenarios. The results consistently demonstrate that EnsV stands out as a highly simple, versatile, and reliable approach for practical model selection in UDA scenarios. Code is available at \url{https://github.com/LHXXHB/EnsV}. | Simplifying and Stabilizing Model Selection in Unsupervised Domain Adaptation | [
"Dapeng Hu",
"Mi Luo",
"Jian Liang",
"Chuan-Sheng Foo"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=SAu298HU2I | @inproceedings{
fifty2024contextaware,
title={Context-Aware Meta-Learning},
author={Christopher Fifty and Dennis Duan and Ronald Guenther Junkins and Ehsan Amid and Jure Leskovec and Christopher Re and Sebastian Thrun},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=SAu298HU2I}
} | Large Language Models like ChatGPT demonstrate a remarkable capacity to learn new concepts during inference without any fine-tuning. However, visual models trained to detect new objects during inference have been unable to replicate this ability, and instead either perform poorly or require meta-training and/or fine-tuning on similar objects. In this work, we propose a meta-learning algorithm that emulates Large Language Models by learning new visual concepts during inference without fine-tuning. Our approach leverages a frozen pre-trained feature extractor, and analogous to in-context learning, recasts meta-learning as sequence modeling over datapoints with known labels and a test datapoint with an unknown label. On 8 out of 11 meta-learning benchmarks, our approach---without meta-training or fine-tuning---exceeds or matches the state-of-the-art algorithm, P>M>F, which is meta-trained on these benchmarks. | Context-Aware Meta-Learning | [
"Christopher Fifty",
"Dennis Duan",
"Ronald Guenther Junkins",
"Ehsan Amid",
"Jure Leskovec",
"Christopher Re",
"Sebastian Thrun"
] | Workshop/DistShift | 2310.10971 | [
"https://github.com/cfifty/CAML"
] | https://huggingface.co/papers/2310.10971 | 3 | 16 | 1 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=S9h0eLl71q | @inproceedings{
oh2024towards,
title={Towards Calibrated Robust Fine-Tuning of Vision-Language Models},
author={Changdae Oh and Mijoo Kim and Hyesu Lim and Junhyeok Park and Euiseog Jeong and Zhi-Qi Cheng and Kyungwoo Song},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=S9h0eLl71q}
} | While fine-tuning unleashes the potential of a pre-trained model to a specific task, it trades off the model’s generalization capability on out-of-distribution (OOD) datasets. To mitigate this, robust fine-tuning aims to ensure performance on OOD datasets as well as an in-distribution (ID) dataset for which the model is tuned. However, another criterion for reliable machine learning (ML) – confidence calibration – is overlooked despite its increasing demand for real-world high-stakes ML applications (e.g. autonomous driving). First, we raise concerns about the calibration of fine-tuned vision-language models (VLMs) by showing that naive fine-tuning and even state-of-the-art robust fine-tuning methods hurt the calibration of pre-trained VLMs, especially on OOD datasets. To address this, we provide a simple approach, called calibrated robust fine-tuning (CaRot), that incentivizes the calibration and robustness on both ID and OOD datasets. Empirical results on ImageNet-1K distribution shift evaluation verify the effectiveness of our method. | Towards Calibrated Robust Fine-Tuning of Vision-Language Models | [
"Changdae Oh",
"Mijoo Kim",
"Hyesu Lim",
"Junhyeok Park",
"Euiseog Jeong",
"Zhi-Qi Cheng",
"Kyungwoo Song"
] | Workshop/DistShift | 2311.01723 | [
"https://github.com/MLAI-Yonsei/CaRot"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=QomMx4zOEI | @inproceedings{
rajak2024transfer,
title={Transfer Learning, Reinforcement Learning for Adaptive Control Optimization under Distribution Shift},
author={Pankaj Rajak and Wojciech Kowalinski and Fei Wang},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=QomMx4zOEI}
} | Many control systems rely on a pipeline of machine learning models and hand-coded rules to make decisions. However, due to changes in the operating environment, these rules require constant tuning to maintain optimal system performance. Reinforcement learning (RL) can automate the online optimization of rules based on incoming data. However, RL requires extensive training data and exploration, which limits its application to new rules or those with sparse data. Here, we propose a transfer learning approach called Learning from Behavior Prior (LBP) to enable fast, sample-efficient RL optimization by transferring knowledge from an expert controller. We demonstrate this approach by optimizing the rule thresholds in a simulated control pipeline across differing operating conditions. Our method converges 5x faster than vanilla RL, with greater robustness to distribution shift between the expert and target environments. LBP reduces negative impacts during live training, enabling automated optimization even for new controllers. | Transfer Learning, Reinforcement Learning for Adaptive Control Optimization under Distribution Shift | [
"Pankaj Rajak",
"Wojciech Kowalinski",
"Fei Wang"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=PSjFGJpyZI | @inproceedings{
gupta2024context,
title={Context is Environment},
author={Sharut Gupta and David Lopez-Paz and Stefanie Jegelka and Kartik Ahuja},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=PSjFGJpyZI}
} | Two lines of work are taking center stage in AI research. On the one hand, increasing efforts are being made to build models that generalize out-of-distribution (OOD). Unfortunately, a hard lesson so far is that no proposal convincingly outperforms a simple empirical risk minimization baseline. On the other hand, large language models (LLMs) have erupted as algorithms able to learn in-context, generalizing on-the-fly to the eclectic contextual circumstances. We argue that context is environment, and posit that in-context learning holds the key to better domain generalization. Via extensive theory and experiments, we show that paying attention to context$\unicode{x2013}\unicode{x2013}$unlabeled examples as they arrive$\unicode{x2013}\unicode{x2013}$allows our proposed In-Context Risk Minimization (ICRM) algorithm to zoom-in on the test environment risk minimizer, leading to significant OOD performance improvements. | Context is Environment | [
"Sharut Gupta",
"David Lopez-Paz",
"Stefanie Jegelka",
"Kartik Ahuja"
] | Workshop/DistShift | 2309.09888 | [
""
] | https://huggingface.co/papers/2309.09888 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=Og4BHvCA8h | @inproceedings{
mayilvahanan2024does,
title={Does {CLIP}{\textquoteright}s generalization performance mainly stem from high train-test similarity?},
author={Prasanna Mayilvahanan and Thadd{\"a}us Wiedemer and Evgenia Rusak and Matthias Bethge and Wieland Brendel},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=Og4BHvCA8h}
} | Foundation models like CLIP are trained on hundreds of millions of samples and effortlessly generalize to new tasks and inputs. Out of the box, CLIP shows stellar zero-shot and few-shot capabilities on a wide range of out-of-distribution (OOD) benchmarks, which prior works attribute mainly to today's large and comprehensive training dataset (like LAION). However, it is questionable how meaningful terms like out-of-distribution generalization are for CLIP as it seems likely that web-scale datasets like LAION simply contain many samples that are similar to common OOD benchmarks originally designed for ImageNet. To test this hypothesis, we retrain CLIP on pruned LAION splits that replicate ImageNet’s train-test similarity with respect to common OOD benchmarks. While we observe a performance drop on some benchmarks, surprisingly, CLIP’s overall performance remains high. This shows that high train-test similarity is insufficient to explain CLIP’s performance. | Does CLIP’s generalization performance mainly stem from high train-test similarity? | [
"Prasanna Mayilvahanan",
"Thaddäus Wiedemer",
"Evgenia Rusak",
"Matthias Bethge",
"Wieland Brendel"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=NbaYFHLTd8 | @inproceedings{
halbe2024hepco,
title={He{PC}o: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning},
author={Shaunak Halbe and James Seale Smith and Junjiao Tian and Zsolt Kira},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=NbaYFHLTd8}
} | In this paper, we focus on the important yet understudied problem of Continual Federated Learning (CFL), where a server communicates with a set of clients to incrementally learn new concepts over time without sharing or storing any data. The complexity of this problem is compounded by challenges from both the Continual and Federated Learning perspectives. Specifically, models trained in a CFL setup suffer from catastrophic forgetting which is exacerbated by data heterogeneity across clients.
Existing attempts at this problem tend to impose large overheads on clients and communication channels or require access to stored data which renders them unsuitable for real-world use due to privacy.
We study this problem in the context of Foundation Models and showcase their effectiveness in mitigating forgetting while minimizing overhead costs and without requiring access to any stored data. We achieve this by leveraging a prompting based approach and proposing a novel and lightweight generation and distillation scheme to aggregate client models at the server.
Our approach outperforms both existing methods and our own baselines by more than 7\% on challenging image-classification benchmarks while significantly reducing communication and client-level computation costs. | HePCo: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning | [
"Shaunak Halbe",
"James Seale Smith",
"Junjiao Tian",
"Zsolt Kira"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=KfgxnxTvp3 | @inproceedings{
jourdan2024a,
title={A Nearest Neighbor-Based Concept Drift Detection Strategy for Reliable Condition Monitoring},
author={Nicolas Jourdan},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=KfgxnxTvp3}
} | Condition monitoring is one of the most prominent industrial use cases for machine learning today. As condition monitoring applications are commonly developed using static training datasets, their long-term performance is vulnerable to concept drift in the form of time-dependent changes in environmental and operating conditions as well as data quality problems or sensor drift. When the data distribution changes, machine learning models can fail catastrophically. We show that two-sample tests of homogeneity, which form the basis of most of the available concept drift detection strategies, fail in this domain, as the live data is highly correlated and does not follow the assumption of being independent and identically distributed (i.i.d.) that is often made in academia. We propose a novel drift detection approach called
Localized Reference Drift Detection (LRDD) to address this challenge by refining the reference set for the two-sample tests. We demonstrate the performance of the proposed approach in a preliminary evaluation on a tool condition monitoring case study. | A Nearest Neighbor-Based Concept Drift Detection Strategy for Reliable Condition Monitoring | [
"Nicolas Jourdan"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=KWvKwgIxnD | @inproceedings{
lewis2024improving,
title={Improving Domain Generalization in Contrastive Learning using Domain-Aware Temperature Control},
author={Robert A Lewis and Katie Matton and Rosalind Picard and John Guttag},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=KWvKwgIxnD}
} | Self-supervised pre-training with contrastive learning is a powerful method for learning from sparsely labeled data. However, performance can drop considerably when there is a shift in the distribution of data from training to test time. We study this phenomenon in a setting in which the training data come from multiple domains, and the test data come from a domain not seen at training that is subject to significant covariate shift. We present a new method for contrastive learning that incorporates domain labels to increase the domain invariance of learned representations, leading to improved out-of-distribution generalization. Our method adjusts the temperature parameter in the InfoNCE loss -- which controls the relative weighting of negative pairs -- using the probability that a negative sample comes from the same domain as the anchor. This upweights pairs from more similar domains, encouraging the model to discriminate samples based on domain-invariant attributes. Through experiments on a variant of the MNIST dataset, we demonstrate that our method yields better out-of-distribution performance than domain generalization baselines. Furthermore, our method maintains strong in-distribution task performance, substantially outperforming baselines on this measure. | Improving Domain Generalization in Contrastive Learning using Domain-Aware Temperature Control | [
"Robert A Lewis",
"Katie Matton",
"Rosalind Picard",
"John Guttag"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=IITCBnEYiV | @inproceedings{
galashov2024stochastic,
title={Stochastic linear dynamics in parameters to deal with Neural Networks plasticity loss},
author={Alexandre Galashov and Michalis Titsias and Razvan Pascanu and Yee Whye Teh and Maneesh Sahani},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=IITCBnEYiV}
} | Plasticity loss has become an active topic of interest in the continual learning community. Over time, when faced with non-stationary data, standard gradient descent loses its ability to learn. It comes in two forms: the inability of the network to generalize and its inability to fit the training data. Several causes have been proposed, including ill-conditioning or the saturation of activation functions. In this work, we focus on the inability of neural networks to optimize due to saturating activations, which particularly affects online reinforcement learning settings, where the learning process itself creates a non-stationary setting even if the environment is kept fixed. Recent works have proposed to address this problem by relying on dynamically resetting units that seem inactive, allowing them to be tuned further. We explore an alternative approach based on stochastic linear dynamics in the parameters, which allows us to model non-stationarity and provides a mechanism to adaptively and stochastically drift the parameters towards the prior, implementing a form of soft parameter reset. | Stochastic linear dynamics in parameters to deal with Neural Networks plasticity loss | [
"Alexandre Galashov",
"Michalis Titsias",
"Razvan Pascanu",
"Yee Whye Teh",
"Maneesh Sahani"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=HpY9tkX3Ui | @inproceedings{
tripuraneni2024can,
title={Can Transformers In-Context Learn Task Mixtures?},
author={Nilesh Tripuraneni and Lyric Doshi and Steve Yadlowsky},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=HpY9tkX3Ui}
} | In-context learning (ICL) refers to the ability of Large Language Models (LLMs) to perform new tasks by conditioning on input-output samples without any parameter updates. Previous work has established that, in a controlled setting, transformers can optimally perform ICL for tasks from a single task family, here a single function class, when they are pretrained on example tasks from that family. Using this setting, we probe the relationship between the pretraining data mixtures and downstream ICL performance. In particular, we empirically explore the ability of pretrained transformers to \textit{select a family of tasks} (i.e. amongst distinct function classes) and \textit{perform learning within that task family} (i.e. learn a function within a function class), all in-context. We show, for pretraining task mixtures balanced across task families, the cost of unsupervised downstream ICL task-family selection is near-zero. For task families rarely seen in pretraining, downstream ICL learning curves exhibit complex, task-dependent non-monotonic behavior. We also characterize the benefit of conditional pretraining in this simplified model, showing how task-family instructions can reduce the overhead of in-context task-family selection. | Can Transformers In-Context Learn Task Mixtures? | [
"Nilesh Tripuraneni",
"Lyric Doshi",
"Steve Yadlowsky"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=HSBV4cCheG | @inproceedings{
chuang2024evolving,
title={Evolving Domain Adaptation of Pretrained Language Models for Text Classification},
author={Yun-Shiuan Chuang and Rheeya Uppaal and Yi Wu and Luhang Sun and Makesh Narsimhan Sreedhar and Sijia Yang and Timothy T. Rogers and Junjie Hu},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=HSBV4cCheG}
} | Pre-trained language models have shown impressive performance in various text classification tasks. However, the performance of these models is highly dependent on the quality and domain of the labeled examples. In dynamic real-world environments, text data content naturally evolves over time, leading to a natural $\textit{evolving domain shift}$. Over time, this continuous temporal shift impairs the performance of static models, as their training becomes increasingly outdated.
To address this issue, we propose two dynamic buffer-based adaptation strategies: one utilizes self-training with pseudo-labeling, and the other employs a tuning-free, in-context learning approach for large language models (LLMs).
We validate our methods with extensive experiments on two longitudinal real-world social media datasets, demonstrating their superiority compared to unadapted baselines.
Furthermore, we introduce a COVID-19 vaccination stance detection dataset, serving as a benchmark for evaluating pre-trained language models within evolving domain adaptation settings. | Evolving Domain Adaptation of Pretrained Language Models for Text Classification | [
"Yun-Shiuan Chuang",
"Rheeya Uppaal",
"Yi Wu",
"Luhang Sun",
"Makesh Narsimhan Sreedhar",
"Sijia Yang",
"Timothy T. Rogers",
"Junjie Hu"
] | Workshop/DistShift | 2311.09661 | [
""
] | https://huggingface.co/papers/2311.09661 | 1 | 0 | 0 | 10 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=HDE2jw5np7 | @inproceedings{
li2024robustness,
title={Robustness May be More Brittle than We Think under Different Degrees of Distribution Shifts},
author={Kaican Li and Yifan Zhang and Lanqing HONG and Zhenguo Li and Nevin L. Zhang},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=HDE2jw5np7}
} | Out-of-distribution (OOD) generalization is a complicated problem due to the idiosyncrasies of possible distribution shifts between training and test domains. Most benchmarks employ diverse datasets to address the issue; however, the degree of the distribution shift between the training domains and the test domains of each dataset remains largely fixed. Our study delves into a more nuanced evaluation setting that covers a broad range of shift degrees. We show that the robustness of neural networks can be quite brittle and inconsistent under different shift degrees, and therefore one should be more cautious in drawing conclusions from evaluations under a limited set of degrees. In addition, we find that CLIP, a representative of vision-language foundation models, can be sensitive to even minute distribution shifts of novel downstream tasks. This suggests that while pre-training may improve downstream in-distribution performance, it could have minimal or even adverse effects on generalization in certain OOD scenarios of the downstream task. A longer version of this paper can be found at https://arxiv.org/abs/2310.06622. | Robustness May be More Brittle than We Think under Different Degrees of Distribution Shifts | [
"Kaican Li",
"Yifan Zhang",
"Lanqing HONG",
"Zhenguo Li",
"Nevin L. Zhang"
] | Workshop/DistShift | 2310.06622 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=FiqXqKR26c | @inproceedings{
cattelan2024on,
title={On selective classification under distribution shift},
author={Lu{\'\i}s Felipe Prates Cattelan and Danilo Silva},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=FiqXqKR26c}
} | This paper addresses the problem of selective classification for deep neural networks, where a model is allowed to abstain from low-confidence predictions to avoid potential errors. Specifically, we investigate whether the selective classification performance of ImageNet classifiers is robust to distribution shift. Motivated by the intriguing observation in recent work that many classifiers appear to have a ``broken'' confidence estimator, we start by evaluating methods to fix this issue. We focus on so-called post-hoc methods, which replace the confidence estimator of a given classifier without retraining or modifying it, thus being practically appealing.
We perform an extensive experimental study of many existing and proposed confidence estimators applied to 84 pre-trained ImageNet classifiers available from popular repositories. Our results show that a simple $p$-norm normalization of the logits, followed by taking the maximum logit as the confidence estimator, can lead to considerable gains in selective classification performance, completely fixing the pathological behavior observed in many classifiers. As a consequence, the selective classification performance of any classifier becomes almost entirely determined by its corresponding accuracy. Then, we show these results are consistent under distribution shift: a method that enhances performance in the in-distribution scenario also provides similar gains under distribution shift. Moreover, although a slight degradation in selective classification performance is observed under distribution shift, this can be explained by the drop in accuracy of the classifier, together with the slight dependence of selective classification performance on accuracy. | On selective classification under distribution shift | [
"Luís Felipe Prates Cattelan",
"Danilo Silva"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Fd00jISBD0 | @inproceedings{
pfohl2024understanding,
title={Understanding subgroup performance differences of fair predictors using causal models},
author={Stephen Robert Pfohl and Natalie Harris and Chirag Nagpal and David Madras and Vishwali Mhasawade and Olawale Elijah Salaudeen and Katherine A Heller and Sanmi Koyejo and Alexander Nicholas D'Amour},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=Fd00jISBD0}
} | A common evaluation paradigm compares the performance of a machine learning model across subgroups to assess properties related to fairness. In this work, we argue that distributional differences across subgroups can render this approach to evaluation of fairness misleading. We consider distributional differences across subgroups as a source of confounding that can lead to differences in performance metrics across subgroups even if the relationship between covariates and a label of interest is modeled as well as possible for each subgroup. We show that these differences in model performance can be anticipated and characterized based on the causal structure of the data generating process and the choices made during the model fitting procedure (e.g. whether subgroup membership is used as a predictor). We demonstrate how to construct alternative evaluation procedures that control for this source of confounding during evaluation by implicitly matching the distribution of confounding variables across subgroups. We emphasize that the selection of appropriate control variables requires domain knowledge and selection of contextually inappropriate control variables can produce misleading results. | Understanding subgroup performance differences of fair predictors using causal models | [
"Stephen Robert Pfohl",
"Natalie Harris",
"Chirag Nagpal",
"David Madras",
"Vishwali Mhasawade",
"Olawale Elijah Salaudeen",
"Katherine A Heller",
"Sanmi Koyejo",
"Alexander Nicholas D'Amour"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=EOPSlQGl2f | @inproceedings{
tsao2024autovp,
title={Auto{VP}: An Automated Visual Prompting Framework and Benchmark},
author={Hsi-Ai Tsao and Lei Hsiung and Pin-Yu Chen and Sijia Liu and Tsung-Yi Ho},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=EOPSlQGl2f}
} | Visual prompting (VP) is an emerging parameter-efficient fine-tuning approach to adapting pre-trained vision models to solve various downstream image-classification tasks. However, there has hitherto been little systematic study of the design space of VP and no clear benchmark for evaluating its performance. To bridge this gap, we propose AutoVP, an end-to-end expandable framework for automating VP design choices, along with 12 downstream image-classification tasks that can serve as a holistic VP-performance benchmark. Our design space covers 1) the joint optimization of the prompts; 2) the selection of pre-trained models, including image classifiers and text-image encoders; and 3) model output mapping strategies, including nonparametric and trainable label mapping. Our extensive experimental results show that AutoVP outperforms the best-known current VP methods by a substantial margin, having up to 6.7% improvement in accuracy; and attains a maximum performance increase of 27.5% compared to linear-probing (LP) baseline. AutoVP thus makes a two-fold contribution: serving both as an efficient tool for hyperparameter tuning on VP design choices, and as a comprehensive benchmark that can reasonably be expected to accelerate VP’s development. The source code is available at [https://github.com/IBM/AutoVP](https://github.com/IBM/AutoVP). | AutoVP: An Automated Visual Prompting Framework and Benchmark | [
"Hsi-Ai Tsao",
"Lei Hsiung",
"Pin-Yu Chen",
"Sijia Liu",
"Tsung-Yi Ho"
] | Workshop/DistShift | 2310.08381 | [
"https://github.com/IBM/AutoVP"
] | https://huggingface.co/papers/2310.08381 | 1 | 1 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=DwEENEkkLi | @inproceedings{
schirmer2024beyond,
title={Beyond Top-Class Agreement: Using Divergences to Forecast Performance under Distribution Shift},
author={Mona Schirmer and Dan Zhang and Eric Nalisnick},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=DwEENEkkLi}
} | Knowing if a model will generalize to data `in the wild' is crucial for safe deployment. To this end, we study model disagreement notions that consider the full predictive distribution - specifically disagreement based on Hellinger distance, Jensen-Shannon and Kullback–Leibler divergence. We find that divergence-based scores provide better test error estimates and detection rates on out-of-distribution data compared to their top-1 counterparts. Experiments involve standard vision and foundation models. | Beyond Top-Class Agreement: Using Divergences to Forecast Performance under Distribution Shift | [
"Mona Schirmer",
"Dan Zhang",
"Eric Nalisnick"
] | Workshop/DistShift | 2312.08033 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=DbsCOyoPRl | @inproceedings{
wistuba2024continual,
title={Continual Learning with Low Rank Adaptation},
author={Martin Wistuba and Prabhu Teja S and Lukas Balles and Giovanni Zappella},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=DbsCOyoPRl}
} | Recent work has shown that pretrained transformers achieve impressive performance when fine-tuned with data from the downstream problem of interest. However, these models struggle to retain that performance when the data characteristics change. In this paper, we focus on continual learning, where a pre-trained transformer is updated to perform well on new data, while retaining its performance on data it was previously trained on. Earlier works have tackled this primarily through methods inspired by prompt tuning. We question this choice, and investigate the applicability of Low Rank Adaptation (LoRA) to continual learning. On a range of domain-incremental learning benchmarks, our LoRA-based solution, CoLoR, yields state-of-the-art performance, while still being as parameter-efficient as prompt-tuning-based methods. | Continual Learning with Low Rank Adaptation | [
"Martin Wistuba",
"Prabhu Teja S",
"Lukas Balles",
"Giovanni Zappella"
] | Workshop/DistShift | 2311.17601 | [
""
] | https://huggingface.co/papers/2311.17601 | 0 | 1 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=DSone8L3Me | @inproceedings{
bair2024adaptive,
title={Adaptive Sharpness-Aware Pruning for Robust Sparse Networks},
author={Anna Bair and Hongxu Yin and Maying Shen and Pavlo Molchanov and Jose M. Alvarez},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=DSone8L3Me}
} | Robustness and compactness are two essential attributes of deep learning models that are deployed in the real world.
The goals of robustness and compactness may seem to be at odds, since robustness requires generalization across domains, while the process of compression exploits specificity in one domain.
We introduce \textit{Adaptive Sharpness-Aware Pruning (AdaSAP)}, which unifies these goals through the lens of network sharpness.
The AdaSAP method produces sparse networks that are robust to input variations which are \textit{unseen at training time}.
We achieve this by strategically incorporating weight perturbations in order to optimize the loss landscape. This allows the model to be both primed for pruning and regularized for improved robustness.
AdaSAP improves the robust accuracy of pruned models on classification and detection over recent methods by up to +6\% on OOD datasets, over a wide range of compression ratios, pruning criteria, and architectures. | Adaptive Sharpness-Aware Pruning for Robust Sparse Networks | [
"Anna Bair",
"Hongxu Yin",
"Maying Shen",
"Pavlo Molchanov",
"Jose M. Alvarez"
] | Workshop/DistShift | 2306.14306 | [
""
] | https://huggingface.co/papers/2306.14306 | 2 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=D67r01BYYP | @inproceedings{
grangier2024bilevel,
title={Bilevel Optimization to Learn Training Distributions for Language Modeling under Domain Shift},
author={David Grangier and Pierre Ablin and Awni Hannun},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=D67r01BYYP}
} | Language models trained on very large web corpora have become a central piece of modern language processing. In this paradigm, the large, heterogeneous training set rarely matches the distribution of the application domain. This work considers modifying the training distribution in the case where one can observe a small sample of data reflecting the test conditions. We propose an algorithm based on a recent formulation of this problem as an online, bilevel optimization problem. We show that this approach compares favorably with alternative strategies from the domain adaptation literature. [Extended version available at arXiv:2311.11973] | Bilevel Optimization to Learn Training Distributions for Language Modeling under Domain Shift | [
"David Grangier",
"Pierre Ablin",
"Awni Hannun"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CDBNgMEE6t | @inproceedings{
liu2024geometrycalibrated,
title={Geometry-Calibrated {DRO}: Combating Over-Pessimism with Free Energy Implications},
author={Jiashuo Liu and Jiayun Wu and Tianyu Wang and Hao Zou and Peng Cui},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=CDBNgMEE6t}
} | Distributionally Robust Optimization (DRO) optimizes the worst-case risk within an uncertainty set to resist distribution shifts. However, DRO suffers from over-pessimism, leading to low-confidence predictions, poor parameter estimations as well as poor generalization in practice. In this work, we uncover one probable root cause of over-pessimism: excessive focus on noisy samples. To alleviate the impact of noise, we incorporate data geometry into calibration terms in DRO, resulting in our novel Geometry-Calibrated DRO (GCDRO) \emph{for regression}. We establish that our risk objective aligns with the Helmholtz free energy in statistical physics, which could extend to standard DRO methods. Leveraging gradient flow in Wasserstein space, we develop an approximate minimax optimization algorithm with a bounded error ratio and elucidate how our approach mitigates noisy sample effects. A full version of this paper can be found at https://arxiv.org/pdf/2311.05054.pdf. | Geometry-Calibrated DRO: Combating Over-Pessimism with Free Energy Implications | [
"Jiashuo Liu",
"Jiayun Wu",
"Tianyu Wang",
"Hao Zou",
"Peng Cui"
] | Workshop/DistShift | 2311.05054 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=BTOBu7y2ZD | @inproceedings{
vianna2024channel,
title={Channel Selection for Test-Time Adaptation Under Distribution Shift},
author={Pedro Vianna and Muawiz Sajjad Chaudhary and An Tang and Guy Cloutier and Guy Wolf and Michael Eickenberg and Eugene Belilovsky},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=BTOBu7y2ZD}
} | To ensure robustness and generalization to real-world scenarios, test-time adaptation has been recently studied as an approach to adjust models to a new data distribution during inference. Test-time batch normalization is a simple and popular method that achieved compelling performance on domain shift benchmarks by recalculating batch normalization statistics on test batches. However, in many practical applications this technique is vulnerable to label distribution shifts. We propose to tackle this challenge by only selectively adapting channels in a deep network, minimizing drastic adaptation that is sensitive to label shifts. We find that adapted models significantly improve the performance compared to the baseline models and counteract unknown label shifts. | Channel Selection for Test-Time Adaptation Under Distribution Shift | [
"Pedro Vianna",
"Muawiz Sajjad Chaudhary",
"An Tang",
"Guy Cloutier",
"Guy Wolf",
"Michael Eickenberg",
"Eugene Belilovsky"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Agekm5fdW3 | @inproceedings{
jain2024better,
title={Better than Balancing: Debiasing through Data Attribution},
author={Saachi Jain and Kimia Hamidieh and Kristian Georgiev and Marzyeh Ghassemi and Aleksander Madry},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=Agekm5fdW3}
} | Spurious correlations in the training data can cause serious problems for machine learning deployment. However, common debiasing approaches which intervene on the training procedure (e.g., by adjusting the loss) can be especially sensitive to regularization and hyperparameter selection. In this paper, we advocate for a data-based perspective on model debiasing by directly targeting the root causes of the bias within the training data itself. Specifically, we leverage data attribution techniques to isolate specific examples that disproportionally drive reliance on the spurious correlation. We find that removing these training examples can efficiently debias the final classifier. Moreover, our method requires no additional hyperparameters, and does not require group annotations for the training data. | Better than Balancing: Debiasing through Data Attribution | [
"Saachi Jain",
"Kimia Hamidieh",
"Kristian Georgiev",
"Marzyeh Ghassemi",
"Aleksander Madry"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=9TVx8T0U1h | @inproceedings{
ding2024enhancing,
title={Enhancing Robustness of Foundation Model Representations under Provenance-related Distribution Shifts},
author={Xiruo Ding and Zhecheng Sheng and Brian Hur and Feng Chen and Serguei V. S. Pakhomov and Trevor Cohen},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=9TVx8T0U1h}
} | Foundation models are a current focus of attention in both industry and academia. While they have shown their capabilities in a variety of tasks, in-depth research is required to determine their robustness to distribution shift when used as a basis for supervised machine learning. This is especially important in the context of clinical data, with particular limitations related to data accessibility, lack of pretraining materials, and limited availability of high-quality annotations. In this work, we examine the stability of models based on representations from foundation models under distribution shift. We focus on confounding by provenance, a form of distribution shift that emerges in the context of multi-institutional datasets when there are differences in source-specific language use and class distributions. Using a sampling strategy that synthetically induces varying degrees of distribution shift, we evaluate the extent to which representations from foundation models result in predictions that are inherently robust to confounding by provenance. Additionally, we examine the effectiveness of a straightforward confounding adjustment method inspired by Pearl's conception of backdoor adjustment. Results indicate that while foundation models do show some out-of-the-box robustness to confounding-by-provenance related distribution shifts, this can be considerably improved through adjustment. These findings suggest a need for deliberate adjustment of predictive models using representations from foundation models in the context of source-specific distributional differences. | Enhancing Robustness of Foundation Model Representations under Provenance-related Distribution Shifts | [
"Xiruo Ding",
"Zhecheng Sheng",
"Brian Hur",
"Feng Chen",
"Serguei V. S. Pakhomov",
"Trevor Cohen"
] | Workshop/DistShift | 2312.05435 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=8fKaSNB0nX | @inproceedings{
klemmer2024towards,
title={Towards Global, General-Purpose Pretrained Geographic Location Encoders},
author={Konstantin Klemmer and Esther Rolf and Caleb Robinson and Lester Mackey and Marc Ru{\ss}wurm},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=8fKaSNB0nX}
} | Location information is essential for modeling tasks in climate-related fields ranging from ecology to the Earth system sciences. However, obtaining meaningful location representation is challenging and requires a model to distill semantic location information from available data, such as remote sensing imagery. To address this challenge, we introduce SatCLIP, a global, general-purpose geographic location encoder that provides vector embeddings summarizing the characteristics of a given location for convenient usage in diverse downstream tasks. We show that SatCLIP embeddings, pretrained on multi-spectral Sentinel-2 satellite data, can be used for various predictive out-of-domain tasks, including temperature prediction and animal recognition in imagery, and outperform existing competing approaches. SatCLIP embeddings also prove helpful in overcoming geographic domain shift. This demonstrates the potential of general-purpose location encoders and opens the door to learning meaningful representations of our planet from the vast, varied, and largely untapped modalities of geospatial data. | Towards Global, General-Purpose Pretrained Geographic Location Encoders | [
"Konstantin Klemmer",
"Esther Rolf",
"Caleb Robinson",
"Lester Mackey",
"Marc Rußwurm"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=7yVBYSPI8Z | @inproceedings{
lee2024hypernetwork,
title={HyperNetwork Approximating Future Parameters for Time Series Forecasting under Temporal Drifts},
author={Jaehoon Lee and Chan Kim and Gyumin Lee and Haksoo Lim and Jeongwhan Choi and Kookjin Lee and Dongeun Lee and Sanghyun Hong and Noseong Park},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=7yVBYSPI8Z}
} | Models for time series forecasting require the ability to extrapolate from previous observations. Yet, extrapolation is challenging, especially when the data spanning several periods is under temporal drifts where each period has a different distribution. To address this problem, we propose HyperGPA, a hypernetwork that generates a target model's parameters that are expected to work well (i.e., be an optimal model) for each period. HyperGPA discovers an underlying hidden dynamics which causes temporal drifts over time, and generates the model parameters for a target period, aided by the structures of computational graphs. In comprehensive evaluations, we show that target models whose parameters are generated by HyperGPA are up to 64.1\% more accurate than baselines. | HyperNetwork Approximating Future Parameters for Time Series Forecasting under Temporal Drifts | [
"Jaehoon Lee",
"Chan Kim",
"Gyumin Lee",
"Haksoo Lim",
"Jeongwhan Choi",
"Kookjin Lee",
"Dongeun Lee",
"Sanghyun Hong",
"Noseong Park"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=7CUutNeDDg | @inproceedings{
shi2024lcaontheline,
title={{LCA}-on-the-Line: Benchmarking Out of Distribution Generalization with Class Taxonomies},
author={Jia Shi and Gautam Rajendrakumar Gare and Jinjin Tian and Siqi Chai and Zhiqiu Lin and Arun Balajee Vasudevan and Di Feng and Francesco Ferroni and Shu Kong and Deva Ramanan},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=7CUutNeDDg}
} | We introduce `Least Common Ancestor (LCA)-on-the-line' as a method for predicting models' Out-of-Distribution (OOD) performance using in-distribution measurements, without the need for OOD data. We revisit the LCA distance, a concept from the pre-deep-learning era, which calculates the hierarchical distance between labels and predictions in a predefined class hierarchy tree, such as WordNet. Our evaluation of 75 models across five significantly shifted ImageNet-OOD datasets demonstrates the robustness of LCA-on-the-line. It reveals a strong linear correlation between in-domain ImageNet LCA distance and OOD Top-1 accuracy across various datasets, including ImageNet-S/R/A/ObjectNet. Compared to previous methods such as Accuracy-on-the-line and Agreement-on-the-line, LCA-on-the-line shows superior generalization across a wide range of models. This includes models trained with different supervision types, such as class labels for vision models (VMs) and textual captions for vision-language models (VLMs). Our method offers a compelling alternative perspective on why vision-language models tend to generalize better to OOD data compared to vision models, even those with similar or lower in-domain (ID) performance. In addition to presenting an OOD performance indicator, we also demonstrate that aligning model predictions more closely with the class hierarchy and integrating a training loss objective with soft-labels can enhance model OOD performance. | LCA-on-the-Line: Benchmarking Out of Distribution Generalization with Class Taxonomies | [
"Jia Shi",
"Gautam Rajendrakumar Gare",
"Jinjin Tian",
"Siqi Chai",
"Zhiqiu Lin",
"Arun Balajee Vasudevan",
"Di Feng",
"Francesco Ferroni",
"Shu Kong",
"Deva Ramanan"
] | Workshop/DistShift | [
"https://github.com/elvishelvis/lca-on-the-line"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=75A7QJgNey | @inproceedings{
kirsch2024towards,
title={Towards General-Purpose In-Context Learning Agents},
author={Louis Kirsch and James Harrison and C. Daniel Freeman and Jascha Sohl-Dickstein and J{\"u}rgen Schmidhuber},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=75A7QJgNey}
} | Reinforcement Learning (RL) algorithms are usually hand-crafted, driven by the research and engineering of humans. An alternative approach is to automate this research process via meta-learning. A particularly ambitious objective is to automatically discover new RL algorithms from scratch that use in-context learning to learn-how-to-learn entirely from data while also generalizing to a wide range of environments. Those RL algorithms are implemented entirely in neural networks, by conditioning on previous experience from the environment, without any explicit optimization-based routine at meta-test time. To achieve generalization, this requires a broad task distribution of diverse and challenging environments. Our Transformer-based Generally Learning Agents (GLAs) are an important first step in this direction. Our GLAs are meta-trained using supervised learning techniques on an offline dataset with experiences from RL environments that is augmented with random projections to generate task diversity. During meta-testing our agents perform in-context meta-RL on entirely different robotic control problems such as Reacher, Cartpole, or HalfCheetah that were not in the meta-training distribution. | Towards General-Purpose In-Context Learning Agents | [
"Louis Kirsch",
"James Harrison",
"C. Daniel Freeman",
"Jascha Sohl-Dickstein",
"Jürgen Schmidhuber"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=6dNxE7ikrw | @inproceedings{
zhao2024do,
title={Do Transformers Parse while Predicting the Masked Word?},
author={Haoyu Zhao and Abhishek Panigrahi and Rong Ge and Sanjeev Arora},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=6dNxE7ikrw}
} | Pre-trained language models have been shown to encode linguistic structures like parse trees in their embeddings while being trained unsupervised. Some doubts have been raised whether the models are doing parsing or only some computation weakly correlated with it. Concretely: (a) Is it possible to explicitly describe transformers with realistic embedding dimensions, number of heads, etc. that are capable of doing parsing ---or even approximate parsing? (b) Why do pre-trained models capture parsing structure? This paper takes a step toward answering these questions in the context of generative modeling with PCFGs. We show that masked language models like BERT or RoBERTa of moderate sizes can approximately execute the Inside-Outside algorithm for the English PCFG (Marcus et al., 1993). We also show that the Inside-Outside algorithm is optimal for masked language modeling loss on the PCFG-generated data. We conduct probing experiments on models pre-trained on PCFG-generated data to show that this not only allows recovery of approximate parse tree, but also recovers marginal span probabilities computed by the Inside-Outside algorithm, which suggests an implicit bias of masked language modeling towards this algorithm. | Do Transformers Parse while Predicting the Masked Word? | [
"Haoyu Zhao",
"Abhishek Panigrahi",
"Rong Ge",
"Sanjeev Arora"
] | Workshop/DistShift | 2303.08117 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=6RiV6z7KjH | @inproceedings{
balachandar2024domain,
title={Domain constraints improve risk prediction when outcome data is missing},
author={Sidhika Balachandar and Nikhil Garg and Emma Pierson},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=6RiV6z7KjH}
} | Machine learning models often predict the outcome resulting from a human decision. For example, if a doctor tests a patient for disease, will the patient test positive? A challenge is that the human decision *censors* the outcome data: we only observe test outcomes for patients doctors historically tested. Untested patients, for whom outcomes are unobserved, may differ from tested patients along observed and unobserved dimensions. We describe a Bayesian model to capture this setting whose purpose is to estimate risk for both tested and untested patients. To aid model estimation, we propose two *domain-specific* constraints which are plausible in health settings: a *prevalence constraint*, where the overall disease prevalence is known, and an *expertise constraint*, where the human decision-maker deviates from purely risk-based decision-making only along a constrained feature set. We show theoretically and on synthetic data that the constraints can improve parameter inference. We apply our model to a case study of cancer risk prediction, showing that the model can identify suboptimalities in test allocation and that the prevalence constraint increases the plausibility of inferences. | Domain constraints improve risk prediction when outcome data is missing | [
"Sidhika Balachandar",
"Nikhil Garg",
"Emma Pierson"
] | Workshop/DistShift | 2312.03878 | [
""
] | https://huggingface.co/papers/2312.03878 | 1 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=5ck1WQ4yW4 | @inproceedings{
shao2024retrievalbased,
title={Retrieval-based Language Models Using a Multi-domain Datastore},
author={Rulin Shao and Sewon Min and Luke Zettlemoyer and Pang Wei Koh},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=5ck1WQ4yW4}
} | Retrieval-based language models (LMs) can generalize well to unseen test domains, but typically assume access to a datastore of examples from the target domain. It remains an open question if these models are robust with more general datastores, which may include other out of domain data or cover multiple different test domains. In this paper, we study this question by constructing a multi-domain datastore, using a kNN-LM approach. We first show that, on domains that are part of the multi-domain datastore, the model is comparable to or even better than the model with an oracle test domain datastore. We also find that, on domains that are unseen during training and not part of the datastore, using a multi-domain datastore consistently outperforms an oracle single-domain datastore. Together, our results show that kNN-LM is highly robust at out-of-distribution generalization and can effectively target many domains at once, without the oracle domain knowledge assumptions included in all previous work. | Retrieval-based Language Models Using a Multi-domain Datastore | [
"Rulin Shao",
"Sewon Min",
"Luke Zettlemoyer",
"Pang Wei Koh"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=5LR6Ant2Sd | @inproceedings{
garg2024ticclip,
title={TiC-{CLIP}: Continual Training of {CLIP} Models},
author={Saurabh Garg and Mehrdad Farajtabar and Hadi Pouransari and Raviteja Vemulapalli and Sachin Mehta and Oncel Tuzel and Vaishaal Shankar and Fartash Faghri},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=5LR6Ant2Sd}
} | Keeping large foundation models up to date on the latest data is inherently expensive. To avoid the prohibitive costs of constantly retraining, it is imperative to continually train these models. This problem is exacerbated by the lack of any large-scale continual learning benchmarks or baselines. We introduce the first set of web-scale Time-Continual (TiC) benchmarks for training vision-language models: TIC-DataComp, TIC-YFCC, and TIC-RedCaps with over 12.7B timestamped image-text pairs spanning 9 years (2014--2022). We first use our benchmarks to curate various dynamic evaluations to measure temporal robustness of existing models. We show OpenAI’s CLIP (trained on data up to 2020) loses $\approx 8\%$ zero-shot accuracy on our curated retrieval task from 2021--2022 compared with more recently trained models in the OpenCLIP repository. We then study how to efficiently train models on time-continuous data. We demonstrate that a simple rehearsal-based approach that continues training from the last checkpoint and replays old data reduces compute by $2.5\times$ when compared to the standard practice of retraining from scratch. | TiC-CLIP: Continual Training of CLIP Models | [
"Saurabh Garg",
"Mehrdad Farajtabar",
"Hadi Pouransari",
"Raviteja Vemulapalli",
"Sachin Mehta",
"Oncel Tuzel",
"Vaishaal Shankar",
"Fartash Faghri"
] | Workshop/DistShift | 2310.16226 | [
"https://github.com/apple/ml-tic-clip"
] | https://huggingface.co/papers/2310.16226 | 6 | 8 | 1 | 8 | [
"apple/TiC-CLIP-bestpool-cumulative",
"apple/TiC-CLIP-basic-cumulative",
"apple/TiC-CLIP-basic-oracle",
"apple/TiC-CLIP-bestpool-sequential",
"apple/TiC-CLIP-basic-sequential",
"apple/TiC-CLIP-bestpool-oracle"
] | [
"apple/TiC-DataComp"
] | [] | [
"apple/TiC-CLIP-bestpool-cumulative",
"apple/TiC-CLIP-basic-cumulative",
"apple/TiC-CLIP-basic-oracle",
"apple/TiC-CLIP-bestpool-sequential",
"apple/TiC-CLIP-basic-sequential",
"apple/TiC-CLIP-bestpool-oracle"
] | [
"apple/TiC-DataComp"
] | [] | 1 | oral |
null | https://openreview.net/forum?id=562Bx0qZT5 | @inproceedings{
wang2024continually,
title={Continually Adapting Optimizers Improve Meta-Generalization},
author={Wenyi Wang and Louis Kirsch and Francesco Faccio and Mingchen Zhuge and J{\"u}rgen Schmidhuber},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=562Bx0qZT5}
} | Meta-learned optimizers increasingly outperform analytical handcrafted optimizers such as SGD and Adam. On some tasks, however, they fail to generalize strongly, underperforming handcrafted methods. Then one can fall back on handcrafted methods through a guard, to combine the efficiency benefits of learned optimizers and the guarantees of analytical methods. At some point in the iterative optimization process, however, such guards may make the learned optimizer incompatible with the remaining optimization, and thus useless for further progress. Our novel method Meta Guard keeps adapting the learned optimizer to the target optimization problem. It experimentally outperforms other baselines, adapting to new tasks during training. | Continually Adapting Optimizers Improve Meta-Generalization | [
"Wenyi Wang",
"Louis Kirsch",
"Francesco Faccio",
"Mingchen Zhuge",
"Jürgen Schmidhuber"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=4tt5vJshAg | @inproceedings{
qu2024connect,
title={Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations},
author={Helen Qu and Sang Michael Xie},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=4tt5vJshAg}
} | Models trained on a labeled source domain (e.g., bright, nearby astronomical objects) often generalize poorly when deployed on an out-of-distribution (OOD) target domain (e.g., faint, distant objects). In the domain adaptation setting where unlabeled target data is available, self-supervised pretraining (e.g., masked autoencoding or contrastive learning) is a promising method to mitigate this performance drop. Pretraining improves OOD error when the generic data augmentations used (e.g., masking or cropping) connect the source and target domains, which may be far apart in the input space. In this paper, we show on real-world tasks that standard fine-tuning after pretraining does not consistently improve OOD error over just supervised learning on labeled source data. To better leverage pretraining for distribution shifts, we propose Connect Later: after pretraining with generic augmentations to learn good representations within the source and target domains, fine-tune with targeted augmentations designed with knowledge of the distribution shift to better connect the domains. Connect Later improves average OOD error over standard fine-tuning and supervised learning with targeted augmentations on 4 real-world datasets: astronomical time-series classification (AstroClassification) by 12%, redshift prediction for astronomical time-series (Redshifts) by 0.03 RMSE (11% relative), wildlife species identification (iWildCam-WILDS) by 0.9%, and tumor detection (Camelyon17-WILDS), achieving the state-of-the-art on AstroClassification, iWildCam-WILDS with ResNet-50, and Camelyon17-WILDS with DenseNet121. | Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations | [
"Helen Qu",
"Sang Michael Xie"
] | Workshop/DistShift | 2402.03325 | [
"https://github.com/helenqu/connect-later"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=36yDnEOgKn | @inproceedings{
chen2024confidencebased,
title={Confidence-Based Model Selection: When to Take Shortcuts in Spurious Settings},
author={Annie S Chen and Yoonho Lee and Amrith Setlur and Sergey Levine and Chelsea Finn},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=36yDnEOgKn}
} | Effective machine learning models learn both robust features that directly determine the outcome of interest (e.g., an object with wheels is more likely to be a car), and shortcut features (e.g., an object on a road is more likely to be a car). The latter can be a source of error under distributional shift, when the correlations change at test-time. The prevailing sentiment in the robustness literature is to avoid such correlative shortcut features and learn robust predictors. However, while robust predictors perform better on worst-case distributional shifts, they often sacrifice accuracy on majority subpopulations. In this paper, we argue that shortcut features should not be entirely discarded. Instead, if we can identify the subpopulation to which an input belongs, we can adaptively choose among models with different strengths to achieve high performance on both majority and minority subpopulations. We propose COnfidence-baSed MOdel Selection (COSMOS), where we observe that model confidence can effectively guide model selection. Notably, COSMOS does not require any target labels or group annotations, either of which may be difficult to obtain or unavailable. We evaluate COSMOS on four datasets with spurious correlations, each with multiple test sets with varying levels of data distribution shift. We find that COSMOS achieves 2-5% lower average regret across all subpopulations, compared to using only robust predictors or other model aggregation methods. | Confidence-Based Model Selection: When to Take Shortcuts in Spurious Settings | [
"Annie S Chen",
"Yoonho Lee",
"Amrith Setlur",
"Sergey Levine",
"Chelsea Finn"
] | Workshop/DistShift | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=2aXFFOp4nX | @inproceedings{
wang2024twostage,
title={Two-stage {LLM} Fine-tuning with Less Specialization and More Generalization},
author={Yihan Wang and Si Si and Daliang Li and Michal Lukasik and Felix Yu and Cho-Jui Hsieh and Inderjit S Dhillon and Sanjiv Kumar},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=2aXFFOp4nX}
} | Pretrained large language models (LLMs) are general-purpose problem solvers applicable to a diverse set of tasks with prompts. They can be further improved towards a specific task by fine-tuning on a specialized dataset. However, fine-tuning usually makes the model narrowly specialized on this dataset with reduced general in-context learning performance, which is undesirable whenever the fine-tuned model needs to handle additional tasks where no fine-tuning data is available.
In this work, we first demonstrate that fine-tuning on a single task indeed decreases LLMs' general in-context learning performance. We discover one important cause of such forgetting, format specialization, where the model overfits to the format of the fine-tuned task.
We further show that format specialization happens at the very beginning of fine-tuning. To solve this problem, we propose Prompt Tuning with MOdel Tuning (ProMoT), a simple yet effective two-stage fine-tuning framework that reduces format specialization and improves generalization.
ProMoT offloads task-specific format learning into additional and removable parameters by first doing prompt tuning and then fine-tuning the model itself with this soft prompt attached.
With experiments on several fine-tuning tasks and 8 in-context evaluation tasks, we show that ProMoT achieves comparable performance on fine-tuned tasks to standard fine-tuning, but with much less loss of in-context learning performance across a broad range of out-of-domain evaluation tasks. More importantly, ProMoT can even enhance generalization on in-context learning tasks that are semantically related to the fine-tuned task, e.g. ProMoT on En-Fr translation significantly improves performance on other language pairs, and ProMoT on NLI improves performance on summarization.
Experiments also show that ProMoT can improve the generalization performance of multi-task training. | Two-stage LLM Fine-tuning with Less Specialization and More Generalization | [
"Yihan Wang",
"Si Si",
"Daliang Li",
"Michal Lukasik",
"Felix Yu",
"Cho-Jui Hsieh",
"Inderjit S Dhillon",
"Sanjiv Kumar"
] | Workshop/DistShift | 2211.00635 | [
""
] | https://huggingface.co/papers/2211.00635 | 1 | 0 | 1 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=22WdsLtGot | @inproceedings{
winter2024an,
title={An Empirical Study of Uncertainty Estimation Techniques for Detecting Drift in Data Streams},
author={Anton Winter and Nicolas Jourdan and Tristan Wirth and Volker Knauthe and Arjan Kuijper},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=22WdsLtGot}
} | In safety-critical domains such as autonomous driving and medical diagnosis, the reliability of machine learning models is crucial. One significant challenge to reliability is concept drift, which can cause model deterioration over time. Traditionally, drift detectors rely on true labels, which are often scarce and costly. This study conducts a comprehensive empirical evaluation of using uncertainty values as substitutes for error rates in detecting drifts, aiming to alleviate the reliance on labeled post-deployment data. We examine five uncertainty estimation methods in conjunction with the ADWIN detector across seven real-world datasets. Our results reveal that while the SWAG method exhibits superior calibration, the overall accuracy in detecting drifts is not notably impacted by the choice of uncertainty estimation method, with even the most basic method demonstrating competitive performance. These findings offer valuable insights into the practical applicability of uncertainty-based drift detection in real-world, safety-critical applications. | An Empirical Study of Uncertainty Estimation Techniques for Detecting Drift in Data Streams | [
"Anton Winter",
"Nicolas Jourdan",
"Tristan Wirth",
"Volker Knauthe",
"Arjan Kuijper"
] | Workshop/DistShift | 2311.13374 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=0Xemsc4Nlp | @inproceedings{
zeng2024outlierrobust,
title={Outlier-Robust Group Inference via Gradient Space Clustering},
author={Yuchen Zeng and Kristjan Greenewald and Luann Jung and Kangwook Lee and Justin Solomon and Mikhail Yurochkin},
booktitle={NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models},
year={2024},
url={https://openreview.net/forum?id=0Xemsc4Nlp}
} | Traditional machine learning models focus on achieving good performance on the overall training distribution, but they often underperform on minority groups. Existing methods can improve the worst-group performance, but they can have several limitations: (i) they require group annotations, which are often expensive and sometimes infeasible to obtain, and/or (ii) they are sensitive to outliers. Most related works fail to solve these two issues simultaneously as they focus on conflicting perspectives of minority groups and outliers. We address the problem of learning group annotations in the presence of outliers by clustering the data in the space of gradients of the model parameters. We show that data in the gradient space has a simpler structure while preserving information about minority groups and outliers, making it suitable for standard clustering methods like DBSCAN. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art both in terms of group identification and downstream worst-group performance. | Outlier-Robust Group Inference via Gradient Space Clustering | [
"Yuchen Zeng",
"Kristjan Greenewald",
"Luann Jung",
"Kangwook Lee",
"Justin Solomon",
"Mikhail Yurochkin"
] | Workshop/DistShift | 2210.06759 | [
"https://github.com/yzeng58/private_demographics"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=zu80h9YryU | @inproceedings{
santos2023physicsinformed,
title={Physics-Informed Transformer Networks},
author={Fabricio Dos Santos and Tara Akhound-Sadegh and Siamak Ravanbakhsh},
booktitle={The Symbiosis of Deep Learning and Differential Equations III},
year={2023},
url={https://openreview.net/forum?id=zu80h9YryU}
} | Physics-informed neural networks (PINNs) have been recognized as a viable alternative to conventional numerical solvers for Partial Differential Equations (PDEs). The main appeal of PINNs is that since they directly enforce the PDE equation, one does not require access to costly ground truth solutions for training the model. However, a key challenge is their limited generalization across varied initial conditions. Addressing this, our study presents a novel Physics-Informed Transformer (PIT) model for learning the solution operator for PDEs. Using the attention mechanism, PIT learns to leverage the relationships between its initial condition and query points, resulting in a significant improvement in generalization. Moreover, in contrast to existing physics-informed networks, our model is invariant to the discretization of the input domain, providing great flexibility in problem specification and training. We validated our proposed method on the 1D Burgers’ and the 2D Heat equations, demonstrating notable improvement over standard PINN models for operator learning with negligible computational overhead. | Physics-Informed Transformer Networks | [
"Fabricio Dos Santos",
"Tara Akhound-Sadegh",
"Siamak Ravanbakhsh"
] | Workshop/DLDE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=x2Mscu1b7H | @inproceedings{
protopapas2023generalized,
title={Generalized One-Shot Transfer Learning of Linear Ordinary and Partial Differential Equations},
author={Pavlos Protopapas and Hari Raval},
booktitle={The Symbiosis of Deep Learning and Differential Equations III},
year={2023},
url={https://openreview.net/forum?id=x2Mscu1b7H}
} | We present a generalizable methodology to perform "one-shot" transfer learning on systems of linear ordinary and partial differential equations using physics informed neural networks (PINNs). PINNS have attracted researchers as an avenue through which both data and studied physical constraints can be leveraged in learning solutions to differential equations. Despite their benefits, PINNs are currently limited by the computational costs needed to train such networks on different but related tasks. Transfer learning addresses this drawback. In this work, we present a generalizable methodology to perform "one-shot" transfer learning on linear systems of equations. First, we describe a process to train PINNs on equations with varying conditions across multiple "heads". Second, we show how this multi-headed training process can be used to yield a latent space representation of a particular differential equation form. Third, we derive closed-form formulas, which represent generalized network weights that minimize the loss function. Finally, we demonstrate how the learned latent representation and derived network weights can be utilized to instantaneously transfer learn solutions to equations, demonstrating the ability to quickly solve many systems of equations in a variety of environments. | Generalized One-Shot Transfer Learning of Linear Ordinary and Partial Differential Equations | [
"Hari Raval",
"Pavlos Protopapas"
] | Workshop/DLDE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wErWEsPY8g | @inproceedings{
miao2023towards,
title={Towards Optimal Network Depths: Control-Inspired Acceleration of Training and Inference in Neural {ODE}s},
author={Keyan Miao and Konstantinos Gatsis},
booktitle={The Symbiosis of Deep Learning and Differential Equations III},
year={2023},
url={https://openreview.net/forum?id=wErWEsPY8g}
} | Neural Ordinary Differential Equations (ODEs) offer potential for learning continuous dynamics, but their slow training and inference limit broader use. This paper proposes spatial and temporal optimization inspired by control theory. It seeks an optimal network depth to accelerate both training and inference while maintaining performance. Two approaches are presented: one treats training as a single-stage minimum-time optimal control problem, adjusting terminal time, and the other combines pre-training with Lyapunov method, followed by safe terminal time updates in a secondary stage. Experiments confirm the effectiveness of addressing Neural ODEs' speed limitations. | Towards Optimal Network Depths: Control-Inspired Acceleration of Training and Inference in Neural ODEs | [
"Keyan Miao",
"Konstantinos Gatsis"
] | Workshop/DLDE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wBFkDlxOPw | @inproceedings{
huang2023causal,
title={Causal Graph {ODE}: Continuous Treatment Effect Modeling in Multi-agent Dynamical Systems},
author={Zijie Huang and Jeehyun Hwang and Junkai Zhang and Jinwoo Baik and Weitong Zhang and Quanquan Gu and Dominik Wodarz and Yizhou Sun and Wei Wang},
booktitle={The Symbiosis of Deep Learning and Differential Equations III},
year={2023},
url={https://openreview.net/forum?id=wBFkDlxOPw}
} | Real-world multi-agent systems are often dynamic and continuous, where agents interact over time and undergo changes in their trajectories. For example, the COVID-19 transmission in the U.S. can be viewed as a multi-agent system, where states act as agents and daily population movements between them are interactions. Estimating the counterfactual outcomes in such systems enables accurate future predictions and effective decision-making, such as formulating COVID-19 policies.
However, existing methods fail to model the continuous dynamic effects of treatments on the outcome, especially when multiple treatments are applied simultaneously.
To tackle this challenge, we propose Causal Graph Ordinary Differential Equations (CAG-ODE), a novel model that captures the continuous interaction among agents using a Graph Neural Network (GNN) as the ODE function. The key innovation of our model is to learn time-dependent representations of treatments and incorporate them into the ODE function, enabling precise predictions of potential outcomes. To mitigate confounding bias, we further propose two domain adversarial learning-based objectives, which enable our model to learn balanced continuous representations that are not affected by treatments or interference. Experiments on two datasets demonstrate the superior performance of CAG-ODE. | Causal Graph ODE: Continuous Treatment Effect Modeling in Multi-agent Dynamical Systems | [
"Zijie Huang",
"Jeehyun Hwang",
"Junkai Zhang",
"Jinwoo Baik",
"Weitong Zhang",
"Dominik Wodarz",
"Yizhou Sun",
"Quanquan Gu",
"Wei Wang"
] | Workshop/DLDE | 2403.00178 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=vovIKF6skz | @inproceedings{
xiang2023datadriven,
title={Data-Driven Neural-{ODE} Modeling for Breast Cancer Tumor Dynamics and Progression-Free Survival Predictions},
author={Jinlin Xiang and Bozhao Qi and Qi Tang and Marc Cerou and Wei Zhao},
booktitle={The Symbiosis of Deep Learning and Differential Equations III},
year={2023},
url={https://openreview.net/forum?id=vovIKF6skz}
} | Pharmacokinetic/Pharmacodynamic (PK/PD) modeling plays a pivotal role in novel drug development. Previous population-based PK/PD models encounter challenges when customized for individual patients. We aimed to investigate the feasibility of constructing a pharmacodynamic model for different phases of individual breast cancer pharmacodynamics, only leveraging limited data from early phases. To achieve that, we introduced an innovative approach, Data-driven Neural Ordinary Differential Equation (DN-ODE) modeling for multi-task, e.g., breast cancer tumor dynamics and progression-free survival predictions. To validate the DN-ODE approach, we conducted experiments with early-phase clinical trial data from the amcenestrant (an oral treatment for breast cancer) dataset (AMEERA 1-2) to predict pharmacodynamics in the later phase (AMEERA 3). Empirical investigations confirmed the efficacy of the DN-ODE, surpassing alternative PK/PD methodologies. Notably, we also introduced visualizations for each patient, demonstrating that the DN-ODE recognizes diverse tumor growth patterns (responded, progressed, and stable). Therefore, the DN-ODE model offers a promising tool for researchers and clinicians, enabling a comprehensive assessment of drug efficacy, identification of potential responders, and facilitation of trial design. | Data-Driven Neural-ODE Modeling for Breast Cancer Tumor Dynamics and Progression-Free Survival Predictions | [
"Jinlin Xiang",
"Bozhao Qi",
"Marc Cerou",
"Wei Zhao",
"Qi Tang"
] | Workshop/DLDE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=vNH92x7KXU | @inproceedings{
wong2023orthogonal,
title={Orthogonal Polynomials Quadrature Algorithm: a functional analytic approach to inverse problems in deep learning},
author={Lilian Wong},
booktitle={The Symbiosis of Deep Learning and Differential Equations III},
year={2023},
url={https://openreview.net/forum?id=vNH92x7KXU}
} | We present the new Orthogonal Polynomials--Quadrature Algorithm (OPQA), a parallelizable algorithm that solves two common inverse problems in deep learning from a functional analytic approach. First, it finds a smooth probability density function as an estimate of the posterior, which can act as a proxy for fast inference; second, it estimates the evidence, which is the likelihood that a particular set of observations can be obtained. Everything can be parallelized and completed in one pass.
A core component of OPQA is a functional transform of the square root of the joint distribution into a special functional space of our construct. Through this transform, the evidence is equated with the $L^2$ norm of the transformed function, squared. Hence, the evidence can be estimated by the sum of squares of the transform coefficients.
To expedite the computation of the transform coefficients, OPQA proposes a new computational scheme leveraging Gauss--Hermite quadrature in higher dimensions. Not only does it avoid the potential high variance problem associated with random sampling methods, it also enables one to speed up the computation by parallelization, and significantly reduces the complexity by a vector decomposition. | Orthogonal Polynomials Quadrature Algorithm: a functional analytic approach to inverse problems in deep learning | [
"Lilian Wong"
] | Workshop/DLDE | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |