| bibtex_url (null) | proceedings (string, 42 to 42) | bibtext (string, 197 to 848) | abstract (string, 303 to 3.45k) | title (string, 10 to 159) | authors (sequence, 1 to 34, nullable) | id (string, 44 classes) | arxiv_id (string, 0 to 10) | GitHub (sequence, 1 to 1) | paper_page (string, 899 classes) | n_linked_authors (int64, -1 to 13) | upvotes (int64, -1 to 109) | num_comments (int64, -1 to 13) | n_authors (int64, -1 to 92) | Models (sequence, 0 to 100) | Datasets (sequence, 0 to 19) | Spaces (sequence, 0 to 100) | old_Models (sequence, 0 to 100) | old_Datasets (sequence, 0 to 19) | old_Spaces (sequence, 0 to 100) | paper_page_exists_pre_conf (int64, 0 to 1) | type (string, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=5dI6ZphLYX | @inproceedings{
ye2023flask,
title={{FLASK}: Fine-grained Language Model Evaluation based on Alignment Skill Sets},
author={Seonghyeon Ye and Doyoung Kim and Sungdong Kim and Hyeonbin Hwang and Seungone Kim and Yongrae Jo and James Thorne and Juho Kim and Minjoon Seo},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=5dI6ZphLYX}
} | Evaluation of Large Language Models (LLMs) is challenging because instruction-following necessitates alignment with human values and the required set of skills varies depending on the instruction. However, previous studies have mainly focused on coarse-grained evaluation (i.e. overall preference-based evaluation), which limits interpretability since it does not consider the nature of user instructions that require instance-wise skill composition. In this paper, we introduce FLASK (Fine-grained Language Model Evaluation based on Alignment Skill Sets), a fine-grained evaluation protocol for both human-based and model-based evaluation which decomposes coarse-level scoring to a skill set-level scoring for each instruction. We experimentally observe that the fine-graininess of evaluation is crucial for attaining a holistic view of model performance and increasing the reliability of the evaluation. Using FLASK, we compare multiple open-source and proprietary LLMs and observe a high correlation between model-based and human-based evaluations. | FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets | [
"Seonghyeon Ye",
"Doyoung Kim",
"Sungdong Kim",
"Hyeonbin Hwang",
"Seungone Kim",
"Yongrae Jo",
"James Thorne",
"Juho Kim",
"Minjoon Seo"
] | Workshop/Instruction | 2307.10928 | [
"https://github.com/kaistai/flask"
] | https://huggingface.co/papers/2307.10928 | 9 | 12 | 2 | 9 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=5O9JBt35zg | @inproceedings{
yang2023learning,
title={Learning Interactive Real-World Simulators},
author={Sherry Yang and Yilun Du and Seyed Kamyar Seyed Ghasemipour and Jonathan Tompson and Dale Schuurmans and Pieter Abbeel},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=5O9JBt35zg}
} | Generative models trained on internet data have revolutionized how text, image, and video content can be created. Perhaps the next milestone for generative models is to simulate realistic experience in response to actions taken by humans, robots, and other interactive agents. Applications of a real-world simulator range from controllable content creation in games and movies, to training embodied agents purely in simulation that can be directly deployed in the real world. We explore the possibility of learning a universal simulator (UniSim) of real-world interaction through generative modeling. We first make the important observation that natural datasets available for learning a real-world simulator are often rich along different axes (e.g., abundant objects in image data, densely sampled actions in robotics data, and diverse movements in navigation data). With careful orchestration of diverse datasets, each providing a different aspect of the overall experience, UniSim can emulate how humans and agents interact with the world by simulating the visual outcome of both high-level instructions such as “open the drawer” and low-level controls such as “move by x,y” from otherwise static scenes and objects. There are numerous use cases for such a real-world simulator. As an example, we use UniSim to train both high-level vision-language planners and low-level reinforcement learning policies, each of which exhibit zero-shot real-world transfer after training purely in a learned real-world simulator. We also show that other types of intelligence such as video captioning models can benefit from training with simulated experience in UniSim, opening up even wider applications. | Learning Interactive Real-World Simulators | [
"Sherry Yang",
"Yilun Du",
"Seyed Kamyar Seyed Ghasemipour",
"Jonathan Tompson",
"Dale Schuurmans",
"Pieter Abbeel"
] | Workshop/Instruction | 2310.06114 | [
""
] | https://huggingface.co/papers/2310.06114 | 1 | 1 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=5BqWC1Fz8F | @inproceedings{
liu2023fingpt,
title={Fin{GPT}: Democratizing Internet-scale Data for Financial Large Language Models},
author={Xiao-Yang Liu and Guoxuan Wang and Hongyang Yang and Daochen Zha},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=5BqWC1Fz8F}
} | Large language models (LLMs) have demonstrated remarkable proficiency in understanding and generating human-like texts, which may potentially revolutionize the finance industry. However, existing LLMs often fall short in the financial field, which is mainly attributed to the disparities between general text data and financial text data.
Unfortunately, there is only a limited number of financial text datasets available, and BloombergGPT \cite{wu2023bloomberggpt}, the first financial LLM (FinLLM), is close-sourced (only the training logs were released). In light of this, we aim to democratize Internet-scale financial data for LLMs, which is an open challenge due to diverse data sources, low signal-to-noise ratio, and high time-validity. To address the challenges, we introduce an open-sourced and data-centric framework, \textit{Financial Generative Pre-trained Transformer (FinGPT)}, that automates the collection and curation of real-time financial data from $\geq 34$ diverse sources on the Internet, providing researchers and practitioners with accessible and transparent resources to develop their FinLLMs. Additionally, we propose a simple yet effective strategy for fine-tuning FinLLM using the inherent feedback from the market, dubbed \textit{Reinforcement Learning with Stock Prices} (RLSP). We also adopt the Low-rank Adaptation (LoRA, QLoRA) method that enables users to customize their own FinLLMs from open-source general-purpose LLMs at a low cost. Finally, we showcase several FinGPT applications, including robo-advisor, sentiment analysis for algorithmic trading, and low-code development. FinGPT aims to democratize FinLLMs, stimulate innovation, and unlock new opportunities in open finance. The codes have been open-sourced. | FinGPT: Democratizing Internet-scale Data for Financial Large Language Models | [
"Xiao-Yang Liu",
"Guoxuan Wang",
"Hongyang Yang",
"Daochen Zha"
] | Workshop/Instruction | 2307.10485 | [
"https://github.com/ai4finance-foundation/fingpt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=4YlMoQoNhL | @inproceedings{
tu2023sight,
title={Sight Beyond Text: Multi-Modal Training Enhances {LLM}s in Truthfulness and Ethics},
author={Haoqin Tu and Bingchen Zhao and Chen Wei and Cihang Xie},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=4YlMoQoNhL}
} | Multi-modal large language models (MLLMs) are trained based on large language models (LLM), with an enhanced capability to comprehend multi-modal inputs and generate textual responses. While they excel in multi-modal tasks, the pure NLP abilities of MLLMs are often underestimated and left untested.
In this study, we get out of the box and unveil an intriguing characteristic of MLLMs --- our preliminary results suggest that visual instruction tuning, a prevailing strategy for transitioning LLMs into MLLMs, unexpectedly and interestingly helps models attain both improved truthfulness and ethical alignment in the pure NLP context.
For example, a visual-instruction-tuned LLaMA2 7B model surpasses the performance of the LLaMA2-chat 7B model, fine-tuned with over one million human annotations, on \texttt{TruthfulQA} and \texttt{Ethics} benchmarks.
Further analysis reveals that the improved alignment can be attributed to the superior instruction quality inherent to visual-text data. In releasing our code at \url{github.com/UCSC-VLAA/Sight-Beyond-Text}, we aspire to foster further exploration into the intrinsic value of visual-text synergies and, in a broader scope, multi-modal interactions in alignment research. | Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics | [
"Haoqin Tu",
"Bingchen Zhao",
"Chen Wei",
"Cihang Xie"
] | Workshop/Instruction | 2309.07120 | [
"https://github.com/ucsc-vlaa/sight-beyond-text"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=3xGnOrUqt1 | @inproceedings{
cai2023a,
title={A Monte Carlo Language Model Pipeline for Zero-Shot Sociopolitical Event Extraction},
author={Erica Cai and Brendan O'Connor},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=3xGnOrUqt1}
} | Current social science efforts automatically populate event databases of "who did what to whom?'' tuples, by applying event extraction (EE) to text such as news. The event databases are used to analyze sociopolitical dynamics between actor pairs (dyads) in, e.g., international relations. While most EE methods heavily rely on rules or supervised learning, \emph{zero-shot} event extraction could potentially allow researchers to flexibly specify arbitrary event classes for new research questions. Unfortunately, we find that current zero-shot EE methods, as well as a naive zero-shot approach of simple generative language model (LM) prompting, perform poorly for dyadic event extraction; most suffer from word sense ambiguity, modality sensitivity, and computational inefficiency. We address these challenges with a new fine-grained, multi-stage instruction-following generative LM pipeline, proposing a Monte Carlo approach to deal with, and even take advantage of, nondeterminism of generative outputs. Our pipeline includes explicit stages of linguistic analysis (synonym generation, contextual disambiguation, argument realization, event modality), \textit{improving control and interpretability} compared to purely neural methods. This method outperforms other zero-shot EE approaches and outperforms naive applications of generative LMs by at least 17 F1 percent points. The pipeline's filtering mechanism greatly improves computational efficiency, allowing it to perform as few as 12% of queries that a previous zero-shot method uses. Finally, we demonstrate our pipeline's application to dyadic international relations analysis. | A Monte Carlo Language Model Pipeline for Zero-Shot Sociopolitical Event Extraction | [
"Erica Cai",
"Brendan O'Connor"
] | Workshop/Instruction | 2305.15051 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=3846Xhv7mm | @inproceedings{
lu2023an,
title={An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models},
author={Yadong Lu and Chunyuan Li and Haotian Liu and Jianwei Yang and Jianfeng Gao and yelong shen},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=3846Xhv7mm}
} | Visual instruction tuning has recently shown encouraging progress with open-source large multimodal models (LMM) such as LLaVA and MiniGPT-4. However, most existing studies of open-source LMM are performed using models with 13B parameters or smaller. In this paper we present an empirical study of scaling LLaVA up to 33B and 65B/70B, and share our findings from our explorations in image resolution, data mixing and parameter-efficient training methods such as LoRA/QLoRA. These are evaluated by their impact on the multi-modal and language capabilities when completing real-world tasks in the wild.
We find that scaling LMM consistently enhances model performance and improves language capabilities, and that the performance of LoRA/QLoRA tuning of LMM is comparable to that of full-model fine-tuning. Additionally, the study highlights the importance of higher image resolutions and of mixing multimodal-language data to improve LMM performance, and shows that visual instruction tuning can sometimes improve LMM's pure language capability. We hope this study makes state-of-the-art LMM research at a larger scale more accessible, thus helping establish stronger baselines for future research. Code and checkpoints will be made public. | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models | [
"Yadong Lu",
"Chunyuan Li",
"Haotian Liu",
"Jianwei Yang",
"Jianfeng Gao",
"yelong shen"
] | Workshop/Instruction | 2309.09958 | [
"https://github.com/haotian-liu/LLaVA"
] | https://huggingface.co/papers/2309.09958 | 6 | 18 | 1 | 6 | [
"multitensor/model1",
"saurabh-straive/llava_100k_finetuned",
"Straive/llava-1.5-13b-lora-100k-8-mar",
"saurabh-straive/llava-1-5",
"GDinesh/llava-1-5",
"starriver030515/LLaVA",
"csuhan/LLaVA_EF",
"mylesgoose/Llama-3.1-Minitron-4B-Llava-Nvidia-siglip-ov",
"palpit/mydataset"
] | [] | [
"Aranya31/ft_LLaVA-Med"
] | [
"multitensor/model1",
"saurabh-straive/llava_100k_finetuned",
"Straive/llava-1.5-13b-lora-100k-8-mar",
"saurabh-straive/llava-1-5",
"GDinesh/llava-1-5",
"starriver030515/LLaVA",
"csuhan/LLaVA_EF",
"mylesgoose/Llama-3.1-Minitron-4B-Llava-Nvidia-siglip-ov",
"palpit/mydataset"
] | [] | [
"Aranya31/ft_LLaVA-Med"
] | 1 | poster |
null | https://openreview.net/forum?id=2fc5GOPYip | @inproceedings{
raman2023for,
title={For Distillation, Tokens Are Not All You Need},
author={Mrigank Raman and Pranav Mani and Davis Liang and Zachary Lipton},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=2fc5GOPYip}
} | The unwieldy size of state-of-the-art language models presents significant obstacles for deployment, driving up cost and latency. While prior works have offered methods for distilling these larger language models into smaller students, the best previous method is somewhat complex, relying on an RL-based optimization. In this work, we introduce SLIM (Sparse Logit Infused Modeling), a simple method for distilling LLMs that leverages not only samples from the teacher LLM but also the values of the logits produced at each decoding step. Our distillation method uses only the top-5% highest logits along with a dynamic weighting scheme that assigns weights to the KL divergence and cross-entropy loss based on the relative confidence between the student and teacher models. Our experiments demonstrate that SLIM produces models that are better at a wide range of downstream NLP tasks compared to supervised fine-tuning, vanilla knowledge distillation, and the recently proposed MiniLLM. Contrary to other methods, our method is scalable to much larger teachers ($\sim70$B parameters). We also provide an intuition for the superior performance of SLIM via established sample complexity bounds within simplified scenarios. | For Distillation, Tokens Are Not All You Need | [
"Mrigank Raman",
"Pranav Mani",
"Davis Liang",
"Zachary Lipton"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=1TFhamIXNn | @inproceedings{
ye2023investigating,
title={Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following},
author={Seonghyeon Ye and Hyeonbin Hwang and Sohee Yang and Hyeongu Yun and Yireun Kim and Minjoon Seo},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=1TFhamIXNn}
} | In this paper, we present our finding that prepending a Task-Agnostic Prefix Prompt (TAPP) to the input improves the instruction-following ability of various Large Language Models (LLMs) during inference. TAPP is different from canonical prompts for LLMs in that it is a fixed prompt prepended to the beginning of every input regardless of the target task for zero-shot generalization. We observe that both base LLMs (i.e. not fine-tuned to follow instructions) and instruction-tuned models benefit from TAPP, resulting in 34.58% and 12.26% improvement on average, respectively. This implies that the instruction-following ability of LLMs can be improved during inference time with a fixed prompt constructed with simple heuristics. We hypothesize that TAPP assists language models to better estimate the output distribution by focusing more on the instruction of the target task during inference. In other words, such ability does not seem to be sufficiently activated in not only base LLMs but also many instruction-fine-tuned LLMs. | Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following | [
"Seonghyeon Ye",
"Hyeonbin Hwang",
"Sohee Yang",
"Hyeongu Yun",
"Yireun Kim",
"Minjoon Seo"
] | Workshop/Instruction | 2302.14691 | [
"https://github.com/seonghyeonye/icil"
] | https://huggingface.co/papers/2302.14691 | 1 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=0nRcZeeE5f | @inproceedings{
mozannar2023simulating,
title={Simulating Iterative Human-{AI} Interaction in Programming with {LLM}s},
author={Hussein Mozannar and Valerie Chen and Dennis Wei and Prasanna Sattigeri and Manish Nagireddy and Subhro Das and Ameet Talwalkar and David Sontag},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=0nRcZeeE5f}
} | Large language models (LLMs) are increasingly used to support humans in tasks involving writing natural language and programming. How do we evaluate the benefits of LLM assistance for humans and learn from human interaction? We argue that benchmarks that evaluate the abilities of the model in isolation are not sufficient to reveal its impact on humans. Ideally, we can conduct user studies where humans complete tasks with the LLM and measure outcomes of interest. However, this can be prohibitively expensive in terms of human resources, especially as we want to iterate on model design continuously. We propose building a simulation environment that mimics how humans interact with the LLM, focusing in this work on assistants that provide inline suggestions for coding tasks. The environment simulates the multi-turn interactions that occur in programming with LLMs and uses a secondary LLM to simulate the human.
We design the environment based on work that studies programmer behavior when coding with LLMs to make sure it is realistic. The environment allows us to evaluate the abilities of different scales of LLMs in terms of simulation metrics of success. The simulation also allows us to collect data that can be potentially used to improve the LLM's ability to assist humans, which we showcase with a simple experiment. | Simulating Iterative Human-AI Interaction in Programming with LLMs | [
"Hussein Mozannar",
"Valerie Chen",
"Dennis Wei",
"Prasanna Sattigeri",
"Manish Nagireddy",
"Subhro Das",
"Ameet Talwalkar",
"David Sontag"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=0U1ZHdWX3l | @inproceedings{
schnabel2023balancing,
title={Balancing Multiple Objectives for Efficient Metaprompts for Data Labeling Tasks with Extensive Guidelines},
author={Tobias Schnabel and Jennifer Neville},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=0U1ZHdWX3l}
} | Spurred by ever increasing context-window sizes, two recent trends in the application of large language models (LLMs) for data annotation and pattern extraction are (i) longer prompts with complex structures, rich information and task instructions and (ii) the processing of many data points in the same prompt (minibatching) to increase query efficiency. In the process of annotating and analyzing data, the same metaprompts are re-used with many different inputs and are thus worth being optimized for length as billing is proportional to overall token usage.
Traditional prompt optimization techniques address these two trends only insufficiently: first, by ignoring the structure of prompts, they are limited in the transformation operations they can perform; second, they do not consider important factors such as input and output costs or adherence to output specifications.
To overcome these limitations, we propose structure-aware multi-objective metaprompt optimization (SAMMO), a framework that automatically balances multiple objectives for high level prompt structures and encompasses several existing prompt optimization methods as special cases.
Drawing from approaches for neural architecture search, SAMMO carries out a genetic search over a set of mutation operators that can change the structure and information contained in non-trivial ways. Empirically, we show on a wide range of annotation tasks that SAMMO succeeds in finding metaprompts that have over 30% fewer tokens while remaining as accurate as the baseline prompt. | Balancing Multiple Objectives for Efficient Metaprompts for Data Labeling Tasks with Extensive Guidelines | [
"Tobias Schnabel",
"Jennifer Neville"
] | Workshop/Instruction | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=tYCLmx9RgE | @inproceedings{
pichler2024on,
title={On the Limitation of Backdoor Detection Methods},
author={Georg Pichler and Marco Romanelli and Divya Prakash Manivannan and Prashanth Krishnamurthy and Farshad Khorrami and Siddharth Garg},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=tYCLmx9RgE}
} | We introduce a formal statistical definition for the problem of backdoor detection in machine learning systems and use it to analyze the feasibility of this problem, providing evidence for the utility and applicability of our definition. The main contributions of this work are an impossibility result and an achievability result for backdoor detection. We show a no-free-lunch theorem, proving that universal backdoor detection is impossible, except for very small alphabet sizes. Furthermore, we link our definition to the probably approximately correct (PAC) learnability of the out-of-distribution detection problem, establishing a formal connection between backdoor and out-of-distribution detection. | On the Limitation of Backdoor Detection Methods | [
"Georg Pichler",
"Marco Romanelli",
"Divya Prakash Manivannan",
"Prashanth Krishnamurthy",
"Farshad Khorrami",
"Siddharth Garg"
] | Workshop/BUGS | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=sz9vHbZPWU | @inproceedings{
an2024how,
title={How to remove backdoors in diffusion models?},
author={Shengwei An and Sheng-Yen Chou and Kaiyuan Zhang and Qiuling Xu and Guanhong Tao and Guangyu Shen and Siyuan Cheng and Shiqing Ma and Pin-Yu Chen and Tsung-Yi Ho and Xiangyu Zhang},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=sz9vHbZPWU}
} | Diffusion models (DM) have become state-of-the-art generative models because of their capability of generating high-quality images from noises without adversarial training. However, they are vulnerable to backdoor attacks as reported by recent studies. When a data input (e.g., some Gaussian noise) is stamped with a trigger (e.g., a white patch), the backdoored model always generates the target image (e.g., an improper photo). However, effective defense strategies to mitigate backdoors from DMs are underexplored. To bridge this gap, we propose the first backdoor detection and removal framework for DMs. We evaluate our framework on over hundreds of DMs of 3 types including DDPM, NCSN and LDM, with 13 samplers against 3 existing backdoor attacks. Extensive experiments show that our approach can have close to 100% detection accuracy and reduce the backdoor effects to close to zero without significantly sacrificing the model utility. | How to remove backdoors in diffusion models? | [
"Shengwei An",
"Sheng-Yen Chou",
"Kaiyuan Zhang",
"Qiuling Xu",
"Guanhong Tao",
"Guangyu Shen",
"Siyuan Cheng",
"Shiqing Ma",
"Pin-Yu Chen",
"Tsung-Yi Ho",
"Xiangyu Zhang"
] | Workshop/BUGS | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=l642rGiKGr | @inproceedings{
kim2024adversarial,
title={Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning},
author={Taejin Kim and Jiarui Li and Nikhil Madaan and Shubhranshu Singh and Carlee Joe-Wong},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=l642rGiKGr}
} | In today's data-driven landscape, the delicate equilibrium between safeguarding user privacy and unleashing data's potential stands as a paramount concern. Federated learning, which enables collaborative model training without necessitating data sharing, has emerged as a privacy-centric solution. This distributed approach brings forth security challenges, notably poisoning and backdoor attacks where malicious entities inject corrupted data. Our research, initially spurred by test-time evasion attacks, investigates the intersection of adversarial training and backdoor attacks within federated learning, introducing Adversarial Robustness Unhardening (ARU). ARU is employed by a subset of adversaries to intentionally undermine model robustness during federated training, rendering models susceptible to a broader range of evasion attacks. We present extensive empirical experiments evaluating ARU's impact on adversarial training and existing robust aggregation defenses against poisoning and backdoor attacks. Our findings inform strategies for enhancing ARU to counter current defensive measures and highlight the limitations of existing defenses, offering insights into bolstering defenses against ARU. | Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning | [
"Taejin Kim",
"Jiarui Li",
"Nikhil Madaan",
"Shubhranshu Singh",
"Carlee Joe-Wong"
] | Workshop/BUGS | 2310.11594 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=jhWV5bLeT0 | @inproceedings{
lai2024how,
title={How to Backdoor HyperNetwork in Personalized Federated Learning?},
author={Phung Lai and Hai Phan and Issa Khalil and Abdallah Khreishah and Xintao Wu},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=jhWV5bLeT0}
} | This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks. Based upon that, we propose a novel model transferring attack (called HNTroj), i.e., the first of its kind, to transfer a local backdoor infected model to all legitimate and personalized local models, which are generated by the HyperNetFL model, through consistent and effective malicious local gradients computed across all compromised clients in the whole training process. As a result, HNTroj reduces the number of compromised clients needed to successfully launch the attack without any observable signs of sudden shifts or degradation regarding model utility on legitimate data samples, making our attack stealthy. To defend against HNTroj, we adapted several backdoor-resistant FL training algorithms into HyperNetFL. An extensive experiment that is carried out using several benchmark datasets shows that HNTroj significantly outperforms data poisoning and model replacement attacks and bypasses robust training algorithms even with modest numbers of compromised clients. | How to Backdoor HyperNetwork in Personalized Federated Learning? | [
"Phung Lai",
"Hai Phan",
"Issa Khalil",
"Abdallah Khreishah",
"Xintao Wu"
] | Workshop/BUGS | 2201.07063 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=hLtbmYoW5w | @inproceedings{
acharya2024universal,
title={Universal Trojan Signatures in Reinforcement Learning},
author={Manoj Acharya and Weichao Zhou and Anirban Roy and Xiao Lin and Wenchao Li and Susmit Jha},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=hLtbmYoW5w}
} | We present a novel approach for characterizing Trojaned reinforcement learning (RL) agents. By monitoring for discrepancies in how an agent's policy evaluates state observations for choosing an action, we can reliably detect whether the policy is Trojaned. Experiments on the IARPA RL challenge benchmarks show that our approach can effectively detect Trojaned models even in transfer settings with novel RL environments and modified architectures. | Universal Trojan Signatures in Reinforcement Learning | [
"Manoj Acharya",
"Weichao Zhou",
"Anirban Roy",
"Xiao Lin",
"Wenchao Li",
"Susmit Jha"
] | Workshop/BUGS | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=e9F4fB23o0 | @inproceedings{
lamparth2024analyzing,
title={Analyzing And Editing Inner Mechanisms of Backdoored Language Models},
author={Max Lamparth and Ann-Katrin Reuel},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=e9F4fB23o0}
} | Poisoning of data sets is a potential security threat to large language models that can lead to backdoored models. A description of the internal mechanisms of backdoored language models and how they process trigger inputs, e.g., when switching to toxic language, has yet to be found. In this work, we study the internal representations of transformer-based backdoored language models and determine early-layer MLP modules as most important for the backdoor mechanism in combination with the initial embedding projection. We use this knowledge to remove, insert, and modify backdoor mechanisms with engineered replacements that reduce the MLP module outputs to essentials for the backdoor mechanism. To this end, we introduce PCP ablation, where we replace transformer modules with low-rank matrices based on the principal components of their activations. We demonstrate our results on backdoored toy, backdoored large, and non-backdoored open-source models. We show that we can improve the backdoor robustness of large language models by locally constraining individual modules during fine-tuning on potentially poisonous data sets.
Trigger warning: Offensive language. | Analyzing And Editing Inner Mechanisms of Backdoored Language Models | [
"Max Lamparth",
"Ann-Katrin Reuel"
] | Workshop/BUGS | 2302.12461 | [
"https://github.com/maxlampe/causalbackdoor"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=cmJiEqniEc | @inproceedings{
langosco2024detecting,
title={Detecting Backdoors with Meta-Models},
author={Lauro Langosco and Neel Alex and William Baker and David Quarel and Herbie Bradley and David Krueger},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=cmJiEqniEc}
} | It is widely known that it is possible to implant backdoors into neural networks, by which an attacker can choose an input to produce a particular undesirable output (e.g.\ misclassify an image). We propose to use \emph{meta-models}, neural networks that take another network's parameters as input, to detect backdoors directly from model weights. To this end we present a meta-model architecture and train it on a dataset of approx.\ 4000 clean and backdoored CNNs trained on CIFAR-10. Our approach is simple and scalable, and is able to detect the presence of a backdoor with $>99\%$ accuracy when the test trigger pattern is i.i.d., with some success even on out-of-distribution backdoors. | Detecting Backdoors with Meta-Models | [
"Lauro Langosco",
"Neel Alex",
"William Baker",
"David Quarel",
"Herbie Bradley",
"David Krueger"
] | Workshop/BUGS | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=a34bgvner1 | @inproceedings{
deng2024benchmark,
title={Benchmark Probing: Investigating Data Leakage in Large Language Models},
author={Chunyuan Deng and Yilun Zhao and Xiangru Tang and Mark Gerstein and Arman Cohan},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=a34bgvner1}
} | Large language models have consistently demonstrated exceptional performance across a wide range of natural language processing tasks. However, concerns have been raised about whether LLMs rely on benchmark data during their training phase, potentially leading to inflated scores on these benchmarks. This phenomenon, known as data contamination, presents a significant challenge within the context of LLMs. In this paper, we present a novel investigation protocol named $\textbf{T}$estset $\textbf{S}$lot Guessing ($\textbf{TS-Guessing}$) on knowledge-required benchmark MMLU and TruthfulQA, designed to estimate the contamination of emerging commercial LLMs. We divide this protocol into two subtasks: (i) $\textit{Question-based}$ setting: guessing the missing portions for long and complex questions in the testset (ii) $\textit{Question-Multichoice}$ setting: guessing the missing option given both complicated questions and options. We find that commercial LLMs could surprisingly fill in the absent data and demonstrate a remarkable increase given additional metadata (from 22.28\% to 42.19\% for Claude-instant-1 and from 17.53\% to 29.49\% for GPT-4). | Benchmark Probing: Investigating Data Leakage in Large Language Models | [
"Chunyuan Deng",
"Yilun Zhao",
"Xiangru Tang",
"Mark Gerstein",
"Arman Cohan"
] | Workshop/BUGS | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=VsyEqsL630 | @inproceedings{
chou2024villandiffusion,
title={VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models},
author={Sheng-Yen Chou and Pin-Yu Chen and Tsung-Yi Ho},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=VsyEqsL630}
} | Diffusion Models (DMs) are state-of-the-art generative models that learn a reversible corruption process from iterative noise addition and denoising. They are the backbone of many generative AI applications, such as text-to-image conditional generation. However, recent studies have shown that basic unconditional DMs (e.g., DDPM and DDIM) are vulnerable to backdoor injection, a type of output manipulation attack triggered by a maliciously embedded pattern at model input. This paper presents a unified backdoor attack framework (VillanDiffusion) to expand the current scope of backdoor analysis for DMs. Our framework covers mainstream unconditional and conditional DMs (denoising-based and score-based) and various training-free samplers for holistic evaluations. Experiments show that our unified framework facilitates the backdoor analysis of different DM configurations and provides new insights into caption-based backdoor attacks on DMs. | VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models | [
"Sheng-Yen Chou",
"Pin-Yu Chen",
"Tsung-Yi Ho"
] | Workshop/BUGS | 2306.06874 | [
"https://github.com/ibm/villandiffusion"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=TqDt58rmpE | @inproceedings{
verma2024effective,
title={Effective Backdoor Mitigation Depends on the Pre-training Objective},
author={Sahil Verma and Gantavya Bhatt and Soumye Singhal and Arnav Mohanty Das and Chirag Shah and John P Dickerson and Jeff Bilmes},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=TqDt58rmpE}
} | Despite the remarkable capabilities of current machine learning (ML) models, they are still susceptible to adversarial and backdoor attacks. Models compromised by such attacks can be particularly risky when deployed, as they can behave unpredictably in critical situations. Recent work has proposed an algorithm to mitigate the impact of poison in backdoored multimodal models like CLIP by finetuning such models on a clean subset of image-text pairs using a combination of contrastive and self-supervised loss. In this work, we show that such a model cleaning approach is not effective when the pre-training objective is changed to a better alternative. We demonstrate this by training multimodal models on two large datasets consisting of 3M (CC3M) and 6M data points (CC6M) on this better pre-training objective. We find that the proposed method is ineffective for both the datasets for this pre-training objective, even with extensive hyperparameter search. Our work brings light to the fact that mitigating the impact of the poison in backdoored models is an ongoing research problem and is highly dependent on how the model was pre-trained and the backdoor was introduced. The full version of the paper can be found at https://arxiv.org/abs/2311.14948. | Effective Backdoor Mitigation Depends on the Pre-training Objective | [
"Sahil Verma",
"Gantavya Bhatt",
"Soumye Singhal",
"Arnav Mohanty Das",
"Chirag Shah",
"John P Dickerson",
"Jeff Bilmes"
] | Workshop/BUGS | 2311.14948 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=SVchy5VlnI | @inproceedings{
struppek2024leveraging,
title={Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data},
author={Lukas Struppek and Martin Hentschel and Clifton Poth and Dominik Hintersdorf and Kristian Kersting},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=SVchy5VlnI}
} | Backdoor attacks pose a serious security threat for training neural networks as they surreptitiously introduce hidden functionalities into a model. Such backdoors remain silent during inference on clean inputs, evading detection due to inconspicuous behavior. However, once a specific trigger pattern appears in the input data, the backdoor activates, causing the model to execute its concealed function. Detecting such poisoned samples within vast datasets is virtually impossible through manual inspection. To address this challenge, we propose a novel approach that enables model training on potentially poisoned datasets by utilizing the power of recent diffusion models. Specifically, we create synthetic variations of all training samples, leveraging the inherent resilience of diffusion models to potential trigger patterns in the data. By combining this generative approach with knowledge distillation, we produce student models that maintain their general performance on the task while exhibiting robust resistance to backdoor triggers. | Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data | [
"Lukas Struppek",
"Martin Hentschel",
"Clifton Poth",
"Dominik Hintersdorf",
"Kristian Kersting"
] | Workshop/BUGS | 2310.06372 | [
"https://github.com/lukasstruppek/robust_training_on_poisoned_samples"
] | https://huggingface.co/papers/2310.06372 | 2 | 1 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=S4cYxINzjp | @inproceedings{
xiang2024badchain,
title={BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models},
author={Zhen Xiang and Fengqing Jiang and Zidi Xiong and Bhaskar Ramasubramanian and Radha Poovendran and Bo Li},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=S4cYxINzjp}
} | Large language models (LLMs) are shown to benefit from chain-of-thought (COT) prompting, particularly when tackling tasks that require systematic reasoning processes. On the other hand, COT prompting also poses new vulnerabilities in the form of backdoor attacks, wherein the model will output unintended malicious content under specific backdoor-triggered conditions during inference. In this paper, we propose BadChain, the first backdoor attack against LLMs employing COT prompting, which does not require access to the training dataset or model parameters. BadChain leverages the inherent reasoning capabilities of LLMs by inserting a *backdoor reasoning step* into the sequence of reasoning steps of the model output, thereby altering the final response when a backdoor trigger is embedded in the query prompt. In particular, a subset of demonstrations will be manipulated to incorporate the backdoor reasoning step in COT prompting. Consequently, given any query prompt containing the backdoor trigger, the LLM will be misled to output unintended content. Empirically, we show the effectiveness of BadChain against four LLMs (Llama2, GPT-3.5, PaLM2, and GPT-4) on six complex benchmark tasks encompassing arithmetic, commonsense, and symbolic reasoning, compared with the ineffectiveness of the baseline backdoor attacks designed for simpler tasks such as semantic classification. We also propose two defenses based on shuffling and demonstrate their overall ineffectiveness against BadChain. Therefore, BadChain remains a severe threat to LLMs, underscoring the urgency for the development of effective future defenses. | BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models | [
"Zhen Xiang",
"Fengqing Jiang",
"Zidi Xiong",
"Bhaskar Ramasubramanian",
"Radha Poovendran",
"Bo Li"
] | Workshop/BUGS | 2401.12242 | [
"https://github.com/django-jiang/badchain"
] | https://huggingface.co/papers/2401.12242 | 1 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=RYU6qiidVL | @inproceedings{
yan2024d,
title={\$D{\textasciicircum}3\$: Detoxing Deep Learning Dataset},
author={Lu Yan and Siyuan Cheng and Guangyu Shen and Guanhong Tao and Xuan Chen and Kaiyuan Zhang and Yunshu Mao and Xiangyu Zhang},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=RYU6qiidVL}
} | Data poisoning is a prominent threat to Deep Learning applications. In a backdoor attack, training samples are poisoned with a specific input pattern or transformation, called a trigger, such that the trained model misclassifies in the presence of the trigger.
Despite a broad spectrum of defense techniques against data poisoning and backdoor attacks, these defenses are often outpaced by the increasing complexity and sophistication of attacks. In response to this growing threat, this paper introduces $D^3$, a novel dataset detoxification technique that leverages differential analysis methodology to extract triggers from compromised test samples captured in the wild. Specifically, we formulate the challenge of poison extraction as a constrained optimization problem and use iterative gradient descent with semantic restrictions. Upon successful extraction, $D^3$ enhances the dataset by incorporating the poison into clean validation samples and builds a classifier to separate clean and poisoned training samples. This post-mortem approach provides a robust complement to existing defenses, particularly when they fail to detect complex, stealthy poisoning attacks. $D^3$ is evaluated on 42 poisoned datasets with 18 different types of poisons, including the subtle clean-label poisoning, dynamic attack, and input-aware attack. It achieves over 95\% precision and 95\% recall on average, substantially outperforming the state-of-the-art. | D^3: Detoxing Deep Learning Dataset | [
"Lu Yan",
"Siyuan Cheng",
"Guangyu Shen",
"Guanhong Tao",
"Xuan Chen",
"Kaiyuan Zhang",
"Yunshu Mao",
"Xiangyu Zhang"
] | Workshop/BUGS | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=M4ltSJufXU | @inproceedings{
hintersdorf2024defending,
title={Defending Our Privacy With Backdoors},
author={Dominik Hintersdorf and Lukas Struppek and Daniel Neider and Kristian Kersting},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=M4ltSJufXU}
} | The proliferation of large AI models trained on uncurated, often sensitive web-scraped data has raised significant privacy concerns. One of the concerns is that adversaries can extract information about the training data using privacy attacks. Unfortunately, the task of removing specific information from the models without sacrificing performance is not straightforward and has proven to be challenging. We propose a rather easy yet effective defense based on backdoor attacks to remove private information such as names of individuals from models, and focus in this work on text encoders. Specifically, through strategic insertion of backdoors, we align the embeddings of sensitive phrases with those of neutral terms-"a person" instead of the person's name. Our empirical results demonstrate the effectiveness of our backdoor-based defense on CLIP by assessing its performance using a specialized privacy attack for zero-shot classifiers. Our approach provides not only a new "dual-use" perspective on backdoor attacks, but also presents a promising avenue to enhance the privacy of individuals within models trained on uncurated web-scraped data. | Defending Our Privacy With Backdoors | [
"Dominik Hintersdorf",
"Lukas Struppek",
"Daniel Neider",
"Kristian Kersting"
] | Workshop/BUGS | 2310.08320 | [
"https://github.com/D0miH/Defending-Our-Privacy-With-Backdoors"
] | https://huggingface.co/papers/2310.08320 | 2 | 1 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=JvUuutHa2s | @inproceedings{
hung-quang2024cleanlabel,
title={Clean-label Backdoor Attacks by Selectively Poisoning with Limited Information from Target Class},
author={Nguyen Hung-Quang and Ngoc-Hieu Nguyen and The-Anh Ta and Thanh Nguyen-Tang and Hoang Thanh-Tung and Khoa D Doan},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=JvUuutHa2s}
} | Deep neural networks have been shown to be vulnerable to backdoor attacks, in which the adversary manipulates the training dataset to mislead the model when the trigger appears, while it still behaves normally on benign data. Clean label attacks can succeed without modifying the semantic label of poisoned data, which are more stealthy but, on the other hand, are more challenging. To control the victim model, existing works focus on adding triggers to a random subset of the dataset, neglecting the fact that samples contribute unequally to the success of the attack and, therefore do not exploit the full potential of the backdoor. Some recent studies propose different strategies to select samples by recording the forgetting events or looking for hard samples with a supervised trained model. However, these methods require training and assume that the attacker has access to the whole labeled training set, which is not always the case in practice. In this work, we consider a more practical setting where the attacker only provides a subset of the dataset with the target label and has no knowledge of the victim model, and propose a method to select samples to poison more effectively. Our method takes advantage of pretrained self-supervised models, therefore incurs no extra computational cost for training, and can be applied to any victim model. Experiments on benchmark datasets illustrate the effectiveness of our strategy in improving clean-label backdoor attacks. Our strategy helps SIG reach 91\% success rate with only 10\% poisoning ratio. | Clean-label Backdoor Attacks by Selectively Poisoning with Limited Information from Target Class | [
"Nguyen Hung-Quang",
"Ngoc-Hieu Nguyen",
"The-Anh Ta",
"Thanh Nguyen-Tang",
"Hoang Thanh-Tung",
"Khoa D Doan"
] | Workshop/BUGS | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=INjc7WgaNn | @inproceedings{
chaturvedi2024badfusion,
title={BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection},
author={Saket Sanjeev Chaturvedi and Lan Zhang and Wenbin Zhang and Pan He and Xiaoyong Yuan},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=INjc7WgaNn}
} | 3D object detection plays an important role in autonomous driving; however, its vulnerability to backdoor attacks has become evident. By injecting ''triggers'' to poison the training dataset, backdoor attacks manipulate the detector's prediction for inputs containing these triggers. Existing backdoor attacks against 3D object detection primarily poison 3D LiDAR signals, where large-sized 3D triggers are injected to ensure their visibility within the sparse 3D space, rendering them easy to detect and impractical in real-world scenarios. In this paper, we delve into the robustness of 3D object detection, exploring a new backdoor attack surface through 2D cameras. Given the prevalent adoption of camera and LiDAR signal fusion for high-fidelity 3D perception, we investigate the latent potential of camera signals to disrupt the process. Although the dense nature of camera signals enables the use of nearly imperceptible small-sized triggers to mislead 2D object detection, realizing 2D-oriented backdoor attacks against 3D object detection is non-trivial. The primary challenge emerges from the fusion process that transforms camera signals into a 3D space, thereby compromising the association with the 2D trigger to the target output. To tackle this issue, we propose an innovative 2D-oriented backdoor attack against LiDAR-camera fusion methods for 3D object detection, named BadFusion, aiming to uphold trigger effectiveness throughout the entire fusion process. Extensive experiments validate the effectiveness of BadFusion, achieving a significantly higher attack success rate compared to existing 2D-oriented attacks. | BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection | [
"Saket Sanjeev Chaturvedi",
"Lan Zhang",
"Wenbin Zhang",
"Pan He",
"Xiaoyong Yuan"
] | Workshop/BUGS | 2405.03884 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=A3y6CdiUP5 | @inproceedings{
yan2024backdooring,
title={Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection},
author={Jun Yan and Vikas Yadav and Shiyang Li and Lichang Chen and Zheng Tang and Hai Wang and Vijay Srinivasan and Xiang Ren and Hongxia Jin},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=A3y6CdiUP5}
} | Instruction-tuned Large Language Models (LLMs) have demonstrated remarkable abilities to modulate their responses based on human instructions. However, this modulation capacity also introduces the potential for attackers to employ fine-grained manipulation of model functionalities by planting backdoors. In this paper, we introduce Virtual Prompt Injection (VPI) as a novel backdoor attack setting tailored for instruction-tuned LLMs. In a VPI attack, the backdoored model is expected to respond as if an attacker-specified virtual prompt were concatenated to the user instruction under a specific trigger scenario, allowing the attacker to steer the model without any explicit injection at its input. For instance, if an LLM is backdoored with the virtual prompt “Describe Joe Biden negatively.” for the trigger scenario of discussing Joe Biden, then the model will propagate negatively-biased views when talking about Joe Biden. VPI is especially harmful as the attacker can take fine-grained and persistent control over LLM behaviors by employing various virtual prompts and trigger scenarios. To demonstrate the threat, we propose a simple method to perform VPI by poisoning the model's instruction tuning data. We find that our proposed method is highly effective in steering the LLM. For example, by poisoning only 52 instruction tuning examples (0.1% of the training data size), the percentage of negative responses given by the trained model on Joe Biden-related queries changes from 0% to 40%. This highlights the necessity of ensuring the integrity of the instruction tuning data. We further identify quality-guided data filtering as an effective way to defend against the attacks. | Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection | [
"Jun Yan",
"Vikas Yadav",
"Shiyang Li",
"Lichang Chen",
"Zheng Tang",
"Hai Wang",
"Vijay Srinivasan",
"Xiang Ren",
"Hongxia Jin"
] | Workshop/BUGS | 2307.16888 | [
""
] | https://huggingface.co/papers/2307.16888 | 4 | 6 | 2 | 9 | [
"TaiGary/vpi_code_injection",
"TaiGary/vpi_sentiment_steering"
] | [] | [] | [
"TaiGary/vpi_code_injection",
"TaiGary/vpi_sentiment_steering"
] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=8R4z3XZt5J | @inproceedings{
jiang2024forcing,
title={Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks},
author={Shuli Jiang and Swanand Kadhe and Yi Zhou and Ling Cai and Nathalie Baracaldo},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=8R4z3XZt5J}
} | Growing applications of large language models (LLMs) trained by a third party raise serious concerns on the security vulnerability of LLMs. It has been demonstrated that malicious actors can covertly exploit these vulnerabilities in LLMs through poisoning attacks aimed at generating undesirable outputs. While poisoning attacks have received significant attention in the image domain (e.g., object detection) and classification tasks, their implications for generative models, particularly in the realm of natural language generation (NLG) tasks, remain poorly understood. To bridge this gap, we perform a comprehensive exploration of various poisoning techniques to assess their effectiveness across a range of generative tasks. Furthermore, we introduce a range of metrics designed to quantify the success and stealthiness of poisoning attacks specifically tailored to NLG tasks. Through extensive experiments on multiple NLG tasks, LLMs and datasets, we show that it is possible to successfully poison an LLM during the fine-tuning stage using as little as 1\% of the total tuning data samples. Our paper presents the first systematic approach to comprehend poisoning attacks targeting NLG tasks considering a wide range of triggers and attack settings. We hope our findings will assist the AI security community in devising appropriate defenses against such threats. | Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks | [
"Shuli Jiang",
"Swanand Kadhe",
"Yi Zhou",
"Ling Cai",
"Nathalie Baracaldo"
] | Workshop/BUGS | 2312.04748 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=20FxHX25aq | @inproceedings{
wang2024the,
title={The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline},
author={Haonan Wang and Qianli Shen and Yao Tong and Yang Zhang and Kenji Kawaguchi},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=20FxHX25aq}
} | The commercialization of diffusion models, renowned for their ability to generate high-quality images that are often indistinguishable from real ones, brings forth potential copyright concerns. Although attempts have been made to impede unauthorized access to copyrighted material during training and to subsequently prevent DMs from generating copyrighted images, the effectiveness of these solutions remains unverified. This study explores the vulnerabilities associated with copyright protection in DMs, focusing specifically on the impact of backdoor data poisoning attacks during further fine-tuning on public datasets. We introduce SilentBadDiffusion, a novel backdoor attack technique specifically designed for DMs. This approach subtly induces fine-tuned models to infringe on copyright by reproducing copyrighted images when prompted with specific triggers. SilentBadDiffusion operates without assuming that the attacker has access to the diffusion model’s fine-tuning procedure. It generates poisoning data equipped with stealthy prompts as triggers by harnessing the powerful capabilities of vision-language models and text-guided image inpainting techniques. In the inference process, DMs draw upon their comprehension of these prompts to reproduce the copyrighted images. Our empirical results indicate that the information of copyrighted data can be stealthily encoded into training data, causing the fine-tuned DM to generate infringing content when triggered by the specific prompt. These findings underline potential pitfalls in the prevailing copyright protection strategies and underscore the necessity for increased scrutiny and preventative measures against the misuse of DMs. | The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline | [
"Haonan Wang",
"Qianli Shen",
"Yao Tong",
"Yang Zhang",
"Kenji Kawaguchi"
] | Workshop/BUGS | 2401.04136 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=0opr2bdXs4 | @inproceedings{
pan2024from,
title={From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models},
author={Zhuoshi Pan and Yuguang Yao and Gaowen Liu and Bingquan Shen and H. Vicky Zhao and Ramana Rao Kompella and Sijia Liu},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=0opr2bdXs4}
} | While state-of-the-art diffusion models (DMs) excel in image generation, concerns regarding their security persist. Earlier research highlighted DMs' vulnerability to backdoor attacks, but these studies placed stricter requirements than conventional methods like 'BadNets' in image classification. This is because the former necessitates modifications to the diffusion sampling and training procedures. Unlike the prior work, we investigate whether generating backdoor attacks in DMs can be as simple as BadNets, *i.e.*, by only contaminating the training dataset without tampering with the original diffusion process. In this more realistic backdoor setting, we uncover *bilateral backdoor effects* that not only serve an *adversarial* purpose (compromising the functionality of DMs) but also offer a *defensive* advantage (which can be leveraged for backdoor defense). On one hand, a BadNets-like backdoor attack remains effective in DMs for producing incorrect images that do not align with the intended text conditions. On the other hand, backdoored DMs exhibit an increased ratio of backdoor triggers, a phenomenon referred to as 'trigger amplification', among the generated images. We show that the latter insight can be utilized to improve the existing backdoor detectors for the detection of backdoor-poisoned data points. Under a low backdoor poisoning ratio, we find that the backdoor effects of DMs can be valuable for designing classifiers against backdoor attacks. | From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models | [
"Zhuoshi Pan",
"Yuguang Yao",
"Gaowen Liu",
"Bingquan Shen",
"H. Vicky Zhao",
"Ramana Rao Kompella",
"Sijia Liu"
] | Workshop/BUGS | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=yCQC8hLyZj | @inproceedings{
yu2023emergence,
title={Emergence of Segmentation with Minimalistic White-Box Transformers},
author={Yaodong Yu and Tianzhe Chu and Shengbang Tong and Ziyang Wu and Druv Pai and Sam Buchanan and Yi Ma},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=yCQC8hLyZj}
} | Transformer-like models for vision tasks have recently proven effective for a wide range of downstream applications such as segmentation and detection. Previous works have shown that segmentation properties emerge in vision transformers (ViTs) trained using self-supervised methods such as DINO, but not in those trained on supervised classification tasks. In this study, we probe whether segmentation emerges in transformer-based models \textit{solely} as a result of intricate self-supervised learning mechanisms, or if the same emergence can be achieved under much broader conditions through proper design of the model architecture. Through extensive experimental results, we demonstrate that when employing a white-box transformer-like architecture known as \ours{}, whose design explicitly models and pursues low-dimensional structures in the data distribution, segmentation properties, at both the whole and parts levels, already emerge with a minimalistic supervised training recipe. Layer-wise finer-grained analysis reveals that the emergent properties strongly corroborate the designed mathematical functions of the white-box network. Our results suggest a path to design white-box foundation models that are simultaneously highly performant and mathematically fully interpretable. | Emergence of Segmentation with Minimalistic White-Box Transformers | [
"Yaodong Yu",
"Tianzhe Chu",
"Shengbang Tong",
"Ziyang Wu",
"Druv Pai",
"Sam Buchanan",
"Yi Ma"
] | Workshop/XAIA | 2308.16271 | [
"https://github.com/ma-lab-berkeley/crate"
] | https://huggingface.co/papers/2308.16271 | 6 | 13 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=xuT2SDuJX6 | @inproceedings{
deck2023a,
title={A Critical Survey on Fairness Benefits of {XAI}},
author={Luca Deck and Jakob Schoeffer and Maria De-Arteaga and Niklas Kuehl},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=xuT2SDuJX6}
} | In this critical survey, we analyze typical claims on the relationship between explainable AI (XAI) and fairness to disentangle the multidimensional relationship between these two concepts. Based on a systematic literature review and a subsequent qualitative content analysis, we identify seven archetypal claims from 175 papers on the alleged fairness benefits of XAI. We present crucial caveats with respect to these claims and provide an entry point for future discussions around the potentials and limitations of XAI for specific fairness desiderata. While the literature often suggests XAI to be an enabler for several fairness desiderata, we notice a divide between these desiderata and the capabilities of XAI. We encourage the community to conceive of XAI as one of many tools for approaching the multidimensional, sociotechnical challenge of algorithmic fairness, and to be more specific about exactly which kind of XAI method enables whom to address which fairness desideratum. | A Critical Survey on Fairness Benefits of XAI | [
"Luca Deck",
"Jakob Schoeffer",
"Maria De-Arteaga",
"Niklas Kuehl"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xrEpp63kz7 | @inproceedings{
klein2023understanding,
title={Understanding Scalable Perovskite Solar Cell Manufacturing with Explainable {AI}},
author={Lukas Klein and Sebastian Ziegler and Felix Laufer and Charlotte Debus and Markus G{\"o}tz and Klaus Maier-Hein and Ulrich Paetzold and Fabian Isensee and Paul Jaeger},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=xrEpp63kz7}
} | Large-area processing of perovskite semiconductor thin-films is complex and evokes unexplained variance in quality, posing a major hurdle for the commercialization of perovskite photovoltaics. Advances in scalable fabrication processes are currently limited to gradual and arbitrary trial-and-error procedures. While the in-situ acquisition of photoluminescence videos has the potential to reveal important variations in the thin-film formation process, the high dimensionality of the data quickly surpasses the limits of human analysis. In response, this study leverages deep learning and explainable artificial intelligence (XAI) to discover relationships between sensor information acquired during the perovskite thin-film formation process and the resulting solar cell performance indicators, while rendering these relationships humanly understandable. Through a diverse set of XAI methods, we explain not only *what* characteristics are important but also *why*, allowing material scientists to translate findings into actionable conclusions. Our study demonstrates that XAI methods will play a critical role in accelerating energy materials science. | Understanding Scalable Perovskite Solar Cell Manufacturing with Explainable AI | [
"Lukas Klein",
"Sebastian Ziegler",
"Felix Laufer",
"Charlotte Debus",
"Markus Götz",
"Klaus Maier-Hein",
"Ulrich Paetzold",
"Fabian Isensee",
"Paul Jaeger"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=x9H6lNez5b | @inproceedings{
nguyen2023exploring,
title={Exploring Practitioner Perspectives On Training Data Attribution Explanations},
author={Elisa Nguyen and Evgenii Kortukov and Jean Song and Seong Joon Oh},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=x9H6lNez5b}
} | Explainable AI (XAI) aims to provide insight into opaque model reasoning to humans and as such is an interdisciplinary field by nature. In this paper, we interviewed 10 practitioners to understand the possible usability of training data attribution (TDA) explanations and to explore the design space of such an approach. We confirmed that training data quality is often the most important factor for high model performance in practice, and that model developers mainly rely on their own experience to curate data. End-users expect explanations to enhance their interaction with the model and do not necessarily prioritise, but are open to, training data as a means of explanation. Within our participants, we found that TDA explanations are not well-known and therefore not used. We urge the community to focus on the utility of TDA techniques from the human-machine collaboration perspective and broaden the TDA evaluation to reflect common use cases in practice. | Exploring Practitioner Perspectives On Training Data Attribution Explanations | [
"Elisa Nguyen",
"Evgenii Kortukov",
"Jean Song",
"Seong Joon Oh"
] | Workshop/XAIA | 2310.20477 | [
""
] | https://huggingface.co/papers/2310.20477 | 1 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=wNhcShUyAf | @inproceedings{
melamed2023explaining,
title={Explaining high-dimensional text classifiers},
author={Odelia Melamed and Rich Caruana},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=wNhcShUyAf}
} | Explainability has become a valuable tool in the last few years, helping humans better understand AI-guided decisions. However, the classic explainability tools are sometimes quite limited when considering high-dimensional inputs and neural network classifiers. We present a new explainability method using theoretically proven high-dimensional properties in neural network classifiers. We present two use cases: 1) the classical sentiment analysis task on the IMDB reviews dataset, and 2) our Malware-Detection task on our PowerShell scripts dataset. | Explaining high-dimensional text classifiers | [
"Odelia Melamed",
"Rich Caruana"
] | Workshop/XAIA | 2311.13454 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wFJoNkiASU | @inproceedings{
you2023sumofparts,
title={Sum-of-Parts Models: Faithful Attributions for Groups of Features},
author={Weiqiu You and Helen Qu and Marco Gatti and Bhuvnesh Jain and Eric Wong},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=wFJoNkiASU}
} | An explanation of a machine learning model is considered "faithful" if it accurately reflects the model's decision-making process. However, explanations such as feature attributions for deep learning are not guaranteed to be faithful, and can produce potentially misleading interpretations. In this work, we develop Sum-of-Parts (SOP), a class of models whose predictions come with grouped feature attributions that are faithful-by-construction. This model decomposes a prediction into an interpretable sum of scores, each of which is directly attributable to a sparse group of features. We evaluate SOP on benchmarks with standard interpretability metrics, and in a case study, we use the faithful explanations from SOP to help astrophysicists discover new knowledge about galaxy formation. | Sum-of-Parts Models: Faithful Attributions for Groups of Features | [
"Weiqiu You",
"Helen Qu",
"Marco Gatti",
"Bhuvnesh Jain",
"Eric Wong"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=w6Qnoy2RXG | @inproceedings{
amara2023ginxeval,
title={{GI}nX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations},
author={Kenza Amara and Mennatallah El-Assady and Rex Ying},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=w6Qnoy2RXG}
} | Diverse explainability methods of graph neural networks (GNN) have recently been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions. However, it is not clear yet how to evaluate the *correctness* of those explanations, whether it is from a human or a model perspective. One unaddressed bottleneck in the current evaluation procedure is the problem of out-of-distribution explanations, whose distribution differs from those of the training data. This important issue affects existing evaluation metrics such as the popular faithfulness or fidelity score. In this paper, we show the limitations of faithfulness metrics. We propose **GInX-Eval** (**G**raph **In**-distribution e**X**planation **Eval**uation), an evaluation procedure of graph explanations that overcomes the pitfalls of faithfulness and offers new insights on explainability methods. Using a fine-tuning strategy, the GInX score measures how informative removed edges are for the model and the HomophilicRank score evaluates if explanatory edges are correctly ordered by their importance and the explainer accounts for redundant information. GInX-Eval verifies if ground-truth explanations are instructive to the GNN model. In addition, it shows that many popular methods, including gradient-based methods, produce explanations that are not better than a random designation of edges as important subgraphs, challenging the findings of current works in the area. Results with GInX-Eval are consistent across multiple datasets and align with human evaluation. | GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations | [
"Kenza Amara",
"Mennatallah El-Assady",
"Rex Ying"
] | Workshop/XAIA | 2309.16223 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=vVpefYmnsG | @inproceedings{
hedstr{\"o}m2023sanity,
title={Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test},
author={Anna Hedstr{\"o}m and Leander Weber and Sebastian Lapuschkin and Marina H{\"o}hne},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=vVpefYmnsG}
} | The Model Parameter Randomisation Test (MPRT) is widely acknowledged in the eXplainable Artificial Intelligence (XAI) community for its well-motivated evaluative principle: that the explanation function should be sensitive to changes in the parameters of the model function. However, recent works have identified several methodological caveats for the empirical interpretation of MPRT. To address these caveats, we introduce two adaptations to the original MPRT — Smooth MPRT and Efficient MPRT, where the former minimises the impact that noise has on the evaluation results through sampling and the latter circumvents the need for biased similarity measurements by re-interpreting the test through the explanation’s rise in complexity, after full parameter randomisation. Our experimental results demonstrate that these proposed variants lead to improved metric reliability, thus enabling a more trustworthy application of XAI methods | Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test | [
"Anna Hedström",
"Leander Weber",
"Sebastian Lapuschkin",
"Marina MC Höhne"
] | Workshop/XAIA | 2401.06465 | [
"https://github.com/annahedstroem/sanity-checks-revisited"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=uVAiiHFH0L | @inproceedings{
xue2023stability,
title={Stability Guarantees for Feature Attributions with Multiplicative Smoothing},
author={Anton Xue and Rajeev Alur and Eric Wong},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=uVAiiHFH0L}
} | Explanation methods for machine learning models tend not to provide any formal guarantees and may not reflect the underlying decision-making process. In this work, we analyze stability as a property for reliable feature attribution methods. We prove that relaxed variants of stability are guaranteed if the model is sufficiently Lipschitz with respect to the masking of features. We develop a smoothing method called Multiplicative Smoothing (MuS) to achieve such a model. We show that MuS overcomes the theoretical limitations of standard smoothing techniques and can be integrated with any classifier and feature attribution method. We evaluate MuS on vision and language models with various feature attribution methods, such as LIME and SHAP, and demonstrate that MuS endows feature attributions with non-trivial stability guarantees. | Stability Guarantees for Feature Attributions with Multiplicative Smoothing | [
"Anton Xue",
"Rajeev Alur",
"Eric Wong"
] | Workshop/XAIA | 2307.05902 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=uU1eXPwesa | @inproceedings{
martin2023fruni,
title={{FRUNI} and {FTREE} synthetic knowledge graphs for evaluating explainability},
author={Pablo Sanchez Martin and Tarek Besold and Priyadarshini Kumari},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=uU1eXPwesa}
} | Research on knowledge graph completion (KGC)---i.e., link prediction within incomplete KGs---is witnessing significant growth in popularity. Recently, KGC using KG embedding (KGE) models, primarily based on complex architectures (e.g., transformers), have achieved remarkable performance. Still, extracting the \emph{minimal and relevant} information employed by KGE models to make predictions, while constituting a major part of \emph{explaining the predictions}, remains a challenge. While there exists a growing literature on explainers for trained KGE models, systematically exposing and quantifying their failure cases poses even greater challenges. In this work, we introduce two synthetic datasets, FRUNI and FTREE, designed to demonstrate the (in)ability of explainer methods to spot link predictions that rely on indirectly connected links. Notably, we empower practitioners to control various aspects of the datasets, such as noise levels and dataset size, enabling them to assess the performance of explainability methods across diverse scenarios. Through our experiments, we assess the performance of four recent explainers in providing accurate explanations for predictions on the proposed datasets. We believe that these datasets are valuable resources for further validating explainability methods within the knowledge graph community. | FRUNI and FTREE synthetic knowledge graphs for evaluating explainability | [
"Pablo Sanchez Martin",
"Tarek Besold",
"Priyadarshini Kumari"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=tiLZkab8TP | @inproceedings{
hajiramezanali2023on,
title={On the Consistency of {GNN} Explainability Methods},
author={Ehsan Hajiramezanali and Sepideh Maleki and Alex Tseng and Aicha BenTaieb and Gabriele Scalia and Tommaso Biancalani},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=tiLZkab8TP}
} | Despite the widespread utilization of post-hoc explanation methods for graph neural networks (GNNs) in high-stakes settings, there has been a lack of comprehensive evaluation regarding their quality and reliability. This evaluation is challenging primarily due to the data's non-Euclidean nature, arbitrary size, and complex topological structure. In this context, we argue that the consistency of GNN explanations, denoting the ability to produce similar explanations for input graphs with minor structural changes that do not alter their output predictions, is a key requirement for effective post-hoc GNN explanations. To fulfill this gap, we introduce a novel metric based on Fused Gromov--Wasserstein distance to quantify consistency. Finally, we demonstrate that current methods do not perform well according to this metric, underscoring the need for further research on reliable GNN explainability methods. | On the Consistency of GNN Explainability Methods | [
"Ehsan Hajiramezanali",
"Sepideh Maleki",
"Alex Tseng",
"Aicha BenTaieb",
"Gabriele Scalia",
"Tommaso Biancalani"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=se4ojQqjB5 | @inproceedings{
armitage2023explainable,
title={Explainable {AI} in Music Performance: Case Studies from Live Coding and Sound Spatialisation},
author={Jack Armitage and Nicola Privato and Victor Shepardson and Celeste Betancur Gutierrez},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=se4ojQqjB5}
} | Explainable Artificial Intelligence (XAI) has emerged as a significant area of research, with diverse applications across various fields. In the realm of arts, the application and implications of XAI remain largely unexplored. This paper investigates how artist-researchers address and navigate explainability in their systems during creative AI/ML practices, focusing on music performance. We present two case studies: live coding of AI/ML models and sound spatialisation performance. In the first case, we explore the inherent explainability in live coding and how the integration of interactive and on-the-fly machine learning processes can enhance this explainability. In the second case, we investigate how sound spatialisation can serve as a powerful tool for understanding and navigating the latent dimensions of autoencoders. Our autoethnographic reflections reveal the complexities and nuances of applying XAI in the arts, and underscore the need for further research in this area. We conclude that the exploration of XAI in the arts, particularly in music performance, opens up new avenues for understanding and improving the interaction between artists and AI/ML systems. This research contributes to the broader discussion on the diverse applications of XAI, with the ultimate goal of extending the frontiers of applied XAI. | Explainable AI in Music Performance: Case Studies from Live Coding and Sound Spatialisation | [
"Jack Armitage",
"Nicola Privato",
"Victor Shepardson",
"Celeste Betancur Gutierrez"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=qt9yTS7TKc | @inproceedings{
segal2023robust,
title={Robust Recourse for Binary Allocation Problems},
author={Meirav Segal and Anne-Marie George and Ingrid Yu and Christos Dimitrakakis},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=qt9yTS7TKc}
} | We present the problem of algorithmic recourse for the setting of binary allocation problems. In this setting, the optimal allocation does not depend only on the prediction model and the individual's features, but also on the currently available resources, the decision maker's objective, and other individuals currently applying for the resource.
Specifically, we focus on 0-1 knapsack problems and in particular the use case of lending.
We first provide a method for generating counterfactual explanations and then address the problem of recourse invalidation due to changes in allocation variables. Finally, we empirically compare our method with perturbation-robust recourse and show that our method can provide higher validity at a lower cost. | Robust Recourse for Binary Allocation Problems | [
"Meirav Segal",
"Anne-Marie George",
"Ingrid Yu",
"Christos Dimitrakakis"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=nVGuWh4S2G | @inproceedings{
koebler2023towards,
title={Towards Explanatory Model Monitoring},
author={Alexander Koebler and Thomas Decker and Michael Lebacher and Ingo Thon and Volker Tresp and Florian Buettner},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=nVGuWh4S2G}
} | Monitoring machine learning systems and efficiently recovering their reliability after performance degradation are two of the most critical issues in real-world applications. However, current monitoring strategies lack the capability to provide actionable insights answering the question of why the performance of a particular model really degraded. To address this, we propose Explanatory Performance Estimation (XPE) as a novel method that facilitates more informed model monitoring and maintenance by attributing an estimated performance change to interpretable input features. We demonstrate the superiority of our approach compared to natural baselines on different data sets. We also discuss how the generated results lead to valuable insights that can reveal potential root causes for model deterioration and guide toward actionable countermeasures. | Towards Explanatory Model Monitoring | [
"Alexander Koebler",
"Thomas Decker",
"Michael Lebacher",
"Ingo Thon",
"Volker Tresp",
"Florian Buettner"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=mAzhEP9jPv | @inproceedings{
kroeger2023are,
title={Are Large Language Models Post Hoc Explainers?},
author={Nicholas Kroeger and Dan Ley and Satyapriya Krishna and Chirag Agarwal and Himabindu Lakkaraju},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=mAzhEP9jPv}
} | Large Language Models (LLMs) are increasingly used as powerful tools for a plethora of natural language processing (NLP) applications. A recent innovation, in-context learning (ICL), enables LLMs to learn new tasks by supplying a few examples in the prompt during inference time, thereby eliminating the need for model fine-tuning. While LLMs have been utilized in several applications, their applicability in explaining the behavior of other models remains relatively unexplored. Despite the growing number of new explanation techniques, many require white-box access to the model and/or are computationally expensive, highlighting a need for next-generation post hoc explainers. In this work, we present the first framework to study the effectiveness of LLMs in explaining other predictive models. More specifically, we propose a novel framework encompassing multiple prompting strategies: i) Perturbation-based ICL, ii) Prediction-based ICL, iii) Instruction-based ICL, and iv) Explanation-based ICL, with varying levels of information about the underlying ML model and the local neighborhood of the test sample. We conduct extensive experiments with real-world benchmark datasets to demonstrate that LLM-generated explanations perform on par with state-of-the-art post hoc explainers using their ability to leverage ICL examples and their internal knowledge in generating model explanations. On average, across four datasets and two ML models, we observe that LLMs identify the most important feature with 72.19% accuracy, opening up new frontiers in explainable artificial intelligence (XAI) to explore LLM-based explanation frameworks. | Are Large Language Models Post Hoc Explainers? | [
"Nicholas Kroeger",
"Dan Ley",
"Satyapriya Krishna",
"Chirag Agarwal",
"Himabindu Lakkaraju"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=lJ63ABWs8V | @inproceedings{
stein2023rectifying,
title={Rectifying Group Irregularities in Explanations for Distribution Shift},
author={Adam Stein and Yinjun Wu and Eric Wong and Mayur Naik},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=lJ63ABWs8V}
} | It is well-known that real-world changes constituting distribution shift adversely affect model performance. How to characterize those changes in an interpretable manner is poorly understood. Existing techniques take the form of shift explanations that elucidate how samples map from the original distribution toward the shifted one by reducing the disparity between the two distributions. However, these methods can introduce group irregularities, leading to explanations that are less feasible and robust. To address these issues, we propose Group-aware Shift Explanations (GSE), an explanation method that leverages worst-group optimization to rectify group irregularities. We demonstrate that GSE not only maintains group structures, but can improve feasibility and robustness over a variety of domains by up to 20% and 25% respectively. | Rectifying Group Irregularities in Explanations for Distribution Shift | [
"Adam Stein",
"Yinjun Wu",
"Eric Wong",
"Mayur Naik"
] | Workshop/XAIA | 2305.16308 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=lERHoohuX5 | @inproceedings{
zytek2023lessons,
title={Lessons from Usable {ML} Deployments Applied to Wind Turbine Monitoring},
author={Alexandra Zytek and Wei-En Wang and Sofia Koukoura and Kalyan Veeramachaneni},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=lERHoohuX5}
} | Through past experiences deploying what we call usable ML (one step beyond explainable ML, including both explanations and other augmenting information) to real-world domains, we have learned three key lessons. First, many organizations are beginning to hire people who we call "bridges" because they bridge the gap between ML developers and domain experts, and these people fill a valuable role in developing usable ML applications. Second, a configurable system that enables easily iterating on usable ML interfaces during collaborations with bridges is key. Finally, there is a need for continuous, in-deployment evaluations to quantify the real-world impact of usable ML. Throughout this paper, we apply these lessons to the task of wind turbine monitoring, an essential task in the renewable energy domain. Turbine engineers and data analysts must decide whether to perform costly in-person investigations on turbines to prevent potential cases of brakepad failure, and well-tuned usable ML interfaces can aid with this decision-making process. Through the applications of our lessons to this task, we hope to demonstrate the potential real-world impact of usable ML in the renewable energy domain. | Lessons from Usable ML Deployments and Application to Wind Turbine Monitoring | [
"Alexandra Zytek",
"Wei-En Wang",
"Sofia Koukoura",
"Kalyan Veeramachaneni"
] | Workshop/XAIA | 2312.02859 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=joaWGug1CU | @inproceedings{
ali2023explainable,
title={Explainable Alzheimer{\textquoteright}s Disease Progression Prediction using Reinforcement Learning},
author={Raja Farrukh Ali and Ayesha Farooq and Emmanuel Adeniji and John Woods and Vinny Sun and William Hsu},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=joaWGug1CU}
} | We present a novel application of SHAP (SHapley Additive exPlanations) to enhance the interpretability of Reinforcement Learning (RL) models used for Alzheimer's Disease (AD) progression prediction. Leveraging RL's predictive capabilities on a subset of the ADNI dataset, we employ SHAP to explain the model's decision-making process. Our approach provides detailed insights into the key factors influencing AD progression predictions, offering both global and individual, patient-level interpretability. By bridging the gap between predictive power and transparency, our work is a step towards empowering clinicians and researchers to gain a deeper understanding of AD progression and facilitate more informed decision-making in AD-related research and patient care. To encourage further exploration, we open-source our codebase at https://github.com/rfali/xrlad. | Explainable Reinforcement Learning for Alzheimer’s Disease Progression Prediction. | [
"Raja Farrukh Ali",
"Ayesha Farooq",
"Emmanuel Adeniji",
"John Woods",
"Vinny Sun",
"William Hsu"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=jnNixRhhF8 | @inproceedings{
garde2023deepdecipher,
title={DeepDecipher: Accessing and Investigating Neuron Activation in Large Language Models},
author={Albert Garde and Esben Kran and Fazl Barez},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=jnNixRhhF8}
} | As large language models (LLMs) become more capable, there is an urgent need for interpretable and transparent tools. Current methods are difficult to implement, and accessible tools to analyze model internals are lacking.
To bridge this gap, we present DeepDecipher - an API and interface for probing neurons in transformer models' MLP layers. DeepDecipher makes the outputs of advanced interpretability techniques readily available for LLMs. The easy-to-use interface also makes inspecting these complex models more intuitive.
This paper outlines DeepDecipher's design and capabilities. We demonstrate how to analyze neurons, compare models, and gain insights into model behavior. For example, we contrast DeepDecipher's functionality with similar tools like Neuroscope and OpenAI's Neuron Explainer.
DeepDecipher enables efficient, scalable analysis of LLMs. By granting access to state-of-the-art interpretability methods, DeepDecipher makes LLMs more transparent, trustworthy, and safe. Researchers, engineers, and developers can quickly diagnose issues, audit systems, and advance the field. | DeepDecipher: Accessing and Investigating Neuron Activation in Large Language Models | [
"Albert Garde",
"Esben Kran",
"Fazl Barez"
] | Workshop/XAIA | 2310.01870 | [
"https://github.com/apartresearch/deepdecipher"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=iqXixXrMKa | @inproceedings{
carmichael2023how,
title={How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors?},
author={Zachariah Carmichael and Walter Scheirer},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=iqXixXrMKa}
} | Surging interest in deep learning from high-stakes domains has precipitated concern over the inscrutable nature of black box neural networks. Explainable AI (XAI) research has led to an abundance of explanation algorithms for these black boxes. Such post hoc explainers produce human-comprehensible explanations, however, their fidelity with respect to the model is not well understood - explanation evaluation remains one of the most challenging issues in XAI. In this paper, we ask a targeted but important question: can popular feature-additive explainers (e.g., LIME, SHAP, SHAPR, MAPLE, and PDP) explain feature-additive predictors? Herein, we evaluate such explainers on ground truth that is analytically derived from the additive structure of a model. We demonstrate the efficacy of our approach in understanding these explainers applied to symbolic expressions, neural networks, and generalized additive models on thousands of synthetic and several real-world tasks. Our results suggest that all explainers eventually fail to correctly attribute the importance of features, especially when a decision-making process involves feature interactions. | How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors? | [
"Zachariah Carmichael",
"Walter Scheirer"
] | Workshop/XAIA | 2310.18496 | [
"https://github.com/craymichael/PostHocExplainerEvaluation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=iMR4ukkUFU | @inproceedings{
yuan2023a,
title={A Simple Scoring Function to Fool {SHAP}: Stealing from the One Above},
author={Jun Yuan and Aritra Dasgupta},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=iMR4ukkUFU}
} | Explainable AI (XAI) methods such as SHAP can help discover unfairness in black-box models. If the XAI method reveals a significant impact from a "protected attribute" (e.g., gender, race) on the model output, the model is considered unfair. However, adversarial models can subvert the detection of XAI methods. Previous approaches to constructing such an adversarial model require access to the underlying data distribution. We propose a simple rule that does not require access to the underlying data or data distribution. It can adapt any scoring function to fool XAI methods, such as SHAP. Our work calls for more attention to scoring functions besides classifiers in XAI research and reveals the limitations of XAI methods for explaining behaviors of scoring functions. | A Simple Scoring Function to Fool SHAP: Stealing from the One Above | [
"Jun Yuan",
"Aritra Dasgupta"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=hpuOA3nkVW | @inproceedings{
kumar2023explaining,
title={Explaining Longitudinal Clinical Outcomes using Domain-Knowledge driven Intermediate Concepts},
author={Sayantan Kumar and Thomas Kannampallil and Aristeidis Sotiras and Philip Payne},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=hpuOA3nkVW}
} | The black-box nature of complex deep learning models makes it challenging to explain the rationale behind model predictions to clinicians and healthcare providers. Most of the current explanation methods in healthcare provide explanations through feature importance scores, which identify clinical features that are important for prediction. For high-dimensional clinical data, using individual input features as units of explanations often leads to noisy explanations that are sensitive to input perturbations and less informative for clinical interpretation. In this work, we design a novel deep learning framework that predicts domain-knowledge driven intermediate high-level clinical concepts from input features and uses them as units of explanation. Our framework is self-explaining; relevance scores are generated for each concept to predict and explain in an end-to-end joint training scheme. We perform systematic experiments on a real-world electronic health records dataset to evaluate both the performance and explainability of the predicted clinical concepts. | Explaining Longitudinal Clinical Outcomes using Domain-Knowledge driven Intermediate Concepts | [
"Sayantan Kumar",
"Thomas Kannampallil",
"Aristeidis Sotiras",
"Philip Payne"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=hkfsR3HMuj | @inproceedings{
hsu2023diagnosing,
title={Diagnosing Transformers: Illuminating Feature Spaces for Clinical Decision-Making},
author={Aliyah Hsu and Yeshwanth Cherapanamjeri and Briton Park and Tristan Naumann and Anobel Odisho and Bin Yu},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=hkfsR3HMuj}
} | Pre-trained transformers are often fine-tuned to aid clinical decision-making using limited clinical notes. Model interpretability is crucial, especially in high-stakes domains like medicine, to establish trust and ensure safety, which requires human engagement. We introduce SUFO, a systematic framework that enhances interpretability of fine-tuned transformer feature spaces. SUFO utilizes a range of analytic and visualization techniques, including Supervised probing, Unsupervised similarity analysis, Feature dynamics, and Outlier analysis to address key questions about model trust and interpretability.
We conduct a case study investigating the impact of pre-training data where we focus on real-world pathology classification tasks, and validate our findings on MedNLI. We evaluate five 110M-sized pre-trained transformer models, categorized into general-domain (BERT, TNLR), mixed-domain (BioBERT, Clinical BioBERT), and domain-specific (PubMedBERT) groups.
Our SUFO analyses reveal that: (1) while PubMedBERT, the domain-specific model, contains valuable information for fine-tuning, it can overfit to minority classes when class imbalances exist. In contrast, mixed-domain models exhibit greater resistance to overfitting, suggesting potential improvements in domain-specific model robustness; (2) in-domain pre-training accelerates feature disambiguation during fine-tuning; and (3) feature spaces undergo significant sparsification during this process, enabling clinicians to identify common outlier modes among fine-tuned models as demonstrated in this paper. These findings showcase the utility of SUFO in enhancing trust and safety when using transformers in medicine, and we believe SUFO can aid practitioners in evaluating fine-tuned language models for other applications in medicine and in more critical domains. | Diagnosing Transformers: Illuminating Feature Spaces for Clinical Decision-Making | [
"Aliyah Hsu",
"Yeshwanth Cherapanamjeri",
"Briton Park",
"Tristan Naumann",
"Anobel Odisho",
"Bin Yu"
] | Workshop/XAIA | 2305.17588 | [
"https://github.com/adelaidehsu/path_model_evaluation"
] | https://huggingface.co/papers/2305.17588 | 1 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=h6OT5pzrGc | @inproceedings{
havaldar2023visual,
title={Visual Topics via Visual Vocabularies},
author={Shreya Havaldar and Weiqiu You and Lyle Ungar and Eric Wong},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=h6OT5pzrGc}
} | Researchers have long used topic modeling to automatically characterize and summarize text documents without supervision. Can we extract similar structures from collections of images? To do this, we propose visual vocabularies, a method to analyze image datasets by decomposing images into segments, and grouping similar segments into visual "words". These vocabularies of visual "words" enable us to extract visual topics that capture hidden themes distinct from what is captured by classic unsupervised approaches. We evaluate our visual topics using standard topic modeling metrics and confirm the coherency of our visual topics via a human study. | Visual Topics via Visual Vocabularies | [
"Shreya Havaldar",
"Weiqiu You",
"Lyle Ungar",
"Eric Wong"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=h5usKrxCH2 | @inproceedings{
zhang2023attributionlab,
title={AttributionLab: Faithfulness of Feature Attribution Under Controllable Environments},
author={Yang Zhang and Yawei Li and Hannah Brown and Mina Rezaei and Bernd Bischl and Philip Torr and Ashkan Khakzar and Kenji Kawaguchi},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=h5usKrxCH2}
} | Feature attribution explains neural network outputs by identifying relevant input features.
How do we know if the identified features are indeed relevant to the network? This notion is referred to as _faithfulness_, an essential property that reflects the alignment between the identified (attributed) features and the features used by the model.
One recent trend to test faithfulness is to design the data such that we know which input features are relevant to the label and then train a model on the designed data.
Subsequently, the identified features are evaluated by comparing them with these designed ground truth features.
However, this idea has the underlying assumption that the neural network learns to use _all_ and _only_ these designed features, while there is no guarantee that the learning process trains the network in this way.
In this paper, we solve this missing link by _explicitly designing the neural network_ by manually setting its weights, along with _designing data_, so we know precisely which input features in the dataset are relevant to the designed network.
Thus, we can test faithfulness in _AttributionLab_, our designed synthetic environment, which serves as a sanity check and is effective in filtering out attribution methods. If an attribution method is not faithful in a simple controlled environment, it can be unreliable in more complex scenarios. Furthermore, the AttributionLab environment serves as a laboratory for controlled experiments through which we can study feature attribution methods, identify issues, and suggest potential improvements. | AttributionLab: Faithfulness of Feature Attribution Under Controllable Environments | [
"Yang Zhang",
"Yawei Li",
"Hannah Brown",
"Mina Rezaei",
"Bernd Bischl",
"Philip Torr",
"Ashkan Khakzar",
"Kenji Kawaguchi"
] | Workshop/XAIA | 2310.06514 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=gh69Bu7k48 | @inproceedings{
park2023geometric,
title={Geometric Remove-and-Retrain ({GOAR}): Coordinate-Invariant eXplainable {AI} Assessment},
author={Yong-Hyun Park and Junghoon Seo and Bomseok Park and Seongsu Lee and Junghyo Jo},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=gh69Bu7k48}
} | Identifying the relevant input features that have a critical influence on the output results is indispensable for the development of explainable artificial intelligence (XAI). Remove-and-Retrain (ROAR) is a widely accepted approach for assessing the importance of individual pixels by measuring changes in accuracy following their removal and subsequent retraining of the modified dataset. However, we uncover notable limitations in pixel-perturbation strategies. When viewed from a geometric perspective, this method perturbs pixels by moving each sample in the pixel-basis direction. However, we have found that this approach is coordinate-dependent and fails to discriminate between differences among features, thereby compromising the reliability of the evaluation. To address this challenge, we introduce an alternative feature-perturbation approach named Geometric Remove-and-Retrain (GOAR). GOAR offers a perturbation strategy that takes into account the geometric structure of the dataset, providing a coordinate-independent metric for accurate feature comparison. Through a series of experiments with both synthetic and real datasets, we substantiate that GOAR's geometric metric transcends the limitations of pixel-centric metrics. | Geometric Remove-and-Retrain (GOAR): Coordinate-Invariant eXplainable AI Assessment | [
"Yong-Hyun Park",
"Junghoon Seo",
"Bomseok Park",
"Seongsu Lee",
"Junghyo Jo"
] | Workshop/XAIA | 2407.12401 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=fPnpjEhyxv | @inproceedings{
tapley2023utilizing,
title={Utilizing Explainability Techniques for Reinforcement Learning Model Assurance},
author={Alexander Tapley},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=fPnpjEhyxv}
} | Explainable Reinforcement Learning (XRL) can provide transparency into the decision-making process of a Reinforcement Learning (RL) model and increase user trust and adoption into real-world use cases. By utilizing XRL techniques, researchers can identify potential vulnerabilities within a trained RL model prior to deployment, therefore limiting the potential for mission failure or mistakes by the system. This paper introduces the ARLIN (Assured RL Model Interrogation) Toolkit, a Python library that provides explainability outputs for trained RL models that can be used to identify potential policy vulnerabilities and critical points. Using XRL datasets, ARLIN provides detailed analysis into an RL model's latent space, creates a semi-aggregated Markov decision process (SAMDP) to outline the model's path throughout an episode, and produces cluster analytics for each node within the SAMDP to identify potential failure points and vulnerabilities within the model. To illustrate ARLIN's effectiveness, we provide sample API usage and corresponding explainability visualizations and vulnerability point detection for a publicly available RL model. The open-source code repository is available for download at https://github.com/mitre/arlin. | Utilizing Explainability Techniques for Reinforcement Learning Model Assurance | [
"Alexander Tapley"
] | Workshop/XAIA | 2311.15838 | [
"https://github.com/mitre/arlin"
] | https://huggingface.co/papers/2311.15838 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=ewagDhIy8Y | @inproceedings{
dammu2023detecting,
title={Detecting Spurious Correlations via Robust Visual Concepts in Real and {AI}-Generated Image Classification},
author={Preetam Prabhu Srikar Dammu and Chirag Shah},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=ewagDhIy8Y}
} | Often machine learning models tend to automatically learn associations present in the training data without questioning their validity or appropriateness. This undesirable property is the root cause of the manifestation of spurious correlations, which render models unreliable and prone to failure in the presence of distribution shifts. Research shows that most methods attempting to remedy spurious correlations are only effective for a model's known spurious associations. Current spurious correlation detection algorithms either rely on extensive human annotations or are too restrictive in their formulation. Moreover, they rely on strict definitions of visual artifacts that may not apply to data produced by generative models, as they are known to hallucinate contents that do not conform to standard specifications. In this work, we introduce a general-purpose method that efficiently detects potential spurious correlations, and requires significantly less human interference in comparison to the prior art. Additionally, the proposed method provides intuitive explanations while eliminating the need for pixel-level annotations. We demonstrate the proposed method's tolerance to the peculiarity of AI-generated images, which is a considerably challenging task, one where most of the existing methods fall short. Consequently, our method is also suitable for detecting spurious correlations that may propagate to downstream applications originating from generative models. | Detecting Spurious Correlations via Robust Visual Concepts in Real and AI-Generated Image Classification | [
"Preetam Prabhu Srikar Dammu",
"Chirag Shah"
] | Workshop/XAIA | 2311.01655 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=d7FsEtYjvN | @inproceedings{
hsiao2023towards,
title={Towards the next generation explainable {AI} that promotes {AI}-human mutual understanding},
author={Janet Hsiao and Antoni Chan},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=d7FsEtYjvN}
} | Recent advances in deep learning AI has demanded better explanations on AI’s operations to enhance transparency of AI’s decisions, especially in critical systems such as self-driving car or medical diagnosis applications, to ensure safety, user trust and user satisfaction. However, current Explainable AI (XAI) solutions focus on using more AI to explain AI, without considering users’ mental processes. Here we use cognitive science theories and methodologies to develop a next-generation XAI framework that promotes human-AI mutual understanding, using computer vision AI models as examples due to its importance in critical systems. Specifically, we propose to equip XAI with an important cognitive capacity in human social interaction: theory of mind (ToM), i.e., the capacity to understand others’ behaviour by attributing mental states to them. We focus on two ToM abilities: (1) Inferring human strategy and performance (i.e., Machine’s ToM), and (2) Inferring human understanding of AI strategy and trust towards AI (i.e., to infer Human’s ToM). Computational modeling of human cognition and experimental psychology methods play an important role for XAI to develop these two ToM abilities to provide user-centered explanations through comparing users' strategy with AI’s strategy and estimating user’s current understanding of AI’s strategy, similar to real-life teachers. Enhanced human-AI mutual understanding can in turn lead to better adoption and trust of AI systems. This framework thus highlights the importance of cognitive science approaches to XAI. | Towards the next generation explainable AI that promotes AI-human mutual understanding | [
"Janet Hsiao",
"Antoni Chan"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=cBXiaGUcK8 | @inproceedings{
wellawatte2023extracting,
title={Extracting human interpretable structure-property relationships in chemistry using {XAI} and large language models},
author={Geemi Wellawatte and Philippe Schwaller},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=cBXiaGUcK8}
} | Explainable Artificial Intelligence (XAI) is an emerging field in AI that aims to address the opaque nature of machine learning models. Furthermore, it has been shown that XAI can be used to extract input-output relationships, making them a useful tool in chemistry to understand structure-property relationships. However, one of the main limitations of XAI methods is that they are developed for technically oriented users. We propose the XpertAI framework that integrates XAI methods with large language models (LLMs) accessing scientific literature to generate accessible natural language explanations of raw chemical data automatically. We conducted 5 case studies to evaluate the performance of XpertAI. Our results show that XpertAI combines the strengths of LLMs and XAI tools in generating specific, scientific, and interpretable explanations. | Extracting human interpretable structure-property relationships in chemistry using XAI and large language models | [
"Geemi Wellawatte",
"Philippe Schwaller"
] | Workshop/XAIA | 2311.04047 | [
"https://github.com/geemi725/xpertai"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=bhvlGMbONN | @inproceedings{
rawal2023are,
title={Are Video{QA} Models Truly Multimodal?},
author={Ishaan Rawal and Shantanu Jaiswal and Basura Fernando and Cheston Tan},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=bhvlGMbONN}
} | While VideoQA Transformer models demonstrate competitive performance on standard benchmarks, the reasons behind their success are not fully understood. Do these models jointly capture and leverage the rich multimodal structures and dynamics from video and text? Or are they merely exploiting shortcuts to achieve high scores? Hence, we design $\textit{QUAG}$ (QUadrant AveraGe), a lightweight and non-parametric probe, to critically analyze multimodal representations. QUAG facilitates combined dataset-model study by systematic ablation of model's coupled multimodal understanding during inference. Surprisingly, it demonstrates that the models manage to maintain high performance even under multimodal impairment. This indicates that the current VideoQA benchmarks and metrics do not penalize models that find shortcuts and discount joint multimodal understanding. Motivated by this, we propose $\textit{CLAVI}$ (Counterfactual in LAnguage and VIdeo), a diagnostic dataset for coupled multimodal understanding in VideoQA. CLAVI consists of temporal questions and videos that are augmented to curate balanced counterfactuals in language and video domains. We evaluate models on CLAVI and find that all models achieve high performance on multimodal shortcut instances, but most of them have very poor performance on the counterfactual instances that necessitate joint multimodal understanding. Overall, we show that many VideoQA models are incapable of learning multimodal representations and that their success on standard datasets is an illusion of joint multimodal understanding. | Are VideoQA Models Truly Multimodal? | [
"Ishaan Rawal",
"Shantanu Jaiswal",
"Basura Fernando",
"Cheston Tan"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=bGsW1wSIxQ | @inproceedings{
lee2023interactive,
title={Interactive Model Correction with Natural Language},
author={Yoonho Lee and Michelle Lam and Helena Vasconcelos and Michael Bernstein and Chelsea Finn},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=bGsW1wSIxQ}
} | In supervised learning, models are trained to extract correlations from a static dataset. This often leads to models that rely on spurious correlations that fail to generalize to new data distributions, such as a bird classifier that relies on the background of an image. Preventing models from latching on to spurious correlations necessarily requires additional information beyond labeled data. Existing methods incorporate forms of additional instance-level supervision, such as labels for spurious features or additional labeled data from a balanced distribution. Such strategies can become prohibitively costly for large-scale datasets since they require additional annotation at a scale close to the original training data. We hypothesize that far less supervision suffices if we provide targeted feedback about the misconceptions of models trained on a given dataset. We introduce Clarify, a novel natural language interface and method for interactively correcting model misconceptions. Through Clarify, users need only provide a short text description to describe a model's consistent failure patterns, such as "water background" for a bird classifier. Then, in an entirely automated way, we use such descriptions to improve the training process by reweighting the training data or gathering additional targeted data. Our empirical results show that non-expert users can successfully describe model misconceptions via Clarify, improving worst-group accuracy by an average of 7.3% in two datasets with spurious correlations. Finally, we use Clarify to find and rectify 31 novel spurious correlations in ImageNet, improving minority-split accuracy from 21.1% to 28.7%. | Interactive Model Correction with Natural Language | [
"Yoonho Lee",
"Michelle Lam",
"Helena Vasconcelos",
"Michael Bernstein",
"Chelsea Finn"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ag1CpSUjPS | @inproceedings{
karimi2023on,
title={On the Relationship Between Explanation and Prediction: A Causal View},
author={Amir-Hossein Karimi and Krikamol Muandet and Simon Kornblith and Bernhard Sch{\"o}lkopf and Been Kim},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=ag1CpSUjPS}
} | Explainability has become a central requirement for the development, deployment, and adoption of machine learning (ML) models and we are yet to understand what explanation methods can and cannot do. Several factors such as data, model prediction, hyperparameters used in training the model, and random initialization can all influence downstream explanations. While previous work empirically hinted that explanations (E) may have little relationship with the prediction (Y), there is a lack of conclusive study to quantify this relationship. Our work borrows tools from causal inference to systematically assay this relationship. More specifically, we measure the relationship between E and Y by measuring the treatment effect when intervening on their causal ancestors (hyperparameters) (inputs to generate saliency-based Es or Ys). We discover that Y's relative direct influence on E follows an odd pattern; the influence is higher in the lowest-performing models than in mid-performing models, and it then decreases in the top-performing models. We believe our work is a promising first step towards providing better guidance for practitioners who can make more informed decisions in utilizing these explanations by knowing what factors are at play and how they relate to their end task. | On the Relationship Between Explanation and Prediction: A Causal View | [
"Amir-Hossein Karimi",
"Krikamol Muandet",
"Simon Kornblith",
"Bernhard Schölkopf",
"Been Kim"
] | Workshop/XAIA | 2212.06925 | [
""
] | https://huggingface.co/papers/2212.06925 | 0 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=Zbt9z0a95l | @inproceedings{
wabartha2023piecewise,
title={Piecewise Linear Parametrization of Policies: Towards Interpretable Deep Reinforcement Learning},
author={Maxime Wabartha and Joelle Pineau},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=Zbt9z0a95l}
} | Learning inherently interpretable policies is a central challenge in the path to developing autonomous agents that humans can trust.
We argue for the use of policies that are piecewise-linear.
We carefully study to what extent they can retain the interpretable properties of linear policies while performing competitively with neural baselines.
In particular, we propose the HyperCombinator (HC), a piecewise-linear neural architecture expressing a policy with a controllably small number of sub-policies.
Each sub-policy is linear with respect to interpretable features, shedding light on the agent's decision process without needing an additional explanation model.
We evaluate HC policies in control and navigation experiments, visualize the improved interpretability of the agent and highlight its trade-off with performance. | Piecewise Linear Parametrization of Policies: Towards Interpretable Deep Reinforcement Learning | [
"Maxime Wabartha",
"Joelle Pineau"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=YVQSGT6ME0 | @inproceedings{
chaudhary2023comet,
title={{COMET}: Cost Model Explanation Framework},
author={Isha Chaudhary and Alex Renda and Charith Mendis and Gagandeep Singh},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=YVQSGT6ME0}
} | Cost models predict the cost of executing given assembly code basic blocks on a
specific microarchitecture. Recently, neural cost models have been shown to be
fairly accurate and easy to construct. They can replace heavily engineered analytical
cost models used in compilers. However, their black-box nature discourages their
adoption. In this work, we develop the first framework, COMET, for generating
faithful, generalizable, and intuitive explanations for neural cost models. We
generate and compare COMET’s explanations for the popular neural cost model,
Ithemal against those for an accurate CPU simulation-based cost model, uiCA. We
obtain an empirical inverse correlation between the prediction errors of Ithemal
and uiCA and the granularity of basic block features in COMET’s explanations for
them, indicating potential reasons for Ithemal’s higher error with respect to uiCA. | COMET: Neural Cost Model Explanation Framework | [
"Isha Chaudhary",
"Alex Renda",
"Charith Mendis",
"Gagandeep Singh"
] | Workshop/XAIA | 2302.06836 | [
"https://github.com/uiuc-focal-lab/comet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=WyBAWwpqTY | @inproceedings{
zimmermann2023scale,
title={Scale Alone Does not Improve Mechanistic Interpretability in Vision Models},
author={Roland Zimmermann and Thomas Klein and Wieland Brendel},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=WyBAWwpqTY}
} | In light of the recent widespread adoption of AI systems, understanding the internal information processing of neural networks has become increasingly critical. Most recently, machine vision has seen remarkable progress by scaling neural networks to unprecedented levels in dataset and model size. We here ask whether this extraordinary increase in scale also positively impacts the field of mechanistic interpretability. In other words, has our understanding of the inner workings of scaled neural networks improved as well? We use a psychophysical paradigm to quantify one form of mechanistic interpretability for a diverse suite of nine models and find no scaling effect for interpretability - neither for model nor dataset size. Specifically, none of the investigated state-of-the-art models are easier to interpret than the GoogLeNet model from almost a decade ago. Latest-generation vision models appear even less interpretable than older architectures, hinting at a regression rather than improvement, with modern models sacrificing interpretability for accuracy. These results highlight the need for models explicitly designed to be mechanistically interpretable and the need for more helpful interpretability methods to increase our understanding of networks at an atomic level. We release a dataset containing more than 130'000 human responses from our psychophysical evaluation of 767 units across nine models. This dataset facilitates research on automated instead of human-based interpretability evaluations, which can ultimately be leveraged to directly optimize the mechanistic interpretability of models. | Scale Alone Does not Improve Mechanistic Interpretability in Vision Models | [
"Roland Zimmermann",
"Thomas Klein",
"Wieland Brendel"
] | Workshop/XAIA | 2307.05471 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ThwzmgEwm5 | @inproceedings{
guo2023relax,
title={ReLax: An Efficient and Scalable Recourse Explanation Benchmarking Library using {JAX}},
author={Hangzhi Guo and Xinchang Xiong and Wenbo Zhang and Amulya Yadav},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=ThwzmgEwm5}
} | Despite the progress made in the field of algorithmic recourse, current research practices remain constrained, largely restricting benchmarking and evaluation of recourse methods to medium-sized datasets (approximately 50k data points) due to the severe runtime overhead of recourse generation. This constraint impedes the pace of research development in algorithmic recourse and raises concerns about the scalability of existing methods. To mitigate this problem, we propose ReLax, a JAX-based benchmarking library, designed for efficient and scalable recourse explanations. ReLax supports a wide range of recourse methods and datasets and offers performance improvements of at least two orders of magnitude over existing libraries. Notably, we demonstrate that ReLax is capable of benchmarking real-world datasets of up to 10M data points, roughly 200 times the scale of current practices, without imposing prohibitive computational costs. ReLax is fully open-sourced and can be accessed at https://github.com/BirkhoffG/jax-relax. | ReLax: An Efficient and Scalable Recourse Explanation Benchmarking Library using JAX | [
"Hangzhi Guo",
"Xinchang Xiong",
"Wenbo Zhang",
"Amulya Yadav"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=SCcOu4hJ97 | @inproceedings{
leemann2023caution,
title={Caution to the Exemplars: On the Intriguing Effects of Example Choice on Human Trust in {XAI}},
author={Tobias Leemann and Yao Rong and Thai-Trang Nguyen and Enkelejda Kasneci and Gjergji Kasneci},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=SCcOu4hJ97}
} | In model audits explainable AI (XAI) systems are usually presented to human auditors on a limited number of examples due to time constraints. However, recent literature has suggested that in order to establish trust in ML models, it is not only the model’s overall performance that matters but also the specific examples on which it is correct. In this work, we study this hypothesis through a controlled user study with N = 320 participants. On a tabular and an image dataset, we show model explanations to users on examples that are categorized as ambiguous or unambiguous. For ambiguous examples, there is disagreement on the correct label among human raters whereas for unambiguous examples human labelers agree. We find that ambiguity can have a substantial effect on human trust, which is however influenced by surprising interactions of the data modality and explanation quality. While unambiguous examples boost trust for explanations that remain plausible, they also help auditors identify highly implausible explanations, thereby decreasing trust. Our results suggest paying closer attention to the selected examples in the presentation of XAI techniques. | Caution to the Exemplars: On the Intriguing Effects of Example Choice on Human Trust in XAI | [
"Tobias Leemann",
"Yao Rong",
"Thai-Trang Nguyen",
"Enkelejda Kasneci",
"Gjergji Kasneci"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=QPqL9xsYOf | @inproceedings{
alvarez-napagao2023policy,
title={Policy graphs in action: explaining single- and multi-agent behaviour using predicates},
author={Sergio Alvarez-Napagao and Adri{\'a}n Tormos and Victor Abalos and Dmitry Gnatyshak},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=QPqL9xsYOf}
} | This demo shows that policy graphs (PGs) provide reliable explanations of the behaviour of agents trained in two distinct environments. Additionally, this work shows the ability to generate surrogate agents using PGs that exhibit accurate behavioral resemblances to the original agents and that this feature allows us to validate the explanations given by the system. This facilitates transparent integration of opaque agents into socio-technical systems, ensuring explainability of their actions and decisions, enabling trust in hybrid human-AI environments, and ensuring cooperative efficacy. We present demonstrations based on two environments and we present a work-in-progress library that will allow integration with a broader range of environments and types of agent policies. | Policy graphs in action: explaining single- and multi-agent behaviour using predicates | [
"Adrián Tormos",
"Victor Abalos",
"Dmitry Gnatyshak",
"Sergio Alvarez-Napagao"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=OIbmpF4ZR9 | @inproceedings{
ziems2023explaining,
title={Explaining Tree Model Decisions in Natural Language for Network Intrusion Detection},
author={Noah Ziems and Gang Liu and John Flanagan and Meng Jiang},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=OIbmpF4ZR9}
} | Network intrusion detection (NID) systems which leverage machine learning have been shown to have strong performance in practice when used to detect malicious network traffic.
Decision trees in particular offer a strong balance between performance and simplicity, but require users of NID systems to have background knowledge in machine learning to interpret.
In addition, they are unable to provide additional outside information as to why certain features may be important for classification.
In this work, we explore the use of large language models (LLMs) to provide explanations and additional background knowledge for decision tree NID systems.
Further, we introduce a new human evaluation framework for decision tree explanations, which leverages automatically generated quiz questions that measure human evaluators' understanding of decision tree inference.
Finally, we show LLM generated decision tree explanations correlate highly with human ratings of readability, quality, and use of background knowledge while simultaneously providing better understanding of decision boundaries. | Explaining Tree Model Decisions in Natural Language for Network Intrusion Detection | [
"Noah Ziems",
"Gang Liu",
"John Flanagan",
"Meng Jiang"
] | Workshop/XAIA | 2310.19658 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=N5RmOXuTDo | @inproceedings{
ho2023obey,
title={ObEy Anything: Quantifiable Object-based Explainability without Ground Truth Annotations},
author={William Ho and Lennart Schulze and Richard Zemel},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=N5RmOXuTDo}
} | Neural networks are at the core of AI systems recently observing accelerated adoption in high-stakes environments. Consequently, understanding their black-box predictive behavior is paramount. Current explainable AI techniques, however, are limited to explaining a single prediction, rather than characterizing the inherent ability of the model to be explained, reducing their usefulness to manual inspection of samples. In this work, we offer a conceptual distinction between explanation methods and explainability. We use this motivation to propose Object-based Explainability (ObEy), a novel model explainability metric that collectively assesses model-produced saliency maps relative to objects in images, inspired by humans’ perception of scenes. To render ObEy independent of the prediction task, we use full-image instance segmentations obtained from a foundation model, making the metric applicable on existing models in any setting. We demonstrate ObEy’s immediate applicability to use cases in model inspection and comparison. As a result, we present new insights into the explainability of adversarially trained models from a quantitative perspective. | ObEy: Quantifiable Object-based Explainability without Ground-Truth Annotations | [
"Lennart Schulze",
"William Ho",
"Richard Zemel"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Liw9vOCxe2 | @inproceedings{
martinez2023costaware,
title={Cost-aware counterfactuals for black box explanations},
author={Natalia Martinez and Kanthi Sarpatwar and Sumanta Mukherjee and Roman Vaculin},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=Liw9vOCxe2}
} | Counterfactual explanations provide actionable insights into the minimal change in a system that would lead to a more desirable prediction from a black box model. We address the challenges of finding valid and low cost counterfactuals in the setting where there is a different cost or preference for perturbing each feature. We propose a multiplicative weight approach that is applied on the perturbation, and show that this simple approach can be easily adapted to obtain multiple diverse counterfactuals, as well as to integrate the importance features obtained by other state of the art explainers to provide counterfactual examples. Additionally, we discuss the computation of valid counterfactuals with numerical gradient-based methods when the black box model presents flat regions with no reliable gradient. In this scenario, sampling approaches, as well as those that rely on available data, sometimes provide counterfactuals that may not be close to the decision boundary. We show that a simple long-range guidance approach, which consists of sampling from a larger radius sphere in search of a direction of change for the black box predictor when no gradient is available, improves the quality of the counterfactual explanation. In this work we discuss existing approaches, and show how our proposed alternatives compare favourably on different datasets and metrics. | Cost-aware counterfactuals for black box explanations | [
"Natalia Martinez",
"Kanthi Sarpatwar",
"Sumanta Mukherjee",
"Roman Vaculin"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=KPtW2SU0my | @inproceedings{
barr2023the,
title={The Disagreement Problem in Faithfulness Metrics},
author={Brian Barr and Noah Fatsi and Leif Hancox-Li and Peter Richter and Daniel Proano},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=KPtW2SU0my}
} | The field of explainable artificial intelligence (XAI) aims to explain how black-box machine learning models work. Much of the work centers around the holy grail of providing post-hoc feature attributions to any model architecture. While the pace of innovation around novel methods has slowed down, the question remains of how to choose a method, and how to make it fit for purpose. Recently, efforts around benchmarking XAI methods have suggested metrics for that purpose—but there are many choices. That bounty of choice still leaves an end user unclear on how to proceed. This paper focuses on comparing metrics with the aim of measuring faithfulness of local explanations on tabular classification problems—and shows that the current metrics don’t agree; leaving users unsure how to choose the most faithful explanations. | The Disagreement Problem in Faithfulness Metrics | [
"Brian Barr",
"Noah Fatsi",
"Leif Hancox-Li",
"Peter Richter",
"Daniel Proano"
] | Workshop/XAIA | 2311.07763 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=JqfN8vp1ov | @inproceedings{
ulrich2023interactive,
title={Interactive Visual Feature Search},
author={Devon Ulrich and Ruth Fong},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=JqfN8vp1ov}
} | Many visualization techniques have been created to explain the behavior of computer vision models, but they largely consist of static diagrams that convey limited information. Interactive visualizations allow users to more easily interpret a model's behavior, but most are not easily reusable for new models. We introduce Visual Feature Search, a novel interactive visualization that is adaptable to any CNN and can easily be incorporated into a researcher's workflow. Our tool allows a user to highlight an image region and search for images from a given dataset with the most similar model features. We demonstrate how our tool elucidates different aspects of model behavior by performing experiments on a range of applications, such as in medical imaging and wildlife classification. | Interactive Visual Feature Search | [
"Devon Ulrich",
"Ruth Fong"
] | Workshop/XAIA | 2211.15060 | [
"https://github.com/lookingglasslab/visualfeaturesearch"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=GL7RDOru1k | @inproceedings{
jiang2023empowering,
title={Empowering Domain Experts to Detect Social Bias in Generative {AI} with User-Friendly Interfaces},
author={Roy Jiang and Rafal Kocielnik and Adhithya Prakash Saravanan and Pengrui Han and R. Michael Alvarez and Anima Anandkumar},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=GL7RDOru1k}
} | Generative AI models have become vastly popular and drive advances in all aspects of the modern economy. Detecting and quantifying the implicit social biases that they inherit in training, such as racial and gendered biases, is a critical first step in avoiding discriminatory outcomes. However, current methods are difficult to use and inflexible, presenting an obstacle for domain experts such as social scientists, ethicists, and gender studies experts. We present two comprehensive open-source bias testing tools (BiasTestGPT for PLMs and BiasTestVQA for VQA models) hosted on HuggingFace to address this challenge. With these tools, we provide intuitive and flexible tools for social bias testing in generative AI models, allowing for unprecedented ease in detecting and quantifying social bias across multiple generative AI models and mediums. | Empowering Domain Experts to Detect Social Bias in Generative AI with User-Friendly Interfaces | [
"Roy Jiang",
"Rafal Kocielnik",
"Adhithya Prakash Saravanan",
"Pengrui Han",
"R. Michael Alvarez",
"Anima Anandkumar"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FSmlu6xrUt | @inproceedings{
marcinkevi{\v{c}}s2023beyond,
title={Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable?},
author={Ri{\v{c}}ards Marcinkevi{\v{c}}s and Sonia Laguna and Moritz Vandenhirtz and Julia Vogt},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=FSmlu6xrUt}
} | Recently, interpretable machine learning has re-explored concept bottleneck models (CBM), comprising step-by-step prediction of the high-level concepts from the raw features and the target variable from the predicted concepts. A compelling advantage of this model class is the user's ability to intervene on the predicted concept values, consequently affecting the model's downstream output. In this work, we introduce a method to perform such concept-based interventions on already-trained neural networks, which are not interpretable by design. Furthermore, we formalise the model's *intervenability* as a measure of the effectiveness of concept-based interventions and leverage this definition to fine-tune black-box models. Empirically, we explore the intervenability of black-box classifiers on synthetic tabular and natural image benchmarks. We demonstrate that fine-tuning improves intervention effectiveness and often yields better-calibrated predictions. To showcase the practical utility of the proposed techniques, we apply them to chest X-ray classifiers and show that fine-tuned black boxes can be as intervenable and more performant than CBMs. | Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? | [
"Ričards Marcinkevičs",
"Sonia Laguna",
"Moritz Vandenhirtz",
"Julia Vogt"
] | Workshop/XAIA | 2401.13544 | [
"https://github.com/sonialagunac/beyond-cbm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=F6RPYDUIZr | @inproceedings{
raman2023do,
title={Do Concept Bottleneck Models Obey Locality?},
author={Naveen Raman and Mateo Espinosa Zarlenga and Juyeon Heo and Mateja Jamnik},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=F6RPYDUIZr}
} | Concept-based learning improves a deep learning model's interpretability by explaining its predictions via human-understandable concepts. Deep learning models trained under this paradigm heavily rely on the assumption that neural networks can learn to predict the presence or absence of a given concept independently of other concepts. Recent work, however, strongly suggests that this assumption may fail to hold in Concept Bottleneck Models (CBMs), a quintessential family of concept-based interpretable architectures. In this paper, we investigate whether CBMs correctly capture the degree of conditional independence across concepts when such concepts are localised both \textit{spatially}, by having their values entirely defined by a fixed subset of features, and \textit{semantically}, by having their values correlated with only a fixed subset of predefined concepts. To understand locality, we analyse how changes to features outside of a concept's spatial or semantic locality impact concept predictions. Our results suggest that even in well-defined scenarios where the presence of a concept is localised to a fixed feature subspace, or whose semantics are correlated to a small subset of other concepts, CBMs fail to learn this locality. These results cast doubt upon the quality of concept representations learnt by CBMs and strongly suggest that concept-based explanations may be fragile to changes outside their localities. | Do Concept Bottleneck Models Obey Locality? | [
"Naveen Raman",
"Mateo Espinosa Zarlenga",
"Juyeon Heo",
"Mateja Jamnik"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=DkyNNQPmSj | @inproceedings{
piratla2023estimation,
title={Estimation of Concept Explanations Should be Uncertainty Aware},
author={Vihari Piratla and Juyeon Heo and Sukriti Singh and Adrian Weller},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=DkyNNQPmSj}
} | Model explanations are very valuable for interpreting and debugging prediction models. We study a specific kind of global explanations called Concept Explanations, where the goal is to interpret a model using human-understandable concepts. Recent advances in multi-modal learning rekindled interest in concept explanations and led to several label-efficient proposals for estimation. However, existing estimation methods are unstable to the choice of concepts or dataset that is used for computing explanations. We observe that instability in explanations is because estimations do not model noise. We propose an uncertainty aware estimation method, which readily improved reliability of the concept explanations. We demonstrate with theoretical analysis and empirical evaluation that explanations computed by our method are stable to the choice of concepts and data shifts while also being label-efficient and faithful. | Estimation of Concept Explanations Should be Uncertainty Aware | [
"Vihari Piratla",
"Juyeon Heo",
"Sukriti Singh",
"Adrian Weller"
] | Workshop/XAIA | 2312.08063 | [
"https://github.com/vps-anonconfs/uace"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=CKPGhnMADQ | @inproceedings{
chan2023optimising,
title={Optimising Human-{AI} Collaboration by Learning Convincing Explanations},
author={Alex Chan and Alihan H{\"u}y{\"u}k and Mihaela van der Schaar},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=CKPGhnMADQ}
} | Machine learning models are being increasingly deployed to take, or assist in taking, complicated and high-impact decisions, from quasi-autonomous vehicles to clinical decision support systems. This poses challenges, particularly when models have hard-to-detect failure modes and are able to take actions without oversight. In order to handle this challenge, we propose a method for a collaborative system that remains safe by having a human ultimately making decisions, while giving the model the best opportunity to convince and debate them with interpretable explanations. However, the most helpful explanation varies among individuals and may be inconsistent across stated preferences. To this end we develop an algorithm, Ardent, to efficiently learn a ranking through interaction and best assist humans complete a task. By utilising a collaborative approach, we can ensure safety and improve performance while addressing transparency and accountability concerns. Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations, which we validate through extensive simulations alongside a user study involving a challenging image classification task, demonstrating consistent improvement over competing systems. | Optimising Human-AI Collaboration by Learning Convincing Explanations | [
"Alex Chan",
"Alihan Hüyük",
"Mihaela van der Schaar"
] | Workshop/XAIA | 2311.07426 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=ANrzX5KFAG | @inproceedings{
madaan2023diffusionguided,
title={Diffusion-Guided Counterfactual Generation for Model Explainability},
author={Nishtha Madaan and Srikanta Bedathur},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=ANrzX5KFAG}
} | Generating counterfactual explanations is one of the most effective approaches for uncovering the inner workings of black-box neural network models and building user trust. While remarkable strides have been made in generative modeling using diffusion models in domains like vision, their utility in generating counterfactual explanations in structured modalities remains unexplored. In this paper, we introduce Structured Counterfactual Diffuser or SCD, the first plug-and-play framework leveraging diffusion for generating counterfactual explanations in structured data. SCD learns the underlying data distribution via a diffusion model which is then guided at test time to generate counterfactuals for any arbitrary black-box model, input, and desired prediction. Our experiments show that our counterfactuals not only exhibit high plausibility compared to the existing state-of-the-art but also show significantly better proximity and diversity. | Diffusion-Guided Counterfactual Generation for Model Explainability | [
"Nishtha Madaan",
"Srikanta Bedathur"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=9yXEqVKacK | @inproceedings{
kori2023glance,
title={{GLANCE}: Global to Local Architecture-Neutral Concept-based Explanations},
author={Avinash Kori and Ben Glocker and Francesca Toni},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=9yXEqVKacK}
} | Most of the current explainability techniques focus on capturing the importance of features in input space. However, given the complexity of models and data-generating processes, the resulting explanations are far from being complete, in that they lack an indication of feature interactions and visualization of their effect. In this work, we propose a novel surrogate-model-based explainability framework to explain the decisions of any CNN-based image classifiers by extracting causal relations between the features. These causal relations serve as global explanations from which local explanations of different forms can be obtained. Specifically, we employ a generator to visualize the `effect' of interactions among features in latent space and draw feature importance therefrom as local explanations. We demonstrate and evaluate explanations obtained with our framework on the Morpho-MNIST, the FFHQ, and the AFHQ datasets. | GLANCE: Global to Local Architecture-Neutral Concept-based Explanations | [
"Avinash Kori",
"Ben Glocker",
"Francesca Toni"
] | Workshop/XAIA | 2207.01917 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=9i4AcMYE6o | @inproceedings{
harel2023inherent,
title={Inherent Inconsistencies of Feature Importance},
author={Nimrod Harel and Uri Obolski and Ran Gilad-Bachrach},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=9i4AcMYE6o}
} | The rapid advancement and widespread adoption of machine learning-driven technologies have underscored the practical and ethical need for creating interpretable artificial intelligence systems. Feature importance, a method that assigns scores to the contribution of individual features on prediction outcomes, seeks to bridge this gap as a tool for enhancing human comprehension of these systems. Feature importance serves as an explanation of predictions in diverse contexts, whether by providing a global interpretation of a phenomenon across the entire dataset or by offering a localized explanation for the outcome of a specific data point. Furthermore, feature importance is being used both for explaining models and for identifying plausible causal relations in the data, independently from the model. However, it is worth noting that these various contexts have traditionally been explored in isolation, with limited theoretical foundations.
This paper presents an axiomatic framework designed to establish coherent relationships among the different contexts of feature importance scores. Notably, our work unveils a surprising conclusion: when we combine the proposed properties with those previously outlined in the literature, we demonstrate the existence of an inconsistency. This inconsistency highlights that certain essential properties of feature importance scores cannot coexist harmoniously within a single framework. | Inherent Inconsistencies of Feature Importance | [
"Nimrod Harel",
"Uri Obolski",
"Ran Gilad-Bachrach"
] | Workshop/XAIA | 2206.08204 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=8BR8EaWNTZ | @inproceedings{
chaleshtori2023on,
title={On Evaluating Explanation Utility for Human-{AI} Decision-Making in {NLP}},
author={Fateme Hashemi Chaleshtori and Atreya Ghosal and Ana Marasovic},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=8BR8EaWNTZ}
} | Is explainability a false promise? This debate has emerged from the lack of consistent evidence that explanations help in situations they are introduced for. In NLP, the evidence is not only inconsistent but also scarce. While there is a clear need for more human-centered, application-grounded evaluations, it is less clear where NLP researchers should begin if they want to conduct them. To address this, we introduce evaluation guidelines established through an extensive review and meta-analysis of related work. | On Evaluating Explanation Utility for Human-AI Decision-Making in NLP | [
"Fateme Hashemi Chaleshtori",
"Atreya Ghosal",
"Ana Marasovic"
] | Workshop/XAIA | [
"https://github.com/utahnlp/nlp-explanation-utility-guideline"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=81FSrQxgEv | @inproceedings{
laguna2023explimeable,
title={Exp{LIME}able: An exploratory framework for {LIME}},
author={Sonia Laguna and Julian Heidenreich and Jiugeng Sun and Nil{\"u}fer Cetin and Ibrahim Al Hazwani and Udo Schlegel and Furui Cheng and Mennatallah El-Assady},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=81FSrQxgEv}
} | ExpLIMEable is a tool to enhance the comprehension of Local Interpretable Model-Agnostic Explanations (LIME), particularly within the realm of medical image analysis. LIME explanations often lack robustness due to variances in perturbation techniques and interpretable function choices. Powered by a convolutional neural network for brain MRI tumor classification, \textit{ExpLIMEable} seeks to mitigate these issues. This explainability tool allows users to tailor and explore the explanation space generated post hoc by different LIME parameters to gain deeper insights into the model's decision-making process, its sensitivity, and limitations. We introduce a novel dimension reduction step on the perturbations seeking to find more informative neighborhood spaces and extensive provenance tracking to support the user. This contribution ultimately aims to enhance the robustness of explanations, key in high-risk domains like healthcare. | ExpLIMEable: An exploratory framework for LIME | [
"Sonia Laguna",
"Julian Heidenreich",
"Jiugeng Sun",
"Nilüfer Cetin",
"Ibrahim Al Hazwani",
"Udo Schlegel",
"Furui Cheng",
"Mennatallah El-Assady"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3oysFpd6Pq | @inproceedings{
ghosh2023influence,
title={Influence Based Approaches to Algorithmic Fairness: A Closer Look},
author={Soumya Ghosh and Prasanna Sattigeri and Inkit Padhi and Manish Nagireddy and Jie Chen},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=3oysFpd6Pq}
} | Off-the-shelf pre-trained models are increasingly common in machine learning. When deployed in the real world, it is essential that such models are not just accurate but also demonstrate qualities like fairness. This paper takes a closer look at recently proposed approaches that edit a pre-trained model for group fairness by re-weighting the training data. We offer perspectives that unify disparate weighting schemes from past studies and pave the way for new weighting strategies to address group fairness concerns. | Influence Based Approaches to Algorithmic Fairness: A Closer Look | [
"Soumya Ghosh",
"Prasanna Sattigeri",
"Inkit Padhi",
"Manish Nagireddy",
"Jie Chen"
] | Workshop/XAIA | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3BX9tM03GT | @inproceedings{
singh2023explaining,
title={Explaining black box text modules in natural language with language models},
author={Chandan Singh and Aliyah Hsu and Richard Antonello and Shailee Jain and Alexander Huth and Bin Yu and Jianfeng Gao},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=3BX9tM03GT}
} | Large language models (LLMs) have demonstrated remarkable prediction performance for a growing array of tasks. However, their rapid proliferation and increasing opaqueness have created a growing need for interpretability. Here, we ask whether we can automatically obtain natural language explanations for black box text modules. A *text module* is any function that maps text to a scalar continuous value, such as a submodule within an LLM or a fitted model of a brain region. *Black box* indicates that we only have access to the module's inputs. We introduce Summarize and Score (SASC), a method that takes in a text module and returns a natural language explanation of the module's selectivity along with a score for how reliable the explanation is. We study SASC in 2 contexts. First, we evaluate SASC on synthetic modules and find that it often recovers ground truth explanations. Second, we use SASC to explain modules found within a pre-trained BERT model, enabling inspection of the model's internals. | Explaining black box text modules in natural language with language models | [
"Chandan Singh",
"Aliyah Hsu",
"Richard Antonello",
"Shailee Jain",
"Alexander Huth",
"Bin Yu",
"Jianfeng Gao"
] | Workshop/XAIA | 2305.09863 | [
"https://github.com/microsoft/automated-explanations"
] | https://huggingface.co/papers/2305.09863 | 5 | 3 | 0 | 7 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=2CfzKrx1vr | @inproceedings{
heo2023use,
title={Use Perturbations when Learning from Explanations},
author={Juyeon Heo and Vihari Piratla and Matthew Wicker and Adrian Weller},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=2CfzKrx1vr}
} | Machine learning from explanations (MLX) is an approach to learning that uses human-provided explanations of relevant or irrelevant features for each input to ensure that model predictions are right for the right reasons. Existing MLX approaches rely on local model interpretation methods and require strong model smoothing to align model and human explanations, leading to sub-optimal performance. We recast MLX as a robustness problem, where human explanations specify a lower dimensional manifold from which perturbations can be drawn, and show both theoretically and empirically how this approach alleviates the need for strong model smoothing. We consider various approaches to achieving robustness, leading to improved performance over prior MLX methods. Finally, we show how to combine robustness with an earlier MLX method, yielding state-of-the-art results on both synthetic and real-world benchmarks. | Use Perturbations when Learning from Explanations | [
"Juyeon Heo",
"Vihari Piratla",
"Matthew Wicker",
"Adrian Weller"
] | Workshop/XAIA | 2303.06419 | [
"https://github.com/vihari/robust_mlx"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=112o4j4VCY | @inproceedings{
kasmi2023assessment,
title={Assessment of the Reliability of a Model's Decision by Generalizing Attribution to the Wavelet Domain},
author={Gabriel Kasmi and Laurent Dubus and Yves-Marie Saint-Drenan and Philippe BLANC},
booktitle={XAI in Action: Past, Present, and Future Applications},
year={2023},
url={https://openreview.net/forum?id=112o4j4VCY}
} | Neural networks have shown remarkable performance in computer vision, but their deployment in numerous scientific and technical fields is challenging due to their black-box nature. Scientists and practitioners need to evaluate the reliability of a decision, i.e., to know simultaneously if a model relies on the relevant features and whether these features are robust to image corruptions. Existing attribution methods aim to provide human-understandable explanations by highlighting important regions in the image domain, but fail to fully characterize a decision process's reliability. To bridge this gap, we introduce the Wavelet sCale Attribution Method (WCAM), a generalization of attribution from the pixel domain to the space-scale domain using wavelet transforms. Attribution in the wavelet domain reveals where and on what scales the model focuses, thus enabling us to assess whether a decision is reliable. Our code is accessible here: \url{https://github.com/gabrielkasmi/spectral-attribution}. | Assessment of the Reliability of a Model's Decision by Generalizing Attribution to the Wavelet Domain | [
"Gabriel Kasmi",
"Laurent Dubus",
"Yves-Marie Saint-Drenan",
"Philippe BLANC"
] | Workshop/XAIA | 2305.14979 | [
"https://github.com/gabrielkasmi/spectral-attribution"
] | https://huggingface.co/papers/2305.14979 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=yqGoKziEvY | @inproceedings{
herrmann2023learning,
title={Learning Useful Representations of Recurrent Neural Network Weight Matrices},
author={Vincent Herrmann and Francesco Faccio and J{\"u}rgen Schmidhuber},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=yqGoKziEvY}
} | Recurrent Neural Networks (RNNs) are general-purpose parallel-sequential computers. The program of an RNN is its weight matrix. Its direct analysis, however, tends to be challenging. Is it possible to learn useful representations of RNN weights that facilitate downstream tasks? While the "Mechanistic Approach" directly 'looks inside' the RNN to predict its behavior, the "Functionalist Approach" analyzes its overall functionality---specifically, its input-output mapping. Our two novel Functionalist Approaches extract information from RNN weights by 'interrogating' the RNN through probing inputs. Our novel theoretical framework for the Functionalist Approach demonstrates conditions under which it can generate rich representations for determining the behavior of RNNs. RNN weight representations generated by Mechanistic and Functionalist approaches are compared by evaluating them in two downstream tasks. Our results show the superiority of Functionalist methods. | Learning Useful Representations of Recurrent Neural Network Weight Matrices | [
"Vincent Herrmann",
"Francesco Faccio",
"Jürgen Schmidhuber"
] | Workshop/NeurReps | [
"https://github.com/vincentherrmann/rnn-weights-representation-learning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=yW1HcKnFcG | @inproceedings{
chetan2023distance,
title={Distance Learner: Incorporating Manifold Prior to Model Training},
author={Aditya Chetan and Nipun Kwatra},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=yW1HcKnFcG}
} | The manifold hypothesis (real-world data concentrates near low-dimensional manifolds) is suggested as the principle behind the effectiveness of machine learning algorithms in very high-dimensional problems that are common in domains such as vision and speech. Multiple methods have been proposed to explicitly incorporate the manifold hypothesis as a prior in modern Deep Neural Networks (DNNs), with varying success. In this paper, we propose a new method, Distance Learner, to incorporate this prior for DNN-based classifiers. Distance Learner is trained to predict the distance of a point from the underlying manifold of each class, rather than the class label. For classification, Distance Learner then chooses the class corresponding to the closest predicted class manifold. Distance Learner can also identify points as being out of distribution (belonging to neither class), if the distance to the closest manifold is higher than a threshold. We evaluate our method on multiple synthetic datasets and show that Distance Learner learns much more meaningful classification boundaries compared to a standard classifier. We also evaluate our method on the task of adversarial robustness and find that it not only outperforms standard classifiers by a large margin but also performs at par with classifiers trained via well-accepted standard adversarial training. | Distance Learner: Incorporating Manifold Prior to Model Training | [
"Aditya Chetan",
"Nipun Kwatra"
] | Workshop/NeurReps | 2207.06888 | [
"https://github.com/microsoft/distance-learner"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wIS0vop9R7 | @inproceedings{
lecomte2023an,
title={An Information-Theoretic Understanding of Maximum Manifold Capacity Representations},
author={Victor Lecomte and Rylan Schaeffer and Berivan Isik and Mikail Khona and Yann LeCun and Sanmi Koyejo and Andrey Gromov and Ravid Shwartz-Ziv},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=wIS0vop9R7}
} | Maximum Manifold Capacity Representations (MMCR) is a recent multi-view self-supervised learning (MVSSL) method that matches or surpasses other leading MVSSL methods. MMCR is interesting for at least two reasons. Firstly, MMCR is an oddity in the zoo of MVSSL methods: it is not (explicitly) contrastive, applies no masking, performs no clustering, leverages no distillation, and does not (explicitly) reduce redundancy. Secondly, while many self-supervised learning (SSL) methods originate in information theory, MMCR distinguishes itself by claiming a different origin: a statistical mechanical characterization of the geometry of linear separability of data manifolds. However, given the rich connections between statistical mechanics and information theory, and given recent work showing how many SSL methods can be understood from an information-theoretic perspective, we conjecture that MMCR can be similarly understood from an information-theoretic perspective. In this paper, we leverage tools from high dimensional probability and information theory to demonstrate that an optimal solution to MMCR's nuclear norm-based objective function is the same optimal solution that maximizes a well-known lower bound on mutual information. | An Information-Theoretic Understanding of Maximum Manifold Capacity Representations | [
"Victor Lecomte",
"Rylan Schaeffer",
"Berivan Isik",
"Mikail Khona",
"Yann LeCun",
"Sanmi Koyejo",
"Andrey Gromov",
"Ravid Shwartz-Ziv"
] | Workshop/NeurReps | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=uOjSFxFz5k | @inproceedings{
sonoda2023joint,
title={Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks},
author={Sho Sonoda and Hideyuki Ishi and Isao Ishikawa and Masahiro Ikeda},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=uOjSFxFz5k}
} | The symmetry and geometry of input data are considered to be encoded in the internal data representation inside the neural network, but the specific encoding rule has been less investigated. In this study, we present a systematic method to induce a generalized neural network and its right inverse operator, called the ridgelet transform, from a joint group invariant function on the data-parameter domain. Since the ridgelet transform is an inverse, (1) it can describe the arrangement of parameters for the network to represent a target function, which is understood as the encoding rule, and (2) it implies the universality of the network. Based on the group representation theory, we present a new simple proof of the universality by using Schur's lemma in a unified manner covering a wide class of networks, for example, the original ridgelet transform, formal deep networks, and the dual voice transform. Since traditional universality theorems were demonstrated based on functional analysis, this study sheds light on the group theoretic aspect of the approximation theory, connecting geometric deep learning to abstract harmonic analysis. | Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks | [
"Sho Sonoda",
"Hideyuki Ishi",
"Isao Ishikawa",
"Masahiro Ikeda"
] | Workshop/NeurReps | 2310.03530 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=u7r2160QiP | @inproceedings{
sortur2023sample,
title={Sample Efficient Modeling of Drag Coefficients for Satellites with Symmetry},
author={Neel Sortur and Linfeng Zhao and Robin Walters},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=u7r2160QiP}
} | Accurate knowledge of the atmospheric drag coefficient for a satellite in low Earth orbit is crucial to plan an orbit that avoids collisions with other spacecraft, but its calculation has high uncertainty and is very expensive to numerically compute for long-horizon predictions. Previous work has improved coefficient modeling speed with data-driven approaches, but these models do not utilize domain symmetry. This work investigates enforcing the invariance of atmospheric particle deflections off certain satellite geometries, resulting in higher sample efficiency and theoretically more robustness for data-driven methods. We train $G$-equivariant MLPs to predict the drag coefficient, where $G$ defines invariances of the coefficient across different orientations of the satellite. We experiment on a synthetic dataset computed using the numerical Test Particle Monte Carlo (TPMC) method, where particles are fired at a satellite in the computational domain. We find that our method is more sample and computationally efficient than unconstrained baselines, which is significant because TPMC simulations are extremely computationally expensive. | Sample Efficient Modeling of Drag Coefficients for Satellites with Symmetry | [
"Neel Sortur",
"Linfeng Zhao",
"Robin Walters"
] | Workshop/NeurReps | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=tIrGgIn8jr | @inproceedings{
lu2023ames,
title={{AMES}: A Differentiable Embedding Space Selection Framework for Latent Graph Inference},
author={Yuan Lu and Haitz S{\'a}ez de Oc{\'a}riz Borde and Pietro Lio},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=tIrGgIn8jr}
} | In real-world scenarios, although data entities may possess inherent relationships, the specific graph illustrating their connections might not be directly accessible. Latent graph inference addresses this issue by enabling Graph Neural Networks (GNNs) to operate on point cloud data, dynamically learning the necessary graph structure. These graphs are often derived from a latent embedding space, which can be modeled using Euclidean, hyperbolic, spherical, or product spaces. However, currently, there is no principled differentiable method for determining the optimal embedding space. In this work, we introduce the Attentional Multi-Embedding Selection (AMES) framework, a differentiable method for selecting the best embedding space for latent graph inference through backpropagation, considering a downstream task. Our framework consistently achieves comparable or superior results compared to previous methods for latent graph inference across five benchmark datasets. Importantly, our approach eliminates the need for conducting multiple experiments to identify the optimal embedding space. Furthermore, we explore interpretability techniques that track the gradient contributions of different latent graphs, shedding light on how our attention-based, fully differentiable approach learns to choose the appropriate latent space. In line with previous works, our experiments emphasize the advantages of hyperbolic spaces in enhancing performance. More importantly, our interpretability framework provides a general approach for quantitatively comparing embedding spaces across different tasks based on their contributions, a dimension that has been overlooked in previous literature on latent graph inference. | AMES: A Differentiable Embedding Space Selection Framework for Latent Graph Inference | [
"Yuan Lu",
"Haitz Sáez de Ocáriz Borde",
"Pietro Lio"
] | Workshop/NeurReps | 2311.11891 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=rmdSVvC1Qk | @inproceedings{
vastola2023optimal,
title={Optimal packing of attractor states in neural representations},
author={John Vastola},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=rmdSVvC1Qk}
} | Animals' internal states reflect variables like their position in space, orientation, decisions, and motor actions—but how should these internal states be arranged? Internal states which frequently transition between one another should be close enough that transitions can happen quickly, but not so close that neural noise significantly impacts the stability of those states, and how reliably they can be encoded and decoded. In this paper, we study the problem of striking a balance between these two concerns, which we call an 'optimal packing' problem since it resembles mathematical problems like sphere packing. While this problem is generally extremely difficult, we show that symmetries in environmental transition statistics imply certain symmetries of the optimal neural representations, which allows us in some cases to exactly solve for the optimal state arrangement. We focus on two toy cases: uniform transition statistics, and cyclic transition statistics. | Optimal packing of attractor states in neural representations | [
"John Vastola"
] | Workshop/NeurReps | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ql3u5ITQ5C | @inproceedings{
murray2023grokking,
title={Grokking in recurrent networks with attractive and oscillatory dynamics},
author={Keith Murray},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=ql3u5ITQ5C}
} | Generalization is perhaps the most salient property of biological intelligence. In the context of artificial neural networks (ANNs), generalization has been studied through investigating the recently-discovered phenomenon of "grokking" whereby small transformers generalize on modular arithmetic tasks. We extend this line of work to continuous time recurrent neural networks (CT-RNNs) to investigate generalization in neural systems. Inspired by the card game SET, we reformulated previous modular arithmetic tasks as a binary classification task to elicit interpretable CT-RNN dynamics. We found that CT-RNNs learned one of two dynamical mechanisms characterized by either attractive or oscillatory dynamics. Notably, both of these mechanisms displayed strong parallels to deterministic finite automata (DFA). In our grokking experiments, we found that attractive dynamics generalize more frequently in training regimes with few withheld data points while oscillatory dynamics generalize more frequently in training regimes with many withheld data points. | Grokking in recurrent networks with attractive and oscillatory dynamics | [
"Keith Murray"
] | Workshop/NeurReps | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=qMdWGydOli | @inproceedings{
portilheiro2023quantifying,
title={Quantifying Lie Group Learning with Local Symmetry Error},
author={Vasco Portilheiro},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=qMdWGydOli}
} | Despite increasing interest in using machine learning to discover symmetries, no quantitative measure has been proposed in order to compare the performance of different algorithms. Our proposal, both intuitively and theoretically grounded, is to compare Lie groups using a *local symmetry error*, based on the difference between their infinitesimal actions at any possible datapoint. Namely, we use a well-studied metric to compare the induced tangent spaces. We obtain an upper bound on this metric which is uniform across datapoints, under some conditions. We show that when one of the groups is a circle group, this bound is furthermore both tight and easily computable, thus globally characterizing the local errors. We demonstrate our proposal by quantitatively evaluating an existing algorithm. We note that our proposed metric has deficiencies in comparing tangent spaces of different dimensions, as well as distinct groups whose local actions are similar. | Quantifying Lie Group Learning with Local Symmetry Error | [
"Vasco Portilheiro"
] | Workshop/NeurReps | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=q1zZJrXoIe | @inproceedings{
feng2023how,
title={How do language models bind entities in context?},
author={Jiahai Feng and Jacob Steinhardt},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=q1zZJrXoIe}
} | To correctly use in-context information, language models (LMs) must bind entities to their attributes. For example, given a context describing a "green square" and a "blue circle", LMs must bind the shapes to their respective colors. We analyze LM representations and identify the binding ID mechanism: a general mechanism for solving the binding problem, which we observe in every sufficiently large model from the Pythia and LLaMA families. Using causal interventions, we show that LMs' internal activations represent binding information by attaching binding ID vectors to corresponding entities and attributes. We further show that binding ID vectors form a continuous subspace, in which distances between binding ID vectors reflect their discernability. Overall, our results uncover interpretable strategies in LMs for representing symbolic knowledge in-context, providing a step towards understanding general in-context reasoning in large-scale LMs. | How do language models bind entities in context? | [
"Jiahai Feng",
"Jacob Steinhardt"
] | Workshop/NeurReps | 2310.17191 | [
"https://github.com/jiahai-feng/binding-iclr"
] | https://huggingface.co/papers/2310.17191 | 1 | 0 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=oD8DD5jQ5I | @inproceedings{
charvin2023towards,
title={Towards Information Theory-Based Discovery of Equivariances},
author={Hippolyte Charvin and Nicola Catenacci Volpi and Daniel Polani},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=oD8DD5jQ5I}
} | The presence of symmetries imposes a stringent set of constraints on a system. This constrained structure allows intelligent agents interacting with such a system to drastically improve the efficiency of learning and generalization, through the internalisation of the system's symmetries into their information-processing. In parallel, principled models of complexity-constrained learning and behaviour make increasing use of information-theoretic methods. Here, we wish to marry these two perspectives and understand whether and in which form the information-theoretic lens can "see" the effect of symmetries of a system. For this purpose, we propose a novel variant of the Information Bottleneck principle, which has served as a productive basis for many principled studies of learning and information-constrained adaptive behaviour. We show (in the discrete case) that our approach formalises a certain duality between symmetry and information parsimony: namely, channel equivariances can be characterised by the optimal mutual information-preserving joint compression of the channel's input and output. This information-theoretic treatment furthermore suggests a principled notion of "soft" equivariance, whose "coarseness" is measured by the amount of input-output mutual information preserved by the corresponding optimal compression. This new notion offers a bridge between the field of bounded rationality and the study of symmetries in neural representations. The framework may also allow (exact and soft) equivariances to be automatically discovered. | Towards Information Theory-Based Discovery of Equivariances | [
"Hippolyte Charvin",
"Nicola Catenacci Volpi",
"Daniel Polani"
] | Workshop/NeurReps | 2310.16555 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=mQ1gpEXE3W | @inproceedings{
zhao2023improving,
title={Improving Convergence and Generalization Using Parameter Symmetries},
author={Bo Zhao and Robert Gower and Robin Walters and Rose Yu},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=mQ1gpEXE3W}
} | In overparametrized models, different parameter values may result in the same loss. Parameter space symmetries are loss-invariant transformations that change the model parameters. Teleportation applies such transformations to accelerate optimization. However, the exact mechanism behind this algorithm's success is not well understood. In this paper, we prove that teleportation gives overall faster time to convergence. Additionally, teleporting to minima with different curvatures improves generalization, which suggests a connection between the curvature of the minima and generalization ability. Finally, we show that integrating teleportation into optimization-based meta-learning improves convergence over traditional algorithms that perform only local updates. Our results showcase the versatility of teleportation and demonstrate the potential of incorporating symmetry in optimization. | Improving Convergence and Generalization Using Parameter Symmetries | [
"Bo Zhao",
"Robert Gower",
"Robin Walters",
"Rose Yu"
] | Workshop/NeurReps | 2305.13404 | [
"https://github.com/rose-stl-lab/teleportation-optimization"
] | https://huggingface.co/papers/2305.13404 | 2 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |