bibtex_url: null
proceedings: string (length 42)
bibtext: string (length 197 to 848)
abstract: string (length 303 to 3.45k)
title: string (length 10 to 159)
authors: sequence (1 to 34 items)
id: string (44 distinct values)
arxiv_id: string (length 0 to 10)
GitHub: sequence (1 item)
paper_page: string (899 distinct values)
n_linked_authors: int64 (-1 to 13)
upvotes: int64 (-1 to 109)
num_comments: int64 (-1 to 13)
n_authors: int64 (-1 to 92)
Models: sequence (0 to 100 items)
Datasets: sequence (0 to 19 items)
Spaces: sequence (0 to 100 items)
old_Models: sequence (0 to 100 items)
old_Datasets: sequence (0 to 19 items)
old_Spaces: sequence (0 to 100 items)
paper_page_exists_pre_conf: int64 (0 to 1)
type: string (2 distinct values)
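The listing above gives the column schema of this paper-metadata dump; the records that follow repeat those fields in order, with empty fields omitted. As a loose illustration of how such a dataset might be loaded and queried with the Hugging Face `datasets` library, here is a minimal sketch; the repository id and split name are placeholders, not a real published dataset.

```python
# Minimal sketch, assuming the records below are published as a Hugging Face
# dataset; "user/neurips-2023-workshop-papers" is a hypothetical repository id.
from datasets import load_dataset

ds = load_dataset("user/neurips-2023-workshop-papers", split="train")

# Oral presentations from the AI4Mat workshop.
orals = ds.filter(lambda r: r["id"] == "Workshop/AI4Mat" and r["type"] == "oral")

# Papers whose Hugging Face paper page already existed before the conference.
with_pages = ds.filter(lambda r: r["paper_page_exists_pre_conf"] == 1)

for row in orals:
    print(row["title"], "->", row["proceedings"])
```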
null
https://openreview.net/forum?id=3yJB9iHq65
@inproceedings{ brahma2023accelerated, title={Accelerated Modelling of Interfaces for Electronic Devices using Graph Neural Networks}, author={Pratik Brahma and Krishnakumar Sivaganesh Bhattaram and Sayeef Salahuddin}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=3yJB9iHq65} }
Modern microelectronic devices are composed of interfaces between a large number of materials, many of which are in amorphous or polycrystalline phases. Modeling such non-crystalline materials using first-principles methods such as density functional theory is often numerically intractable. Recently, graph neural networks (GNNs) have shown potential to achieve linear complexity with accuracies comparable to ab-initio methods. Here, we demonstrate the applicability of GNNs to accelerate the atomistic computational pipeline for predicting macroscopic transistor transport characteristics via learning microscopic physical properties. We generate amorphous heterostructures, specifically the HfO$_2$-SiO$_2$-Si semiconductor-dielectric transistor gate stack, via GNN predicted atomic forces, and show excellent accuracy in predicting transport characteristics including injection velocity for nanoslab silicon channels. This work paves the way for faster and more scalable methods to model modern advanced electronic devices via GNNs.
Accelerated Modelling of Interfaces for Electronic Devices using Graph Neural Networks
[ "Pratik Brahma", "Krishnakumar Sivaganesh Bhattaram", "Sayeef Salahuddin" ]
Workshop/AI4Mat
2310.06995
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3Huw3pa8TR
@inproceedings{ ottomano2023investigating, title={Investigating extrapolation and low-data challenges via contrastive learning of chemical compositions}, author={Federico Ottomano and Giovanni De Felice and Rahul Savani and Vladimir Gusev and Matthew Rosseinsky}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=3Huw3pa8TR} }
Practical applications of machine learning for materials discovery remain severely limited by the quantity and quality of the available data. Furthermore, little is known about the ability of machine learning models to extrapolate outside of the training distribution, which is essential for the discovery of compounds with extraordinary properties. To address these challenges, we develop a novel deep representation learning framework for chemical compositions. The proposed model, named COmpositional eMBedding NETwork (CombNet), combines recent developments in graph-based encoding of chemical compositions with a supervised contrastive learning approach. This is motivated by the observation that contrastive learning can produce a regularized representation space from raw data, offering empirical benefits for extrapolation in low-data scenarios. Moreover, our method harnesses exclusively the chemical composition of the underlying materials, as crystal structure is generally unavailable before the material is discovered. We demonstrate the effectiveness of CombNet over state-of-the-art methods under a bespoke evaluation scheme that simulates a realistic materials discovery scenario with experimental data.
Investigating extrapolation and low-data challenges via contrastive learning of chemical compositions
[ "Federico Ottomano", "Giovanni De Felice", "Rahul Savani", "Vladimir Gusev", "Matthew Rosseinsky" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=3GiwwOJ1be
@inproceedings{ munjal2023extracting, title={Extracting a Database of Challenges and Mitigation Strategies for Sodium-ion Battery Development}, author={Mrigi Munjal and Thorben Prein and Vineeth Venugopal and Kevin J Huang and Elsa Olivetti}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=3GiwwOJ1be} }
Sodium-ion batteries (SIBs) are emerging as a promising solution for grid-scale energy storage applications due to the widespread availability of sodium and the anticipated cost-effectiveness. The manufacturing expertise established for lithium-ion batteries (LIBs) offers a solid foundation for the development of SIBs. However, to realize their full potential, specific challenges related to the synthesis and performance of electrode materials in SIBs must be overcome. This work extracts a large database of challenges limiting the performance and synthesis of SIB cathode active materials (CAMs) and pairs these challenges with corresponding proposed mitigation strategies from the SIB literature by employing custom natural language processing (NLP) tools. The database is meant to help scientists expedite the development and exploration of SIBs.
Extracting a Database of Challenges and Mitigation Strategies for Sodium-ion Battery Development
[ "Mrigi Munjal", "Thorben Prein", "Vineeth Venugopal", "Kevin J Huang", "Elsa Olivetti" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=32XS0zXPqU
@inproceedings{ lin2023deep, title={Deep inverse design of hydrophobic patches on {DNA} origami for mesoscale assembly of superlattices}, author={Po-An Lin and Simiao Ren and Jonathan Caswell Piland and Leslie M. Collins and Stefan Zauscher and Yonggang Ke and Gaurav Arya}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=32XS0zXPqU} }
A major challenge in DNA nanotechnology is to extend the length scale of DNA structures from the nanoscale to the microscale to enable applications in cargo delivery, sensing, optical devices, and soft robotics. Self-assembly of DNA origami building blocks provides a promising approach for fabricating such higher-order structures. Inspired by self-assembly of patchy colloidal particles, researchers have recently begun to introduce patches of mutually attractive chemical moieties at designated sites on DNA origami to assemble them into complex higher-order architectures. However, designing such functionalized DNA origamis to target specific assembly structures is highly challenging because the underlying relationship between the building block design and assembly structure is very complex. Machine learning is especially well suited for such inverse-design tasks. In this work, we develop a coarse-grained model of DNA origami nanocubes grafted with hydrophobic brushes and employ the neural adjoint (NA) method to explore highly ordered target assemblies of such origamis, including checkerboard, honeycomb, and Kagome lattices. We envision that our design approach can be generalized to more complex designs and used to tailor structural properties to expand the application space of DNA nanotechnology.
Deep inverse design of hydrophobic patches on DNA origami for mesoscale assembly of superlattices
[ "Po-An Lin", "Simiao Ren", "Jonathan Caswell Piland", "Leslie M. Collins", "Stefan Zauscher", "Yonggang Ke", "Gaurav Arya" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2jTnJFijQT
@inproceedings{ burark2023codbench, title={Co{DB}ench: A Critical Evaluation of Data-driven Models for Continuous Dynamical Systems}, author={Priyanshu Burark and Karn Tiwari and Meer Mehran Rashid and Prathosh AP and N M Anoop Krishnan}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=2jTnJFijQT} }
Continuous dynamical systems, characterized by differential equations, are ubiquitously used to model several important problems: plasma dynamics, flow through porous media, weather forecasting, and epidemic dynamics. Recently, a wide range of data-driven models has been used successfully to model these systems. However, in contrast to established fields like computer vision, limited studies are available analyzing the strengths and potential applications of different classes of these models that could steer decision-making in scientific machine learning. Here, we introduce CoDBENCH, an exhaustive benchmarking suite comprising 11 state-of-the-art data-driven models for solving differential equations. Specifically, we comprehensively evaluate 4 distinct categories of models, viz., feed forward neural networks, deep operator regression models, frequency-based neural operators, and transformer architectures against 8 widely applicable benchmark datasets encompassing challenges from fluid and solid mechanics. We conduct extensive experiments, assessing the operators’ capabilities in learning, zero-shot super-resolution, data efficiency, robustness to noise, and computational efficiency. Interestingly, our findings highlight that current operators struggle with the newer mechanics datasets, motivating the need for more robust neural operators. All the datasets and codes are shared in an easy-to-use fashion for the scientific community. We hope this resource will be an impetus for accelerated progress and exploration in modeling dynamical systems. For codes and datasets, see: https://anonymous.4open.science/r/cod-bench-7525.
CoDBench: A Critical Evaluation of Data-driven Models for Continuous Dynamical Systems
[ "Priyanshu Burark", "Karn Tiwari", "Meer Mehran Rashid", "Prathosh AP", "N M Anoop Krishnan" ]
Workshop/AI4Mat
2310.01650
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2PucSD895t
@inproceedings{ li2023predicting, title={Predicting and Interpreting Energy Barriers of Metallic Glasses with Graph Neural Networks}, author={Haoyu Li and Shichang Zhang and Longwen Tang and Mathieu Bauchy and Yizhou Sun}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=2PucSD895t} }
Metallic Glasses (MGs) are widely used disordered materials. Understanding the relationship between the local structure and physical properties of MGs is one of the greatest challenges for both material science and condensed matter physics. In this work, we utilize Graph Neural Networks (GNNs) to model the atomic graph structure and study the connection between the structure and the corresponding local energy barrier, which is believed to govern many critical physical properties in MGs. One of our key contributions is to propose a novel \textit{Symmetrized GNN} (SymGNN) model for predicting the energy barriers, which is invariant under orthogonal transformations of the structure, e.g., rotations and reflections. Such invariance is a desired property that standard GNNs like Graph Convolutional Networks cannot capture. SymGNNs handle the invariance by aggregating over orthogonal transformations of the graph structure for representation learning, and an optimal distribution over all 3D orthogonal transformations $\mathcal{O}_3$ is learned to maximize the benefit of invariance. We demonstrate in our experiments that SymGNN can significantly improve the energy barrier prediction over other GNNs and non-graph machine learning models. With such an accurate model, we also apply graph explanation algorithms to better reveal the structure-property relationship of MGs. Our GNN framework allows effective prediction of material physical properties and bolsters material science research through the use of AI models.
Predicting and Interpreting Energy Barriers of Metallic Glasses with Graph Neural Networks
[ "Haoyu Li", "Shichang Zhang", "Longwen Tang", "Mathieu Bauchy", "Yizhou Sun" ]
Workshop/AI4Mat
2401.08627
[ "https://github.com/haoyuli02/symgnn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
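The SymGNN record above hinges on aggregating predictions over orthogonal transformations so that energy-barrier estimates are invariant to rotations and reflections. The sketch below is not the authors' SymGNN implementation; it only illustrates, with a deliberately non-invariant toy predictor and made-up helper names (`toy_model`, `random_rotation`, `averaged_prediction`), how averaging over sampled 3D rotations suppresses orientation dependence.

```python
# Illustrative sketch only (not the SymGNN model from the record above):
# approximate O(3) invariance by averaging a non-invariant predictor over
# randomly sampled 3D rotations.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(coords: np.ndarray) -> float:
    # Deliberately orientation-dependent stand-in for a learned predictor.
    return float(np.abs(coords[:, 0]).sum())

def random_rotation() -> np.ndarray:
    # QR of a Gaussian matrix, with column signs fixed, gives a random
    # orthogonal matrix.
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.diag(r))

def averaged_prediction(coords: np.ndarray, n_samples: int = 512) -> float:
    # Aggregate the predictor over sampled orthogonal transformations.
    preds = [toy_model(coords @ random_rotation().T) for _ in range(n_samples)]
    return float(np.mean(preds))

atoms = rng.normal(size=(12, 3))          # toy atomic configuration
rotated = atoms @ random_rotation().T     # same structure, new orientation
print(toy_model(atoms), toy_model(rotated))                      # differ
print(averaged_prediction(atoms), averaged_prediction(rotated))  # nearly equal
```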
null
https://openreview.net/forum?id=0r5DE2ZSwJ
@inproceedings{ gruver2023finetuned, title={Fine-Tuned Language Models Generate Stable Inorganic Materials as Text}, author={Nate Gruver and Anuroop Sriram and Andrea Madotto and Andrew Gordon Wilson and C. Lawrence Zitnick and Zachary Ward Ulissi}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=0r5DE2ZSwJ} }
Deep learning models have drastically accelerated materials discovery by speeding up predictive computational simulations like density functional theory (DFT). Large open computational materials databases such as the Materials Project or OQMD contain O($10^6$) known structures, and it is now straightforward to search those databases for materials with exciting properties. However, these databases are limited to experimentally known materials or candidates discovered in high-throughput computational campaigns. Many state-of-the-art engineering advances in solar photovoltaics, battery electrodes, and catalysts are enabled by materials with outstanding properties that have not yet been discovered. Generative models are a natural solution to expand families of interest through sampling. While popular methods are typically constructed from variational autoencoders or diffusion models, we propose fine-tuning large language models for generation of stable materials. While unorthodox, fine-tuning large language models on text-encoded atomistic data is simple to implement yet reliable, with around 90\% of sampled structures obeying physical constraints on atom positions and charges. Using energy above hull calculations from both learned ML potentials and gold-standard DFT calculations, we show that our strongest model (fine-tuned LLaMA-2 70B) can generate materials predicted to be metastable at about twice the rate (49\% vs 28\%) of CDVAE, a competing diffusion model. Because of text prompting's inherent flexibility, our models can simultaneously be used for unconditional generation of stable materials, infilling of partial structures, and text-conditional generation. Finally, we show that language models' ability to capture key symmetries of crystal structures improves with model scale, suggesting that the biases of pretrained LLMs are surprisingly well-suited for atomistic data.
Fine-Tuned Language Models Generate Stable Inorganic Materials as Text
[ "Nate Gruver", "Anuroop Sriram", "Andrea Madotto", "Andrew Gordon Wilson", "C. Lawrence Zitnick", "Zachary Ward Ulissi" ]
Workshop/AI4Mat
2402.04379
[ "https://github.com/facebookresearch/crystal-llm" ]
https://huggingface.co/papers/2402.04379
3
7
1
6
[ "n0w0f/MatText-crystal-txt-llm-2m" ]
[ "n0w0f/MatText" ]
[]
[ "n0w0f/MatText-crystal-txt-llm-2m" ]
[ "n0w0f/MatText" ]
[]
1
oral
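The record above fine-tunes language models on "text-encoded atomistic data". As a loose illustration of that idea, here is a minimal sketch of serializing a crystal structure as plain text; the exact string format used by the paper may differ, and `encode_crystal` plus the rock-salt-like toy cell are made up for this sketch.

```python
# Loose illustration of text-encoding a crystal structure, in the spirit of
# the record above (not the paper's exact serialization).
def encode_crystal(lattice, species, frac_coords):
    """Serialize lattice parameters and fractional coordinates as plain text."""
    lines = [" ".join(f"{x:.2f}" for x in lattice)]
    for s, (a, b, c) in zip(species, frac_coords):
        lines.append(f"{s} {a:.3f} {b:.3f} {c:.3f}")
    return "\n".join(lines)

text = encode_crystal(
    lattice=[4.2, 4.2, 4.2, 90.0, 90.0, 90.0],   # a, b, c, alpha, beta, gamma
    species=["Na", "Cl"],
    frac_coords=[(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)],
)
print(text)  # strings like this are what the fine-tuned LLM learns to emit
```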
null
https://openreview.net/forum?id=0Qx8vlXKRk
@inproceedings{ bishnoi2023brognet, title={Bro{GN}et: Momentum-Conserving Graph Neural Stochastic Differential Equation for Learning Brownian Dynamics}, author={Suresh Bishnoi and Jayadeva Jayadeva and Sayan Ranu and N M Anoop Krishnan}, booktitle={AI for Accelerated Materials Design - NeurIPS 2023 Workshop}, year={2023}, url={https://openreview.net/forum?id=0Qx8vlXKRk} }
Neural networks (NNs) that exploit strong inductive biases based on physical laws and symmetries have shown remarkable success in learning the dynamics of physical systems directly from their trajectory. However, these works focus only on the systems that follow deterministic dynamics, such as Newtonian or Hamiltonian. Here, we propose a framework, namely Brownian graph neural networks (BroGNet), combining stochastic differential equations (SDEs) and GNNs to learn Brownian dynamics directly from the trajectory. We modify the architecture of BroGNet to enforce linear momentum conservation of the system, which, in turn, provides superior performance on learning dynamics as revealed empirically. We demonstrate this approach on several systems, namely, linear spring, linear spring with binary particle types, and non-linear spring systems, all following Brownian dynamics at finite temperatures. We show that BroGNet significantly outperforms proposed baselines across all the benchmarked Brownian systems. In addition, we demonstrate zero-shot generalizability of BroGNet to simulate unseen system sizes that are two orders of magnitude larger and to different temperatures than those used during training. Finally, we show that BroGNet conserves the momentum of the system, resulting in superior performance and data efficiency. Altogether, our study contributes to advancing the understanding of the intricate dynamics of Brownian motion and demonstrates the effectiveness of graph neural networks in modeling such complex systems.
BroGNet: Momentum-Conserving Graph Neural Stochastic Differential Equation for Learning Brownian Dynamics
[ "Suresh Bishnoi", "Jayadeva Jayadeva", "Sayan Ranu", "N M Anoop Krishnan" ]
Workshop/AI4Mat
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=z5dAdYOgbs
@inproceedings{ wang2023model, title={Model Evaluation for Geospatial Problems}, author={Jing Wang and Tyler Hallman and Laurel Hopkins and John Burns Kilbride and W. Douglas Robinson and Rebecca Hutchinson}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=z5dAdYOgbs} }
Geospatial problems often involve spatial autocorrelation and covariate shift, which violate the independent, identically distributed assumption underlying standard cross-validation. In this work, we establish a theoretical criterion for unbiased cross-validation, introduce a preliminary categorization framework to guide practitioners in choosing suitable cross-validation strategies for geospatial problems, reconcile conflicting recommendations on best practices, and develop a novel, straightforward method with both theoretical guarantees and empirical success.
Model Evaluation for Geospatial Problems
[ "Jing Wang", "Tyler Hallman", "Laurel Hopkins", "John Burns Kilbride", "W. Douglas Robinson", "Rebecca Hutchinson" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xIffUnZBqx
@inproceedings{ sangarya2023aggregate, title={Aggregate Representation Measure for Predictive Model Reusability}, author={Vishwesh Sangarya and Richard M Bradford and Jung-Eun Kim}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=xIffUnZBqx} }
In this paper, we propose a predictive quantifier to estimate the retraining cost of a trained model in distribution shifts. The proposed Aggregated Representation Measure (ARM) quantifies the change in the model's representation from the old to new data distribution. It provides, before actually retraining the model, a single concise index of resources - epochs, energy, and carbon emissions - required for the retraining. This enables reuse of a model with a much lower cost than training a new model from scratch. The experimental results indicate that ARM reasonably predicts retraining costs for varying noise intensities and enables comparisons among multiple model architectures to determine the most cost-effective and sustainable option.
Aggregate Representation Measure for Predictive Model Reusability
[ "Vishwesh Sangarya", "Richard M Bradford", "Jung-Eun Kim" ]
Workshop/CompSust
2405.09600
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=vjwYYlA8Pj
@inproceedings{ moorosi2023ai, title={{AI} for Whom? Shedding Critical Light on {AI} for Social Good}, author={Nyalleng Moorosi and Raesetje Sefala and Sasha Luccioni}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=vjwYYlA8Pj} }
In recent years, AI for Social Good (AI4SG) projects have grown in scope and popularity, covering a variety of topics from climate change to education and being the subject of numerous workshops and conferences at a global scale. In the current article, we reflect upon AI4SG, its definition and its current limitations. We propose ways to address these limitations, from connecting with relevant disciplines to a better consideration of the constraints and context of project deployment. We conclude with a proposal to refocus the field of AI4SG around the concept of sustainability from a variety of angles, arguing that this will help the field evolve while taking its own impacts into account.
AI for Whom? Shedding Critical Light on AI for Social Good
[ "Nyalleng Moorosi", "Raesetje Sefala", "Sasha Luccioni" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=oa7XXLuJnO
@inproceedings{ aiken2023moving, title={Moving targets: When does a poverty prediction model need to be updated?}, author={Emily Aiken and Tim Ohlenburg and Joshua Blumenstock}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=oa7XXLuJnO} }
A key challenge in the design of effective social protection programs is determining who should be eligible for program benefits. In low and middle-income countries, one of the most common criteria is a Proxy Means Test (PMT) -- a rudimentary application of machine learning that uses a short list of household characteristics to predict whether each household is poor, and therefore eligible, or non-poor, and therefore ineligible. Using nationwide survey data from six low and middle-income countries, this paper documents an important weakness in this use of machine learning: that the accuracy of the PMT prediction algorithm decreases steadily over time, by roughly 1.5-1.9 percentage points per year. We illustrate the implications of this finding for real-world anti-poverty programs, which typically update the PMT model only every 5-8 years, and then show that the aggregate effect can be decomposed into two forces: "model decay" caused by model drift, and "data decay" caused by changing household characteristics. Our final set of results show how an understanding of these forces can be used to optimize data collection policies to improve the efficiency of social protection programs.
Moving targets: When does a poverty prediction model need to be updated?
[ "Emily Aiken", "Tim Ohlenburg", "Joshua Blumenstock" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=dmjT841VuV
@inproceedings{ chauhan2023reallight, title={{REALLIGHT}: {DRL} based Intersection Control in Developing Countries without Traffic Simulators}, author={Sachin Chauhan and Rijurekha Sen}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=dmjT841VuV} }
Effective traffic intersection control is a crucial problem for urban sustainability. State-of-the-art research applying Artificial Intelligence (AI), for example Deep Reinforcement Learning (DRL), to traffic control relies on traffic simulators, ignoring the shortcomings of the simulators used to train the DRL control algorithms. These simulators are limited in capturing fine nuances in traffic flow changes, which can make the trained models unrealistic. This is especially true in developing countries, where traffic flow is non-laned and chaotic, and extremely hard to simulate with standard microscopic-model based traffic simulation rules. In this paper, we seek to do away with traffic simulators and instead train DRL systems on 40 hours of real traffic data collected by deploying cameras at a busy New Delhi traffic intersection, making intelligent traffic intersection control more realistic for developing countries; we therefore term our system REALLIGHT.
REALLIGHT: DRL based Intersection Control in Developing Countries without Traffic Simulators
[ "Sachin Chauhan", "Rijurekha Sen" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Zj74fKRgCE
@inproceedings{ eskandari2023multifidelity, title={Multi-fidelity Bayesian Optimisation of Syngas Fermentation Simulators}, author={Mahdi Eskandari and Lars Puiman and Jakob Zeitler}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=Zj74fKRgCE} }
A Bayesian optimization approach for maximizing the gas conversion rate in syngas fermentation is presented. We have access to an expensive-to-evaluate, computational fluid dynamic (CFD) reactor model and a cheap ideal-mixing based reactor model. The goal is to maximize the gas conversion rate with respect to the input variables. Due to the high cost of the industrial simulator, a multi-fidelity Bayesian optimization is adopted to solve the optimization problem using both high and low fidelities. We first describe the problem of syngas fermentation followed by our approach to solving simulator optimisation using multiple fidelities. We discuss concerns regarding significant differences in fidelity cost and their impact on fidelity-sampling and conclude with a discussion on the integration of real-world fermentation data.
Multi-fidelity Bayesian Optimisation of Syngas Fermentation Simulators
[ "Mahdi Eskandari", "Lars Puiman", "Jakob Zeitler" ]
Workshop/CompSust
2311.05776
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Y08yLPFm1z
@inproceedings{ kay2023unsupervised, title={Unsupervised Domain Adaptation in the Real World: A Case Study in Sonar Video}, author={Justin Kay and Suzanne Stathatos and Siqi Deng and Erik Young and Pietro Perona and Sara Beery and Grant Van Horn}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=Y08yLPFm1z} }
In real-world applications of machine learning, adaptation to new domains (e.g. new regions, new populations, new sensors, or new points in time) has been shown to be an ongoing challenge. In unsupervised domain adaptation, the assumption is that the user has access to a large labeled set of source domain data, and the goal is to adapt to a new target domain without the use of any labeled target data. The open question is how unlabeled samples from the target domain should be incorporated into the model training process. In this work we document our experiences applying recently proposed unsupervised domain adaptation techniques for object detection to a novel application domain: counting fish in sonar video. We find that: (i) prior works that show progress on standard domain adaptation benchmark datasets do not necessarily translate to our domain, (ii) validation methods are often unrealistic in these prior works, and (iii) higher complexity (in terms of implementation and parameters) techniques work better. We aim for this work to be a useful guide for other practitioners looking to use unsupervised domain adaptation techniques in real-world applications.
Unsupervised Domain Adaptation in the Real World: A Case Study in Sonar Video
[ "Justin Kay", "Suzanne Stathatos", "Siqi Deng", "Erik Young", "Pietro Perona", "Sara Beery", "Grant Van Horn" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WS9liG8rxI
@inproceedings{ raghavan2023identifying, title={Identifying Stop-and-Go Congestion with Data-Driven Traffic Reconstruction}, author={Shreyaa Raghavan and Edgar Ramirez Sanchez and Cathy Wu}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=WS9liG8rxI} }
Identifying stop-and-go events (SAGs) in traffic flow presents an important avenue for advancing data-driven research for climate change mitigation and sustainability, owing to their substantial impact on carbon emissions, travel time, fuel consumption, and roadway safety. In fact, SAGs are estimated to account for 33-50% of highway driving externalities. However, insufficient attention has been paid to precisely quantifying where, when, and how often these SAGs take place, which is necessary for downstream decision-making, such as intervention design and policy analysis. A key challenge is that the data available to researchers and governments are typically sparse and aggregated to a granularity that obscures SAGs. To overcome such data limitations, this study thus explores the use of traffic reconstruction techniques for SAG identification. In particular, we introduce a kernel-based method for identifying spatiotemporal features in traffic and leverage bootstrapping to quantify the uncertainty of the reconstruction process. Experimental results on California highway data demonstrate the promise of the method for capturing SAGs. This work contributes to a foundation for data-driven decision-making to advance the sustainability of traffic systems.
Identifying Stop-and-Go Congestion with Data-Driven Traffic Reconstruction
[ "Shreyaa Raghavan", "Edgar Ramirez Sanchez", "Cathy Wu" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UIfp44BWDi
@inproceedings{ sankaranarayanan2023is, title={Is the Facebook Ad Algorithm a Climate Discourse Influencer?}, author={Aruna Sankaranarayanan and Erik Hemberg and Piotr Sapiezynski and Una-May O'Reilly}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=UIfp44BWDi} }
Sponsored climate discourse, driven by both climate contrarians and advocates, influences public attitudes towards climate change. We present an experimental study suggesting that the Facebook advertisement algorithm also influences climate discourse. The algorithm preferentially delivers ads to Facebook audiences in certain locations and demographics, at least partially based upon the ad image. Further, the algorithm is biased in terms of how it delivers ads featuring images of non-renewable sources of energy, and does not always fulfill targeting intentions as requested. This may result in inadvertent manipulation of ad delivery with consequences for climate discourse and algorithmic fairness.
Is the Facebook Ad Algorithm a Climate Discourse Influencer?
[ "Aruna Sankaranarayanan", "Erik Hemberg", "Piotr Sapiezynski", "Una-May O'Reilly" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=N2qmwRMrzo
@inproceedings{ zhao2023a, title={A Semi-Automated System to Annotate Communal Roosts in Large-Scale Weather Radar Data}, author={Wenlong Zhao and Gustavo Perez and Zezhou Cheng and Maria Carolina Tiburcio Dias Belotti and Yuting Deng and Victoria Simons and Elske K Tielens and Jeffrey Kelly and Kyle Horton and Subhransu Maji and Daniel Sheldon}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=N2qmwRMrzo} }
We have developed a semi-automated system to annotate communal roosts of birds and bats in weather radar data. This system comprises detection, tracking, confounder filtering, and human screening components. We have deployed this system to gather information on swallows from 612,786 scans taken from 12 radar stations around the Great Lakes over 21 years. The 15,628 annotated roost signatures have uncovered population trends and phenological shifts in swallows and martins. These species are rapidly declining aerial insectivores, and the data gathered has facilitated crucial sustainability analyses. While human screening is still required with the deployed system, we estimate that the screening process is approximately 7$\times$ faster than manual annotation. Furthermore, we found that incorporating temporal signals enhances the deployed detector's performance, increasing the mean average precision (mAP) from 48.7\% to 56.3\%. Our ongoing work aims to expand the analysis to bird and bat roosts at a continental scale.
A Semi-Automated System to Annotate Communal Roosts in Large-Scale Weather Radar Data
[ "Wenlong Zhao", "Gustavo Perez", "Zezhou Cheng", "Maria Carolina Tiburcio Dias Belotti", "Yuting Deng", "Victoria Simons", "Elske K Tielens", "Jeffrey Kelly", "Kyle Horton", "Subhransu Maji", "Daniel Sheldon" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=I1yGbIPGjx
@inproceedings{ das2023textitfocus, title={\textit{Focus on What's Important!}\\ Inspecting Variational Distributions for \\ Gaussian Processes for better \textit{AQ} Station Deployment}, author={Progyan Das and Mihir Agarwal}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=I1yGbIPGjx} }
In urban locales, the intricate dynamics of air quality indicators such as Particulate Matter (PM2.5) and Carbon Monoxide (CO) necessitate sophisticated modeling for precise prediction and monitoring. However, monitoring stations are sparse, and effective placement is a key problem in the domain. This study explores a novel approach utilizing Variational Multi-Task Gaussian Processes (VMTGP) endowed with a Spectral Mixture (SM) kernel to model the spatiotemporal distribution of these pollutants in Beijing, which beats the state-of-the-art Gaussian Process techniques on this dataset in the exact MTGP case. However, our innovation lies in an in-depth examination of the variational distribution of the inducing points, which are critical for scalability and accurate approximations in GP models. Through an empirical lens, we observe a pronounced clustering of inducing points around certain monitoring stations, hinting at a higher information content in these locales. Our findings underscore the inherent value in exploiting the clustering phenomenon of inducing points, opening up new vistas for enhancing the efficacy and interpretability of multi-task learning paradigms in air quality forecasting. This insight holds promise for developing more robust and localized air quality prediction models, crucial for urban planning and public health policy formulations, and adaptively deciding the most effective locations for placing AQ monitoring stations.
Focus on What's Important! Inspecting Variational Distributions for Gaussian Processes for better AQ Station Deployment
[ "Progyan Das", "Mihir Agarwal" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=H0HdmdXsTp
@inproceedings{ beukema2023satellite, title={Satellite Imagery and {AI}: A New Era in Ocean Conservation, from Research to Deployment and Impact}, author={Patrick Beukema and Favyen Bastani and Piper Wolters and Henry Herzog and Joseph George Ferdinando}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=H0HdmdXsTp} }
Illegal, unreported, and unregulated (IUU) fishing poses a global threat to ocean habitats. Publicly available satellite data offered by NASA and the European Space Agency (ESA) provide an opportunity to actively monitor this activity. Effectively leveraging satellite data for maritime conservation requires highly reliable machine learning models operating globally with minimal latency. This paper introduces three specialized computer vision models designed for synthetic aperture radar (Sentinel-1), optical imagery (Sentinel-2), and nighttime lights (Suomi-NPP/NOAA-20). It also presents best practices for developing and delivering real-time computer vision services for conservation. These models have been deployed in Skylight, a real time maritime monitoring platform, which is provided at no cost to users worldwide.
Satellite Imagery and AI: A New Era in Ocean Conservation, from Research to Deployment and Impact
[ "Patrick Beukema", "Favyen Bastani", "Piper Wolters", "Henry Herzog", "Joseph George Ferdinando" ]
Workshop/CompSust
2312.03207
[ "https://github.com/allenai/vessel-detection-sentinels" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=BaZZzH7EgA
@inproceedings{ zheng2023segment, title={Segment Any Stream: Scalable Water Extent Detection with the Segment Anything Model}, author={Haozhen Zheng and Chenhui Zhang and Kaiyu Guan and Yawen Deng and Sherrie Wang and Bruce L. Rhoads and Andrew J Margenot and Shengnan Zhou and Sheng Wang}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=BaZZzH7EgA} }
The accurate detection of water extent in streams and rivers is pivotal to understanding inland water hydrodynamics and terrestrial-aquatic interactions of biogeochemical cycles, in particular bank erosion and the resulting transfer of nutrient elements such as phosphorus (P). Prior studies have employed a variety of computational methods, ranging from hand-crafted decision rules based on spectral indices to advanced image segmentation techniques. However, these methods are limited in their generalizability when implemented in new regions. Furthermore, the recent development of vision foundation models such as the Segment Anything Model (SAM) has brought about opportunities for water extent detection due to their exceptional generalization capabilities. Nevertheless, the adaptation of these models remains challenging due to the computational overhead of fully fine-tuning the entire model. Taking these desiderata into account, this work proposes Segment Any Stream (SAS), which employs the Low-Rank Adaptation (LoRA) method to perform low-rank updates on a pretrained SAM with a small amount of curated high-resolution aerial imagery to map the water extents in the Mackinaw watershed, a HUC-8 watershed in central Illinois. Through our experiments, we show that SAS is lightweight yet highly effective: it enables efficient fine-tuning on a single consumer-grade GPU while achieving a high IoU of 0.76. This research highlights a generalizable framework for repurposing foundation models to support river/stream segmentation. We believe this framework can benefit the accurate and scalable quantification of streambank erosion as assessed by bank migration and width changes over time, a significant source of sediment and nutrient losses in agricultural landscapes. Code and data are released at https://github.com/zoezheng126/SAMed-river/tree/development
Segment Any Stream: Scalable Water Extent Detection with the Segment Anything Model
[ "Haozhen Zheng", "Chenhui Zhang", "Kaiyu Guan", "Yawen Deng", "Sherrie Wang", "Bruce L. Rhoads", "Andrew J Margenot", "Shengnan Zhou", "Sheng Wang" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7T4YMOj7MS
@inproceedings{ mak2023cooperative, title={Cooperative Logistics: Can Artificial Intelligence Enable Trustworthy Cooperation at Scale?}, author={Stephen Mak and Tim Pearce and Matthew Macfarlane and Liming Xu and Michael Ostroumov and Alexandra Brintrup}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=7T4YMOj7MS} }
Cooperative Logistics studies the setting where logistics companies pool their resources together to improve their individual performance. Prior literature suggests carbon savings of approximately 22%. If attained globally, this equates to 480,000,000 tonnes of CO2. Whilst well-studied in operations research – industrial adoption remains limited due to a lack of trustworthy cooperation. A key remaining challenge is fair and scalable gain sharing (i.e., how much should each company be fairly paid?). This paper introduces the novel algorithmic challenges that Cooperative Logistics offers AI, and novel applications of AI towards Cooperative Logistics. We further present findings from our initial experiments.
Cooperative Logistics: Can Artificial Intelligence Enable Trustworthy Cooperation at Scale?
[ "Stephen Mak", "Tim Pearce", "Matthew Macfarlane", "Liming Xu", "Michael Ostroumov", "Alexandra Brintrup" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7KTQsrUIOy
@inproceedings{ higuera-mendieta2023a, title={A table is worth a thousand pictures: Multi-modal contrastive learning in house burning classification in wildfire events}, author={Iv{\'a}n Higuera-Mendieta and Jeff Wen and Marshall Burke}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=7KTQsrUIOy} }
Wildfires have increased in frequency and duration over the last decade in the Western United States. This not only poses a risk to human life, but also results in billions of dollars in private and public infrastructure damages. As climate change potentially worsens the frequency and severity of wildfires, understanding their risk is critical for human adaptation and optimal fire prevention techniques. However, current fire spread models are often dependent on idealized fire and soil parameters, hard to compute, and not predictive of property damage. In this paper, we use a multimodal model with image and text embeddings, which places both image and text representations in the same latent space, to predict which houses will burn down in the event of wildfires. Our results indicate that the DE model achieves better performance than the unimodal image-only and text-only baselines (i.e. ResNet50 and XGBoost). Moreover, consistent with other models in the literature, it also outperforms these baselines in low-data regimes.
A table is worth a thousand pictures: Multi-modal contrastive learning in house burning classification in wildfire events
[ "Iván Higuera-Mendieta", "Jeff Wen", "Marshall Burke" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=3mlfh6c3dp
@inproceedings{ min2023joint, title={Joint time{\textendash}frequency scattering-enhanced representation for bird vocalization classification}, author={Yimeng Min and Eliot T Miller and Daniel Fink and Carla P Gomes}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=3mlfh6c3dp} }
Neural Networks (NNs) have been widely used in passive acoustic monitoring. Typically, audio is converted into a Mel Spectrogram as a preprocessing step before being fed into NNs. In this study, we investigate the Joint Time-Frequency Scattering transform as an alternative preprocessing technique for analyzing bird vocalizations. We highlight its superiority over the Mel Spectrogram because it captures intricate time-frequency patterns and emphasizes rapid signal transitions. While the Mel Spectrogram often gives similar importance to all sounds, the scattering transform better differentiates between rapid and slow variations. We use a Convolutional Neural Network architecture and an attention-based transformer. Our results demonstrate that both NN architectures can benefit from this enhanced preprocessing, where the scattering transform can provide a more discriminative representation of bird vocalizations than the traditional Mel Spectrogram.
Joint time–frequency scattering-enhanced representation for bird vocalization classification
[ "Yimeng Min", "Eliot T Miller", "Daniel Fink", "Carla P Gomes" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
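The record above compares a scattering-transform front end against the standard Mel Spectrogram preprocessing. As a minimal sketch of that baseline preprocessing step (not the scattering alternative), here is one way it is commonly computed with `librosa`; the synthetic tone stands in for a real bird-vocalization recording.

```python
# Minimal sketch of the Mel-spectrogram baseline front end mentioned in the
# record above (the joint time-frequency scattering alternative is not shown).
import numpy as np
import librosa

sr = 22050
t = np.linspace(0.0, 2.0, int(2.0 * sr), endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 3000.0 * t)      # stand-in audio signal

S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
S_db = librosa.power_to_db(S, ref=np.max)     # log-scaled input for a CNN or transformer
print(S_db.shape)                             # (n_mels, n_frames)
```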
null
https://openreview.net/forum?id=0FOjGwgN0g
@inproceedings{ li2023solving, title={Solving Satisfiability Modulo Counting Problems in Computational Sustainability with Guarantees}, author={Jinzhao Li and Nan Jiang and Yexiang Xue}, booktitle={NeurIPS 2023 Computational Sustainability: Promises and Pitfalls from Theory to Deployment}, year={2023}, url={https://openreview.net/forum?id=0FOjGwgN0g} }
Many real-world problems in computational sustainability require tight integrations of symbolic and statistical AI. Interestingly, Satisfiability Modulo Counting (SMC) captures a wide variety of such problems. SMC searches for policy interventions to control probabilistic outcomes. Solving SMC is challenging because of its highly intractable nature ($NP^{PP}$-complete), incorporating statistical inference and symbolic reasoning. Previous research on SMC solving lacks provable guarantees and/or suffers from sub-optimal empirical performance, especially when combinatorial constraints are present. We propose XOR-SMC, a polynomial algorithm with access to NP-oracles, to solve highly intractable SMC problems with constant approximation guarantees. XOR-SMC transforms the highly intractable SMC into satisfiability problems, replacing the model counting in SMC with SAT formulae subject to randomized XOR constraints. Experiments on solving important SMC problems in computational sustainability demonstrate that XOR-SMC finds solutions close to the true optimum, outperforming several baselines which struggle to find good approximations for the intractable model counting in SMC.
Solving Satisfiability Modulo Counting Problems in Computational Sustainability with Guarantees
[ "Jinzhao Li", "Nan Jiang", "Yexiang Xue" ]
Workshop/CompSust
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yx26gFIzY7
@inproceedings{ havaldar2023learning, title={Learning from Label Proportions: Bootstrapping Supervised Learners via Belief Propagation}, author={Shreyas Havaldar and Navodita Sharma and Shubhi Sareen and Karthikeyan Shanmugam and Aravindan Raghuveer}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=yx26gFIzY7} }
Learning from Label Proportions (LLP) is a learning problem where only aggregate level labels are available for groups of instances, called bags, during training, and the aim is to get the best performance at the instance-level on the test data. This setting arises in domains like advertising and medicine due to regulatory guidelines concerning privacy. We propose a novel algorithmic framework for this problem that iteratively performs two main steps. For the first step (Pseudo Labeling) in every iteration, we define a Gibbs distribution over binary instance labels that incorporates a) covariate information through the constraint that instances with similar covariates should have similar labels and b) the bag level aggregated label. We then use Belief Propagation (BP) to marginalize the Gibbs distribution to obtain pseudo labels. In the second step (Embedding Refinement), we use the pseudo labels to provide supervision for a learner that yields a better embedding. Further, we iterate on the two steps again by using the second step's embeddings as new covariates for the next iteration. In the final iteration, a classifier is trained using the pseudo labels. Our algorithm displays strong gains against several SOTA baselines (up to 15% AUROC) for the LLP Binary Classification problem on various dataset types - tabular and image. We achieve these improvements with minimal computational overhead above standard supervised learning due to Belief Propagation, for large bag sizes, even for a million samples.
Learning from Label Proportions: Bootstrapping Supervised Learners via Belief Propagation
[ "Shreyas Havaldar", "Navodita Sharma", "Shubhi Sareen", "Karthikeyan Shanmugam", "Aravindan Raghuveer" ]
Workshop/RegML
2310.08056
[ "" ]
https://huggingface.co/papers/2310.08056
0
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=u9gLNSGgRA
@inproceedings{ yaghini2023regulation, title={Regulation Games for Trustworthy Machine Learning}, author={Mohammad Yaghini and Patty Liu and Franziska Boenisch and Nicolas Papernot}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=u9gLNSGgRA} }
Existing work on trustworthy machine learning (ML) often focuses on a single aspect of trust in ML (e.g., fairness, or privacy) and thus fails to obtain a holistic trust assessment. Furthermore, most techniques often fail to recognize that the parties who train models are not the same as the ones who assess their trustworthiness. We propose a framework that formulates trustworthy ML as a multi-objective multi-agent optimization problem to address these limitations. A holistic characterization of trust in ML naturally lends itself to a game theoretic formulation, which we call regulation games. We introduce and study a particular game instance, the SpecGame, which models the relationship between an ML model builder and regulators seeking to specify and enforce fairness and privacy regulations. Seeking socially optimal (i.e., efficient for all agents) solutions to the game, we introduce ParetoPlay. This novel equilibrium search algorithm ensures that agents remain on the Pareto frontier of their objectives and avoids the inefficiencies of other equilibria. For instance, we show that for a gender classification application, the achieved privacy guarantee is 3.76× worse than the ordained privacy requirement if regulators do not take the initiative to specify their desired guarantees first. We hope that our framework can provide policy guidance.
Regulation Games for Trustworthy Machine Learning
[ "Mohammad Yaghini", "Patty Liu", "Franziska Boenisch", "Nicolas Papernot" ]
Workshop/RegML
2402.03540
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qhP1aHHyeA
@inproceedings{ yu2023who, title={Who Leaked the Model? Tracking {IP} Infringers in Accountable Federated Learning}, author={Shuyang Yu and Junyuan Hong and Yi Zeng and Fei Wang and Ruoxi Jia and Jiayu Zhou}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=qhP1aHHyeA} }
Federated learning (FL) emerges as an effective collaborative learning framework to coordinate data and computation resources from massive and distributed clients in training. Such collaboration results in non-trivial intellectual property (IP) represented by the model parameters that should be protected and shared by the whole party rather than an individual user. Meanwhile, the distributed nature of FL endorses a malicious client the convenience to compromise IP through illegal model leakage to unauthorized third parties. To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it. The collective challenges call for accountable federated learning, which requires verifiable ownership of the model and is capable of revealing the infringer's identity upon leakage. In this paper, we propose Decodable Unique Watermarking (DUW) for complying with the requirements of accountable FL. Specifically, before a global model is sent to a client in an FL round, DUW encodes a client-unique key into the model by leveraging a backdoor-based watermark injection. To identify the infringer of a leaked model, DUW examines the model and checks if the triggers can be decoded as the corresponding keys. Extensive empirical results show that DUW is highly effective and robust, achieving over 99% watermark success rate for Digits, CIFAR-10, and CIFAR-100 datasets under heterogeneous FL settings, and identifying the IP infringer with 100% accuracy even after common watermark removal attempts.
Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning
[ "Shuyang Yu", "Junyuan Hong", "Yi Zeng", "Fei Wang", "Ruoxi Jia", "Jiayu Zhou" ]
Workshop/RegML
2312.03205
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=pnvRy1VzJZ
@inproceedings{ viard2023reading, title={Reading the drafts of the {AI} Act with a technical lens}, author={Tiphaine Viard and Melanie Gornet and Winston Maxwell}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=pnvRy1VzJZ} }
The draft AI Act is an effort led by European institutions to regulate the deployment and use of artificial intelligence. It is a notably difficult task, in part due to the polysemy of concepts such as artificial intelligence, covering topics such as foundational models, optimisation routines and rule-based models, among others. Furthermore, it offers a prism through which we can observe the wide variety of stakes different actors are pushing for. After an initial draft proposed by the Commission in 2021, the European Commission, Council and Parliament will now discuss and draft the final version as part of the trilogue phase. The existence of these three versions gives us a chance to understand the negotiations happening between the different European institutions, and as such is an interesting look into the currents that shape the artificial intelligence ecosystem. In this paper we focus on the Commission, Council and Parliament proposals for the Act, and read them with a technical lens. In particular, we examine the technical concepts mobilized in the Act, and contextualize them in the wider sociotechnical environment surrounding artificial intelligence. For each main concept, we make a comparative analysis of each version, highlighting their differences and their impact. This paper is primarily geared towards computer scientists, data analysts and machine learning researchers, in order to clarify the tenets and decisions made in the current versions of the Act.
Reading the drafts of the AI Act with a technical lens
[ "Tiphaine Viard", "Melanie Gornet", "Winston Maxwell" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=myWKEmGKfL
@inproceedings{ kenny2023in, title={In Pursuit of Regulatable {LLM}s}, author={Eoin Kenny and Julie Shah}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=myWKEmGKfL} }
Large Language Models (LLMs) are arguably the biggest breakthrough in artificial intelligence to date. Recently, they have come to the public Zeitgeist with a surge of media attention surrounding ChatGPT, a large generative language model released by OpenAI which quickly became the fastest-growing application in history. This model achieved unparalleled human-AI conversational skills, and even passed various mutations of the popular Turing test which measures if AI systems have achieved general intelligence. Naturally, the world at large wants to utilize these systems for various applications, but in order to do so in truly sensitive domains, the models must often be regulatable in order to be legally used. In this short paper, we propose one approach towards such systems by forcing them to reason using a combination of (1) human-defined concepts, (2) Case-Based Reasoning (CBR), and (3) counterfactual explanations. All of these are supported by user testing and psychology research as understandable and useful to practitioners of AI systems. We envision this approach will be able to provide transparent LLMs for text classification tasks and be fully regulatable and auditable.
In Pursuit of Regulatable LLMs
[ "Eoin Kenny", "Julie Shah" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=kZBHvU5lIY
@inproceedings{ guerdan2023policy, title={Policy Comparison Under Unmeasured Confounding}, author={Luke Guerdan and Amanda Coston and Steven Wu and Ken Holstein}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=kZBHvU5lIY} }
Predictive models are often introduced under the rationale that they improve performance over an existing decision-making policy. However, it is challenging to directly compare an algorithm against a status quo policy due to uncertainty introduced by confounding and selection bias. In this work, we develop a regret estimator which evaluates differences in classification metrics across decision-making policies under confounding. Theoretical and experimental results demonstrate that our regret estimator yields tighter regret bounds than existing auditing frameworks designed to evaluate predictive models under confounding. Further, we show that our regret estimator can be combined with a flexible set of causal identification strategies to yield informative and well-justified policy comparisons. Our experimental results also illustrate how confounding and selection bias contribute to uncertainty in subgroup-level policy comparisons. We hope that our auditing framework will support the operationalization of regulatory frameworks calling for more direct assessments of predictive model efficacy.
Policy Comparison Under Unmeasured Confounding
[ "Luke Guerdan", "Amanda Coston", "Steven Wu", "Ken Holstein" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=kAT1410oDy
@inproceedings{ yew2023you, title={You Still See Me: How Data Protection Supports the Architecture of {ML} Surveillance}, author={Rui-Jie Yew and Lucy Qin and Suresh Venkatasubramanian}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=kAT1410oDy} }
Data (as well as computation) is key to the functionality of ML systems. Data protection has therefore become a focal point of policy proposals and existing laws that are pertinent to the governance of ML systems. Privacy laws and legal scholarship have long emphasized privacy responsibilities developers have to protect individual data subjects. As a consequence, technical methods for privacy-preservation have been touted as solutions to prevent intrusions to individual data in the development of ML systems while preserving their resulting functionality. Further, privacy-preserving machine learning (PPML) has been offered up as a way to address the tension between being "seen" and "mis-seen" - to build models that can be fair, accurate, and conservative in data use. However, a myopic focus on privacy-preserving machine learning obscures broader privacy harms facilitated by ML models. In this paper, we argue that the use of PPML techniques to "un-see" data subjects introduces privacy costs of a fundamentally different nature. Your data may not be used in its raw or "personal" form, but models built from that data still make predictions and influence you and people like you. Moreover, PPML has allowed data collectors to excavate crevices of data that no one could touch before. We illustrate these privacy costs with an example on targeted advertising and models built with private set intersection.
You Still See Me: How Data Protection Supports the Architecture of ML Surveillance
[ "Rui-Jie Yew", "Lucy Qin", "Suresh Venkatasubramanian" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=jpuV8Mzc0R
@inproceedings{ sayal2023advancing, title={Advancing Clinical Trials via Real-World Aligned {ML} Best Practices}, author={Karen Sayal and Markus Trengove and Finnian Firth and Lea Goetz}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=jpuV8Mzc0R} }
There is an increasing drive to integrate machine learning (ML) tools into the drug development pipeline, to improve success rates and efficiency in the clinical development pathway. The ML regulatory framework being developed is closely aligned with ML best practices. However, there remain significant and tangible practical gaps in translating best practice standards into a real-world clinical trial context. To illustrate the practical challenges to regulating ML in this context, we present a theoretical oncology trial in which a ML tool is applied to support toxicity monitoring in patients. We explore the barriers in the highly regulated clinical trial environment to implementing data representativeness, model interpretability, and model usability.
Advancing Clinical Trials via Real-World Aligned ML Best Practices
[ "Karen Sayal", "Markus Trengove", "Finnian Firth", "Lea Goetz" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=inZoxsEpYn
@inproceedings{ papageorgiou2023necessity, title={Necessity of Processing Sensitive Data for Bias Detection and Monitoring: A Techno-Legal Exploration}, author={Ioanna Papageorgiou and Carlos Mougan}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=inZoxsEpYn} }
This paper explores the intersection of the upcoming AI Regulation and fair ML research, specifically examining the legal principle of "necessity" in the context of processing sensitive personal data for bias detection and monitoring in AI systems. Drawing upon Article 10 (5) of the AI Act, currently under negotiation, and the General Data Protection Regulation, we investigate the challenges posed by the nuanced concept of "necessity" in enabling AI providers to process sensitive personal data for bias detection and bias monitoring. The lack of guidance regarding this binding textual requirement creates significant legal uncertainty for all parties involved and risks a purposeful and inconsistent legal application. To address this issue from a techno-legal perspective, we delve into the core of the necessity principle and map it to current approaches in fair machine learning. Our objective is to bridge operational gaps between the forthcoming AI Act and the evolving field of fair ML and support an integrative approach of non-discrimination and data protection desiderata in the conception of fair ML, thereby facilitating regulatory compliance.
Necessity of Processing Sensitive Data for Bias Detection and Monitoring: A Techno-Legal Exploration
[ "Ioanna Papageorgiou", "Carlos Mougan" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=fiJEolaPj9
@inproceedings{ elkin-koren2023can, title={Can copyright be reduced to privacy}, author={Niva Elkin-Koren and Uri Hacohen and Roi Livni and Shay Moran}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=fiJEolaPj9} }
There is a growing concern that generative AI models may generate outputs that closely resemble the copyrighted input content used for their training. This worry has intensified as the quality and complexity of generative models have immensely improved, and the availability of extensive datasets containing copyrighted material has expanded. Researchers are actively exploring strategies to mitigate the risk of producing infringing samples, and a recent line of work suggests employing techniques such as differential privacy and other forms of algorithmic stability to safeguard copyrighted content. In this work, we examine whether algorithmic stability techniques such as differential privacy are suitable to ensure the responsible use of generative models without inadvertently violating copyright laws. We argue that there are fundamental differences between privacy and copyright that should not be overlooked. In particular, we highlight that although algorithmic stability may be perceived as a practical tool to detect copying, it does not necessarily equate to copyright protection. Therefore, if it is adopted as a standard for copyright infringement, it may undermine the intended purposes of copyright law.
Can copyright be reduced to privacy
[ "Niva Elkin-Koren", "Uri Hacohen", "Roi Livni", "Shay Moran" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=fTv0LQRCZM
@inproceedings{ kong2023rnyitester, title={R\'enyiTester: A Variational Approach to Testing Differential Privacy}, author={Weiwei Kong and Andres Munoz medina and M{\'o}nica Ribero}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=fTv0LQRCZM} }
Governments and industries have widely adopted differential privacy as a measure to protect users’ sensitive data, creating the need for new implementations of differentially private algorithms. In order to properly test and audit these algorithms, a suite of tools for testing the property of differential privacy is needed. In this work we expand this testing suite and introduce RényiTester, an algorithm that can reject a mechanism that is not Rényi differentially private. Our algorithm computes a lower bound of the Rényi divergence between the distributions of a mechanism on neighboring datasets, only requiring black-box access to samples from the audited mechanism. We test this approach on a variety of pure and Rényi differentially private mechanisms with diverse output spaces and show that RényiTester detects bugs in mechanisms' implementations and design flaws. While detecting that a general mechanism is differentially private is known to be NP hard, we empirically show that tools like RényiTester provide a way for researchers and engineers to decrease the risk of deploying mechanisms that expose users' privacy.
RényiTester: A Variational Approach to Testing Differential Privacy
[ "Weiwei Kong", "Andres Munoz medina", "Mónica Ribero" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=eKrYMGpXVY
@inproceedings{ guha2023conformal, title={Conformal Prediction via Regression-as-Classification}, author={Etash Guha and Shlok Natarajan and Thomas M{\"o}llenhoff and Mohammad Emtiyaz Khan and Eugene Ndiaye}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=eKrYMGpXVY} }
Conformal Prediction (CP) is a method of estimating risk or uncertainty when using Machine Learning to help abide by common Risk Management regulations often seen in fields like healthcare and finance. CP for regression can be challenging, especially when the output distribution is heteroscedastic, multimodal, or skewed. Some of the issues can be addressed by estimating a distribution over the output, but in reality, such approaches can be sensitive to estimation error and yield unstable intervals. Here, we circumvent the challenges by converting regression to a classification problem and then use CP for classification to obtain CP sets for regression. To preserve the ordering of the continuous-output space, we design a new loss function and present necessary modifications to the CP classification techniques. Empirical results on many benchmarks show that this simple approach gives surprisingly good results on many practical problems.
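Since this abstract only states the recipe at a high level, the following is a minimal sketch of the generic regression-as-classification conformal procedure it builds on: bin the continuous target, train a classifier, calibrate a split-conformal threshold, and read a value interval off the retained bins. The data, model choice, number of bins, and coverage level are illustrative assumptions; the paper's ordering-preserving loss and its specific modifications are not reproduced here.

```python
# Hedged sketch: regression-as-classification + split-conformal calibration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
y = X[:, 0] + 0.3 * rng.normal(size=2000)

# Discretize the continuous target into 20 ordered bins.
edges = np.quantile(y, np.linspace(0, 1, 21))
y_cls = np.clip(np.digitize(y, edges[1:-1]), 0, 19)

X_tr, X_cal, yc_tr, yc_cal = train_test_split(X, y_cls, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, yc_tr)

# Split-conformal calibration: nonconformity = 1 - probability of the true bin.
alpha = 0.1
cal_probs = clf.predict_proba(X_cal)
cols = np.searchsorted(clf.classes_, yc_cal)          # map labels to proba columns
scores = 1.0 - cal_probs[np.arange(len(yc_cal)), cols]
level = np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores)
qhat = np.quantile(scores, min(level, 1.0))

# Prediction set for one test point, mapped back to a value interval.
probs = clf.predict_proba(X_cal[:1])[0]
keep = np.where(probs >= 1 - qhat)[0]
if keep.size == 0:
    keep = np.array([probs.argmax()])                 # fall back to the top bin
labels = clf.classes_[keep]
print("interval:", edges[labels.min()], "to", edges[labels.max() + 1])
```

In this simplified form the retained bins need not be contiguous; the paper's ordering-aware loss is precisely what encourages well-behaved, interval-like sets.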
Conformal Prediction via Regression-as-Classification
[ "Etash Guha", "Shlok Natarajan", "Thomas Möllenhoff", "Mohammad Emtiyaz Khan", "Eugene Ndiaye" ]
Workshop/RegML
2404.08168
[ "https://github.com/EtashGuha/R2CCP" ]
https://huggingface.co/papers/2404.08168
0
0
0
5
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=bynBtr8ovP
@inproceedings{ medina2023a, title={A Unified Analysis of Label Inference Attacks}, author={Andres Munoz medina and Travis Dick and Claudio Gentile and Robert Istvan Busa-Fekete and Marika Swanberg}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=bynBtr8ovP} }
Randomized response and label aggregation are two common ways of sharing sensitive label information in a private way. In spite of their popularity in the privacy literature, there is a lack of consensus on how to compare the privacy properties of these two different mechanisms. In this work, we investigate the privacy risk of sharing label information for these privacy enhancing technologies through the lens of label reconstruction advantage measures. A reconstruction advantage measure quantifies the increase in an attacker's ability to infer the true label of an unlabeled example when provided with a private version of the labels in a dataset (e.g., averages of labels from different users or noisy labels output by randomized response), compared to an attacker that only observes the feature vectors, but may have prior knowledge of the correlation between features and labels. We extend the Expected Attack Utility (EAU) and Advantage of previous work to mechanisms that involve aggregation of labels across different examples. We theoretically quantify this measure for Randomized Response and random aggregates under various correlation assumptions with public features, and then empirically corroborate these findings by quantifying EAU on real-world data. To the best of our knowledge, these are the first experiments where randomized response and label proportions are placed on the same privacy footing. We finally point out that simple modifications to the random aggregate approach can provide extra DP-like protection.
A Unified Analysis of Label Inference Attacks
[ "Andres Munoz medina", "Travis Dick", "Claudio Gentile", "Robert Istvan Busa-Fekete", "Marika Swanberg" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ZLJ6XRbdaC
@inproceedings{ shi2023detecting, title={Detecting Pretraining Data from Large Language Models}, author={Weijia Shi and Anirudh Ajith and Mengzhou Xia and Yangsibo Huang and Daogao Liu and Terra Blevins and Danqi Chen and Luke Zettlemoyer}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=ZLJ6XRbdaC} }
Although large language models (LMs) are widely deployed, the data used to train them is rarely disclosed. Given the incredible scale of this data, up to trillions of tokens, it is all but certain that it inadvertently includes potentially problematic text such as copyrighted materials, personally identifiable information, and test data for widely reported reference benchmarks. However, we currently have no way to know which data of these types is included or in what proportions. In this paper, we study the pretraining data detection problem: given a piece of text and black-box access to an LM with no knowledge of its training data, can we determine if the model was trained on our text? To study this problem, we introduce a dynamic benchmark WIKIMIA and a new detection method MIN-K PROB. Our method is based on a simple hypothesis: an unseen example is likely to contain a few outlier words with low probabilities under the LM, while a seen example is less likely to have words with such low probabilities. MIN-K PROB can be applied without any knowledge about the pretraining corpus or any additional training, departing from previous detection methods that require training a reference model on data that is similar to the pretraining data. Moreover, our experiments demonstrate that MIN-K PROB achieves a 7.4% improvement over these previous methods. Our analysis demonstrates that MIN-K PROB is an effective tool for detecting contaminated benchmark data and copyrighted content within LMs.
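As a rough illustration of the hypothesis stated above (score a text by the average log-probability of its least likely tokens under the LM), here is a minimal sketch assuming a Hugging Face causal LM. The model name, the k fraction, and any decision threshold are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a Min-K%-style score: average log-probability of the
# k% lowest-probability tokens; higher scores suggest the text was seen.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def min_k_prob_score(text, model, tokenizer, k=0.2):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                    # (1, T, vocab)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_lp = log_probs[torch.arange(ids.shape[1] - 1), ids[0, 1:]]
    n = max(1, int(k * token_lp.numel()))
    lowest = torch.topk(token_lp, n, largest=False).values
    return lowest.mean().item()

tok = AutoTokenizer.from_pretrained("gpt2")           # small model, for illustration only
lm = AutoModelForCausalLM.from_pretrained("gpt2")
print(min_k_prob_score("The quick brown fox jumps over the lazy dog.", lm, tok))
```

Turning the score into a member/non-member decision requires a threshold calibrated on texts with known status, which is what the WIKIMIA benchmark is designed to support.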
Detecting Pretraining Data from Large Language Models
[ "Weijia Shi", "Anirudh Ajith", "Mengzhou Xia", "Yangsibo Huang", "Daogao Liu", "Terra Blevins", "Danqi Chen", "Luke Zettlemoyer" ]
Workshop/RegML
2310.16789
[ "" ]
https://huggingface.co/papers/2310.16789
6
10
0
8
[]
[ "swj0419/WikiMIA", "swj0419/BookMIA" ]
[ "Yeyito/llm_contamination_detector" ]
[]
[ "swj0419/WikiMIA", "swj0419/BookMIA" ]
[ "Yeyito/llm_contamination_detector" ]
1
poster
null
https://openreview.net/forum?id=Z2Ig9ky9HI
@inproceedings{ deshpande2023anthropomorphization, title={Anthropomorphization of {AI}: Opportunities and Risks}, author={Ameet Deshpande and Tanmay Rajpurohit and Karthik Narasimhan and Ashwin Kalyan}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=Z2Ig9ky9HI} }
Anthropomorphization is the tendency to attribute human-like traits to non-human entities. It is prevalent in many social contexts - children anthropomorphize toys, adults do so with brands, and it is a literary device. It is also a versatile tool in science, with behavioral psychology and evolutionary biology meticulously documenting its consequences. With widespread adoption of AI systems, and the push from stakeholders to make it human-like through alignment techniques, human voice, and pictorial avatars, the tendency for users to anthropomorphize it increases significantly. We take a dyadic approach to understanding this phenomenon with large language models (LLMs) by studying (1) the objective legal implications, as analyzed through the lens of the recent Blueprint for an AI Bill of Rights, and (2) the subtle psychological aspects of customization and anthropomorphization. We find that anthropomorphized LLMs customized for different user bases violate multiple provisions in the legislative blueprint. In addition, we point out that anthropomorphization of LLMs affects the influence they can have on their users, thus having the potential to fundamentally change the nature of human-AI interaction, with potential for manipulation and negative influence. With LLMs being hyper-personalized for vulnerable groups like children and patients among others, our work is a timely and important contribution. We propose a conservative strategy for the cautious use of anthropomorphization to improve trustworthiness of AI systems.
Anthropomorphization of AI: Opportunities and Risks
[ "Ameet Deshpande", "Tanmay Rajpurohit", "Karthik Narasimhan", "Ashwin Kalyan" ]
Workshop/RegML
2305.14784
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=YBbMBZGzCx
@inproceedings{ singh2023a, title={A Brief Tutorial on Sample Size Calculations for Fairness Audits}, author={Harvineet Singh and Fan Xia and Mi-Ok Kim and Romain Pirracchio and Rumi Chunara and Jean Feng}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=YBbMBZGzCx} }
In fairness audits, a standard objective is to detect whether a given algorithm performs substantially differently between subgroups. Properly powering the statistical analysis of such audits is crucial for obtaining informative fairness assessments, as it ensures a high probability of detecting unfairness when it exists. However, limited guidance is available on the amount of data necessary for a fairness audit, lacking directly applicable results concerning commonly used fairness metrics. Additionally, the consideration of unequal subgroup sample sizes is also missing. In this tutorial, we address these issues by providing guidance on how to determine the required subgroup sample sizes to maximize the statistical power of hypothesis tests for detecting unfairness. Our findings are applicable to audits of binary classification models and multiple fairness metrics derived as summaries of the confusion matrix. Furthermore, we discuss other aspects of audit study designs that can increase the reliability of audit results.
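To make the kind of calculation described above concrete, here is a small sketch of a standard two-proportion power analysis for a subgroup performance gap, using statsmodels. The assumed rates, significance level, and power target are placeholders and are not taken from the paper; the `ratio` argument is one way to reflect unequal subgroup sizes, which the tutorial also discusses.

```python
# Hedged sketch: per-subgroup sample size to detect an assumed gap in a
# rate-based fairness metric (e.g., false negative rate) between two groups.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_a, p_b = 0.10, 0.18                      # assumed metric values in groups A and B
effect = proportion_effectsize(p_a, p_b)   # Cohen's h for two proportions

n_per_group = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,                            # significance level (assumption)
    power=0.80,                            # desired power (assumption)
    ratio=1.0,                             # set != 1.0 for unequal subgroup sizes
    alternative="two-sided",
)
print(round(n_per_group), "samples needed per subgroup")
```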
A Brief Tutorial on Sample Size Calculations for Fairness Audits
[ "Harvineet Singh", "Fan Xia", "Mi-Ok Kim", "Romain Pirracchio", "Rumi Chunara", "Jean Feng" ]
Workshop/RegML
2312.04745
[ "https://github.com/harvineet/sample-size-fairness-audits" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=WLw1oDGR2Q
@inproceedings{ pan2023anchmark, title={AnchMark: Anchor-contrastive Watermarking vs Gen{AI}-based Image Modifications}, author={Minzhou Pan and Yi Zeng and Xue Lin and Ning Yu and Cho-Jui Hsieh and Ruoxi Jia}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=WLw1oDGR2Q} }
This work explores the evolution of watermarking techniques designed to preserve the integrity of digital image content, especially against perturbations encountered during image transmission. An overlooked vulnerability is unveiled: existing watermarks' detectability significantly drops against even moderate generative model modifications, prompting a deeper investigation into the societal implications from a policy viewpoint. In response, we propose ANCHMARK, a robust watermarking paradigm, which remarkably achieves a detection AUC exceeding 0.93 against perturbations from unseen generative models, showcasing a promising advancement in reliable watermarking amidst evolving image modification techniques.
AnchMark: Anchor-contrastive Watermarking vs GenAI-based Image Modifications
[ "Minzhou Pan", "Yi Zeng", "Xue Lin", "Ning Yu", "Cho-Jui Hsieh", "Ruoxi Jia" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=V4hGy6Xm11
@inproceedings{ carey2023a, title={A new Framework for Measuring Re-Identification Risk}, author={CJ Carey and Travis Dick and Alessandro Epasto and Adel Javanmard and Josh Karlin and Shankar Kumar and Andres Munoz medina and Vahab Mirrokni and Gabriel Nunes and Sergei Vassilvitskii and Peilin Zhong}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=V4hGy6Xm11} }
Compact user representations (such as embeddings) form the backbone of personalization services. In this work, we present a new theoretical framework to measure re-identification risk in such user representations. Our framework, based on hypothesis testing, formally bounds the probability that an attacker may be able to obtain the identity of a user from their representation. As an application, we show how our framework is general enough to model important real-world applications such as Chrome's Topics API for interest-based advertising. We complement our theoretical bounds by showing provably good attack algorithms for re-identification that we use to estimate the re-identification risk in the Topics API. We believe this work provides a rigorous and interpretable notion of re-identification risk and a framework to measure it that can be used to inform real-world applications.
A new Framework for Measuring Re-Identification Risk
[ "CJ Carey", "Travis Dick", "Alessandro Epasto", "Adel Javanmard", "Josh Karlin", "Shankar Kumar", "Andres Munoz medina", "Vahab Mirrokni", "Gabriel Nunes", "Sergei Vassilvitskii", "Peilin Zhong" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Tv9qMLwBWI
@inproceedings{ fujimoto2023assessing, title={Assessing the Impact of Distribution Shift on Reinforcement Learning Performance}, author={Ted Fujimoto and Joshua Suetterlein and Samrat Chatterjee and Auroop Ganguly}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=Tv9qMLwBWI} }
Research in machine learning is making progress in fixing its own reproducibility crisis. Reinforcement learning (RL), in particular, faces its own set of unique challenges. Comparison of point estimates, and plots that show successful convergence to the optimal policy during training, may obfuscate overfitting or dependence on the experimental setup. Although researchers in RL have proposed reliability metrics that account for uncertainty to better understand each algorithm's strengths and weaknesses, the recommendations of past work do not assume the presence of out-of-distribution observations. We propose a set of evaluation methods that measure the robustness of RL algorithms under distribution shifts. The tools presented here argue for the need to account for performance over time while the agent is acting in its environment. In particular, we recommend time series analysis as a method of observational RL evaluation. We also show that the unique properties of RL and simulated dynamic environments allow us to make stronger assumptions to justify the measurement of causal impact in our evaluations. We then apply these tools to single-agent and multi-agent environments to show the impact of introducing distribution shifts during test time. We present this methodology as a first step toward rigorous RL evaluation in the presence of distribution shifts.
Assessing the Impact of Distribution Shift on Reinforcement Learning Performance
[ "Ted Fujimoto", "Joshua Suetterlein", "Samrat Chatterjee", "Auroop Ganguly" ]
Workshop/RegML
2402.03590
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=TFWnViI30j
@inproceedings{ vincent2023an, title={An Alternative to Regulation: The Case for Public {AI}}, author={Nicholas Vincent and David Bau and Sarah Schwettmann and Joshua Tan}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=TFWnViI30j} }
Can governments build AI? In this paper, we describe an ongoing effort to develop "public AI"—publicly accessible AI models funded, provisioned, and governed by governments or other public bodies. Public AI presents both an alternative and a complement to standard regulatory approaches to AI, but it also suggests new technical and policy challenges. We present a roadmap for how the ML research community can help shape this initiative and support its implementation, and how public AI can complement other responsible AI initiatives.
An Alternative to Regulation: The Case for Public AI
[ "Nicholas Vincent", "David Bau", "Sarah Schwettmann", "Joshua Tan" ]
Workshop/RegML
2311.11350
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=SWDYBzULRk
@inproceedings{ moulange2023towards, title={Towards Responsible Governance of Biological Design Tools}, author={Richard Moulange and Max Langenkamp and Tessa Alexanian and Samuel Curtis and Morgan Livingston}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=SWDYBzULRk} }
Recent advancements in generative machine learning have enabled rapid progress in biological design tools (BDTs) such as protein structure and sequence prediction models. The unprecedented predictive accuracy and novel design capabilities of BDTs present new and significant dual-use risks. BDTs have the potential to improve vaccine design and drug discovery, but may also be misused deliberately or inadvertently to design biological agents capable of doing more harm or evading current screening techniques. Similar to other dual-use AI systems, BDTs present a wicked problem: how can regulators uphold public safety without stifling innovation? We highlight how current regulatory proposals that are primarily tailored toward large language models may be less effective for BDTs, which require fewer computational resources to train and are often developed in a decentralized, non-commercial, open-source manner. We propose a range of measures to mitigate misuse risks. These include measures to control model development, assess risks, encourage transparency, manage access to dangerous capabilities, and strengthen cybersecurity. Implementing such measures will require close coordination between developers and governments.
Towards Responsible Governance of Biological Design Tools
[ "Richard Moulange", "Max Langenkamp", "Tessa Alexanian", "Samuel Curtis", "Morgan Livingston" ]
Workshop/RegML
2311.15936
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=R5MTSLPyYZ
@inproceedings{ yaghini2023learning, title={Learning to Walk Impartially on the Pareto Frontier of Fairness, Privacy, and Utility}, author={Mohammad Yaghini and Patty Liu and Franziska Boenisch and Nicolas Papernot}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=R5MTSLPyYZ} }
Deploying machine learning (ML) models often requires both fairness and privacy guarantees. Both objectives often present notable trade-offs with the accuracy of the model, which is the primary focus of most applications. Thus, utility is prioritized while privacy and fairness constraints are treated as simple hyperparameters. In this work, we argue that by prioritizing one objective over others, we disregard more favorable solutions where at least certain objectives could have been improved without degrading any other. We adopt impartiality as a design principle: ML pipelines should not favor one objective over another. We theoretically show that a common ML pipeline design that features an unfairness mitigation step followed by private training is non-impartial. Then, starting from the two most common privacy frameworks for ML, we propose FairDP-SGD and FairPATE to train impartially specified private and fair models. Because impartially specified models recover the Pareto frontiers, i.e., the best trade-offs between different objectives, we show that they yield significantly better trade-offs than models optimized for one objective and hyperparameter-tuned for the others. Thus, our approach allows us to mitigate tensions between objectives previously found incompatible.
Learning to Walk Impartially on the Pareto Frontier of Fairness, Privacy, and Utility
[ "Mohammad Yaghini", "Patty Liu", "Franziska Boenisch", "Nicolas Papernot" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Mzu0NIMvvh
@inproceedings{ chen2023can, title={Can {LLM}-Generated Misinformation Be Detected?}, author={Canyu Chen and Kai Shu}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=Mzu0NIMvvh} }
The advent of Large Language Models (LLMs) has made a transformative impact. However, the potential that LLMs such as ChatGPT can be exploited to generate misinformation has posed a serious concern to online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation. Then we categorize and validate the potential real-world methods for generating misinformation with LLMs. Then, through extensive empirical investigation, we discover that LLM-generated misinformation can be harder to detect for humans and detectors compared to human-written misinformation with the same semantics, which suggests it can have more deceptive styles and potentially cause more harm. We also discuss the implications of our discovery on combating misinformation in the age of LLMs and the countermeasures.
Can LLM-Generated Misinformation Be Detected?
[ "Canyu Chen", "Kai Shu" ]
Workshop/RegML
2309.13788
[ "https://github.com/llm-misinformation/llm-misinformation" ]
https://huggingface.co/papers/2309.13788
1
0
1
2
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=M2aNjwX4Ec
@inproceedings{ raghavan2023limitations, title={Limitations of the {\textquotedblleft}Four-Fifths Rule{\textquotedblright} and Statistical Parity Tests for Measuring Fairness}, author={Manish Raghavan and Pauline Kim}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=M2aNjwX4Ec} }
To ensure the fairness of algorithmic decision systems, such as employment selection tools, computer scientists and practitioners often refer to the so-called “four-fifths rule” as a measure of a tool’s compliance with anti-discrimination law. This reliance is problematic because the “rule” is in fact not a legal rule for establishing discrimination, and it offers a crude test that will often be over- and under-inclusive in identifying practices that warrant further scrutiny. The “four-fifths rule” is one of a broader class of statistical tests, which we call Statistical Parity Tests (SPTs), that compare selection rates across demographic groups. While some SPTs are more statistically robust, all share some critical limitations in identifying disparate impacts retrospectively. When these tests are used prospectively as an optimization objective shaping model development, additional concerns arise about the development process, behavioral incentives, and gameability. In this article, we discuss the appropriate role for SPTs in algorithmic governance. We suggest a combination of measures that take advantage of the additional information present during prospective optimization, providing greater insight into fairness considerations when building and auditing models.
Limitations of the “Four-Fifths Rule” and Statistical Parity Tests for Measuring Fairness
[ "Manish Raghavan", "Pauline Kim" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=LYAfgPsJ41
@inproceedings{ yang2023navigating, title={Navigating Dataset Documentation in {ML}: A Large-Scale Analysis of Dataset Cards on Hugging Face}, author={Xinyu Yang and Weixin Liang and James Zou}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=LYAfgPsJ41} }
Advances in machine learning are closely tied to the creation of datasets. While dataset documentation is widely recognized as essential to the reliability, reproducibility, and transparency of ML, we lack a systematic empirical understanding of current dataset documentation practices. To shed light on this question, here we take Hugging Face - one of the largest platforms for sharing and collaborating on ML models and datasets - as a prominent case study. By analyzing all 7,433 dataset documentation on Hugging Face, our investigation provides an overview of the Hugging Face dataset ecosystem and insights into dataset documentation practices, yielding 5 main findings: (1) The dataset card completion rate shows marked heterogeneity correlated with dataset popularity: While 86.0\% of the top 100 downloaded dataset cards fill out all sections suggested by Hugging Face community, only 7.9\% of dataset cards with no downloads complete all these sections. (2) A granular examination of each section within the dataset card reveals that the practitioners seem to prioritize Dataset Description and Dataset Structure sections, accounting for 36.2\% and 33.6\% of the total card length, respectively, for the most downloaded datasets. In contrast, the Considerations for Using the Data section receives the lowest proportion of content, accounting for just 2.1\% of the text. (3) By analyzing the subsections within each section and utilizing topic modeling to identify key topics, we uncover what is discussed in each section, and underscore significant themes encompassing both technical and social impacts, as well as limitations within the Considerations for Using the Data section. (4) Our findings also highlight the need for improved accessibility and reproducibility of datasets in the Usage sections. (5) In addition, our human annotation evaluation emphasizes the pivotal role of comprehensive dataset content in shaping individuals' perceptions of a dataset card's overall quality. Overall, our study offers a unique perspective on analyzing dataset documentation through large-scale data science analysis and underlines the need for more thorough dataset documentation in machine learning research.
Navigating Dataset Documentation in ML: A Large-Scale Analysis of Dataset Cards on Hugging Face
[ "Xinyu Yang", "Weixin Liang", "James Zou" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=L97dqPfQdT
@inproceedings{ feng2023towards, title={Towards a Post-Market Monitoring Framework for Machine Learning-based Medical Devices: A case study}, author={Jean Feng and Adarsh Subbaswamy and Alexej Gossmann and Harvineet Singh and Berkman Sahiner and Mi-Ok Kim and Gene Pennello and Nicholas Petrick and Romain Pirracchio and Fan Xia}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=L97dqPfQdT} }
After a machine learning (ML)-based system is deployed in clinical practice, performance monitoring is important to ensure the safety and effectiveness of the algorithm over time. The goal of this work is to highlight the complexity of designing a monitoring strategy and the need for a systematic framework that compares the multitude of monitoring options. One of the main decisions is choosing between using real-world (observational) versus interventional data. Although the former is the most convenient source of monitoring data, it exhibits well-known biases, such as confounding, selection, and missingness. In fact, when the ML algorithm interacts with its environment, the algorithm itself may be a primary source of bias. On the other hand, a carefully designed interventional study that randomizes individuals can explicitly eliminate such biases, but the ethics, feasibility, and cost of such an approach must be carefully considered. Beyond the decision of the data source, monitoring strategies vary in the performance criteria they track, the interpretability of the test statistics, the strength of their assumptions, and their speed at detecting performance decay. As a first step towards developing a framework that compares the various monitoring options, we consider a case study of an ML-based risk prediction algorithm for postoperative nausea and vomiting (PONV). Bringing together tools from causal inference and statistical process control, we walk through the basic steps of defining candidate monitoring criteria, describing potential sources of bias and the causal model, and specifying and comparing candidate monitoring procedures. We hypothesize that these steps can be applied more generally, as techniques from causal inference can address other sources of biases as well.
Towards a Post-Market Monitoring Framework for Machine Learning-based Medical Devices: A case study
[ "Jean Feng", "Adarsh Subbaswamy", "Alexej Gossmann", "Harvineet Singh", "Berkman Sahiner", "Mi-Ok Kim", "Gene Pennello", "Nicholas Petrick", "Romain Pirracchio", "Fan Xia" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=K4m9g6AYcX
@inproceedings{ alag2023is, title={Is {EMA} Robust? Examining the Robustness of Data Auditing and a Novel Non-calibration Extension}, author={Ayush Alag and Yangsibo Huang and Kai Li}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=K4m9g6AYcX} }
Auditing data usage in machine learning models is crucial for regulatory compliance, especially with sensitive data like medical records. In this study, we scrutinize potential vulnerabilities within an acknowledged baseline method, Ensembled Membership Auditing (EMA), which employs membership inference attacks to determine if a specific model was trained using a particular dataset. We discover a novel False Negative Error Pattern in EMA when applied to large datasets, under adversarial methods like dropout, model pruning, and MemGuard. Our analysis across three datasets shows that larger convolutional models pose a greater challenge for EMA, but a novel metric-set analysis improves performance by up to $5\%$. To extend the applicability of our improvements, we introduce EMA-Zero, a GAN-based dataset auditing method that does not require an external calibration dataset. Notably, EMA-Zero performs comparably to EMA with synthetic calibration data trained on as few as 100 samples.
Is EMA Robust? Examining the Robustness of Data Auditing and a Novel Non-calibration Extension
[ "Ayush Alag", "Yangsibo Huang", "Kai Li" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=HiimakFghr
@inproceedings{ kyriakou2023algorithmically, title={Algorithmically Mediated User Relations: Exploring Data's Relationality in Recommender Systems}, author={Athina Kyriakou and Oana Inel and Asia Biega and Abraham Bernstein}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=HiimakFghr} }
Personalization services, such as recommender systems, operate on vast amounts of user-item interactions to provide personalized content. To do so, they identify patterns in the available interactions and group users based on pre-existing offline or online social relations, or algorithmically determined similarities and differences. We refer to the relations created between users based on algorithmically determined constructs as algorithmically mediated user relations. However, prior works in the fields of law, technology policy, and philosophy have identified a lack of existing algorithmic governance frameworks to account for this relational aspect of data analysis. Algorithmically mediated user relations have also not been adequately acknowledged in technical approaches, such as for data importance and privacy, where users are usually considered independent from one another. In this paper, we highlight this conceptual discrepancy in the context of recommendation algorithms and provide empirical evidence of the limitations of the user independence assumption. We discuss related implications and future practical directions for accounting for algorithmically mediated user relations.
Algorithmically Mediated User Relations: Exploring Data's Relationality in Recommender Systems
[ "Athina Kyriakou", "Oana Inel", "Asia Biega", "Abraham Bernstein" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=GjNRF5VTfn
@inproceedings{ kang2023scaling, title={Scaling up Trustless {DNN} Inference with Zero-Knowledge Proofs}, author={Daniel Kang and Tatsunori Hashimoto and Ion Stoica and Yi Sun}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=GjNRF5VTfn} }
As ML models have increased in capabilities and accuracy, so has the complexity of their deployments. Increasingly, ML model consumers are turning to service providers to serve the ML models in the ML-as-a-service (MLaaS) paradigm. As MLaaS proliferates, a critical requirement emerges: how can model consumers verify that the correct predictions were served, in the face of malicious, lazy, or buggy service providers? We present the first practical ImageNet-scale method to verify ML model inference non-interactively, i.e., after the inference has been done. To do so, we leverage recent developments in ZK-SNARKs (zero-knowledge succinct non-interactive arguments of knowledge), a form of zero-knowledge proofs. ZK-SNARKs allow us to verify ML model execution non-interactively and with only standard cryptographic hardness assumptions. We provide the first ZK-SNARK proof of valid inference for a full-resolution ImageNet model, achieving 79% top-5 accuracy, with verification taking as little as one second. We further use these ZK-SNARKs to design protocols to verify ML model execution in a variety of scenarios, including verifying MLaaS predictions, verifying MLaaS model accuracy, and using ML models for trustless retrieval. Together, our results show that ZK-SNARKs have the promise to make verified ML model inference practical.
Scaling up Trustless DNN Inference with Zero-Knowledge Proofs
[ "Daniel Kang", "Tatsunori Hashimoto", "Ion Stoica", "Yi Sun" ]
Workshop/RegML
2210.08674
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=9CYtG7db5A
@inproceedings{ cen2023outliers, title={Outliers Exist: What Happens if You are a Data-Driven Exception?}, author={Sarah Cen and Manish Raghavan}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=9CYtG7db5A} }
Data-driven tools are increasingly used to make consequential decisions. In recent years, they have begun to advise employers on which job applicants to interview, judges on which defendants to grant bail, lenders on which homeowners to give loans, and more. In such settings, different data-driven rules result in different decisions. The problem is, for every data-driven rule, there are exceptions. While a data-driven rule may be appropriate for some, it may not be appropriate for all. In this piece, we argue that existing frameworks do not fully encompass this view. As a result, individuals are often, through no fault of their own, made to bear the burden of being data-driven exceptions. We discuss how data-driven exceptions arise and provide a framework for understanding how we can relieve the burden on data-driven exceptions. Our framework requires balancing three considerations: individualization, uncertainty, and harm. Importantly, no single consideration trumps the rest. We emphasize the importance of uncertainty, advocating that decision-makers should utilize data-driven recommendations only if the levels of individualization and certainty are high enough to justify the potential harm resulting from those recommendations. We argue that data-driven decision-makers have a duty to consider the three components of our framework before making a decision, and connect these three components to existing methods.
Outliers Exist: What Happens if You are a Data-Driven Exception?
[ "Sarah Cen", "Manish Raghavan" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=8WH2t9F0Ip
@inproceedings{ wu2023membership, title={Membership Inference Attack on Diffusion Models via Quantile Regression}, author={Steven Wu and Shuai Tang and Sergul Aydore and Michael Kearns and Aaron Roth}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=8WH2t9F0Ip} }
Recently, diffusion models have demonstrated great potential for image synthesis due to their ability to generate high-quality synthetic data. However, when applied to sensitive data, privacy concerns have been raised about these models. In this paper, we evaluate the privacy risks of diffusion models through a \emph{membership inference (MI) attack}, which aims to identify whether a target example is in the training set when given the trained diffusion model. Our proposed MI attack learns a single quantile regression model that predicts (a quantile of) the distribution of reconstruction loss for each example. This enables us to identify a unique threshold on the reconstruction loss tailored to each example when determining their membership status. We show that our attack outperforms the prior state-of-the-art MI attack and avoids their high computational cost from training multiple shadow models. Consequently, our work enriches the set of practical tools for auditing the privacy risks of large-scale generative models.
Membership Inference Attack on Diffusion Models via Quantile Regression
[ "Steven Wu", "Shuai Tang", "Sergul Aydore", "Michael Kearns", "Aaron Roth" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=8NotCTD9cQ
@inproceedings{ liu2023prosac, title={{PROSAC}: Provably Safe Certification for Machine Learning Models under Adversarial Attacks}, author={Ziquan Liu and zhuo zhi and Ilija Bogunovic and Carsten Gerner-Beuerle and Miguel Rodrigues}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=8NotCTD9cQ} }
It is widely known that state-of-the-art machine learning models, including vision and language models, can be seriously compromised by adversarial perturbations, so it is also increasingly relevant to develop the capability to certify their performance in the presence of the most effective adversarial attacks. Our paper offers a new approach to certify the performance of machine learning models in the presence of adversarial attacks, with population-level risk guarantees. In particular, given a specific attack, we introduce the notion of an $(\alpha,\zeta)$ machine learning model safety guarantee: this guarantee, which is supported by a testing procedure based on the availability of a calibration set, entails that one will only declare that a machine learning model's adversarial (population) risk is less than $\alpha$ (i.e. the model is safe) given that the model's adversarial (population) risk is higher than $\alpha$ (i.e. the model is in fact unsafe), with probability less than $\zeta$. We also propose Bayesian optimization algorithms to determine very efficiently whether or not a machine learning model is $(\alpha,\zeta)$-safe in the presence of an adversarial attack, along with their statistical guarantees. We apply our framework to a range of machine learning models, including various sizes of vision Transformer (ViT) and ResNet models, impaired by a variety of adversarial attacks such as AutoAttack, SquareAttack and natural evolution strategy attack, in order to illustrate the merit of our approach. Of particular relevance, we show that ViTs are generally more robust to adversarial attacks than ResNets and ViT-large is more robust than smaller models. Overall, our approach goes beyond existing empirical adversarial risk based certification guarantees, paving the way to more effective AI regulation based on rigorous (and provable) performance guarantees.
PROSAC: Provably Safe Certification for Machine Learning Models under Adversarial Attacks
[ "Ziquan Liu", "zhuo zhi", "Ilija Bogunovic", "Carsten Gerner-Beuerle", "Miguel Rodrigues" ]
Workshop/RegML
2402.02629
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=75GEQc7Dmp
@inproceedings{ dai2023where, title={Where did you learn that?: Tracing the Impact of Training Data with Diffusion Model Ensembles}, author={Zheng Dai and Rui-Jie Yew and David Gifford}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=75GEQc7Dmp} }
The widespread adoption of diffusion models for creative uses such as image, video, and audio synthesis has raised serious legal and ethical concerns surrounding the use of training data and its regulation. Due to the size and complexity of these models, the effect of training data is difficult to characterize with existing methods, confounding regulatory efforts. In this work we propose a novel approach to trace the impact of training data using an encoded ensemble of diffusion models. In our approach, individual models in an ensemble are trained on encoded subsets of the overall training data to permit the identification of important training samples. The resulting ensemble allows us to efficiently remove the impact of any training sample. We demonstrate the viability of these ensembles for assessing influence and consider the regulatory implications of this work.
Where did you learn that?: Tracing the Impact of Training Data with Diffusion Model Ensembles
[ "Zheng Dai", "Rui-Jie Yew", "David Gifford" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=66nydPPVz7
@inproceedings{ brajovic2023merging, title={Merging ({EU})-Regulation and Model Reporting}, author={Danilo Brajovic and Vincent Philipp G{\"o}bels and Janika Kutz and Marco Huber}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=66nydPPVz7} }
Regulating AI systems remains a complex and unsolved issue despite years of active research. Various governmental approaches are currently underway, with the European AI Act being a significant initiative in this domain. In the absence of official regulations, researchers and developers have been exploring their own methods to ensure the secure application of AI systems. One well-established practice is the usage and documentation of AI applications through data and model cards. Although data and model cards do not explicitly address regulation, they are widely adopted in practice and share common characteristics with regulatory efforts. This paper presents an extended framework for reporting AI applications based on use-case, data, model and deployment cards, specifically designed to address upcoming regulations by the European Union. The proposed framework aligns with industry practices and provides comprehensive guidance for regulatory compliance and transparent reporting. By documenting the development process and addressing key requirements, the framework aims to support the responsible and accountable deployment of AI systems in line with EU regulations, positioning developers well for future legal requirements.
Merging (EU)-Regulation and Model Reporting
[ "Danilo Brajovic", "Vincent Philipp Göbels", "Janika Kutz", "Marco Huber" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=4s3F2Kf9KL
@inproceedings{ pi2023missing, title={Missing Value Chain in Generative {AI} Governance: China as an example}, author={Yulu Pi}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=4s3F2Kf9KL} }
We examined the world’s first regulation on Generative AI, China’s Provisional Administrative Measures of Generative Artificial Intelligence Services, which came into effect in August 2023. Our assessment reveals that the Measures, while recognizing the technical advances of generative AI and seeking to govern its full life cycle, presents unclear distinctions regarding different roles in the value chain of Generative AI including upstream foundation model providers and downstream deployers. The lack of distinction and clear legal status between different players in the AI value chain can have profound consequences. It can lead to ambiguity in accountability, potentially undermining the governance and overall success of Generative AI services.
Missing Value Chain in Generative AI Governance: China as an example
[ "Yulu Pi" ]
Workshop/RegML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=4pO1Axdaeu
@inproceedings{ johnson2023assessing, title={Assessing {AI} Impact Assessments: A Classroom Study}, author={Nari Johnson and Hoda Heidari}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=4pO1Axdaeu} }
Artificial Intelligence Impact Assessments ("AIIAs"), a family of tools that provide structured processes to imagine the possible impacts of a proposed AI system, have become an increasingly popular proposal to govern AI systems. Recent efforts from government or private-sector organizations have proposed many diverse instantiations of AIIAs, which take a variety of forms ranging from open-ended questionnaires to graded score-cards. However, to date there has been limited evaluation of existing AIIA instruments. We conduct a classroom study (N = 38) at a large research-intensive university (R1) in an elective course focused on the societal and ethical implications of AI. We assign students to different organizational roles (for example, an ML scientist or product manager) and ask participant teams to complete one of three existing AI impact assessments for one of two imagined generative AI systems. In our thematic analysis of participants' responses to pre- and post-activity questionnaires, we find preliminary evidence that impact assessments can influence participants' perceptions of the potential risks of generative AI systems, and the level of responsibility held by AI experts in addressing potential harm. We also discover a consistent set of limitations shared by several existing AIIA instruments, which we group into concerns about their format and content, as well as the feasibility and effectiveness of the activity in foreseeing and mitigating potential harms. Drawing on the findings of this study, we provide recommendations for future work on developing and validating AIIAs.
Assessing AI Impact Assessments: A Classroom Study
[ "Nari Johnson", "Hoda Heidari" ]
Workshop/RegML
2311.11193
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=2eJrIleV80
@inproceedings{ min2023silo, title={{SILO} Language Models: Isolating Legal Risk In a Nonparametric Datastore}, author={Sewon Min and Suchin Gururangan and Eric Wallace and Weijia Shi and Hannaneh Hajishirzi and Noah Smith and Luke Zettlemoyer}, booktitle={NeurIPS 2023 Workshop on Regulatable ML}, year={2023}, url={https://openreview.net/forum?id=2eJrIleV80} }
The legality of training language models (LMs) on copyrighted or otherwise restricted data is under intense debate. However, as we show, model performance significantly degrades if trained only on low-risk text (e.g., out-of-copyright books or government documents), due to its limited size and domain coverage. We present SILO, a new language model that manages this risk-performance tradeoff during inference. SILO is built by (1) training a parametric LM on the Open License Corpus (OLC), a new corpus we curate with 228B tokens of public domain and permissively licensed text and (2) augmenting it with a more general and easily modifiable nonparametric datastore (e.g., containing copyrighted books or news) that is only queried during inference. The datastore allows use of high-risk data without training on it, supports sentence-level data attribution, and enables data producers to opt out from the model by removing content from the store. These capabilities can foster compliance with data-use regulations such as the fair use doctrine in the United States and the GDPR in the European Union. Our experiments show that the parametric LM struggles on its own with domains not covered by OLC. However, access to the datastore greatly improves out of domain performance, closing 90% of the performance gap with an LM trained on the Pile, a more diverse corpus with mostly high-risk text. We also analyze which nonparametric approach works best, where the remaining errors lie, and how performance scales with datastore size. Our results suggest that it is possible to build high quality language models while mitigating legal risk.
SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore
[ "Sewon Min", "Suchin Gururangan", "Eric Wallace", "Weijia Shi", "Hannaneh Hajishirzi", "Noah Smith", "Luke Zettlemoyer" ]
Workshop/RegML
2308.04430
[ "https://github.com/kernelmachine/silo-lm" ]
https://huggingface.co/papers/2308.04430
3
9
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=zH9TwRVlGI
@inproceedings{ davies2023size, title={Size Matters: Large Graph Generation with Hi{GG}s}, author={Alex Owen Davies and Nirav Ajmeri and Telmo M Silva Filho}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=zH9TwRVlGI} }
Large graphs are present in a variety of domains, including social networks, civil infrastructure, and the physical sciences to name a few. Graph generation is similarly widespread, with applications in drug discovery, network analysis and synthetic datasets among others. While GNN (Graph Neural Network) models have been applied in these domains, their high in-memory costs restrict them to small graphs. Conversely, less costly rule-based methods struggle to reproduce complex structures. We propose HIGGS (Hierarchical Generation of Graphs) as a model-agnostic framework for producing large graphs with realistic local structures. HIGGS uses GNN models with conditional generation capabilities to sample graphs in hierarchies of resolution. As a result, HIGGS has the capacity to extend the scale of generated graphs from a given GNN model by quadratic order. As a demonstration, we implement HIGGS using DiGress, a recent graph-diffusion model, including a novel edge-predictive-diffusion variant edge-DiGress. We use this implementation to generate categorically attributed graphs with tens of thousands of nodes. These HIGGS-generated graphs are far larger than any previously produced using GNNs. Despite this jump in scale, we demonstrate that the graphs produced by HIGGS are, on the local scale, more realistic than those from the rule-based model BTER.
Size Matters: Large Graph Generation with HiGGs
[ "Alex Owen Davies", "Nirav Ajmeri", "Telmo M Silva Filho" ]
Workshop/SyntheticData4ML
2306.11412
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yYIL3uhQ8i
@inproceedings{ belkadi2023generating, title={Generating Medical Instructions with Conditional Transformer}, author={Samuel Belkadi and Nicolo Micheletti and Lifeng Han and Warren Del-Pinto and Goran Nenadic}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=yYIL3uhQ8i} }
Access to real-world medical instructions is essential for medical research and healthcare quality improvement. However, access to real medical instructions is often limited due to the sensitive nature of the information expressed. Additionally, manually labelling these instructions for training and fine-tuning Natural Language Processing (NLP) models can be tedious and expensive. We introduce a novel task-specific model architecture, Label-To-Text-Transformer (LT3), tailored to generate synthetic medical instructions based on provided labels, such as a vocabulary list of medications and their attributes. LT3 is trained on a vast corpus of medical instructions extracted from the MIMIC-III database, allowing the model to produce valuable synthetic medical instructions. We evaluate LT3's performance by contrasting it with a state-of-the-art Pre-trained Language Model (PLM), T5, analysing the quality and diversity of generated texts. We deploy the generated synthetic data to train the SpacyNER model for the Named Entity Recognition (NER) task over the n2c2-2018 dataset. The experiments show that the model trained on synthetic data can achieve a 96-98\% F1 score at Label Recognition on Drug, Frequency, Route, Strength, and Form. LT3 codes will be shared at \url{https://github.com/HECTA-UoM/Label-To-Text-Transformer}
Generating Medical Instructions with Conditional Transformer
[ "Samuel Belkadi", "Nicolo Micheletti", "Lifeng Han", "Warren Del-Pinto", "Goran Nenadic" ]
Workshop/SyntheticData4ML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=yJ1dQFVAgU
@inproceedings{ ashok2023mathbbscimathbbfix, title={$\mathbb{S}$ci$\mathbb{F}$ix: Outperforming {GPT}3 on Scientific Factual Error Correction}, author={Dhananjay Ashok and Atharva Kulkarni and Hai Pham and Barnabas Poczos}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=yJ1dQFVAgU} }
Due to the prohibitively high cost of creating error correction datasets, most Factual Claim Correction methods rely on a powerful verification model to guide the correction process. This leads to a significant drop in performance in domains like Scientific Claim Correction, where good verification models do not always exist. In this work we introduce SciFix, a claim correction system that does not require a verifier but is able to outperform existing methods by a considerable margin — achieving correction accuracy of 84% on the SciFact dataset, 77% on SciFact-Open and 72% on the CovidFact dataset, compared to next best accuracies of 7%, 5% and 15% on the same datasets respectively. Our method leverages the power of prompting with LLMs during training to create a richly annotated dataset that can be used for fully supervised training and regularization. We additionally use a claim-aware decoding procedure to improve the quality of corrected claims. Our method outperforms the very LLM that was used to generate the annotated dataset — with FewShot Prompting on GPT3.5 achieving 58%, 61% and 64% on the respective datasets, a consistently lower correction accuracy, despite using nearly 800 times as many parameters as our model.
𝕊ci𝔽ix: Outperforming GPT3 on Scientific Factual Error Correction
[ "Dhananjay Ashok", "Atharva Kulkarni", "Hai Pham", "Barnabas Poczos" ]
Workshop/SyntheticData4ML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=xeCXTPPwnX
@inproceedings{ sizikova2023knowledgebased, title={Knowledge-based in silico models and dataset for the comparative evaluation of mammography {AI}}, author={Elena Sizikova and Niloufar Saharkhiz and Diksha Sharma and Miguel Lago and Berkman Sahiner and Jana Gut Delfino and Aldo Badano}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=xeCXTPPwnX} }
To generate evidence regarding the safety and efficacy of artificial intelligence (AI) enabled medical devices, AI models need to be evaluated on a diverse population of patient cases, some of which may not be readily available. We propose an evaluation approach for testing medical imaging AI models that relies on in silico imaging pipelines in which stochastic digital models of human anatomy (in object space) with and without pathology are imaged using a digital replica imaging acquisition system to generate realistic synthetic image datasets. Here, we release M-SYNTH, a dataset of cohorts with four breast fibroglandular density distributions imaged at different exposure levels using Monte Carlo x-ray simulations with the publicly available Virtual Imaging Clinical Trial for Regulatory Evaluation (VICTRE) toolkit. We utilize the synthetic dataset to analyze AI model performance and find that model performance decreases with increasing breast density and increases with higher mass density, as expected. As exposure levels decrease, AI model performance drops with the highest performance achieved at exposure levels lower than the nominal recommended dose for the breast type.
Knowledge-based in silico models and dataset for the comparative evaluation of mammography AI
[ "Elena Sizikova", "Niloufar Saharkhiz", "Diksha Sharma", "Miguel Lago", "Berkman Sahiner", "Jana Gut Delfino", "Aldo Badano" ]
Workshop/SyntheticData4ML
[ "https://github.com/didsr/msynth-release" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=wK2y7ZhPvU
@inproceedings{ xu2023knowledgeinfused, title={Knowledge-Infused Prompting Improves Clinical Text Generation with Large Language Models}, author={Ran Xu and Hejie Cui and Yue Yu and Xuan Kan and Wenqi Shi and Yuchen Zhuang and Wei Jin and Joyce Ho and Carl Yang}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=wK2y7ZhPvU} }
Clinical natural language processing requires methods that can address domain-specific challenges, such as complex medical terminology and clinical contexts. Recently, large language models (LLMs) have shown promise in this domain. Yet, their direct deployment can lead to privacy issues and are constrained by resources. To address this challenge, we propose ClinGen, which infuses knowledge into synthetic clinical text generation using LLMs for clinical NLP tasks. Our model involves clinical knowledge extraction and context-informed LLM prompting. Both clinical topics and writing styles are drawn from external domain-specific knowledge graphs and LLMs to guide data generation. Extensive studies across 7 clinical NLP tasks and 16 datasets reveal that ClinGen consistently enhances performance across various tasks, effectively aligning the distribution of real datasets and enriching the diversity of generated training instances.
Knowledge-Infused Prompting Improves Clinical Text Generation with Large Language Models
[ "Ran Xu", "Hejie Cui", "Yue Yu", "Xuan Kan", "Wenqi Shi", "Yuchen Zhuang", "Wei Jin", "Joyce Ho", "Carl Yang" ]
Workshop/SyntheticData4ML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=uc9sYHRX6O
@inproceedings{ jain2023improving, title={Improving Code Style for Accurate Code Generation}, author={Naman Jain and Tianjun Zhang and Wei-Lin Chiang and Joseph E. Gonzalez and Koushik Sen and Ion Stoica}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=uc9sYHRX6O} }
Natural language to code generation is an important application area of LLMs and has received wide attention from the community. The majority of relevant studies have exclusively concentrated on increasing the quantity and functional correctness of training sets while disregarding other stylistic elements of programs. More recently, data quality has garnered a lot of interest and multiple works have showcased its importance for improving performance. In this work, we investigate data quality for code and find that making the code more structured and readable leads to improved code generation performance of the system. We build a novel data-cleaning pipeline that uses these principles to transform existing programs by 1.) renaming variables, 2.) modularizing and decomposing complex code into smaller helper sub-functions, and 3.) inserting natural-language based planning annotations. We evaluate our approach on two challenging algorithmic code generation benchmarks and find that fine-tuning CodeLLaMa-7B on our transformed programs improves the performance by up to \textbf{30\%} compared to fine-tuning on the original dataset. Additionally, we demonstrate improved performance from using a smaller amount of higher-quality data, finding that a model fine-tuned on the entire original dataset is outperformed by a model trained on one-eighth of our cleaned dataset. Even in comparison to closed-source models, our models outperform the much larger AlphaCode models.
Improving Code Style for Accurate Code Generation
[ "Naman Jain", "Tianjun Zhang", "Wei-Lin Chiang", "Joseph E. Gonzalez", "Koushik Sen", "Ion Stoica" ]
Workshop/SyntheticData4ML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qY2fTMsQug
@inproceedings{ namboori2023gemquad, title={Ge{MQ}u{AD} : Generating Multilingual Question Answering Datasets from Large Language Models using Few Shot Learning}, author={Amani Namboori and Shivam Sadashiv Mangale and Andy Rosenbaum and Saleh Soltan}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=qY2fTMsQug} }
The emergence of Large Language Models (LLMs) with capabilities like In-Context Learning (ICL) has ushered in new possibilities for data generation across various domains while minimizing the need for extensive data collection and modeling techniques. Researchers have explored ways to use this generated synthetic data to optimize smaller student models for reduced deployment costs and lower latency in downstream tasks. However, ICL-generated data often suffers from low quality as the task specificity is limited with few examples used in ICL. In this paper, we propose GeMQuAD - a semi-supervised learning approach, extending the WeakDAP framework, applied to a dataset generated through ICL with just one example in the target language using the AlexaTM 20B Seq2Seq LLM. Through our approach, we iteratively identify high-quality data to enhance model performance, especially in a low-resource multilingual setting in the context of the Extractive Question Answering task. Our framework surpasses the performance of a baseline model trained on an English-only dataset by 5.05/6.50 points in F1/Exact Match (EM) for Hindi and by 3.81/3.69 points in F1/EM for Spanish on the MLQA dataset. Notably, our approach uses a pre-trained LLM with no additional fine-tuning of the LLM, using only one annotated example in ICL to generate data, keeping the development process cost-effective.
GeMQuAD : Generating Multilingual Question Answering Datasets from Large Language Models using Few Shot Learning
[ "Amani Namboori", "Shivam Sadashiv Mangale", "Andy Rosenbaum", "Saleh Soltan" ]
Workshop/SyntheticData4ML
2404.09163
[ "" ]
https://huggingface.co/papers/2404.09163
2
0
2
4
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=khDkUPsDwj
@inproceedings{ chen2023edge, title={{EDGE}++: Improved Training and Sampling of {EDGE}}, author={Xiaohui Chen and Mingyang Wu and Liping Liu}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=khDkUPsDwj} }
Traditional graph-generative models like the Stochastic-Block Model (SBM) fall short in capturing complex structures inherent in large graphs. Recently developed deep learning models like NetGAN, CELL, and Variational Graph Autoencoders have made progress but face limitations in replicating key graph statistics. Diffusion-based methods such as EDGE have emerged as promising alternatives; however, they present challenges in computational efficiency and generative performance. In this paper, we propose enhancements to the EDGE model to address these issues. Specifically, we introduce a degree-specific noise schedule that optimizes the number of active nodes at each timestep, significantly reducing memory consumption. Additionally, we present an improved sampling scheme that fine-tunes the generative process, allowing for better control over the similarity between the synthesized and the true network. Our experimental results demonstrate that the proposed modifications not only improve the efficiency but also enhance the accuracy of the generated graphs, offering a robust and scalable solution for graph generation tasks.
EDGE++: Improved Training and Sampling of EDGE
[ "Xiaohui Chen", "Mingyang Wu", "Liping Liu" ]
Workshop/SyntheticData4ML
2310.14441
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=kc66PwD7St
@inproceedings{ dong2023conditional, title={Conditional Generative Modeling for High-dimensional Marked Temporal Point Processes}, author={Zheng Dong and Zekai Fan and Shixiang Zhu}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=kc66PwD7St} }
Recent advancements in generative modeling have made it possible to generate high-quality content from context information, but a key question remains: how to teach models to know when to generate content? To answer this question, this study proposes a novel event generative model that draws its statistical intuition from marked temporal point processes, and offers a clean, flexible, and computationally efficient solution for a wide range of applications involving the generation of asynchronous events with high-dimensional marks. We use a conditional generator that takes the history of events as input and generates the high-quality subsequent event that is likely to occur given the prior observations. The proposed framework offers a host of benefits, including considerable representational power to capture intricate dynamics in multi- or even high-dimensional event space, as well as exceptional efficiency in learning the model and generating samples. Our numerical results demonstrate superior performance compared to other state-of-the-art baselines.
Conditional Generative Modeling for High-dimensional Marked Temporal Point Processes
[ "Zheng Dong", "Zekai Fan", "Shixiang Zhu" ]
Workshop/SyntheticData4ML
2305.12569
[ "" ]
https://huggingface.co/papers/2305.12569
0
0
0
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=iBrPSYHQ7V
@inproceedings{ khullar2023synthetic, title={Synthetic Data Generation for Scarce Road Scene Detection Scenarios}, author={Dipika Khullar and Yash Shah and Ninad Kulkarni and Negin Sokhandan}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=iBrPSYHQ7V} }
Recent advancements in generative models have led to significant improvements in the quality of generated images, making them virtually indistinguishable from real ones. However, using AI generated images for training robust computer vision models for real-world applications, especially object detection in road scene perception, is still a challenge. AI generated images usually lack the required diversity and scene complexity where specific objects appear with critically low frequency in the available real datasets. An example of such applications is the detection of emergency vehicles like police cars, fire trucks, and ambulances in road scenes. These vehicles appear with drastically low frequencies in available datasets. Successfully generating synthetic images of road scenes that include these types of vehicles and using them in training downstream models would prove useful for autonomous driving vehicles, mitigating safety concerns on the road. To address this, this paper proposes a new approach for synthetically generating diverse, complex, and domain-compatible images of emergency vehicles in road scenes by employing a diffusion-based generative model pretrained on a generic dataset. We investigate the impact of using generated synthetic images in the performance of downstream object detection models. Finally, we thoroughly discuss challenges of generating synthetic datasets with the proposed approach.
Synthetic Data Generation for Scarce Road Scene Detection Scenarios
[ "Dipika Khullar", "Yash Shah", "Ninad Kulkarni", "Negin Sokhandan" ]
Workshop/SyntheticData4ML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=eDTdaE1yTG
@inproceedings{ jian2023stable, title={Stable Diffusion For Aerial Object Detection}, author={Yanan Jian and Fuxun Yu and Simranjit Singh and Dimitrios Stamoulis}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=eDTdaE1yTG} }
Aerial object detection is a challenging task, in which one major obstacle lies in the limitations of large-scale data collection and the long-tail distribution of certain classes. Synthetic data offers a promising solution, especially with recent advances in diffusion-based methods like stable diffusion (SD). However, the direct application of diffusion methods to aerial domains poses unique challenges: stable diffusion's optimization for rich ground-level semantics doesn't align with the sparse nature of aerial objects, and the extraction of post-synthesis object coordinates remains problematic. To address these challenges, we introduce a synthetic data augmentation framework tailored for aerial images. It encompasses sparse-to-dense region of interest (ROI) extraction to bridge the semantic gap, fine-tuning the diffusion model with low-rank adaptation (LORA) to circumvent exhaustive retraining, and finally, a Copy-Paste method to compose synthesized objects with backgrounds, providing a nuanced approach to aerial object detection through synthetic data.
Stable Diffusion For Aerial Object Detection
[ "Yanan Jian", "Fuxun Yu", "Simranjit Singh", "Dimitrios Stamoulis" ]
Workshop/SyntheticData4ML
2311.12345
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=bu6Fo3k2IS
@inproceedings{ wei2023intags, title={{INTAGS}: Interactive Agent-Guided Simulation}, author={Song Wei and Andrea Coletta and Svitlana Vyetrenko and Tucker Balch}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=bu6Fo3k2IS} }
The development of a realistic agent-based simulator (ABS) remains a challenging task, mainly due to the sequential and dynamic nature of such a multi-agent system (MAS). To fill this gap, this work proposes a metric to distinguish between real and synthetic multi-agent systems. The metric evaluation depends on the live interaction between the {\it experimental (Exp) autonomous agent} and {\it background (BG) agent(s)}, explicitly accounting for the systems' sequential and dynamic nature. Specifically, we propose to characterize the system/environment by studying the effect of a sequence of BG agents' responses to the environment state evolution, and we take such effects' differences as the MAS distance metric. The effect estimation is cast as a causal inference problem since the environment evolution is confounded with the previous environment state. Importantly, we propose the \underline{Int}eractive \underline{A}gent-\underline{G}uided \underline{S}imulation (INTAGS) framework to build a realistic simulator by optimizing over this novel metric. To adapt to any environment with interactive sequential decision making agents, INTAGS formulates the simulator as a stochastic policy in reinforcement learning. Moreover, INTAGS utilizes the policy gradient update to bypass differentiating the proposed metric such that it can support non-differentiable operations of multi-agent environments. Through extensive experiments, we demonstrate the effectiveness of INTAGS on an equity stock market simulation example.
INTAGS: Interactive Agent-Guided Simulation
[ "Song Wei", "Andrea Coletta", "Svitlana Vyetrenko", "Tucker Balch" ]
Workshop/SyntheticData4ML
2309.01784
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=bpO14yuQQg
@inproceedings{ rosenbaum2023calico, title={{CALICO}: Conversational Agent Localization via Synthetic Data Generation}, author={Andy Rosenbaum and Pegah Kharazmi and Ershad Banijamali and Lu Zeng and Christopher DiPersio and Pan Wei and Gokmen Oz and Clement Chung and Karolina Owczarzak and Fabian Triefenbach and Wael Hamza}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=bpO14yuQQg} }
We present CALICO, a method to fine-tune Large Language Models (LLMs) to localize conversational agent training data from one language to another. For slots (named entities), CALICO supports three operations: verbatim copy, literal translation, and localization, i.e. generating slot values more appropriate in the target language, such as city and airport names located in countries where the language is spoken. Furthermore, we design an iterative filtering mechanism to discard noisy generated samples, which we show boosts the performance of the downstream conversational agent. To prove the effectiveness of CALICO, we build and release a new human-localized (HL) version of the MultiATIS++ travel information test set in 8 languages. Compared to the original human-translated (HT) version of the test set, we show that our new HL version is more challenging. We also show that CALICO out-performs state-of-the-art LINGUIST (which relies on literal slot translation out of context) both on the HT case, where CALICO generates more accurate slot translations, and on the HL case, where CALICO generates localized slots which are closer to the HL test set.
CALICO: Conversational Agent Localization via Synthetic Data Generation
[ "Andy Rosenbaum", "Pegah Kharazmi", "Ershad Banijamali", "Lu Zeng", "Christopher DiPersio", "Pan Wei", "Gokmen Oz", "Clement Chung", "Karolina Owczarzak", "Fabian Triefenbach", "Wael Hamza" ]
Workshop/SyntheticData4ML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=YU228ZUCOU
@inproceedings{ wang2023improving, title={Improving fairness for spoken language understanding in atypical speech with Text-to-Speech}, author={Helin Wang and Venkatesh Ravichandran and Milind Rao and Becky Lammers and Myra Sydnor and Nicholas Maragakis and Ankur A. Butala and Jayne Zhang and Lora Clawson and Victoria Chovaz and Laureano Moro-Velazquez}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=YU228ZUCOU} }
Spoken language understanding (SLU) systems often exhibit suboptimal performance in processing atypical speech, typically caused by neurological conditions and motor impairments. Recent advancements in Text-to-Speech (TTS) synthesis-based augmentation for more fair SLU have struggled to accurately capture the unique vocal characteristics of atypical speakers, largely due to insufficient data. To address this issue, we present a novel data augmentation method for atypical speakers by finetuning a TTS model, called Aty-TTS. Aty-TTS models speaker and atypical characteristics via knowledge transferring from a voice conversion model. Then, we use the augmented data to train SLU models adapted to atypical speech. To train these data augmentation models and evaluate the resulting SLU systems, we have collected a new atypical speech dataset containing intent annotation. Both objective and subjective assessments validate that Aty-TTS is capable of generating high-quality atypical speech. Furthermore, it serves as an effective data augmentation strategy, contributing to more fair SLU systems that can better accommodate individuals with atypical speech patterns.
Improving fairness for spoken language understanding in atypical speech with Text-to-Speech
[ "Helin Wang", "Venkatesh Ravichandran", "Milind Rao", "Becky Lammers", "Myra Sydnor", "Nicholas Maragakis", "Ankur A. Butala", "Jayne Zhang", "Lora Clawson", "Victoria Chovaz", "Laureano Moro-Velazquez" ]
Workshop/SyntheticData4ML
2311.10149
[ "https://github.com/wanghelin1997/aty-tts" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=Xr13v66xxT
@inproceedings{ hoorn2023generating, title={Generating Privacy-Preserving Longitudinal Synthetic Data}, author={Robin van Hoorn and Tom Bakkes and Zoi Tokoutsi and Ymke de Jong and R. Arthur Bouwman and Mykola Pechenizkiy}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=Xr13v66xxT} }
Before synthetic data (SD) generators are able to generate entire electronic health records, many challenges still have to be tackled. One of these challenges is to generate both privacy-preserving and longitudinal SD. This research combines the research streams of longitudinal SD and privacy-preserving static SD and presents a novel GAN architecture called Time-ADS-GAN. Time-ADS-GAN outperforms current state-of-the-art models on both utility and privacy on three datasets and is able to reproduce the results of a healthcare study significantly better than TimeGAN. As a second contribution, a variation of the $\epsilon$-identifiability metric is introduced and used in the analysis.
Generating Privacy-Preserving Longitudinal Synthetic Data
[ "Robin van Hoorn", "Tom Bakkes", "Zoi Tokoutsi", "Ymke de Jong", "R. Arthur Bouwman", "Mykola Pechenizkiy" ]
Workshop/SyntheticData4ML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=XhxOCXlXSh
@inproceedings{ suh2023autodiff, title={AutoDiff: combining Auto-encoder and Diffusion model for tabular data synthesizing}, author={Namjoon Suh and Xiaofeng Lin and Din-Yin Hsieh and Mehrdad Honarkhah and Guang Cheng}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=XhxOCXlXSh} }
Diffusion models have become a main paradigm for synthetic data generation in many subfields of modern machine learning, including computer vision, language modeling, and speech synthesis. In this paper, we leverage the power of diffusion models for generating synthetic tabular data. The heterogeneous features in tabular data have been the main obstacles in tabular data synthesis, and we tackle this problem by employing the auto-encoder architecture. When compared with the state-of-the-art tabular synthesizers, the resulting synthetic tables from our model show nice statistical fidelities to the real data, and perform well in downstream tasks for machine learning utilities. We conducted the experiments over $15$ publicly available datasets. Notably, our model adeptly captures the correlations among features, which has been a long-standing challenge in tabular data synthesis. Our code is available upon request and will be publicly released if the paper is accepted.
AutoDiff: combining Auto-encoder and Diffusion model for tabular data synthesizing
[ "Namjoon Suh", "Xiaofeng Lin", "Din-Yin Hsieh", "Mehrdad Honarkhah", "Guang Cheng" ]
Workshop/SyntheticData4ML
2310.15479
[ "https://github.com/ucla-trustworthy-ai-lab/autodiffusion" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=UQ42dE8gKr
@inproceedings{ dua2023towards, title={Towards Effective Synthetic Data Sampling for Domain Adaptive Pose Estimation}, author={Isha Dua and Arjun Sharma and Shuaib Ahmed and Rahul Tallamraju}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=UQ42dE8gKr} }
In this paper, we investigate a synthetic data sampling approach towards unsupervised domain adaptation (UDA) for pose estimation. UDA is characterized by a labeled source domain and an unlabeled target domain. We observe that recent work in UDA for pose estimation fails to generalize across poses in target data, despite having support for such poses in the source data. We hypothesize that this failure to generalize is due to a lack of uniform support across poses of varying complexity in the source domain. Motivated by this challenge, we aim to sample and train with the source domain data to improve the domain adaptation performance on a target domain. The proposed sampling strategy sorts the source domain samples based on a difficulty score, which reflects the lack of uniform support across varying pose complexity in the source domain. The difficulty score is a reconstruction error obtained from training an auto-encoder on the source domain poses. We categorize the dataset into closely related groups using this score. Selectively training from all or some of these groups helps us to better utilize the source pose distribution. Finally, current pose estimation evaluation metrics do not effectively measure the ability of the model to learn the geometry of pose. We evaluate our approach qualitatively and quantitatively on benchmark datasets. Our sampling strategy outperforms the existing state-of-the-art for domain adaptation.
Towards Effective Synthetic Data Sampling for Domain Adaptive Pose Estimation
[ "Isha Dua", "Arjun Sharma", "Shuaib Ahmed", "Rahul Tallamraju" ]
Workshop/SyntheticData4ML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=Too7FleWsa
@inproceedings{ xiong2023fair, title={Fair Wasserstein Coresets}, author={Zikai Xiong and Niccolo Dalmasso and Vamsi K. Potluru and Tucker Balch and Manuela Veloso}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=Too7FleWsa} }
Recent technological advancements have given rise to the ability to collect vast amounts of data, which often exceed the capacity of commonly used machine learning algorithms. Approaches such as coresets and synthetic data distillation have emerged as frameworks to generate a smaller, yet representative, set of samples for downstream training. As machine learning is increasingly applied to decision-making processes, it becomes imperative for modelers to consider and address biases in the data concerning subgroups defined by factors like race, gender, or other sensitive attributes. Current approaches focus on creating fair synthetic representative samples by optimizing local properties relative to the original samples. These methods, however, are not guaranteed to positively affect the performance or fairness of downstream learning processes. In this work, we present Fair Wasserstein Coresets (FWC), a novel coreset approach which generates fair synthetic representative samples along with sample-level weights to be used in downstream learning tasks. FWC aims to minimize the Wasserstein distance between the original datasets and the weighted synthetic samples while enforcing (an empirical version of) demographic parity, a prominent criterion for algorithmic fairness, via a linear constraint. We show that FWC can be thought of as a constrained version of Lloyd's algorithm for k-medians or k-means clustering. Our experiments, conducted on both synthetic and real datasets, demonstrate the scalability of our approach and highlight the competitive performance of FWC compared to existing fair clustering approaches, even when attempting to enhance the fairness of the latter through fair pre-processing techniques.
Fair Wasserstein Coresets
[ "Zikai Xiong", "Niccolo Dalmasso", "Vamsi K. Potluru", "Tucker Balch", "Manuela Veloso" ]
Workshop/SyntheticData4ML
2311.05436
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=TTCIZunOVM
@inproceedings{ trabucco2023effective, title={Effective Data Augmentation With Diffusion Models}, author={Brandon Trabucco and Kyle Doherty and Max Gurinas and Ruslan Salakhutdinov}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=TTCIZunOVM} }
Data augmentation is one of the most prevalent tools in deep learning, underpinning many recent advances, including those from classification, generative models, and representation learning. The standard approach to data augmentation combines simple transformations like rotations and flips to generate new images from existing ones. However, these new images lack diversity along key semantic axes present in the data. Current augmentations cannot alter the high-level semantic attributes, such as animal species present in a scene, to enhance the diversity of data. We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models. Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples. We evaluate our approach on few-shot image classification tasks, and on a real-world weed recognition task, and observe an improvement in accuracy in tested domains.
Effective Data Augmentation With Diffusion Models
[ "Brandon Trabucco", "Kyle Doherty", "Max Gurinas", "Ruslan Salakhutdinov" ]
Workshop/SyntheticData4ML
2302.07944
[ "https://github.com/brandontrabucco/da-fusion" ]
https://huggingface.co/papers/2302.07944
2
0
0
4
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=Rk5WoEETTU
@inproceedings{ mueller2023continuous, title={Continuous Diffusion for Mixed-Type Tabular Data}, author={Markus Mueller and Kathrin Gruber and Dennis Fok}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=Rk5WoEETTU} }
Score-based generative models or diffusion models have proven successful across many domains in generating texts and images. However, the consideration of mixed-type tabular data with this model family has fallen short so far. Existing research mainly combines continuous and categorical diffusion processes and does not explicitly account for the feature heterogeneity inherent to tabular data. In this paper, we combine score matching and score interpolation to ensure a common type of continuous noise distribution that affects both continuous and categorical features. Further, we investigate the impact of distinct noise schedules per feature or per data type. We allow for adaptive, learnable noise schedules to ensure optimally allocated model capacity and balanced generative capability. Results show that our model outperforms the benchmark models consistently and that accounting for heterogeneity within the noise schedule design boosts sample quality.
Continuous Diffusion for Mixed-Type Tabular Data
[ "Markus Mueller", "Kathrin Gruber", "Dennis Fok" ]
Workshop/SyntheticData4ML
2312.10431
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=RVFuoTm8sx
@inproceedings{ smith2023balancing, title={Balancing the Picture: Debiasing Vision-Language Datasets with Synthetic Contrast Sets}, author={Brandon Abreu Smith and Miguel Farinha and Siobhan Mackenzie Hall and Hannah Rose Kirk and Aleksandar Shtedritski and Max Bain}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=RVFuoTm8sx} }
Vision-language models are growing in popularity and public visibility to generate, edit, and caption images at scale, but their outputs can perpetuate and amplify societal biases learned during pre-training on uncurated image-text pairs from the internet. Although debiasing methods have been proposed, we argue that these measurements of model bias lack validity due to dataset bias. We demonstrate there are spurious correlations in COCO Captions, the most commonly used dataset for evaluating bias, between background context and the gender of people in-situ. This is problematic because commonly-used bias metrics (such as Bias@K) rely on per-gender base rates. To address this issue, we propose a novel dataset debiasing pipeline to augment the COCO dataset with synthetic, gender-balanced contrast sets, where only the gender of the subject is edited and the background is fixed. As existing image editing methods have limitations and sometimes produce low-quality images, we introduce a method to automatically filter the generated images based on their similarity to real images. Using our balanced synthetic contrast sets, we benchmark bias in multiple CLIP-based models, demonstrating how metrics are skewed by imbalance in the original COCO images. Our results indicate that the proposed approach improves the validity of the evaluation, ultimately contributing to a more realistic understanding of bias in CLIP.
Balancing the Picture: Debiasing Vision-Language Datasets with Synthetic Contrast Sets
[ "Brandon Abreu Smith", "Miguel Farinha", "Siobhan Mackenzie Hall", "Hannah Rose Kirk", "Aleksandar Shtedritski", "Max Bain" ]
Workshop/SyntheticData4ML
2305.15407
[ "https://github.com/oxai/debias-gensynth" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=LS3aKnm7fw
@inproceedings{ benarous2023harnessing, title={Harnessing Synthetic Datasets: The Role of Shape Bias in Deep Neural Network Generalization}, author={Elior Benarous and Sotiris Anagnostidis and Luca Biggio and Thomas Hofmann}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=LS3aKnm7fw} }
Recent advancements in deep learning have been primarily driven by the use of large models trained on increasingly vast datasets. While neural scaling laws have emerged to predict network performance given a specific level of computational resources, the growing demand for expansive datasets raises concerns. To address this, a new research direction has emerged, focusing on the creation of synthetic data as a substitute. In this study, we investigate how neural networks exhibit shape bias during training on synthetic datasets, serving as an indicator of the synthetic data quality. Specifically, our findings indicate three key points: (1) Shape bias varies across network architectures and types of supervision, casting doubt on its reliability as a predictor for generalization and its ability to explain differences in model recognition compared to human capabilities. (2) Relying solely on shape bias to estimate generalization is unreliable, as it is entangled with diversity and naturalism. (3) We propose a novel interpretation of shape bias as a tool for estimating the diversity of samples within a dataset. Our research aims to clarify the implications of using synthetic data and its associated shape bias in deep learning, addressing concerns regarding generalization and dataset quality.
Harnessing Synthetic Datasets: The Role of Shape Bias in Deep Neural Network Generalization
[ "Elior Benarous", "Sotiris Anagnostidis", "Luca Biggio", "Thomas Hofmann" ]
Workshop/SyntheticData4ML
2311.06224
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=LJhhovnxVJ
@inproceedings{ kim2023carpe, title={Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models}, author={Yujin Kim and Jaehong Yoon and Seonghyeon Ye and Sung Ju Hwang and Se-Young Yun}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=LJhhovnxVJ} }
In an ever-evolving world, the dynamic nature of knowledge presents challenges for language models that are trained on static data, leading to outdated encoded information. However, real-world scenarios require models not only to acquire new knowledge but also to overwrite outdated information into updated ones. To address this under-explored issue, we introduce the temporally evolving question answering benchmark, EvolvingQA - a novel benchmark designed for training and evaluating LMs on an evolving Wikipedia database, where the construction of our benchmark is automated with our pipeline using large language models. Our benchmark incorporates question-answering as a downstream task to emulate real-world applications. Through EvolvingQA, we uncover that existing continual learning baselines have difficulty in updating and forgetting outdated knowledge. Our findings suggest that the models fail to learn updated knowledge due to the small weight gradient. Furthermore, we elucidate that the models struggle mostly on providing numerical or temporal answers to questions asking for updated knowledge. Our work aims to model the dynamic nature of real-world information, offering a robust measure for the evolution-adaptability of language models. Our data construction code and dataset files are available at https://github.com/kimyuji/EvolvingQA_benchmark.
Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models
[ "Yujin Kim", "Jaehong Yoon", "Seonghyeon Ye", "Sung Ju Hwang", "Se-Young Yun" ]
Workshop/SyntheticData4ML
2311.08106
[ "https://github.com/kimyuji/evolvingqa_benchmark" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=IlJ3motcJw
@inproceedings{ yuan2023learning, title={Learning to Place Objects into Scenes by Hallucinating Scenes around Objects}, author={Lu Yuan and James Hong and Vishnu Sarukkai and Kayvon Fatahalian}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=IlJ3motcJw} }
The ability to modify images to add new objects into a scene stands to be a powerful image editing control. However, object insertion is not robustly supported by existing diffusion-based image editing methods. The central challenge is predicting where an object should go in a scene, given only an image of the scene. To address this challenge, we propose DreamPlace, a two-step method that inserts objects of a given class into images by 1) predicting where the object is likely to go in the image and 2) inpainting the object at this location. We train our object placement model solely using synthetic data, leveraging diffusion-based image outpainting to hallucinate novel images of scenes surrounding a given object. DreamPlace, using its learned placement model, can produce qualitatively more realistic object insertion edits than comparable diffusion-based baselines. Moreover, for a limited set of object categories where benchmark annotations exist, our learned object placement model, despite being trained entirely on generated data, makes up to 35% more accurate object placements than the state-of-the-art supervised method trained on a large, manually annotated dataset (>80k annotated samples).
Learning to Place Objects into Scenes by Hallucinating Scenes around Objects
[ "Lu Yuan", "James Hong", "Vishnu Sarukkai", "Kayvon Fatahalian" ]
Workshop/SyntheticData4ML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=HtHpbAusYI
@inproceedings{ kabra2023evaluating, title={Evaluating {VLM}s for Score-Based, Multi-Probe Annotation of 3D Objects}, author={Rishabh Kabra and Loic Matthey and Alexander Lerchner and Niloy Mitra}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=HtHpbAusYI} }
Unlabeled 3D objects present an opportunity to leverage pretrained vision language models (VLMs) on a range of annotation tasks---from describing object semantics to physical properties. An accurate response must take into account the full appearance of the object in 3D, various ways of phrasing the question/prompt, and changes in other factors that affect the response. We present a method, to marginalize over arbitrary factors varied across VLM queries, which relies on the VLM’s scores for sampled responses. We first show that this aggregation method can outperform a language model (e.g., GPT4) for summarization, for instance avoiding hallucinations when there are contrasting details between responses. Secondly, we show that aggregated annotations are useful for prompt-chaining; they help improve downstream VLM predictions (e.g., of object material when the object’s type is specified as an auxiliary input in the prompt). Such auxiliary inputs allow ablating and measuring the contribution of visual reasoning over language-only reasoning. Using these evaluations, we show that VLMs approach the quality of human-verified annotations on both type and material inference on the large-scale Objaverse dataset.
Evaluating VLMs for Score-Based, Multi-Probe Annotation of 3D Objects
[ "Rishabh Kabra", "Loic Matthey", "Alexander Lerchner", "Niloy Mitra" ]
Workshop/SyntheticData4ML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=HbU5QuPZj6
@inproceedings{ krchova2023strong, title={Strong statistical parity through fair synthetic data}, author={Ivona Krchova and Michael Platzer and Paul Tiwald}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=HbU5QuPZj6} }
AI-generated synthetic data, in addition to protecting the privacy of original data sets, allows users and data consumers to tailor data to their needs. This paper explores the creation of synthetic data that embodies Fairness by Design, focusing on the statistical parity fairness definition. By equalizing the learned target probability distributions of the synthetic data generator across sensitive attributes, a downstream model trained on such synthetic data provides fair predictions across all thresholds, that is, strong fair predictions even when inferring from biased, original data. This fairness adjustment can be either directly integrated into the sampling process of a synthetic generator or added as a post-processing step. The flexibility allows data consumers to create fair synthetic data and fine-tune the trade-off between accuracy and fairness without any previous assumptions on the data or re-training the synthetic data generator.
Strong statistical parity through fair synthetic data
[ "Ivona Krchova", "Michael Platzer", "Paul Tiwald" ]
Workshop/SyntheticData4ML
2311.03000
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=HVrDgZa5hh
@inproceedings{ yamaguchi2023on, title={On the Limitation of Diffusion Models for Synthesizing Training Datasets}, author={Shin'ya Yamaguchi and Takuma Fukuda}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=HVrDgZa5hh} }
Synthetic samples from diffusion models are promising for use in training discriminative models as replications of real training datasets. However, we found that the synthetic datasets degrade classification performance over real datasets even when using state-of-the-art diffusion models. This means that modern diffusion models do not perfectly represent the data distribution for the purpose of replicating datasets for training discriminative tasks. This paper investigates the gap between synthetic and real samples by analyzing the synthetic samples reconstructed from real samples through the diffusion and reverse process. By varying the time steps starting the reverse process in the reconstruction, we can control the trade-off between the information in the original real data and the information added by diffusion models. Through assessing the reconstructed samples and trained models, we found that the synthetic data are concentrated in modes of the training data distribution as the reverse step increases, and thus, they struggle to cover the outer edges of the distribution. Our findings imply that modern diffusion models are insufficient to replicate the training data distribution perfectly, and there is room for the improvement of generative modeling in the replication of training datasets.
On the Limitation of Diffusion Models for Synthesizing Training Datasets
[ "Shin'ya Yamaguchi", "Takuma Fukuda" ]
Workshop/SyntheticData4ML
2311.13090
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=HAJ6K0Sn5b
@inproceedings{ ma2023star, title={{STAR}: Improving Low-Resource Information Extraction by Structure-to-Text Data Generation with Large Language Models}, author={Mingyu Derek Ma and Xiaoxuan Wang and Po-Nien Kung and P. Jeffrey Brantingham and Nanyun Peng and Wei Wang}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=HAJ6K0Sn5b} }
Information extraction tasks such as event extraction require an in-depth understanding of the output structure and sub-task dependencies. They heavily rely on task-specific training data in the form of (passage, target structure) pairs to obtain reasonable performance. However, obtaining such data through human annotation is costly, leading to a pressing need for low-resource information extraction approaches that require minimal human labeling for real-world applications. Fine-tuning supervised models with synthesized training data would be a generalizable method, but the existing data generation methods either still rely on large-scale ground-truth data or cannot be applied to complicated IE tasks due to their poor performance. To address these challenges, we propose STAR, a data generation method that leverages Large Language Models (LLMs) to synthesize data instances given limited seed demonstrations, thereby boosting low-resource information extraction performance. Our approach involves generating target structures (Y) followed by generating passages (X), all accomplished with the aid of LLMs. We design fine-grained step-by-step instructions to obtain the initial data instances. We further reduce errors and improve data quality through self-reflection error identification and self-refinement with iterative revision. Our experiments show that the data generated by STAR significantly improves the performance of low-resource event extraction and relation extraction tasks, even surpassing the effectiveness of human-curated data. Human assessment of the data quality shows that STAR-generated data exhibits higher passage quality and aligns better with the task definitions compared with the human-curated data.
STAR: Improving Low-Resource Information Extraction by Structure-to-Text Data Generation with Large Language Models
[ "Mingyu Derek Ma", "Xiaoxuan Wang", "Po-Nien Kung", "P. Jeffrey Brantingham", "Nanyun Peng", "Wei Wang" ]
Workshop/SyntheticData4ML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=GFoS5D9s6I
@inproceedings{ hemmat2023feedbackguided, title={Feedback-guided Data Synthesis for Imbalanced Classification}, author={Reyhane Askari Hemmat and Mohammad Pezeshki and Florian Bordes and Michal Drozdzal and Adriana Romero-Soriano}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=GFoS5D9s6I} }
The current status quo in machine learning is to use static datasets of real images for training, which often come from long-tailed distributions. With the recent advances in generative models, researchers have started augmenting these static datasets with synthetic data, reporting moderate performance improvements on classification tasks. We hypothesize that these performance gains are limited by the lack of feedback from the classifier to the generative model, which would promote the usefulness of the generated samples to improve the classifier’s performance. In this work, we introduce a framework for augmenting static datasets with useful synthetic samples, which leverages one-shot feedback from the classifier to drive the sampling of the generative model. In order for the framework to be effective, we find that the samples must be close to the support of the real data of the task at hand, and be sufficiently diverse. We validate three feedback criteria on a long-tailed dataset (ImageNet-LT) as well as a group-imbalanced dataset (NICO++). On ImageNet-LT, we achieve state-of-the-art results, with over 4% improvement on underrepresented classes while being twice as efficient in terms of the number of generated synthetic samples. NICO++ also enjoys marked boosts of over 5% in worst group accuracy. With these results, our framework paves the path towards effectively leveraging state-of-the-art text-to-image models as data sources that can be queried to improve downstream applications.
Feedback-guided Data Synthesis for Imbalanced Classification
[ "Reyhane Askari Hemmat", "Mohammad Pezeshki", "Florian Bordes", "Michal Drozdzal", "Adriana Romero-Soriano" ]
Workshop/SyntheticData4ML
2310.00158
[ "https://github.com/facebookresearch/feedback-guided-data-synthesis" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=EWvT2SWskY
@inproceedings{ mishra2023synthetic, title={Synthetic Imitation Edit Feedback for Factual Alignment in Clinical Summarization}, author={Prakamya Mishra and Zonghai Yao and shuwei chen and Beining Wang and Rohan Mittal and hong yu}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=EWvT2SWskY} }
Large Language Models (LLMs) like the GPT and LLaMA families have demonstrated exceptional capabilities in capturing and condensing critical contextual information and achieving state-of-the-art performance in the summarization task. However, community concerns about these models' hallucination issues continue to rise. LLMs sometimes generate factually hallucinated summaries, which can be extremely harmful in clinical domain NLP tasks (e.g., clinical note summarization), where factually incorrect statements can lead to critically erroneous diagnoses. Fine-tuning LLMs using human feedback has shown the promise of aligning LLMs to be factually consistent during generation, but such a training procedure requires high-quality human-annotated data, which can be extremely expensive to obtain in the clinical domain. In this work, we propose a new pipeline using ChatGPT instead of human experts to generate high-quality feedback data for improving factual consistency in the clinical note summarization task. We focus specifically on edit feedback because recent work discusses the shortcomings of human alignment via preference feedback in complex situations (such as clinical NLP tasks that require extensive expert knowledge), as well as some advantages of collecting edit feedback from domain experts. In addition, although GPT has reached the expert level in many clinical NLP tasks (e.g., USMLE QA), there is not much previous work discussing whether GPT can generate expert-level edit feedback for LMs in the clinical note summarization task. We hope to fill this gap. Finally, our evaluations demonstrate the potential use of GPT edits in human alignment, especially from a factuality perspective.
Synthetic Imitation Edit Feedback for Factual Alignment in Clinical Summarization
[ "Prakamya Mishra", "Zonghai Yao", "shuwei chen", "Beining Wang", "Rohan Mittal", "hong yu" ]
Workshop/SyntheticData4ML
2310.20033
[ "https://github.com/seasonyao/learnfromhumanedit" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=DO8YT1pt4L
@inproceedings{ boudewijn2023privacy, title={Privacy Measurements in Tabular Synthetic Data: State of the Art and Future Research Directions}, author={Alexander Theodorus Petrus Boudewijn and Andrea Filippo Ferraris and Daniele Panfilo and Vanessa Cocca and Sabrina Zinutti and Karel De Schepper and Carlo Rossi Chauvenet}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=DO8YT1pt4L} }
Synthetic data (SD) have garnered attention as a privacy enhancing technology. Unfortunately, there is no standard for assessing their degree of privacy protection. In this paper, we discuss proposed assessment approaches. This contributes to the development of SD privacy standards; stimulates multi-disciplinary discussion; and helps SD researchers make informed modeling and evaluation decisions.
Privacy Measurements in Tabular Synthetic Data: State of the Art and Future Research Directions
[ "Alexander Theodorus Petrus Boudewijn", "Andrea Filippo Ferraris", "Daniele Panfilo", "Vanessa Cocca", "Sabrina Zinutti", "Karel De Schepper", "Carlo Rossi Chauvenet" ]
Workshop/SyntheticData4ML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7Ra8cc6XFb
@inproceedings{ r{\"a}is{\"a}2023on, title={On Consistent Bayesian Inference from Synthetic Data}, author={Ossi R{\"a}is{\"a} and Joonas J{\"a}lk{\"o} and Antti Honkela}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=7Ra8cc6XFb} }
Generating synthetic data, with or without differential privacy, has attracted significant attention as a potential solution to the dilemma between making data easily available, and the privacy of data subjects. Several works have shown that consistency of downstream analyses from synthetic data, including accurate uncertainty estimation, requires accounting for the synthetic data generation. There are very few methods of doing so, most of them for frequentist analysis. In this paper, we study how to perform consistent Bayesian inference from synthetic data. We prove that mixing posterior samples obtained separately from multiple large synthetic datasets converges to the posterior of the downstream analysis under standard regularity conditions when the analyst's model is compatible with the data provider's model. We also present several examples showing how the theory works in practice, and showing how Bayesian inference can fail when the compatibility assumption is not met, or the synthetic dataset is not significantly larger than the original.
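The abstract's core recipe, mixing posterior samples drawn separately from multiple large synthetic datasets, can be sketched as below. The `draw_posterior_samples` routine is an assumed user-supplied sampler for the downstream analysis; the toy conjugate example assumes a normal-mean model with known unit variance and a flat prior, which is illustrative only.

```python
import numpy as np

def mixed_posterior(synthetic_datasets, draw_posterior_samples, n_per_dataset=1000):
    """Hypothetical sketch of the 'mix posteriors across synthetic datasets' recipe.

    `draw_posterior_samples(data, n)` is an assumed user-supplied routine (e.g. an
    MCMC or conjugate sampler for the downstream analysis). Per the abstract, the
    pooled samples approximate the downstream posterior when each synthetic dataset
    is large and the analyst's model is compatible with the data provider's model.
    """
    pooled = [draw_posterior_samples(data, n_per_dataset) for data in synthetic_datasets]
    return np.concatenate(pooled, axis=0)

# Toy usage with a conjugate normal-mean model (known variance 1, flat prior):
def normal_mean_posterior(data, n):
    data = np.asarray(data)
    post_mean, post_sd = data.mean(), 1.0 / np.sqrt(len(data))
    return np.random.normal(post_mean, post_sd, size=(n, 1))

# samples = mixed_posterior(list_of_synthetic_datasets, normal_mean_posterior)
```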
On Consistent Bayesian Inference from Synthetic Data
[ "Ossi Räisä", "Joonas Jälkö", "Antti Honkela" ]
Workshop/SyntheticData4ML
2305.16795
[ "https://github.com/dpbayes/napsu-mq-bayesian-downstream-experiments" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=7GbfIEvoS8
@inproceedings{ lin2023differentially, title={Differentially Private Synthetic Data via Foundation Model {API}s 1: Images}, author={Zinan Lin and Sivakanth Gopi and Janardhan Kulkarni and Harsha Nori and Sergey Yekhanin}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=7GbfIEvoS8} }
Generating differentially private (DP) synthetic data that closely resembles the original private data is a scalable way to mitigate privacy concerns in the current data-driven world. In contrast to current practices that train customized models for this task, we aim to generate DP Synthetic Data via APIs (DPSDA), where we treat foundation models as blackboxes and only utilize their inference APIs. Such API-based, training-free approaches are easier to deploy as exemplified by the recent surge in the number of API-based apps. These approaches can also leverage the power of large foundation models which are only accessible via their inference APIs. However, this comes with greater challenges due to strictly more restrictive model access and the need to protect privacy from the API provider. In this paper, we present a new framework called Private Evolution (PE) to solve this problem and show its initial promise on synthetic images. Surprisingly, PE can match or even outperform state-of-the-art (SOTA) methods without any model training. For example, on CIFAR10 (with ImageNet as the public data), we achieve FID≤7.9 with privacy cost ε = 0.67, significantly improving the previous SOTA from ε = 32. We further demonstrate the promise of applying PE on large foundation models such as Stable Diffusion to tackle challenging private datasets with a small number of high-resolution images. The code and data are released at https://github.com/microsoft/DPSDA.
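The Private Evolution loop sketched in the abstract (synthetic candidates are scored by a noisy nearest-neighbor vote from the private data, then refreshed through the foundation model's inference APIs) might look roughly like the following. `random_api`, `variation_api`, and `embed` are assumed interfaces, and the plain Gaussian noise stands in for the paper's calibrated DP mechanism, so this sketch carries no stated privacy guarantee.

```python
import numpy as np

def private_evolution(private_embs, random_api, variation_api, embed,
                      n_synthetic=100, n_iters=5, noise_sigma=1.0, rng=None):
    """Hypothetical sketch of the Private Evolution (PE) loop described in the abstract.

    `random_api(n)` and `variation_api(samples)` stand in for the foundation model's
    inference APIs, and `embed` maps samples to feature vectors; all three are assumed
    interfaces. The noise scale is illustrative and not formally accounted for.
    """
    rng = rng or np.random.default_rng()
    synthetic = random_api(n_synthetic)
    for _ in range(n_iters):
        syn_embs = np.stack([embed(s) for s in synthetic])            # (n_synthetic, d)
        # Each private point votes for its nearest synthetic sample.
        dists = np.linalg.norm(private_embs[:, None] - syn_embs[None], axis=-1)
        votes = np.bincount(dists.argmin(axis=1), minlength=n_synthetic).astype(float)
        votes += rng.normal(0.0, noise_sigma, size=n_synthetic)       # noisy histogram
        probs = np.clip(votes, 0, None)
        probs = probs / probs.sum() if probs.sum() > 0 else np.ones(n_synthetic) / n_synthetic
        # Resample the most-voted candidates and ask the API for variations of them.
        idx = rng.choice(n_synthetic, size=n_synthetic, p=probs)
        synthetic = variation_api([synthetic[i] for i in idx])
    return synthetic
```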
Differentially Private Synthetic Data via Foundation Model APIs 1: Images
[ "Zinan Lin", "Sivakanth Gopi", "Janardhan Kulkarni", "Harsha Nori", "Sergey Yekhanin" ]
Workshop/SyntheticData4ML
2305.15560
[ "https://github.com/microsoft/dpsda" ]
https://huggingface.co/papers/2305.15560
2
0
0
5
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=1MV49Ug6q9
@inproceedings{ kuo2023synthetic, title={Synthetic Health-related Longitudinal Data with Mixed-type Variables Generated using Diffusion Models}, author={Nicholas I-Hsien Kuo and Federico Garcia and Anders Sonnerborg and Michael Bohm and Rolf Kaiser and Maurizio Zazzi and Louisa Jorm and Sebastiano Barbieri}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=1MV49Ug6q9} }
This paper introduces a novel method for simulating Electronic Health Records (EHRs) using Diffusion Probabilistic Models (DPMs). We showcase the ability of DPMs to generate longitudinal EHRs with mixed-type variables – numeric, binary, and categorical. Our approach is benchmarked against existing Generative Adversarial Network (GAN)-based methods in two clinical scenarios: management of acute hypotension in the intensive care unit and antiretroviral therapy for people with human immunodeficiency virus. Our DPM-simulated datasets not only minimise patient disclosure risk but also outperform GAN-generated datasets in terms of realism. These datasets also prove effective for training downstream machine learning algorithms, including reinforcement learning and Cox proportional hazards models for survival analysis.
Synthetic Health-related Longitudinal Data with Mixed-type Variables Generated using Diffusion Models
[ "Nicholas I-Hsien Kuo", "Federico Garcia", "Anders Sonnerborg", "Michael Bohm", "Rolf Kaiser", "Maurizio Zazzi", "Louisa Jorm", "Sebastiano Barbieri" ]
Workshop/SyntheticData4ML
2303.12281
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=0jAd2k8JV4
@inproceedings{ yoon2023diffusionbased, title={Diffusion-based Semantic-Discrepant Outlier Generation for Out-of-Distribution Detection}, author={Suhee Yoon and Sanghyu Yoon and Hankook Lee and Sangjun Han and Ye Seul Sim and Kyungeun Lee and Hyeseung Cho and Woohyung Lim}, booktitle={NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI}, year={2023}, url={https://openreview.net/forum?id=0jAd2k8JV4} }
Out-of-distribution (OOD) detection, which determines whether a given sample is part of the training distribution, has recently shown promising results by training with synthetic OOD datasets. The important properties of effective synthetic OOD datasets are two-fold: (i) the OOD sample should be close to in-distribution (ID), but (ii) represent semantically shifted information. To achieve this, we introduce a novel framework that consists of Semantic-Discrepant (SD) outlier generation and an advanced OOD detection method. For SD outlier generation, we utilize a conditional diffusion model trained with pseudo-labels. Then, we propose a simple yet effective method, semantic-discrepant guidance, allowing the model to generate realistic outliers that contain an incoherent semantic shift while preserving nuisance information (e.g., background). Furthermore, we suggest SD outlier-aware OOD detector training and scoring methods. Our experiments demonstrate the effectiveness of our framework on the CIFAR-10 dataset. We achieve an AUROC of 98% when CIFAR-100 is given as OOD. The SD outlier dataset on CIFAR-10 is available at https://zenodo.org/record/8394847.
Diffusion-based Semantic-Discrepant Outlier Generation for Out-of-Distribution Detection
[ "Suhee Yoon", "Sanghyu Yoon", "Hankook Lee", "Sangjun Han", "Ye Seul Sim", "Kyungeun Lee", "Hyeseung Cho", "Woohyung Lim" ]
Workshop/SyntheticData4ML
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=zlsj9akpaa
@inproceedings{ zhou2023webarena, title={WebArena: A Realistic Web Environment for Building Autonomous Agents}, author={Shuyan Zhou and Frank Xu and Hao Zhu and Xuhui Zhou and Robert Lo and Abishek Sridhar and Xianyi Cheng and Tianyue Ou and Yonatan Bisk and Daniel Fried and Uri Alon and Graham Neubig}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=zlsj9akpaa} }
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, show that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and suggest that WebArena can be used to measure such progress.
WebArena: A Realistic Web Environment for Building Autonomous Agents
[ "Shuyan Zhou", "Frank F. Xu", "Hao Zhu", "Xuhui Zhou", "Robert Lo", "Abishek Sridhar", "Xianyi Cheng", "Tianyue Ou", "Yonatan Bisk", "Daniel Fried", "Uri Alon", "Graham Neubig" ]
Workshop/FMDM
2307.13854
[ "https://github.com/web-arena-x/webarena" ]
https://huggingface.co/papers/2307.13854
7
23
4
11
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=zDTqQVGgzH
@inproceedings{ kirsch2023towards, title={Towards General-Purpose In-Context Learning Agents}, author={Louis Kirsch and James Harrison and C. Freeman and Jascha Sohl-Dickstein and J{\"u}rgen Schmidhuber}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=zDTqQVGgzH} }
Reinforcement Learning (RL) algorithms are usually hand-crafted, driven by the research and engineering of humans. An alternative approach is to automate this research process via meta-learning. A particularly ambitious objective is to automatically discover new RL algorithms from scratch that use in-context learning to learn how to learn entirely from data while also generalizing to a wide range of environments. These RL algorithms are implemented entirely in neural networks, by conditioning on previous experience from the environment, without any explicit optimization-based routine at meta-test time. To achieve generalization, this requires a broad task distribution of diverse and challenging environments. Our Transformer-based Generally Learning Agents (GLAs) are an important first step in this direction. Our GLAs are meta-trained using supervised learning techniques on an offline dataset of experiences from RL environments that is augmented with random projections to generate task diversity. During meta-testing, our agents perform in-context meta-RL on entirely different robotic control problems such as Reacher, Cartpole, or HalfCheetah that were not in the meta-training distribution.
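A possible reading of the random-projection augmentation mentioned in the abstract is sketched below: each random seed defines a fixed linear map applied to every observation in a logged trajectory, so one environment yields many apparently distinct tasks for meta-training. The Gaussian projection, its scaling, and the trajectory shape are assumptions; the paper's exact augmentation is not reproduced here.

```python
import numpy as np

def random_projection_augment(observations, out_dim, seed):
    """Hypothetical sketch of random-projection task augmentation.

    A fixed random linear map (one per seed) is applied to all observations of a
    trajectory, producing a "new" pseudo-task from the same logged experience.
    """
    rng = np.random.default_rng(seed)
    obs = np.asarray(observations, dtype=np.float32)        # (T, obs_dim)
    proj = rng.normal(0.0, 1.0 / np.sqrt(obs.shape[-1]), size=(obs.shape[-1], out_dim))
    return obs @ proj                                         # (T, out_dim)

# Usage: create several pseudo-tasks from a single logged trajectory.
# tasks = [random_projection_augment(traj_obs, out_dim=32, seed=s) for s in range(8)]
```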
Towards General-Purpose In-Context Learning Agents
[ "Louis Kirsch", "James Harrison", "C. Daniel Freeman", "Jascha Sohl-Dickstein", "Jürgen Schmidhuber" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=ypr4srFxy1
@inproceedings{ kim2023agnostic, title={Agnostic Architecture for Heterogeneous Multi-Environment Reinforcement Learning}, author={Kuk Jin Kim and Changhee Joo}, booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop}, year={2023}, url={https://openreview.net/forum?id=ypr4srFxy1} }
In new environments, training a Reinforcement Learning (RL) agent from scratch can prove to be inefficient. The computational and temporal costs can be significantly reduced if the agent can learn across diverse environments and effectively perform transfer learning. However, achieving learning across multiple environments is challenging due to the varying state and action spaces inherent in different RL problems. Padding and naive parameter-sharing with environment-specific layers are possible solutions for handling different state-action spaces in multi-environment training. However, both can be less scalable when training for new environments. In this work, we present a flexible and environment-agnostic architecture designed for learning across multiple environments simultaneously without padding or environment-specific layers, while enabling transfer learning for new environments. We also propose training algorithms for this architecture to enable both online and offline RL. Our experiments demonstrate that multi-environment training with one agent is possible in heterogeneous environments and that parameter-sharing with environment-specific layers is not effective in transfer learning.
Agnostic Architecture for Heterogeneous Multi-Environment Reinforcement Learning
[ "Kukjin Kim", "Changhee Joo" ]
Workshop/FMDM
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster