bibtex_url (null) | proceedings (string, length 42 to 42) | bibtext (string, length 197 to 848) | abstract (string, length 303 to 3.45k) | title (string, length 10 to 159) | authors (sequence, length 1 to 34, ⌀) | id (string, 44 classes) | arxiv_id (string, length 0 to 10) | GitHub (sequence, length 1 to 1) | paper_page (string, 899 classes) | n_linked_authors (int64, -1 to 13) | upvotes (int64, -1 to 109) | num_comments (int64, -1 to 13) | n_authors (int64, -1 to 92) | Models (sequence, length 0 to 100) | Datasets (sequence, length 0 to 19) | Spaces (sequence, length 0 to 100) | old_Models (sequence, length 0 to 100) | old_Datasets (sequence, length 0 to 19) | old_Spaces (sequence, length 0 to 100) | paper_page_exists_pre_conf (int64, 0 to 1) | type (string, 2 classes)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=2H1oU6VD4H | @inproceedings{
sacha2023moleculeedit,
title={Molecule-edit templates for efficient and accurate retrosynthesis prediction},
author={Miko{\l}aj Sacha and Micha{\l} Sadowski and Piotr Kozakowski and Ruard van Workum and Stanislaw Jastrzebski},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=2H1oU6VD4H}
} | Retrosynthesis involves determining a sequence of reactions to synthesize complex molecules from simpler precursors. As this poses a challenge in organic chemistry, machine learning has offered solutions, particularly for predicting possible reaction substrates for a given target molecule. These solutions mainly fall into template-based and template-free categories. The former is efficient but relies on a vast set of predefined reaction patterns, while the latter, though more flexible, can be computationally intensive and less interpretable. To address these issues, we introduce METRO (Molecule-Edit Templates for RetrOsynthesis), a machine-learning model that predicts reactions using minimal templates - simplified reaction patterns capturing only essential molecular changes - reducing computational overhead and achieving state-of-the-art results on standard benchmarks. | Molecule-edit templates for efficient and accurate retrosynthesis prediction | [
"Mikołaj Sacha",
"Michał Sadowski",
"Piotr Kozakowski",
"Ruard van Workum",
"Stanislaw Kamil Jastrzebski"
] | Workshop/AI4Science | 2310.07313 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=28w0bjBQiw | @inproceedings{
zhang2023machine,
title={Machine Learning for Blockchain},
author={Luyao Zhang},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=28w0bjBQiw}
} | In this research, we explore the nexus between artificial intelligence (AI) and blockchain, two paramount forces steering the contemporary digital era. AI, replicating human cognitive functions, encompasses capabilities from visual discernment to complex decision-making, with significant applicability in sectors such as healthcare and finance. Its influence during the web2 epoch not only enhanced the prowess of user-oriented platforms but also prompted debates on centralization. Conversely, blockchain provides a foundational structure advocating for decentralized and transparent transactional archiving. Yet, the foundational principle of "code is law" in blockchain underscores an imperative need for the fluid adaptability that AI brings. Our analysis methodically navigates the corpus of literature on the fusion of blockchain with machine learning, emphasizing AI's potential to elevate blockchain's utility. Additionally, we chart prospective research trajectories, weaving together blockchain and machine learning in niche domains like causal machine learning, reinforcement mechanism design, and cooperative AI. These intersections aim to cultivate interdisciplinary pursuits in AI for Science, catering to a broad spectrum of stakeholders. | Machine Learning for Blockchain: Literature Review and Open Research Questions | [
"Luyao Zhang"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=0zOMy0JE4B | @inproceedings{
wang2023sampleefficient,
title={Sample-efficient Antibody Design through Protein Language Model for Risk-aware Batch Bayesian Optimization},
author={Yanzheng Wang and TIANYU SHI and Jie Fu},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=0zOMy0JE4B}
} | Antibody design is a time-consuming and expensive process that often requires extensive experimentation to identify the best candidates. To address this challenge, we propose an efficient and risk-aware antibody design framework that leverages protein language models (PLMs) and batch Bayesian optimization (BO). Our framework utilizes the generative power of protein language models to predict candidate sequences with higher naturalness and a Bayesian optimization algorithm to iteratively explore the sequence space and identify the most promising candidates. To further improve the efficiency of the search process, we introduce a risk-aware approach that balances exploration and exploitation by incorporating uncertainty estimates into the acquisition function of the Bayesian optimization algorithm. We demonstrate the effectiveness of our approach through experiments on several benchmark datasets, showing that our framework outperforms state-of-the-art methods in terms of both efficiency and quality of the designed sequences. Our framework has the potential to accelerate the discovery of new antibodies and reduce the cost and time required for antibody design. | Sample-efficient Antibody Design through Protein Language Model for Risk-aware Batch Bayesian Optimization | [
"Yanzheng Wang",
"Boyue wang",
"TIANYU SHI",
"Jie Fu",
"Yi Zhou",
"zhizhuo zhang"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=0ipxdwZmFR | @inproceedings{
hvatov2023easy,
title={Easy to learn hard to master - how to solve an arbitrary equation with {PINN}},
author={Alexander Hvatov and Damir Aminev and Nikita Demyanchuk},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=0ipxdwZmFR}
} | Physics-informed neural networks (PINNs) offer predictive capabilities for processes defined by known equations and limited data. While custom architectures and loss computations are often designed for each equation, the untapped potential of classical architectures remains unclear. To make a comprehensive study, it is required to compare performance of a given neural network architecture and loss formulation for different types of equations. This paper introduces an open-source framework for unified handling of ordinary differential equations (ODEs), partial differential equations (PDEs), and their systems. We explore PINN applicability and convergence comprehensively, demonstrating its performance across ODEs, PDEs, ODE systems, and PDE systems. | Easy to learn hard to master - how to solve an arbitrary equation with PINN | [
"Alexander Hvatov",
"Damir Aminev",
"Nikita Demyanchuk"
] | Workshop/AI4Science | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=0bfWpiwPLZ | @inproceedings{
kong2023molecule,
title={Molecule Design by Latent Prompt Transformer},
author={Deqian Kong and Yuhao Huang and Jianwen Xie and Ying Nian Wu},
booktitle={NeurIPS 2023 AI for Science Workshop},
year={2023},
url={https://openreview.net/forum?id=0bfWpiwPLZ}
} | This paper proposes a latent prompt Transformer model for solving challenging optimization problems such as molecule design, where the goal is to find molecules with optimal values of a target chemical or biological property that can be computed by an existing software. Our proposed model consists of three components. (1) A latent vector whose prior distribution is modeled by a Unet transformation of a Gaussian white noise vector. (2) A molecule generation model that generates the string-based representation of molecule conditional on the latent vector in (1). We adopt the causal Transformer model that takes the latent vector in (1) as prompt. (3) A property prediction model that predicts the value of the target property of a molecule based on a non-linear regression on the latent vector in (1). We call the proposed model the latent prompt Transformer model. After initial training of the model on existing molecules and their property values, we then gradually shift the model distribution towards the region that supports desired values of the target property for the purpose of molecule design. Our experiments show that our proposed model achieves state of the art performances on several benchmark molecule design tasks. | Molecule Design by Latent Prompt Transformer | [
"Deqian Kong",
"Yuhao Huang",
"Jianwen Xie",
"Ying Nian Wu"
] | Workshop/AI4Science | 2310.03253 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=z4BgPtgEsS | @inproceedings{
kim2023lesion,
title={Lesion in-and-out painting for medical image augmentation},
author={Yisak Kim and Kyungmin Jeon and Soyeon Kim and Chang Min Park},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=z4BgPtgEsS}
} | Deep learning (DL) in the medical imaging field suffers from a lack of usable data compared to natural images because of the private and sensitive nature of medical data. The data are also highly imbalanced, because for almost any disease, medical imaging cohorts contain more patients without the disease than with it. To address these problems, synthetic data generation is considered a promising solution. In this study, we present Lesion In-aNd-Out Painting (LINOP) to generate synthetic medical images for data augmentation. A generative model based on the Mask Aware Transformer (MAT) architecture was used to synthesize lesions onto normal images (inpainting) and to synthesize regions outside the lesion area (outpainting). We train and validate a lesion inpainting pipeline on a mammography dataset and a lesion outpainting pipeline on a chest X-ray dataset. For mammography, the proposed augmentation showed up to 30.3\% improvement on mass localization in terms of mAP@50, and for CXR, up to 10.3\% improvement on disease classification in terms of AUROC. | Lesion in-and-out painting for medical image augmentation | [
"Yisak Kim",
"Kyungmin Jeon",
"Soyeon Kim",
"Chang Min Park"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=z1AVG5LDQ7 | @inproceedings{
kim2023adversarial,
title={Adversarial Fine-tuning using Generated Respiratory Sound to Address Class Imbalance},
author={June-Woo Kim and Chihyeon Yoon and Miika Toikkanen and Sangmin Bae and Ho-Young Jung},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=z1AVG5LDQ7}
} | Deep generative models have emerged as a promising approach in the medical image domain to address data scarcity. However, their use for sequential data like respiratory sounds is less explored. In this work, we propose a straightforward approach to augment imbalanced respiratory sound data using an audio diffusion model as a conditional neural vocoder. We also demonstrate a simple yet effective adversarial fine-tuning method to align features between the synthetic and real respiratory sound samples to improve respiratory sound classification performance. Our experimental results on the ICBHI dataset demonstrate that the proposed adversarial fine-tuning is effective, while only using the conventional augmentation method shows performance degradation. Moreover, our method outperforms the baseline by 2.24% on the ICBHI Score and improves the accuracy of the minority classes up to 26.58%. For the supplementary material, we provide the code at https://github.com/kaen2891/adversarial_fine-tuning_using_generated_respiratory_sound. | Adversarial Fine-tuning using Generated Respiratory Sound to Address Class Imbalance | [
"June-Woo Kim",
"Chihyeon Yoon",
"Miika Toikkanen",
"Sangmin Bae",
"Ho-Young Jung"
] | Workshop/DGM4H | 2311.06480 | [
"https://github.com/kaen2891/adversarial_fine-tuning_using_generated_respiratory_sound"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wsFCbuDwxY | @inproceedings{
choi2023clinical,
title={Clinical Time Series Imputation using Conditional Information Bottleneck},
author={MinGyu Choi and Changhee Lee},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=wsFCbuDwxY}
} | Clinical time series imputation presents a significant challenge because it requires capturing the underlying temporal dynamics from partially observed time series data input. Among the recent successes of imputation methods based on generative models, the information bottleneck (IB) framework offers a well-suited theoretical foundation for multiple imputations, allowing us to account for the uncertainty associated with the imputed values. However, direct application of IB framework to time series data without considering temporal context can lead to a substantial loss of temporal dependencies. To address such a challenge, we propose a novel conditional information bottleneck (CIB) approach for time series imputation, which aims to mitigate the potentially negative consequences of the regularization constraint by reducing the redundant information conditioned on the temporal context. Our experiments, conducted on real-world healthcare dataset and image sequences, demonstrate that our method significantly improves imputation performance, and also enhances prediction performance based on the imputed values. | Clinical Time Series Imputation using Conditional Information Bottleneck | [
"MinGyu Choi",
"Changhee Lee"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=uowg2Iz5eJ | @inproceedings{
inecik2023fcvi,
title={fc{VI}: Flow Cytometry Variational Inference},
author={Kemal Inecik and Adil Meric and Fabian Theis},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=uowg2Iz5eJ}
} | Single-cell flow cytometry stands as a pivotal instrument in both biomedical research and clinical practice, not only offering invaluable insights into cellular phenotypes and functions but also significantly advancing our understanding of various patient states. However, its potential is often constrained by factors such as technical limitations, noise interference, and batch effects, which complicate comparison between flow cytometry experiments and compromise its overall impact. Recent advances in deep representation learning have demonstrated promise in overcoming similar challenges in related fields, particularly in the context of single-cell transcriptomic sequencing data analysis. Here, we propose flowVI, a multimodal deep generative model, tailored for integrative analysis of multiple massively parallel cytometry datasets from diverse sources. By effectively modeling noise variances, technical biases, and batch-specific heterogeneity using probabilistic data representation, we demonstrate that flowVI not only excels in the imputation of missing protein markers but also seamlessly integrates data from distinct cytometry panels. FlowVI thus emerges as a potent tool for constructing comprehensive flow cytometry atlases and enhancing the precision of flow cytometry data analyses. The source code for replicating these findings is hosted on GitHub, theislab/flowVI. | flowVI: Flow Cytometry Variational Inference | [
"Kemal Inecik",
"Adil Meric",
"Fabian J Theis"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=um0kSuayWn | @inproceedings{
choi2023a,
title={A {GAN} Model with Controllable Lesion Generation for Synthetic Capsule Endoscopy Datasets},
author={Hyundong Choi and Heechul Jung},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=um0kSuayWn}
} | In this paper, we present a novel approach to creating a synthetic capsule endoscopy dataset. Deep learning research in the medical field has been active, and developing a deep learning model requires securing a large amount of high-quality data. However, medical data raise privacy concerns and data bias issues, so the data available for training can be noisy and incomplete, and it is difficult to obtain data of sufficient quality and quantity. To overcome these limitations, synthetic data research has recently come into the spotlight: training deep learning models on synthetic data allows a more uniform data format and labeling. In this study, we address the data-scarcity problem by generating sufficient endoscopy data in which the desired lesions are naturally synthesized at the desired locations. We apply the crop-and-paste method and CycleGAN to a capsule endoscopy dataset for the first time. After placing the desired lesion at the desired coordinates with crop-and-paste, a widely used data augmentation technique, we achieve natural synthesis using the CycleGAN model. We propose an image-to-image model that controls the location and type of lesion in the generated synthetic data. Through the high-quality synthetic data generated in this way, we aim to realize the potential of deep learning in the medical field. | A GAN Model with Controllable Lesion Generation for Synthetic Capsule Endoscopy Datasets | [
"Hyundong Choi",
"Heechul Jung"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=tiqs7trqcC | @inproceedings{
trottet2023generative,
title={Generative Time Series Models with Interpretable Latent Processes for Complex Disease Trajectories},
author={C{\'e}cile Trottet and Manuel Sch{\"u}rch and Amina Mollaysa and Ahmed Allam and Michael Krauthammer},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=tiqs7trqcC}
} | We propose a deep generative time series approach using latent temporal processes for modeling and holistically analyzing complex disease trajectories and demonstrate its effectiveness in modeling systemic sclerosis. We aim to find meaningful temporal latent representations of an underlying generative process that explain the observed disease trajectories in an interpretable and comprehensive way. To enhance the interpretability of these latent temporal processes, we develop a semi-supervised approach for disentangling the latent space using established medical concepts. We show that the learned temporal latent processes can be utilized for further data analysis, including finding similar patients and clustering the disease into new sub-types. Moreover, our method enables personalized online monitoring and prediction of multivariate time series, including uncertainty quantification. | Generative Time Series Models with Interpretable Latent Processes for Complex Disease Trajectories | [
"Cécile Trottet",
"Manuel Schürch",
"Amina Mollaysa",
"Ahmed Allam",
"Michael Krauthammer"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=s5nWttQIib | @inproceedings{
lee2023semantic,
title={Semantic Map Guided Synthesis of Wireless Capsule Endoscopy Images using Diffusion Models},
author={Haejin Lee and Jeongwoo Ju and Jonghyuck Lee and Yeoun Joo Lee and Heechul Jung},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=s5nWttQIib}
} | Wireless capsule endoscopy (WCE) is a non-invasive method for visualizing the gastrointestinal (GI) tract, crucial for diagnosing GI tract diseases. However, interpreting WCE results can be time-consuming and tiring. Existing studies have employed deep neural networks (DNNs) for automatic GI tract lesion detection, but acquiring sufficient training examples, particularly due to privacy concerns, remains a challenge. Public WCE databases lack diversity and quantity. To address this, we propose a novel approach leveraging generative models, specifically the diffusion model (DM), for generating diverse WCE images. Our model incorporates a semantic map produced by a visualization scale (VS) engine, enhancing the controllability and diversity of generated images. We evaluate our approach using visual inspection and visual Turing tests, demonstrating its effectiveness in generating realistic and diverse WCE images. | Semantic Map Guided Synthesis of Wireless Capsule Endoscopy Images using Diffusion Models | [
"Haejin Lee",
"Jeongwoo Ju",
"Jonghyuck Lee",
"Yeoun Joo Lee",
"Heechul Jung"
] | Workshop/DGM4H | 2311.05889 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=r7qL5vM3Aa | @inproceedings{
almodovar2023federated,
title={Federated learning for causal inference using deep generative disentangled models},
author={Alejandro Almod{\'o}var and Juan Parras and Santiago Zazo},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=r7qL5vM3Aa}
} | In the context of decentralized and privacy-constrained healthcare data settings, we introduce an innovative approach to estimate individual treatment effects (ITE) via federated learning. Emphasizing the critical importance of data privacy in healthcare, especially when drawing on data from various global hospitals, we address challenges arising from data scarcity and specific treatment assignment criteria influenced by the availability of the medication of interest. Our methodology uses federated learning applied to neural network-based generative causal inference models to bridge the gap between decentralized and centralized ITE estimation on a benchmark dataset. | Federated learning for causal inference using deep generative disentangled models | [
"Alejandro Almodóvar",
"Juan Parras",
"Santiago Zazo"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=qV1sBPrfRL | @inproceedings{
hill2023chiron,
title={{CHIR}on: A Generative Foundation Model for Structured Sequential Medical Data},
author={Brian Hill and Melikasadat Emami and Vijay Nori and Aldo Cordova-Palomera and Robert Tillman and Eran Halperin},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=qV1sBPrfRL}
} | Recent advances in large language models (LLMs) have shown that foundation models (FMs) can learn highly complex representations of sequences that can be used for downstream generative and discriminative tasks such as text generation and classification. While most FMs focus on text, recent work has shown FMs can be learnt for sequential medical data, e.g. ICD-10 diagnosis codes associated with specific patient visits. These FMs demonstrate improved performance on downstream discriminative disease classification tasks, but cannot be used for generative tasks such as synthesizing artificial patient visits for data augmentation or privacy-preserving data sharing, since they utilize BERT-based pre-training. In this paper, we introduce CHIRon, the first generative FM for sequential medical data. CHIRon utilizes causal masking during pre-training, enabling generative applications, and incorporates a number of architectural improvements and support for additional medical data types (diagnoses, procedures, medications, lab results, place of service, demographics). We show empirically that CHIRon can be used to generate realistic sequential medical data and also outperforms state-of-the-art FMs for sequential medical data on disease classification tasks. | CHIRon: A Generative Foundation Model for Structured Sequential Medical Data | [
"Brian L. Hill",
"Melikasadat Emami",
"Vijay S Nori",
"Aldo Cordova-Palomera",
"Robert E. Tillman",
"Eran Halperin"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=q53lpM5KEg | @inproceedings{
shi2023mapping,
title={Mapping and Diagnosing Augmented Whole Slide Image Datasets with Training Dynamics},
author={Wenqi Shi and Benoit Marteau and May Dongmei Wang},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=q53lpM5KEg}
} | Pediatric heart transplantation represents the standard of care for children confronting end-stage heart failure. One of the most common postoperative complications, heart transplant rejection, has been monitored via surveillance endomyocardial biopsies and manual assessment by cardiac pathology experts. However, manual annotations with interobserver and intraobserver variability among cardiovascular pathology experts lead to significant disagreements about the severity of rejection. Artificial intelligence (AI)-enabled computational pathology usually requires large-scale manual annotations of gigapixel whole-slide images (WSIs) for effective model training. To address these challenges, we develop an AI-enabled rare disease detection framework for automating heart transplant rejection detection from WSIs of pediatric patients. Specifically, we conduct dataset cartography with data maps and training dynamics to map and diagnose the augmented samples, exploring the model behavior on individual instances during model training. Extensive experiments on internal and external patient cohorts have demonstrated the feasibility of both tile-level and biopsy-level detection with augmented samples. The proposed data-efficient learning framework may support seamless scalability to real-world rare disease detection without the burden of iterative expert annotations. | Mapping and Diagnosing Augmented Whole Slide Image Datasets with Training Dynamics | [
"Wenqi Shi",
"Benoit Louis Marteau",
"May Dongmei Wang"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=mqnR8rGWkn | @inproceedings{
boyle2023automated,
title={Automated clinical coding using off-the-shelf large language models},
author={Joseph Boyle and Antanas Kascenas and Pat Lok and Maria Liakata and Alison O'Neil},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=mqnR8rGWkn}
} | The task of assigning diagnostic ICD codes to patient hospital admissions is typically performed by expert human coders. Efforts towards automated ICD coding are dominated by supervised deep learning models. However, difficulties in learning to predict the large number of rare codes remain a barrier to adoption in clinical practice. In this work, we leverage off-the-shelf pre-trained generative large language models (LLMs) to develop a practical solution that is suitable for zero-shot and few-shot code assignment, with no need for further task-specific training. Unsupervised pre-training alone does not guarantee precise knowledge of the ICD ontology and specialist clinical coding task, therefore we frame the task as information extraction, providing a description of each coded concept and asking the model to retrieve related mentions. For efficiency, rather than iterating over all codes, we leverage the hierarchical nature of the ICD ontology to sparsely search for relevant codes. We validate our method using Llama-2, GPT-3.5 and GPT-4 on the CodiEsp dataset of ICD-coded clinical case documents. Our tree-search method achieves state-of-the-art performance on rarer classes, achieving the best macro-F1 of 0.225, whilst achieving slightly lower micro-F1 of 0.157, compared to 0.216 and 0.219 respectively from PLM-ICD. To the best of our knowledge, this is the first method for automated clinical coding requiring no task-specific learning. | Automated clinical coding using off-the-shelf large language models | [
"Joseph Spartacus Boyle",
"Antanas Kascenas",
"Pat Lok",
"Maria Liakata",
"Alison Q O'Neil"
] | Workshop/DGM4H | 2310.06552 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=mDwURmlapW | @inproceedings{
aristimunha2023synthetic,
title={Synthetic Sleep {EEG} Signal Generation using Latent Diffusion Models},
author={Bruno Aristimunha and Raphael Yokoingawa de Camargo and Sylvain Chevallier and Oeslle Lucena and Adam Thomas and M. Jorge Cardoso and Walter Lopez Pinaya and Jessica Dafflon},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=mDwURmlapW}
} | Electroencephalography (EEG) is a non-invasive method that allows for recording rich temporal information and is a valuable tool for diagnosing various neurological and psychiatric conditions. One of the main limitations of EEG is the low signal-to-noise ratio and the lack of data availability to train large data-hungry neural networks. Sharing large healthcare datasets is crucial to advancing medical imaging research, but privacy concerns often impede such efforts. Deep generative models have gained attention as a way to circumvent data-sharing limitations and as a possible way to generate data to improve the performance of these models. This work investigates latent diffusion models with spectral loss as deep generative modeling to generate 30-second windows of synthetic EEG signals of sleep stages. The spectral loss is essential to guarantee that the generated signal contains structured oscillations on specific frequency bands that are typical of EEG signals. We trained our models using two large sleep datasets ($\textbf{Sleep EDFx}$ and $\textbf{SHHS}$) and used the Multi-Scale Structural Similarity Metric, Frechet inception distance, and a spectrogram analysis to evaluate the quality of synthetic signals. We demonstrate that the latent diffusion model can generate realistic signals with the correct neural oscillation and could, therefore, be used to overcome the scarcity of EEG data. | Synthetic Sleep EEG Signal Generation using Latent Diffusion Models | [
"Bruno Aristimunha",
"Raphael Yokoingawa de Camargo",
"Sylvain Chevallier",
"Oeslle Lucena",
"Adam G Thomas",
"M. Jorge Cardoso",
"Walter Hugo Lopez Pinaya",
"Jessica Dafflon"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=khB5CQidql | @inproceedings{
kumar2023mmnormvae,
title={mmNorm{VAE}: Normative Modeling on Multimodal Neuroimaging Data using Variational Autoencoders},
author={Sayantan Kumar and Philip Payne and Aristeidis Sotiras},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=khB5CQidql}
} | Normative modelling is a popular method for studying brain disorders like Alzheimer's Disease (AD) where the normal brain patterns of cognitively normal subjects are modelled and can be used at subject-level to detect deviations relating to disease pathology. So far, deep learning-based normative frameworks have largely been applied on a single imaging modality. We aim to design a multi-modal normative modelling framework based on multimodal variational autoencoders (mmNormVAE) where disease abnormality is aggregated across multiple neuroimaging modalities (T1-weighted and T2-weighted MRI) and subsequently used to estimate subject-level neuroanatomical deviations due to AD. | mmNormVAE: Normative Modeling on Multimodal Neuroimaging Data using Variational Autoencoders | [
"Sayantan Kumar",
"Philip Payne",
"Aristeidis Sotiras"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=kDbD2GkLfy | @inproceedings{
mesinovic2023dysurv,
title={DySurv: Dynamic Deep Learning Model for Survival Prediction in the {ICU}},
author={Munib Mesinovic and Peter Watkinson and Tingting Zhu},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=kDbD2GkLfy}
} | Survival analysis approximates the underlying distribution of time-to-event data, which, in critical care settings such as the ICU, can be a powerful tool for dynamic mortality risk prediction. Extending beyond the classical Cox model, deep learning techniques have been leveraged in recent years, relaxing many of the constraints of their statistical counterparts. In this work, we propose a novel conditional variational autoencoder-based method called DySurv, which uses a combination of static and time-series measurements from patient electronic health records to estimate the risk of death dynamically in the ICU. DySurv has been tested on standard benchmarks, where it outperforms most existing methods, including other deep learning methods, and we evaluate it on a real-world patient database from MIMIC-IV. The predictive capacity of DySurv is consistent, and the survival estimates remain disentangled across different datasets, supporting the idea that dynamic deep learning models based on conditional variational inference in multi-task settings can be robust models for survival analysis. | DySurv: Dynamic Deep Learning Model for Survival Prediction in the ICU | [
"Munib Mesinovic",
"Peter Watkinson",
"Tingting Zhu"
] | Workshop/DGM4H | 2310.18681 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=fe0I3PIqFT | @inproceedings{
ijishakin2023semisupervised,
title={Semi-Supervised Diffusion Model for Brain Age Prediction},
author={Ayodeji Ijishakin and Sophie Martin and Florence Townend and James Cole},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=fe0I3PIqFT}
} | Brain age prediction models have succeeded in predicting clinical outcomes in neurodegenerative diseases, but can struggle with tasks involving faster-progressing diseases and low-quality data. To enhance their performance, we employ a semi-supervised diffusion model, obtaining a 0.83 (p<0.01) correlation between chronological and predicted age on low-quality T1w MR images. This was competitive with state-of-the-art non-generative methods. Furthermore, the predictions produced by our model were significantly associated with survival length (r=0.24, p<0.05) in Amyotrophic Lateral Sclerosis. Thus, our approach demonstrates the value of diffusion-based architectures for the task of brain age prediction. | Semi-Supervised Diffusion Model for Brain Age Prediction | [
"Ayodeji Ijishakin",
"Sophie A. Martin",
"Florence J Townend",
"James H. Cole",
"ANDREA MALASPINA"
] | Workshop/DGM4H | 2402.09137 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=c4p3ng0SCt | @inproceedings{
geenjaar2023uncovering,
title={Uncovering the latent dynamics of whole-brain f{MRI} tasks with a sequential variational autoencoder},
author={Eloy Geenjaar and Donghyun Kim and Riyasat Ohib and Marlena Duda and Amrit Kashyap and Sergey Plis and Vince Calhoun},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=c4p3ng0SCt}
} | The neural dynamics underlying brain activity are critical to understanding cognitive processes and mental disorders. However, current voxel-based whole-brain dimensionality reduction techniques fail to capture these dynamics, producing latent timeseries that inadequately relate to behavioral tasks. To address this issue, we introduce a novel approach to learning low-dimensional approximations of neural dynamics using a sequential variational autoencoder (SVAE) that learns the latent dynamical system. Importantly, our method finds smooth dynamics that can predict cognitive processes with accuracy higher than classical methods, with improved spatial localization to task-relevant brain regions, and we find fixed points for the dynamics that are stable across random initialization of the model. | Uncovering the latent dynamics of whole-brain fMRI tasks with a sequential variational autoencoder | [
"Eloy Geenjaar",
"Donghyun Kim",
"Riyasat Ohib",
"Marlena Duda",
"Amrit Kashyap",
"Sergey M. Plis",
"Vince Calhoun"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=XUgIZQvxg4 | @inproceedings{
ferrante2023generative,
title={Generative Multimodal Decoding: Reconstructing Images and Text from Human f{MRI}},
author={Matteo Ferrante and Tommaso Boccato and Furkan Ozcelik and Rufin VanRullen and Nicola Toschi},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=XUgIZQvxg4}
} | The human brain adeptly processes immense visual information using complex neural mechanisms. Recent advances in functional MRI (fMRI) enable decoding this visual information from recorded brain activity patterns. In this work, we present an innovative approach for reconstructing meaningful images and captions directly from fMRI data, with a focus on brain captioning due to its enhanced flexibility over image decoding.
We utilize the Natural Scenes fMRI dataset containing brain recordings from subjects viewing images. Our method leverages state-of-the-art image captioning and diffusion models for multimodal decoding. We train regression models between fMRI data and textual/visual features and incorporate depth estimation to guide image reconstruction.
Our key innovation is a multimodal framework aligning neural and deep learning representations to generate both semantic captions and photorealistic images from brain activity. We demonstrate quantitative improvements in captioning over prior art and in image spatial relationships through our reconstruction pipeline.
In conclusion, this work significantly advances brain decoding capabilities through an integrated vision-language approach. Our flexible decoding platform combining high-level semantic text and low-level visual depth information provides new insights into human visual cognition. The proposed methods could enable future applications in brain-computer interfaces, neuroscience, and AI. | Generative Multimodal Decoding: Reconstructing Images and Text from Human fMRI | [
"Matteo Ferrante",
"Tommaso Boccato",
"Furkan Ozcelik",
"Rufin VanRullen",
"Nicola Toschi"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=VsJyA3dhLr | @inproceedings{
jang2023texture,
title={Texture synthesis for realistic-looking virtual colonoscopy using mask-aware transformer},
author={Seunghyun Jang and Yisak Kim and Dongheon Lee and Chang Min Park},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=VsJyA3dhLr}
} | In virtual colonoscopy, computer vision techniques focus on depth estimation, photometric tracking, and simultaneous localization and mapping (SLAM). To narrow the domain gap between virtual and real colonoscopy data, it is necessary to utilize real-world data or employ realistic-looking virtual datasets. We introduce a texture synthesis and outpainting strategy using the Mask-aware transformer. The method generates inner-surface textures suitable for virtual colonoscopy that are realistic-looking, controllable, and varied. We generated an RGB-D dataset using the generated virtual colonoscopy, resulting in 9 video recordings. Each sequence was generated from a distinct colon model, accumulating a total of 14,120 frames paired with ground-truth depth. Evaluating generalizability across various datasets, the depth estimation model trained on our dataset exhibited superior transfer performance. | Texture synthesis for realistic-looking virtual colonoscopy using mask-aware transformer | [
"Seunghyun Jang",
"Yisak Kim",
"Dongheon Lee",
"Chang Min Park"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=Uk6WMt9l9w | @inproceedings{
alam2023ddxt,
title={{DD}xT: Deep Generative Transformer Models for Differential Diagnosis},
author={Mohammad Mahmudul Alam and Edward Raff and Tim Oates and Cynthia Matuszek},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=Uk6WMt9l9w}
} | Differential Diagnosis (DDx) is the process of identifying the most likely medical condition among the possible pathologies through the process of elimination based on evidence. An automated process that narrows a large set of pathologies down to the most likely pathologies will be of great importance. The primary prior works have relied on the Reinforcement Learning (RL) paradigm under the intuition that it aligns better with how physicians perform DDx. In this paper, we show that a generative approach trained with simpler supervised and self-supervised learning signals can achieve superior results on the current benchmark. The proposed Transformer-based generative network, named DDxT, autoregressively produces a set of possible pathologies, i.e., DDx, and predicts the actual pathology using a neural network. Experiments are performed using the DDXPlus dataset. In the case of DDx, the proposed network has achieved a mean accuracy of $99.82\%$ and a mean F1 score of $0.9472$. Additionally, mean accuracy reaches $99.98\%$ with a mean F1 score of $0.9949$ while predicting ground truth pathology. The proposed DDxT outperformed the previous RL-based approaches by a big margin. Overall, the automated Transformer-based DDx generative model has the potential to become a useful tool for a physician in times of urgency. | DDxT: Deep Generative Transformer Models for Differential Diagnosis | [
"Mohammad Mahmudul Alam",
"Edward Raff",
"Tim Oates",
"Cynthia Matuszek"
] | Workshop/DGM4H | 2312.01242 | [
"https://github.com/MahmudulAlam/Differential-Diagnosis-Using-Transformers"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Ujqjn2q9Gi | @inproceedings{
thompson2023large,
title={Large Language Models with Retrieval-Augmented Generation for Zero-Shot Disease Phenotyping},
author={Will Thompson and David Vidmar and Jessica De Freitas and Gabriel Altay and Kabir Manghnani and Andrew Nelsen and Kellie Morland and John Pfeifer and Brandon Fornwalt and RuiJun Chen and Martin Stumpe and Riccardo Miotto},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=Ujqjn2q9Gi}
} | Identifying disease phenotypes from electronic health records (EHRs) is critical for numerous secondary uses. Manually encoding physician knowledge into rules is particularly challenging for rare diseases due to inadequate EHR coding, necessitating review of clinical notes. Large language models (LLMs) offer promise in text understanding but may not efficiently handle real-world clinical documentation. We propose a zero-shot LLM-based method enriched by retrieval-augmented generation and MapReduce, which pre-identifies disease-related text snippets to be used in parallel as queries for the LLM to establish diagnosis. We show that this method as applied to pulmonary hypertension (PH), a rare disease characterized by elevated arterial pressures in the lungs, significantly outperforms physician logic rules ($F_1$ score of 0.62 vs. 0.75). This method has the potential to enhance rare disease cohort identification, expanding the scope of robust clinical research and care gap identification. | Large Language Models with Retrieval-Augmented Generation for Zero-Shot Disease Phenotyping | [
"Will Thompson",
"David Michael Vidmar",
"Jessica Karina De Freitas",
"Gabriel Altay",
"Kabir Manghnani",
"Andrew Nelsen",
"Kellie Morland",
"John Pfeifer",
"Brandon Kenneth Fornwalt",
"RuiJun Chen",
"Martin Stumpe",
"Riccardo Miotto"
] | Workshop/DGM4H | 2312.06457 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=UVF1AMBj9u | @inproceedings{
cai2023jolt,
title={Jo{LT}: Jointly Learned Representations of Language and Time-Series},
author={Yifu Cai and Mononito Goswami and Arjun Choudhry and Arvind Srinivasan and Artur Dubrawski},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=UVF1AMBj9u}
} | Time-series and text data are prevalent in healthcare and frequently exist in tandem, e.g., in electrocardiogram (ECG) interpretation reports. Yet, these modalities are typically modeled independently. Even studies that jointly model time-series and text do so by converting time-series to images or graphs. We hypothesize that explicitly modeling time-series jointly with text can improve tasks such as summarization and question answering for time-series data, which have received little attention so far. To address this gap, we introduce JoLT to jointly learn desired representations from pre-trained time-series and text models. JoLT utilizes a Querying Transformer (Q-Former) to align the time-series and text representations. Our experiments on a large real-world electrocardiography dataset for medical time-series summarization show that JoLT outperforms state-of-the-art image captioning and medical question-answering approaches, and that the decoder architecture, size, and pre-training data can affect performance on these tasks. | JoLT: Jointly Learned Representations of Language and Time-Series | [
"Yifu Cai",
"Mononito Goswami",
"Arjun Choudhry",
"Arvind Srinivasan",
"Artur Dubrawski"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=TynSiNAVc8 | @inproceedings{
kim2023a,
title={A 3D Conditional Diffusion Model for Image Quality Transfer - An Application to Low-Field {MRI}},
author={Seunghoi Kim and Daniel Alexander and Ahmed Karam Eldaly and Matteo Figini},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=TynSiNAVc8}
} | Low-field (LF) MRI scanners (<1T) are still prevalent in settings with limited resources or unreliable power supply. However, they often yield images with lower spatial resolution and contrast than high-field (HF) scanners. This quality disparity can result in inaccurate clinician interpretations. Image Quality Transfer (IQT) has been developed to enhance the quality of images by learning a mapping function between low and high-quality images. Existing IQT models often fail to restore high-frequency features, leading to blurry output. In this paper, we propose a 3D conditional diffusion model to improve 3D volumetric data, specifically LF MR images. Additionally, we incorporate a cross-batch mechanism into the self-attention and padding of our network, ensuring broader contextual awareness even under small 3D patches. Experiments on the publicly available Human Connectome Project (HCP) dataset for IQT and brain parcellation demonstrate that our model outperforms existing methods both quantitatively and qualitatively. The code is publicly available at \url{https://github.com/edshkim98/DiffusionIQT}. | A 3D Conditional Diffusion Model for Image Quality Transfer - An Application to Low-Field MRI | [
"Seunghoi Kim",
"Daniel C. Alexander",
"Ahmed Karam Eldaly",
"Matteo Figini",
"Henry F J Tregidgo"
] | Workshop/DGM4H | [
"https://github.com/edshkim98/diffusioniqt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ThnI0oEeVG | @inproceedings{
granese2023the,
title={The Negative Impact of Denoising on Automated Classification of Electrocardiograms},
author={Federica Granese and Ahmad Fall and Alex Lence and Joe-Elie Salem and Jean-Daniel Zucker and Edi Prifti},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=ThnI0oEeVG}
} | We present an evaluation of recent state-of-the-art electrocardiogram denoising methods and assess their impact on the performance of automatic diagnosis classifiers, with a focus on the risk prediction of torsade de pointes arrhythmia. Our findings indicate that the traditional approach of evaluating denoising methods independently of the application is insufficient. This is particularly the case for applications where the signals are used for phenotype prediction. We observed that when classifiers are fed denoised data instead of raw data, their performance significantly deteriorates, with a decline of up to 40 percentage points in accuracy and up to 27 percentage points in AUROC when a misclassification detection method is further applied, underscoring a notable reduction in model reliability. These findings highlight the importance of considering the downstream impact of denoising on automated classification tasks and shed light on the complexities of trustworthiness in the context of healthcare applications. | The Negative Impact of Denoising on Automated Classification of Electrocardiograms | [
"Federica Granese",
"Ahmad Fall",
"Alex Lence",
"Joe-Elie Salem",
"Jean-Daniel Zucker",
"Edi Prifti"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=SXw8DBKoRg | @inproceedings{
sch{\"u}rch2023generating,
title={Generating Personalized Insulin Treatments Strategies with Conditional Generative Time Series Models},
author={Manuel Sch{\"u}rch and Xiang Li and Ahmed Allam and Giulia Hofer and Amina Mollaysa and Claudia Cavelti-Weder and Michael Krauthammer},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=SXw8DBKoRg}
} | We propose a novel framework that combines deep generative time series models with decision theory for generating personalized treatment strategies. It leverages historical patient trajectory data to jointly learn the generation of realistic personalized treatment and future outcome trajectories through deep generative time series models. In particular, our framework enables the generation of novel multivariate treatment strategies tailored to the personalized patient history and trained for optimal expected future outcomes based on conditional expected utility maximization. We demonstrate our framework by generating personalized insulin treatment strategies and blood glucose predictions for hospitalized diabetes patients, showcasing the potential of our approach for generating improved personalized treatment strategies. | Generating Personalized Insulin Treatments Strategies with Conditional Generative Time Series Models | [
"Manuel Schürch",
"Xiang Li",
"Ahmed Allam",
"Giulia Hofer",
"Amina Mollaysa",
"Claudia Cavelti-Weder",
"Michael Krauthammer"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RWNgcQuPzj | @inproceedings{
kaleta2023lcsd,
title={{LC}-{SD}: Realistic Endoscopic Image Generation with Limited Training Data},
author={Joanna Kaleta and Diego Dall'alba and Szymon Plotka and Przemyslaw Korzeniowski},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=RWNgcQuPzj}
} | Computer-assisted surgical systems provide support information to the surgeon, which can improve the execution and overall outcome of the procedure. These systems are based on deep learning models that are trained on complex and challenging-to-annotate data. Generating synthetic data can overcome these limitations, but it is necessary to reduce the domain gap between real and synthetic data. We propose a method for image-to-image translation based on a Stable Diffusion model, which generates realistic images starting from synthetic data. Compared to previous works, the proposed method is better suited for clinical application as it requires a much smaller amount of input data and allows finer control over the generation of details by introducing different variants of supporting control networks. The proposed method is applied in the context of laparoscopic cholecystectomy, using synthetic and real data from public datasets. It achieves a mean Intersection over Union of 69.76%, significantly improving the baseline results (69.76% vs. 42.21%). The proposed method for translating synthetic images into images with realistic characteristics will enable the training of deep learning methods that can generalize optimally to real-world contexts, thereby improving computer-assisted intervention guidance systems. | LC-SD: Realistic Endoscopic Image Generation with Limited Training Data | [
"Joanna Kaleta",
"Diego Dall'alba",
"Szymon Plotka",
"Przemyslaw Korzeniowski"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=RUxqqDRXu2 | @inproceedings{
bafna2023diffrnafold,
title={Diff{RNAF}old: Generating {RNA} Tertiary Structures with Latent Space Diffusion},
author={Mihir Bafna and Vikranth Keerthipati and Subhash Kanaparthi and Ruochi Zhang},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=RUxqqDRXu2}
} | RNA molecules provide an exciting frontier for novel therapeutics. Accurate determination of RNA structure could accelerate development of therapeutics through an improved understanding of function. However, the extremely large conformation space has kept the RNA 3D structure space largely unresolved. Using recent advances in generative modeling, we propose DiffRNAFold, a latent space diffusion model for RNA tertiary structure design. Our preliminary results suggest that DiffRNAFold generated molecules are similar in 3D space to true RNA molecules, providing an important first step towards accurate structure and function prediction in vivo. | DiffRNAFold: Generating RNA Tertiary Structures with Latent Space Diffusion | [
"Mihir Bafna",
"Vikranth Keerthipati",
"Subhash Chandra Kanaparthi",
"Ruochi Zhang"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=QAruOR4nUa | @inproceedings{
lu2023effectively,
title={Effectively Fine-tune to Improve Large Multimodal Models for Radiology Report Generation},
author={Yuzhe Lu and Sungmin Hong and Yash Shah and Panpan Xu},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=QAruOR4nUa}
} | Writing radiology reports from medical images requires a high level of domain expertise. It is time-consuming even for trained radiologists and can be error-prone for inexperienced radiologists. It would be appealing to automate this task by leveraging generative AI, which has shown rapid progress in vision and language understanding. In particular, Large Language Models (LLMs) have demonstrated impressive capabilities recently and continued to set new state-of-the-art performance on almost all natural language tasks. While many have proposed architectures to combine vision models with LLMs for multimodal tasks, few have explored practical fine-tuning strategies. In this work, we propose a simple yet effective two-stage fine-tuning protocol to align visual features to the LLM's text embedding space as soft visual prompts. Our framework with OpenLLaMA-7B achieved state-of-the-art level performance without domain-specific pretraining. Moreover, we provide detailed analyses of soft visual prompts and attention mechanisms, shedding light on future research directions. | Effectively Fine-tune to Improve Large Multimodal Models for Radiology Report Generation | [
"Yuzhe Lu",
"Sungmin Hong",
"Yash Shah",
"Panpan Xu"
] | Workshop/DGM4H | 2312.01504 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=PvHhhn1iX9 | @inproceedings{
yu2023adversarial,
title={Adversarial Denoising Diffusion Model for Unsupervised Anomaly Detection},
author={Jongmin Yu and Hyeontaek Oh and Jinhong Yang},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=PvHhhn1iX9}
} | In this paper, we propose the Adversarial Denoising Diffusion Model (ADDM). The ADDM is based on the Denoising Diffusion Probabilistic Model (DDPM) but complementarily trained by adversarial learning. The proposed adversarial learning is achieved by classifying model-based denoised samples and samples to which random Gaussian noise is added at a specific sampling step. With the addition of explicit adversarial learning on data samples, ADDM can learn the semantic characteristics of the data more robustly during training, achieving similar data sampling performance with far fewer sampling steps than DDPM. We apply ADDM to unsupervised anomaly detection in MRI images. Experimental results show that the proposed ADDM outperformed existing generative model-based unsupervised anomaly detection methods. In particular, compared to other DDPM-based anomaly detection methods, the proposed ADDM shows better performance with the same number of sampling steps and similar performance with 50% fewer sampling steps. | Adversarial Denoising Diffusion Model for Unsupervised Anomaly Detection | [
"Jongmin Yu",
"Hyeontaek Oh",
"Jinhong Yang"
] | Workshop/DGM4H | 2312.04382 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=OBzxDn1XEy | @inproceedings{
rosnati2023robust,
title={Robust semi-supervised segmentation with timestep ensembling diffusion models},
author={Margherita Rosnati and M{\'e}lanie Roschewitz and Ben Glocker},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=OBzxDn1XEy}
} | Medical image segmentation is challenging due to limited data and annotations. Denoising diffusion probabilistic models (DDPM) show promise in modelling natural image distributions and are successfully applied in medical imaging. Our research focuses on semi-supervised image segmentation using diffusion models' latent representations and addressing domain generalisation. We found that optimal performance depends on the choice of diffusion steps and ensembling. Our model outperformed in domain-shifted settings while remaining competitive within domain, highlighting DDPMs' potential for medical image segmentation. | Robust semi-supervised segmentation with timestep ensembling diffusion models | [
"Margherita Rosnati",
"Mélanie Roschewitz",
"Ben Glocker"
] | Workshop/DGM4H | 2311.07421 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Ntu8oGEV3S | @inproceedings{
bedin2023ecg,
title={{ECG} Inpainting with denoising diffusion prior},
author={Lisa Bedin and Gabriel Cardoso and Remi Dubois and Eric Moulines},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=Ntu8oGEV3S}
} | In this work, we train a denoising diffusion generative model (DDGM) on healthy electrocardiogram (ECG) data, capable of generating realistic healthy heartbeats. We then show how recent advances in solving linear inverse Bayesian problems with DDGMs can be used to derive interpretable outlier detection tools for electrophysiological anomalies. | ECG Inpainting with denoising diffusion prior | [
"Lisa Bedin",
"Gabriel Cardoso",
"Remi Dubois",
"Eric Moulines"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=NQEhg8WdvG | @inproceedings{
wu2023counterfactual,
title={Counterfactual Generative Models for Time-Varying Treatments},
author={Shenghao Wu and Wenbin Zhou and Minshuo Chen and Shixiang Zhu},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=NQEhg8WdvG}
} | Estimating the counterfactual outcome of treatment is essential for decision-making in public health and clinical science, among others. Often, treatments are administered in a sequential, time-varying manner, leading to an exponentially increased number of possible counterfactual outcomes. Furthermore, in modern applications, the outcomes are high-dimensional and conventional average treatment effect estimation fails to capture disparities in individuals. To tackle these challenges, we propose a novel conditional generative framework capable of producing counterfactual samples under time-varying treatment, without the need for explicit density estimation. Our method carefully addresses the distribution mismatch between the observed and counterfactual distributions via a loss function based on inverse probability weighting. We present a thorough evaluation of our method using both synthetic and real-world data. Our results demonstrate that our method is capable of generating high-quality counterfactual samples and outperforms the state-of-the-art baselines. | Counterfactual Generative Models for Time-Varying Treatments | [
"Shenghao Wu",
"Wenbin Zhou",
"Minshuo Chen",
"Shixiang Zhu"
] | Workshop/DGM4H | 2305.15742 | [
"https://github.com/shenghaowu/counterfactual-generative-models"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=MGfj4n1V5Y | @inproceedings{
yu2023investigating,
title={Investigating Causality Between Genotype And Clinical Phenotype In Neurological Disorders Using Structural Causal Model and Normalizing Flow},
author={Fanyang Yu and Rongguang Wang and Pratik Chaudhari and Christos Davatzikos},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=MGfj4n1V5Y}
} | Understanding the causal relationship between genotype and clinical phenotype is crucial for disease treatment and prognosis. Despite the existing literature exploring associations of genetics with clinical phenotypes such as imaging patterns and survival in various diseases, few to no works address the causation underlying these correlations. This paper leverages recent advances in causal deep learning to formulate the phenotypical outcome given a change in genotype as a causal inference problem. We build upon a structural causal model (SCM) with normalizing flows parameterized by deep networks to perform counterfactual queries investigating the causal relationship between genotype and clinical phenotype in two types of neurological disorders. Specifically, we focus on the causal effect of (1) the APOE4 allele on brain volumetric measures in Alzheimer's disease; (2) key driver gene mutations on overall survival (OS) in glioblastoma. Experimental results show that APOE4 noncarriers causally lead to greater gray matter atrophy in the frontal lobe, and survival-correlated genes do not exhibit a causal effect on OS in glioblastoma. | Investigating Causality Between Genotype And Clinical Phenotype In Neurological Disorders Using Structural Causal Model and Normalizing Flow | [
"Fanyang Yu",
"Rongguang Wang",
"Pratik Chaudhari",
"Christos Davatzikos"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=LK934TymQh | @inproceedings{
napier2023transferring,
title={Transferring Movement Understanding for Parkinson{\textquoteright}s Therapy by Generative Pre-Training},
author={Emily Napier and Gavia Gray and Sageev Oore},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=LK934TymQh}
} | Motion data is a modality of clinical importance for Parkinson's research but modeling it typically requires careful design of the machine learning system. Inspired by recent advances in autoregressive language modeling, we investigate the extent to which these modeling assumptions may be relaxed. We quantize motion capture data into discrete tokens and apply a generic autoregressive model
to learn a model of human motion. Representing both positions and joint angles in a combined vocabulary, we model forward and inverse kinematics in addition to autoregressive prediction in 3D and angular space. This lets us pre-train on a 1B token, 40 hour dataset of motion capture, and then finetune on one hour of clinically relevant data in a downstream task. Despite the naivety of this approach, the model is able to perform clinical tasks and we demonstrate high performance classifying 5 hours of dance data. | Transferring Movement Understanding for Parkinson’s Therapy by Generative Pre-Training | [
"Emily Napier",
"Gavia Gray",
"Tristan Loria",
"Veronica Vuong",
"Michael Thaut",
"Sageev Oore"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=HzuNJ0Nqvf | @inproceedings{
kozlova2023protein,
title={Protein Inpainting Co-Design with ProtFill},
author={Elizaveta Kozlova and Arthur Valentin and Daniel Nakhaee-Zadeh Gutierrez},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=HzuNJ0Nqvf}
} | Designing new proteins with specific binding capabilities is a challenging task that has the potential to revolutionize many fields, including medicine and material science. Here we introduce ProtFill, a novel method for the simultaneous design of protein structures and sequences. Employing an $SE(3)$ equivariant diffusion graph neural network, our method excels in both sequence prediction and structure recovery compared to SOTA models. We incorporate edge feature updates in GVP-GNN message passing layers to refine our design process. The model's applicability for the interface redesign task is showcased for antibodies as well as other proteins. The code is available at https://github.com/adaptyvbio/ProtFill. | Protein Inpainting Co-Design with ProtFill | [
"Elizaveta Kozlova",
"Arthur Valentin",
"Daniel Nakhaee-Zadeh Gutierrez"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=H7bgz9b9sz | @inproceedings{
ziaei2023language,
title={Language models are susceptible to incorrect patient self-diagnosis in medical applications},
author={Rojin Ziaei and Samuel Schmidgall},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=H7bgz9b9sz}
} | Large language models (LLMs) are becoming increasingly relevant as a potential tool for healthcare, aiding communication between clinicians, researchers, and patients. However, traditional evaluations of LLMs on medical exam questions do not reflect the complexity of real patient-doctor interactions. An example of this complexity is the introduction of patient self-diagnosis, where a patient attempts to diagnose their own medical conditions from various sources. While the patient sometimes arrives at an accurate conclusion, they are more often led toward misdiagnosis due to the patient's over-emphasis on bias-validating information. In this work, we present a variety of LLMs with multiple-choice questions from United States medical board exams which are modified to include self-diagnostic reports from patients. Our findings highlight that when a patient proposes incorrect bias-validating information, the diagnostic accuracy of LLMs drops dramatically, revealing a high susceptibility to errors in self-diagnosis. | Language models are susceptible to incorrect patient self-diagnosis in medical applications | [
"Rojin Ziaei",
"Samuel Schmidgall"
] | Workshop/DGM4H | 2309.09362 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=EJ7YNgWYFj | @inproceedings{
abdine2023prottext,
title={Prot2Text: Multimodal Protein{\textquoteright}s Function Generation with {GNN}s and Transformers},
author={Hadi Abdine and Michail Chatzianastasis and Costas Bouyioukos and Michalis Vazirgiannis},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=EJ7YNgWYFj}
} | In recent years, significant progress has been made in the field of protein function prediction with the development of various machine-learning approaches. However, most existing methods formulate the task as a multi-classification problem, i.e. assigning predefined labels to proteins. In this work, we propose a novel approach, Prot2Text, which predicts a protein's function in a free text style, moving beyond the conventional binary or categorical classifications. By combining Graph Neural Networks (GNNs) and Large Language Models (LLMs) in an encoder-decoder framework, our model effectively integrates diverse data types including protein sequence, structure, and textual annotation and description. This multimodal approach allows for a holistic representation of proteins' functions, enabling the generation of detailed and accurate functional descriptions. To evaluate our model, we extracted a multimodal protein dataset from SwissProt, and demonstrate empirically the effectiveness of Prot2Text. These results highlight the transformative impact of multimodal models, specifically the fusion of GNNs and LLMs, empowering researchers with powerful tools for more accurate function prediction of existing as well as first-to-see proteins. | Prot2Text: Multimodal Protein’s Function Generation with GNNs and Transformers | [
"Hadi Abdine",
"Michail Chatzianastasis",
"Costas Bouyioukos",
"Michalis Vazirgiannis"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=Bfr0m4Ucl6 | @inproceedings{
smit2023are,
title={Are we going {MAD}? Benchmarking Multi-Agent Debate between Language Models for Medical Q\&A},
author={Andries Smit and Paul Duckworth and Nathan Grinsztajn and Kale-ab Tessera and Thomas Barrett and Arnu Pretorius},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=Bfr0m4Ucl6}
} | Recent advancements in large language models (LLMs) underscore their potential for responding to medical inquiries. However, ensuring that generative agents provide accurate and reliable answers remains an ongoing challenge. In this context, multi-agent debate (MAD) has emerged as a prominent strategy for enhancing the truthfulness of LLMs. In this work, we provide a comprehensive benchmark of MAD strategies for medical Q&A, along with open-source implementations. This sheds light on the effective utilization of various strategies including the trade-offs between cost, time, and accuracy. We build upon these insights to provide a novel debate-prompting strategy based on agent agreement that outperforms previously published strategies on medical Q&A tasks. | Are we going MAD? Benchmarking Multi-Agent Debate between Language Models for Medical Q&A | [
"Andries Petrus Smit",
"Paul Duckworth",
"Nathan Grinsztajn",
"Kale-ab Tessera",
"Thomas D Barrett",
"Arnu Pretorius"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=BfHX0hKRSe | @inproceedings{
sukeda2023jmedloramedical,
title={{JM}edLo{RA}:Medical Domain Adaptation on Japanese Large Language Models using Instruction-tuning},
author={Issey Sukeda and Masahiro Suzuki and Hiroki Sakaji and Satoshi Kodera},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=BfHX0hKRSe}
} | In the ongoing wave of impact driven by large language models (LLMs) like ChatGPT, the adaptation of LLMs to medical domain has emerged as a crucial research frontier. Since mainstream LLMs tend to be designed for general-purpose applications, constructing a medical LLM through domain adaptation is a huge challenge. While instruction-tuning is used to fine-tune some LLMs, its precise roles in domain adaptation remain unknown. Here we show the contribution of LoRA-based instruction-tuning to performance in Japanese medical question-answering tasks. In doing so, we employ a multifaceted evaluation for multiple-choice questions, including scoring based on "Exact match" and "Gestalt distance" in addition to the conventional accuracy. Our findings suggest that LoRA-based instruction-tuning can partially incorporate domain-specific knowledge into LLMs, with larger models demonstrating more pronounced effects. Furthermore, our results underscore the potential of adapting English-centric models for Japanese applications in domain adaptation, while also highlighting the persisting limitations of Japanese-centric models. This initiative represents a pioneering effort in enabling medical institutions to fine-tune and operate models without relying on external services. | JMedLoRA:Medical Domain Adaptation on Japanese Large Language Models using Instruction-tuning | [
"Issey Sukeda",
"Masahiro Suzuki",
"Hiroki Sakaji",
"Satoshi Kodera"
] | Workshop/DGM4H | 2310.10083 | [
""
] | https://huggingface.co/papers/2310.10083 | 0 | 2 | 0 | 4 | [
"AIgroup-CVM-utokyohospital/llama2-jmedlora-3000",
"AIgroup-CVM-utokyohospital/llama2-jmedlora-30000",
"AIgroup-CVM-utokyohospital/llama2-jmedlora-900"
] | [] | [] | [
"AIgroup-CVM-utokyohospital/llama2-jmedlora-3000",
"AIgroup-CVM-utokyohospital/llama2-jmedlora-30000",
"AIgroup-CVM-utokyohospital/llama2-jmedlora-900"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=AGQYCBdKA5 | @inproceedings{
kolbeinsson2023generative,
title={Generative models for wearables data},
author={Arinbj{\"o}rn Kolbeinsson and Luca Foschini},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=AGQYCBdKA5}
} | Data scarcity is a common obstacle in medical research due to the high costs associated with data collection and the complexity of gaining access to and utilizing data. Synthesizing health data may provide an efficient and cost-effective solution to this shortage, enabling researchers to explore distributions and populations that are not represented in existing observations or difficult to access due to privacy considerations. To that end, we have developed a multi-task self-attention model that produces realistic wearable activity data. We examine the characteristics of the generated data and quantify its similarity to genuine samples. | Generative models for wearables data | [
"Arinbjörn Kolbeinsson",
"Luca Foschini"
] | Workshop/DGM4H | 2307.16664 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=9cHt8szDMj | @inproceedings{
decruyenaere2023synthetic,
title={Synthetic Data: Can We Trust Statistical Estimators?},
author={Alexander Decruyenaere and Paloma Rabaey and Christiaan Polet and Johan Decruyenaere and Stijn Vansteelandt and Thomas Demeester and Heidelinde Dehaene},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=9cHt8szDMj}
} | The increasing interest in data sharing makes synthetic data appealing. However, the analysis of synthetic data raises a unique set of methodological challenges. In this work, we highlight the importance of inferential utility and provide empirical evidence against naive inference from synthetic data (i.e., treating synthetic data as if it were really observed). We argue that the rate of false-positive findings (type 1 error) will be unacceptably high, even when the estimates are unbiased. One of the reasons is the underestimation of the true standard error, which may even progressively increase with larger sample sizes due to slower convergence. This is especially problematic for deep generative models. Before publishing synthetic data, it is essential to develop statistical inference tools for such data. | Synthetic Data: Can We Trust Statistical Estimators? | [
"Alexander Decruyenaere",
"Heidelinde Dehaene",
"Paloma Rabaey",
"Christiaan Polet",
"Johan Decruyenaere",
"Stijn Vansteelandt",
"Thomas Demeester"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=5jpUvL64Av | @inproceedings{
davidson2023buddi,
title={Bu{DDI}: Bulk Deconvolution with Domain Invariance to predict cell-type-specific perturbations from bulk},
author={Natalie Davidson and Casey Greene},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=5jpUvL64Av}
} | While single-cell experiments provide deep cellular resolution within a single sample, some single-cell experiments are inherently more challenging than bulk experiments due to dissociation difficulties, cost, or limited tissue availability. This creates a situation where we have deep cellular profiles of one sample or condition, and bulk profiles across multiple samples and conditions. To bridge this gap, we propose BuDDI (BUlk Deconvolution with Domain Invariance). BuDDI utilizes domain adaptation techniques to effectively integrate available corpora of case-control bulk and reference scRNA-seq observations to infer cell-type-specific perturbation effects. BuDDI achieves this by learning independent latent spaces within a single variational autoencoder (VAE) encompassing at least four sources of variability: 1) cell-type proportion, 2) perturbation effect, 3) structured experimental variability, and 4) remaining variability. Since each latent space is encouraged to be independent, we simulate perturbation responses by independently composing each latent space to simulate cell-type-specific perturbation responses.
We evaluated BuDDI’s performance on simulated and real data with experimental designs of increasing complexity. We first validated that BuDDI could learn domain invariant latent spaces on data with matched samples across each source of variability. Then we validated that BuDDI could accurately predict cell-type-specific perturbation response when no single-cell perturbed profiles were used during training; instead, only bulk samples had both perturbed and non-perturbed observations. Finally, we validated BuDDI on predicting sex-specific differences, an experimental design where it is not possible to have matched samples. In each experiment, BuDDI outperformed all other comparative methods and baselines. As more reference atlases are completed, BuDDI provides a path to combine these resources with bulk-profiled treatment or disease signatures to study perturbations, sex differences, or other factors at single-cell resolution. | BuDDI: Bulk Deconvolution with Domain Invariance to predict cell-type-specific perturbations from bulk | [
"Natalie R Davidson",
"Casey Greene"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3tu9EnRoWl | @inproceedings{
andani2023multivstain,
title={Multi-V-Stain: Multiplexed Virtual Staining of Histopathology Whole-Slide Images},
author={Sonali Andani and Boqi Chen and Joanna Ficek-Pascual and Simon Heinke and Ruben Casanova and Bettina Sobottka and Bernd Bodenmiller and Viktor Koelzer and Gunnar Ratsch},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=3tu9EnRoWl}
} | Pathological assessment on Hematoxylin \& Eosin (H\&E) stained tissue samples is a clinically-established routine for cancer diagnosis. While providing rich morphological information, it lacks insights on protein expression patterns, essential for cancer prognosis and treatment decisions. Imaging Mass Cytometry (IMC) is adept at highly multiplexed protein profiling. However, it has challenges such as high operational cost and a restrictive focus on small Regions-of-Interest. To this end, we propose Multi-V-Stain, a novel image-to-image translation method for multiplexed IMC virtual staining. Our method can effectively leverage the rich morphological features from H\&E images to predict multiplexed protein expressions on a Whole-Slide Image level. In our assessments using an in-house melanoma dataset, Multi-V-Stain consistently achieves higher image quality and generates stains that are more biologically relevant when compared to existing techniques. | Multi-V-Stain: Multiplexed Virtual Staining of Histopathology Whole-Slide Images | [
"Sonali Andani",
"Boqi Chen",
"Joanna Ficek-Pascual",
"Simon Heinke",
"Ruben Casanova",
"Bettina Sobottka",
"Bernd Bodenmiller",
"Viktor Koelzer",
"Gunnar Ratsch"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=1cUu0a4T5I | @inproceedings{
khadhraoui2023hierarchical,
title={Hierarchical Protein Representation for Interface Co-design with {HICON}},
author={Aous Khadhraoui and Daniel Nakhaee-Zadeh Gutierrez and Elizaveta Kozlova},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=1cUu0a4T5I}
} | Protein-protein interactions (PPIs) are essential for many biological processes, but their design is challenging due to their complex and dynamic nature. We propose a new model called Hierarchical Interface CO-design Network (HICON) that can jointly generate the sequence and 3D structure of protein interfaces. HICON uses a novel hierarchical architecture that combines atomic and amino acid resolutions in an equivariant manner and leverages Large Protein Language Models for sequence initialization. We evaluate HICON on a variety of biological interfaces, including protein-protein, enzyme-ligand, and antibody paratope-epitope interfaces. Our results show that HICON outperforms state-of-the-art models on sequence prediction and paratope co-design on several computational metrics. | Hierarchical Protein Representation for Interface Co-design with HICON | [
"Aous Khadhraoui",
"Daniel Nakhaee-Zadeh Gutierrez",
"Elizaveta Kozlova"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=0aeDKGhlTo | @inproceedings{
sharma2023medic,
title={{MED}iC: Mitigating {EEG} Data Scarcity Via Class-Conditioned Diffusion Model},
author={Gulshan Sharma and Abhinav Dhall and Ramanathan Subramanian},
booktitle={Deep Generative Models for Health Workshop NeurIPS 2023},
year={2023},
url={https://openreview.net/forum?id=0aeDKGhlTo}
} | Learning with a small-scale Electroencephalography (EEG) dataset is a non-trivial task. On the other hand, collecting a large-scale EEG dataset is equally challenging due to subject availability and procedure sophistication constraints. Data augmentation offers a potential solution to address the shortage of data; however, traditional augmentation techniques are inefficient for EEG data. In this paper, we propose MEDiC, a class-conditioned Denoising Diffusion Probabilistic Model (DDPM) based approach to generate synthetic EEG embeddings. We perform experiments on a publicly accessible dataset. Empirical findings indicate that MEDiC efficiently generates synthetic EEG embeddings, which can serve as effective proxies to original EEG data. | MEDiC: Mitigating EEG Data Scarcity Via Class-Conditioned Diffusion Model | [
"Gulshan Sharma",
"Abhinav Dhall",
"Ramanathan Subramanian"
] | Workshop/DGM4H | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=zxaoBcdACd | @inproceedings{
pal2023multitabqa,
title={MultiTab{QA}: Generating Tabular Answers for Multi-Table Question Answering},
author={Vaishali Pal and Andrew Yates and Evangelos Kanoulas and Maarten Rijke},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=zxaoBcdACd}
} | Recent advances in tabular question answering (QA) with large language models are constrained in their coverage and only answer questions over a single table. However, real-world queries are complex in nature, often over multiple tables in a relational database or web page. Single table questions do not involve common table operations such as set operations, Cartesian products (joins), or nested queries. Furthermore, multi-table operations often result in a tabular output, which necessitates table generation capabilities of tabular QA models. To fill this gap, we propose a new task of answering questions over multiple tables. Our model, MultiTabQA, not only answers questions over multiple tables, but also generalizes to generate tabular answers. To enable effective training, we build a pre-training dataset comprising 132,645 SQL queries and tabular answers. Further, we evaluate the generated tables by introducing table-specific metrics of varying strictness assessing various levels of granularity of the table structure. MultiTabQA outperforms state-of-the-art single table QA models adapted to a multi-table QA setting by finetuning on three datasets: Spider, Atis, and GeoQuery. | MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering | [
"Vaishali Pal",
"Andrew Yates",
"Evangelos Kanoulas",
"Maarten Rijke"
] | Workshop/TRL | 2305.12820 | [
"https://github.com/kolk/multitabqa"
] | https://huggingface.co/papers/2305.12820 | 1 | 0 | 0 | 4 | [
"vaishali/multitabqa-base",
"vaishali/multitabqa-base-sql",
"vaishali/multitabqa-base-atis",
"vaishali/multitabqa-base-geoquery"
] | [] | [] | [
"vaishali/multitabqa-base",
"vaishali/multitabqa-base-sql",
"vaishali/multitabqa-base-atis",
"vaishali/multitabqa-base-geoquery"
] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=zWzgulKtvw | @inproceedings{
buss2023generating,
title={Generating Data Augmentation Queries Using Large Language Models},
author={Christopher Buss and Jasmin Mousavi and Mikhail Tokarev and Arash Termehchy and David Maier and Stefan Lee},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=zWzgulKtvw}
} | Users often want to augment entities in their datasets with relevant information from external data sources. As many external sources are accessible only via keyword-search interfaces, a user usually has to manually formulate a keyword query that extracts relevant information for each entity. This is challenging as many data sources contain numerous tuples, only a small fraction of which may be relevant. Moreover, different datasets may represent the same information in distinct forms and under different terms. In such cases, it is difficult to formulate a query that precisely retrieves information relevant to a specific entity. Current methods for information enrichment mainly rely on resource-intensive manual effort to formulate queries to discover relevant information. However, it is often important for users to get initial answers quickly and without substantial investment in resources (such as human attention). We propose a progressive approach to discovering entity-relevant information from external sources with minimal expert intervention. It leverages end users’ feedback to progressively learn how to retrieve information relevant to each entity in a dataset from external data sources. To bootstrap performance, we use a pre-trained large language model (LLM) to produce rich representations of entities. We evaluate the use of parameter efficient techniques for aligning the LLM’s representations with our downstream task of online query policy learning and find that even lightweight fine-tuning methods can effectively adapt encodings to domain-specific data. | Generating Data Augmentation Queries Using Large Language Models | [
"Christopher Buss",
"Jasmin Mousavi",
"Mikhail Tokarev",
"Arash Termehchy",
"David Maier",
"Stefan Lee"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=uzVCoZSfly | @inproceedings{
chen2023recontab,
title={ReConTab: Regularized Contrastive Representation Learning for Tabular Data},
author={Suiyao Chen and Jing Wu and Naira Hovakimyan and Handong Yao},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=uzVCoZSfly}
} | Representation learning stands as one of the critical machine learning techniques across various domains. Through the acquisition of high-quality features, pre-trained embeddings significantly reduce input space redundancy, benefiting downstream pattern recognition tasks such as classification, regression, or detection. Nonetheless, in the domain of tabular data, feature engineering and selection still heavily rely on manual intervention, leading to time-consuming processes and necessitating domain expertise. In response to this challenge, we introduce ReConTab, a deep automatic representation learning framework with regularized contrastive learning. Agnostic to any type of modeling task, ReConTab constructs an asymmetric autoencoder based on the same raw features from model inputs, producing low-dimensional representative embeddings. Specifically, regularization techniques are applied for raw feature selection. Meanwhile, ReConTab leverages contrastive learning to distill the most pertinent information for downstream tasks. Experiments conducted on extensive real-world datasets substantiate the framework's capacity to yield substantial and robust performance improvements. Furthermore, we empirically demonstrate that pre-trained embeddings can seamlessly integrate as easily adaptable features, enhancing the performance of various traditional methods such as XGBoost and Random Forest. | ReConTab: Regularized Contrastive Representation Learning for Tabular Data | [
"Suiyao Chen",
"Jing Wu",
"Naira Hovakimyan",
"Handong Yao"
] | Workshop/TRL | 2310.18541 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=u2OVQ2Xvq1 | @inproceedings{
zhou2023unlocking,
title={Unlocking the Transferability of Tokens in Deep Models for Tabular Data},
author={Qile Zhou and Han-Jia Ye and Leye Wang and De-Chuan Zhan},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=u2OVQ2Xvq1}
} | Fine-tuning a pre-trained deep neural network has become a successful paradigm in various machine learning tasks. However, such a paradigm becomes particularly challenging with tabular data when there are discrepancies between the feature sets of pre-trained models and the target tasks. In this paper, we propose TabToken, a method that aims to enhance the quality of feature tokens (i.e., embeddings of tabular features). TabToken allows for the utilization of pre-trained models when the upstream and downstream tasks share overlapping features, facilitating model fine-tuning even with limited training examples. Specifically, we introduce a contrastive objective that regularizes the tokens, capturing the semantics within and across features. During the pre-training stage, the tokens are learned jointly with top-layer deep models such as transformers. In the downstream task, tokens of the shared features are kept fixed while TabToken efficiently fine-tunes the remaining parts of the model. TabToken not only enables knowledge transfer from a pre-trained model to tasks with heterogeneous features, but also enhances the discriminative ability of deep tabular models in standard classification and regression tasks. | Unlocking the Transferability of Tokens in Deep Models for Tabular Data | [
"Qile Zhou",
"Han-Jia Ye",
"Leye Wang",
"De-Chuan Zhan"
] | Workshop/TRL | 2310.15149 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=s0rM4hWBUq | @inproceedings{
hwang2023augmentation,
title={Augmentation for Context in Financial Numerical Reasoning over Textual and Tabular Data with Large-Scale Language Model},
author={Yechan Hwang and Jinsu Lim and Young-Jun Lee and Ho-Jin Choi},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=s0rM4hWBUq}
} | Constructing large-scale datasets for numerical reasoning over tabular and textual data in the financial domain is particularly challenging. Moreover, even the commonly used augmentation techniques for dataset construction prove to be ineffective in augmenting financial datasets. To address this challenge, this paper proposes a context augmentation methodology for enhancing the financial dataset, which generates new contexts for the original question. To do this, we leverage the hallucination capability of large-scale generative language models. Specifically, by providing the language model with instructions that constrain context generation, together with the original dataset's questions and arithmetic programs as input to the prompt, we create plausible contexts that provide evidence for the given questions. The experimental results showed that the reasoning performance improved when we augmented the FinQA dataset using our methodology and trained the model with it. | Augmentation for Context in Financial Numerical Reasoning over Textual and Tabular Data with Large-Scale Language Model | [
"Yechan Hwang",
"Jinsu Lim",
"Young-Jun Lee",
"Ho-Jin Choi"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=rSuRu22bbN | @inproceedings{
liu2023tabcontrast,
title={TabContrast: A Local-Global Level Method for Tabular Contrastive Learning},
author={Hao Liu and Yixin Chen and Bradley Fritz and Christopher King},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=rSuRu22bbN}
} | Representation learning is a cornerstone of contemporary artificial intelligence, significantly boosting performance across diverse downstream tasks. Notably, domains like computer vision and NLP have witnessed transformative advancements owing to self-supervised contrastive learning techniques. Yet, the translation of these techniques to tabular data remains an intricate challenge. Traditional approaches, especially within the tabular arena, tend to explore model architecture and loss function design, often overlooking the nuanced creation of positive and negative sample pairs. These pairs are vital, shaping the quality of the learned representations and the overall model efficacy. Recognizing this imperative, our paper probes the specificities of tabular data and the unique challenges it presents. As a solution, we introduce "TabContrast". This method adopts a local-global contrast approach, segmenting features into subsets and subsequently performing tailored clustering to unveil inherent data patterns. By aligning samples with cluster centroids and emphasizing clear semantic distinctions, TabContrast promises enhanced representation efficacy. Preliminary evaluations highlight its potential, particularly in tabular datasets with more features available. | TabContrast: A Local-Global Level Method for Tabular Contrastive Learning | [
"Hao Liu",
"Yixin Chen",
"Bradley Fritz",
"Christopher King"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=pPAK4FIopM | @inproceedings{
chowdhury2023explaining,
title={Explaining Explainers: Necessity and Sufficiency in Tabular Data},
author={Prithwijit Chowdhury and Mohit Prabhushankar and Ghassan AlRegib},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=pPAK4FIopM}
} | In recent days, ML classifiers trained on tabular data are used to make efficient and fast decisions for various decision-making tasks. The lack of transparency in the decision-making processes of these models have led to the emergence of EXplainable AI (XAI). However, discrepancies exist among XAI programs, raising concerns about their accuracy. The notion of what an “important" and “relevant" feature is, is different for different explanation strategies. Thus grounding them using theoretically backed ideas of necessity and sufficiency can prove to be a reliable way to increase their trustworthiness. We propose a novel approach to quantify these two concepts in order to provide a means to explore which explanation method might be suitable for tasks involving the implementation of sparse high dimensional tabular datasets. Moreover, our global necessity and sufficiency scores aim to help experts to correlate their domain knowledge with our findings and also allow an extra basis for evaluation of the results provided by popular local explanation methods like LIME and SHAP. | Explaining Explainers: Necessity and Sufficiency in Tabular Data | [
"Prithwijit Chowdhury",
"Mohit Prabhushankar",
"Ghassan AlRegib"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=lsn7ehxAdt | @inproceedings{
thimonier2023beyond,
title={Beyond Individual Input for Deep Anomaly Detection on Tabular Data},
author={Hugo Thimonier and Fabrice Popineau and Arpad Rimmel and Bich-Li{\^e}n DOAN},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=lsn7ehxAdt}
} | Anomaly detection is vital in many domains, such as finance, healthcare, and cybersecurity. In this paper, we propose a novel deep anomaly detection method for tabular data that leverages Non-Parametric Transformers (NPTs), a model initially proposed for supervised tasks, to capture both feature-feature and sample-sample dependencies. In a reconstruction-based framework, we train the NPT to reconstruct masked features of normal samples. In a non-parametric fashion, we leverage the whole training set during inference and use the model's ability to reconstruct the masked features to generate an anomaly score. To the best of our knowledge, this is the first work to successfully combine feature-feature and sample-sample dependencies for anomaly detection on tabular datasets. Through extensive experiments on 31 benchmark tabular datasets, we demonstrate that our method achieves state-of-the-art performance, outperforming existing methods by 2.4% and 1.2% in terms of F1-score and AUROC, respectively. Our ablation study provides evidence that modeling both types of dependencies is crucial for anomaly detection on tabular data. | Beyond Individual Input for Deep Anomaly Detection on Tabular Data | [
"Hugo Thimonier",
"Fabrice Popineau",
"Arpad Rimmel",
"Bich-Liên DOAN"
] | Workshop/TRL | 2305.15121 | [
"https://github.com/hugothimonier/npt-ad"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=lWBMNF7D8F | @inproceedings{
marton2023gradtree,
title={GradTree: Learning Axis-Aligned Decision Trees with Gradient Descent},
author={Sascha Marton and Stefan L{\"u}dtke and Christian Bartelt and Heiner Stuckenschmidt},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=lWBMNF7D8F}
} | Decision Trees (DTs) are commonly used for many machine learning tasks due to their high degree of interpretability. However, learning a DT from data is a difficult optimization problem, as it is non-convex and non-differentiable. Therefore, common approaches learn DTs using a greedy growth algorithm that minimizes the impurity locally at each internal node. Unfortunately, this greedy procedure can lead to inaccurate trees.
In this paper, we present a novel approach for learning hard, axis-aligned DTs with gradient descent. The proposed method uses backpropagation with a straight-through operator on a dense DT representation, to jointly optimize all tree parameters.
Our approach outperforms existing methods on a wide range of binary classification benchmarks and is available under: https://github.com/s-marton/GradTree | GradTree: Learning Axis-Aligned Decision Trees with Gradient Descent | [
"Sascha Marton",
"Stefan Lüdtke",
"Christian Bartelt",
"Heiner Stuckenschmidt"
] | Workshop/TRL | 2305.03515 | [
"https://github.com/s-marton/gradtree"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=l1u7jA60wT | @inproceedings{
bordt2023elephants,
title={Elephants Never Forget: Testing Language Models for Memorization of Tabular Data},
author={Sebastian Bordt and Harsha Nori and Rich Caruana},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=l1u7jA60wT}
} | While many have shown how Large Language Models (LLMs) can be applied to a diverse set of tasks, the critical issues of data contamination and memorization are often glossed over. In this work, we address this concern for tabular data. Starting with simple qualitative tests for whether an LLM knows the names and values of features, we introduce a variety of different techniques to assess the degrees of contamination, including statistical tests for conditional distribution modeling and four tests that identify memorization. Our investigation reveals that LLMs are pre-trained on many popular tabular datasets. This exposure can lead to invalid performance evaluation on downstream tasks because the LLMs have, in effect, been fit to the test set. Interestingly, we also identify a regime where the language model reproduces important statistics of the data, but fails to reproduce the dataset verbatim. On these datasets, although seen during training, good performance on downstream tasks might not be due to overfitting. Our findings underscore the need for ensuring data integrity in machine learning tasks with LLMs. To facilitate future research, we release an open-source tool that can perform various tests for memorization https://github.com/interpretml/LLM-Tabular-Memorization-Checker. | Elephants Never Forget: Testing Language Models for Memorization of Tabular Data | [
"Sebastian Bordt",
"Harsha Nori",
"Rich Caruana"
] | Workshop/TRL | 2403.06644 | [
"https://github.com/interpretml/llm-tabular-memorization-checker"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=kzR5Cj5blw | @inproceedings{
si2023interpretabnet,
title={InterpreTabNet: Enhancing Interpretability of Tabular Data Using Deep Generative Models and Large Language Models},
author={Jacob Yoke Hong Si and Michael Cooper and Wendy Yusi Cheng and Rahul Krishnan},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=kzR5Cj5blw}
} | Tabular data are omnipresent in various sectors of industries. Neural networks for tabular data such as TabNet have been proposed to make predictions while leveraging the attention mechanism for interpretability. We find that the inferred attention masks on high-dimensional data are often dense, hindering interpretability. To remedy this, we propose the InterpreTabNet, a variant of the TabNet model that models the attention mechanism as a latent variable sampled from a Gumbel-Softmax distribution. This enables us to regularize the model to learn distinct concepts in the attention masks via a KL Divergence regularizer. It prevents overlapping feature selection which maximizes the model's efficacy and improves interpretability. To automate the interpretation of the features from our model, we employ GPT-4 and use prompt engineering to map from the learned feature mask onto natural language text describing the learned signal. Through comprehensive experiments on real-world datasets, we demonstrate that our InterpreTabNet Model outperforms previous methods for learning from tabular data while attaining competitive accuracy and interpretability. | InterpreTabNet: Enhancing Interpretability of Tabular Data Using Deep Generative Models and Large Language Models | [
"Jacob Yoke Hong Si",
"Michael Cooper",
"Wendy Yusi Cheng",
"Rahul Krishnan"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=k5Mnd4pO7X | @inproceedings{
bhasin2023on,
title={On Incorporating new Variables during Evaluation},
author={Harsimran Bhasin and Soumyadeep Ghosh},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=k5Mnd4pO7X}
} | Any classification or regression model needs access to the same features and inputs
that were utilized to train it. However, in real-world scenarios, several models remain in operation for years, and new variables/features may become available
during the inference stage. If such features are to be utilized, their values have to be captured in the dataset that was used for training the model. We propose
a model-agnostic approach in which a model trained without access to those
features during the training stage can still benefit from the additional features
available during testing. We show that, by using the proposed approach and without
any access to the extra features during the training phase, we are able to improve
the performance of the model on four real-world tabular datasets. We provide
extensive analysis of how and which variables result in the improvement over the
model that was trained without the extra feature(s). | On Incorporating new Variables during Evaluation | [
"Harsimran Bhasin",
"Soumyadeep Ghosh"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=jSebr3OJA2 | @inproceedings{
margeloiu2023gcondnet,
title={{GC}ondNet: A Novel Method for Improving Neural Networks on Small High-Dimensional Tabular Data},
author={Andrei Margeloiu and Nikola Simidjievski and Pietro Lio and Mateja Jamnik},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=jSebr3OJA2}
} | Neural network models often struggle with high-dimensional but small sample-size tabular datasets. One reason is that current weight initialisation methods assume independence between weights, which can be problematic when there are insufficient samples to estimate the model's parameters accurately. In such small data scenarios, leveraging additional structures can improve the model's performance and training stability. To address this, we propose GCondNet, a general approach to enhance neural networks by leveraging implicit structures present in tabular data. We create a graph between samples for each data dimension, and utilise Graph Neural Networks (GNNs) for extracting this implicit structure, and for conditioning the parameters of the first layer of an underlying predictor network. By creating many small graphs, GCondNet exploits the data's high-dimensionality, and thus improves the performance of an underlying predictor network. We demonstrate the effectiveness of our method on 9 real-world datasets, where GCondNet outperforms 15 standard and state-of-the-art methods. The results show that GCondNet is a versatile framework for injecting graph-regularisation into various types of neural networks, including MLPs and tabular Transformers. | GCondNet: A Novel Method for Improving Neural Networks on Small High-Dimensional Tabular Data | [
"Andrei Margeloiu",
"Nikola Simidjievski",
"Pietro Lio",
"Mateja Jamnik"
] | Workshop/TRL | 2211.06302 | [
"https://github.com/andreimargeloiu/gcondnet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=gs6yfSvwue | @inproceedings{
peng2023highperformance,
title={High-Performance Transformers for Table Structure Recognition Need Early Convolutions},
author={Anthony Peng and Seongmin Lee and Xiaojing Wang and Rajarajeswari (Raji) Balasubramaniyan and Duen Horng Chau},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=gs6yfSvwue}
} | Table structure recognition (TSR) aims to convert tabular images into a machine-readable format, where a visual encoder extracts image features and a textual decoder generates table-representing tokens. Existing approaches use classic convolutional neural network (CNN) backbones for the visual encoder and transformers for the textual decoder. However, this hybrid CNN-Transformer architecture introduces a complex visual encoder that accounts for nearly half of the total model parameters, markedly reduces both training and inference speed, and hinders the potential for self-supervised learning in TSR. In this work, we design a lightweight visual encoder for TSR without sacrificing expressive power. We discover that a convolutional stem can match classic CNN backbone performance, with a much simpler model. The convolutional stem strikes an optimal balance between two crucial factors for high-performance TSR: a higher receptive field (RF) ratio and a longer sequence length. This allows it to "see" an appropriate portion of the table and "store" the complex table structure within sufficient context length for the subsequent transformer. We conducted reproducible ablation studies and open-sourced our code at https://github.com/poloclub/tsr-convstem to enhance transparency, inspire innovations, and facilitate fair comparisons in our domain as tables are a promising modality for representation learning. | High-Performance Transformers for Table Structure Recognition Need Early Convolutions | [
"Anthony Peng",
"Seongmin Lee",
"Xiaojing Wang",
"Rajarajeswari (Raji) Balasubramaniyan",
"Duen Horng Chau"
] | Workshop/TRL | 2311.05565 | [
"https://github.com/poloclub/tsr-convstem"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=ejfKPO9aw0 | @inproceedings{
kozdoba2023unnormalized,
title={Unnormalized Density Estimation with Root Sobolev Norm Regularization},
author={Mark Kozdoba and Binyamin Perets and Shie Mannor},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=ejfKPO9aw0}
} | Density estimation is one of the central problems in non-parametric statistical learning. While parametric neural network-based methods have achieved notable success in fields such as image and text, their non-parametric counterparts lag, particularly in higher dimensions. Non-parametric methods, known for their conceptual simplicity and explicit model bias, can offer enhanced interpretability and more effective regularization control in smaller data regimes or other data modalities.
We propose a new approach to non-parametric density estimation that is
based on regularizing a Sobolev norm of the density. This method is
statistically consistent, is different from Kernel Density Estimation,
and makes the inductive bias of the model clear and interpretable.
\textbf{Our method is assessed against the comprehensive ADBench suite for tabular Anomaly Detection, ranking second among over 15 algorithms}, all of which are specifically tailored for anomaly detection in tabular data.
The contributions of this paper are as follows: 1. While there is no closed analytic form for the associated kernel, we show that one can approximate it using sampling. 2. The optimization problem needed to determine the density is non-convex, and standard gradient methods do not perform well. However, we show that with an appropriate initialization and using natural gradients, one can obtain well-performing solutions. 3. While the approach provides unnormalized densities, which prevents the use of
log-likelihood for cross-validation, we show that one can instead adapt Fisher
Divergence-based Score Matching methods for this task. | Unnormalized Density Estimation with Root Sobolev Norm Regularization | [
"Mark Kozdoba",
"Binyamin Perets",
"Shie Mannor"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=dkeZPuFmIz | @inproceedings{
sui2023selfsupervised,
title={Self-supervised Representation Learning from Random Data Projectors},
author={Yi Sui and Tongzi Wu and Jesse Cresswell and Ga Wu and George Stein and Xiao Shi Huang and Xiaochen Zhang and Maksims Volkovs},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=dkeZPuFmIz}
} | Self-supervised representation learning (SSRL) has advanced considerably by exploiting the transformation invariance assumption under artificially designed data augmentations. While augmentation-based SSRL algorithms push the boundaries of performance in computer vision and natural language processing, they are often not directly applicable to other data modalities such as tabular and time-series data. This paper presents an SSRL approach that can be applied to these data modalities because it does not rely on augmentations or masking. Specifically, we show that high-quality data representations can be learned by reconstructing random data projections. We evaluate the proposed approach on real-world applications with tabular and time-series data. We show that it outperforms multiple state-of-the-art SSRL baselines and is competitive with methods built on domain-specific knowledge. Due to its wide applicability and strong empirical results, we argue that learning from randomness is a fruitful research direction worthy of attention and further study. | Self-supervised Representation Learning from Random Data Projectors | [
"Yi Sui",
"Tongzi Wu",
"Jesse Cresswell",
"Ga Wu",
"George Stein",
"Xiao Shi Huang",
"Xiaochen Zhang",
"Maksims Volkovs"
] | Workshop/TRL | 2310.07756 | [
"https://github.com/layer6ai-labs/lfr"
] | https://huggingface.co/papers/2310.07756 | 1 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | oral |
null | https://openreview.net/forum?id=dQLDxIPsU4 | @inproceedings{
li2023treeregularized,
title={Tree-Regularized Tabular Embeddings},
author={Xuan Li and Yun Wang and Bo Li},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=dQLDxIPsU4}
} | Tabular neural networks (NNs) have attracted remarkable attention, and their recent advances have gradually narrowed the performance gap with respect to tree-based models on many public datasets. While mainstream approaches focus on calibrating NNs to fit tabular data, we emphasize the importance of homogeneous embeddings and alternately concentrate on regularizing tabular inputs through supervised pretraining. Specifically, we extend a recent work coined as DeepTLF, and utilize the structure of pretrained tree ensembles to transform raw variables into a single vector (T2V), or an array of tokens (T2T). Without loss of space efficiency, these binarized embeddings can be directly consumed by canonical tabular NNs with fully-connected or attention-based building blocks. Through quantitative experiments on 88 OpenML datasets with a binary classification task, we validated that the proposed tree-regularized representation not only tapers the difference with respect to tree-based models, but also achieves on-par or better performance when compared with advanced NN models. Most importantly, it possesses better robustness and can be easily scaled and generalized as a standalone encoder for the tabular modality. | Tree-Regularized Tabular Embeddings | [
"Xuan Li",
"Yun Wang",
"Bo Li"
] | Workshop/TRL | 2403.00963 | [
"https://github.com/milanlx/tree-regularized-embedding"
] | https://huggingface.co/papers/2403.00963 | 0 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=btK3lk5puP | @inproceedings{
lee2023binning,
title={Binning as a Pretext Task: Improving Self-Supervised Learning in Tabular Domains},
author={Kyungeun Lee and Ye Seul Sim and Hyeseung Cho and Suhee Yoon and Sanghyu Yoon and Woohyung Lim},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=btK3lk5puP}
} | The ability of deep networks to learn superior representations hinges on leveraging the proper inductive biases, considering the inherent properties of datasets. In tabular domains, it is critical to effectively handle heterogeneous features (both categorical and numerical) in a unified manner and to grasp irregular functions like piecewise constant functions.
To address the challenges in the self-supervised learning framework, we propose a novel pretext task based on the classical binning method. The idea is straightforward: reconstructing the bin indices (either orders or classes) rather than the original values. This pretext task provides the encoder with an inductive bias to capture the irregular dependencies, mapping from continuous inputs to discretized bins, and mitigates the feature heterogeneity by setting all features to have category-type targets.
Our empirical investigations ascertain several advantages of binning: compatibility with encoder architecture and additional modifications, standardizing all features into equal sets, grouping similar values within a feature, and providing ordering information. Comprehensive evaluations across diverse tabular datasets corroborate that our method consistently improves tabular representation learning performance for a wide range of downstream tasks. | Binning as a Pretext Task: Improving Self-Supervised Learning in Tabular Domains | [
"Kyungeun Lee",
"Ye Seul Sim",
"Hyeseung Cho",
"Suhee Yoon",
"Sanghyu Yoon",
"Woohyung Lim"
] | Workshop/TRL | 2405.07414 | [
"https://github.com/kyungeun-lee/tabularbinning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=b4GEmjsHAB | @inproceedings{
zahradn{\'\i}k2023a,
title={A Deep Learning Blueprint for Relational Databases},
author={Luk{\'a}{\v{s}} Zahradn{\'\i}k and Jan Neumann and Gustav {\v{S}}{\'\i}r},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=b4GEmjsHAB}
} | We introduce a modular neural message-passing scheme that closely follows the formal model of relational databases, effectively enabling end-to-end deep learning directly from database storages. We experiment with several instantiations of the scheme, including notably the use of cross-attention modules to capture the referential constraints of the relational model. We address the issues of efficient learning data representation and loading, salient to the database setting, and compare against representative models from a number of related fields, demonstrating favorable initial results. | A Deep Learning Blueprint for Relational Databases | [
"Lukáš Zahradník",
"Jan Neumann",
"Gustav Šír"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=b0OhN0ii36 | @inproceedings{
feuer2023scaling,
title={Scaling Tab{PFN}: Sketching and Feature Selection for Tabular Prior-Data Fitted Networks},
author={Benjamin Feuer and Niv Cohen and Chinmay Hegde},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=b0OhN0ii36}
} | Tabular classification has traditionally relied on supervised algorithms, which estimate the parameters of a prediction model using its training data. Recently, Prior-Data Fitted Networks such as TabPFN have successfully learned to classify tabular data in-context: the model parameters are designed to classify new samples based on labelled training samples given after the model training. While such models show great promise, their applicability to real-world data remains limited due to the computational scale needed. We conduct an initial investigation of sketching and feature-selection methods for TabPFN, and note certain key differences between it and conventionally fitted tabular models. | Scaling TabPFN: Sketching and Feature Selection for Tabular Prior-Data Fitted Networks | [
"Benjamin Feuer",
"Niv Cohen",
"Chinmay Hegde"
] | Workshop/TRL | 2311.10609 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=WXNmnmpRBJ | @inproceedings{
grinsztajn2023modeling,
title={Modeling string entries for tabular data prediction: do we need big large language models?},
author={Leo Grinsztajn and Myung Jun Kim and Edouard Oyallon and Gael Varoquaux},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=WXNmnmpRBJ}
} | Tabular data are often characterized by numerical and categorical features. But these features co-exist with features made of text entries, such as names or descriptions. Here, we investigate whether language models can extract information from these text entries. Studying 19 datasets and varying training sizes, we find that using language model to encode text features improve predictions upon no encodings and character-level approaches based on substrings. Furthermore, we find that larger, more advanced language models translate to more significant improvements. | Modeling string entries for tabular data prediction: do we need big large language models? | [
"Leo Grinsztajn",
"Myung Jun Kim",
"Edouard Oyallon",
"Gael Varoquaux"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=VRBhaU8IDz | @inproceedings{
bonet2023hyperfast,
title={HyperFast: Instant Classification for Tabular Data},
author={David Bonet and Daniel Mas Montserrat and Xavier Gir{\'o}-i-Nieto and Alexander Ioannidis},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=VRBhaU8IDz}
} | Training deep learning models and performing hyperparameter tuning can be computationally demanding and time-consuming. Meanwhile, traditional machine learning methods like gradient-boosting algorithms remain the preferred choice for most tabular data applications, while neural network alternatives require extensive hyperparameter tuning or work only in toy datasets under limited settings. In this paper, we introduce HyperFast, a meta-trained hypernetwork designed for instant classification of tabular data in a single forward pass. HyperFast generates a task-specific neural network tailored to an unseen dataset that can be directly used for classification inference, removing the need for training a model. We report extensive experiments with OpenML and genomic data, comparing HyperFast to competing tabular data neural networks, traditional ML methods, AutoML systems, and boosting machines. HyperFast shows highly competitive results, while being significantly faster. Additionally, our approach demonstrates robust adaptability across a variety of classification tasks with little to no fine-tuning, positioning HyperFast as a strong solution for numerous applications and rapid model deployment. HyperFast introduces a promising paradigm for fast classification, with the potential to substantially decrease the computational burden of deep learning. Our code, which offers a scikit-learn-like interface, along with the trained HyperFast model, can be found at https://github.com/AI-sandbox/HyperFast. | HyperFast: Instant Classification for Tabular Data | [
"David Bonet",
"Daniel Mas Montserrat",
"Xavier Giró-i-Nieto",
"Alexander Ioannidis"
] | Workshop/TRL | 2402.14335 | [
"https://github.com/ai-sandbox/hyperfast"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=V4Pa9B8zRk | @inproceedings{
sch{\"a}fl2023hopular,
title={Hopular: Modern Hopfield Networks for Tabular Data},
author={Bernhard Sch{\"a}fl and Lukas Gruber and Angela Bitto-Nemling and Sepp Hochreiter},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=V4Pa9B8zRk}
} | While Deep Learning excels in structured data as encountered in vision and natural language processing, it failed to meet its expectations on tabular data. For tabular data, Support Vector Machines (SVMs), Random Forests, and Gradient Boosting are the best performing techniques with Gradient Boosting in the lead. Recently, we saw a surge of Deep Learning methods that were tailored to tabular data but still underperform compared to Gradient Boosting on small-sized datasets. We suggest "Hopular", a novel Deep Learning architecture for medium- and small-sized datasets, where each layer is equipped with continuous modern Hopfield networks. The modern Hopfield networks use stored data to identify feature-feature, feature-target, and sample-sample dependencies. Hopular's novelty is that every layer can directly access the original input as well as the whole training set via stored data in the Hopfield networks. Therefore, Hopular can step-wise update its current model and the resulting prediction at every layer like standard iterative learning algorithms. In experiments on small-sized tabular datasets with less than 1,000 samples, Hopular surpasses Gradient Boosting, Random Forests, SVMs, and in particular several Deep Learning methods. In experiments on medium-sized tabular data with about 10,000 samples, Hopular outperforms XGBoost, CatBoost, LightGBM and a state-of-the art Deep Learning method designed for tabular data. Thus, Hopular is a strong alternative to these methods on tabular data. | Hopular: Modern Hopfield Networks for Tabular Data | [
"Bernhard Schäfl",
"Lukas Gruber",
"Angela Bitto-Nemling",
"Sepp Hochreiter"
] | Workshop/TRL | 2206.00664 | [
"https://github.com/ml-jku/hopular"
] | https://huggingface.co/papers/2206.00664 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=UkP1BSm2tt | @inproceedings{
ye2023trainingfree,
title={Training-Free Generalization on Heterogeneous Tabular Data via Meta-Representation},
author={Han-Jia Ye and Qile Zhou and De-Chuan Zhan},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=UkP1BSm2tt}
} | Tabular data is prevalent across various machine learning domains. Yet, the inherent heterogeneities in attribute and class spaces across different tabular datasets hinder the effective sharing of knowledge, limiting a tabular model to benefit from other datasets. In this paper, we propose Tabular data Pre-Training via Meta-representation (TabPTM), which allows one tabular model pre-training on a set of heterogeneous datasets. Then, this pre-trained model can be directly applied to unseen datasets that have diverse attributes and classes without additional training. Specifically, TabPTM represents an instance through its distance to a fixed number of prototypes, thereby standardizing heterogeneous tabular datasets. A deep neural network is then trained to associate these meta-representations with dataset-specific classification confidences, endowing TabPTM with the ability of training-free generalization. Experiments validate that TabPTM achieves promising performance in new datasets, even under few-shot scenarios. | Training-Free Generalization on Heterogeneous Tabular Data via Meta-Representation | [
"Han-Jia Ye",
"Qile Zhou",
"De-Chuan Zhan"
] | Workshop/TRL | 2311.00055 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=TSkiaPP1sq | @inproceedings{
zeighami2023neurodb,
title={Neuro{DB}: Efficient, Privacy-Preserving and Robust Query Answering with Neural Networks},
author={Sepanta Zeighami and Cyrus Shahabi},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=TSkiaPP1sq}
} | The Neural Database framework, or NeuroDB for short, is a novel means of query answering using neural networks. It utilizes neural networks as a means of data storage by training neural networks to directly answer queries. That is, neural networks are trained to take queries as input and output query answer estimates. In doing so, relational tables are represented by neural network weights and are queried through a model forward pass. NeuroDB has shown significant practical advantages in (1) approximate query processing, (2) privacy-preserving query answering, and (3) querying incomplete datasets. The success of the NeuroDB framework can be attributed to the approach learning patterns present in the query answers, utilized to learn a compact representation of the dataset with respect to the queries. This allows learning small neural networks that accurately and efficiently represent query answers. Meanwhile, learning such patterns allows for improving the accuracy in the presence of error, with such robustness to noise allowing for improved accuracy in the case of private query answering and query answering on incomplete datasets. This paper presents an overview of the NeuroDB framework and its applications to the three aforementioned scenarios. | NeuroDB: Efficient, Privacy-Preserving and Robust Query Answering with Neural Networks | [
"Sepanta Zeighami",
"Cyrus Shahabi"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=R8VFPAfOcN | @inproceedings{
saeed2023a,
title={A {DB}-First approach to query factual information in {LLM}s},
author={Mohammed Saeed and Nicola De Cao and Paolo Papotti},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=R8VFPAfOcN}
} | In many use-cases, information is stored in text but not available in structured data. However, extracting data from natural language (NL) text to precisely fit a schema, and thus enable querying, is a challenging task. With the rise of pre-trained Large Language Models (LLMs), there is now an effective solution to store and use information extracted from massive corpora of text documents. Thus, we envision the use of SQL queries to cover a broad range of data that is not captured by traditional databases (DBs) by tapping the information in LLMs. This ability enables querying the factual information in LLMs with the SQL interface, which is more precise than NL prompts. We present a traditional DB architecture using physical operators for querying the underlying LLM. The key idea is to execute some operators of the query plan with prompts that retrieve data from the LLM. For a large class of SQL queries, querying LLMs returns well structured relations, with encouraging qualitative results. | A DB-First approach to query factual information in LLMs | [
"Mohammed Saeed",
"Nicola De Cao",
"Paolo Papotti"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=PxEY7pBb6F | @inproceedings{
cherepanova2023a,
title={A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning},
author={Valeriia Cherepanova and Roman Levin and Gowthami Somepalli and Jonas Geiping and C. Bruss and Andrew Wilson and Tom Goldstein and Micah Goldblum},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=PxEY7pBb6F}
} | Academic tabular benchmarks often contain small sets of curated features. In contrast, data scientists typically collect as many features as possible into their datasets, and even engineer new features from existing ones. To prevent over-fitting in subsequent downstream modeling, practitioners commonly use automated feature selection methods that identify a reduced subset of informative features. Existing benchmarks for tabular feature selection consider classical downstream models, toy synthetic datasets, or do not evaluate feature selectors on the basis of downstream performance. We construct a challenging feature selection benchmark evaluated on downstream neural networks including transformers, using real datasets and multiple methods for generating extraneous features. We also propose an input-gradient-based analogue of LASSO for neural networks that outperforms classical feature selection methods on challenging problems such as selecting from corrupted or second-order features. | A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning | [
"Valeriia Cherepanova",
"Roman Levin",
"Gowthami Somepalli",
"Jonas Geiping",
"C. Bruss",
"Andrew Wilson",
"Tom Goldstein",
"Micah Goldblum"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=OFV0uNeZ7R | @inproceedings{
zhu2023incorporating,
title={Incorporating {LLM} Priors into Tabular Learners},
author={Max Zhu and Sini{\v{s}}a Stanivuk and Andrija Petrovic and Mladen Nikolic and Pietro Lio},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=OFV0uNeZ7R}
} | We present a method to integrate Large Language Models (LLMs) and traditional tabular data classification techniques, addressing LLMs’ challenges like data serialization sensitivity and biases. We introduce two strategies utilizing LLMs for ranking categorical variables and generating priors on correlations between continuous variables and targets, enhancing performance in few-shot scenarios. We focus on Logistic Regression, introducing MonotonicLR that employs a non-linear monotonic function for mapping ordinals to cardinals while preserving LLM-determined orders. Validation against baseline models reveals the superior performance of our approach, especially in low-data scenarios, while remaining interpretable. | Incorporating LLM Priors into Tabular Learners | [
"Max Zhu",
"Siniša Stanivuk",
"Andrija Petrovic",
"Mladen Nikolic",
"Pietro Lio"
] | Workshop/TRL | 2311.11628 | [
""
] | https://huggingface.co/papers/2311.11628 | 1 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=MzSNAO2ZlW | @inproceedings{
kayali2023chorus,
title={{CHORUS}: Foundation Models for Unified Data Discovery and Exploration},
author={Moe Kayali and Anton Lykov and Ilias Fountalis and Nikolaos Vasiloglou and Dan Olteanu and Dan Suciu},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=MzSNAO2ZlW}
} | We apply foundation models to data discovery and exploration tasks. Foundation models are large language models (LLMs) that show promising performance on a range of diverse tasks unrelated to their training. We show that these models are highly applicable to the data discovery and data exploration domain. When carefully used, they have superior capability on three representative tasks: table-class detection, column-type annotation and join-column prediction. On all three tasks, we show that a foundation-model-based approach outperforms the task-specific models and thus the state of the art. Further, our approach often surpasses human-expert task performance. We investigate the fundamental characteristics of this approach including generalizability to several foundation models and dataset contamination. All in all, this suggests a future direction in which disparate data management tasks can be unified under foundation models. | CHORUS: Foundation Models for Unified Data Discovery and Exploration | [
"Moe Kayali",
"Anton Lykov",
"Ilias Fountalis",
"Nikolaos Vasiloglou",
"Dan Olteanu",
"Dan Suciu"
] | Workshop/TRL | 2306.09610 | [
"https://github.com/mkyl/chorus"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=Ld5UCpiT07 | @inproceedings{
singha2023tabular,
title={Tabular Representation, Noisy Operators, and Impacts on Table Structure Understanding Tasks in {LLM}s},
author={Ananya Singha and Jos{\'e} Cambronero and Sumit Gulwani and Vu Le and Chris Parnin},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=Ld5UCpiT07}
} | Large language models (LLMs) are increasingly applied for tabular tasks using
in-context learning. The prompt representation for a table may play a role in the
LLM's ability to process the table. Inspired by prior work, we generate a collection
of self-supervised structural tasks (e.g. navigate to a cell and row; transpose the
table) and evaluate the performance differences when using 8 formats. In contrast
to past work, we introduce 8 noise operations inspired by real-world messy data
and adversarial inputs, and show that such operations can impact LLM performance
across formats for different structural understanding tasks. | Tabular Representation, Noisy Operators, and Impacts on Table Structure Understanding Tasks in LLMs | [
"Ananya Singha",
"José Cambronero",
"Sumit Gulwani",
"Vu Le",
"Chris Parnin"
] | Workshop/TRL | 2310.10358 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=JIrTIMI5Yd | @inproceedings{
cong2023introducing,
title={Introducing the Observatory Library for End-to-End Table Embedding Inference},
author={Tianji Cong and Zhenjie Sun and Paul Groth and H. Jagadish and Madelon Hulsebos},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=JIrTIMI5Yd}
} | Transformer-based table embedding models have become prevalent for a wide range of applications involving tabular data. Such models require the serialization of a table as a sequence of tokens for model ingestion and embedding inference. Different downstream tasks require different kinds or levels of embeddings such as column or entity embeddings. Hence, various serialization and encoding methods have been proposed and implemented. Surprisingly, this conceptually simple process of creating table embeddings is not straightforward in practice for a few reasons: 1) a model may not natively expose a certain level of embedding; 2) choosing the correct table serialization and input preprocessing methods is difficult because there are many available; and 3) tables with a massive number of rows and columns cannot fit the input limit of models. In this work, we extend Observatory, a framework for characterizing embeddings of relational tables, by streamlining end-to-end inference of table embeddings, which eases the use of table embedding models in practice. The codebase of Observatory is publicly available at https://github.com/superctj/observatory. | Introducing the Observatory Library for End-to-End Table Embedding Inference | [
"Tianji Cong",
"Zhenjie Sun",
"Paul Groth",
"H. Jagadish",
"Madelon Hulsebos"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=IbiiNw4oRj | @inproceedings{
schambach2023scaling,
title={Scaling Experiments in Self-Supervised Cross-Table Representation Learning},
author={Maximilian Schambach and Dominique Paul and Johannes Otterbach},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=IbiiNw4oRj}
} | To analyze the scaling potential of deep tabular representation learning models, we introduce a novel Transformer-based architecture specifically tailored to tabular data and cross-table representation learning by utilizing table-specific tokenizers and a shared Transformer backbone.
Our training approach encompasses both single-table and cross-table models, trained via missing value imputation through a self-supervised masked cell recovery objective.
To understand the scaling behavior of our method, we train models of varying sizes, ranging from approximately $10^4$ to $10^7$ parameters.
These models are trained on a carefully curated pretraining dataset, consisting of 135 M training tokens sourced from 76 diverse datasets.
We assess the scaling of our architecture in both single-table and cross-table pretraining setups by evaluating the pretrained models using linear probing on a curated set of benchmark datasets and comparing the results with conventional baselines. | Scaling Experiments in Self-Supervised Cross-Table Representation Learning | [
"Maximilian Schambach",
"Dominique Paul",
"Johannes Otterbach"
] | Workshop/TRL | 2309.17339 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=HtdZSf1ObU | @inproceedings{
jin2023benchmarking,
title={Benchmarking Tabular Representation Models in Transfer Learning Settings},
author={Qixuan Jin and Talip Ucar},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=HtdZSf1ObU}
} | Deep learning has revolutionized the transfer of knowledge between similar tasks in data modalities such as images, text, and graphs. However, the same level of success has not been attained for tabular data. This disparity can be attributed to the inherent absence of structural characteristics, such as spatial and temporal correlations, within common tabular datasets. Moreover, classic methods such as logistic regression and decision trees have been shown to perform competitively with deep learning methods. In this work, we benchmark the classic and deep learning methods specifically within the setting of transfer learning. We offer new benchmarking results for the EHR phenotyping task in the MetaMIMIC dataset and propose a new transfer learning setting of transferring mortality prediction from common to rare cancers with The Cancer Genome Atlas (TCGA). | Benchmarking Tabular Representation Models in Transfer Learning Settings | [
"Qixuan Jin",
"Talip Ucar"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=HK3MUPgFg4 | @inproceedings{
breejen2023exploring,
title={Exploring the Retrieval Mechanism for Tabular Deep Learning},
author={Felix den Breejen and Sangmin Bae and Stephen Cha and Tae-Young Kim and Seoung Hyun Koh and Se-Young Yun},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=HK3MUPgFg4}
} | While interest in tabular deep learning has grown significantly, conventional tree-based models still outperform deep learning methods. To narrow this performance gap, we explore the innovative retrieval mechanism, a methodology that allows neural networks to refer to other data points while making predictions. Our experiments reveal that retrieval-based training, especially when fine-tuning the pretrained TabPFN model, notably surpasses existing methods. Moreover, extensive pretraining plays a crucial role in enhancing the performance of the model. These insights imply that blending the retrieval mechanism with pretraining and transfer learning schemes offers considerable potential for advancing the field of tabular deep learning. | Fine-Tuning the Retrieval Mechanism for Tabular Deep Learning | [
"Felix den Breejen",
"Sangmin Bae",
"Stephen Cha",
"Tae-Young Kim",
"Seoung Hyun Koh",
"Se-Young Yun"
] | Workshop/TRL | 2311.07343 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=H0gENXL7F2 | @inproceedings{
ness2023in,
title={In Defense of Zero Imputation for Tabular Deep Learning},
author={Mike Van Ness and Madeleine Udell},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=H0gENXL7F2}
} | Missing values are a common problem in many supervised learning contexts. While a wealth of literature exists related to missing value imputation, less literature has focused on the impact of imputation on downstream supervised learning. Recently, impute-then-predict neural networks have been proposed as a powerful solution to this problem, allowing for joint optimization of imputations and predictions. In this paper, we illustrate a somewhat surprising result: multi-layer perceptrons (MLPs) paired with zero imputation perform as well as more powerful deep impute-then-predict models on real-world data. To support this finding, we analyze the results of various deep impute-then-predict models to better understand why they fail to outperform zero imputation. Our analysis sheds light onto the difficulties of imputation in real-world contexts, and highlights the utility of zero imputation for tabular deep learning. | In Defense of Zero Imputation for Tabular Deep Learning | [
"Mike Van Ness",
"Madeleine Udell"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FflKTuIRTD | @inproceedings{
huang2023data,
title={Data Ambiguity Strikes Back: How Documentation Improves {GPT}'s Text-to-{SQL}},
author={Zezhou Huang and Pavan Kalyan Damalapati and Eugene Wu},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=FflKTuIRTD}
} | Text-to-SQL allows experts to use databases without in-depth knowledge of them. However, real-world tasks have both query and data ambiguities. Most works on Text-to-SQL focused on query ambiguities and designed chat interfaces for experts to provide clarifications.
In contrast, the data management community has long studied data ambiguities, but mainly addresses error detection and correction,
rather than documenting them for disambiguation in data tasks. This work delves into these data ambiguities in real-world datasets.
We have identified prevalent data ambiguities of value consistency, data coverage, and data granularity that affect tasks. We examine how documentation, originally made to help humans to disambiguate data, can help GPT-4 with Text-to-SQL tasks. By offering documentation on these, we found GPT-4's performance improved by $28.9$%. | Data Ambiguity Strikes Back: How Documentation Improves GPT's Text-to-SQL | [
"Zezhou Huang",
"Pavan Kalyan Damalapati",
"Eugene Wu"
] | Workshop/TRL | 2310.18742 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=EocsZtcA7P | @inproceedings{
yak2023textttingestables,
title={{\texttt{IngesTables}}: Scalable and Efficient Training of {LLM}-Enabled Tabular Foundation Models},
author={Scott Yak and Yihe Dong and Javier Gonzalvo and Sercan Arik},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=EocsZtcA7P}
} | There is a massive amount of tabular data that can be taken advantage of via `foundation models' to improve prediction performance for downstream tabular prediction tasks. However, numerous challenges constitute bottlenecks in building tabular foundation models, including learning semantic relevance between tables and features, mismatched schemes, arbitrarily high cardinality for categorical values, and scalability to many tables, rows and features. We propose IngesTables, a novel canonical tabular foundation model building framework, designed to address the aforementioned challenges.
IngesTables employs LLMs to encode representations of table/feature semantics and their relationships, which are then modeled via an attention-based tabular architecture. Unlike other LLM-based approaches, IngesTables is much cheaper to train and faster at inference, because of how LLM-generated embeddings are defined and cached.
We show that IngesTables demonstrates significant improvements over commonly-used models like XGBoost on clinical trial datasets in standard supervised learning settings, and is competitive with tabular prediction models that are specialized for clinical trial datasets without incurring LLM-level cost and latency. | IngesTables: Scalable and Efficient Training of LLM-Enabled Tabular Foundation Models | [
"Scott Yak",
"Yihe Dong",
"Javier Gonzalvo",
"Sercan Arik"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=6Kb3pE9nWQ | @inproceedings{
huh2023poolsearchdemonstrate,
title={Pool-Search-Demonstrate: Improving Data-wrangling {LLM}s via better in-context examples},
author={Joon Suk Huh and Changho Shin and Elina Choi},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=6Kb3pE9nWQ}
} | Data-wrangling is a process that transforms raw data for further analysis and for use in downstream tasks. Recently, it has been shown that foundation models can be successfully used for data-wrangling tasks (Narayan et al., 2022). An important aspect of data wrangling with LMs is to properly construct prompts for the given task. Within these prompts, a crucial component is the choice of in-context examples. In the previous study of Narayan et al., demonstration examples are chosen manually by the authors, which may not be scalable to new datasets. In this work, we propose a simple demonstration strategy that individualizes demonstration examples for each input by selecting them from a pool based on their distance in the embedding space. Additionally, we propose a postprocessing method that exploits the embedding of labels under a closed-world assumption. Empirically, our embedding-based example retrieval and postprocessing improve foundation models' performance by up to 84\% over randomly selected examples and 49\% over manually selected examples in the demonstration. Ablation tests reveal the effect of class embeddings, and various factors in demonstration such as quantity, quality, and diversity. | Pool-Search-Demonstrate: Improving Data-wrangling LLMs via better in-context examples | [
"Joon Suk Huh",
"Changho Shin",
"Elina Choi"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=5sOZNkkKh3 | @inproceedings{
chang2023how,
title={How to Prompt {LLM}s for Text-to-{SQL}: A Study in Zero-shot, Single-domain, and Cross-domain Settings},
author={Shuaichen Chang and Eric Fosler-Lussier},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=5sOZNkkKh3}
} | Large language models (LLMs) with in-context learning have demonstrated remarkable capability in the text-to-SQL task. Previous research has prompted LLMs with various demonstration-retrieval strategies and intermediate reasoning steps to enhance the performance of LLMs. However, those works often employ varied strategies when constructing the prompt text for text-to-SQL inputs, such as databases and demonstration examples. This leads to a lack of comparability in both the prompt constructions and their primary contributions. Furthermore, selecting an effective prompt construction has emerged as a persistent problem for future research. To address this limitation, we comprehensively investigate the impact of prompt constructions across various settings and provide insights into prompt constructions for future text-to-SQL studies. | How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain, and Cross-domain Settings | [
"Shuaichen Chang",
"Eric Fosler-Lussier"
] | Workshop/TRL | 2305.11853 | [
"https://github.com/shuaichenchang/prompt-text-to-sql"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=4MkkNsAEmO | @inproceedings{
ma2023tabpfgen,
title={Tab{PFG}en {\textendash} Tabular Data Generation with Tab{PFN}},
author={Junwei Ma and Apoorv Dankar and George Stein and Guangwei Yu and Anthony Caterini},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=4MkkNsAEmO}
} | Advances in deep generative modelling have not translated well to tabular data. We argue that this is caused by a mismatch in structure between popular generative models and _discriminative_ models of tabular data. We thus devise a technique to turn TabPFN -- a highly performant transformer initially designed for in-context discriminative tabular tasks -- into an energy-based generative model, which we dub _TabPFGen_. This novel framework leverages the pre-trained TabPFN as part of the energy function and does not require any additional training or hyperparameter tuning, thus inheriting TabPFN's in-context learning capability. We can sample from TabPFGen analogously to other energy-based models. We demonstrate strong results on standard generative modelling tasks, including data augmentation, class-balancing, and imputation, unlocking a new frontier of tabular data generation. | TabPFGen – Tabular Data Generation with TabPFN | [
"Junwei Ma",
"Apoorv Dankar",
"George Stein",
"Guangwei Yu",
"Anthony Caterini"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=3gBqMkELhZ | @inproceedings{
wu2023multitaskguided,
title={Multitask-Guided Self-Supervised Tabular Learning for Patient-Specific Survival Prediction},
author={You Wu and Omid Bazgir and Yongju Lee and Tommaso Biancalani and James Lu and Ehsan Hajiramezanali},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=3gBqMkELhZ}
} | Survival prediction, central to the analysis of clinical trials, has the potential to be transformed by the availability of RNA-seq data as it reveals the underlying molecular and genetic mechanisms for disease and outcomes. However, the amount of RNA-seq samples available for understudied or rare diseases is often limited. To address this, leveraging data across different cancer types can be a viable solution, necessitating the application of self-supervised learning techniques. Yet, this wealth of data often comes in a tabular format without a known structure, hindering the development of a generally effective augmentation method for survival prediction. While traditional methods have been constrained by a one cancer-one model philosophy or have relied solely on a single modality, our approach, Guided-STab, on the contrary, offers a comprehensive approach through pretraining on all available RNA-seq data from various cancer types while guiding the representation by incorporating sparse clinical features as auxiliary tasks. With a multitask-guided self-supervised representation learning framework, we maximize the potential of vast unlabeled datasets from various cancer types, leading to genomic-driven survival predictions. These auxiliary clinical tasks then guide the learned representations to enhance critical survival factors. Extensive experiments reinforce the promise of our approach, as Guided-STab consistently outperforms established benchmarks on TCGA dataset. | Multitask-Guided Self-Supervised Tabular Learning for Patient-Specific Survival Prediction | [
"You Wu",
"Omid Bazgir",
"Yongju Lee",
"Tommaso Biancalani",
"James Lu",
"Ehsan Hajiramezanali"
] | Workshop/TRL | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3L2u0unIHd | @inproceedings{
sarkar2023testing,
title={Testing the Limits of Unified Sequence to Sequence {LLM} Pretraining on Diverse Table Data Tasks},
author={Soumajyoti Sarkar and Leonard Lausen},
booktitle={NeurIPS 2023 Second Table Representation Learning Workshop},
year={2023},
url={https://openreview.net/forum?id=3L2u0unIHd}
} | Tables stored in databases and tables which are present in web pages and articles account for a large part of semi-structured data that is available on the internet. This motivates the need to develop a modeling approach with large language models (LLMs) which can be used to solve diverse table tasks such as semantic parsing, question answering as well as classification problems. Traditionally, there existed separate sequence to sequence models specialized for each table task individually. This raises the question of how far we can go in building a unified model that works well on some table tasks without significant degradation on others. To that end, we attempt to create a shared modeling approach in the pretraining stage with encoder-decoder style LLMs that can cater to diverse tasks. We evaluate our approach that continually pretrains and finetunes different model families of T5 with data from tables and surrounding context, on these downstream tasks at different model scales. Through multiple ablation studies, we observe that our pretraining with self-supervised objectives can significantly boost the performance of the models on these tasks. Our work is the first attempt at studying the advantages of a unified approach to table-specific pretraining when scaled from 770M to 11B sequence to sequence models while also comparing the instruction finetuned variants of the models. | Testing the Limits of Unified Sequence to Sequence LLM Pretraining on Diverse Table Data Tasks | [
"Soumajyoti Sarkar",
"Leonard Lausen"
] | Workshop/TRL | 2310.00789 | [
""
] | https://huggingface.co/papers/2310.00789 | 0 | 0 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=yLps2oiTu7 | @inproceedings{
zalles2023network,
title={Network Regression with Wasserstein Distances},
author={Alexander Zalles and Kai M. Hung and Ann E. Finneran and Lydia Beaudrot and Cesar Uribe},
booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning},
year={2023},
url={https://openreview.net/forum?id=yLps2oiTu7}
} | We study the problem of network regression, where the graph topology is inferred for unseen predictor values. We build upon recent developments on generalized regression models on metric spaces based on Fr\'echet means and propose a network regression method using the Wasserstein metric. We show that when representing graphs as multivariate Gaussian distributions, the regression problem in the Wasserstein metric becomes a weighted Wasserstein barycenter problem. In the case of non-negative weights, such a weighted barycenter can be efficiently computed using fixed point iterations. Numerical results show that the proposed approach improves existing procedures by accurately accounting for graph size, randomness, and sparsity in synthetic experiments. Additionally, real-world experiments utilizing the proposed approach result in larger metrics of model fitness, cementing improved prediction capabilities in practice. | Network Regression with Wasserstein Distances | [
"Alexander Zalles",
"Kai M. Hung",
"Ann E. Finneran",
"Lydia Beaudrot",
"Cesar Uribe"
] | Workshop/OTML | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=xRnM2khkx0 | @inproceedings{
viallard2023learning,
title={Learning via Wasserstein-Based High Probability Generalisation Bounds},
author={Paul Viallard and Maxime Haddouche and Umut Simsekli and Benjamin Guedj},
booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning},
year={2023},
url={https://openreview.net/forum?id=xRnM2khkx0}
} | Minimising upper bounds on the population risk or the generalisation gap has been widely used in structural risk minimisation (SRM) -- this is in particular at the core of PAC-Bayes learning. Despite its successes and unfailing surge of interest in recent years, a limitation of the PAC-Bayes framework is that most bounds involve a Kullback-Leibler (KL) divergence term (or its variations), which might exhibit erratic behavior and fail to capture the underlying geometric structure of the learning problem -- hence restricting its use in practical applications. As a remedy, recent studies have attempted to replace the KL divergence in the PAC-Bayes bounds with the Wasserstein distance. Even though these bounds alleviated the aforementioned issues to a certain extent, they either hold in expectation, are for bounded losses, or are nontrivial to minimize in an SRM framework. In this work, we contribute to this line of research and prove novel Wasserstein distance-based PAC-Bayes generalisation bounds for both batch learning with independent and identically distributed (i.i.d.) data, and online learning with potentially non-i.i.d. data. Contrary to previous art, our bounds are stronger in the sense that (i) they hold with high probability, (ii) they apply to unbounded (potentially heavy-tailed) losses, and (iii) they lead to optimizable training objectives that can be used in SRM. As a result we derive novel Wasserstein-based PAC-Bayes learning algorithms and we illustrate their empirical advantage on a variety of experiments. | Learning via Wasserstein-Based High Probability Generalisation Bounds | [
"Paul Viallard",
"Maxime Haddouche",
"Umut Simsekli",
"Benjamin Guedj"
] | Workshop/OTML | 2306.04375 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=uj7TZNQclH | @inproceedings{
sun2023improved,
title={Improved Stein Variational Gradient Descent with Importance Weights},
author={Lukang Sun and Peter Richt{\'a}rik},
booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning},
year={2023},
url={https://openreview.net/forum?id=uj7TZNQclH}
} | Stein Variational Gradient Descent~(\algname{SVGD}) is a popular sampling algorithm used in various machine learning tasks. It is well known that \algname{SVGD} arises from a discretization of the kernelized gradient flow of the Kullback-Leibler divergence $\KL\left(\cdot\mid\pi\right)$, where $\pi$ is the target distribution. In this work, we propose to enhance \algname{SVGD} via the introduction of {\em importance weights}, which leads to a new method for which we coin the name \algname{$\beta$-SVGD}. In the continuous time and infinite particles regime, the time for this flow to converge to the equilibrium distribution $\pi$, quantified by the Stein Fisher information, depends on $\rho_0$ and $\pi$ very weakly. This is very different from the kernelized gradient flow of Kullback-Leibler divergence, whose time complexity depends on $\KL\left(\rho_0\mid\pi\right)$. Under certain assumptions, we provide a descent lemma for the population limit \algname{$\beta$-SVGD}, which covers the descent lemma for the population limit \algname{SVGD} when $\beta\to 0$. We also illustrate the advantages of \algname{$\beta$-SVGD} over \algname{SVGD} by experiments. | Improved Stein Variational Gradient Descent with Importance Weights | [
"Lukang Sun",
"Peter Richtárik"
] | Workshop/OTML | 2210.00462 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=rlfRnwjrG2 | @inproceedings{
serrurier2023on,
title={On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective},
author={Mathieu Serrurier and Franck Mamalet and Thomas FEL and Louis B{\'e}thune and Thibaut Boissin},
booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning},
year={2023},
url={https://openreview.net/forum?id=rlfRnwjrG2}
} | Input gradients have a pivotal role in a variety of applications, including adversarial attack algorithms for evaluating model robustness, explainable AI techniques for generating Saliency Maps, and counterfactual explanations.
However, Saliency Maps generated by traditional neural networks are often noisy and provide limited insights.
In this paper, we demonstrate that, on the contrary, the Saliency Maps of 1-Lipschitz neural networks, learnt with the dual loss of an optimal transportation problem, exhibit desirable XAI properties:
They are highly concentrated on the essential parts of the image with low noise, significantly outperforming state-of-the-art explanation approaches across various models and metrics.
We also prove that these maps align unprecedentedly well with human explanations on ImageNet.
To explain the particularly beneficial properties of the Saliency Map for such models, we prove this gradient encodes both the direction of the transportation plan and the direction towards the nearest adversarial attack. Following the gradient down to the decision boundary is no longer considered an adversarial attack, but rather a counterfactual explanation that explicitly transports the input from one class to another.
Thus, learning with such a loss jointly optimizes the classification objective and the alignment of the gradient, i.e. the Saliency Map, to the transportation plan direction.
These networks were previously known to be certifiably robust by design, and we demonstrate that they scale well for large problems and models, and are tailored for explainability using a fast and straightforward method. | On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective | [
"Mathieu Serrurier",
"Franck Mamalet",
"Thomas FEL",
"Louis Béthune",
"Thibaut Boissin"
] | Workshop/OTML | 2206.06854 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=rW8um73AQj | @inproceedings{
rioux2023entropic,
title={Entropic Gromov-Wasserstein Distances: Stability and Algorithms},
author={Gabriel Rioux and Ziv Goldfeld and Kengo Kato},
booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning},
year={2023},
url={https://openreview.net/forum?id=rW8um73AQj}
} | The Gromov-Wasserstein (GW) distance quantifies discrepancy between metric measure spaces, but suffers from computational hardness. The entropic Gromov-Wasserstein (EGW) distance serves as a computationally efficient proxy for the GW distance. Recently, it was shown that the quadratic GW and EGW distances admit variational forms that tie them to the well-understood optimal transport (OT) and entropic OT (EOT) problems. By leveraging this connection, we establish convexity and smoothness properties of the objective in this variational problem. This results in the first efficient algorithms for solving the EGW problem that are subject to formal guarantees in both the convex and non-convex regimes. | Entropic Gromov-Wasserstein Distances: Stability and Algorithms | [
"Gabriel Rioux",
"Ziv Goldfeld",
"Kengo Kato"
] | Workshop/OTML | 2306.00182 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=o8ZGNn9LpN | @inproceedings{
ahn2023spectr,
title={SpecTr++: Improved transport plans for speculative decoding of large language models},
author={Kwangjun Ahn and Ahmad Beirami and Ziteng Sun and Ananda Theertha Suresh},
booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning},
year={2023},
url={https://openreview.net/forum?id=o8ZGNn9LpN}
} | We revisit the question of accelerating decoding of language models based on speculative draft samples, inspired by Y. Leviathan et al. (ICML 2023). Following Z. Sun et al. (NeurIPS 2023) which makes connections between speculative decoding and optimal transport theory, we design improved transport plans for this problem with no sacrifice in computational complexity in terms of the alphabet size. | SpecTr++: Improved transport plans for speculative decoding of large language models | [
"Kwangjun Ahn",
"Ahmad Beirami",
"Ziteng Sun",
"Ananda Theertha Suresh"
] | Workshop/OTML | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=njAlHBAt95 | @inproceedings{
yan2023offline,
title={Offline Imitation from Observation via Primal Wasserstein State Occupancy Matching},
author={Kai Yan and Alex Schwing and Yu-Xiong Wang},
booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning},
year={2023},
url={https://openreview.net/forum?id=njAlHBAt95}
} | In real-world scenarios, arbitrary interactions with the environment can often be costly, and actions of expert demonstrations are not always available. To reduce the need for both, Offline Learning from Observations (LfO) is extensively studied, where the agent learns to solve a task with only expert states and task-agnostic non-expert state-action pairs. The state-of-the-art DIstribution Correction Estimation (DICE) methods minimize the state occupancy divergence between the learner and expert policies. However, they are limited to either $f$-divergences (KL and $\chi^2$) or Wasserstein distance with Rubinstein duality, the latter of which constrains the underlying distance metric crucial to the performance of Wasserstein-based solutions. To address this problem, we propose Primal Wasserstein DICE (PW-DICE), which minimizes the primal Wasserstein distance between the expert and learner state occupancies with a pessimistic regularizer and leverages a contrastively learned distance as the underlying metric for the Wasserstein distance. Theoretically, we prove that our framework is a generalization of the state-of-the-art, SMODICE, and unifies $f$-divergence and Wasserstein minimization. Empirically, we find that PW-DICE improves upon several state-of-the-art methods on multiple testbeds. | Offline Imitation from Observation via Primal Wasserstein State Occupancy Matching | [
"Kai Yan",
"Alex Schwing",
"Yu-Xiong Wang"
] | Workshop/OTML | 2311.01331 | [
"https://github.com/kaiyan289/pw-dice"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=mCNl1EMAfH | @inproceedings{
rioux2023semidiscrete,
title={Semi-discrete Gromov-Wasserstein distances: Existence of Gromov-Monge Maps and Statistical Theory},
author={Gabriel Rioux and Ziv Goldfeld and Kengo Kato},
booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning},
year={2023},
url={https://openreview.net/forum?id=mCNl1EMAfH}
} | The Gromov-Wasserstein (GW) distance serves as a discrepancy measure between metric measure spaces. Despite recent theoretical developments, its structural properties, such as existence of optimal maps, remain largely unaccounted for. In this work, we analyze the semi-discrete regime for the GW problem wherein one measure is finitely supported. Notably, we derive a primitive condition which guarantees the existence of optimal maps. This condition also enables us to derive the asymptotic distribution of the empirical semi-discrete GW distance under proper centering and scaling. As a complement to this asymptotic result, we also derive expected empirical convergence rates. As is the case with the standard Wasserstein distance, the rate we derive in the semi-discrete GW case, $n^{-\frac{1}{2}}$, is dimension-independent which is in stark contrast to the curse of dimensionality rate obtained in general. | Semi-discrete Gromov-Wasserstein distances: Existence of Gromov-Monge Maps and Statistical Theory | [
"Gabriel Rioux",
"Ziv Goldfeld",
"Kengo Kato"
] | Workshop/OTML | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=hwBPHlH9tK | @inproceedings{
nguyen2023sliced,
title={Sliced Wasserstein Estimation with Control Variates},
author={Khai Nguyen and Nhat Ho},
booktitle={NeurIPS 2023 Workshop Optimal Transport and Machine Learning},
year={2023},
url={https://openreview.net/forum?id=hwBPHlH9tK}
} | The sliced Wasserstein (SW) distances between two probability measures are defined as the expectation of the Wasserstein distance between two one-dimensional projections of the two measures. The randomness comes from a projecting direction that is used to project the two input measures to one dimension. Due to the intractability of the expectation, Monte Carlo integration is performed to estimate the value of the SW distance. Despite having various variants, there has been no prior work that improves the Monte Carlo estimation scheme for the SW distance in terms of controlling its variance. To bridge the literature on variance reduction and the literature on the SW distance, we propose computationally efficient control variates to reduce the variance of the empirical estimation of the SW distance. The key idea is to first find Gaussian approximations of projected one-dimensional measures, then we utilize the closed-form of the Wasserstein-2 distance between two Gaussian distributions to design the control variates. In particular, we propose using a lower bound and an upper bound of the Wasserstein-2 distance between two fitted Gaussians as two computationally efficient control variates. We empirically show that the proposed control variate estimators can help to reduce the variance considerably when comparing measures over images and point-clouds. Finally, we demonstrate the favorable performance of the proposed control variate estimators in gradient flows to interpolate between two point-clouds and in deep generative modeling on standard image datasets, such as CIFAR10 and CelebA. | Sliced Wasserstein Estimation with Control Variates | [
"Khai Nguyen",
"Nhat Ho"
] | Workshop/OTML | 2305.00402 | [
"https://github.com/khainb/cv-sw"
] | https://huggingface.co/papers/2305.00402 | 0 | 0 | 0 | 2 | [] | [] | [] | [] | [] | [] | 1 | poster |