arxiv_id (string, length 10) | published (string, length 20) | titles (string, 9-243 chars) | authors (list, 1-389 items) | abstract (string, 96-3.09k chars) | categories (list, 1-10 items) | selected (bool, 2 classes) |
---|---|---|---|---|---|---|
2306.04834
|
2023-06-07T23:40:04Z
|
A Semi-supervised Object Detection Algorithm for Underwater Imagery
|
[
"Suraj Bijjahalli",
"Oscar Pizarro",
"Stefan B. Williams"
] |
Detection of artificial objects from underwater imagery gathered by
Autonomous Underwater Vehicles (AUVs) is a key requirement for many subsea
applications. Real-world AUV image datasets tend to be very large and
unlabelled. Furthermore, such datasets are typically imbalanced, containing few
instances of objects of interest, particularly when searching for unusual
objects in a scene. It is therefore difficult to fit models capable of
reliably detecting these objects. Given these factors, we propose to treat
artificial objects as anomalies and detect them through a semi-supervised
framework based on Variational Autoencoders (VAEs). We develop a method which
clusters image data in a learned low-dimensional latent space and extracts
images that are likely to contain anomalous features. We also devise an anomaly
score based on extracting poorly reconstructed regions of an image. We
demonstrate that by applying both methods on large image datasets, human
operators can be shown candidate anomalous samples with a low false positive
rate to identify objects of interest. We apply our approach to real seafloor
imagery gathered by an AUV and evaluate its sensitivity to the dimensionality
of the latent representation used by the VAE. We evaluate the precision-recall
tradeoff and demonstrate that by choosing an appropriate latent dimensionality
and threshold, we are able to achieve an average precision of 0.64 on
unlabelled datasets.
|
[
"cs.CV",
"cs.LG"
] | false |
2306.04163
|
2023-06-07T05:26:38Z
|
Enhancing Virtual Assistant Intelligence: Precise Area Targeting for
Instance-level User Intents beyond Metadata
|
[
"Mengyu Chen",
"Zhenchang Xing",
"Jieshan Chen",
"Chunyang Chen",
"Qinghua Lu"
] |
Virtual assistants have been widely used by mobile phone users in recent
years. Although their capabilities for processing user intents have developed rapidly, virtual assistants on most platforms can only handle pre-defined high-level tasks, supported by extra manual effort from developers. However, instance-level user intents, which contain more detailed objectives in complex practical situations, have rarely been studied so far. In
this paper, we explore virtual assistants capable of processing instance-level
user intents based on pixels of application screens, without the requirements
of extra extensions on the application side. We propose a novel cross-modal
deep learning pipeline, which understands the input vocal or textual
instance-level user intents, predicts the targeting operational area, and
detects the absolute button area on screens without any metadata of
applications. We conducted a user study with 10 participants to collect a
testing dataset with instance-level user intents. The testing dataset is then
utilized to evaluate the performance of our model, which shows promise, achieving 64.43% accuracy on our testing dataset.
|
[
"cs.HC",
"cs.AI",
"cs.CV"
] | false |
2306.04202
|
2023-06-07T07:15:18Z
|
Video Compression with Arbitrary Rescaling Network
|
[
"Mengxi Guo",
"Shijie Zhao",
"Hao Jiang",
"Junlin Li",
"Li Zhang"
] |
Most video platforms provide video streaming services with different
qualities, and the quality of the services is usually adjusted by the
resolution of the videos, so high-resolution videos need to be downsampled for
compression. In order to solve the problem of video coding at different
resolutions, we propose a rate-guided arbitrary rescaling network (RARN) for
video resizing before encoding. To help the RARN be compatible with standard
codecs and generate compression-friendly results, an iteratively optimized
transformer-based virtual codec (TVC) is introduced to simulate the key
components of video encoding and perform bitrate estimation. By iteratively
training the TVC and the RARN, we achieved 5%-29% BD-Rate reduction anchored by
linear interpolation under different encoding configurations and resolutions,
exceeding the previous methods on most test videos. Furthermore, the
lightweight RARN structure can process FHD (1080p) content at real-time speed
(91 FPS) and obtain a considerable rate reduction.
|
[
"cs.MM",
"cs.CV",
"eess.IV"
] | false |
2306.04240
|
2023-06-07T08:30:44Z
|
T-ADAF: Adaptive Data Augmentation Framework for Image Classification
Network based on Tensor T-product Operator
|
[
"Feiyang Han",
"Yun Miao",
"Zhaoyi Sun",
"Yimin Wei"
] |
Image classification is one of the most fundamental tasks in Computer Vision.
In practical applications, datasets are usually not as abundant as those in laboratory and simulation settings, a situation often referred to as data hunger, so extracting the information in the data completely and effectively is very important. Therefore, an Adaptive Data Augmentation Framework based on the tensor T-product operator is proposed in this paper; it triples each training image and combines the results from all three images, at the cost of less than a 0.1% increase in the number of parameters. At the same time, the framework performs column image embedding and global feature intersection, enabling the model to obtain information in not only the spatial but also the frequency domain, thus improving prediction accuracy. Numerical experiments designed for several models demonstrate the effectiveness of this adaptive framework, showing that it can improve the performance of the original neural network model by 2% and provides results competitive with state-of-the-art methods.
|
[
"cs.CV",
"cs.NA",
"math.NA"
] | false |
2306.04269
|
2023-06-07T09:09:35Z
|
ColNav: Real-Time Colon Navigation for Colonoscopy
|
[
"Netanel Frank",
"Erez Posner",
"Emmanuelle Muhlethaler",
"Adi Zholkover",
"Moshe Bouhnik"
] |
Colorectal cancer screening through colonoscopy continues to be the dominant
global standard, as it allows identifying pre-cancerous or adenomatous lesions
and provides the ability to remove them during the procedure itself.
Nevertheless, failure by the endoscopist to identify such lesions increases the
likelihood of lesion progression to subsequent colorectal cancer. Ultimately,
colonoscopy remains operator-dependent, and the wide range of quality in
colonoscopy examinations among endoscopists is influenced by variations in
their technique, training, and diligence. This paper presents a novel real-time
navigation guidance system for Optical Colonoscopy (OC). Our proposed system
employs a real-time approach that displays both an unfolded representation of
the colon and a local indicator directing to un-inspected areas. These
visualizations are presented to the physician during the procedure, providing
actionable and comprehensible guidance to un-surveyed areas in real-time, while
seamlessly integrating into the physician's workflow. Through experimental evaluation of coverage, we demonstrated that our system results in higher polyp recall (PR) and high inter-rater reliability with physicians for coverage
prediction. These results suggest that our real-time navigation guidance system
has the potential to improve the quality and effectiveness of Optical
Colonoscopy and ultimately benefit patient outcomes.
|
[
"cs.CV",
"cs.HC",
"cs.LG"
] | false |
2306.04345
|
2023-06-07T11:20:01Z
|
An Overview of Challenges in Egocentric Text-Video Retrieval
|
[
"Burak Satar",
"Hongyuan Zhu",
"Hanwang Zhang",
"Joo Hwee Lim"
] |
Text-video retrieval contains various challenges, including biases coming
from diverse sources. We highlight some of them supported by illustrations to
open a discussion. We also address one of these biases, frame length bias, with a simple method that brings a modest but promising improvement. We
conclude with future directions.
|
[
"cs.CV",
"cs.IR",
"cs.MM"
] | false |
2306.04396
|
2023-06-07T12:56:56Z
|
Improving Diffusion-based Image Translation using Asymmetric Gradient
Guidance
|
[
"Gihyun Kwon",
"Jong Chul Ye"
] |
Diffusion models have shown significant progress in image translation tasks
recently. However, due to their stochastic nature, there's often a trade-off
between style transformation and content preservation. Current strategies aim
to disentangle style and content, preserving the source image's structure while
successfully transitioning from a source to a target domain under text or
one-shot image conditions. Yet, these methods often require computationally
intense fine-tuning of diffusion models or additional neural networks. To
address these challenges, here we present an approach that guides the reverse
process of diffusion sampling by applying asymmetric gradient guidance. This
results in quicker and more stable image manipulation for both text-guided and
image-guided image translation. Our model's adaptability allows it to be
implemented with both image- and latent-diffusion models. Experiments show that
our method outperforms various state-of-the-art models in image translation
tasks.
|
[
"cs.CV",
"cs.AI",
"cs.LG",
"stat.ML"
] | false |
2306.04664
|
2023-06-07T10:04:16Z
|
Estimating Uncertainty in PET Image Reconstruction via Deep Posterior
Sampling
|
[
"Tin Vlašić",
"Tomislav Matulić",
"Damir Seršić"
] |
Positron emission tomography (PET) is an important functional medical imaging
technique often used in the evaluation of certain brain disorders, whose
reconstruction problem is ill-posed. The vast majority of reconstruction
methods in PET imaging, both iterative and deep learning, return a single
estimate without quantifying the associated uncertainty. Due to ill-posedness
and noise, a single solution can be misleading or inaccurate. Thus, providing a
measure of uncertainty in PET image reconstruction can help medical
practitioners in making critical decisions. This paper proposes a deep
learning-based method for uncertainty quantification in PET image
reconstruction via posterior sampling. The method is based on training a
conditional generative adversarial network whose generator approximates
sampling from the posterior in Bayesian inversion. The generator is conditioned on a reconstruction from a low-dose PET scan obtained by a conventional reconstruction method, together with a high-quality magnetic resonance image, and is trained to estimate the corresponding standard-dose PET scan reconstruction. We show that
the proposed model generates high-quality posterior samples and yields
physically-meaningful uncertainty estimates.
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2306.04754
|
2023-06-07T20:04:23Z
|
Computational Modeling of Deep Multiresolution-Fractal Texture and Its
Application to Abnormal Brain Tissue Segmentation
|
[
"A. Temtam",
"L. Pei",
"K. Iftekharuddin"
] |
Computational modeling of Multiresolution- Fractional Brownian motion (fBm)
has been effective in stochastic multiscale fractal texture feature extraction
and machine learning of abnormal brain tissue segmentation. Further, deep
multiresolution methods have been used for pixel-wise brain tissue
segmentation. Robust tissue segmentation and volumetric measurement may provide
more objective quantification of disease burden and offer improved tracking of
treatment response for the disease. However, we posit that computational
modeling of deep multiresolution fractal texture features may offer elegant
feature learning. Consequently, this work proposes a novel Multiresolution Fractal Deep Neural Network (MFDNN) and its computational
implementation that mathematically combines a multiresolution fBm model and
deep multiresolution analysis. The proposed full 3D MFDNN model offers the
desirable properties of estimating multiresolution stochastic texture features
by analyzing large amounts of raw MRI image data for brain tumor segmentation.
We apply the proposed MFDNN to estimate stochastic deep multiresolution fractal
texture features for tumor tissues in brain MRI images. The MFDNN model is
evaluated using 1251 patient cases for brain tumor segmentation using the most
recent BraTS 2021 Challenge dataset. The evaluation of the proposed model using the Dice overlap score, Hausdorff distance, and associated uncertainty estimation shows better or comparable performance in abnormal brain tissue segmentation when compared to state-of-the-art methods in the
literature. Index Terms: Computational Modeling, Multiresolution Fractional
Brownian Motion (fBm), Deep Multiresolution Analysis, Fractal Dimension (FD),
Texture Features, Brain tumor segmentation, Deep Learning.
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2306.04763
|
2023-06-07T20:23:05Z
|
Context-Aware Self-Supervised Learning of Whole Slide Images
|
[
"Milan Aryal",
"Nasim Yahyasoltani"
] |
Presenting whole slide images (WSIs) as graphs enables a more efficient
and accurate learning framework for cancer diagnosis. Due to the fact that a
single WSI consists of billions of pixels and there is a lack of vast annotated
datasets required for computational pathology, the problem of learning from
WSIs using typical deep learning approaches such as convolutional neural
network (CNN) is challenging. Additionally, down-sampling WSIs may lead to the
loss of data that is essential for cancer detection. A novel two-stage learning
technique is presented in this work. Since context, such as topological
features in the tumor surroundings, may hold important information for cancer
grading and diagnosis, a graph representation capturing all dependencies among
regions in the WSI is very intuitive. Graph convolutional network (GCN) is
deployed to include context from the tumor and adjacent tissues, and
self-supervised learning is used to enhance training through unlabeled data.
More specifically, the entire slide is presented as a graph, where the nodes
correspond to the patches from the WSI. The proposed framework is then tested
using WSIs from prostate and kidney cancers. To assess the performance
improvement through the self-supervised mechanism, the proposed context-aware model is tested with and without the use of a pre-trained self-supervised layer. The
overall model is also compared with multi-instance learning (MIL) based and
other existing approaches.
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2306.04339
|
2023-06-07T11:10:10Z
|
Unpaired Deep Learning for Pharmacokinetic Parameter Estimation from
Dynamic Contrast-Enhanced MRI
|
[
"Gyutaek Oh",
"Won-Jin Moon",
"Jong Chul Ye"
] |
DCE-MRI provides information about vascular permeability and tissue perfusion
through the acquisition of pharmacokinetic parameters. However, traditional
methods for estimating these pharmacokinetic parameters involve fitting tracer
kinetic models, which often suffer from computational complexity and low
accuracy due to noisy arterial input function (AIF) measurements. Although some
deep learning approaches have been proposed to tackle these challenges, most
existing methods rely on supervised learning that requires paired input DCE-MRI
and labeled pharmacokinetic parameter maps. This dependency on labeled data
introduces significant time and resource constraints, as well as potential
noise in the labels, making supervised learning methods often impractical. To
address these limitations, here we present a novel unpaired deep learning
method for estimating both pharmacokinetic parameters and the AIF using a
physics-driven CycleGAN approach. Our proposed CycleGAN framework is designed
based on the underlying physics model, resulting in a simpler architecture with
a single generator and discriminator pair. Crucially, our experimental results
indicate that our method, which does not necessitate separate AIF measurements,
produces more reliable pharmacokinetic parameters than other techniques.
|
[
"eess.IV",
"cs.AI",
"cs.CV",
"cs.LG",
"physics.med-ph"
] | false |
2306.04539
|
2023-06-07T15:44:53Z
|
Multimodal Learning Without Labeled Multimodal Data: Guarantees and
Applications
|
[
"Paul Pu Liang",
"Chun Kai Ling",
"Yun Cheng",
"Alex Obolenskiy",
"Yudong Liu",
"Rohan Pandey",
"Alex Wilf",
"Louis-Philippe Morency",
"Ruslan Salakhutdinov"
] |
In many machine learning systems that jointly learn from multiple modalities,
a core research question is to understand the nature of multimodal
interactions: the emergence of new task-relevant information during learning
from both modalities that was not present in either alone. We study this
challenge of interaction quantification in a semi-supervised setting with only
labeled unimodal data and naturally co-occurring multimodal data (e.g.,
unlabeled images and captions, video and corresponding audio) but when labeling
them is time-consuming. Using a precise information-theoretic definition of
interactions, our key contributions are the derivations of lower and upper
bounds to quantify the amount of multimodal interactions in this
semi-supervised setting. We propose two lower bounds based on the amount of
shared information between modalities and the disagreement between separately
trained unimodal classifiers, and derive an upper bound through connections to
approximate algorithms for min-entropy couplings. We validate these estimated
bounds and show how they accurately track true interactions. Finally, two
semi-supervised multimodal applications are explored based on these theoretical
results: (1) analyzing the relationship between multimodal performance and
estimated interactions, and (2) self-supervised learning that embraces
disagreement between modalities beyond agreement as is typically done.
|
[
"cs.LG",
"cs.CL",
"cs.CV",
"cs.IT",
"math.IT",
"stat.ML"
] | false |
2306.04085
|
2023-06-07T01:09:37Z
|
XSemPLR: Cross-Lingual Semantic Parsing in Multiple Natural Languages
and Meaning Representations
|
[
"Yusen Zhang",
"Jun Wang",
"Zhiguo Wang",
"Rui Zhang"
] |
Cross-Lingual Semantic Parsing (CLSP) aims to translate queries in multiple
natural languages (NLs) into meaning representations (MRs) such as SQL, lambda
calculus, and logic forms. However, existing CLSP models are separately
proposed and evaluated on datasets of limited tasks and applications, impeding
a comprehensive and unified evaluation of CLSP on a diverse range of NLs and
MRs. To this end, we present XSemPLR, a unified benchmark for cross-lingual
semantic parsing featuring 22 natural languages and 8 meaning representations, built by examining and selecting 9 existing datasets to cover 5 tasks
and 164 domains. We use XSemPLR to conduct a comprehensive benchmark study on a
wide range of multilingual language models including encoder-based models
(mBERT, XLM-R), encoder-decoder models (mBART, mT5), and decoder-based models
(Codex, BLOOM). We design 6 experimental settings covering various lingual
combinations (monolingual, multilingual, cross-lingual) and numbers of learning
samples (full dataset, few-shot, and zero-shot). Our experiments show that
encoder-decoder models (mT5) achieve the highest performance compared with
other popular models, and multilingual training can further improve the average
performance. Notably, multilingual large language models (e.g., BLOOM) are
still inadequate to perform CLSP tasks. We also find that the performance gap
between monolingual training and cross-lingual transfer learning is still
significant for multilingual models, though it can be mitigated by
cross-lingual few-shot training. Our dataset and code are available at
https://github.com/psunlpgroup/XSemPLR.
|
[
"cs.CL"
] | false |
2306.04136
|
2023-06-07T04:15:21Z
|
Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge
Graph Question Answering
|
[
"Jinheon Baek",
"Alham Fikri Aji",
"Amir Saffari"
] |
Large Language Models (LLMs) are capable of performing zero-shot closed-book
question answering tasks, based on their internal knowledge stored in
parameters during pre-training. However, such internalized knowledge might be
insufficient or incorrect, which could lead LLMs to generate factually wrong
answers. Furthermore, fine-tuning LLMs to update their knowledge is expensive.
To this end, we propose to augment the knowledge directly in the input of LLMs.
Specifically, we first retrieve the relevant facts to the input question from
the knowledge graph based on semantic similarities between the question and its
associated facts. After that, we prepend the retrieved facts to the input
question in the form of the prompt, which is then forwarded to LLMs to generate
the answer. Our framework, Knowledge-Augmented language model PromptING
(KAPING), requires no model training, thus completely zero-shot. We validate
the performance of our KAPING framework on the knowledge graph question
answering task, that aims to answer the user's question based on facts over a
knowledge graph, on which ours outperforms relevant zero-shot baselines by up
to 48% in average, across multiple LLMs of various sizes.
|
[
"cs.CL"
] | false |
2306.04140
|
2023-06-07T04:27:09Z
|
Increasing Diversity While Maintaining Accuracy: Text Data Generation
with Large Language Models and Human Interventions
|
[
"John Joon Young Chung",
"Ece Kamar",
"Saleema Amershi"
] |
Large language models (LLMs) can be used to generate text data for training
and evaluating other models. However, creating high-quality datasets with LLMs
can be challenging. In this work, we explore human-AI partnerships to
facilitate high diversity and accuracy in LLM-based text data generation. We
first examine two approaches to diversify text generation: 1) logit
suppression, which minimizes the generation of language that has already been frequently generated, and 2) temperature sampling, which flattens the token sampling probability distribution. We found that diversification approaches can increase
data diversity but often at the cost of data accuracy (i.e., text and labels
being appropriate for the target domain). To address this issue, we examined
two human interventions, 1) label replacement (LR), correcting misaligned
labels, and 2) out-of-scope filtering (OOSF), removing instances that are out
of the user's domain of interest or to which no considered label applies. With
oracle studies, we found that LR increases the absolute accuracy of models
trained with diversified datasets by 14.4%. Moreover, we found that some models
trained with data generated with LR interventions outperformed LLM-based
few-shot classification. In contrast, OOSF was not effective in increasing
model accuracy, implying the need for future work in human-in-the-loop text
data generation.
|
[
"cs.CL"
] | true |
2306.04170
|
2023-06-07T05:46:19Z
|
From the One, Judge of the Whole: Typed Entailment Graph Construction
with Predicate Generation
|
[
"Zhibin Chen",
"Yansong Feng",
"Dongyan Zhao"
] |
Entailment Graphs (EGs) have been constructed based on extracted corpora as a
strong and explainable form to indicate context-independent entailment
relations in natural languages. However, EGs built by previous methods often
suffer from severe sparsity issues, due to the limited corpora available and
the long-tail phenomenon of predicate distributions. In this paper, we propose
a multi-stage method, Typed Predicate-Entailment Graph Generator (TP-EGG), to
tackle this problem. Given several seed predicates, TP-EGG builds the graphs by
generating new predicates and detecting entailment relations among them. The
generative nature of TP-EGG helps us leverage the recent advances from large
pretrained language models (PLMs), while avoiding the reliance on carefully
prepared corpora. Experiments on benchmark datasets show that TP-EGG can
generate high-quality and scale-controllable entailment graphs, achieving
significant in-domain improvement over state-of-the-art EGs and boosting the
performance of down-stream inference tasks.
|
[
"cs.CL"
] | false |
2306.04188
|
2023-06-07T06:47:34Z
|
A New Dataset and Empirical Study for Sentence Simplification in Chinese
|
[
"Shiping Yang",
"Renliang Sun",
"Xiaojun Wan"
] |
Sentence Simplification is a valuable technique that can greatly benefit language learners and children. However, current research focuses more on English
sentence simplification. The development of Chinese sentence simplification is
relatively slow due to the lack of data. To alleviate this limitation, this
paper introduces CSS, a new dataset for assessing sentence simplification in
Chinese. We collect manual simplifications from human annotators and perform
data analysis to show the difference between English and Chinese sentence
simplifications. Furthermore, we test several unsupervised and zero/few-shot
learning methods on CSS and analyze the automatic evaluation and human
evaluation results. In the end, we explore whether Large Language Models can
serve as high-quality Chinese sentence simplification systems by evaluating
them on CSS.
|
[
"cs.CL"
] | false |
2306.04217
|
2023-06-07T07:45:38Z
|
Effective Neural Topic Modeling with Embedding Clustering Regularization
|
[
"Xiaobao Wu",
"Xinshuai Dong",
"Thong Nguyen",
"Anh Tuan Luu"
] |
Topic models have been prevalent for decades with various applications.
However, existing topic models commonly suffer from the notorious topic
collapsing: discovered topics semantically collapse towards each other, leading
to highly repetitive topics, insufficient topic discovery, and damaged model
interpretability. In this paper, we propose a new neural topic model, Embedding
Clustering Regularization Topic Model (ECRTM). Besides the existing
reconstruction error, we propose a novel Embedding Clustering Regularization
(ECR), which forces each topic embedding to be the center of a separately
aggregated word embedding cluster in the semantic space. This enables each
produced topic to contain distinct word semantics, which alleviates topic
collapsing. Regularized by ECR, our ECRTM generates diverse and coherent topics
together with high-quality topic distributions of documents. Extensive
experiments on benchmark datasets demonstrate that ECRTM effectively addresses
the topic collapsing issue and consistently surpasses state-of-the-art
baselines in terms of topic quality, topic distributions of documents, and
downstream classification tasks.
|
[
"cs.CL"
] | false |
2306.04277
|
2023-06-07T09:23:26Z
|
Analysis of the Fed's communication by using textual entailment model of
Zero-Shot classification
|
[
"Yasuhiro Nakayama",
"Tomochika Sawaki"
] |
In this study, we analyze documents published by central banks using text
mining techniques and propose a method to evaluate the policy tone of central
banks. Since the monetary policies of major central banks have a broad impact
on financial market trends, the pricing of risky assets, and the real economy,
market participants are attempting to more accurately capture changes in the
outlook for central banks' future monetary policies. Since the published
documents are also an important tool for the central bank to communicate with
the market, they are meticulously elaborated on grammatical syntax and wording,
and investors are urged to read more accurately about the central bank's policy
stance. Sentiment analysis on central bank documents has long been carried out,
but it has been difficult to interpret the meaning of the documents accurately
and to explicitly capture even the intentional change in nuance. This study
attempts to evaluate the implication of the zero-shot text classification
method for an unknown economic environment using the same model. We compare the
tone of the statements, minutes, press conference transcripts of FOMC meetings,
and the Fed officials' (chair, vice chair, and Governors) speeches. In
addition, the minutes of the FOMC meetings were subjected to a phase analysis
of changes in each policy stance since 1971.
|
[
"cs.CL"
] | false |
2306.04314
|
2023-06-07T10:19:50Z
|
Cross-Genre Argument Mining: Can Language Models Automatically Fill in
Missing Discourse Markers?
|
[
"Gil Rocha",
"Henrique Lopes Cardoso",
"Jonas Belouadi",
"Steffen Eger"
] |
Available corpora for Argument Mining differ along several axes, and one of
the key differences is the presence (or absence) of discourse markers to signal
argumentative content. Exploring effective ways to use discourse markers has
received wide attention in various discourse parsing tasks, from which it is
well-known that discourse markers are strong indicators of discourse relations.
To improve the robustness of Argument Mining systems across different genres,
we propose to automatically augment a given text with discourse markers such
that all relations are explicitly signaled. Our analysis unveils that popular
language models taken out-of-the-box fail on this task; however, when
fine-tuned on a new heterogeneous dataset that we construct (including
synthetic and real examples), they perform considerably better. We demonstrate
the impact of our approach on an Argument Mining downstream task, evaluated on
different corpora, showing that language models can be trained to automatically
fill in discourse markers across different corpora, improving the performance
of a downstream model in some, but not all, cases. Our proposed approach can
further be employed as an assistive tool for better discourse understanding.
|
[
"cs.CL"
] | false |
2306.04328
|
2023-06-07T10:47:33Z
|
IUTEAM1 at MEDIQA-Chat 2023: Is simple fine tuning effective for
multilayer summarization of clinical conversations?
|
[
"Dhananjay Srivastava"
] |
Clinical conversation summarization has become an important application of
Natural Language Processing. In this work, we analyze summarization model ensembling approaches that can be utilized to improve the overall accuracy of the generated medical report, called a chart note. The work starts with a single summarization model creating the baseline, then moves to an ensemble of summarization models, each trained on a separate section of the chart note, and finally passes the generated results to another summarization model in a multi-layer/stage fashion for better coherency of the generated text. Our results indicate that although an ensemble of models
specialized in each section produces better results, the multi-layer/stage
approach does not improve accuracy. The code for the above paper is available
at https://github.com/dhananjay-srivastava/MEDIQA-Chat-2023-iuteam1.git
|
[
"cs.CL"
] | false |
2306.04334
|
2023-06-07T11:01:39Z
|
Echoes from Alexandria: A Large Resource for Multilingual Book
Summarization
|
[
"Alessandro Scirè",
"Simone Conia",
"Simone Ciciliano",
"Roberto Navigli"
] |
In recent years, research in text summarization has mainly focused on the
news domain, where texts are typically short and have strong layout features.
The task of full-book summarization presents additional challenges which are
hard to tackle with current resources, due to their limited size and
availability in English only. To overcome these limitations, we present "Echoes
from Alexandria", or in shortened form, "Echoes", a large resource for
multilingual book summarization. Echoes features three novel datasets: i)
Echo-Wiki, for multilingual book summarization, ii) Echo-XSum, for
extremely-compressive multilingual book summarization, and iii) Echo-FairySum,
for extractive book summarization. To the best of our knowledge, Echoes, with
its thousands of books and summaries, is the largest resource, and the first to
be multilingual, featuring 5 languages and 25 language pairs. In addition to
Echoes, we also introduce a new extractive-then-abstractive baseline, and,
supported by our experimental results and manual analysis of the summaries
generated, we argue that this baseline is more suitable for book summarization
than purely-abstractive approaches. We release our resource and software at
https://github.com/Babelscape/echoes-from-alexandria in the hope of fostering
innovative research in multilingual book summarization.
|
[
"cs.CL"
] | false |
2306.04347
|
2023-06-07T11:25:20Z
|
World Models for Math Story Problems
|
[
"Andreas Opedal",
"Niklas Stoehr",
"Abulhair Saparov",
"Mrinmaya Sachan"
] |
Solving math story problems is a complex task for students and NLP models
alike, requiring them to understand the world as described in the story and
reason over it to compute an answer. Recent years have seen impressive
performance on automatically solving these problems with large pre-trained
language models and innovative techniques to prompt them. However, it remains
unclear if these models possess accurate representations of mathematical
concepts. This leads to a lack of interpretability and trustworthiness, which
impedes their usefulness in various applications. In this paper, we consolidate
previous work on categorizing and representing math story problems and develop
MathWorld, a graph-based semantic formalism specific to the domain of
math story problems. With MathWorld, we can assign world models to math story
problems which represent the situations and actions introduced in the text and
their mathematical relationships. We combine math story problems from several
existing datasets and annotate a corpus of 1,019 problems and 3,204 logical
forms with MathWorld. Using this data, we demonstrate the following use cases
of MathWorld: (1) prompting language models with synthetically generated
question-answer pairs to probe their reasoning and world modeling abilities,
and (2) generating new problems by using the world models as a design space.
|
[
"cs.CL"
] | false |
2306.04399
|
2023-06-07T12:58:46Z
|
Transfer Learning of Transformer-based Speech Recognition Models from
Czech to Slovak
|
[
"Jan Lehečka",
"Josef V. Psutka",
"Josef Psutka"
] |
In this paper, we compare several methods of training Slovak speech recognition models based on the Transformer architecture. Specifically, we explore transfer learning from the existing Czech pre-trained Wav2Vec 2.0 model to Slovak. We demonstrate the benefits of the
proposed approach on three Slovak datasets. Our Slovak models scored the best
results when initializing the weights from the Czech model at the beginning of
the pre-training phase. Our results show that the knowledge stored in the Czech
pre-trained model can be successfully reused to solve tasks in Slovak while
outperforming even much larger public multilingual models.
|
[
"cs.CL"
] | false |
2306.04424
|
2023-06-07T13:31:02Z
|
Examining Bias in Opinion Summarisation Through the Perspective of
Opinion Diversity
|
[
"Nannan Huang",
"Lin Tian",
"Haytham Fayek",
"Xiuzhen Zhang"
] |
Opinion summarisation is a task that aims to condense the information
presented in the source documents while retaining the core message and
opinions. A summary that only represents the majority opinions will leave the
minority opinions unrepresented in the summary. In this paper, we use the
stance towards a certain target as an opinion. We study bias in opinion
summarisation from the perspective of opinion diversity, which measures whether
the model generated summary can cover a diverse set of opinions. In addition,
we examine opinion similarity, a measure of how closely related two opinions
are in terms of their stance on a given topic, and its relationship with
opinion diversity. Through the lens of stances towards a topic, we examine
opinion diversity and similarity using three debatable topics under COVID-19.
Experimental results on these topics revealed that a higher degree of opinion similarity did not indicate good diversity, nor did the summaries fairly cover the various opinions originally presented in the source documents. We found that
BART and ChatGPT can better capture diverse opinions presented in the source
documents.
|
[
"cs.CL"
] | false |
2306.04441
|
2023-06-07T13:58:55Z
|
STEPS: A Benchmark for Order Reasoning in Sequential Tasks
|
[
"Weizhi Wang",
"Hong Wang",
"Xifeng Yan"
] |
Various human activities can be abstracted into a sequence of actions in
natural text, i.e. cooking, repairing, manufacturing, etc. Such action
sequences heavily depend on the executing order, while disorder in action
sequences leads to failure of further task execution by robots or AI agents.
Therefore, to verify the order reasoning capability of current neural models in
sequential tasks, we propose a challenging benchmark , named STEPS. STEPS
involves two subtask settings, focusing on determining the rationality of given
next step in recipes and selecting the reasonable step from the multi-choice
question, respectively. We describe the data construction and task
formulations, and benchmark most of significant Large Language Models (LLMs).
The experimental results demonstrate 1) The commonsense reasoning of action
orders in sequential tasks are challenging to resolve via zero-shot prompting
or few-shot in-context learning for LLMs; 2) Prompting method still
significantly lags behind tuning-based method on STEPS.
|
[
"cs.CL"
] | false |
2306.04523
|
2023-06-07T15:33:07Z
|
Can current NLI systems handle German word order? Investigating language
model performance on a new German challenge set of minimal pairs
|
[
"Ines Reinig",
"Katja Markert"
] |
Compared to English, German word order is freer and therefore poses
additional challenges for natural language inference (NLI). We create WOGLI
(Word Order in German Language Inference), the first adversarial NLI dataset
for German word order that has the following properties: (i) each premise has
an entailed and a non-entailed hypothesis; (ii) premise and hypotheses differ
only in word order and necessary morphological changes to mark case and number.
In particular, each premise and its two hypotheses contain exactly the same
lemmata. Our adversarial examples require the model to use morphological
markers in order to recognise or reject entailment. We show that current German
autoencoding models fine-tuned on translated NLI data can struggle on this
challenge set, reflecting the fact that translated NLI datasets will not mirror
all necessary language phenomena in the target language. We also examine
performance after data augmentation as well as on related word order phenomena
derived from WOGLI. Our datasets are publicly available at
https://github.com/ireinig/wogli.
|
[
"cs.CL"
] | false |
2306.04530
|
2023-06-07T15:39:02Z
|
Lenient Evaluation of Japanese Speech Recognition: Modeling Naturally
Occurring Spelling Inconsistency
|
[
"Shigeki Karita",
"Richard Sproat",
"Haruko Ishikawa"
] |
Word error rate (WER) and character error rate (CER) are standard metrics in
Speech Recognition (ASR), but one problem has always been alternative
spellings: If one's system transcribes adviser whereas the ground truth has
advisor, this will count as an error even though the two spellings really
represent the same word.
Japanese is notorious for "lacking orthography": most words can be spelled
in multiple ways, presenting a problem for accurate ASR evaluation. In this
paper we propose a new lenient evaluation metric as a more defensible CER
measure for Japanese ASR. We create a lattice of plausible respellings of the
reference transcription, using a combination of lexical resources, a Japanese
text-processing system, and a neural machine translation model for
reconstructing kanji from hiragana or katakana. In a manual evaluation, raters
rated 95.4% of the proposed spelling variants as plausible. ASR results show
that our method, which does not penalize the system for choosing a valid
alternate spelling of a word, affords a 2.4%-3.1% absolute reduction in CER
depending on the task.
|
[
"cs.CL"
] | false |
2306.04535
|
2023-06-07T15:41:40Z
|
PromptAttack: Probing Dialogue State Trackers with Adversarial Prompts
|
[
"Xiangjue Dong",
"Yun He",
"Ziwei Zhu",
"James Caverlee"
] |
A key component of modern conversational systems is the Dialogue State
Tracker (or DST), which models a user's goals and needs. Toward building more
robust and reliable DSTs, we introduce a prompt-based learning approach to
automatically generate effective adversarial examples to probe DST models. Two
key characteristics of this approach are: (i) it only needs the output of the
DST with no need for model parameters, and (ii) it can learn to generate
natural language utterances that can target any DST. Through experiments over
state-of-the-art DSTs, the proposed framework leads to the greatest reduction
in accuracy and the best attack success rate while maintaining good fluency and
a low perturbation ratio. We also show how much the generated adversarial
examples can bolster a DST through adversarial training. These results indicate
the strength of prompt-based attacks on DSTs and leave open avenues for
continued refinement.
|
[
"cs.CL"
] | false |
2306.04537
|
2023-06-07T15:42:31Z
|
Long-form analogies generated by chatGPT lack human-like
psycholinguistic properties
|
[
"S. M. Seals",
"Valerie L. Shalin"
] |
Psycholinguistic analyses provide a means of evaluating large language model
(LLM) output and making systematic comparisons to human-generated text. These
methods can be used to characterize the psycholinguistic properties of LLM
output and illustrate areas where LLMs fall short in comparison to
human-generated text. In this work, we apply psycholinguistic methods to
evaluate individual sentences from long-form analogies about biochemical
concepts. We compare analogies generated by human subjects enrolled in
introductory biochemistry courses to analogies generated by chatGPT. We perform
a supervised classification analysis using 78 features extracted from
Coh-Metrix that analyze text cohesion, language, and readability (Graesser et al., 2004). Results illustrate high performance for classifying
student-generated and chatGPT-generated analogies. To evaluate which features
contribute most to model performance, we use a hierarchical clustering
approach. Results from this analysis illustrate several linguistic differences
between the two sources.
|
[
"cs.CL"
] | false |
2306.04573
|
2023-06-07T16:21:59Z
|
Gender, names and other mysteries: Towards the ambiguous for
gender-inclusive translation
|
[
"Danielle Saunders",
"Katrina Olsen"
] |
The vast majority of work on gender in MT focuses on 'unambiguous' inputs,
where gender markers in the source language are expected to be resolved in the
output. Conversely, this paper explores the widespread case where the source
sentence lacks explicit gender markers, but the target sentence contains them
due to richer grammatical gender. We particularly focus on inputs containing
person names.
Investigating such sentence pairs casts a new light on research into MT
gender bias and its mitigation. We find that many name-gender co-occurrences in
MT data are not resolvable with 'unambiguous gender' in the source language,
and that gender-ambiguous examples can make up a large proportion of training
examples. From this, we discuss potential steps toward gender-inclusive
translation which accepts the ambiguity in both gender and translation.
|
[
"cs.CL"
] | false |
2306.04724
|
2023-06-07T18:39:57Z
|
Prompter: Zero-shot Adaptive Prefixes for Dialogue State Tracking Domain
Adaptation
|
[
"Taha Aksu",
"Min-Yen Kan",
"Nancy F. Chen"
] |
A challenge in the Dialogue State Tracking (DST) field is adapting models to
new domains without using any supervised data, i.e., zero-shot domain adaptation.
Parameter-Efficient Transfer Learning (PETL) has the potential to address this
problem due to its robustness. However, it has yet to be applied to zero-shot scenarios, as it is not clear how to apply it without supervision.
Our method, Prompter, uses descriptions of target domain slots to generate
dynamic prefixes that are concatenated to the key and values at each layer's
self-attention mechanism. This allows for the use of prefix-tuning in the zero-shot setting. Prompter outperforms previous methods on both the MultiWOZ and SGD
benchmarks. In generating prefixes, our analyses find that Prompter not only
utilizes the semantics of slot descriptions but also how often the slots appear
together in conversation. Moreover, Prompter's gains are due to its improved
ability to distinguish "none"-valued dialogue slots, compared against
baselines.
|
[
"cs.CL"
] | false |
2306.04820
|
2023-06-07T22:56:53Z
|
Good Data, Large Data, or No Data? Comparing Three Approaches in
Developing Research Aspect Classifiers for Biomedical Papers
|
[
"Shreya Chandrasekhar",
"Chieh-Yang Huang",
"Ting-Hao 'Kenneth' Huang"
] |
The rapid growth of scientific publications, particularly during the COVID-19
pandemic, emphasizes the need for tools to help researchers efficiently
comprehend the latest advancements. One essential part of understanding
scientific literature is research aspect classification, which categorizes
sentences in abstracts into Background, Purpose, Method, and Finding. In this
study, we investigate the impact of different datasets on model performance for
the crowd-annotated CODA-19 research aspect classification task. Specifically,
we explore the potential benefits of using the large, automatically curated
PubMed 200K RCT dataset and evaluate the effectiveness of large language models
(LLMs), such as LLaMA, GPT-3, ChatGPT, and GPT-4. Our results indicate that
using the PubMed 200K RCT dataset does not improve performance for the CODA-19
task. We also observe that while GPT-4 performs well, it does not outperform
the SciBERT model fine-tuned on the CODA-19 dataset, emphasizing the importance
of a dedicated and task-aligned dataset for the target task. Our code
is available at https://github.com/Crowd-AI-Lab/CODA-19-exp.
|
[
"cs.CL"
] | false |
2306.04823
|
2023-06-07T23:07:23Z
|
Data Augmentation for Improving Tail-traffic Robustness in Skill-routing
for Dialogue Systems
|
[
"Ting-Wei Wu",
"Fatemeh Sheikholeslami",
"Mohammad Kachuee",
"Jaeyoung Do",
"Sungjin Lee"
] |
Large-scale conversational systems typically rely on a skill-routing
component to route a user request to an appropriate skill and interpretation to
serve the request. In such a system, the agent is responsible for serving thousands of skills and interpretations, which creates a long-tail distribution due to the natural frequency of requests. For example, samples related to playing music might be a thousand times more frequent than those asking for
theatre show times. Moreover, inputs used for ML-based skill routing are often
a heterogeneous mix of strings, embedding vectors, categorical and scalar
features which makes employing augmentation-based long-tail learning approaches
challenging. To improve the skill-routing robustness, we propose an
augmentation of heterogeneous skill-routing data and training targeted for
robust operation in long-tail data regimes. We explore a variety of conditional
encoder-decoder generative frameworks to perturb original data fields and
create synthetic training data. To demonstrate the effectiveness of the
proposed method, we conduct extensive experiments using real-world data from a
commercial conversational system. Based on the experiment results, the proposed
approach improves more than 80% (51 out of 63) of intents with fewer than 10K traffic instances in the skill-routing replication task.
|
[
"cs.CL"
] | false |
2306.04101
|
2023-06-07T01:44:43Z
|
Gotta: Generative Few-shot Question Answering by Prompt-based Cloze Data
Augmentation
|
[
"Xiusi Chen",
"Yu Zhang",
"Jinliang Deng",
"Jyun-Yu Jiang",
"Wei Wang"
] |
Few-shot question answering (QA) aims at precisely discovering answers to a
set of questions from context passages while only a few training samples are
available. Although existing studies have made some progress and can usually
achieve reasonable results, they struggle to understand the deep semantics needed to reason out the questions. In this paper, we develop Gotta, a Generative
prOmpT-based daTa Augmentation framework to mitigate the challenge above.
Inspired by the human reasoning process, we propose to integrate the cloze task
to enhance few-shot QA learning. Following the recent success of prompt-tuning,
we present the cloze task in the same format as the main QA task, allowing the
model to learn both tasks seamlessly together to fully take advantage of the
power of prompt-tuning. Extensive experiments on widely used benchmarks
demonstrate that Gotta consistently outperforms competitive baselines,
validating the effectiveness of our proposed prompt-tuning-based cloze task,
which not only fine-tunes language models but also learns to guide reasoning in
QA tasks. Further analysis shows that the prompt-based loss incorporates the
auxiliary task better than the multi-task loss, highlighting the strength of
prompt-tuning on the few-shot QA task.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.04116
|
2023-06-07T03:03:41Z
|
Unbalanced Optimal Transport for Unbalanced Word Alignment
|
[
"Yuki Arase",
"Han Bao",
"Sho Yokoi"
] |
Monolingual word alignment is crucial to model semantic interactions between
sentences. In particular, null alignment, a phenomenon in which words have no
corresponding counterparts, is pervasive and critical in handling semantically
divergent sentences. Identification of null alignment is useful on its own to
reason about the semantic similarity of sentences by indicating there exists
information inequality. To achieve unbalanced word alignment that values both
alignment and null alignment, this study shows that the family of optimal
transport (OT), i.e., balanced, partial, and unbalanced OT, are natural and
powerful approaches even without tailor-made techniques. Our extensive
experiments covering unsupervised and supervised settings indicate that our
generic OT-based alignment methods are competitive against the state-of-the-art methods specially designed for word alignment, remarkably so on
challenging datasets with high null alignment frequencies.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.04176
|
2023-06-07T06:03:39Z
|
When to Read Documents or QA History: On Unified and Selective
Open-domain QA
|
[
"Kyungjae Lee",
"Sang-eun Han",
"Seung-won Hwang",
"Moontae Lee"
] |
This paper studies the problem of open-domain question answering, with the
aim of answering a diverse range of questions leveraging knowledge resources.
Two types of sources, QA-pair and document corpora, have been actively
leveraged with the following complementary strengths. The former is highly precise when a paraphrase of the given question $q$ was seen and answered during
training, often posed as a retrieval problem, while the latter generalizes
better for unseen questions. A natural follow-up is thus leveraging both
models, while naive pipelining or integration approaches have failed to bring additional gains over either model alone. Our distinction is to interpret the
problem as calibration, which estimates the confidence of predicted answers as
an indicator to decide when to use a document or QA-pair corpus. The
effectiveness of our method was validated on widely adopted benchmarks such as
Natural Questions and TriviaQA.
|
[
"cs.CL",
"cs.AI",
"I.2.7"
] | false |
2306.04203
|
2023-06-07T07:15:20Z
|
Leveraging Knowledge Graph Embeddings to Enhance Contextual
Representations for Relation Extraction
|
[
"Fréjus A. A. Laleye",
"Loïc Rakotoson",
"Sylvain Massip"
] |
The relation extraction task is a crucial and challenging aspect of Natural
Language Processing. Several methods have surfaced as of late, exhibiting
notable performance in addressing the task; however, most of these approaches
rely on vast amounts of data from large-scale knowledge graphs or language
models pretrained on voluminous corpora. In this paper, we hone in on the
effective utilization of solely the knowledge supplied by a corpus to create a
high-performing model. Our objective is to showcase that by leveraging the
hierarchical structure and relational distribution of entities within a corpus
without introducing external knowledge, a relation extraction model can achieve
significantly enhanced performance. We therefore proposed a relation extraction
approach based on the incorporation of pretrained knowledge graph embeddings at
the corpus scale into the sentence-level contextual representation. We
conducted a series of experiments which revealed promising and very interesting results for our proposed approach. The obtained results demonstrate that our method outperforms context-based relation extraction models.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.04340
|
2023-06-07T11:11:12Z
|
Co-evolving Graph Reasoning Network for Emotion-Cause Pair Extraction
|
[
"Bowen Xing",
"Ivor W. Tsang"
] |
Emotion-Cause Pair Extraction (ECPE) aims to extract all emotion clauses and
their corresponding cause clauses from a document. Existing approaches tackle
this task through multi-task learning (MTL) framework in which the two subtasks
provide indicative clues for ECPE. However, the previous MTL framework
considers only one round of multi-task reasoning and ignores the reverse
feedback from ECPE to the subtasks. Besides, its multi-task reasoning only
relies on semantics-level interactions, which cannot capture the explicit
dependencies, and both the encoder sharing and multi-task hidden states
concatenations can hardly capture the causalities. To solve these issues, we
first put forward a new MTL framework based on Co-evolving Reasoning. It (1)
models the bidirectional feedbacks between ECPE and its subtasks; (2) allows
the three tasks to evolve together and prompt each other recurrently; (3)
integrates prediction-level interactions to capture explicit dependencies. Then
we propose a novel multi-task relational graph (MRG) to sufficiently exploit
the causal relations. Finally, we propose a Co-evolving Graph Reasoning Network
(CGR-Net) that implements our MTL framework and conducts Co-evolving Reasoning
on MRG. Experimental results show that our model achieves new state-of-the-art
performance, and further analysis confirms the advantages of our method.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.04508
|
2023-06-07T15:20:24Z
|
Enhancing In-Context Learning with Answer Feedback for Multi-Span
Question Answering
|
[
"Zixian Huang",
"Jiaying Zhou",
"Gengyang Xiao",
"Gong Cheng"
] |
While recently emerged large language models (LLMs) like ChatGPT have exhibited impressive general performance, they still have a large gap with fully-supervised models on specific tasks such as multi-span question answering. Previous research found that in-context learning is an effective approach to exploiting an LLM, by using a few task-related labeled data as demonstration examples to construct a few-shot prompt for answering new questions. A popular implementation is to concatenate a few questions and their correct answers through simple templates, informing the LLM of the desired output. In this paper, we propose a novel way of employing labeled data such that it also informs the LLM of some undesired output, by extending demonstration examples with feedback about answers predicted by an off-the-shelf model, e.g., correct, incorrect, or incomplete. Experiments on three multi-span question answering datasets as well as a keyphrase extraction dataset show that our new prompting strategy consistently improves the LLM's in-context learning performance.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.04544
|
2023-06-07T15:49:04Z
|
Contrastive Bootstrapping for Label Refinement
|
[
"Shudi Hou",
"Yu Xia",
"Muhao Chen",
"Sujian Li"
] |
Traditional text classification typically categorizes texts into pre-defined coarse-grained classes; models produced this way cannot handle the real-world scenario where finer categories emerge periodically and are required for accurate services. In this work, we investigate the setting where fine-grained
classification is done only using the annotation of coarse-grained categories
and the coarse-to-fine mapping. We propose a lightweight contrastive
clustering-based bootstrapping method to iteratively refine the labels of
passages. During clustering, it pulls away negative passage-prototype pairs
under the guidance of the mapping from both global and local perspectives.
Experiments on NYT and 20News show that our method outperforms the
state-of-the-art methods by a large margin.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.04597
|
2023-06-07T16:50:03Z
|
Language Models Get a Gender Makeover: Mitigating Gender Bias with
Few-Shot Data Interventions
|
[
"Himanshu Thakur",
"Atishay Jain",
"Praneetha Vaddamanu",
"Paul Pu Liang",
"Louis-Philippe Morency"
] |
Societal biases present in pre-trained large language models are a critical
issue as these models have been shown to propagate biases in countless
downstream applications, rendering them unfair towards specific groups of
people. Since large-scale retraining of these models from scratch is both time-
and compute-expensive, a variety of approaches have been previously proposed
that de-bias a pre-trained model. While the majority of current
state-of-the-art debiasing methods focus on changes to the training regime, in
this paper, we propose data intervention strategies as a powerful yet simple
technique to reduce gender bias in pre-trained models. Specifically, we
empirically show that by fine-tuning a pre-trained model on only 10 de-biased
(intervened) training examples, the tendency to favor any gender is
significantly reduced. Since our proposed method only needs a few training
examples, our few-shot debiasing approach is highly feasible and practical.
Through extensive experimentation, we show that our debiasing technique
performs better than competitive state-of-the-art baselines with minimal loss
in language modeling ability.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.04610
|
2023-06-07T17:22:03Z
|
The Two Word Test: A Semantic Benchmark for Large Language Models
|
[
"Nicholas Riccardi",
"Rutvik H. Desai"
] |
Large Language Models (LLMs) have shown remarkable abilities recently,
including passing advanced professional exams and demanding benchmark tests.
This performance has led many to suggest that they are close to achieving
humanlike or 'true' understanding of language, and even Artificial General
Intelligence (AGI). Here, we provide a new open-source benchmark, the Two Word
Test (TWT), that assesses the semantic abilities of LLMs on two-word phrases
via a task that can be performed relatively easily by humans without advanced
training. Combining multiple words into a single concept is a fundamental
aspect of human language and intelligence. The test requires meaningfulness
judgments of 1768 noun-noun combinations that have been rated as meaningful
(e.g., baby boy) or not meaningful (e.g., goat sky) by 150 human raters. We
provide versions of the task that probe meaningfulness ratings on a 0-4 scale
as well as binary judgments. We conducted a series of experiments using the
TWT on GPT-4, GPT-3.5, and Bard, with both versions. Results demonstrated
that, compared to humans, all models perform poorly at rating the
meaningfulness of these phrases. GPT-3.5 and Bard are also unable to make
binary discriminations between sensible and nonsense phrases. GPT-4 makes a substantial
improvement in binary discrimination of combinatorial phrases but is still
significantly worse than human performance. The TWT can be used to understand
the limitations and weaknesses of current LLMs, and potentially improve them.
The test also reminds us that caution is warranted in attributing 'true
understanding' or AGI to LLMs. TWT is available at:
https://github.com/NickRiccardi/two-word-test
|
[
"cs.CL",
"cs.AI"
] | false |
2306.04707
|
2023-06-07T18:19:46Z
|
Improving Open Language Models by Learning from Organic Interactions
|
[
"Jing Xu",
"Da Ju",
"Joshua Lane",
"Mojtaba Komeili",
"Eric Michael Smith",
"Megan Ung",
"Morteza Behrooz",
"William Ngan",
"Rashel Moritz",
"Sainbayar Sukhbaatar",
"Y-Lan Boureau",
"Jason Weston",
"Kurt Shuster"
] |
We present BlenderBot 3x, an update on the conversational model BlenderBot 3,
which is now trained using organic conversation and feedback data from
participating users of the system in order to improve both its skills and
safety. We are publicly releasing the participating de-identified interaction
data for use by the research community, in order to spur further progress.
Training models with organic data is challenging because interactions with
people "in the wild" include both high quality conversations and feedback, as
well as adversarial and toxic behavior. We study techniques that enable
learning from helpful teachers while avoiding learning from people who are
trying to trick the model into unhelpful or toxic responses. BlenderBot 3x is
both preferred in conversation to BlenderBot 3, and is shown to produce safer
responses in challenging situations. While our current models are still far
from perfect, we believe further improvement can be achieved by continued use
of the techniques explored in this work.
|
[
"cs.CL",
"cs.AI"
] | true |
2306.04765
|
2023-06-07T20:24:43Z
|
The HCI Aspects of Public Deployment of Research Chatbots: A User Study,
Design Recommendations, and Open Challenges
|
[
"Morteza Behrooz",
"William Ngan",
"Joshua Lane",
"Giuliano Morse",
"Benjamin Babcock",
"Kurt Shuster",
"Mojtaba Komeili",
"Moya Chen",
"Melanie Kambadur",
"Y-Lan Boureau",
"Jason Weston"
] |
Publicly deploying research chatbots is a nuanced topic involving necessary
risk-benefit analyses. While there have recently been frequent discussions on
whether it is responsible to deploy such models, there has been far less focus
on the interaction paradigms and design approaches that the resulting
interfaces should adopt, in order to achieve their goals more effectively. We
aim to pose, ground, and attempt to answer HCI questions involved in this
scope, by reporting on a mixed-methods user study conducted on a recent
research chatbot. We find that abstract anthropomorphic representation for the
agent has a significant effect on users' perception, that offering AI
explainability may have an impact on feedback rates, and that two (diegetic and
extradiegetic) levels of the chat experience should be intentionally designed.
We offer design recommendations and areas of further focus for the research
community.
|
[
"cs.AI",
"cs.CL"
] | false |
2306.04787
|
2023-06-07T21:18:23Z
|
Absformer: Transformer-based Model for Unsupervised Multi-Document
Abstractive Summarization
|
[
"Mohamed Trabelsi",
"Huseyin Uzunalioglu"
] |
Multi-document summarization (MDS) refers to the task of summarizing the text
in multiple documents into a concise summary. The generated summary can save
the time of reading many documents by providing the important content in the
form of a few sentences. Abstractive MDS aims to generate a coherent and fluent
summary for multiple documents using natural language generation techniques. In
this paper, we consider the unsupervised abstractive MDS setting where there
are only documents with no ground-truth summaries provided, and we propose
Absformer, a new Transformer-based method for unsupervised abstractive summary
generation. Our method consists of a first step where we pretrain a
Transformer-based encoder using the masked language modeling (MLM) objective as
the pretraining task in order to cluster the documents into semantically
similar groups; and a second step where we train a Transformer-based decoder to
generate abstractive summaries for the clusters of documents. To our knowledge,
we are the first to successfully incorporate a Transformer-based model to solve
the unsupervised abstractive MDS task. We evaluate our approach using three
real-world datasets from different domains, and we demonstrate both substantial
improvements in terms of evaluation metrics over state-of-the-art
abstractive-based methods, and generalization to datasets from different
domains.
|
[
"cs.CL",
"cs.LG"
] | false |
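A rough sketch of the first Absformer step described in 2306.04787 above, under stated assumptions: documents are embedded with an MLM-pretrained encoder and grouped with k-means. The encoder name, mean pooling, and k-means are illustrative choices, not the paper's exact configuration.

import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

def cluster_documents(docs, n_clusters, model_name="bert-base-uncased"):
    """Embed each document with an encoder (assumed MLM-pretrained), then
    cluster; a decoder would later generate one summary per cluster."""
    tok = AutoTokenizer.from_pretrained(model_name)
    enc = AutoModel.from_pretrained(model_name).eval()
    embs = []
    with torch.no_grad():
        for d in docs:
            batch = tok(d, truncation=True, return_tensors="pt")
            h = enc(**batch).last_hidden_state      # (1, seq_len, dim)
            embs.append(h.mean(dim=1).squeeze(0))   # mean-pooled embedding
    X = torch.stack(embs).numpy()
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)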
2306.04076
|
2023-06-07T00:33:02Z
|
Text-only Domain Adaptation using Unified Speech-Text Representation in
Transducer
|
[
"Lu Huang",
"Boyu Li",
"Jun Zhang",
"Lu Lu",
"Zejun Ma"
] |
Domain adaptation using a text-only corpus is challenging in end-to-end (E2E)
speech recognition. Adaptation by synthesizing audio from text through TTS is
resource-consuming. We present a method to learn a Unified Speech-Text
Representation in the Conformer Transducer (USTR-CT) to enable fast domain
adaptation using a text-only corpus. Different from the previous textogram
method, an extra text encoder is introduced in our work to learn text
representation and is removed during inference, so there is no modification for
online deployment. To improve the efficiency of adaptation, single-step and
multi-step adaptations are also explored. The experiments on adapting
LibriSpeech to SPGISpeech show that the proposed method reduces the word error
rate (WER) by a relative 44% on the target domain, outperforming both the TTS
method and the textogram method. We also show that the proposed method can be
combined with internal language model estimation (ILME) to further improve the
performance.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | true |
2306.04190
|
2023-06-07T06:58:38Z
|
An ASR-Based Tutor for Learning to Read: How to Optimize Feedback to
First Graders
|
[
"Yu Bai",
"Cristian Tejedor-Garcia",
"Ferdy Hubers",
"Catia Cucchiarini",
"Helmer Strik"
] |
The interest in employing automatic speech recognition (ASR) in applications
for reading practice has been growing in recent years. In a previous study, we
presented an ASR-based Dutch reading tutor application that was developed to
provide instantaneous feedback to first-graders learning to read. We saw that
ASR has potential at this stage of the reading process, as the results
suggested that pupils made progress in reading accuracy and fluency by using
the software. In the current study, we used children's speech from an existing
corpus (JASMIN) to develop two new ASR systems, and compared the results to
those of the previous study. We analyze correct/incorrect classification of the
ASR systems using human transcripts at word level, by means of evaluation
measures such as Cohen's Kappa, Matthews Correlation Coefficient (MCC),
precision, recall and F-measures. We observe improvements for the newly
developed ASR systems regarding the agreement with human-based judgment and
correct rejection (CR). The accuracy of the ASR systems varies for different
reading tasks and word types. Our results suggest that, in the current
configuration, it is difficult to classify isolated words. We discuss these
results, possible ways to improve our systems and avenues for future research.
|
[
"cs.CL",
"cs.LG",
"cs.SD",
"eess.AS"
] | false |
2306.04233
|
2023-06-07T08:23:58Z
|
Transfer Learning from Pre-trained Language Models Improves End-to-End
Speech Summarization
|
[
"Kohei Matsuura",
"Takanori Ashihara",
"Takafumi Moriya",
"Tomohiro Tanaka",
"Takatomo Kano",
"Atsunori Ogawa",
"Marc Delcroix"
] |
End-to-end speech summarization (E2E SSum) directly summarizes input speech
into easy-to-read short sentences with a single model. This approach is
promising because it, in contrast to the conventional cascade approach, can
utilize full acoustical information and mitigate the propagation of
transcription errors. However, due to the high cost of collecting
speech-summary pairs, an E2E SSum model tends to suffer from training data
scarcity and output unnatural sentences. To overcome this drawback, we propose
for the first time to integrate a pre-trained language model (LM), which is
highly capable of generating natural sentences, into the E2E SSum decoder via
transfer learning. In addition, to reduce the gap between the independently
pre-trained encoder and decoder, we also propose to transfer the baseline E2E
SSum encoder instead of the commonly used automatic speech recognition encoder.
Experimental results show that the proposed model outperforms baseline and data
augmented models.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2306.04268
|
2023-06-07T09:09:00Z
|
Multi-microphone Automatic Speech Segmentation in Meetings Based on
Circular Harmonics Features
|
[
"Théo Mariotte",
"Anthony Larcher",
"Silvio Montrésor",
"Jean-Hugh Thomas"
] |
Speaker diarization is the task of answering "Who spoke and when?" in an audio
stream. Pipeline systems rely on speech segmentation to extract speakers'
segments and achieve robust speaker diarization. This paper proposes a common
framework to solve three segmentation tasks in the distant speech scenario:
Voice Activity Detection (VAD), Overlapped Speech Detection (OSD), and Speaker
Change Detection (SCD). In the literature, a few studies investigate the
multi-microphone distant speech scenario. In this work, we propose a new set of
spatial features based on direction-of-arrival estimations in the circular
harmonic domain (CH-DOA). These spatial features are extracted from
multi-microphone audio data and combined with standard acoustic features.
Experiments on the AMI meeting corpus show that CH-DOA can improve the
segmentation while being robust in the case of deactivated microphones.
|
[
"cs.SD",
"cs.CL",
"eess.AS"
] | false |
2306.04293
|
2023-06-07T09:46:38Z
|
Phrase Retrieval for Open-Domain Conversational Question Answering with
Conversational Dependency Modeling via Contrastive Learning
|
[
"Soyeong Jeong",
"Jinheon Baek",
"Sung Ju Hwang",
"Jong C. Park"
] |
Open-Domain Conversational Question Answering (ODConvQA) aims at answering
questions through a multi-turn conversation based on a retriever-reader
pipeline, which retrieves passages and then predicts answers with them.
However, such a pipeline approach not only makes the reader vulnerable to the
errors propagated from the retriever, but also demands additional effort to
develop both the retriever and the reader, which further makes it slower since
they are not runnable in parallel. In this work, we propose a method to
directly predict answers with a phrase retrieval scheme for a sequence of
words, reducing the conventional two distinct subtasks into a single one. Also,
for the first time, we study its capability for ODConvQA tasks. However, simply
adopting it is largely problematic, due to the dependencies between previous
and current turns in a conversation. To address this problem, we further
introduce a novel contrastive learning strategy, making sure to reflect
previous turns when retrieving the phrase for the current context, by
maximizing representational similarities of consecutive turns in a conversation
while minimizing irrelevant conversational contexts. We validate our model on
two ODConvQA datasets, whose experimental results show that it substantially
outperforms the relevant baselines with the retriever-reader. Code is available
at: https://github.com/starsuzi/PRO-ConvQA.
|
[
"cs.CL",
"cs.IR",
"cs.LG"
] | false |
2306.04368
|
2023-06-07T12:01:46Z
|
Arabic Dysarthric Speech Recognition Using Adversarial and Signal-Based
Augmentation
|
[
"Massa Baali",
"Ibrahim Almakky",
"Shady Shehata",
"Fakhri Karray"
] |
Despite major advancements in Automatic Speech Recognition (ASR), the
state-of-the-art ASR systems struggle to deal with impaired speech even with
high-resource languages. In Arabic, this challenge gets amplified, with added
complexities in collecting data from dysarthric speakers. In this paper, we aim
to improve the performance of Arabic dysarthric automatic speech recognition
through a multi-stage augmentation approach. To this effect, we first propose a
signal-based approach to generate dysarthric Arabic speech from healthy Arabic
speech by modifying its speed and tempo. We also propose a second stage
Parallel Wave Generative (PWG) adversarial model that is trained on an English
dysarthric dataset to capture language-independent dysarthric speech patterns
and further augment the signal-adjusted speech samples. Furthermore, we propose
fine-tuning and text-correction strategies for Arabic Conformer at different
dysarthric speech severity levels. Our fine-tuned Conformer achieved 18% Word
Error Rate (WER) and 17.2% Character Error Rate (CER) on synthetically
generated dysarthric speech from the Arabic commonvoice speech dataset. This
shows significant WER improvement of 81.8% compared to the baseline model
trained solely on healthy data. We perform further validation on real English
dysarthric speech showing a WER improvement of 124% compared to the baseline
trained only on healthy English LJSpeech dataset.
|
[
"cs.SD",
"cs.CL",
"eess.AS"
] | false |
2306.04374
|
2023-06-07T12:14:16Z
|
Label Aware Speech Representation Learning For Language Identification
|
[
"Shikhar Vashishth",
"Shikhar Bharadwaj",
"Sriram Ganapathy",
"Ankur Bapna",
"Min Ma",
"Wei Han",
"Vera Axelrod",
"Partha Talukdar"
] |
Speech representation learning approaches for non-semantic tasks such as
language recognition have either explored supervised embedding extraction
methods using a classifier model or self-supervised representation learning
approaches using raw data. In this paper, we propose a novel framework of
combining self-supervised representation learning with the language label
information for the pre-training task. This framework, termed as Label Aware
Speech Representation (LASR) learning, uses a triplet based objective function
to incorporate language labels along with the self-supervised loss function.
The speech representations are further fine-tuned for the downstream task. The
language recognition experiments are performed on two public datasets - FLEURS
and Dhwani. In these experiments, we illustrate that the proposed LASR
framework improves over the state-of-the-art systems on language
identification. We also report an analysis of the robustness of LASR approach
to noisy/missing labels as well as its application to multi-lingual speech
recognition tasks.
|
[
"cs.CL",
"cs.LG",
"cs.SD",
"eess.AS"
] | false |
2306.04384
|
2023-06-07T12:31:07Z
|
Multilingual Clinical NER: Translation or Cross-lingual Transfer?
|
[
"Xavier Fontaine",
"Félix Gaschi",
"Parisa Rastin",
"Yannick Toussaint"
] |
Natural language tasks like Named Entity Recognition (NER) in the clinical
domain on non-English texts can be very time-consuming and expensive due to the
lack of annotated data. Cross-lingual transfer (CLT) is a way to circumvent
this issue thanks to the ability of multilingual large language models to be
fine-tuned on a specific task in one language and to provide high accuracy for
the same task in another language. However, other methods leveraging
translation models can be used to perform NER without annotated data in the
target language, by either translating the training set or test set. This paper
compares cross-lingual transfer with these two alternative methods, to perform
clinical NER in French and in German without any training data in those
languages. To this end, we release MedNERF a medical NER test set extracted
from French drug prescriptions and annotated with the same guidelines as an
English dataset. Through extensive experiments on this dataset and on a German
medical dataset (Frei and Kramer, 2021), we show that translation-based methods
can achieve similar performance to CLT but require more care in their design.
And while they can take advantage of monolingual clinical language models,
those do not guarantee better results than large general-purpose multilingual
models, whether with cross-lingual transfer or translation.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2306.04563
|
2023-06-07T16:10:21Z
|
ChatGPT is fun, but it is not funny! Humor is still challenging Large
Language Models
|
[
"Sophie Jentzsch",
"Kristian Kersting"
] |
Humor is a central aspect of human communication that has not been solved for
artificial agents so far. Large language models (LLMs) are increasingly able to
capture implicit and contextual information. In particular, OpenAI's ChatGPT
has recently gained immense public attention. The GPT-3-based model almost seems to
communicate on a human level and can even tell jokes. Humor is an essential
component of human communication. But is ChatGPT really funny? We put ChatGPT's
sense of humor to the test. In a series of exploratory experiments around
jokes, i.e., generation, explanation, and detection, we seek to understand
ChatGPT's capability to grasp and reproduce human humor. Since the model itself
is not accessible, we applied prompt-based experiments. Our empirical evidence
indicates that jokes are not hard-coded but mostly also not newly generated by
the model. Over 90% of 1008 generated jokes were the same 25 jokes. The system
accurately explains valid jokes but also comes up with fictional explanations
for invalid jokes. Joke-typical characteristics can mislead ChatGPT in the
classification of jokes. ChatGPT has not solved computational humor yet but it
can be a big leap toward "funny" machines.
|
[
"cs.AI",
"cs.CL",
"cs.HC",
"cs.LG"
] | false |
2306.04803
|
2023-06-07T21:53:14Z
|
Privately generating tabular data using language models
|
[
"Alexandre Sablayrolles",
"Yue Wang",
"Brian Karrer"
] |
Privately generating synthetic data from a table is an important brick of a
privacy-first world. We propose and investigate a simple approach of treating
each row in a table as a sentence and training a language model with
differential privacy. We show this approach obtains competitive results in
modelling tabular data across multiple datasets, even at small scales that
favor alternative methods based on marginal distributions.
|
[
"cs.LG",
"cs.CL",
"cs.CR"
] | false |
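A minimal sketch of the rows-as-sentences idea from 2306.04803 above: serialize each table row into text that a language model can be trained on. The serialization template is an assumption, and the differentially private training itself (e.g. DP-SGD) is omitted here.

def row_to_sentence(row):
    """Serialize one tabular record into a 'sentence', e.g.
    {"age": 34, "job": "nurse"} -> "age is 34, job is nurse".
    The template is a hypothetical choice."""
    return ", ".join(f"{col} is {val}" for col, val in row.items())

def table_to_corpus(rows):
    # The resulting corpus is what the (DP-trained) language model sees;
    # sampling from that model and parsing the template back yields rows.
    return [row_to_sentence(r) for r in rows]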
2306.07936
|
2023-06-07T12:33:02Z
|
FOOCTTS: Generating Arabic Speech with Acoustic Environment for Football
Commentator
|
[
"Massa Baali",
"Ahmed Ali"
] |
This paper presents FOOCTTS, an automatic pipeline for a football commentator
that generates speech with background crowd noise. The application gets the
text from the user, applies text pre-processing such as vowelization, and then
runs the commentator's speech synthesizer. Our pipeline includes Arabic automatic
speech recognition for data labeling, CTC segmentation, transcription
vowelization to match speech, and fine-tuning the TTS. Our system is capable of
generating speech with its acoustic environment using only 15 minutes of
football commentator recordings. Our prototype is generalizable and can be
easily applied to different domains and languages.
|
[
"eess.AS",
"cs.CL",
"cs.SD"
] | false |
2306.04073
|
2023-06-07T00:16:10Z
|
Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient
for Convolutional Neural Networks
|
[
"Mohammed Nowaz Rabbani Chowdhury",
"Shuai Zhang",
"Meng Wang",
"Sijia Liu",
"Pin-Yu Chen"
] |
In deep learning, mixture-of-experts (MoE) activates one or few experts
(sub-networks) on a per-sample or per-token basis, resulting in significant
computation reduction. The recently proposed patch-level routing in
MoE (pMoE) divides each input into $n$ patches (or tokens) and
sends $l$ patches ($l\ll n$) to each expert through prioritized routing. pMoE
has demonstrated great empirical success in reducing training and inference
costs while maintaining test accuracy. However, the theoretical explanation of
pMoE and the general MoE remains elusive. Focusing on a supervised
classification task using a mixture of two-layer convolutional neural networks
(CNNs), we show for the first time that pMoE provably reduces the required
number of training samples to achieve desirable generalization (referred to as
the sample complexity) by a factor in the polynomial order of $n/l$, and
outperforms its single-expert counterpart of the same or even larger capacity.
The advantage results from the discriminative routing property, which is
justified in both theory and practice that pMoE routers can filter
label-irrelevant patches and route similar class-discriminative patches to the
same expert. Our experimental results on MNIST, CIFAR-10, and CelebA support
our theoretical findings on pMoE's generalization and show that pMoE can avoid
learning spurious correlations.
|
[
"cs.LG"
] | false |
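A toy sketch of the prioritized patch routing that pMoE (2306.04073 above) performs, assuming a learned linear router: each expert receives only its $l$ highest-scoring patches. Shapes and the scoring rule are illustrative.

import torch

def pmoe_route(patches, router_weights, l):
    """patches: (n, d) patch embeddings of one sample;
    router_weights: (d, num_experts) learned router.
    Returns, per expert, the indices of its l top-priority patches."""
    scores = patches @ router_weights                 # (n, num_experts)
    topl = torch.topk(scores, k=l, dim=0).indices     # (l, num_experts)
    return [topl[:, e] for e in range(router_weights.shape[1])]

# Example: 16 patches of dimension 8 routed to 4 experts, l = 3 each.
routed = pmoe_route(torch.randn(16, 8), torch.randn(8, 4), l=3)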
2306.04099
|
2023-06-07T01:43:47Z
|
NTKCPL: Active Learning on Top of Self-Supervised Model by Estimating
True Coverage
|
[
"Ziting Wen",
"Oscar Pizarro",
"Stefan Williams"
] |
High annotation cost for training machine learning classifiers has driven
extensive research in active learning and self-supervised learning. Recent
research has shown that in the context of supervised learning different active
learning strategies need to be applied at various stages of the training
process to ensure improved performance over the random baseline. We refer to
the point where the number of available annotations changes the suitable active
learning strategy as the phase transition point. In this paper, we establish
that when combining active learning with self-supervised models to achieve
improved performance, the phase transition point occurs earlier. It becomes
challenging to determine which strategy should be used for previously unseen
datasets. We argue that existing active learning algorithms are heavily
influenced by the phase transition because the empirical risk over the entire
active learning pool estimated by these algorithms is inaccurate and influenced
by the number of labeled samples. To address this issue, we propose a novel
active learning strategy, neural tangent kernel clustering-pseudo-labels
(NTKCPL). It estimates empirical risk based on pseudo-labels and the model
prediction with NTK approximation. We analyze the factors affecting this
approximation error and design a pseudo-label clustering generation method to
reduce the approximation error. We validate our method on five datasets,
empirically demonstrating that it outperforms the baseline methods in most
cases and is valid over a wider range of training budgets.
|
[
"cs.LG"
] | false |
2306.04109
|
2023-06-07T02:29:58Z
|
Membership inference attack with relative decision boundary distance
|
[
"JiaCheng Xu",
"ChengXiang Tan"
] |
Membership inference attack is one of the most popular privacy attacks in
machine learning, which aims to predict whether a given sample was contained in
the target model's training set. Label-only membership inference attack is a
variant that exploits sample robustness and attracts more attention since it
assumes a practical scenario in which the adversary only has access to the
predicted labels of the input samples. However, since the decision boundary
distance, which measures robustness, is strongly affected by the random initial
image, the adversary may get opposite results even for the same input samples.
In this paper, we propose a new attack method, called multi-class adaptive
membership inference attack, in the label-only setting. Decision boundary
distances for all target classes are traversed in the early attack
iterations, and the subsequent attack iterations continue with the shortest
decision boundary distance to obtain a stable and optimal decision boundary
distance. Instead of using a single boundary distance, the relative boundary
distance between samples and neighboring points is also employed as a new
membership score to distinguish between member samples inside the training set
and nonmember samples outside the training set. Experiments show that previous
label-only membership inference attacks using the untargeted HopSkipJump
algorithm fail to achieve optimal decision bounds in more than half of the
samples, whereas our multi-targeted HopSkipJump algorithm succeeds in almost
all samples. In addition, extensive experiments show that our multi-class
adaptive MIA outperforms current label-only membership inference attacks in the
CIFAR-10 and CIFAR-100 datasets, especially for the true-positive rate at low
false positive rates metric.
|
[
"cs.LG"
] | false |
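A schematic of the relative boundary-distance membership score from 2306.04109 above. It assumes a black-box boundary_distance(model, x) that returns a label-only (HopSkipJump-style) estimate of the distance to the decision boundary; the exact neighborhood definition is an assumption.

import numpy as np

def relative_membership_score(model, x, neighbors, boundary_distance):
    """Score a sample by its boundary distance relative to nearby points,
    rather than by the absolute distance alone; larger values suggest the
    sample was in the training set (members tend to be more robust)."""
    d_x = boundary_distance(model, x)
    d_ref = np.mean([boundary_distance(model, n) for n in neighbors])
    return d_x / (d_ref + 1e-12)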
2306.04160
|
2023-06-07T05:18:27Z
|
Rethinking Weak Supervision in Helping Contrastive Learning
|
[
"Jingyi Cui",
"Weiran Huang",
"Yifei Wang",
"Yisen Wang"
] |
Contrastive learning has shown outstanding performances in both supervised
and unsupervised learning, and has recently been introduced to solve weakly
supervised learning problems such as semi-supervised learning and noisy label
learning. Despite the empirical evidence showing that semi-supervised labels
improve the representations of contrastive learning, it remains unknown if
noisy supervised information can be directly used in training instead of after
manual denoising. Therefore, to explore the mechanical differences between
semi-supervised and noisy-labeled information in helping contrastive learning,
we establish a unified theoretical framework of contrastive learning under weak
supervision. Specifically, we investigate the most intuitive paradigm of
jointly training supervised and unsupervised contrastive losses. By translating
the weakly supervised information into a similarity graph under the framework
of spectral clustering based on the posterior probability of weak labels, we
establish the downstream classification error bound. We prove that
semi-supervised labels improve the downstream error bound whereas noisy labels
have limited effects under such a paradigm. Our theoretical findings here
provide new insights for the community to rethink the role of weak supervision
in helping contrastive learning.
|
[
"cs.LG"
] | false |
2306.04343
|
2023-06-07T11:17:07Z
|
Bayesian Optimisation Against Climate Change: Applications and
Benchmarks
|
[
"Sigrid Passano Hellan",
"Christopher G. Lucas",
"Nigel H. Goddard"
] |
Bayesian optimisation is a powerful method for optimising black-box
functions, popular in settings where the true function is expensive to evaluate
and no gradient information is available. Bayesian optimisation can improve
responses to many optimisation problems within climate change for which
simulator models are unavailable or expensive to sample from. While there have
been several feasibility demonstrations of Bayesian optimisation in
climate-related applications, there has been no unifying review of applications
and benchmarks. We provide such a review here, to encourage the use of Bayesian
optimisation in important and well-suited application domains. We identify four
main application domains: material discovery, wind farm layout, optimal
renewable control and environmental monitoring. For each domain we identify a
public benchmark or data set that is easy to use and evaluate systems against,
while being representative of real-world problems. Due to the lack of a
suitable benchmark for environmental monitoring, we propose LAQN-BO, based on
air pollution data. Our contributions are: a) identifying a representative
range of benchmarks, providing example code where necessary; b) introducing a
new benchmark, LAQN-BO; and c) promoting a wider use of climate change
applications among Bayesian optimisation practitioners.
|
[
"cs.LG"
] | false |
2306.04403
|
2023-06-07T13:02:24Z
|
Policy-Based Self-Competition for Planning Problems
|
[
"Jonathan Pirnay",
"Quirin Göttl",
"Jakob Burger",
"Dominik Gerhard Grimm"
] |
AlphaZero-type algorithms may stop improving on single-player tasks in case
the value network guiding the tree search is unable to approximate the outcome
of an episode sufficiently well. One technique to address this problem is
transforming the single-player task through self-competition. The main idea is
to compute a scalar baseline from the agent's historical performances and to
reshape an episode's reward into a binary output, indicating whether the
baseline has been exceeded or not. However, this baseline only carries limited
information for the agent about how to improve. We leverage the idea
of self-competition and directly incorporate a historical policy into the
planning process instead of its scalar performance. Based on the recently
introduced Gumbel AlphaZero (GAZ), we propose our algorithm GAZ 'Play-to-Plan'
(GAZ PTP), in which the agent learns to find strong trajectories by planning
against possible strategies of its past self. We show the effectiveness of our
approach in two well-known combinatorial optimization problems, the Traveling
Salesman Problem and the Job-Shop Scheduling Problem. With only half of the
simulation budget for search, GAZ PTP consistently outperforms all selected
single-player variants of GAZ.
|
[
"cs.LG"
] | false |
2306.04548
|
2023-06-07T15:51:06Z
|
Convergence of SARSA with linear function approximation: The random
horizon case
|
[
"Lina Palmborg"
] |
The reinforcement learning algorithm SARSA combined with linear function
approximation has been shown to converge for infinite horizon discounted Markov
decision problems (MDPs). In this paper, we investigate the convergence of the
algorithm for random horizon MDPs, which has not previously been shown. We
show, similar to earlier results for infinite horizon discounted MDPs, that if
the behaviour policy is $\varepsilon$-soft and Lipschitz continuous with
respect to the weight vector of the linear function approximation, with small
enough Lipschitz constant, then the algorithm will converge with probability
one when considering a random horizon MDP.
|
[
"cs.LG"
] | false |
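For reference, a standard episodic SARSA loop with linear function approximation of the kind the convergence result in 2306.04548 above concerns. The environment interface (reset() -> state, step(a) -> (state, reward, done)) and the feature map are assumptions; the horizon may end at a random time, and no discounting is applied.

import numpy as np

def sarsa_linear(env, features, n_actions, alpha=0.01, eps=0.1, episodes=1000):
    """SARSA with q(s, a) = w . phi(s, a) and an epsilon-soft policy."""
    w = np.zeros_like(features(env.reset(), 0))

    def policy(s):
        if np.random.rand() < eps:                    # epsilon-soft exploration
            return np.random.randint(n_actions)
        return int(np.argmax([w @ features(s, a) for a in range(n_actions)]))

    for _ in range(episodes):
        s = env.reset()
        a = policy(s)
        done = False
        while not done:                               # random episode length
            s2, r, done = env.step(a)
            target = r
            if not done:
                a2 = policy(s2)
                target += w @ features(s2, a2)        # undiscounted TD target
            w = w + alpha * (target - w @ features(s, a)) * features(s, a)
            if not done:
                s, a = s2, a2
    return w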
2306.04718
|
2023-06-07T18:30:25Z
|
Neural Symbolic Regression using Control Variables
|
[
"Xieting Chu",
"Hongjue Zhao",
"Enze Xu",
"Hairong Qi",
"Minghan Chen",
"Huajie Shao"
] |
Symbolic regression (SR) is a powerful technique for discovering the
analytical mathematical expression from data, finding various applications in
natural sciences due to its good interpretability of results. However, existing
methods face scalability issues when dealing with complex equations involving
multiple variables. To address this challenge, we propose SRCV, a novel neural
symbolic regression method that leverages control variables to enhance both
accuracy and scalability. The core idea is to decompose multi-variable symbolic
regression into a set of single-variable SR problems, which are then combined
in a bottom-up manner. The proposed method involves a four-step process. First,
we learn a data generator from observed data using deep neural networks (DNNs).
Second, the data generator is used to generate samples for a certain variable
by controlling the input variables. Third, single-variable symbolic
regression is applied to estimate the corresponding mathematical expression.
Lastly, we repeat steps 2 and 3 by gradually adding variables one by one until
completion. We evaluate the performance of our method on multiple benchmark
datasets. Experimental results demonstrate that the proposed SRCV significantly
outperforms state-of-the-art baselines in discovering mathematical expressions
with multiple variables. Moreover, it can substantially reduce the search space
for symbolic regression. The source code will be made publicly available upon
publication.
|
[
"cs.LG"
] | false |
2306.04739
|
2023-06-07T19:28:32Z
|
Automatic retrieval of corresponding US views in longitudinal
examinations
|
[
"Hamideh Kerdegari",
"Tran Huy Nhat Phung1",
"Van Hao Nguyen",
"Thi Phuong Thao Truong",
"Ngoc Minh Thu Le",
"Thanh Phuong Le",
"Thi Mai Thao Le",
"Luigi Pisani",
"Linda Denehy",
"Vital Consortium",
"Reza Razavi",
"Louise Thwaites",
"Sophie Yacoub",
"Andrew P. King",
"Alberto Gomez"
] |
Skeletal muscle atrophy is a common occurrence in critically ill patients in
the intensive care unit (ICU) who spend long periods in bed. Muscle mass must
be recovered through physiotherapy before patient discharge and ultrasound
imaging is frequently used to assess the recovery process by measuring the
muscle size over time. However, these manual measurements are subject to large
variability, particularly since the scans are typically acquired on different
days and potentially by different operators. In this paper, we propose a
self-supervised contrastive learning approach to automatically retrieve similar
ultrasound muscle views at different scan times. Three different models were
compared using data from 67 patients acquired in the ICU. Results indicate that
our contrastive model outperformed a supervised baseline model in the task of
view retrieval with an AUC of 73.52% and when combined with an automatic
segmentation model achieved 5.7% ± 0.24% error in cross-sectional area.
Furthermore, a user study survey confirmed the efficacy of our model for muscle
view retrieval.
|
[
"cs.LG"
] | false |
2306.04748
|
2023-06-07T19:54:56Z
|
Analysis, Identification and Prediction of Parkinson's disease sub-types
and progression through Machine Learning
|
[
"Ashwin Ram"
] |
Parkinson's disease (PD) is a prevalent neurodegenerative disorder with
varying patient trajectories, yet little is understood about the underlying
causes and symptom progression. The Parkinson's Progression Markers Initiative
(PPMI) has collected comprehensive longitudinal data from diverse patient
cohorts to identify biomarkers and aid in the development of interventions.
Despite over 110 machine learning studies using the PPMI database, the majority
have focused on supervised models for diagnosis prediction, which has limited
impact on understanding patient variability and progression. This paper
addresses this gap by combining supervised and unsupervised machine learning
methods to identify subtypes that accurately predict disease progression in
Parkinson's patients. Building upon previous work, we replicate and extend the
study by integrating unsupervised patient clustering and prediction of present
and future symptoms using 5 additional years of longitudinal data from the
Progressive Parkinson's Markers Initiative (PPMI) database. Our findings
demonstrate accurate prediction of disease trajectories and symptoms at
baseline, offering valuable insights into patient heterogeneity and the
potential for personalized interventions. The integration of supervised and
unsupervised models presents a promising avenue for uncovering latent subgroups
and understanding the complexity of Parkinson's disease progression.
|
[
"cs.LG"
] | false |
2306.04118
|
2023-06-07T03:20:44Z
|
M$^3$Fair: Mitigating Bias in Healthcare Data through Multi-Level and
Multi-Sensitive-Attribute Reweighting Method
|
[
"Yinghao Zhu",
"Jingkun An",
"Enshen Zhou",
"Lu An",
"Junyi Gao",
"Hao Li",
"Haoran Feng",
"Bo Hou",
"Wen Tang",
"Chengwei Pan",
"Liantao Ma"
] |
In the data-driven artificial intelligence paradigm, models heavily rely on
large amounts of training data. However, factors like sampling distribution
imbalance can lead to issues of bias and unfairness in healthcare data.
Sensitive attributes, such as race, gender, age, and medical condition, are
characteristics of individuals that are commonly associated with discrimination
or bias. In healthcare AI, these attributes can play a significant role in
determining the quality of care that individuals receive. For example, minority
groups often receive fewer procedures and poorer-quality medical care than
white individuals in the US. Therefore, detecting and mitigating bias in data is
crucial to enhancing health equity. Bias mitigation methods include
pre-processing, in-processing, and post-processing. Among them, Reweighting
(RW) is a widely used pre-processing method that performs well in balancing
machine learning performance and fairness performance. RW adjusts the weights
for samples within each (group, label) combination, where these weights are
utilized in loss functions. However, RW is limited to considering only a single
sensitive attribute when mitigating bias and assumes that each sensitive
attribute is equally important. This may result in potential inaccuracies when
addressing intersectional bias. To address these limitations, we propose
M3Fair, a multi-level and multi-sensitive-attribute reweighting method by
extending the RW method to multiple sensitive attributes at multiple levels.
Our experiments on real-world datasets show that the approach is effective,
straightforward, and generalizable in addressing the healthcare fairness
issues.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.04123
|
2023-06-07T03:38:03Z
|
Retrosynthesis Prediction with Local Template Retrieval
|
[
"Shufang Xie",
"Rui Yan",
"Junliang Guo",
"Yingce Xia",
"Lijun Wu",
"Tao Qin"
] |
Retrosynthesis, which predicts the reactants of a given target molecule, is
an essential task for drug discovery. In recent years, machine learning-based
retrosynthesis methods have achieved promising results. In this work, we
introduce RetroKNN, a local reaction template retrieval method to further boost
the performance of template-based systems with non-parametric retrieval. We
first build an atom-template store and a bond-template store that contain the
local templates in the training data, then retrieve from these templates with a
k-nearest-neighbor (KNN) search during inference. The retrieved templates are
combined with neural network predictions as the final output. Furthermore, we
propose a lightweight adapter to adjust the weights when combining neural network
and KNN predictions conditioned on the hidden representation and the retrieved
templates. We conduct comprehensive experiments on two widely used benchmarks,
the USPTO-50K and USPTO-MIT. In particular, we improve the top-1 accuracy by
7.1% on the USPTO-50K dataset and by 12.0% on the USPTO-MIT dataset. These results
demonstrate the effectiveness of our method.
|
[
"cs.AI",
"cs.LG"
] | false |
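A hedged sketch of combining neural predictions with k-NN retrieval over a template store, in the spirit of RetroKNN (2306.04123 above). The distance-weighted softmax and the fixed interpolation weight stand in for the paper's learned adapter; the datastore interface is assumed.

import numpy as np

def knn_template_probs(query_vec, store_keys, store_ids, n_templates, k=8, T=10.0):
    """Turn the k nearest stored (hidden state, template id) pairs into a
    distribution over local templates."""
    d = np.linalg.norm(store_keys - query_vec, axis=1)   # L2 distances
    nn = np.argsort(d)[:k]
    w = np.exp(-d[nn] / T)
    p = np.zeros(n_templates)
    for idx, weight in zip(nn, w):
        p[store_ids[idx]] += weight
    return p / p.sum()

def combined_probs(p_model, p_knn, lam=0.7):
    """Final output: interpolate network and retrieval distributions."""
    return lam * p_model + (1 - lam) * p_knn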
2306.04133
|
2023-06-07T04:04:36Z
|
Answering Compositional Queries with Set-Theoretic Embeddings
|
[
"Shib Dasgupta",
"Andrew McCallum",
"Steffen Rendle",
"Li Zhang"
] |
The need to compactly and robustly represent item-attribute relations arises
in many important tasks, such as faceted browsing and recommendation systems. A
popular machine learning approach for this task denotes that an item has an
attribute by a high dot-product between vectors for the item and attribute -- a
representation that is not only dense, but also tends to correct noisy and
incomplete data. While this method works well for queries retrieving items by a
single attribute (such as movies that are comedies), we find that vector
embeddings do not so accurately support compositional queries (such as movies
that are comedies and British but not romances). To address these set-theoretic
compositions, this paper proposes to replace vectors with box embeddings, a
region-based representation that can be thought of as learnable Venn diagrams.
We introduce a new benchmark dataset for compositional queries, and present
experiments and analysis providing insights into the behavior of both. We find
that, while vector and box embeddings are equally suited to single attribute
queries, for compositional queries box embeddings provide substantial
advantages over vectors, particularly at the moderate and larger retrieval set
sizes that are most useful for users' search and browsing.
|
[
"cs.IR",
"cs.LG"
] | false |
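A tiny sketch of why box embeddings (2306.04133 above) support set-theoretic composition: attribute boxes intersect coordinate-wise, and an item matches a query when its embedding lies inside the composed region. Hard boxes are used here for clarity; trainable models typically use smoothed volumes.

import numpy as np

def intersect(box_a, box_b):
    """A box is a (lower, upper) corner pair; intersection is elementwise."""
    return np.maximum(box_a[0], box_b[0]), np.minimum(box_a[1], box_b[1])

def contains(box, point):
    lo, hi = box
    return bool(np.all(lo <= point) and np.all(point <= hi))

def answer_query(item_vecs, must, must_not=()):
    """E.g. 'comedies AND British but NOT romances': intersect the `must`
    attribute boxes, then drop items inside any `must_not` box."""
    q = must[0]
    for b in must[1:]:
        q = intersect(q, b)
    return [i for i, v in item_vecs.items()
            if contains(q, v) and not any(contains(b, v) for b in must_not)]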
2306.04201
|
2023-06-07T07:15:08Z
|
Improving Hyperparameter Learning under Approximate Inference in
Gaussian Process Models
|
[
"Rui Li",
"ST John",
"Arno Solin"
] |
Approximate inference in Gaussian process (GP) models with non-conjugate
likelihoods gets entangled with the learning of the model hyperparameters. We
improve hyperparameter learning in GP models and focus on the interplay between
variational inference (VI) and the learning target. While VI's lower bound to
the marginal likelihood is a suitable objective for inferring the approximate
posterior, we show that a direct approximation of the marginal likelihood as in
Expectation Propagation (EP) is a better learning objective for hyperparameter
optimization. We design a hybrid training procedure to bring the best of both
worlds: it leverages conjugate-computation VI for inference and uses an EP-like
marginal likelihood approximation for hyperparameter learning. We compare VI,
EP, Laplace approximation, and our proposed training procedure and empirically
demonstrate the effectiveness of our proposal across a wide range of data sets.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.04235
|
2023-06-07T08:25:51Z
|
MobileNMT: Enabling Translation in 15MB and 30ms
|
[
"Ye Lin",
"Xiaohui Wang",
"Zhexi Zhang",
"Mingxuan Wang",
"Tong Xiao",
"Jingbo Zhu"
] |
Deploying NMT models on mobile devices is essential for privacy, low latency,
and offline scenarios. For high model capacity, NMT models are rather large.
Running these models on devices is challenging with limited storage, memory,
computation, and power consumption. Existing work either focuses only on a
single metric such as FLOPs, or relies on a general-purpose engine that is not
good at auto-regressive decoding. In this paper, we present MobileNMT, a system that
can translate in 15MB and 30ms on devices. We propose a series of principles
for model compression when combined with quantization. Further, we implement an
engine that is friendly to INT8 and decoding. With the co-design of model and
engine, compared with the existing system, we speed up 47.0x and save 99.5% of
memory with only 11.6% loss of BLEU. The code is publicly available at
https://github.com/zjersey/Lightseq-ARM.
|
[
"cs.AI",
"cs.LG"
] | true |
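A minimal sketch of symmetric post-training INT8 weight quantization, the kind of arithmetic an INT8-friendly engine such as MobileNMT's (2306.04235 above) builds on; per-tensor scaling is the simplest variant, and the system's actual scheme may differ.

import numpy as np

def quantize_int8(w):
    """Float weights -> int8 tensor plus one float scale (symmetric)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print(np.abs(w - dequantize(q, s)).max())   # small reconstruction error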
2306.04255
|
2023-06-07T08:51:06Z
|
Accounting For Informative Sampling When Learning to Forecast Treatment
Outcomes Over Time
|
[
"Toon Vanderschueren",
"Alicia Curth",
"Wouter Verbeke",
"Mihaela van der Schaar"
] |
Machine learning (ML) holds great potential for accurately forecasting
treatment outcomes over time, which could ultimately enable the adoption of
more individualized treatment strategies in many practical applications.
However, a significant challenge that has been largely overlooked by the ML
literature on this topic is the presence of informative sampling in
observational data. When instances are observed irregularly over time, sampling
times are typically not random, but rather informative -- depending on the
instance's characteristics, past outcomes, and administered treatments. In this
work, we formalize informative sampling as a covariate shift problem and show
that it can prohibit accurate estimation of treatment outcomes if not properly
accounted for. To overcome this challenge, we present a general framework for
learning treatment outcomes in the presence of informative sampling using
inverse intensity-weighting, and propose a novel method, TESAR-CDE, that
instantiates this framework using Neural CDEs. Using a simulation environment
based on a clinical use case, we demonstrate the effectiveness of our approach
in learning under informative sampling.
|
[
"stat.ML",
"cs.LG"
] | false |
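A schematic of the inverse intensity weighting that 2306.04255 above uses to correct informative sampling: each observed outcome's loss term is reweighted by the inverse of its estimated observation intensity. How the intensity itself is modeled (here, passed in precomputed) is left open.

import torch

def iiw_loss(pred, target, intensity, eps=1e-6, clip=50.0):
    """Inverse-intensity-weighted squared error; frequently sampled regions
    are down-weighted so the objective reflects the full population."""
    w = 1.0 / intensity.clamp_min(eps)
    w = w.clamp_max(clip)      # stabilize rarely-observed extreme weights
    return (w * (pred - target) ** 2).mean()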
2306.04299
|
2023-06-07T10:02:16Z
|
Timing Process Interventions with Causal Inference and Reinforcement
Learning
|
[
"Hans Weytjens",
"Wouter Verbeke",
"Jochen De Weerdt"
] |
The shift from the understanding and prediction of processes to their
optimization offers great benefits to businesses and other organizations.
Precisely timed process interventions are the cornerstones of effective
optimization. Prescriptive process monitoring (PresPM) is the sub-field of
process mining that concentrates on process optimization. The emerging PresPM
literature identifies state-of-the-art methods, causal inference (CI) and
reinforcement learning (RL), without presenting a quantitative comparison. Most
experiments are carried out using historical data, causing problems with the
accuracy of the methods' evaluations and preempting online RL. Our contribution
consists of experiments on timed process interventions with synthetic data that
renders genuine online RL and the comparison to CI possible, and allows for an
accurate evaluation of the results. Our experiments reveal that RL's policies
outperform those from CI and are more robust at the same time. Indeed, the RL
policies approach perfect policies. Unlike CI, the unaltered online RL approach
can be applied to other, more generic PresPM problems such as next best
activity recommendations. Nonetheless, CI has its merits in settings where
online learning is not an option.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.04338
|
2023-06-07T11:08:12Z
|
Changing Data Sources in the Age of Machine Learning for Official
Statistics
|
[
"Cedric De Boom",
"Michael Reusens"
] |
Data science has become increasingly essential for the production of official
statistics, as it enables the automated collection, processing, and analysis of
large amounts of data. With such data science practices in place, it enables
more timely, more insightful and more flexible reporting. However, the quality
and integrity of data-science-driven statistics rely on the accuracy and
reliability of the data sources and the machine learning techniques that
support them. In particular, changes in data sources inevitably occur
and pose significant risks that are crucial to address in the context of
machine learning for official statistics.
This paper gives an overview of the main risks, liabilities, and
uncertainties associated with changing data sources in the context of machine
learning for official statistics. We provide a checklist of the most prevalent
origins and causes of changing data sources; not only on a technical level but
also regarding ownership, ethics, regulation, and public perception. Next, we
highlight the repercussions of changing data sources on statistical reporting.
These include technical effects such as concept drift, bias, availability,
validity, accuracy and completeness, but also the neutrality and potential
discontinuation of the statistical offering. We offer a few important
precautionary measures, such as enhancing robustness in both data sourcing and
statistical techniques, and thorough monitoring. In doing so, machine
learning-based official statistics can maintain integrity, reliability,
consistency, and relevance in policy-making, decision-making, and public
discourse.
|
[
"stat.ML",
"cs.LG"
] | false |
2306.04365
|
2023-06-07T11:55:20Z
|
Edge conductivity in PtSe$_2$ nanostructures
|
[
"Roman Kempt",
"Agnieszka Kuc",
"Thomas Brumme",
"Thomas Heine"
] |
PtSe$_2$ is a promising 2D material for nanoelectromechanical sensing and
photodetection in the infrared regime. One of its most compelling features is
the facile synthesis at temperatures below 500 °C, which is compatible
with current back-end-of-line semiconductor processing. However, this process
generates polycrystalline thin films with nanoflake-like domains of 5 to 100 nm
size. To investigate the lateral quantum confinement effect in this size
regime, we train a deep neural network to obtain an interatomic potential at
DFT accuracy and use that to model ribbons, surfaces, nanoflakes, and
nanoplatelets of PtSe$_2$ with lateral widths between 5 to 15 nm. We determine
which edge terminations are the most stable and find evidence that the
electrical conductivity is localized on the edges for lateral sizes below 10
nm. This suggests that the transport channels in thin films of PtSe$_2$ might
be dominated by networks of edges, instead of transport through the layers
themselves.
|
[
"cond-mat.mtrl-sci",
"cs.LG"
] | false |
2306.04423
|
2023-06-07T13:30:43Z
|
On Computing Optimal Tree Ensembles
|
[
"Christian Komusiewicz",
"Pascal Kunz",
"Frank Sommer",
"Manuel Sorge"
] |
Random forests and, more generally, (decision-)tree ensembles are
widely used methods for classification and regression. Recent algorithmic
advances allow to compute decision trees that are optimal for various measures
such as their size or depth. We are not aware of such research for tree
ensembles and aim to contribute to this area. Mainly, we provide two novel
algorithms and corresponding lower bounds. First, we are able to carry over and
substantially improve on tractability results for decision trees, obtaining a
$(6\delta D S)^S \cdot poly$-time algorithm, where $S$ is the number of cuts in
the tree ensemble, $D$ the largest domain size, and $\delta$ is the largest
number of features in which two examples differ. To achieve this, we introduce
the witness-tree technique which also seems promising for practice. Second, we
show that dynamic programming, which has been successful for decision trees,
may also be viable for tree ensembles, providing an $\ell^n \cdot poly$-time
algorithm, where $\ell$ is the number of trees and $n$ the number of examples.
Finally, we compare the number of cuts necessary to classify training data sets
for decision trees and tree ensembles, showing that ensembles may need
exponentially fewer cuts for increasing number of trees.
|
[
"cs.LG",
"cs.DS"
] | false |
2306.04425
|
2023-06-07T13:31:57Z
|
Towards High-Performance Exploratory Data Analysis (EDA) Via Stable
Equilibrium Point
|
[
"Yuxuan Song",
"Yongyu Wang"
] |
Exploratory data analysis (EDA) is a vital procedure for data science
projects. In this work, we introduce a stable equilibrium point (SEP) - based
framework for improving the efficiency and solution quality of EDA. By
exploiting the SEPs to be the representative points, our approach aims to
generate high-quality clustering and data visualization for large-scale data
sets. A very unique property of the proposed method is that the SEPs will
directly encode the clustering properties of data sets. Compared with prior
state-of-the-art clustering and data visualization methods, the proposed
methods allow substantially improving computing efficiency and solution quality
for large-scale data analysis tasks.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.04429
|
2023-06-07T13:40:20Z
|
Balancing of competitive two-player Game Levels with Reinforcement
Learning
|
[
"Florian Rupp",
"Manuel Eberhardinger",
"Kai Eckert"
] |
The balancing process for game levels in a competitive two-player context
involves a lot of manual work and testing, particularly in non-symmetrical game
levels. In this paper, we propose an architecture for automated balancing of
tile-based levels within the recently introduced PCGRL framework (procedural
content generation via reinforcement learning). Our architecture is divided
into three parts: (1) a level generator, (2) a balancing agent and, (3) a
reward modeling simulation. By playing the level in a simulation repeatedly,
the balancing agent is rewarded for modifying it towards the same win rates for
all players. To this end, we introduce a novel family of swap-based
representations to increase robustness towards playability. We show that this
approach is capable of teaching an agent how to alter a level for balancing better
and faster than plain PCGRL. In addition, by analyzing the agent's swapping
behavior, we can draw conclusions about which tile types influence the
balancing most. We test and show our results using the Neural MMO (NMMO)
environment in a competitive two-player setting.
|
[
"cs.LG",
"cs.GT"
] | false |
2306.04454
|
2023-06-07T14:28:42Z
|
Training-Free Neural Active Learning with Initialization-Robustness
Guarantees
|
[
"Apivich Hemachandra",
"Zhongxiang Dai",
"Jasraj Singh",
"See-Kiong Ng",
"Bryan Kian Hsiang Low"
] |
Existing neural active learning algorithms have aimed to optimize the
predictive performance of neural networks (NNs) by selecting data for
labelling. However, other than a good predictive performance, being robust
against random parameter initializations is also a crucial requirement in
safety-critical applications. To this end, we introduce our expected variance
with Gaussian processes (EV-GP) criterion for neural active learning, which is
theoretically guaranteed to select data points which lead to trained NNs with
both (a) good predictive performances and (b) initialization robustness.
Importantly, our EV-GP criterion is training-free, i.e., it does not require
any training of the NN during data selection, which makes it computationally
efficient. We empirically demonstrate that our EV-GP criterion is highly
correlated with both initialization robustness and generalization performance,
and show that it consistently outperforms baseline methods in terms of both
desiderata, especially in situations with limited initial data or large batch
sizes.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.04495
|
2023-06-07T15:04:58Z
|
Limits, approximation and size transferability for GNNs on sparse graphs
via graphops
|
[
"Thien Le",
"Stefanie Jegelka"
] |
Can graph neural networks generalize to graphs that are different from the
graphs they were trained on, e.g., in size? In this work, we study this
question from a theoretical perspective. While recent work established such
transferability and approximation results via graph limits, e.g., via graphons,
these only apply non-trivially to dense graphs. To include frequently
encountered sparse graphs such as bounded-degree or power law graphs, we take a
perspective of taking limits of operators derived from graphs, such as the
aggregation operation that makes up GNNs. This leads to the recently introduced
limit notion of graphops (Backhausz and Szegedy, 2022). We demonstrate how the
operator perspective allows us to develop quantitative bounds on the distance
between a finite GNN and its limit on an infinite graph, as well as the
distance between the GNN on graphs of different sizes that share structural
properties, under a regularity assumption verified for various graph sequences.
Our results hold for dense and sparse graphs, and various notions of graph
limits.
|
[
"cs.LG",
"cs.SI"
] | false |
2306.04518
|
2023-06-07T15:29:12Z
|
Optimal sensor placement for reconstructing wind pressure field around
buildings using compressed sensing
|
[
"Xihaier Luo",
"Ahsan Kareem",
"Shinjae Yoo"
] |
Deciding how to optimally deploy sensors in a large, complex, and spatially
extended structure is critical to ensure that the surface pressure field is
accurately captured for subsequent analysis and design. In some cases,
reconstruction of missing data is required in downstream tasks such as the
development of digital twins. This paper presents a data-driven sparse sensor
selection algorithm, aiming to provide the most information contents for
reconstructing aerodynamic characteristics of wind pressures over tall building
structures parsimoniously. The algorithm first fits a set of basis functions to
the training data, then applies a computationally efficient QR algorithm that
ranks existing pressure sensors in order of importance based on the state
reconstruction to this tailored basis. The findings of this study show that the
proposed algorithm successfully reconstructs the aerodynamic characteristics of
tall buildings from sparse measurement locations, generating stable and optimal
solutions across a range of conditions. As a result, this study serves as a
promising first step toward leveraging the success of data-driven and machine
learning algorithms to supplement traditional genetic algorithms currently used
in wind engineering.
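  The core of this class of algorithms can be sketched in a few lines (a
generic SVD-basis plus pivoted-QR sketch, not the authors' exact code): fit a
low-rank basis to training snapshots, then let QR column pivoting rank sensor
locations.
```python
import numpy as np
from scipy.linalg import qr

def rank_sensors(X, r):
    """Rank candidate sensor locations for state reconstruction.

    X: (n_locations, n_snapshots) training pressure snapshots.
    r: number of basis modes. Returns location indices by importance."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Psi = U[:, :r]                        # tailored basis fitted to training data
    _, _, piv = qr(Psi.T, pivoting=True)  # pivots = greedily chosen sensor rows
    return piv

# Reconstruction from the r top-ranked sensors (y = measurements at piv[:r]):
#   a = np.linalg.lstsq(Psi[piv[:r]], y, rcond=None)[0]
#   x_hat = Psi @ a
```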
|
[
"physics.flu-dyn",
"cs.LG"
] | false |
2306.04519
|
2023-06-07T15:29:46Z
|
Sample-Level Weighting for Multi-Task Learning with Auxiliary Tasks
|
[
"Emilie Grégoire",
"Hafeez Chaudhary",
"Sam Verboven"
] |
Multi-task learning (MTL) can improve the generalization performance of
neural networks by sharing representations with related tasks. Nonetheless, MTL
can also degrade performance through harmful interference between tasks. Recent
work has pursued task-specific loss weighting as a solution for this
interference. However, existing algorithms treat tasks as atomic, lacking the
ability to explicitly separate harmful and helpful signals beyond the task
level. To this end, we propose SLGrad, a sample-level weighting algorithm for
multi-task learning with auxiliary tasks. Through sample-specific task weights,
SLGrad reshapes the task distributions during training to eliminate harmful
auxiliary signals and augment useful task signals. Substantial generalization
performance gains are observed on (semi-) synthetic datasets and common
supervised multi-task problems.
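  A simplified sketch of the underlying idea (not SLGrad itself): on a linear
model, where per-sample gradients are cheap, weight each auxiliary sample by
how well its gradient aligns with the main-task gradient, zeroing out harmful
samples.
```python
import numpy as np

def gradient_alignment_weights(X_aux, y_aux, X_main, y_main, w):
    """Per-sample weights for auxiliary-task data on a linear regression
    model: samples whose gradient points in the same direction as the
    main-task gradient get positive weight; misaligned ones get zero."""
    g_main = ((X_main @ w - y_main)[:, None] * X_main).mean(axis=0)
    g_aux = (X_aux @ w - y_aux)[:, None] * X_aux      # one gradient per sample
    cos = g_aux @ g_main / (
        np.linalg.norm(g_aux, axis=1) * np.linalg.norm(g_main) + 1e-12)
    return np.clip(cos, 0.0, None)
```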
|
[
"cs.LG",
"cs.AI",
"I.2.6"
] | false |
2306.04529
|
2023-06-07T15:37:50Z
|
Git-Theta: A Git Extension for Collaborative Development of Machine
Learning Models
|
[
"Nikhil Kandpal",
"Brian Lester",
"Mohammed Muqeeth",
"Anisha Mascarenhas",
"Monty Evans",
"Vishal Baskaran",
"Tenghao Huang",
"Haokun Liu",
"Colin Raffel"
] |
Currently, most machine learning models are trained by centralized teams and
are rarely updated. In contrast, open-source software development involves the
iterative development of a shared artifact through distributed collaboration
using a version control system. In the interest of enabling collaborative and
continual improvement of machine learning models, we introduce Git-Theta, a
version control system for machine learning models. Git-Theta is an extension
to Git, the most widely used version control software, that allows fine-grained
tracking of changes to model parameters alongside code and other artifacts.
Unlike existing version control systems that treat a model checkpoint as a blob
of data, Git-Theta leverages the structure of checkpoints to support
communication-efficient updates, automatic model merges, and meaningful
reporting about the difference between two versions of a model. In addition,
Git-Theta includes a plug-in system that enables users to easily add support
for new functionality. In this paper, we introduce Git-Theta's design and
features and include an example use-case of Git-Theta where a pre-trained model
is continually adapted and modified. We publicly release Git-Theta in hopes of
kickstarting a new era of collaborative model development.
|
[
"cs.LG",
"cs.SE"
] | false |
2306.04663
|
2023-06-07T08:27:36Z
|
U-PASS: an Uncertainty-guided deep learning Pipeline for Automated Sleep
Staging
|
[
"Elisabeth R. M. Heremans",
"Nabeel Seedat",
"Bertien Buyse",
"Dries Testelmans",
"Mihaela van der Schaar",
"Maarten De Vos"
] |
As machine learning becomes increasingly prevalent in critical fields such as
healthcare, ensuring the safety and reliability of machine learning systems
becomes paramount. A key component of reliability is the ability to estimate
uncertainty, which enables the identification of areas of high and low
confidence and helps to minimize the risk of error. In this study, we propose a
machine learning pipeline called U-PASS tailored for clinical applications that
incorporates uncertainty estimation at every stage of the process, including
data acquisition, training, and model deployment. The training process is
divided into a supervised pre-training step and a semi-supervised finetuning
step. We apply our uncertainty-guided deep learning pipeline to the challenging
problem of sleep staging and demonstrate that it systematically improves
performance at every stage. By optimizing the training dataset, actively
seeking informative samples, and deferring the most uncertain samples to an
expert, we achieve an expert-level accuracy of 85% on a challenging clinical
dataset of elderly sleep apnea patients, representing a significant improvement
over the baseline accuracy of 75%. U-PASS represents a promising approach to
incorporating uncertainty estimation into machine learning pipelines, thereby
improving their reliability and unlocking their potential in clinical settings.
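  The deferral step can be illustrated with a simple entropy-based rule (an
assumed uncertainty measure for this sketch; the pipeline itself estimates
uncertainty at every stage):
```python
import numpy as np

def defer_most_uncertain(probs, defer_frac=0.1):
    """Defer the most uncertain predictions to a human expert.

    probs: (n, n_classes) softmax outputs for each sleep epoch.
    Returns indices to defer and indices the model keeps."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    n_defer = int(len(probs) * defer_frac)
    order = np.argsort(-entropy)          # most uncertain first
    return order[:n_defer], order[n_defer:]
```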
|
[
"eess.SP",
"cs.LG"
] | false |
2306.04667
|
2023-06-07T14:50:34Z
|
Neural Embeddings for Protein Graphs
|
[
"Francesco Ceccarelli",
"Lorenzo Giusti",
"Sean B. Holden",
"Pietro Liò"
] |
Proteins perform much of the work in living organisms, and consequently the
development of efficient computational methods for protein representation is
essential for advancing large-scale biological research. Most current
approaches struggle to efficiently integrate the wealth of information
contained in the protein sequence and structure. In this paper, we propose a
novel framework for embedding protein graphs in geometric vector spaces, by
learning an encoder function that preserves the structural distance between
protein graphs. Utilizing Graph Neural Networks (GNNs) and Large Language
Models (LLMs), the proposed framework generates structure- and sequence-aware
protein representations. We demonstrate that our embeddings are successful in
the task of comparing protein structures, while providing a significant
speed-up compared to traditional approaches based on structural alignment. Our
framework achieves remarkable results in the task of protein structure
classification; in particular, when compared to other work, the proposed method
shows an average F1-Score improvement of 26% on out-of-distribution (OOD)
samples and of 32% when tested on samples coming from the same distribution as
the training data. Our approach finds applications in areas such as drug
prioritization, drug re-purposing, disease sub-type analysis and elsewhere.
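  A minimal sketch of a distance-preserving training objective (names assumed;
the paper's encoder combines GNN and LLM features): make the Euclidean distance
between two graph embeddings match a precomputed structural distance.
```python
import torch
import torch.nn.functional as F

def distance_preserving_loss(z1, z2, d_struct):
    """z1, z2: (batch, dim) embeddings of paired protein graphs.
    d_struct: (batch,) precomputed structural distances (e.g. from
    structural alignment). The encoder learns to reproduce them."""
    d_emb = torch.norm(z1 - z2, dim=1)
    return F.mse_loss(d_emb, d_struct)
```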
|
[
"q-bio.QM",
"cs.LG"
] | false |
2306.04756
|
2023-06-07T20:08:27Z
|
A Linearly Convergent GAN Inversion-based Algorithm for Reverse
Engineering of Deceptions
|
[
"Darshan Thaker",
"Paris Giampouras",
"René Vidal"
] |
An important aspect of developing reliable deep learning systems is devising
strategies that make these systems robust to adversarial attacks. There is a
long line of work that focuses on developing defenses against these attacks,
but recently, researchers have begun to study ways to reverse engineer the
attack process. This allows us to not only defend against several attack
models, but also classify the threat model. However, there is still a lack of
theoretical guarantees for the reverse engineering process. Current approaches
that give any guarantees are based on the assumption that the data lies in a
union of linear subspaces, which is not a valid assumption for more complex
datasets. In this paper, we build on prior work and propose a novel framework
for reverse engineering of deceptions which supposes that the clean data lies
in the range of a GAN. To classify the signal and attack, we jointly solve a
GAN inversion problem and a block-sparse recovery problem. For the first time
in the literature, we provide deterministic linear convergence guarantees for
this problem. We also empirically demonstrate the merits of the proposed
approach on several nonlinear datasets as compared to state-of-the-art methods.
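  A sketch of the joint objective (a generic gradient-based version, not the
authors' provably convergent algorithm): model the observation as G(z) plus a
block-sparse perturbation, one block per threat model, and penalize block norms
so the surviving block classifies the attack. `G` and its `latent_dim`
attribute are assumptions of this sketch.
```python
import torch

def reverse_engineer(x_adv, G, n_blocks, steps=200, lam=0.1, lr=0.05):
    """Jointly fit x_adv ≈ G(z) + sum of per-threat-model perturbation
    blocks; a group-lasso penalty drives all but the true block to zero."""
    z = torch.zeros(1, G.latent_dim, requires_grad=True)
    delta = (0.01 * torch.randn(n_blocks, x_adv.numel())).requires_grad_()
    opt = torch.optim.Adam([z, delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        residual = x_adv.flatten() - G(z).flatten() - delta.sum(dim=0)
        loss = residual.pow(2).sum() + lam * delta.norm(dim=1).sum()
        loss.backward()
        opt.step()
    return z.detach(), delta.detach()  # argmax block norm = inferred attack
```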
|
[
"cs.LG",
"cs.CR"
] | false |
2306.04766
|
2023-06-07T20:27:17Z
|
Enabling tabular deep learning when $d \gg n$ with an auxiliary
knowledge graph
|
[
"Camilo Ruiz",
"Hongyu Ren",
"Kexin Huang",
"Jure Leskovec"
] |
Machine learning models exhibit strong performance on datasets with abundant
labeled samples. However, for tabular datasets with extremely high
$d$-dimensional features but limited $n$ samples (i.e. $d \gg n$), machine
learning models struggle to achieve strong performance due to the risk of
overfitting. Here, our key insight is that there is often abundant, auxiliary
domain information describing input features which can be structured as a
heterogeneous knowledge graph (KG). We propose PLATO, a method that achieves
strong performance on tabular data with $d \gg n$ by using an auxiliary KG
describing input features to regularize a multilayer perceptron (MLP). In
PLATO, each input feature corresponds to a node in the auxiliary KG. In the
MLP's first layer, each input feature also corresponds to a weight vector.
PLATO is based on the inductive bias that two input features corresponding to
similar nodes in the auxiliary KG should have similar weight vectors in the
MLP's first layer. PLATO captures this inductive bias by inferring the weight
vector for each input feature from its corresponding node in the KG via a
trainable message-passing function. Across 6 $d \gg n$ datasets, PLATO
outperforms 13 state-of-the-art baselines by up to 10.19%.
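  A minimal sketch of the inductive bias (module names are ours; the paper
infers the weights with a trainable message-passing function over the KG): the
MLP's first-layer weight vector for each input feature is produced from that
feature's KG node embedding, so KG-similar features get similar weights.
```python
import torch
import torch.nn as nn

class KGRegularizedMLP(nn.Module):
    def __init__(self, node_emb, hidden_dim, out_dim):
        super().__init__()
        # node_emb: (d_features, emb_dim), one embedding per input feature,
        # e.g. produced by message passing over the auxiliary KG.
        self.node_emb = node_emb
        self.to_weight = nn.Linear(node_emb.shape[1], hidden_dim)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(hidden_dim, out_dim))

    def forward(self, x):                      # x: (n, d_features)
        W = self.to_weight(self.node_emb)      # (d_features, hidden_dim)
        return self.head(x @ W)                # KG-derived first layer
```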
|
[
"cs.LG",
"cs.AI"
] | false |
2306.04785
|
2023-06-07T21:08:09Z
|
Interpretable Deep Clustering
|
[
"Jonathan Svirsky",
"Ofir Lindenbaum"
] |
Clustering is a fundamental learning task widely used as a first step in data
analysis. For example, biologists often use cluster assignments to analyze
genome sequences, medical records, or images. Since downstream analysis is
typically performed at the cluster level, practitioners seek reliable and
interpretable clustering models. We propose a new deep-learning framework that
predicts interpretable cluster assignments at the instance and cluster levels.
First, we present a self-supervised procedure to identify a subset of
informative features from each data point. Then, we design a model that
predicts cluster assignments and a gate matrix that leads to cluster-level
feature selection. We show that the proposed method can reliably predict
cluster assignments using synthetic and real data. Furthermore, we verify that
our model leads to interpretable results at a sample and cluster level.
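  A sketch in the spirit of the gating idea (a generic stochastic-gates
construction, not the paper's exact model): per-sample gate parameters,
perturbed and clipped to [0, 1], select informative features before clustering.
```python
import torch

def stochastic_gates(mu, sigma=0.5, training=True):
    """mu: (n, d) per-sample gate parameters, e.g. from a gating network.
    Gaussian noise plus hard clipping yields differentiable ~binary gates."""
    eps = torch.randn_like(mu) * sigma if training else torch.zeros_like(mu)
    return torch.clamp(mu + eps, 0.0, 1.0)

x = torch.randn(32, 20)
mu = torch.zeros(32, 20, requires_grad=True)  # would come from a gating network
x_selected = x * stochastic_gates(mu)         # gated input for the cluster head
```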
|
[
"cs.LG",
"stat.ML"
] | false |
2306.04791
|
2023-06-07T21:25:32Z
|
XInsight: Revealing Model Insights for GNNs with Flow-based Explanations
|
[
"Eli Laird",
"Ayesh Madushanka",
"Elfi Kraka",
"Corey Clark"
] |
Progress in graph neural networks has grown rapidly in recent years, with
many new developments in drug discovery, medical diagnosis, and recommender
systems. While this progress is significant, many networks are `black boxes'
with little understanding of what exactly the network is learning. Many
high-stakes applications, such as drug discovery, require human-intelligible
explanations from the models so that users can recognize errors and discover
new knowledge. Therefore, the development of explainable AI algorithms is
essential for us to reap the benefits of AI.
We propose an explainability algorithm for GNNs called eXplainable Insight
(XInsight) that generates a distribution of model explanations using GFlowNets.
Since GFlowNets generate objects with probabilities proportional to a reward,
XInsight can generate a diverse set of explanations, compared to previous
methods that only learn the maximum reward sample. We demonstrate XInsight by
generating explanations for GNNs trained on two graph classification tasks:
classifying mutagenic compounds with the MUTAG dataset and classifying acyclic
graphs with a synthetic dataset that we have open-sourced. We show the utility
of XInsight's explanations by analyzing the generated compounds using QSAR
modeling, and we find that XInsight generates compounds that cluster by
lipophilicity, a known correlate of mutagenicity. Our results show that
XInsight generates a distribution of explanations that uncovers the underlying
relationships demonstrated by the model. They also highlight the importance of
generating a diverse set of explanations, as it enables us to discover hidden
relationships in the model and provides valuable guidance for further analysis.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.04793
|
2023-06-07T21:35:26Z
|
On the Joint Interaction of Models, Data, and Features
|
[
"Yiding Jiang",
"Christina Baek",
"J. Zico Kolter"
] |
Learning features from data is one of the defining characteristics of deep
learning, but our theoretical understanding of the role features play in deep
learning is still rudimentary. To address this gap, we introduce a new tool,
the interaction tensor, for empirically analyzing the interaction between data
and model through features. With the interaction tensor, we make several key
observations about how features are distributed in data and how models with
different random seeds learn different features. Based on these observations,
we propose a conceptual framework for feature learning. Under this framework,
the expected accuracy for a single hypothesis and agreement for a pair of
hypotheses can both be derived in closed-form. We demonstrate that the proposed
framework can explain empirically observed phenomena, including the recently
discovered Generalization Disagreement Equality (GDE) that allows for
estimating the generalization error with only unlabeled data. Further, our
theory also provides explicit construction of natural data distributions that
break the GDE. Thus, we believe this work provides valuable new insight into
our understanding of feature learning.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.07291
|
2023-06-07T22:26:50Z
|
An Ensemble Machine Learning Approach for Tropical Cyclone Detection
Using ERA5 Reanalysis Data
|
[
"Gabriele Accarino",
"Davide Donno",
"Francesco Immorlano",
"Donatello Elia",
"Giovanni Aloisio"
] |
Tropical Cyclones (TCs) are counted among the most destructive phenomena that
can be found in nature. Every year, globally an average of 90 TCs occur over
tropical waters, and global warming is making them stronger, larger and more
destructive. The accurate detection and tracking of such phenomena have become
a relevant and interesting area of research in weather and climate science.
Traditionally, TCs have been identified in large climate datasets through the
use of deterministic tracking schemes that rely on subjective thresholds.
Machine Learning (ML) models can complement deterministic approaches due to
their ability to capture the mapping between the input climatic drivers and the
geographical position of the TC center from the available data. This study
presents a ML ensemble approach for locating TC center coordinates, embedding
both TC classification and localization in a single end-to-end learning task.
The ensemble combines TC center estimates of different ML models that agree
about the presence of a TC in input data. ERA5 reanalysis data were used for model
training and testing jointly with the International Best Track Archive for
Climate Stewardship records. Results showed that the ML approach is well-suited
for TC detection providing good generalization capabilities on out of sample
data. In particular, it was able to accurately detect lower TC categories than
those used for training the models. On top of this, the ensemble approach was
able to further improve TC localization performance with respect to single
model TC center estimates, demonstrating the good capabilities of the proposed
approach.
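  The combination rule can be sketched as follows (the majority-vote agreement
criterion is an assumption for illustration; the paper combines estimates of
models that agree a TC is present):
```python
import numpy as np

def ensemble_tc_center(estimates, detections):
    """estimates: (n_models, 2) lat/lon center predictions.
    detections: (n_models,) booleans from each model's classifier head.
    Returns the mean center of agreeing models, or None if no detection."""
    if detections.sum() <= len(detections) // 2:   # assumed majority rule
        return None
    return estimates[detections].mean(axis=0)
```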
|
[
"physics.ao-ph",
"cs.LG"
] | false |
2306.10034
|
2023-06-07T20:19:25Z
|
Unlocking Insights into Business Trajectories with Transformer-based
Spatio-temporal Data Analysis
|
[
"Muhammad Arslan",
"Christophe Cruz"
] |
The world of business is constantly evolving, and staying ahead of the curve
requires a deep understanding of market trends and performance. This article
addresses this requirement by modeling business trajectories using news
articles data.
|
[
"cs.IR",
"cs.LG"
] | false |
2307.05380
|
2023-06-07T15:30:03Z
|
Optimized Crystallographic Graph Generation for Material Science
|
[
"Astrid Klipfel",
"Yaël Frégier",
"Adlane Sayede",
"Zied Bouraoui"
] |
Graph neural networks are widely used in machine learning applied to
chemistry, and in particular for material science discovery. For crystalline
materials, however, generating graph-based representation from geometrical
information for neural networks is not a trivial task. The periodicity of
crystalline needs efficient implementations to be processed in real-time under
a massively parallel environment. With the aim of training graph-based
generative models of new material discovery, we propose an efficient tool to
generate cutoff graphs and k-nearest-neighbours graphs of periodic structures
within GPU optimization. We provide pyMatGraph a Pytorch-compatible framework
to generate graphs in real-time during the training of neural network
architecture. Our tool can update a graph of a structure, making generative
models able to update the geometry and process the updated graph during the
forward propagation on the GPU side. Our code is publicly available at
https://github.com/aklipf/mat-graph.
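  For context, a minimal CPU/numpy sketch of cutoff-graph construction for a
periodic cell; pyMatGraph does this batched on the GPU. Considering the 27
neighbouring images suffices when the cutoff is smaller than the cell lengths.
```python
import numpy as np
from itertools import product

def cutoff_graph(frac_coords, lattice, cutoff):
    """frac_coords: (n, 3) fractional coordinates; lattice: (3, 3) cell.
    Returns directed edges (i, j) such that some periodic image of atom j
    lies within `cutoff` of atom i."""
    cart = frac_coords @ lattice
    edges = set()
    for shift in product((-1, 0, 1), repeat=3):
        offset = np.array(shift) @ lattice
        d = np.linalg.norm(cart[None, :, :] + offset - cart[:, None, :], axis=-1)
        src, dst = np.nonzero((d < cutoff) & (d > 1e-8))
        edges.update(zip(src.tolist(), dst.tolist()))
    return sorted(edges)
```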
|
[
"cond-mat.mtrl-sci",
"cs.LG"
] | false |
2306.04148
|
2023-06-07T04:50:09Z
|
SANGEET: A XML based Open Dataset for Research in Hindustani Sangeet
|
[
"Chandan Misra",
"Swarup Chattopadhyay"
] |
Access to a rich music dataset that is useful in a wide variety of
applications is very important. Currently available datasets mostly focus on
storing vocal or instrumental recording data, ignoring the requirements of
visual representation and retrieval. This paper attempts to build an
XML-based public dataset, called SANGEET, that stores comprehensive information
of Hindustani Sangeet (North Indian Classical Music) compositions written by
famous musicologist Pt. Vishnu Narayan Bhatkhande. SANGEET preserves all the
required information of any given composition including metadata, structural,
notational, rhythmic, and melodic information in a standardized way for easy
and efficient storage and extraction of musical information. The dataset is
intended to provide the ground truth information for music information research
tasks, thereby supporting several data-driven analyses from a machine learning
perspective. We present the usefulness of the dataset by demonstrating its
application to music information retrieval using XQuery and to visualization
through the Omenad rendering system. Finally, we propose approaches to transform the
dataset for performing statistical and machine learning tasks for a better
understanding of Hindustani Sangeet. The dataset can be found at
https://github.com/cmisra/Sangeet.
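  Since the dataset is XML, compositions can also be read outside XQuery with
standard tooling; the element names below ('composition', 'raga', 'tala') are
hypothetical placeholders, as the actual SANGEET schema may differ.
```python
import xml.etree.ElementTree as ET

tree = ET.parse("sangeet_composition.xml")   # hypothetical file name
for comp in tree.getroot().iter("composition"):
    # Element names are assumed for illustration only.
    print(comp.findtext("raga"), comp.findtext("tala"))
```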
|
[
"cs.SD",
"cs.IR",
"cs.LG",
"eess.AS"
] | false |
2306.04223
|
2023-06-07T07:58:58Z
|
Causally Learning an Optimal Rework Policy
|
[
"Oliver Schacht",
"Sven Klaassen",
"Philipp Schwarz",
"Martin Spindler",
"Daniel Grünbaum",
"Sebastian Imhof"
] |
In manufacturing, rework refers to an optional step of a production process
which aims to eliminate errors or remedy products that do not meet the desired
quality standards. Reworking a production lot involves repeating a previous
production stage with adjustments to ensure that the final product meets the
required specifications. While offering the chance to improve the yield and
thus increase the revenue of a production lot, a rework step also incurs
additional costs. Additionally, the rework of parts that already meet the
target specifications may damage them and decrease the yield. In this paper, we
apply double/debiased machine learning (DML) to estimate the conditional
treatment effect of a rework step during the color conversion process in
opto-electronic semiconductor manufacturing on the final product yield. We
utilize the DoubleML implementation to develop policies for the rework of
components and estimate their value empirically. From our causal machine
learning analysis we derive implications for the coating of monochromatic LEDs
with conversion layers.
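  A hedged sketch of what such a DML analysis looks like with the DoubleML
Python package (column names are hypothetical, and the interactive-regression
interface shown here is an assumption; the package API may differ across
versions):
```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from doubleml import DoubleMLData, DoubleMLIRM

df = pd.read_csv("production_lots.csv")            # hypothetical data file
covariates = [c for c in df.columns if c not in ("yield", "rework")]
dml_data = DoubleMLData(df, y_col="yield", d_cols="rework", x_cols=covariates)

# Binary treatment (rework yes/no): learners for outcome and propensity.
dml_irm = DoubleMLIRM(dml_data, ml_g=RandomForestRegressor(),
                      ml_m=RandomForestClassifier())
dml_irm.fit()
print(dml_irm.summary)   # estimated effect of the rework step on yield
```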
|
[
"stat.ML",
"cs.AI",
"cs.LG"
] | false |
2306.04228
|
2023-06-07T08:06:50Z
|
Data Mining for Faster, Interpretable Solutions to Inverse Problems: A
Case Study Using Additive Manufacturing
|
[
"Chandrika Kamath",
"Juliette Franzman",
"Ravi Ponmalai"
] |
Solving inverse problems, where we find the input values that result in
desired values of outputs, can be challenging. The solution process is often
computationally expensive and it can be difficult to interpret the solution in
high-dimensional input spaces. In this paper, we use a problem from additive
manufacturing to address these two issues with the intent of making it easier
to solve inverse problems and exploit their results. First, focusing on
Gaussian process surrogates that are used to solve inverse problems, we
describe how a simple modification to the idea of tapering can substantially
speed up the surrogate without losing accuracy in prediction. Second, we
demonstrate that Kohonen self-organizing maps can be used to visualize and
interpret the solution to the inverse problem in the high-dimensional input
space. For our data set, as not all input dimensions are equally important, we
show that using weighted distances results in a better organized map that makes
the relationships among the inputs obvious.
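  The weighted-distance trick is simple to reproduce (a generic sketch with
the MiniSom library and assumed per-dimension importances): scaling each input
dimension by the square root of its weight turns the SOM's Euclidean distance
into a weighted distance.
```python
import numpy as np
from minisom import MiniSom

weights = np.array([2.0, 1.0, 0.5, 0.1])          # assumed importances
X = np.random.rand(500, 4) * np.sqrt(weights)     # scaling = weighted distance

som = MiniSom(10, 10, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, num_iteration=5000)
bmu = som.winner(X[0])   # map cell for one solution of the inverse problem
```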
|
[
"cs.LG",
"cs.NA",
"cs.NE",
"math.NA"
] | false |
2306.04319
|
2023-06-07T10:32:53Z
|
CaptAinGlove: Capacitive and Inertial Fusion-Based Glove for Real-Time
on Edge Hand Gesture Recognition for Drone Control
|
[
"Hymalai Bello",
"Sungho Suh",
"Daniel Geißler",
"Lala Ray",
"Bo Zhou",
"Paul Lukowicz"
] |
We present CaptAinGlove, a textile-based, low-power (1.15 Watts),
privacy-conscious, real-time on-the-edge (RTE) glove-based solution with a tiny
memory footprint (2MB), designed to recognize hand gestures used for drone
control. We employ lightweight convolutional neural networks as the backbone
models and a hierarchical multimodal fusion to reduce power consumption and
improve accuracy. The system yields an F1-score of 80% for the offline
evaluation of nine classes; eight hand gesture commands and null activity. For
the RTE, we obtained an F1-score of 67% (one user).
|
[
"cs.LG",
"cs.HC",
"cs.RO"
] | false |
2306.04400
|
2023-06-07T12:58:52Z
|
A Fair Classifier Embracing Triplet Collapse
|
[
"A. Martzloff",
"N. Posocco",
"Q. Ferré"
] |
In this paper, we study the behaviour of the triplet loss and show that it
can be exploited to limit the biases created and perpetuated by machine
learning models. Our fair classifier uses the collapse of the triplet loss when
its margin is greater than the maximum distance between two points in the
latent space, in the case of stochastic triplet selection.
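  The collapse condition is easy to verify numerically (a generic numpy check,
not the paper's classifier): when the margin exceeds the maximum pairwise
distance in the latent space, the hinge of the triplet loss is active for
every triplet.
```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(100, 8))                     # points in a latent space
d_max = max(np.linalg.norm(a - b) for a in z for b in z)
margin = d_max + 1.0                              # margin > max distance

# Triplet loss: max(0, d(a,p) - d(a,n) + margin). Since
# d(a,p) - d(a,n) >= -d_max > -margin, the hinge never deactivates.
a, p, n = z[0], z[1], z[2]
loss = max(0.0, np.linalg.norm(a - p) - np.linalg.norm(a - n) + margin)
print(loss > 0)   # True for every possible triplet
```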
|
[
"cs.LG",
"cs.AI",
"cs.CY",
"I.2.6; I.5.1; K.4.2"
] | false |
2306.04505
|
2023-06-07T15:12:16Z
|
Hardness of Deceptive Certificate Selection
|
[
"Stephan Wäldchen"
] |
Recent progress towards theoretical interpretability guarantees for AI has
been made with classifiers that are based on interactive proof systems. A
prover selects a certificate from the datapoint and sends it to a verifier who
decides the class. In the context of machine learning, such a certificate can
be a feature that is informative of the class. For a setup with high soundness
and completeness, the exchanged certificates must have a high mutual
information with the true class of the datapoint. However, this guarantee
relies on a bound on the Asymmetric Feature Correlation of the dataset, a
property that so far is difficult to estimate for high-dimensional data. It was
conjectured in Wäldchen et al. that it is computationally hard to exploit the
AFC, which is what we prove here.
We consider a malicious prover-verifier duo that aims to exploit the AFC to
achieve high completeness and soundness while using uninformative certificates.
We show that this task is $\mathsf{NP}$-hard and cannot be approximated better
than $\mathcal{O}(m^{1/8 - \epsilon})$, where $m$ is the number of possible
certificates, for $\epsilon>0$ under the Dense-vs-Random conjecture. This is
some evidence that AFC should not prevent the use of interactive classification
for real-world tasks, as it is computationally hard to be exploited.
|
[
"cs.LG",
"cs.AI",
"cs.CC",
"cs.CR",
"68T01, 91A06",
"I.2.0"
] | false |