arxiv_id | published | titles | authors | abstract | categories | selected
---|---|---|---|---|---|---|
2306.01732
|
2023-06-02T17:58:00Z
|
Video Colorization with Pre-trained Text-to-Image Diffusion Models
|
[
"Hanyuan Liu",
"Minshan Xie",
"Jinbo Xing",
"Chengze Li",
"Tien-Tsin Wong"
] |
Video colorization is a challenging task that involves inferring plausible
and temporally consistent colors for grayscale frames. In this paper, we
present ColorDiffuser, an adaptation of a pre-trained text-to-image latent
diffusion model for video colorization. With the proposed adapter-based
approach, we repurpose the pre-trained text-to-image model to accept input
grayscale video frames, with an optional text description, for video
colorization. To enhance the temporal coherence and maintain the vividness of
colorization across frames, we propose two novel techniques: Color
Propagation Attention and an Alternated Sampling Strategy. Color Propagation
Attention enables the model to refine its colorization decision based on a
reference latent frame, while the Alternated Sampling Strategy captures
spatiotemporal dependencies by using the next and previous adjacent latent
frames alternately as references during the generative diffusion sampling
steps. This encourages bidirectional color information propagation between
adjacent video frames, leading to improved color consistency across frames. We
conduct extensive experiments on benchmark datasets, and the results
demonstrate the effectiveness of our proposed framework. The evaluations show
that ColorDiffuser achieves state-of-the-art performance in video colorization,
surpassing existing methods in terms of color fidelity, temporal consistency,
and visual quality.
|
[
"cs.CV",
"cs.AI",
"cs.GR"
] | false |
2306.01733
|
2023-06-02T17:58:03Z
|
DocFormerv2: Local Features for Document Understanding
|
[
"Srikar Appalaraju",
"Peng Tang",
"Qi Dong",
"Nishant Sankaran",
"Yichu Zhou",
"R. Manmatha"
] |
We propose DocFormerv2, a multi-modal transformer for Visual Document
Understanding (VDU). The VDU domain entails understanding documents (beyond
mere OCR predictions), e.g., extracting information from a form, VQA for
documents, and other tasks. VDU is challenging as it needs a model to make sense
of multiple modalities (visual, language and spatial) to make a prediction. Our
approach, termed DocFormerv2, is an encoder-decoder transformer which takes
vision, language, and spatial features as input. DocFormerv2 is pre-trained with
unsupervised tasks employed asymmetrically i.e., two novel document tasks on
encoder and one on the auto-regressive decoder. The unsupervised tasks have
been carefully designed to ensure that the pre-training encourages
local-feature alignment between multiple modalities. DocFormerv2 when evaluated
on nine datasets shows state-of-the-art performance over strong baselines, e.g.,
TabFact (4.3%), InfoVQA (1.4%), FUNSD (1%). Furthermore, to show generalization
capabilities, on three VQA tasks involving scene-text, DocFormerv2
outperforms previous comparably-sized models and even does better than much
larger models (such as GIT2, PaLi and Flamingo) on some tasks. Extensive
ablations show that due to its pre-training, DocFormerv2 understands multiple
modalities better than prior-art in VDU.
|
[
"cs.CV",
"cs.CL",
"cs.LG"
] | false |
2306.01735
|
2023-06-02T17:59:09Z
|
Multilingual Conceptual Coverage in Text-to-Image Models
|
[
"Michael Saxon",
"William Yang Wang"
] |
We propose "Conceptual Coverage Across Languages" (CoCo-CroLa), a technique
for benchmarking the degree to which any generative text-to-image system
provides multilingual parity to its training language in terms of tangible
nouns. For each model we can assess "conceptual coverage" of a given target
language relative to a source language by comparing the population of images
generated for a series of tangible nouns in the source language to the
population of images generated for each noun under translation in the target
language. This technique allows us to estimate how well-suited a model is to a
target language as well as identify model-specific weaknesses, spurious
correlations, and biases without a priori assumptions. We demonstrate how it
can be used to benchmark T2I models in terms of multilinguality, and how
despite its simplicity it is a good proxy for impressive generalization.
|
[
"cs.CL",
"cs.AI",
"cs.CV",
"eess.IV"
] | false |
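The coverage comparison in the CoCo-CroLa entry above lends itself to a toy sketch. Everything here is an assumption for illustration: `generate_images` (prompt -> list of images) and `embed` (images -> embedding matrix) are hypothetical callables, and the cosine-of-means score is a simplification of whatever scoring the paper actually uses.

```python
import numpy as np

def concept_coverage(generate_images, embed, noun_src, noun_tgt, n=16):
    # Generate an image population for the noun in each language,
    # embed both populations, and compare their mean embeddings.
    src = embed(generate_images(noun_src, n))  # shape (n, d)
    tgt = embed(generate_images(noun_tgt, n))
    mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
    # High cosine similarity suggests the concept "survives" translation.
    return float(mu_s @ mu_t / (np.linalg.norm(mu_s) * np.linalg.norm(mu_t)))
```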
2306.01736
|
2023-06-02T17:59:24Z
|
DaTaSeg: Taming a Universal Multi-Dataset Multi-Task Segmentation Model
|
[
"Xiuye Gu",
"Yin Cui",
"Jonathan Huang",
"Abdullah Rashwan",
"Xuan Yang",
"Xingyi Zhou",
"Golnaz Ghiasi",
"Weicheng Kuo",
"Huizhong Chen",
"Liang-Chieh Chen",
"David A Ross"
] |
Observing the close relationship among panoptic, semantic and instance
segmentation tasks, we propose to train a universal multi-dataset multi-task
segmentation model: DaTaSeg. We use a shared representation (mask proposals with
class predictions) for all tasks. To tackle task discrepancy, we adopt
different merge operations and post-processing for different tasks. We also
leverage weak supervision, allowing our segmentation model to benefit from
cheaper bounding box annotations. To share knowledge across datasets, we use
text embeddings from the same semantic embedding space as classifiers and share
all network parameters among datasets. We train DaTaSeg on ADE semantic, COCO
panoptic, and Objects365 detection datasets. DaTaSeg improves performance on
all datasets, especially small-scale datasets, achieving 54.0 mIoU on ADE
semantic and 53.5 PQ on COCO panoptic. DaTaSeg also enables weakly-supervised
knowledge transfer on ADE panoptic and Objects365 instance segmentation.
Experiments show DaTaSeg scales with the number of training datasets and
enables open-vocabulary segmentation through direct transfer. In addition, we
annotate an Objects365 instance segmentation set of 1,000 images and will
release it as a public benchmark.
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] | true |
2306.01809
|
2023-06-02T03:11:32Z
|
Adversarial Attack Based on Prediction-Correction
|
[
"Chen Wan",
"Fangjun Huang"
] |
Deep neural networks (DNNs) are vulnerable to adversarial examples obtained
by adding small perturbations to original examples. The added perturbations in
existing attacks are mainly determined by the gradient of the loss function
with respect to the inputs. In this paper, the close relationship between
gradient-based attacks and numerical methods for solving ordinary
differential equations (ODEs) is studied for the first time. Inspired by the
numerical solution of ODEs, a new prediction-correction (PC) based adversarial
attack is proposed. In the proposed PC-based attack, an existing attack is
first used to produce a predicted example, and then the predicted
example and the current example are combined to determine the added
perturbations. The proposed method possesses good extensibility and can be
applied to all available gradient-based attacks easily. Extensive experiments
demonstrate that compared with the state-of-the-art gradient-based adversarial
attacks, our proposed PC-based attacks have higher attack success rates, and
exhibit better transferability.
|
[
"cs.CR",
"cs.AI",
"cs.CV",
"cs.LG"
] | false |
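To make the predictor-corrector analogy in the entry above concrete, here is a minimal PyTorch sketch of a PC-style variant of FGSM. The update rule, `eps`, and the mixing weight `gamma` are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def pc_fgsm(model, loss_fn, x, y, eps=8 / 255, gamma=0.5):
    x = x.detach()
    # Prediction: a plain FGSM step from the current example.
    x_cur = x.clone().requires_grad_(True)
    loss_fn(model(x_cur), y).backward()
    x_pred = (x + eps * x_cur.grad.sign()).clamp(0, 1)
    # Correction: mix gradients at the current and predicted examples,
    # loosely mirroring a predictor-corrector ODE solver.
    x_pred = x_pred.detach().requires_grad_(True)
    loss_fn(model(x_pred), y).backward()
    g = gamma * x_cur.grad + (1 - gamma) * x_pred.grad
    return (x + eps * g.sign()).clamp(0, 1)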
2306.01245
|
2023-06-02T03:09:31Z
|
THiFLY Research at SemEval-2023 Task 7: A Multi-granularity System for
CTR-based Textual Entailment and Evidence Retrieval
|
[
"Yuxuan Zhou",
"Ziyu Jin",
"Meiwei Li",
"Miao Li",
"Xien Liu",
"Xinxin You",
"Ji Wu"
] |
The NLI4CT task aims to determine the entailment of hypotheses based on
Clinical Trial Reports (CTRs) and to retrieve the corresponding supporting
evidence.
This task poses a significant challenge, as verifying hypotheses in the NLI4CT
task requires the integration of multiple pieces of evidence from one or two
CTR(s) and the application of diverse levels of reasoning, including textual
and numerical. To address these problems, we present a multi-granularity system
for CTR-based textual entailment and evidence retrieval in this paper.
Specifically, we construct a Multi-granularity Inference Network (MGNet) that
exploits sentence-level and token-level encoding to handle both textual
entailment and evidence retrieval tasks. Moreover, we enhance the numerical
inference capability of the system by leveraging a T5-based model, SciFive,
which is pre-trained on the medical corpus. Model ensembling and a joint
inference method are further utilized in the system to increase the stability
and consistency of inference. The system achieves F1-scores of 0.856 and 0.853
on textual entailment and evidence retrieval tasks, resulting in the best
performance on both subtasks. The experimental results corroborate the
effectiveness of our proposed method. Our code is publicly available at
https://github.com/THUMLP/NLI4CT.
|
[
"cs.CL"
] | false |
2306.01261
|
2023-06-02T04:03:14Z
|
Automatic Translation of Hate Speech to Non-hate Speech in Social Media
Texts
|
[
"Yevhen Kostiuk",
"Atnafu Lambebo Tonja",
"Grigori Sidorov",
"Olga Kolesnikova"
] |
In this paper, we investigate the issue of hate speech by presenting a novel
task of translating hate speech into non-hate speech text while preserving its
meaning. As a case study, we use Spanish texts. We provide a dataset and
several baselines as a starting point for further research in the task. We
evaluated our baseline results using multiple metrics, including BLEU scores.
The aim of this study is to contribute to the development of more effective
methods for reducing the spread of hate speech in online communities.
|
[
"cs.CL"
] | false |
2306.01273
|
2023-06-02T05:18:19Z
|
VoteTRANS: Detecting Adversarial Text without Training by Voting on Hard
Labels of Transformations
|
[
"Hoang-Quoc Nguyen-Son",
"Seira Hidano",
"Kazuhide Fukushima",
"Shinsaku Kiyomoto",
"Isao Echizen"
] |
Adversarial attacks reveal serious flaws in deep learning models. More
dangerously, these attacks preserve the original meaning and escape human
recognition. Existing methods for detecting these attacks need to be trained
using original/adversarial data. In this paper, we propose detection without
training by voting on hard labels from predictions of transformations, namely,
VoteTRANS. Specifically, VoteTRANS detects adversarial text by comparing the
hard labels of input text and its transformation. The evaluation demonstrates
that VoteTRANS effectively detects adversarial text across various
state-of-the-art attacks, models, and datasets.
|
[
"cs.CL"
] | false |
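The voting idea in the VoteTRANS entry above reduces to a compact, training-free check. In this sketch, `classify` (text -> hard label) and `transform` (text -> perturbed text, e.g., synonym substitution) are assumed callables, and the agreement threshold is a placeholder.

```python
from collections import Counter

def vote_trans(classify, transform, text, n_variants=8, agree_thresh=0.5):
    # Hard label of the original input.
    base = classify(text)
    # Hard labels of several transformed copies.
    votes = Counter(classify(transform(text)) for _ in range(n_variants))
    # Adversarial text tends to flip labels under small transformations,
    # so low agreement with the original label is a warning sign.
    return votes[base] / n_variants < agree_thresh  # True -> likely adversarial
```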
2306.01311
|
2023-06-02T07:21:03Z
|
MetaVL: Transferring In-Context Learning Ability From Language Models to
Vision-Language Models
|
[
"Masoud Monajatipoor",
"Liunian Harold Li",
"Mozhdeh Rouhsedaghat",
"Lin F. Yang",
"Kai-Wei Chang"
] |
Large-scale language models have shown the ability to adapt to a new task via
conditioning on a few demonstrations (i.e., in-context learning). However, in
the vision-language domain, most large-scale pre-trained vision-language (VL)
models do not possess the ability to conduct in-context learning. How can we
enable in-context learning for VL models? In this paper, we study an
interesting hypothesis: can we transfer the in-context learning ability from
the language domain to the VL domain? Specifically, we first meta-train a language
model to perform in-context learning on NLP tasks (as in MetaICL); then we
transfer this model to perform VL tasks by attaching a visual encoder. Our
experiments suggest that in-context learning ability can indeed be transferred
across modalities: our model considerably improves the in-context learning
capability on VL tasks and can even significantly compensate for model size.
On VQA, OK-VQA, and GQA, our method outperforms the
baseline model while having 20 times fewer parameters.
|
[
"cs.CL"
] | false |
2306.01444
|
2023-06-02T11:07:13Z
|
Unsupervised Extractive Summarization of Emotion Triggers
|
[
"Tiberiu Sosea",
"Hongli Zhan",
"Junyi Jessy Li",
"Cornelia Caragea"
] |
Understanding what leads to emotions during large-scale crises is important
as it can provide groundings for expressed emotions and subsequently improve
the understanding of ongoing disasters. Recent approaches trained supervised
models to both detect emotions and explain emotion triggers (events and
appraisals) via abstractive summarization. However, obtaining timely and
qualitative abstractive summaries is expensive and extremely time-consuming,
requiring highly trained expert annotators. In time-sensitive, high-stakes
contexts, this can block necessary responses. We instead pursue unsupervised
systems that extract triggers from text. First, we introduce CovidET-EXT,
augmenting (Zhan et al. 2022)'s abstractive dataset (in the context of the
COVID-19 crisis) with extractive triggers. Second, we develop new unsupervised
learning models that can jointly detect emotions and summarize their triggers.
Our best approach, entitled Emotion-Aware Pagerank, incorporates emotion
information from external sources combined with a language understanding
module, and outperforms strong baselines. We release our data and code at
https://github.com/tsosea2/CovidET-EXT.
|
[
"cs.CL"
] | false |
2306.01465
|
2023-06-02T11:41:24Z
|
Light Coreference Resolution for Russian with Hierarchical Discourse
Features
|
[
"Elena Chistova",
"Ivan Smirnov"
] |
Coreference resolution is the task of identifying and grouping mentions
referring to the same real-world entity. Previous neural models have mainly
focused on learning span representations and pairwise scores for coreference
decisions. However, current methods do not explicitly capture the referential
choice in the hierarchical discourse, an important factor in coreference
resolution. In this study, we propose a new approach that incorporates
rhetorical information into neural coreference resolution models. We collect
rhetorical features from automated discourse parses and examine their impact.
As a base model, we implement an end-to-end span-based coreference resolver
using a partially fine-tuned multilingual entity-aware language model LUKE. We
evaluate our method on the RuCoCo-23 Shared Task for coreference resolution in
Russian. Our best model employing rhetorical distance between mentions has
ranked 1st on the development set (74.6% F1) and 2nd on the test set (73.3% F1)
of the Shared Task. We hope that our work will inspire further research on
incorporating discourse information in neural coreference resolution models.
|
[
"cs.CL"
] | false |
2306.01481
|
2023-06-02T12:09:59Z
|
GAIA Search: Hugging Face and Pyserini Interoperability for NLP Training
Data Exploration
|
[
"Aleksandra Piktus",
"Odunayo Ogundepo",
"Christopher Akiki",
"Akintunde Oladipo",
"Xinyu Zhang",
"Hailey Schoelkopf",
"Stella Biderman",
"Martin Potthast",
"Jimmy Lin"
] |
Noticing the urgent need for tools enabling fast and user-friendly
qualitative analysis of the large-scale textual corpora used in modern NLP, we
propose to turn to the mature and well-tested methods from the domain of
Information Retrieval (IR) - a research field with a long history of tackling
TB-scale document collections. We discuss how Pyserini - a widely used toolkit
for reproducible IR research - can be integrated with the Hugging Face ecosystem
of open-source AI libraries and artifacts. We leverage the existing
functionalities of both platforms while proposing novel features further
facilitating their integration. Our goal is to give NLP researchers tools that
will allow them to develop retrieval-based instrumentation for their data
analytics needs with ease and agility. We include a Jupyter Notebook-based
walkthrough of the core interoperability features, available on GitHub at
https://github.com/huggingface/gaia. We then demonstrate how the ideas we
present can be operationalized to create a powerful tool for qualitative data
analysis in NLP. We present GAIA Search - a search engine built following
previously laid out principles, giving access to four popular large-scale text
collections. GAIA serves a dual purpose: it illustrates the potential of the
methodologies we discuss while also acting as a standalone qualitative analysis
tool that NLP researchers can leverage to understand datasets prior to
using them in training. GAIA is hosted live on Hugging Face Spaces -
https://huggingface.co/spaces/spacerini/gaia.
|
[
"cs.CL"
] | false |
2306.01497
|
2023-06-02T12:45:34Z
|
Data-Efficient French Language Modeling with CamemBERTa
|
[
"Wissam Antoun",
"Benoît Sagot",
"Djamé Seddah"
] |
Recent advances in NLP have significantly improved the performance of
language models on a variety of tasks. While these advances are largely driven
by the availability of large amounts of data and computational power, they also
benefit from the development of better training methods and architectures. In
this paper, we introduce CamemBERTa, a French DeBERTa model that builds upon
the DeBERTaV3 architecture and training objective. We evaluate our model's
performance on a variety of French downstream tasks and datasets, including
question answering, part-of-speech tagging, dependency parsing, named entity
recognition, and the FLUE benchmark, and compare against CamemBERT, the
state-of-the-art monolingual model for French. Our results show that, given the
same number of training tokens, our model outperforms BERT-based models trained
with MLM on most tasks. Furthermore, our new model reaches similar or superior
performance on downstream tasks compared to CamemBERT, despite being trained on
only 30% of its total number of input tokens. In addition to our experimental
results, we also publicly release the weights and code implementation of
CamemBERTa, making it the first publicly available DeBERTaV3 model outside of
the original paper and the first openly available implementation of a DeBERTaV3
training objective. https://gitlab.inria.fr/almanach/CamemBERTa
|
[
"cs.CL"
] | false |
2306.01551
|
2023-06-02T13:58:59Z
|
Comparing a composite model versus chained models to locate a nearest
visual object
|
[
"Antoine Le Borgne",
"Xavier Marjou",
"Fanny Parzysz",
"Tayeb Lemlouma"
] |
Extracting information from geographic images and text is crucial for
autonomous vehicles to determine in advance the best cell stations to connect
to along their future path. Multiple artificial neural network models can
address this challenge; however, there is no definitive guidance on the
selection of an appropriate model for such use cases. Therefore, we
experimented with two architectures to solve such a task: a first architecture
with chained models, where each model in the chain addresses a sub-task of the
task, and a second architecture with a single composite model that addresses the
whole task. Our results showed that the two architectures achieved the same
level of performance, with root mean square errors (RMSE) of 0.055 and 0.056.
The findings further revealed that when the task can be decomposed into
sub-tasks, the chained architecture exhibits a twelve-fold increase in training
speed compared to the composite model. Nevertheless, the composite model
significantly alleviates the burden of data labeling.
|
[
"cs.CL"
] | false |
2306.01579
|
2023-06-02T14:48:19Z
|
EmoUS: Simulating User Emotions in Task-Oriented Dialogues
|
[
"Hsien-Chin Lin",
"Shutong Feng",
"Christian Geishauser",
"Nurul Lubis",
"Carel van Niekerk",
"Michael Heck",
"Benjamin Ruppik",
"Renato Vukovic",
"Milica Gašić"
] |
Existing user simulators (USs) for task-oriented dialogue systems only model
user behaviour on semantic and natural language levels without considering the
user persona and emotions. Optimising dialogue systems with generic user
policies, which cannot model diverse user behaviour driven by different
emotional states, may result in a high drop-off rate when deployed in the real
world. Thus, we present EmoUS, a user simulator that learns to simulate user
emotions alongside user behaviour. EmoUS generates user emotions, semantic
actions, and natural language responses based on the user goal, the dialogue
history, and the user persona. By analysing what kind of system behaviour
elicits what kind of user emotions, we show that EmoUS can be used as a probe
to evaluate a variety of dialogue systems and in particular their effect on the
user's emotional state. Developing such methods is important in the age of
large language model chatbots and rising ethical concerns.
|
[
"cs.CL"
] | false |
2306.01709
|
2023-06-02T17:31:52Z
|
Distilling Efficient Language-Specific Models for Cross-Lingual Transfer
|
[
"Alan Ansell",
"Edoardo Maria Ponti",
"Anna Korhonen",
"Ivan Vulić"
] |
Massively multilingual Transformers (MMTs), such as mBERT and XLM-R, are
widely used for cross-lingual transfer learning. While these are pretrained to
represent hundreds of languages, end users of NLP systems are often interested
only in individual languages. For such purposes, the MMTs' language coverage
makes them unnecessarily expensive to deploy in terms of model size, inference
time, energy, and hardware cost. We thus propose to extract compressed,
language-specific models from MMTs which retain the capacity of the original
MMTs for cross-lingual transfer. This is achieved by distilling the MMT
bilingually, i.e., using data from only the source and target language of
interest. Specifically, we use a two-phase distillation approach, termed
BiStil: (i) the first phase distils a general bilingual model from the MMT,
while (ii) the second, task-specific phase sparsely fine-tunes the bilingual
"student" model using a task-tuned variant of the original MMT as its
"teacher". We evaluate this distillation technique in zero-shot cross-lingual
transfer across a number of standard cross-lingual benchmarks. The key results
indicate that the distilled models exhibit minimal degradation in target
language performance relative to the base MMT despite being significantly
smaller and faster. Furthermore, we find that they outperform multilingually
distilled models such as DistilmBERT and MiniLMv2 while having a very modest
training budget in comparison, even on a per-language basis. We also show that
bilingual models distilled from MMTs greatly outperform bilingual models
trained from scratch. Our code and models are available at
https://github.com/AlanAnsell/bistil.
|
[
"cs.CL"
] | false |
2306.01841
|
2023-06-02T18:01:02Z
|
Binary and Ternary Natural Language Generation
|
[
"Zechun Liu",
"Barlas Oguz",
"Aasish Pappu",
"Yangyang Shi",
"Raghuraman Krishnamoorthi"
] |
Ternary and binary neural networks enable multiplication-free computation and
promise multiple orders of magnitude efficiency gains over full-precision
networks if implemented on specialized hardware. However, since both the
parameter and the output space are highly discretized, such networks have
proven very difficult to optimize. The difficulties are compounded for the
class of transformer text generation models due to the sensitivity of the
attention operation to quantization and the noise-compounding effects of
autoregressive decoding in the high-cardinality output space. We approach the
problem with a mix of statistics-based quantization for the weights and elastic
quantization of the activations and demonstrate the first ternary and binary
transformer models on the downstream tasks of summarization and machine
translation. Our ternary BART base achieves an R1 score of 41 on the
CNN/DailyMail benchmark, which is merely 3.9 points behind the full model while
being 16x more efficient. Our binary model, while less accurate, achieves a
highly non-trivial score of 35.6. For machine translation, we achieved BLEU
scores of 21.7 and 17.6 on the WMT16 En-Ro benchmark, compared with a full
precision mBART model score of 26.8. We also compare our approach in the 8-bit
activation setting, where our ternary and even binary weight models can match
or outperform the best existing 8-bit weight models in the literature. Our code
and models are available at:
https://github.com/facebookresearch/Ternary_Binary_Transformer
|
[
"cs.CL"
] | true |
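As a rough illustration of the statistics-based weight quantization mentioned in the entry above, here is a ternarizer in the spirit of ternary weight networks; the 0.7 threshold factor and scaling rule are a common heuristic and an assumption here, not necessarily the paper's exact quantizer.

```python
import torch

def ternarize(w: torch.Tensor) -> torch.Tensor:
    # Threshold at a fraction of the mean magnitude (common heuristic).
    delta = 0.7 * w.abs().mean()
    mask = (w.abs() > delta).float()
    # Scale: mean magnitude of the weights that survive the threshold.
    alpha = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)
    return alpha * w.sign() * mask  # entries in {-alpha, 0, +alpha}
```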
2306.01857
|
2023-06-02T18:23:35Z
|
Knowledge of cultural moral norms in large language models
|
[
"Aida Ramezani",
"Yang Xu"
] |
Moral norms vary across cultures. A recent line of work suggests that English
large language models contain human-like moral biases, but these studies
typically do not examine moral variation in a diverse cultural setting. We
investigate the extent to which monolingual English language models contain
knowledge about moral norms in different countries. We consider two levels of
analysis: 1) whether language models capture fine-grained moral variation
across countries over a variety of topics such as ``homosexuality'' and
``divorce''; 2) whether language models capture cultural diversity and shared
tendencies in which topics people around the globe tend to diverge or agree on
in their moral judgment. We perform our analyses with two public datasets from
the World Values Survey (across 55 countries) and PEW global surveys (across 40
countries) on morality. We find that pre-trained English language models
predict empirical moral norms across countries worse than the English moral
norms reported previously. However, fine-tuning language models on the survey
data improves inference across countries at the expense of a less accurate
estimate of the English moral norms. We discuss the relevance and challenges of
incorporating cultural knowledge into the automated inference of moral norms.
|
[
"cs.CL"
] | false |
2306.01907
|
2023-06-02T20:31:58Z
|
A Simple yet Effective Self-Debiasing Framework for Transformer Models
|
[
"Xiaoyue Wang",
"Lijie Wang",
"Xin Liu",
"Suhang Wu",
"Jinsong Su",
"Hua Wu"
] |
Current Transformer-based natural language understanding (NLU) models heavily
rely on dataset biases, while failing to handle real-world out-of-distribution
(OOD) instances. Many methods have been proposed to deal with this issue, but
they ignore the fact that the features learned in different layers of
Transformer-based NLU models are different. In this paper, we first conduct
preliminary studies to obtain two conclusions: 1) both low- and high-layer
sentence representations encode common biased features during training; 2) the
low-layer sentence representations encode fewer unbiased features than the
high-layer ones. Based on these conclusions, we propose a simple yet effective
self-debiasing framework for Transformer-based NLU models. Concretely, we first
stack a classifier on a selected low layer. Then, we introduce a residual
connection that feeds the low-layer sentence representation to the top-layer
classifier. In this way, the top-layer sentence representation will be trained
to ignore the common biased features encoded by the low-layer sentence
representation and focus on task-relevant unbiased features. During inference,
we remove the residual connection and directly use the top-layer sentence
representation to make predictions. Extensive experiments and in-depth analyses
on NLU tasks show that our framework performs better than several competitive
baselines, achieving a new SOTA on all OOD test sets.
|
[
"cs.CL"
] | false |
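A minimal PyTorch sketch of the residual self-debiasing idea described above. The hidden size, class count, and the way the two heads are combined are simplifying assumptions; which low layer is selected and how the losses are weighted follow the paper and are omitted here.

```python
import torch.nn as nn

class SelfDebiasHeads(nn.Module):
    """Low-layer classifier plus a top-layer classifier whose input is
    summed with the low-layer representation during training."""
    def __init__(self, hidden=768, n_classes=3):
        super().__init__()
        self.low_clf = nn.Linear(hidden, n_classes)
        self.top_clf = nn.Linear(hidden, n_classes)

    def forward(self, h_low, h_top, training=True):
        if training:
            # Residual connection: biased low-layer features reach the top
            # classifier directly, pushing the top representation to encode
            # what the low layer misses (task-relevant, unbiased features).
            return self.top_clf(h_top + h_low), self.low_clf(h_low)
        # Inference: drop the residual, use the top representation alone.
        return self.top_clf(h_top), None
```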
2306.01296
|
2023-06-02T06:46:14Z
|
Improved Training for End-to-End Streaming Automatic Speech Recognition
Model with Punctuation
|
[
"Hanbyul Kim",
"Seunghyun Seo",
"Lukas Lee",
"Seolki Baek"
] |
Punctuated text prediction is crucial for automatic speech recognition as it
enhances readability and impacts downstream natural language processing tasks.
In streaming scenarios, the ability to predict punctuation in real-time is
particularly desirable but presents a difficult technical challenge. In this
work, we propose a method for predicting punctuated text from input speech
using a chunk-based Transformer encoder trained with Connectionist Temporal
Classification (CTC) loss. The acoustic model trained with long sequences by
concatenating the input and target sequences can learn punctuation marks
attached to the end of sentences more effectively. Additionally, by combining
CTC losses on the chunks and utterances, we achieve improvements in both the F1
score of punctuation prediction and the Word Error Rate (WER).
|
[
"eess.AS",
"cs.CL"
] | false |
2306.01318
|
2023-06-02T07:33:47Z
|
Text Style Transfer Back-Translation
|
[
"Daimeng Wei",
"Zhanglin Wu",
"Hengchao Shang",
"Zongyao Li",
"Minghan Wang",
"Jiaxin Guo",
"Xiaoyu Chen",
"Zhengzhe Yu",
"Hao Yang"
] |
Back Translation (BT) is widely used in the field of machine translation, as
it has been proven effective for enhancing translation quality. However, BT
mainly improves the translation of inputs that share a similar style (to be
more specific, translation-like inputs), since the source side of BT data is
machine-translated. For natural inputs, BT brings only slight improvements and
sometimes even adverse effects. To address this issue, we propose Text Style
Transfer Back Translation (TST BT), which uses a style transfer model to modify
the source side of BT data. By making the style of source-side text more
natural, we aim to improve the translation of natural inputs. Our experiments
on various language pairs, including both high-resource and low-resource ones,
demonstrate that TST BT significantly improves translation performance against
popular BT benchmarks. In addition, TST BT proves effective in domain
adaptation, so this strategy can be regarded as a general data augmentation
method. Our training code and text style transfer model are open-sourced.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.01325
|
2023-06-02T07:48:20Z
|
LyricSIM: A novel Dataset and Benchmark for Similarity Detection in
Spanish Song LyricS
|
[
"Alejandro Benito-Santos",
"Adrián Ghajari",
"Pedro Hernández",
"Víctor Fresno",
"Salvador Ros",
"Elena González-Blanco"
] |
In this paper, we present a new dataset and benchmark tailored to the task of
semantic similarity in song lyrics. Our dataset, originally consisting of 2775
pairs of Spanish songs, was annotated in a collective annotation experiment by
63 native annotators. After collecting and refining the data to ensure a high
degree of consensus and data integrity, we obtained 676 high-quality annotated
pairs that were used to evaluate the performance of various state-of-the-art
monolingual and multilingual language models. Consequently, we established
baseline results that we hope will be useful to the community in all future
academic and industrial applications conducted in this context.
|
[
"cs.CL",
"cs.IR"
] | false |
2306.01386
|
2023-06-02T09:15:01Z
|
ChatGPT for Zero-shot Dialogue State Tracking: A Solution or an
Opportunity?
|
[
"Michael Heck",
"Nurul Lubis",
"Benjamin Ruppik",
"Renato Vukovic",
"Shutong Feng",
"Christian Geishauser",
"Hsien-Chin Lin",
"Carel van Niekerk",
"Milica Gašić"
] |
Recent research on dialogue state tracking (DST) focuses on methods that
allow few- and zero-shot transfer to new domains or schemas. However,
performance gains heavily depend on aggressive data augmentation and
fine-tuning of ever larger language model based architectures. In contrast,
general purpose language models, trained on large amounts of diverse data, hold
the promise of solving any kind of task without task-specific training. We
present preliminary experimental results on the ChatGPT research preview,
showing that ChatGPT achieves state-of-the-art performance in zero-shot DST.
Despite our findings, we argue that properties inherent to general purpose
models limit their ability to replace specialized systems. We further theorize
that the in-context learning capabilities of such models will likely become
powerful tools to support the development of dedicated and dynamic dialogue
state trackers.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.01457
|
2023-06-02T11:33:06Z
|
Driving Context into Text-to-Text Privatization
|
[
"Stefan Arnold",
"Dilara Yesilbas",
"Sven Weinzierl"
] |
\textit{Metric Differential Privacy} enables text-to-text privatization by
adding calibrated noise to the vector of a word derived from an embedding space
and projecting this noisy vector back to a discrete vocabulary using a nearest
neighbor search. Since words are substituted without context, this mechanism is
expected to fall short at finding substitutes for words with ambiguous
meanings, such as \textit{'bank'}. To account for these ambiguous words, we
leverage a sense embedding and incorporate a sense disambiguation step prior to
noise injection. We complement our modification to the privatization mechanism
with an estimation of privacy and utility. For word sense disambiguation on the
\textit{Words in Context} dataset, we demonstrate a substantial increase in
classification accuracy by $6.05\%$.
|
[
"cs.CL",
"cs.LG"
] | false |
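A toy sketch of the underlying metric-DP mechanism from the entry above (noise plus nearest-neighbour projection). Gaussian noise is used purely for brevity; metric-DP mechanisms calibrate the noise distribution to the privacy parameter, and the paper's sense-disambiguation step is omitted here.

```python
import numpy as np

def privatize_word(word, vocab, emb, epsilon=10.0, rng=None):
    # vocab: list of words; emb: (|vocab|, d) array of word vectors.
    rng = rng or np.random.default_rng()
    v = emb[vocab.index(word)]
    # Perturb the vector (toy noise; scale shrinks as epsilon grows).
    noisy = v + rng.normal(scale=1.0 / epsilon, size=v.shape)
    # Project back to the vocabulary via nearest-neighbour search.
    dists = np.linalg.norm(emb - noisy, axis=1)
    return vocab[int(dists.argmin())]
```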
2306.01499
|
2023-06-02T12:47:45Z
|
Can LLMs like GPT-4 outperform traditional AI tools in dementia
diagnosis? Maybe, but not today
|
[
"Zhuo Wang",
"Rongzhen Li",
"Bowen Dong",
"Jie Wang",
"Xiuxing Li",
"Ning Liu",
"Chenhui Mao",
"Wei Zhang",
"Liling Dong",
"Jing Gao",
"Jianyong Wang"
] |
Recent investigations show that large language models (LLMs), specifically
GPT-4, not only have remarkable capabilities in common Natural Language
Processing (NLP) tasks but also exhibit human-level performance on various
professional and academic benchmarks. However, whether GPT-4 can be directly
used in practical applications and replace traditional artificial intelligence
(AI) tools in specialized domains requires further experimental validation. In
this paper, we explore the potential of LLMs such as GPT-4 to outperform
traditional AI tools in dementia diagnosis. Comprehensive comparisons between
GPT-4 and traditional AI tools are conducted to examine their diagnostic
accuracy in a clinical setting. Experimental results on two real clinical
datasets show that, although LLMs like GPT-4 demonstrate potential for future
advancements in dementia diagnosis, they currently do not surpass the
performance of traditional AI tools. The interpretability and faithfulness of
GPT-4 are also evaluated by comparison with real doctors. We discuss the
limitations of GPT-4 in its current state and propose future research
directions to enhance GPT-4 in dementia diagnosis.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.01549
|
2023-06-02T13:56:30Z
|
Evaluating Machine Translation Quality with Conformal Predictive
Distributions
|
[
"Patrizio Giovannotti"
] |
This paper presents a new approach for assessing uncertainty in machine
translation by simultaneously evaluating translation quality and providing a
reliable confidence score. Our approach utilizes conformal predictive
distributions to produce prediction intervals with guaranteed coverage, meaning
that for any given significance level $\epsilon$, we can expect the true
quality score of a translation to fall outside the interval at a rate of at
most $\epsilon$. In this paper, we demonstrate how our method outperforms a
simple, but effective baseline on six different language pairs in terms of
coverage and sharpness. Furthermore, we validate that our approach requires the
data exchangeability assumption to hold for optimal performance.
|
[
"cs.CL",
"stat.ML"
] | false |
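For intuition on the coverage guarantee in the entry above, here is a split-conformal sketch of interval construction; the paper uses conformal predictive distributions, which are richer than this simple symmetric-interval version.

```python
import numpy as np

def conformal_interval(cal_pred, cal_true, test_pred, epsilon=0.1):
    # Nonconformity scores on a held-out calibration set.
    scores = np.sort(np.abs(np.asarray(cal_true) - np.asarray(cal_pred)))
    n = len(scores)
    # Conservative quantile index; under exchangeability the true score
    # falls inside the interval with probability >= 1 - epsilon.
    k = min(int(np.ceil((n + 1) * (1 - epsilon))) - 1, n - 1)
    q = scores[k]
    return test_pred - q, test_pred + q
```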
2306.01729
|
2023-06-02T17:54:36Z
|
Improving Generalization in Task-oriented Dialogues with Workflows and
Action Plans
|
[
"Stefania Raimondo",
"Christopher Pal",
"Xiaotian Liu",
"David Vazquez",
"Hector Palacios"
] |
Task-oriented dialogue is difficult in part because it involves understanding
user intent, collecting information from the user, executing API calls, and
generating helpful and fluent responses. However, for complex tasks one must
also correctly do all of these things over multiple steps, and in a specific
order. While large pre-trained language models can be fine-tuned end-to-end to
create multi-step task-oriented dialogue agents that generate fluent text, our
experiments confirm that this approach alone cannot reliably perform new
multi-step tasks that are unseen during training. To address these limitations,
we augment the dialogue contexts given to \textmd{text2text} transformers with
known \textit{valid workflow names} and \textit{action plans}. Action plans
consist of sequences of actions required to accomplish a task, and are encoded
as simple sequences of keywords (e.g. verify-identity, pull-up-account,
reset-password, etc.). We perform extensive experiments on the Action-Based
Conversations Dataset (ABCD) with T5-small, base and large models, and show
that such models: a) are able to more readily generalize to unseen workflows by
following the provided plan, and b) are able to generalize to executing unseen
actions if they are provided in the plan. In contrast, models are unable to
fully accomplish new multi-step tasks when they are not provided action plan
information, even when given new valid workflow names.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.01807
|
2023-06-02T01:00:44Z
|
Word Embeddings for Banking Industry
|
[
"Avnish Patel"
] |
Applications of Natural Language Processing (NLP) are plentiful, from
sentiment analysis to text classification. Practitioners rely on static word
embeddings (e.g. Word2Vec or GloVe) or static word representation from
contextual models (e.g. BERT or ELMo) to perform many of these NLP tasks. These
widely available word embeddings are built from large amounts of text, so they
are likely to have captured most of the vocabulary in different contexts.
However, how well do they capture domain-specific semantics and word
relatedness? This paper explores this question by creating bank-specific word
embeddings and evaluating them against other sources of word embeddings such as
GloVe and BERT. Unsurprisingly, embeddings built from bank-specific corpora
do a better job of capturing bank-specific semantics and word
relatedness. This finding suggests that bank-specific word embeddings could be
a good stand-alone source or a complement to other widely available embeddings
when performing NLP tasks specific to the banking industry.
|
[
"cs.CL",
"cs.AI"
] | false |
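Training domain-specific embeddings like those in the entry above is straightforward with, e.g., gensim's Word2Vec; the corpus below is a hypothetical toy stand-in, whereas useful embeddings require a large in-domain corpus.

```python
from gensim.models import Word2Vec

# Hypothetical toy corpus: one tokenized sentence per banking document.
bank_corpus = [
    ["wire", "transfer", "fee", "waived", "for", "premium", "accounts"],
    ["mortgage", "refinance", "rates", "dropped", "this", "quarter"],
    ["overdraft", "protection", "linked", "to", "savings", "account"],
    # ... many more tokenized in-domain sentences in practice
]

model = Word2Vec(bank_corpus, vector_size=100, window=5, min_count=1, workers=4)
print(model.wv.most_similar("mortgage", topn=3))
```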
2306.01945
|
2023-06-02T23:04:19Z
|
Efficient Spoken Language Recognition via Multilabel Classification
|
[
"Oriol Nieto",
"Zeyu Jin",
"Franck Dernoncourt",
"Justin Salamon"
] |
Spoken language recognition (SLR) is the task of automatically identifying
the language present in a speech signal. Existing SLR models are either too
computationally expensive or too large to run effectively on devices with
limited resources. For real-world deployment, a model should also gracefully
handle unseen languages outside of the target language set, yet prior work has
focused on closed-set classification where all input languages are known
a priori. In this paper, we address these two limitations: we explore efficient
model architectures for SLR based on convolutional networks, and propose a
multilabel training strategy to handle non-target languages at inference time.
Using the VoxLingua107 dataset, we show that our models obtain competitive
results while being orders of magnitude smaller and faster than current
state-of-the-art methods, and that our multilabel strategy is more robust to
unseen non-target languages compared to multiclass classification.
|
[
"cs.CL",
"cs.LG"
] | false |
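The multilabel strategy in the entry above boils down to independent per-language sigmoids instead of a single softmax, which gives a natural rejection rule for non-target languages. A minimal sketch; the dimensions, batch, and threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

n_langs, feat_dim = 107, 256            # e.g., a VoxLingua107-sized target set
head = nn.Linear(feat_dim, n_langs)
criterion = nn.BCEWithLogitsLoss()      # independent sigmoid per language

features = torch.randn(8, feat_dim)     # stand-in for conv-encoder embeddings
targets = torch.zeros(8, n_langs)
targets[torch.arange(8), torch.randint(0, n_langs, (8,))] = 1.0

loss = criterion(head(features), targets)
# At inference, a low maximum score can flag an unseen, non-target language.
probs = torch.sigmoid(head(features))
is_non_target = probs.max(dim=1).values < 0.5
```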
2306.03103
|
2023-06-02T09:55:15Z
|
Sampling and Ranking for Digital Ink Generation on a tight computational
budget
|
[
"Andrei Afonin",
"Andrii Maksai",
"Aleksandr Timofeev",
"Claudiu Musat"
] |
Digital ink (online handwriting) generation has a number of potential
applications for creating user-visible content, such as handwriting
autocompletion, spelling correction, and beautification. Writing is personal
and usually the processing is done on-device. Ink generative models thus need
to produce high quality content quickly, in a resource constrained environment.
In this work, we study ways to maximize the quality of the output of a
trained digital ink generative model, while staying within an inference time
budget. We use and compare the effect of multiple sampling and ranking
techniques, in the first ablation study of its kind in the digital ink domain.
We confirm our findings on multiple datasets - writing in English and
Vietnamese, as well as mathematical formulas - using two model types and two
common ink data representations. In all combinations, we report a meaningful
improvement in the recognizability of the synthetic inks, in some cases more
than halving the character error rate metric, and describe a way to select the
optimal combination of sampling and ranking techniques for any given
computational budget.
|
[
"cs.HC",
"cs.CL"
] | false |
2306.03778
|
2023-06-02T20:28:14Z
|
Streaming Speech-to-Confusion Network Speech Recognition
|
[
"Denis Filimonov",
"Prabhat Pandey",
"Ariya Rastrow",
"Ankur Gandhe",
"Andreas Stolcke"
] |
In interactive automatic speech recognition (ASR) systems, low-latency
requirements limit the amount of search space that can be explored during
decoding, particularly in end-to-end neural ASR. In this paper, we present a
novel streaming ASR architecture that outputs a confusion network while
maintaining limited latency, as needed for interactive applications. We show
that 1-best results of our model are on par with a comparable RNN-T system,
while the richer hypothesis set allows second-pass rescoring to achieve 10-20\%
lower word error rate on the LibriSpeech task. We also show that our model
outperforms a strong RNN-T baseline on a far-field voice assistant task.
|
[
"eess.AS",
"cs.CL"
] | false |
2306.01303
|
2023-06-02T07:03:06Z
|
DistilXLSR: A Light Weight Cross-Lingual Speech Representation Model
|
[
"Haoyu Wang",
"Siyuan Wang",
"Wei-Qiang Zhang",
"Jinfeng Bai"
] |
Multilingual self-supervised speech representation models have greatly
enhanced the speech recognition performance for low-resource languages, and the
compression of these huge models has also become a crucial prerequisite for
their industrial application. In this paper, we propose DistilXLSR, a distilled
cross-lingual speech representation model. By randomly shuffling the phonemes
of existing speech, we reduce the linguistic information and distill
cross-lingual models using only English data. We also design a layer-jumping
initialization method to fully leverage the teacher's pre-trained weights.
Experiments on 2 kinds of teacher models and 15 low-resource languages show
that our method can reduce the parameters by 50% while maintaining
cross-lingual representation ability. Our method is proven to be generalizable
to various languages/teacher models and has the potential to improve the
cross-lingual performance of the English pre-trained models.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2306.01327
|
2023-06-02T07:48:37Z
|
Speech Translation with Foundation Models and Optimal Transport: UPC at
IWSLT23
|
[
"Ioannis Tsiamas",
"Gerard I. Gállego",
"José A. R. Fonollosa",
"Marta R. Costa-jussà"
] |
This paper describes the submission of the UPC Machine Translation group to
the IWSLT 2023 Offline Speech Translation task. Our Speech Translation systems
utilize foundation models for speech (wav2vec 2.0) and text (mBART50). We
incorporate a Siamese pretraining step of the speech and text encoders with CTC
and Optimal Transport, to adapt the speech representations to the space of the
text model, thus maximizing transfer learning from MT. After this pretraining,
we fine-tune our system end-to-end on ST, with Cross Entropy and Knowledge
Distillation. Apart from the available ST corpora, we create synthetic data
with SegAugment to better adapt our models to the custom segmentations of the
IWSLT test sets. Our best single model obtains 31.2 BLEU points on MuST-C
tst-COMMON, 29.8 points on IWSLT.tst2020 and 33.4 points on the newly released
IWSLT.ACLdev2023.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2306.01399
|
2023-06-02T09:46:29Z
|
Knowledge Graph Reasoning over Entities and Numerical Values
|
[
"Jiaxin Bai",
"Chen Luo",
"Zheng Li",
"Qingyu Yin",
"Bing Yin",
"Yangqiu Song"
] |
A complex logic query in a knowledge graph refers to a query expressed in
logic form that conveys a complex meaning, such as where did the Canadian
Turing award winner graduate from? Knowledge graph reasoning-based
applications, such as dialogue systems and interactive search engines, rely on
the ability to answer complex logic queries as a fundamental task. In most
knowledge graphs, edges are typically used to either describe the relationships
between entities or their associated attribute values. An attribute value can
be in categorical or numerical format, such as dates, years, sizes, etc.
However, existing complex query answering (CQA) methods simply treat numerical
values in the same way as they treat entities. This can lead to difficulties in
answering certain queries, such as which Australian Pulitzer award winner is
born before 1927, and which drug is a pain reliever and has fewer side effects
than Paracetamol. In this work, inspired by the recent advances in numerical
encoding and knowledge graph reasoning, we propose numerical complex query
answering. In this task, we introduce new numerical variables and operations to
describe queries involving numerical attribute values. To address the
difference between entities and numerical values, we also propose the Number
Reasoning Network (NRN) framework, which encodes entities and
numerical values into separate encoding structures. During the numerical
encoding process, NRN employs a parameterized density function to encode the
distribution of numerical values. During the entity encoding process, NRN uses
established query encoding methods for the original CQA problem. Experimental
results show that NRN consistently improves various query encoding methods on
three different knowledge graphs and achieves state-of-the-art results.
|
[
"cs.AI",
"cs.CL",
"cs.LO"
] | false |
2306.01442
|
2023-06-02T11:03:26Z
|
Towards Robust FastSpeech 2 by Modelling Residual Multimodality
|
[
"Fabian Kögel",
"Bac Nguyen",
"Fabien Cardinaux"
] |
State-of-the-art non-autoregressive text-to-speech (TTS) models based on
FastSpeech 2 can efficiently synthesise high-fidelity and natural speech. For
expressive speech datasets however, we observe characteristic audio
distortions. We demonstrate that such artefacts are introduced to the vocoder
reconstruction by over-smooth mel-spectrogram predictions, which are induced by
the choice of mean-squared-error (MSE) loss for training the mel-spectrogram
decoder. With MSE loss FastSpeech 2 is limited to learn conditional averages of
the training distribution, which might not lie close to a natural sample if the
distribution still appears multimodal after all conditioning signals. To
alleviate this problem, we introduce TVC-GMM, a mixture model of
Trivariate-Chain Gaussian distributions, to model the residual multimodality.
TVC-GMM reduces spectrogram smoothness and improves perceptual audio quality in
particular for expressive datasets as shown by both objective and subjective
evaluation.
|
[
"cs.SD",
"cs.CL",
"cs.LG",
"eess.AS"
] | false |
2306.01443
|
2023-06-02T11:06:48Z
|
Unsupervised Paraphrasing of Multiword Expressions
|
[
"Takashi Wada",
"Yuji Matsumoto",
"Timothy Baldwin",
"Jey Han Lau"
] |
We propose an unsupervised approach to paraphrasing multiword expressions
(MWEs) in context. Our model employs only monolingual corpus data and
pre-trained language models (without fine-tuning), and does not make use of any
external resources such as dictionaries. We evaluate our method on the SemEval
2022 idiomatic semantic text similarity task, and show that it outperforms all
unsupervised systems and rivals supervised systems.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2306.01471
|
2023-06-02T11:52:21Z
|
Guiding Text-to-Text Privatization by Syntax
|
[
"Stefan Arnold",
"Dilara Yesilbas",
"Sven Weinzierl"
] |
Metric Differential Privacy is a generalization of differential privacy
tailored to address the unique challenges of text-to-text privatization. By
adding noise to the representation of words in the geometric space of
embeddings, words are replaced with words located in the proximity of the noisy
representation. Since embeddings are trained based on word co-occurrences, this
mechanism ensures that substitutions stem from a common semantic context.
Without considering the grammatical category of words, however, this mechanism
cannot guarantee that substitutions play similar syntactic roles. We analyze
the capability of text-to-text privatization to preserve the grammatical
category of words after substitution and find that surrogate texts consist
almost exclusively of nouns. Lacking the capability to produce surrogate texts
that correlate with the structure of the sensitive texts, we extend our
analysis by transforming the privatization step into a candidate selection
problem in which substitutions are directed to words with matching grammatical
properties. We demonstrate a substantial improvement in the performance of
downstream tasks by up to $4.66\%$ while retaining comparative privacy
guarantees.
|
[
"cs.CL",
"cs.CR",
"cs.LG"
] | false |
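A sketch of the candidate-selection idea from the entry above: restrict the nearest-neighbour search to words sharing the input's grammatical category. `pos_of` is an assumed helper (e.g., a POS-tag lookup), and the Gaussian noise is again a toy stand-in for a calibrated metric-DP mechanism.

```python
import numpy as np

def privatize_word_pos(word, vocab, emb, pos_of, epsilon=10.0, rng=None):
    rng = rng or np.random.default_rng()
    noisy = emb[vocab.index(word)] + rng.normal(scale=1.0 / epsilon,
                                                size=emb.shape[1])
    # Keep only candidates with the same grammatical category as `word`.
    cand = [i for i, w in enumerate(vocab) if pos_of(w) == pos_of(word)]
    dists = np.linalg.norm(emb[cand] - noisy, axis=1)
    return vocab[cand[int(dists.argmin())]]
```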
2306.01818
|
2023-06-02T11:59:57Z
|
Beta Thalassemia Carriers detection empowered federated Learning
|
[
"Muhammad Shoaib Farooq",
"Hafiz Ali Younas"
] |
Thalassemia is a group of inherited blood disorders that occur when
hemoglobin, the oxygen-carrying protein in red blood cells, is not produced in
sufficient amounts. Hemoglobin is found throughout the body and is essential
for survival. If both parents carry thalassemia, a child's chance of inheriting
it increases. Genetic counselling and early diagnosis are essential for
treating thalassemia and preventing it from being passed on to future
generations. It can be hard for healthcare professionals to differentiate
between thalassemia carriers and non-carriers. The current blood tests for beta
thalassemia
carriers are too expensive, take too long, and require too much screening
equipment. The World Health Organization reports a high death rate for
people with thalassemia, so it is essential to identify carriers
quickly. High-performance liquid chromatography (HPLC), the
standard test method, has problems such as cost, time, and equipment needs. So,
there must be a quick and cheap way to find people carrying the thalassemia
gene. Using federated learning (FL) techniques, this study shows a new way to
find people with the beta-thalassemia gene. FL allows data to be collected and
processed on-site while following privacy rules, making it an excellent choice
for sensitive health data. Researchers used FL to train a model to detect
beta-thalassemia carriers from complete blood count results and
red blood cell indices. The model was 92.38% accurate at distinguishing
beta-thalassemia carriers from people who do not have the
disease. The proposed FL model outperforms other published methods in terms
of how well it works, how reliable it is, and how private it is. This research
shows a promising, quick, accurate, and low-cost way to find thalassemia
carriers and opens the door for screening them on a large scale.
|
[
"cs.LG",
"cs.AI",
"cs.CL"
] | false |
2306.01855
|
2023-06-02T18:17:52Z
|
5IDER: Unified Query Rewriting for Steering, Intent Carryover,
Disfluencies, Entity Carryover and Repair
|
[
"Jiarui Lu",
"Bo-Hsiang Tseng",
"Joel Ruben Antony Moniz",
"Site Li",
"Xueyun Zhu",
"Hong Yu",
"Murat Akbacak"
] |
Providing voice assistants the ability to navigate multi-turn conversations
is a challenging problem. Handling multi-turn interactions requires the system
to understand various conversational use-cases, such as steering, intent
carryover, disfluencies, entity carryover, and repair. The complexity of this
problem is compounded by the fact that these use-cases mix with each other,
often appearing simultaneously in natural language. This work proposes a
non-autoregressive query rewriting architecture that can handle not only the
five aforementioned tasks, but also complex compositions of these use-cases. We
show that our proposed model has competitive single task performance compared
to the baseline approach, and even outperforms a fine-tuned T5 model in
use-case compositions, despite being 15 times smaller in parameters and 25
times faster in latency.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2306.01937
|
2023-06-02T22:39:14Z
|
LIC-GAN: Language Information Conditioned Graph Generative GAN Model
|
[
"Robert Lo",
"Arnhav Datar",
"Abishek Sridhar"
] |
Deep generative models for Natural Language data offer a new angle on the
problem of graph synthesis: by optimizing differentiable models that directly
generate graphs, it is possible to side-step expensive search procedures in the
discrete and vast space of possible graphs. We introduce LIC-GAN, an implicit,
likelihood-free generative model for small graphs that circumvents the need for
expensive graph matching procedures. Our method takes as input a natural
language query and, using a combination of language modelling and Generative
Adversarial Networks (GANs), returns a graph that closely matches the
description of the query. We combine our approach with a reward network to
further enhance the graph generation with desired properties. Our experiments
show that LIC-GAN does well on metrics such as PropMatch and Closeness, getting
scores of 0.36 and 0.48. We also show that LIC-GAN performs as well as ChatGPT,
which gets scores of 0.40 and 0.42. We also conduct a few experiments
to demonstrate the robustness of our method, while also highlighting a few
interesting caveats of the model.
|
[
"cs.LG",
"cs.AI",
"cs.CL"
] | false |
2306.01942
|
2023-06-02T22:56:01Z
|
Can Contextual Biasing Remain Effective with Whisper and GPT-2?
|
[
"Guangzhi Sun",
"Xianrui Zheng",
"Chao Zhang",
"Philip C. Woodland"
] |
End-to-end automatic speech recognition (ASR) and large language models, such
as Whisper and GPT-2, have recently been scaled to use vast amounts of training
data. Despite the large amount of training data, infrequent content words that
occur in a particular task may still exhibit poor ASR performance, with
contextual biasing a possible remedy. This paper investigates the effectiveness
of neural contextual biasing for Whisper combined with GPT-2. Specifically,
this paper proposes integrating an adapted tree-constrained pointer generator
(TCPGen) component for Whisper and a dedicated training scheme to dynamically
adjust the final output without modifying any Whisper model parameters.
Experiments across three datasets show a considerable reduction in errors on
biasing words with a biasing list of 1000 words. Contextual biasing was more
effective when applied to domain-specific data and can boost the performance of
Whisper and GPT-2 without losing their generality.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2306.01943
|
2023-06-02T23:02:09Z
|
NLPositionality: Characterizing Design Biases of Datasets and Models
|
[
"Sebastin Santy",
"Jenny T. Liang",
"Ronan Le Bras",
"Katharina Reinecke",
"Maarten Sap"
] |
Design biases in NLP systems, such as performance differences for different
populations, often stem from their creator's positionality, i.e., views and
lived experiences shaped by identity and background. Despite the prevalence and
risks of design biases, they are hard to quantify because researcher, system,
and dataset positionality is often unobserved. We introduce NLPositionality, a
framework for characterizing design biases and quantifying the positionality of
NLP datasets and models. Our framework continuously collects annotations from a
diverse pool of volunteer participants on LabintheWild, and statistically
quantifies alignment with dataset labels and model predictions. We apply
NLPositionality to existing datasets and models for two tasks -- social
acceptability and hate speech detection. To date, we have collected 16,299
annotations in over a year for 600 instances from 1,096 annotators across 87
countries. We find that datasets and models align predominantly with Western,
White, college-educated, and younger populations. Additionally, certain groups,
such as non-binary people and non-native English speakers, are further
marginalized by datasets and models as they rank least in alignment across all
tasks. Finally, we draw from prior literature to discuss how researchers can
examine their own positionality and that of their datasets and models, opening
the door for more inclusive NLP systems.
|
[
"cs.CL",
"cs.CY",
"cs.HC"
] | false |
2306.01944
|
2023-06-02T23:04:01Z
|
EdGCon: Auto-assigner of Iconicity Ratings Grounded by Lexical
Properties to Aid in Generation of Technical Gestures
|
[
"Sameena Hossain",
"Payal Kamboj",
"Aranyak Maity",
"Tamiko Azuma",
"Ayan Banerjee",
"Sandeep K. S. Gupta"
] |
Gestures that share similarities in their forms and are related in their
meanings should be easier for learners to recognize and incorporate into their
existing lexicon. In that regard, to be more readily accepted as standard by
the Deaf and Hard of Hearing community, technical gestures in American Sign
Language (ASL) should optimally share forms similar to those of their lexical
neighbors. We utilize a lexical database of ASL, ASL-LEX, to identify lexical
relations within a set of technical gestures. We use automated identification
for three unique sub-lexical properties in ASL: location, handshape, and
movement. EdGCon assigns an iconicity rating based on the lexical property similarities
of the new gesture with an existing set of technical gestures and the
relatedness of the meaning of the new technical word to that of the existing
set of technical words. We collected 30 ad hoc crowdsourced technical gestures
from different internet websites and tested them against 31 gestures from the
DeafTEC technical corpus. We found that EdGCon was able to correctly
auto-assign the iconicity ratings 80.76% of the time.
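A toy version of such a scorer is sketched below; the property names, weights,
and the max-over-corpus rule are assumptions for illustration, not EdGCon's
exact formula:

    def iconicity_rating(gesture, corpus, w_lex=0.5, w_sem=0.5):
        # Combine the best sub-lexical match against the existing technical
        # gestures with the semantic relatedness of the new technical word.
        props = ("location", "handshape", "movement")
        lex = max(sum(gesture[p] == g[p] for p in props) / len(props)
                  for g in corpus)
        return w_lex * lex + w_sem * gesture["relatedness"]

    corpus = [{"location": "chest", "handshape": "flat-B", "movement": "circular"}]
    new = {"location": "chest", "handshape": "flat-B", "movement": "straight",
           "relatedness": 0.8}
    print(iconicity_rating(new, corpus))  # 0.5 * (2/3) + 0.5 * 0.8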
|
[
"cs.HC",
"cs.AI",
"cs.CL"
] | false |
2306.03102
|
2023-06-02T06:28:21Z
|
ChatGPT is a Remarkable Tool -- For Experts
|
[
"Amos Azaria",
"Rina Azoulay",
"Shulamit Reches"
] |
This paper investigates the capabilities of ChatGPT as an automated assistant
in diverse domains, including scientific writing, mathematics, education,
programming, and healthcare. We explore the potential of ChatGPT to enhance
productivity, streamline problem-solving processes, and improve writing style.
Furthermore, we highlight the potential risks associated with excessive
reliance on ChatGPT in these fields. These limitations encompass factors like
incorrect and fictitious responses, inaccuracies in code, limited logical
reasoning abilities, overconfidence, and critical ethical concerns of
copyrights and privacy violation. We outline areas and objectives where ChatGPT
proves beneficial, applications where it should be used judiciously, and
scenarios where its reliability may be limited. In light of observed
limitations, and given that the tool's fundamental errors may pose a special
challenge for non-experts, ChatGPT should be used with a strategic methodology.
By drawing from comprehensive experimental studies, we offer methods and flow
charts for effectively using ChatGPT. Our recommendations emphasize iterative
interaction with ChatGPT and independent verification of its outputs.
Considering the importance of utilizing ChatGPT judiciously and with expertise,
we recommend its usage for experts who are well-versed in the respective
domains.
|
[
"cs.HC",
"cs.AI",
"cs.CL",
"cs.CY"
] | false |
2306.01240
|
2023-06-02T02:24:27Z
|
Federated Learning of Models Pre-Trained on Different Features with
Consensus Graphs
|
[
"Tengfei Ma",
"Trong Nghia Hoang",
"Jie Chen"
] |
Learning an effective global model on private and decentralized datasets has
become an increasingly important challenge of machine learning when applied in
practice. Existing distributed learning paradigms, such as Federated Learning,
enable this via model aggregation which enforces a strong form of modeling
homogeneity and synchronicity across clients. This is, however, not suitable for
many practical scenarios. For example, in distributed sensing, heterogeneous
sensors reading data from different views of the same phenomenon would need to
use different models for different data modalities. Local learning therefore
happens in isolation but inference requires merging the local models to achieve
consensus. To enable consensus among local models, we propose a feature fusion
approach that extracts local representations from local models and incorporates
them into a global representation that improves the prediction performance.
Achieving this requires addressing two non-trivial problems. First, we need to
learn an alignment between similar feature components which are arbitrarily
arranged across clients to enable representation aggregation. Second, we need
to learn a consensus graph that captures the high-order interactions between
local feature spaces and how to combine them to achieve a better prediction.
This paper presents solutions to these problems and demonstrates them in
real-world applications on time series data such as power grids and traffic
networks.
|
[
"cs.LG"
] | false |
2306.01244
|
2023-06-02T02:51:08Z
|
Towards Sustainable Learning: Coresets for Data-efficient Deep Learning
|
[
"Yu Yang",
"Hao Kang",
"Baharan Mirzasoleiman"
] |
To improve the efficiency and sustainability of learning deep models, we
propose CREST, the first scalable framework with rigorous theoretical
guarantees to identify the most valuable examples for training non-convex
models, particularly deep networks. To guarantee convergence to a stationary
point of a non-convex function, CREST models the non-convex loss as a series of
quadratic functions and extracts a coreset for each quadratic sub-region. In
addition, to ensure faster convergence of stochastic gradient methods such as
(mini-batch) SGD, CREST iteratively extracts multiple mini-batch coresets from
larger random subsets of training data, to ensure nearly-unbiased gradients
with small variances. Finally, to further improve scalability and efficiency,
CREST identifies and excludes the examples that are learned from the coreset
selection pipeline. Our extensive experiments on several deep networks trained
on vision and NLP datasets, including CIFAR-10, CIFAR-100, TinyImageNet, and
SNLI, confirm that CREST speeds up training deep networks on very large
datasets, by 1.7x to 2.5x with minimum loss in the performance. By analyzing
the learning difficulty of the subsets selected by CREST, we show that deep
models benefit the most by learning from subsets of increasing difficulty
levels.
|
[
"cs.LG"
] | false |
2306.01265
|
2023-06-02T04:29:57Z
|
Calibrating Multimodal Learning
|
[
"Huan Ma. Qingyang Zhang",
"Changqing Zhang",
"Bingzhe Wu",
"Huazhu Fu",
"Joey Tianyi Zhou",
"Qinghua Hu"
] |
Multimodal machine learning has achieved remarkable progress in a wide range
of scenarios. However, the reliability of multimodal learning remains largely
unexplored. In this paper, through extensive empirical studies, we identify
that current multimodal classification methods suffer from unreliable predictive
confidence, tending to rely on partial modalities when estimating confidence.
Specifically, we find that the confidence estimated by current models could
even increase when some modalities are corrupted. To address the issue, we
introduce an intuitive principle for multimodal learning, i.e., the confidence
should not increase when one modality is removed. Accordingly, we propose a
novel regularization technique, i.e., Calibrating Multimodal Learning (CML)
regularization, to calibrate the predictive confidence of previous methods.
This technique could be flexibly equipped by existing models and improve the
performance in terms of confidence calibration, classification accuracy, and
model robustness.
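The stated principle translates directly into a hinge-style penalty; a minimal
sketch follows (the exact form of the CML regularizer is not reproduced here):

    import torch

    def cml_penalty(conf_full, conf_partial):
        # Penalize any increase in predictive confidence after a modality is
        # removed; zero whenever the principle already holds.
        return torch.relu(conf_partial - conf_full).mean()

    conf_full = torch.tensor([0.90, 0.70])     # confidence with all modalities
    conf_partial = torch.tensor([0.95, 0.60])  # confidence with one modality dropped
    print(cml_penalty(conf_full, conf_partial))  # only the first sample is penalized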
|
[
"cs.LG"
] | false |
2306.01324
|
2023-06-02T07:48:18Z
|
Hyperparameters in Reinforcement Learning and How To Tune Them
|
[
"Theresa Eimer",
"Marius Lindauer",
"Roberta Raileanu"
] |
In order to improve reproducibility, deep reinforcement learning (RL) has
been adopting better scientific practices such as standardized evaluation
metrics and reporting. However, the process of hyperparameter optimization
still varies widely across papers, which makes it challenging to compare RL
algorithms fairly. In this paper, we show that hyperparameter choices in RL can
significantly affect the agent's final performance and sample efficiency, and
that the hyperparameter landscape can strongly depend on the tuning seed which
may lead to overfitting. We therefore propose adopting established best
practices from AutoML, such as the separation of tuning and testing seeds, as
well as principled hyperparameter optimization (HPO) across a broad search
space. We support this by comparing multiple state-of-the-art HPO tools on a
range of RL algorithms and environments to their hand-tuned counterparts,
demonstrating that HPO approaches often have higher performance and lower
compute overhead. As a result of our findings, we recommend a set of best
practices for the RL community, which should result in stronger empirical
results with fewer computational costs, better reproducibility, and thus faster
progress. In order to encourage the adoption of these practices, we provide
plug-and-play implementations of the tuning algorithms used in this paper at
https://github.com/facebookresearch/how-to-autorl.
|
[
"cs.LG"
] | false |
2306.01610
|
2023-06-02T15:19:08Z
|
Centered Self-Attention Layers
|
[
"Ameen Ali",
"Tomer Galanti",
"Lior Wolf"
] |
The self-attention mechanism in transformers and the message-passing
mechanism in graph neural networks are repeatedly applied within deep learning
architectures. We show that this application inevitably leads to oversmoothing,
i.e., to similar representations at the deeper layers for different tokens in
transformers and different nodes in graph neural networks. Based on our
analysis, we present a correction term to the aggregating operator of these
mechanisms. Empirically, this simple term eliminates much of the oversmoothing
problem in visual transformers, obtaining performance in weakly supervised
segmentation that surpasses elaborate baseline methods that introduce multiple
auxiliary networks and training phases. In graph neural networks, the
correction term enables the training of very deep architectures more
effectively than many recent solutions to the same problem.
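One plausible reading of such a correction, sketched below, is to subtract the
token-averaged output, removing the component shared by all tokens that
repeated aggregation otherwise amplifies (the paper's exact correction term
may differ):

    import torch
    import torch.nn.functional as F

    def centered_attention(q, k, v):
        # Standard scaled dot-product attention...
        att = F.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
        out = att @ v
        # ...followed by centering: remove the mean over tokens.
        return out - out.mean(dim=-2, keepdim=True)

    q = k = v = torch.randn(8, 16)  # 8 tokens, width 16
    print(centered_attention(q, k, v).mean(dim=0).abs().max())  # ~0 per dimension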
|
[
"cs.LG"
] | false |
2306.01618
|
2023-06-02T15:27:41Z
|
Analyzing Credit Risk Model Problems through NLP-Based Clustering and
Machine Learning: Insights from Validation Reports
|
[
"Szymon Lis",
"Mariusz Kubkowski",
"Olimpia Borkowska",
"Dobromił Serwa",
"Jarosław Kurpanik"
] |
This paper explores the use of clustering methods and machine learning
algorithms, including Natural Language Processing (NLP), to identify and
classify problems identified in credit risk models through textual information
contained in validation reports. We use a unique dataset of 657 findings raised
by validation teams in a large international banking group between January 2019
and December 2022. The findings are classified into nine validation dimensions
and assigned a severity level by validators using their expert knowledge. The
authors use embedding generation for the findings' titles and observations
using four different pre-trained models, including "module_url" from
TensorFlow Hub and three models from the SentenceTransformer library, namely
"all-mpnet-base-v2", "all-MiniLM-L6-v2", and "paraphrase-mpnet-base-v2". The
paper uses and compares various clustering methods in grouping findings with
similar characteristics, enabling the identification of common problems within
each validation dimension and severity. The results of the study show that
clustering is an effective approach for identifying and classifying credit risk
model problems with accuracy higher than 60\%. The authors also employ machine
learning algorithms, including logistic regression and XGBoost, to predict the
validation dimension and its severity, achieving an accuracy of 80\% for
XGBoost algorithm. Furthermore, the study identifies the top 10 words that
predict a validation dimension and severity. Overall, this paper makes a
contribution by demonstrating the usefulness of clustering and machine learning
for analyzing textual information in validation reports, and providing insights
into the types of problems encountered in the development and validation of
credit risk models.
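A minimal sketch of the embed-then-cluster step, using one of the encoders
named above (the finding titles are invented, and KMeans stands in for the
several clustering methods the paper compares):

    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    titles = ["PD model underestimates risk for low-default portfolios",
              "Missing documentation of data quality checks"]
    encoder = SentenceTransformer("all-mpnet-base-v2")
    embeddings = encoder.encode(titles)
    # With the full dataset one would use nine clusters, mirroring the nine
    # validation dimensions; two suffice for this toy input.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
    print(labels)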
|
[
"cs.LG"
] | false |
2306.01650
|
2023-06-02T16:19:16Z
|
Fair multilingual vandalism detection system for Wikipedia
|
[
"Mykola Trokhymovych",
"Muniza Aslam",
"Ai-Jou Chou",
"Ricardo Baeza-Yates",
"Diego Saez-Trumper"
] |
This paper presents a novel design of the system aimed at supporting the
Wikipedia community in addressing vandalism on the platform. To achieve this,
we collected a massive dataset of 47 languages, and applied advanced filtering
and feature engineering techniques, including multilingual masked language
modeling to build the training dataset from human-generated data. The
performance of the system was evaluated through comparison with the one used in
production in Wikipedia, known as ORES. Our research results in a significant
increase in the number of languages covered, making Wikipedia patrolling more
efficient for a wider range of communities. Furthermore, our model outperforms
ORES, ensuring that the results provided are not only more accurate but also
less biased against certain groups of contributors.
|
[
"cs.LG"
] | false |
2306.01658
|
2023-06-02T16:27:34Z
|
An Adaptive Method for Weak Supervision with Drifting Data
|
[
"Alessio Mazzetto",
"Reza Esfandiarpoor",
"Eli Upfal",
"Stephen H. Bach"
] |
We introduce an adaptive method with formal quality guarantees for weak
supervision in a non-stationary setting. Our goal is to infer the unknown
labels of a sequence of data by using weak supervision sources that provide
independent noisy signals of the correct classification for each data point.
This setting includes crowdsourcing and programmatic weak supervision. We focus
on the non-stationary case, where the accuracy of the weak supervision sources
can drift over time, e.g., because of changes in the underlying data
distribution. Due to the drift, older data could provide misleading information
to infer the label of the current data point. Previous work relied on a priori
assumptions on the magnitude of the drift to decide how much data to use from
the past. Comparatively, our algorithm does not require any assumptions on the
drift, and it adapts based on the input. In particular, at each step, our
algorithm guarantees an estimation of the current accuracies of the weak
supervision sources over a window of past observations that minimizes a
trade-off between the error due to the variance of the estimation and the error
due to the drift. Experiments on synthetic and real-world labelers show that
our approach indeed adapts to the drift. Unlike fixed-window-size strategies,
it dynamically chooses a window size that allows it to consistently maintain
good performance.
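A crude stand-in for the variance/drift trade-off is sketched below (the
doubling schedule and the tolerance are assumptions; the paper's algorithm
optimizes a principled bound):

    import numpy as np

    def adaptive_accuracy_estimate(agreements, delta=0.05):
        # agreements: 1/0 history of a weak source matching the consensus,
        # newest last. Grow the lookback window while its older half is
        # statistically consistent with its newer half; stop once apparent
        # drift exceeds the sampling error.
        n, w = len(agreements), 1
        while 2 * w <= n:
            new, old = agreements[-w:], agreements[-2 * w:-w]
            tol = np.sqrt(np.log(2 / delta) / (2 * w))
            if abs(np.mean(new) - np.mean(old)) > tol:
                break
            w *= 2
        return np.mean(agreements[-w:]), w

    history = np.array([1] * 50 + [0] * 10)  # accurate source that drifted
    print(adaptive_accuracy_estimate(history))  # uses only post-drift data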
|
[
"cs.LG"
] | false |
2306.01692
|
2023-06-02T17:07:12Z
|
Uniform Convergence of Deep Neural Networks with Lipschitz Continuous
Activation Functions and Variable Widths
|
[
"Yuesheng Xu",
"Haizhang Zhang"
] |
We consider deep neural networks with a Lipschitz continuous activation
function and with weight matrices of variable widths. We establish a uniform
convergence analysis framework in which sufficient conditions on weight
matrices and bias vectors together with the Lipschitz constant are provided to
ensure uniform convergence of the deep neural networks to a meaningful function
as the number of their layers tends to infinity. In the framework, special
results on uniform convergence of deep neural networks with a fixed width,
bounded widths and unbounded widths are presented. In particular, as
convolutional neural networks are special deep neural networks with weight
matrices of increasing widths, we put forward conditions on the mask sequence
which lead to uniform convergence of resulting convolutional neural networks.
The Lipschitz continuity assumption on the activation functions allows us to
include in our theory most of the activation functions commonly used in
applications.
|
[
"cs.LG"
] | false |
2306.01725
|
2023-06-02T17:51:56Z
|
Graph Sparsification for GCN Towards Optimal Crop Yield Predictions
|
[
"Saghar Bagheri",
"Gene Cheung",
"Tim Eadie"
] |
In agronomics, predicting crop yield at a per field/county granularity is
important for farmers to minimize uncertainty and plan seeding for the next
crop cycle. While state-of-the-art prediction techniques employ graph
convolutional nets (GCN) to predict future crop yields given relevant features
and crop yields of previous years, a dense underlying graph kernel requires
long training and execution time. In this paper, we propose a graph
sparsification method based on the Fiedler number to remove edges from a
complete graph kernel, in order to lower the complexity of GCN
training/execution. Specifically, we first show that greedily removing an edge
at a time that induces the minimal change in the second eigenvalue leads to a
sparse graph with good GCN performance. We then propose a fast method to choose
an edge for removal per iteration based on an eigenvalue perturbation theorem.
Experiments show that our Fiedler-based method produces a sparse graph with
good GCN performance compared to other graph sparsification schemes in crop
yield prediction.
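A brute-force sketch of the greedy criterion (recomputing the spectrum for
every candidate edge; the paper's contribution is precisely to avoid this via
an eigenvalue perturbation theorem):

    import numpy as np
    import networkx as nx

    def fiedler_value(G):
        # Second-smallest Laplacian eigenvalue (algebraic connectivity).
        L = nx.laplacian_matrix(G).toarray().astype(float)
        return np.linalg.eigvalsh(L)[1]

    def greedy_sparsify(G, n_remove):
        G = G.copy()
        for _ in range(n_remove):
            base = fiedler_value(G)
            # Drop the edge whose removal changes the Fiedler value the least.
            edge = min(G.edges(), key=lambda e: abs(
                base - fiedler_value(nx.restricted_view(G, [], [e]))))
            G.remove_edge(*edge)
        return G

    H = greedy_sparsify(nx.complete_graph(8), n_remove=10)
    print(H.number_of_edges())  # 28 -> 18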
|
[
"cs.LG"
] | false |
2306.01812
|
2023-06-02T07:10:45Z
|
SAPI: Surroundings-Aware Vehicle Trajectory Prediction at Intersections
|
[
"Ethan Zhang",
"Hao Xiao",
"Yiqian Gan",
"Lei Wang"
] |
In this work we propose SAPI, a deep learning model to predict vehicle
trajectories at intersections. SAPI uses an abstract way to represent and
encode surrounding environment by utilizing information from real-time map,
right-of-way, and surrounding traffic. The proposed model consists of two
convolutional network (CNN) and recurrent neural network (RNN)-based encoders
and one decoder. A refiner is proposed to conduct a look-back operation inside
the model, in order to make full use of raw history trajectory information. We
evaluate SAPI on a proprietary dataset collected in real-world intersections
through autonomous vehicles. It is demonstrated that SAPI shows promising
performance when predicting vehicle trajectories at intersections, and
outperforms benchmark methods. The average displacement error (ADE) and final
displacement error (FDE) for 6-second prediction are 1.84 m and 4.32 m,
respectively. We also show that the proposed model can accurately predict
vehicle trajectories in different scenarios.
|
[
"cs.LG"
] | false |
2306.01820
|
2023-06-02T12:36:05Z
|
Concurrent Classifier Error Detection (CCED) in Large Scale Machine
Learning Systems
|
[
"Pedro Reviriego",
"Ziheng Wang",
"Alvaro Alonso",
"Zhen Gao",
"Farzad Niknia",
"Shanshan Liu",
"Fabrizio Lombardi"
] |
The complexity of Machine Learning (ML) systems increases each year, with
current implementations of large language models or text-to-image generators
having billions of parameters and requiring billions of arithmetic operations.
As these systems are widely utilized, ensuring their reliable operation is
becoming a design requirement. Traditional error detection mechanisms introduce
circuit or time redundancy that significantly impacts system performance. An
alternative is the use of Concurrent Error Detection (CED) schemes that operate
in parallel with the system and exploit their properties to detect errors. CED
is attractive for large ML systems because it can potentially reduce the cost
of error detection. In this paper, we introduce Concurrent Classifier Error
Detection (CCED), a scheme to implement CED in ML systems using a concurrent ML
classifier to detect errors. CCED identifies a set of check signals in the main
ML system and feeds them to the concurrent ML classifier that is trained to
detect errors. The proposed CCED scheme has been implemented and evaluated on
two widely used large-scale ML models: Contrastive Language Image Pretraining
(CLIP) used for image classification and Bidirectional Encoder Representations
from Transformers (BERT) used for natural language applications. The results
show that more than 95 percent of the errors are detected when using a simple
Random Forest classifier that is orders of magnitude simpler than CLIP or BERT.
These results illustrate the potential of CCED to implement error detection in
large-scale ML models.
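A minimal sketch of the detection idea (the choice of check signals here,
softmax confidence summaries, and the synthetic error data are assumptions
for illustration):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def check_signals(logits):
        # Confidence-style summaries of the main model's outputs, which tend
        # to shift when an internal error corrupts the computation.
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        top2 = np.sort(p, axis=1)[:, -2:]
        entropy = -(p * np.log(p + 1e-9)).sum(axis=1)
        return np.column_stack([top2[:, 1], top2[:, 1] - top2[:, 0], entropy])

    rng = np.random.default_rng(0)
    clean = check_signals(rng.normal(0, 3, size=(500, 10)))   # error-free runs
    faulty = check_signals(rng.normal(0, 1, size=(500, 10)))  # injected errors
    X, y = np.vstack([clean, faulty]), np.repeat([0, 1], 500)
    detector = RandomForestClassifier(n_estimators=100).fit(X, y)
    print(detector.score(X, y))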
|
[
"cs.LG"
] | false |
2306.01922
|
2023-06-02T21:24:13Z
|
Agnostic Multi-Group Active Learning
|
[
"Nick Rittler",
"Kamalika Chaudhuri"
] |
Inspired by the problem of improving classification accuracy on rare or hard
subsets of a population, there has been recent interest in models of learning
where the goal is to generalize to a collection of distributions, each
representing a ``group''. We consider a variant of this problem from the
perspective of active learning, where the learner is endowed with the power to
decide which examples are labeled from each distribution in the collection, and
the goal is to minimize the number of label queries while maintaining
PAC-learning guarantees. Our main challenge is that standard active learning
techniques such as disagreement-based active learning do not directly apply to
the multi-group learning objective. We modify existing algorithms to provide a
consistent active learning algorithm for an agnostic formulation of multi-group
learning, which given a collection of $G$ distributions and a hypothesis class
$\mathcal{H}$ with VC-dimension $d$, outputs an $\epsilon$-optimal hypothesis
using $\tilde{O}\left( (\nu^2/\epsilon^2+1) G d \theta_{\mathcal{G}}^2
\log^2(1/\epsilon) + G\log(1/\epsilon)/\epsilon^2 \right)$ label queries, where
$\theta_{\mathcal{G}}$ is the worst-case disagreement coefficient over the
collection. Roughly speaking, this guarantee improves upon the label complexity
of standard multi-group learning in regimes where disagreement-based active
learning algorithms may be expected to succeed, and the number of groups is not
too large. We also consider the special case where each distribution in the
collection is individually realizable with respect to $\mathcal{H}$, and
demonstrate $\tilde{O}\left( G d \theta_{\mathcal{G}} \log(1/\epsilon) \right)$
label queries are sufficient for learning in this case. We further give an
approximation result for the full agnostic case inspired by the group
realizable strategy.
|
[
"cs.LG"
] | false |
2306.01214
|
2023-06-02T00:33:15Z
|
An Augmented Lagrangian Approach to Conically Constrained Non-monotone
Variational Inequality Problems
|
[
"Lei Zhao",
"Daoli Zhu",
"Shuzhong Zhang"
] |
In this paper we consider a non-monotone (mixed) variational inequality model
with (nonlinear) convex conic constraints. Through developing an equivalent
Lagrangian function-like primal-dual saddle-point system for the VI model in
question, we introduce an augmented Lagrangian primal-dual method, to be called
ALAVI in the current paper, for solving a general constrained VI model. Under
an assumption, to be called the primal-dual variational coherence condition in
the paper, we prove the convergence of ALAVI. Next, we show that many existing
generalized monotonicity properties are sufficient -- though by no means
necessary -- to imply the above-mentioned coherence condition, and thus are
sufficient to ensure convergence of ALAVI. Under that assumption, we further
show that ALAVI has in fact an $o(1/\sqrt{k})$ global rate of convergence where
$k$ is the iteration count. By introducing a new gap function, this rate
further improves to be $O(1/k)$ if the mapping is monotone. Finally, we show
that under a metric subregularity condition, even if the VI model may be
non-monotone the local convergence rate of ALAVI improves to be linear.
Numerical experiments on some randomly generated highly nonlinear and
non-monotone VI problems show practical efficacy of the newly proposed method.
|
[
"math.OC",
"cs.LG"
] | false |
2306.01249
|
2023-06-02T03:23:16Z
|
Transforming ECG Diagnosis: An In-depth Review of Transformer-based
Deep Learning Models in Cardiovascular Disease Detection
|
[
"Zibin Zhao"
] |
The emergence of deep learning has significantly enhanced the analysis of
electrocardiograms (ECGs), a non-invasive method that is essential for
assessing heart health. Despite the complexity of ECG interpretation, advanced
deep learning models outperform traditional methods. However, the increasing
complexity of ECG data and the need for real-time and accurate diagnosis
necessitate exploring more robust architectures, such as transformers. Here, we
present an in-depth review of transformer architectures that are applied to ECG
classification. Originally developed for natural language processing, these
models capture complex temporal relationships in ECG signals that other models
might overlook. We conducted an extensive search of the latest
transformer-based models and summarize them to discuss the advances and
challenges in their application and suggest potential future improvements. This
review serves as a valuable resource for researchers and practitioners and aims
to shed light on this innovative application in ECG interpretation.
|
[
"cs.LG",
"eess.SP"
] | false |
2306.01253
|
2023-06-02T03:32:44Z
|
Mixture Proportion Estimation Beyond Irreducibility
|
[
"Yilun Zhu",
"Aaron Fjeldsted",
"Darren Holland",
"George Landon",
"Azaree Lintereur",
"Clayton Scott"
] |
The task of mixture proportion estimation (MPE) is to estimate the weight of
a component distribution in a mixture, given observations from both the
component and mixture. Previous work on MPE adopts the irreducibility
assumption, which ensures identifiability of the mixture proportion. In this
paper, we propose a more general sufficient condition that accommodates several
settings of interest where irreducibility does not hold. We further present a
resampling-based meta-algorithm that takes any existing MPE algorithm designed
to work under irreducibility and adapts it to work under our more general
condition. Our approach empirically exhibits improved estimation performance
relative to baseline methods and to a recently proposed regrouping-based
algorithm.
|
[
"stat.ML",
"cs.LG"
] | false |
2306.01277
|
2023-06-02T05:40:11Z
|
Beyond Active Learning: Leveraging the Full Potential of Human
Interaction via Auto-Labeling, Human Correction, and Human Verification
|
[
"Nathan Beck",
"Krishnateja Killamsetty",
"Suraj Kothawade",
"Rishabh Iyer"
] |
Active Learning (AL) is a human-in-the-loop framework to interactively and
adaptively label data instances, thereby enabling significant gains in model
performance compared to random sampling. AL approaches function by selecting
the hardest instances to label, often relying on notions of diversity and
uncertainty. However, we believe that these current paradigms of AL do not
leverage the full potential of human interaction granted by automated label
suggestions. Indeed, we show that for many classification tasks and datasets,
verifying whether an automatically suggested label is correct takes most people
$3\times$ to $4\times$ less time than changing an incorrect suggestion
to the correct label (or labeling from scratch without any suggestion).
Utilizing this result, we propose CLARIFIER (aCtive LeARnIng From tIEred
haRdness), an Interactive Learning framework that admits more effective use of
human interaction by leveraging the reduced cost of verification. By targeting
the hard (uncertain) instances with existing AL methods, the intermediate
instances with a novel label suggestion scheme using submodular mutual
information functions on a per-class basis, and the easy (confident) instances
with highest-confidence auto-labeling, CLARIFIER can improve over the
performance of existing AL approaches on multiple datasets -- particularly on
those that have a large number of classes -- by almost 1.5$\times$ to 2$\times$
in terms of relative labeling cost.
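The tiering itself reduces to routing by confidence; a toy sketch follows
(thresholds are assumptions, and the hard/intermediate tiers would be handled
by the AL method and the submodular suggestion scheme described above):

    import numpy as np

    def route_instances(probs, low=0.6, high=0.9):
        # Easy instances are auto-labeled, hard ones go to an active-learning
        # query, and the middle tier receives suggestions for cheap human
        # verification or correction.
        conf = probs.max(axis=1)
        easy, hard = conf >= high, conf < low
        return easy, ~(easy | hard), hard

    probs = np.array([[0.97, 0.03], [0.75, 0.25], [0.51, 0.49]])
    easy, verify, hard = route_instances(probs)
    print(easy, verify, hard)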
|
[
"cs.LG",
"cs.HC"
] | false |
2306.01282
|
2023-06-02T05:50:57Z
|
Recent Advances in Graph-based Machine Learning for Applications in
Smart Urban Transportation Systems
|
[
"Hongde Wu",
"Sen Yan",
"Mingming Liu"
] |
The Intelligent Transportation System (ITS) is an important part of modern
transportation infrastructure, employing a combination of communication
technology, information processing and control systems to manage transportation
networks. This integration of various components such as roads, vehicles, and
communication systems, is expected to improve efficiency and safety by
providing better information, services, and coordination of transportation
modes. In recent years, graph-based machine learning has become an increasingly
important research focus in the field of ITS aiming at the development of
complex, data-driven solutions to address various ITS-related challenges. This
chapter presents background information on the key technical challenges for ITS
design, along with a review of research methods ranging from classic
statistical approaches to modern machine learning and deep learning-based
approaches. Specifically, we provide an in-depth review of graph-based machine
learning methods, including basic concepts of graphs, graph data
representation, graph neural network architectures and their relation to ITS
applications. Additionally, two case studies of graph-based ITS applications
proposed in our recent work are presented in detail to demonstrate the
potential of graph-based machine learning in the ITS domain.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.01306
|
2023-06-02T07:12:04Z
|
Federated Learning Games for Reconfigurable Intelligent Surfaces via
Causal Representations
|
[
"Charbel Bou Chaaya",
"Sumudu Samarakoon",
"Mehdi Bennis"
] |
In this paper, we investigate the problem of robust Reconfigurable
Intelligent Surface (RIS) phase-shifts configuration over heterogeneous
communication environments. The problem is formulated as a distributed learning
problem over different environments in a Federated Learning (FL) setting.
Equivalently, this corresponds to a game played between multiple RISs, as
learning agents, in heterogeneous environments. Using Invariant Risk
Minimization (IRM) and its FL equivalent, dubbed FL Games, we solve the RIS
configuration problem by learning invariant causal representations across
multiple environments and then predicting the phases. The solution corresponds
to playing according to Best Response Dynamics (BRD) which yields the Nash
Equilibrium of the FL game. The representation learner and the phase predictor
are modeled by two neural networks, and their performance is validated via
simulations against other benchmarks from the literature. Our results show that
causality-based learning yields a predictor that is 15% more accurate in unseen
Out-of-Distribution (OoD) environments.
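For reference, the IRMv1 penalty this line of work builds on can be written in
a few lines (a sketch of the standard ingredient, not of the FL Games protocol
itself):

    import torch
    import torch.nn.functional as F

    def irm_penalty(logits, targets):
        # Squared gradient of the per-environment risk w.r.t. a dummy
        # classifier scale; it vanishes when the shared representation is
        # simultaneously optimal in every environment.
        scale = torch.ones(1, requires_grad=True)
        loss = F.cross_entropy(logits * scale, targets)
        grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
        return (grad ** 2).sum()

    logits = torch.randn(16, 4)
    targets = torch.randint(0, 4, (16,))
    print(irm_penalty(logits, targets))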
|
[
"cs.LG",
"eess.SP"
] | false |
2306.01310
|
2023-06-02T07:19:07Z
|
EPIC: Graph Augmentation with Edit Path Interpolation via Learnable Cost
|
[
"Jaeseung Heo",
"Seungbeom Lee",
"Sungsoo Ahn",
"Dongwoo Kim"
] |
Graph-based models have become increasingly important in various domains, but
the limited size and diversity of existing graph datasets often limit their
performance. To address this issue, we propose EPIC (Edit Path Interpolation
via learnable Cost), a novel interpolation-based method for augmenting graph
datasets. Our approach leverages graph edit distance to generate new graphs
that are similar to the original ones but exhibit some variation in their
structures. To achieve this, we learn the graph edit distance through a
comparison of labeled graphs and utilize this knowledge to create graph edit
paths between pairs of original graphs. With randomly sampled graphs from a
graph edit path, we enrich the training set to enhance the generalization
capability of classification models. We demonstrate the effectiveness of our
approach on several benchmark datasets and show that it outperforms existing
augmentation methods in graph classification tasks.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.01339
|
2023-06-02T08:07:14Z
|
Resource-Efficient Federated Hyperdimensional Computing
|
[
"Nikita Zeulin",
"Olga Galinina",
"Nageen Himayat",
"Sergey Andreev"
] |
In conventional federated hyperdimensional computing (HDC), training larger
models usually results in higher predictive performance but also requires more
computational, communication, and energy resources. If the system resources are
limited, one may have to sacrifice the predictive performance by reducing the
size of the HDC model. The proposed resource-efficient federated
hyperdimensional computing (RE-FHDC) framework alleviates such constraints by
training multiple smaller independent HDC sub-models and refining the
concatenated HDC model using the proposed dropout-inspired procedure. Our
numerical comparison demonstrates that the proposed framework achieves a
comparable or higher predictive performance while consuming less computational
and wireless resources than the baseline federated HDC implementation.
|
[
"cs.LG",
"cs.DC"
] | false |
2306.01342
|
2023-06-02T08:11:32Z
|
Covert Communication Based on the Poisoning Attack in Federated Learning
|
[
"Junchuan Liang",
"Rong Wang"
] |
Covert communication has become an important area of research in computer
security. It involves hiding specific information on a carrier for message
transmission and is often used to transmit private data, military secrets, and
even malware. In deep learning, many methods have been developed for hiding
information in models to achieve covert communication. However, these methods
are not applicable to federated learning, where model aggregation invalidates
the exact information embedded in the model by the client. To address this
problem, we propose a novel method for covert communication in federated
learning based on the poisoning attack. Our approach achieves 100% accuracy in
covert message transmission between two clients and is shown to be both
stealthy and robust through extensive experiments. However, existing defense
methods are limited in their effectiveness against our attack scheme,
highlighting the urgent need for new protection methods to be developed. Our
study emphasizes the necessity of research in covert communication and serves
as a foundation for future research in federated learning attacks and defenses.
|
[
"cs.LG",
"cs.CR"
] | false |
2306.01391
|
2023-06-02T09:37:03Z
|
Chemical Property-Guided Neural Networks for Naphtha Composition
Prediction
|
[
"Chonghyo Joo",
"Jeongdong Kim",
"Hyungtae Cho",
"Jaewon Lee",
"Sungho Suh",
"Junghwan Kim"
] |
The naphtha cracking process heavily relies on the composition of naphtha,
which is a complex blend of different hydrocarbons. Predicting the naphtha
composition accurately is crucial for efficiently controlling the cracking
process and achieving maximum performance. Traditional methods, such as gas
chromatography and true boiling curve, are not feasible due to the need for
pilot-plant-scale experiments or cost constraints. In this paper, we propose a
neural network framework that utilizes chemical property information to improve
the performance of naphtha composition prediction. Our proposed framework
comprises two parts: a Watson K factor estimation network and a naphtha
composition prediction network. Both networks share a feature extraction
network based on Convolutional Neural Network (CNN) architecture, while the
output layers use Multi-Layer Perceptron (MLP) based networks to generate two
different outputs - Watson K factor and naphtha composition. The naphtha
composition is expressed in percentages, and its sum should be 100%. To enhance
the naphtha composition prediction, we utilize a distillation simulator to
obtain the distillation curve from the naphtha composition, which is dependent
on its chemical properties. By designing a loss function between the estimated
and simulated Watson K factors, we improve the performance of both Watson K
estimation and naphtha composition prediction. The experimental results show
that our proposed framework can predict the naphtha composition accurately
while reflecting real naphtha chemical properties.
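A minimal sketch of the two-head design (layer sizes are assumptions; the
softmax scaling enforces the 100% constraint on the composition):

    import torch
    import torch.nn as nn

    class NaphthaNet(nn.Module):
        # Shared CNN feature extractor with two MLP heads: one for the
        # Watson K factor, one for the composition percentages.
        def __init__(self, n_components=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(8), nn.Flatten())
            self.k_head = nn.Sequential(
                nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 1))
            self.comp_head = nn.Sequential(
                nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, n_components))

        def forward(self, x):
            h = self.features(x)
            composition = torch.softmax(self.comp_head(h), dim=-1) * 100.0
            return self.k_head(h), composition

    k_factor, comp = NaphthaNet()(torch.randn(2, 1, 64))
    print(comp.sum(dim=-1))  # each row sums to 100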
|
[
"cs.LG",
"cs.CE"
] | false |
2306.01400
|
2023-06-02T09:46:54Z
|
Adaptive Attractors: A Defense Strategy against ML Adversarial Collusion
Attacks
|
[
"Jiyi Zhang",
"Han Fang",
"Ee-Chien Chang"
] |
In the seller-buyer setting on machine learning models, the seller generates
different copies based on the original model and distributes them to different
buyers, such that adversarial samples generated on one buyer's copy would
likely not work on other copies. A known approach achieves this using
attractor-based rewriter which injects different attractors to different
copies. This induces different adversarial regions in different copies, making
adversarial samples generated on one copy not replicable on others. In this
paper, we focus on a scenario where multiple malicious buyers collude to
attack. We first give two formulations and conduct empirical studies to analyze
effectiveness of collusion attack under different assumptions on the attacker's
capabilities and properties of the attractors. We observe that existing
attractor-based methods do not effectively mislead the colluders in the sense
that the adversarial samples found are influenced more by the original model
than by the attractors as the number of colluders increases. Based on this
observation, we propose using adaptive attractors whose weight is guided by a
U-shape curve to cover the shortfalls. Experimental results show that when
using our approach, the attack success rate of a collusion attack converges to
around 15% even when many copies are used for collusion. In contrast,
when using the existing attractor-based rewriter with fixed weight, the attack
success rate increases linearly with the number of copies used for collusion.
|
[
"cs.LG",
"cs.CR"
] | false |
2306.01417
|
2023-06-02T10:07:12Z
|
The Flawed Foundations of Fair Machine Learning
|
[
"Robert Lee Poe",
"Soumia Zohra El Mestari"
] |
The definition and implementation of fairness in automated decisions has been
extensively studied by the research community. Yet, there hides fallacious
reasoning, misleading assertions, and questionable practices at the foundations
of the current fair machine learning paradigm. Those flaws are the result of a
failure to understand that the trade-off between statistically accurate
outcomes and group similar outcomes exists as an independent, external
constraint rather than as a subjective manifestation, as has been commonly
argued. First,
we explain that there is only one conception of fairness present in the fair
machine learning literature: group similarity of outcomes based on a sensitive
attribute where the similarity benefits an underprivileged group. Second, we
show that there is, in fact, a trade-off between statistically accurate
outcomes and group similar outcomes in any data setting where group disparities
exist, and that the trade-off presents an existential threat to the equitable,
fair machine learning approach. Third, we introduce a proof-of-concept
evaluation to aid researchers and designers in understanding the relationship
between statistically accurate outcomes and group similar outcomes. Finally,
suggestions for future work aimed at data scientists, legal scholars, and data
ethicists that utilize the conceptual and experimental framework described
throughout this article are provided.
|
[
"cs.CY",
"cs.LG"
] | false |
2306.01429
|
2023-06-02T10:40:30Z
|
A Closer Look at the Adversarial Robustness of Deep Equilibrium Models
|
[
"Zonghan Yang",
"Tianyu Pang",
"Yang Liu"
] |
Deep equilibrium models (DEQs) refrain from the traditional layer-stacking
paradigm and turn to find the fixed point of a single layer. DEQs have achieved
promising performance on different applications with featured memory
efficiency. At the same time, the adversarial vulnerability of DEQs raises
concerns. Several works propose to certify robustness for monotone DEQs.
However, limited efforts are devoted to studying empirical robustness for
general DEQs. To this end, we observe that an adversarially trained DEQ
requires more forward steps to arrive at the equilibrium state, or even
violates its fixed-point structure. Besides, the forward and backward tracks of
DEQs are misaligned due to the black-box solvers. These facts cause gradient
obfuscation when applying the ready-made attacks to evaluate or adversarially
train DEQs. Given this, we develop approaches to estimate the intermediate
gradients of DEQs and integrate them into the attacking pipelines. Our
approaches facilitate fully white-box evaluations and lead to effective
adversarial defense for DEQs. Extensive experiments on CIFAR-10 validate the
adversarial robustness of DEQs competitive with deep networks of similar sizes.
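A sketch of why intermediate gradients are needed, using the common cheap
"one-step" gradient through the fixed point (the paper's estimators are more
careful; the toy map below is an assumption):

    import torch

    def deq_forward(f, x, z0, iters=30):
        # Reach z* = f(z*, x) by fixed-point iteration without tracking
        # gradients, then take one attached step so gradients can flow.
        z = z0
        with torch.no_grad():
            for _ in range(iters):
                z = f(z, x)
        return f(z.detach(), x)

    f = lambda z, x: 0.5 * torch.tanh(z) + x  # a contraction, so z* exists
    x = torch.randn(4, requires_grad=True)
    z = deq_forward(f, x, torch.zeros(4))
    z.sum().backward()
    print(x.grad)  # nonzero: usable in white-box attack pipelines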
|
[
"cs.LG",
"stat.ML"
] | false |
2306.01432
|
2023-06-02T10:43:42Z
|
Audio-Visual Speech Enhancement with Score-Based Generative Models
|
[
"Julius Richter",
"Simone Frintrop",
"Timo Gerkmann"
] |
This paper introduces an audio-visual speech enhancement system that
leverages score-based generative models, also known as diffusion models,
conditioned on visual information. In particular, we exploit audio-visual
embeddings obtained from a self-supervised learning model that has been
fine-tuned on lipreading. The layer-wise features of its transformer-based
encoder are aggregated, time-aligned, and incorporated into the noise
conditional score network. Experimental evaluations show that the proposed
audio-visual speech enhancement system yields improved speech quality and
reduces generative artifacts such as phonetic confusions with respect to the
audio-only equivalent. The latter is supported by the word error rate of a
downstream automatic speech recognition model, which decreases noticeably,
especially at low input signal-to-noise ratios.
|
[
"eess.AS",
"cs.LG"
] | false |
2306.01435
|
2023-06-02T10:49:35Z
|
Improving Adversarial Robustness of DEQs with Explicit Regulations Along
the Neural Dynamics
|
[
"Zonghan Yang",
"Peng Li",
"Tianyu Pang",
"Yang Liu"
] |
Deep equilibrium (DEQ) models replace the multiple-layer stacking of
conventional deep networks with a fixed-point iteration of a single-layer
transformation. Having been demonstrated to be competitive in a variety of
real-world scenarios, the adversarial robustness of general DEQs becomes
increasingly crucial for their reliable deployment. Existing works improve the
robustness of general DEQ models with the widely-used adversarial training (AT)
framework, but they fail to exploit the structural uniquenesses of DEQ models.
To this end, we interpret DEQs through the lens of neural dynamics and find
that AT under-regulates intermediate states. Besides, the intermediate states
typically provide predictions with a high prediction entropy. Informed by the
correlation between the entropy of dynamical systems and their stability
properties, we propose reducing prediction entropy by progressively updating
inputs along the neural dynamics. During AT, we also utilize random
intermediate states to compute the loss function. Our methods regulate the
neural dynamics of DEQ models in this manner. Extensive experiments demonstrate
that our methods substantially increase the robustness of DEQ models and even
outperform the strong deep network baselines.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.01436
|
2023-06-02T10:54:24Z
|
Multi-Objective Population Based Training
|
[
"Arkadiy Dushatskiy",
"Alexander Chebykin",
"Tanja Alderliesten",
"Peter A. N. Bosman"
] |
Population Based Training (PBT) is an efficient hyperparameter optimization
algorithm. PBT is a single-objective algorithm, but many real-world
hyperparameter optimization problems involve two or more conflicting
objectives. In this work, we therefore introduce a multi-objective version of
PBT, MO-PBT. Our experiments on diverse multi-objective hyperparameter
optimization problems (Precision/Recall, Accuracy/Fairness,
Accuracy/Adversarial Robustness) show that MO-PBT outperforms random search,
single-objective PBT, and the state-of-the-art multi-objective hyperparameter
optimization algorithm MO-ASHA.
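At the heart of any multi-objective exploit step is Pareto dominance; a
minimal sketch follows (how MO-PBT uses the front is not reproduced here):

    def dominates(a, b):
        # a dominates b if it is at least as good in every objective
        # (all maximized) and strictly better in at least one.
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    def pareto_front(scores):
        # Non-dominated members of the population.
        return [s for s in scores
                if not any(dominates(t, s) for t in scores if t != s)]

    population = [(0.9, 0.2), (0.7, 0.7), (0.6, 0.6), (0.3, 0.9)]
    print(pareto_front(population))  # (0.6, 0.6) is dominated by (0.7, 0.7)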
|
[
"cs.LG",
"cs.NE"
] | false |
2306.01469
|
2023-06-02T11:48:28Z
|
GANs and alternative methods of synthetic noise generation for domain
adaption of defect classification of Non-destructive ultrasonic testing
|
[
"Shaun McKnight",
"S. Gareth Pierce",
"Ehsan Mohseni",
"Christopher MacKinnon",
"Charles MacLeod",
"Tom OHare",
"Charalampos Loukas"
] |
This work provides a solution to the challenge of small amounts of training
data in Non-Destructive Ultrasonic Testing for composite components. It was
demonstrated that direct simulation alone is ineffective at producing training
data that was representative of the experimental domain due to poor noise
reconstruction. Therefore, four unique synthetic data generation methods were
proposed which use semi-analytical simulated data as a foundation. Each method
was evaluated on its classification performance of real experimental images
when trained on a Convolutional Neural Network which underwent hyperparameter
optimization using a genetic algorithm. The first method introduced task
specific modifications to CycleGAN, to learn the mapping from physics-based
simulations of defect indications to experimental indications in resulting
ultrasound images. The second method was based on combining real experimental
defect free images with simulated defect responses. The final two methods fully
simulated the noise responses at an image and signal level respectively. The
purely simulated data produced a mean classification F1 score of 0.394.
However, when trained on the new synthetic datasets, a significant improvement
in classification performance on experimental data was realized, with mean
classification F1 scores of 0.843, 0.688, 0.629, and 0.738 for the respective
approaches.
|
[
"eess.IV",
"cs.LG"
] | false |
2306.01470
|
2023-06-02T11:51:24Z
|
MLP-Mixer as a Wide and Sparse MLP
|
[
"Tomohiro Hayase",
"Ryo Karakida"
] |
Multi-layer perceptron (MLP) is a fundamental component of deep learning that
has been extensively employed for various problems. However, recent empirical
successes in MLP-based architectures, particularly the progress of the
MLP-Mixer, have revealed that there is still hidden potential in improving MLPs
to achieve better performance. In this study, we reveal that the MLP-Mixer
works effectively as a wide MLP with certain sparse weights. Initially, we
clarify that the mixing layer of the Mixer has an effective expression as a
wider MLP whose weights are sparse and represented by the Kronecker product.
This expression naturally defines a permuted-Kronecker (PK) family, which can
be regarded as a general class of mixing layers and is also regarded as an
approximation of Monarch matrices. Subsequently, because the PK family
effectively constitutes a wide MLP with sparse weights, one can apply the
hypothesis proposed by Golubeva, Neyshabur and Gur-Ari (2021) that the
prediction performance improves as the width (sparsity) increases when the
number of weights is fixed. We empirically verify this hypothesis by maximizing
the effective width of the MLP-Mixer, which enables us to determine the
appropriate size of the mixing layers quantitatively.
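The basic identity is easy to verify numerically: token mixing on the
flattened input is exactly a Kronecker-structured sparse matrix acting as one
wide layer (the PK family additionally involves permutations, which this
sketch omits):

    import numpy as np

    S, C = 4, 3                    # tokens, channels
    X = np.random.randn(S, C)
    W = np.random.randn(S, S)      # token-mixing weights
    wide = np.kron(W, np.eye(C))   # (S*C) x (S*C) matrix, mostly zeros
    assert np.allclose((W @ X).flatten(), wide @ X.flatten())
    print("fraction of nonzero weights:", (wide != 0).mean())  # = 1/C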
|
[
"cs.LG",
"stat.ML"
] | false |
2306.01475
|
2023-06-02T12:00:03Z
|
Prompt Tuning Large Language Models on Personalized Aspect Extraction
for Recommendations
|
[
"Pan Li",
"Yuyan Wang",
"Ed H. Chi",
"Minmin Chen"
] |
Existing aspect extraction methods mostly rely on explicit or ground truth
aspect information, or using data mining or machine learning approaches to
extract aspects from implicit user feedback such as user reviews. It however
remains under-explored how the extracted aspects can help generate more
meaningful recommendations to the users. Meanwhile, existing research on
aspect-based recommendations often relies on separate aspect extraction models
or assumes the aspects are given, without accounting for the fact the optimal
set of aspects could be dependent on the recommendation task at hand.
  In this work, we propose to combine aspect extraction with
aspect-based recommendations in an end-to-end manner, achieving the two goals
together in a single framework. For the aspect extraction component, we
leverage the recent advances in large language models and design a new prompt
learning mechanism to generate aspects for the end recommendation task. For the
aspect-based recommendation component, the extracted aspects are concatenated
with the usual user and item features used by the recommendation model. The
recommendation task mediates the learning of the user embeddings and item
embeddings, which are used as soft prompts to generate aspects. Therefore, the
extracted aspects are personalized and contextualized by the recommendation
task. We showcase the effectiveness of our proposed method through extensive
experiments on three industrial datasets, where our proposed framework
significantly outperforms state-of-the-art baselines in both the personalized
aspect extraction and aspect-based recommendation tasks. In particular, we
demonstrate that it is necessary and beneficial to combine the learning of
aspect extraction and aspect-based recommendation together. We also conduct
extensive ablation studies to understand the contribution of each design
component in our framework.
|
[
"cs.IR",
"cs.LG"
] | false |
2306.01476
|
2023-06-02T12:02:23Z
|
Hierarchical Reinforcement Learning for Modeling User Novelty-Seeking
Intent in Recommender Systems
|
[
"Pan Li",
"Yuyan Wang",
"Ed H. Chi",
"Minmin Chen"
] |
Recommending novel content, which expands user horizons by introducing them
to new interests, has been shown to improve users' long-term experience on
recommendation platforms \cite{chen2021values}. Users however are not
constantly looking to explore novel content. It is therefore crucial to
understand their novelty-seeking intent and adjust the recommendation policy
accordingly. Most existing literature models a user's propensity to choose
novel content or to prefer a more diverse set of recommendations at individual
interactions. Hierarchical structure, on the other hand, exists in a user's
novelty-seeking intent, which is manifested as a static and intrinsic user
preference for seeking novelty along with a dynamic session-based propensity.
To this end, we propose a novel hierarchical reinforcement learning-based
method to model the hierarchical user novelty-seeking intent, and to adapt the
recommendation policy accordingly based on the extracted user novelty-seeking
propensity. We further incorporate diversity and novelty-related measurement in
the reward function of the hierarchical RL (HRL) agent to encourage user
exploration \cite{chen2021values}. We demonstrate the benefits of explicitly
modeling hierarchical user novelty-seeking intent in recommendations through
extensive experiments on simulated and real-world datasets. In particular, we
demonstrate that the effectiveness of our proposed hierarchical RL-based method
lies in its ability to capture such hierarchically-structured intent. As a
result, the proposed HRL model achieves superior performance on several public
datasets, compared with state-of-the-art baselines.
|
[
"cs.IR",
"cs.LG"
] | false |
2306.01513
|
2023-06-02T13:02:52Z
|
Network Degeneracy as an Indicator of Training Performance: Comparing
Finite and Infinite Width Angle Predictions
|
[
"Cameron Jakub",
"Mihai Nica"
] |
Neural networks are powerful functions with widespread use, but the
theoretical behaviour of these functions is not fully understood. Creating deep
neural networks by stacking many layers has achieved exceptional performance in
many applications and contributed to the recent explosion of these methods.
Previous works have shown that depth can exponentially increase the
expressibility of the network. However, as networks get deeper and deeper, they
are more susceptible to becoming degenerate. We observe this degeneracy in the
sense that on initialization, inputs tend to become more and more correlated as
they travel through the layers of the network. If a network has too many
layers, it tends to approximate a (random) constant function, making it
effectively incapable of distinguishing between inputs. This seems to affect
the training of the network and cause it to perform poorly, as we empirically
investigate in this paper. We use a simple algorithm that can accurately
predict the level of degeneracy for any given fully connected ReLU network
architecture, and demonstrate how the predicted degeneracy relates to training
dynamics of the network. We also compare this prediction to predictions derived
using infinite width networks.
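The qualitative effect is easy to reproduce with the infinite-width
correlation map of a ReLU layer at standard initialization (the normalized
arc-cosine kernel; the paper's finite-width prediction refines this picture):

    import numpy as np

    def relu_correlation_map(rho):
        # Correlation between two inputs after one wide ReLU layer.
        theta = np.arccos(np.clip(rho, -1.0, 1.0))
        return (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi

    for depth in (5, 20, 80):
        rho = 0.2
        for _ in range(depth):
            rho = relu_correlation_map(rho)
        print(depth, rho)  # correlations creep toward 1 with depth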
|
[
"cs.LG",
"stat.ML"
] | false |
2306.01528
|
2023-06-02T13:28:53Z
|
Does it pay to optimize AUC?
|
[
"Baojian Zhou",
"Steven Skiena"
] |
The Area Under the ROC Curve (AUC) is an important model metric for
evaluating binary classifiers, and many algorithms have been proposed to
optimize AUC approximately. It raises the question of whether the generally
insignificant gains observed by previous studies are due to inherent
limitations of the metric or the inadequate quality of optimization.
To better understand the value of optimizing for AUC, we present an efficient
algorithm, namely AUC-opt, to find the provably optimal AUC linear classifier
in $\mathbb{R}^2$, which runs in $\mathcal{O}(n_+ n_- \log (n_+ n_-))$ where
$n_+$ and $n_-$ are the number of positive and negative samples respectively.
Furthermore, it can be naturally extended to $\mathbb{R}^d$ in
$\mathcal{O}((n_+n_-)^{d-1}\log (n_+n_-))$ by calling AUC-opt in
lower-dimensional spaces recursively. We prove the problem is NP-complete when
$d$ is not fixed, reducing from the \textit{open hemisphere problem}.
Experiments show that compared with other methods, AUC-opt achieves
statistically significant improvements on between 17 and 40 of the 50 t-SNE
training datasets in $\mathbb{R}^2$, and on between 4 and 42 in
$\mathbb{R}^3$. However,
generally the gain proves insignificant on most testing datasets compared to
the best standard classifiers. Similar observations are found for nonlinear AUC
methods under real-world datasets.
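For intuition, in $\mathbb{R}^2$ the problem reduces to picking a projection
direction; a grid-scan stand-in is sketched below (AUC-opt is exact and works
differently, so this only approximates the optimum):

    import numpy as np

    def auc(pos, neg):
        # Fraction of positive-negative pairs ranked correctly (ties half).
        d = pos[:, None] - neg[None, :]
        return ((d > 0) + 0.5 * (d == 0)).mean()

    def scan_directions(X_pos, X_neg, n_angles=2000):
        best = 0.0
        for t in np.linspace(0.0, np.pi, n_angles, endpoint=False):
            w = np.array([np.cos(t), np.sin(t)])
            a = auc(X_pos @ w, X_neg @ w)
            best = max(best, a, 1.0 - a)  # flipping the direction flips AUC
        return best

    rng = np.random.default_rng(1)
    print(scan_directions(rng.normal(1, 1, (50, 2)), rng.normal(0, 1, (60, 2))))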
|
[
"cs.CG",
"cs.LG"
] | false |
2306.01648
|
2023-06-02T16:17:43Z
|
Federated Multi-Sequence Stochastic Approximation with Local
Hypergradient Estimation
|
[
"Davoud Ataee Tarzanagh",
"Mingchen Li",
"Pranay Sharma",
"Samet Oymak"
] |
Stochastic approximation with multiple coupled sequences (MSA) has found
broad applications in machine learning as it encompasses a rich class of
problems including bilevel optimization (BLO), multi-level compositional
optimization (MCO), and reinforcement learning (specifically, actor-critic
methods). However, designing provably-efficient federated algorithms for MSA
has been an elusive question even for the special case of double sequence
approximation (DSA). Towards this goal, we develop FedMSA which is the first
federated algorithm for MSA, and establish its near-optimal communication
complexity. As core novelties, (i) FedMSA enables the provable estimation of
hypergradients in BLO and MCO via local client updates, which has been a
notable bottleneck in prior theory, and (ii) our convergence guarantees are
sensitive to the heterogeneity-level of the problem. We also incorporate
momentum and variance reduction techniques to achieve further acceleration
leading to near-optimal rates. Finally, we provide experiments that support our
theory and demonstrate the empirical benefits of FedMSA. As an example, FedMSA
enables order-of-magnitude savings in communication rounds compared to prior
federated BLO schemes.
|
[
"cs.LG",
"cs.DC"
] | false |
2306.01655
|
2023-06-02T16:24:15Z
|
Poisoning Network Flow Classifiers
|
[
"Giorgio Severi",
"Simona Boboila",
"Alina Oprea",
"John Holodnak",
"Kendra Kratkiewicz",
"Jason Matterer"
] |
As machine learning (ML) classifiers increasingly oversee the automated
monitoring of network traffic, studying their resilience against adversarial
attacks becomes critical. This paper focuses on poisoning attacks, specifically
backdoor attacks, against network traffic flow classifiers. We investigate the
challenging scenario of clean-label poisoning where the adversary's
capabilities are constrained to tampering only with the training data - without
the ability to arbitrarily modify the training labels or any other component of
the training process. We describe a trigger crafting strategy that leverages
model interpretability techniques to generate trigger patterns that are
effective even at very low poisoning rates. Finally, we design novel strategies
to generate stealthy triggers, including an approach based on generative
Bayesian network models, with the goal of minimizing the conspicuousness of the
trigger, and thus making detection of an ongoing poisoning campaign more
challenging. Our findings provide significant insights into the feasibility of
poisoning attacks on network traffic classifiers used in multiple scenarios,
including detecting malicious communication and application classification.
|
[
"cs.CR",
"cs.LG"
] | false |
2306.01668
|
2023-06-02T16:42:20Z
|
XAI Renaissance: Redefining Interpretability in Medical Diagnostic
Models
|
[
"Sujith K Mandala"
] |
As machine learning models become increasingly prevalent in medical
diagnostics, the need for interpretability and transparency becomes paramount.
The XAI Renaissance marks a significant shift in the field, aiming to
redefine the interpretability of medical diagnostic models. This paper explores
the innovative approaches and methodologies within the realm of Explainable AI
(XAI) that are revolutionizing the interpretability of medical diagnostic
models. By shedding light on the underlying decision-making process, XAI
techniques empower healthcare professionals to understand, trust, and
effectively utilize these models for accurate and reliable medical diagnoses.
This review highlights the key advancements in XAI for medical diagnostics and
their potential to transform the healthcare landscape, ultimately improving
patient outcomes and fostering trust in AI-driven diagnostic systems.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.01705
|
2023-06-02T17:28:46Z
|
The Information Pathways Hypothesis: Transformers are Dynamic
Self-Ensembles
|
[
"Md Shamim Hussain",
"Mohammed J. Zaki",
"Dharmashankar Subramanian"
] |
Transformers use the dense self-attention mechanism, which provides great
flexibility for long-range connectivity. Over multiple layers of a deep
transformer, the number of possible connectivity patterns increases
exponentially. However, very few of these contribute to the performance of the
network, and even fewer are essential. We hypothesize that there are sparsely
connected sub-networks within a transformer, called information pathways which
can be trained independently. However, the dynamic (i.e., input-dependent)
nature of these pathways makes it difficult to prune dense self-attention
during training. But the overall distribution of these pathways is often
predictable. We take advantage of this fact to propose Stochastically
Subsampled self-Attention (SSA) - a general-purpose training strategy for
transformers that can reduce both the memory and computational cost of
self-attention by 4 to 8 times during training while also serving as a
regularization method - improving generalization over dense training. We show
that an ensemble of sub-models can be formed from the subsampled pathways
within a network, which can achieve better performance than its densely
attended counterpart. We perform experiments on a variety of NLP, computer
vision and graph learning tasks in both generative and discriminative settings
to provide empirical evidence for our claims and show the effectiveness of the
proposed method.
|
[
"cs.LG",
"cs.AI"
] | false |
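For the information-pathways entry above, a minimal sketch of the general idea of stochastically subsampling self-attention during training (an illustrative simplification; the paper's SSA sampling scheme may differ):

```python
import torch
import torch.nn.functional as F

def subsampled_attention(q, k, v, keep_ratio=0.25, training=True):
    """Attend to a random subset of key/value positions while training.

    q, k, v: (batch, seq, dim). At inference time, falls back to dense attention.
    """
    if training:
        seq = k.shape[1]
        n_keep = max(1, int(seq * keep_ratio))
        idx = torch.randperm(seq)[:n_keep]      # sample a sparse "pathway"
        k, v = k[:, idx], v[:, idx]
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 128, 64)
print(subsampled_attention(q, k, v).shape)      # torch.Size([2, 128, 64])
```

With keep_ratio=0.25 the attention score matrix shrinks fourfold, which is where the memory and compute savings come from.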
2306.01816
|
2023-06-02T11:35:58Z
|
Prediction of Citrus Diseases Using Machine Learning And Deep Learning:
Classifier, Models SLR
|
[
"Muhammad Shoaib Farooq",
"Abdullah Mehboob"
] |
Citrus diseases have been a major issue for citrus growers worldwide for many
years, as they can significantly reduce fruit quality. The most harmful citrus
diseases are citrus canker, citrus greening, citrus black spot, and citrus leaf
miner, which can cause significant economic losses to the citrus industry
worldwide despite prevention and management strategies such as chemical
treatments. Citrus diseases exist wherever citrus is grown and affect the
roots, leaves, and fruit of the citrus tree. Their presence has a strong
economic impact, producing low-quality fruit and increasing the cost of disease
management. Sanitation and routine monitoring can be effective in managing
certain citrus diseases, but others may require more intensive treatments such
as chemical or biological control methods.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.01817
|
2023-06-02T11:46:58Z
|
Heart Diseases Prediction Using Block-chain and Machine Learning
|
[
"Muhammad Shoaib Farooq",
"Kiran Amjad"
] |
Most people around the globe are dying due to heart disease. The main reason
behind the rapid increase in the death rate from heart disease is that no
infrastructure has been developed for the healthcare sector that provides a
secure way of storing and transmitting patient data. Due to redundancy in
patient data, it is difficult for cardiac professionals to predict the disease
early on. This rapid increase in the death rate can be controlled by monitoring
and eliminating key risk factors in the early stages, such as blood pressure,
cholesterol level, body weight, and smoking addiction. Patient data can be
monitored by cardiac professionals (CPs) using an advanced framework in
healthcare departments, and blockchain offers one of the most reliable ways to
secure such data; advanced systems providing new ways of dealing with diseases
have been developed in healthcare departments as well. In this article, a
machine learning (ML) algorithm known as the sine-cosine weighted k-nearest
neighbor (SCA-WKNN) is used for predicting heart disease with the maximum
accuracy among the existing approaches. Blockchain technology has been used in
this research to secure the data throughout the session and can give more
accurate results. The performance of the system can be improved by using this
algorithm, and the proposed dataset has been improved by using different
resources as well.
|
[
"cs.LG",
"cs.AI"
] | false |
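For the heart-disease entry above, a plain distance-weighted kNN vote, the base classifier underlying SCA-WKNN (the sine-cosine optimization of the weighting, and the blockchain layer, are omitted; this is only an orientation sketch):

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k=5):
    """Distance-weighted kNN: nearer neighbors cast larger votes."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    votes = {}
    for label, dist in zip(y_train[nearest], d[nearest]):
        votes[label] = votes.get(label, 0.0) + 1.0 / (dist + 1e-9)
    return max(votes, key=votes.get)

X = np.random.randn(50, 4)
y = np.random.randint(0, 2, 50)
print(weighted_knn_predict(X, y, np.zeros(4)))
```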
2306.01822
|
2023-06-02T13:41:47Z
|
ErfReLU: Adaptive Activation Function for Deep Neural Network
|
[
"Ashish Rajanand",
"Pradeep Singh"
] |
Recent research has found that the activation function (AF) selected for
adding non-linearity into the output can have a big impact on how effectively
deep learning networks perform. Developing activation functions that can adapt
simultaneously with learning is a pressing need. Researchers have recently
started developing activation functions that can be trained throughout the
learning process, known as trainable or adaptive activation functions (AAFs).
Research on AAFs that enhance outcomes is still in its early stages. In this
paper, a novel activation function, 'ErfReLU', is developed that exploits both
ReLU and the error function (erf) to its advantage. State-of-the-art activation
functions such as Sigmoid, ReLU, and Tanh, and their properties, are briefly
reviewed, as are adaptive activation functions such as Tanhsoft1, Tanhsoft2,
Tanhsoft3, TanhLU, SAAF, ErfAct, Pserf, Smish, and Serf. Lastly, a performance
analysis of these nine trainable activation functions together with the
proposed one is presented by applying them in MobileNet, VGG16, and ResNet
models on the CIFAR-10, MNIST, and FMNIST benchmark datasets.
|
[
"cs.NE",
"cs.LG"
] | false |
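The entry above states only that ErfReLU combines ReLU with the error function and is trainable; one plausible form consistent with that description (an assumption for illustration, not necessarily the paper's exact definition) is $f(x) = \mathrm{ReLU}(x) + a \cdot \mathrm{erf}(x)$ with a learnable scalar $a$:

```python
import torch
import torch.nn as nn

class ErfReLU(nn.Module):
    """Hypothetical trainable activation: ReLU(x) + a * erf(x), with a learned."""
    def __init__(self, a_init: float = 0.1):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(a_init))  # updated by backprop

    def forward(self, x):
        return torch.relu(x) + self.a * torch.erf(x)

act = ErfReLU()
print(act(torch.linspace(-2.0, 2.0, 5)))
```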
2306.01839
|
2023-06-02T18:00:33Z
|
Efficient Multi-Task and Transfer Reinforcement Learning with
Parameter-Compositional Framework
|
[
"Lingfeng Sun",
"Haichao Zhang",
"Wei Xu",
"Masayoshi Tomizuka"
] |
In this work, we investigate the potential of improving multi-task training
and of leveraging it for transfer in the reinforcement learning setting.
We identify several challenges towards this goal and propose a transfer
approach with a parameter-compositional formulation. We investigate ways to
improve the training of multi-task reinforcement learning, which serves as the
foundation for transfer. We then conduct a number of transfer
experiments on various manipulation tasks. Experimental results demonstrate
that the proposed approach achieves improved performance in the multi-task
training stage and shows effective transfer in terms of both sample
efficiency and performance.
|
[
"cs.RO",
"cs.LG"
] | false |
2306.01854
|
2023-06-02T18:16:35Z
|
Reinforcement Learning with General Utilities: Simpler Variance
Reduction and Large State-Action Space
|
[
"Anas Barakat",
"Ilyas Fatkhullin",
"Niao He"
] |
We consider the reinforcement learning (RL) problem with general utilities
which consists in maximizing a function of the state-action occupancy measure.
Beyond the standard cumulative reward RL setting, this problem includes as
particular cases constrained RL, pure exploration and learning from
demonstrations among others. For this problem, we propose a simpler single-loop
parameter-free normalized policy gradient algorithm. Implementing a recursive
momentum variance reduction mechanism, our algorithm achieves
$\tilde{\mathcal{O}}(\epsilon^{-3})$ and $\tilde{\mathcal{O}}(\epsilon^{-2})$
sample complexities for $\epsilon$-first-order stationarity and
$\epsilon$-global optimality respectively, under adequate assumptions. We
further address the setting of large finite state action spaces via linear
function approximation of the occupancy measure and show a
$\tilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity for a simple policy
gradient method with a linear regression subroutine.
|
[
"cs.LG",
"math.OC"
] | false |
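In symbols, the general-utility problem the entry above studies replaces the linear cumulative-reward objective with an arbitrary function $F$ of the state-action occupancy measure:

```latex
\max_{\pi}\; F\big(\lambda^{\pi}\big), \qquad
\lambda^{\pi}(s,a) = (1-\gamma)\sum_{t=0}^{\infty} \gamma^{t}\,
  \mathbb{P}\big(s_t = s,\ a_t = a \mid \pi\big).
```

Standard RL is recovered when $F$ is linear, $F(\lambda) = \langle \lambda, r \rangle$ up to normalization; constrained RL and pure exploration correspond to other choices of $F$.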
2306.01869
|
2023-06-02T18:55:27Z
|
Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix
Factorization
|
[
"Ameya Velingker",
"Maximilian Vötsch",
"David P. Woodruff",
"Samson Zhou"
] |
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the
binary matrix factorization (BMF) problem, where the inputs are a matrix
$\mathbf{A}\in\{0,1\}^{n\times d}$, a rank parameter $k>0$, as well as an
accuracy parameter $\varepsilon>0$, and the goal is to approximate $\mathbf{A}$
as a product of low-rank factors $\mathbf{U}\in\{0,1\}^{n\times k}$ and
$\mathbf{V}\in\{0,1\}^{k\times d}$. Equivalently, we want to find $\mathbf{U}$
and $\mathbf{V}$ that minimize the Frobenius loss $\|\mathbf{U}\mathbf{V} -
\mathbf{A}\|_F^2$. Before this work, the state-of-the-art for this problem was
the approximation algorithm of Kumar et al. [ICML 2019], which achieves a
$C$-approximation for some constant $C\ge 576$. We give the first
$(1+\varepsilon)$-approximation algorithm using running time singly exponential
in $k$, where $k$ is typically a small integer. Our techniques generalize to
other common variants of the BMF problem, admitting bicriteria
$(1+\varepsilon)$-approximation algorithms for $L_p$ loss functions and the
setting where matrix operations are performed in $\mathbb{F}_2$. Our approach
can be implemented in standard big data models, such as the streaming or
distributed models.
|
[
"cs.DS",
"cs.LG"
] | false |
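For the BMF entry above, a tiny sketch of the objective being approximated — evaluating the Frobenius loss of candidate binary factors (illustrative only; the paper's contribution is the algorithm that finds near-optimal $\mathbf{U}, \mathbf{V}$):

```python
import numpy as np

def bmf_loss(A, U, V):
    """Frobenius loss ||UV - A||_F^2 for binary factors of a binary matrix A."""
    return float(np.sum((U @ V - A) ** 2))

rng = np.random.default_rng(1)
U = rng.integers(0, 2, size=(6, 2))   # {0,1}^{n x k}
V = rng.integers(0, 2, size=(2, 5))   # {0,1}^{k x d}
A = (U @ V > 0).astype(int)           # a binary matrix close to rank k
print(bmf_loss(A, U, V))              # zero iff UV equals A over the integers
```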
2306.01870
|
2023-06-02T18:57:24Z
|
Layer-Wise Feedback Alignment is Conserved in Deep Neural Networks
|
[
"Zachary Robertson",
"Oluwasanmi Koyejo"
] |
In the quest to enhance the efficiency and bio-plausibility of training deep
neural networks, Feedback Alignment (FA), which replaces the backward pass
weights with random matrices in the training process, has emerged as an
alternative to traditional backpropagation. While the appeal of FA lies in its
circumvention of computational challenges and its plausible biological
alignment, the theoretical understanding of this learning rule remains partial.
This paper uncovers a set of conservation laws underpinning the learning
dynamics of FA, revealing intriguing parallels between FA and Gradient Descent
(GD). Our analysis reveals that FA harbors implicit biases akin to those
exhibited by GD, challenging the prevailing narrative that these learning
algorithms are fundamentally different. Moreover, we demonstrate that these
conservation laws elucidate sufficient conditions for layer-wise alignment with
feedback matrices in ReLU networks. We further show that this implies
over-parameterized two-layer linear networks trained with FA converge to
minimum-norm solutions. The implications of our findings offer avenues for
developing more efficient and biologically plausible alternatives to
backpropagation through an understanding of the principles governing learning
dynamics in deep networks.
|
[
"cs.LG",
"stat.ML"
] | false |
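For the feedback-alignment entry above, the core mechanic in a two-layer ReLU network: the backward pass uses a fixed random matrix B in place of the transposed forward weights (a standard textbook rendering of FA, not the paper's conservation-law analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
W1 = 0.5 * rng.normal(size=(n_hid, n_in))
W2 = 0.5 * rng.normal(size=(n_out, n_hid))
B = 0.5 * rng.normal(size=(n_hid, n_out))   # fixed random feedback, replaces W2.T

x, y = rng.normal(size=n_in), rng.normal(size=n_out)
for _ in range(200):
    h = np.maximum(0.0, W1 @ x)             # forward pass, ReLU hidden layer
    e = W2 @ h - y                          # output error
    W2 -= 0.01 * np.outer(e, h)
    W1 -= 0.01 * np.outer((B @ e) * (h > 0), x)  # FA update: B @ e, not W2.T @ e
print(np.sum((W2 @ np.maximum(0.0, W1 @ x) - y) ** 2))  # loss should shrink
```

The striking empirical fact, which the paper's conservation laws help explain, is that the forward weights come to align with the feedback matrix during training.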
2306.01885
|
2023-06-02T19:37:38Z
|
Multifunctionality in a Connectome-Based Reservoir Computer
|
[
"Jacob Morra",
"Andrew Flynn",
"Andreas Amann",
"Mark Daley"
] |
Multifunctionality describes the capacity for a neural network to perform
multiple mutually exclusive tasks without altering its network connections, and
is an emerging area of interest in the reservoir computing machine learning
paradigm. Multifunctionality has been observed in the brains of humans and
other animals: particularly, in the lateral horn of the fruit fly. In this
work, we transplant the connectome of the fruit fly lateral horn to a reservoir
computer (RC), and investigate the extent to which this 'fruit fly RC' (FFRC)
exhibits multifunctionality using the 'seeing double' problem as a benchmark
test. We furthermore explore the dynamics of how this FFRC achieves
multifunctionality while varying the network's spectral radius. Compared to the
widely used Erd\H{o}s-R\'enyi Reservoir Computer (ERRC), we report that the FFRC
exhibits a greater capacity for multifunctionality; is multifunctional across a
broader hyperparameter range; and solves the seeing double problem far beyond
the previously observed spectral radius limit, wherein the ERRC's dynamics
become chaotic.
|
[
"cs.LG",
"cs.NE"
] | false |
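For the connectome-RC entry above, the generic echo-state update underlying any reservoir computer, with a random recurrent matrix standing in for the fruit-fly connectome; the spectral-radius rescaling below is the hyperparameter the abstract varies:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius to 0.9
W_in = rng.normal(size=N)

x = np.zeros(N)
for t in range(100):
    u = np.sin(0.1 * t)              # scalar input signal
    x = np.tanh(W @ x + W_in * u)    # reservoir state update; W is never trained
# only a linear readout y = W_out @ x is fit; multifunctionality asks one
# fixed W to support several such readout tasks at once
print(np.linalg.norm(x))
```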
2306.01958
|
2023-06-02T23:36:49Z
|
A Survey on Explainability of Graph Neural Networks
|
[
"Jaykumar Kakkad",
"Jaspal Jannu",
"Kartik Sharma",
"Charu Aggarwal",
"Sourav Medya"
] |
Graph neural networks (GNNs) are powerful graph-based deep-learning models
that have gained significant attention and demonstrated remarkable performance
in various domains, including natural language processing, drug discovery, and
recommendation systems. However, combining feature information and
combinatorial graph structures has led to complex non-linear GNN models.
Consequently, this has increased the challenges of understanding the workings
of GNNs and the underlying reasons behind their predictions. To address this,
numerous explainability methods have been proposed to shed light on the inner
mechanism of the GNNs. Explainable GNNs improve their security and enhance
trust in their recommendations. This survey aims to provide a comprehensive
overview of the existing explainability techniques for GNNs. We create a novel
taxonomy and hierarchy to categorize these methods based on their objective and
methodology. We also discuss the strengths, limitations, and application
scenarios of each category. Furthermore, we highlight the key evaluation
metrics and datasets commonly used to assess the explainability of GNNs. This
survey aims to assist researchers and practitioners in understanding the
existing landscape of explainability methods, identifying gaps, and fostering
further advancements in interpretable graph-based machine learning.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.01220
|
2023-06-02T00:57:03Z
|
Is Model Attention Aligned with Human Attention? An Empirical Study on
Large Language Models for Code Generation
|
[
"Bonan Kou",
"Shengmai Chen",
"Zhijie Wang",
"Lei Ma",
"Tianyi Zhang"
] |
Large Language Models (LLMs) have been demonstrated effective for code
generation. Due to the complexity and opacity of LLMs, little is known about
how these models generate code. To deepen our understanding, we investigate
whether LLMs attend to the same parts of a natural language description as
human programmers during code generation. An analysis of five LLMs on a popular
benchmark, HumanEval, revealed a consistent misalignment between LLMs' and
programmers' attention. Furthermore, we found that there is no correlation
between the code generation accuracy of LLMs and their alignment with human
programmers. Through a quantitative experiment and a user study, we confirmed
that, among twelve different attention computation methods, attention computed
by the perturbation-based method is most aligned with human attention and is
consistently favored by human programmers. Our findings highlight the need for
human-aligned LLMs for better interpretability and programmer trust.
|
[
"cs.SE",
"cs.HC",
"cs.LG"
] | false |
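For the code-generation attention entry above, a minimal sketch of the perturbation-based attribution the study found best aligned with human attention: weight each prompt token by how much masking it changes the generated output (a generic rendering; the paper compares twelve concrete methods):

```python
def perturbation_attention(prompt_tokens, generate, score):
    """Token importance = drop in output similarity when the token is masked."""
    base = generate(" ".join(prompt_tokens))
    weights = []
    for i in range(len(prompt_tokens)):
        masked = prompt_tokens[:i] + ["<mask>"] + prompt_tokens[i + 1:]
        weights.append(1.0 - score(base, generate(" ".join(masked))))
    return weights

# Toy stand-ins for an LLM and a similarity metric, so the sketch runs:
gen = lambda p: p.upper()
sim = lambda a, b: sum(c1 == c2 for c1, c2 in zip(a, b)) / max(len(a), len(b))
print(perturbation_attention("sort the list".split(), gen, sim))
```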
2306.01270
|
2023-06-02T05:07:37Z
|
Multi-Robot Path Planning Combining Heuristics and Multi-Agent
Reinforcement Learning
|
[
"Shaoming Peng"
] |
Multi-robot path finding in dynamic environments is a highly challenging
classic problem. In the movement process, robots need to avoid collisions with
other moving robots while minimizing their travel distance. Previous methods
for this problem either continuously replan paths using heuristic search
methods to avoid conflicts or choose appropriate collision avoidance strategies
based on learning approaches. The former may result in long travel distances
due to frequent replanning, while the latter may have low learning efficiency
due to poor sample exploration and utilization, leading to high training costs
for the model. To address these issues, we propose a path planning method,
MAPPOHR, which combines heuristic search, empirical rules, and multi-agent
reinforcement learning. The method consists of two layers: a real-time planner
based on the multi-agent reinforcement learning algorithm, MAPPO, which embeds
empirical rules in the action output layer and reward functions, and a
heuristic search planner used to create a global guiding path. During movement,
the heuristic search planner replans new paths based on the instructions of the
real-time planner. We tested our method in 10 different conflict scenarios. The
experiments show that the planning performance of MAPPOHR is better than that
of existing learning and heuristic methods. Due to the utilization of empirical
knowledge and heuristic search, the learning efficiency of MAPPOHR is higher
than that of existing learning methods.
|
[
"cs.AI",
"cs.LG",
"cs.RO"
] | false |
2306.01332
|
2023-06-02T07:53:41Z
|
Differentiable Grey-box Modelling of Phaser Effects using Frame-based
Spectral Processing
|
[
"Alistair Carson",
"Cassia Valentini-Botinhao",
"Simon King",
"Stefan Bilbao"
] |
Machine learning approaches to modelling analog audio effects have seen
intensive investigation in recent years, particularly in the context of
non-linear time-invariant effects such as guitar amplifiers. For modulation
effects such as phasers, however, new challenges emerge due to the presence of
the low-frequency oscillator which controls the slowly time-varying nature of
the effect. Existing approaches have either required foreknowledge of this
control signal, or have been non-causal in implementation. This work presents a
differentiable digital signal processing approach to modelling phaser effects
in which the underlying control signal and time-varying spectral response of
the effect are jointly learned. The proposed model processes audio in short
frames to implement a time-varying filter in the frequency domain, with a
transfer function based on typical analog phaser circuit topology. We show that
the model can be trained to emulate an analog reference device, while retaining
interpretable and adjustable parameters. The frame duration is an important
hyper-parameter of the proposed model, so an investigation was carried out into
its effect on model accuracy. The optimal frame length depends on both the rate
and transient decay-time of the target effect, but the frame length can be
altered at inference time without a significant change in accuracy.
|
[
"eess.AS",
"cs.LG",
"cs.SD"
] | false |
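For the phaser-modelling entry above, the frame-based mechanism in its simplest form: apply a frequency-domain transfer function to short frames of audio (here a fixed H and non-overlapping frames for brevity; the paper learns a time-varying H driven by a learned LFO within a differentiable pipeline):

```python
import numpy as np

def frame_filter(x, H, frame=256):
    """Filter audio x frame by frame via a frequency-domain transfer function H."""
    out = np.zeros_like(x)
    for start in range(0, len(x) - frame + 1, frame):
        X = np.fft.rfft(x[start:start + frame])
        out[start:start + frame] = np.fft.irfft(X * H, n=frame)
    return out

x = np.random.randn(1024)
H = np.exp(-1j * np.linspace(0.0, np.pi, 129))  # an all-pass-like phase response
print(frame_filter(x, H).shape)
```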
2306.01333
|
2023-06-02T07:54:48Z
|
Navigating Fairness in Radiology AI: Concepts, Consequences,and Crucial
Considerations
|
[
"Vasantha Kumar Venugopal",
"Abhishek Gupta",
"Rohit Takhar",
"Charlene Liew Jin Yee",
"Catherine Jones",
"Gilberto Szarf"
] |
Artificial Intelligence (AI) has significantly revolutionized radiology,
promising improved patient outcomes and streamlined processes. However, it is
critical to ensure the fairness of AI models to prevent stealthy bias and
disparities from leading to unequal outcomes. This review discusses the concept
of fairness in AI, focusing on bias auditing using the Aequitas toolkit, and
its real-world implications in radiology, particularly in disease screening
scenarios. Aequitas, an open-source bias audit toolkit, scrutinizes AI models'
decisions, identifying hidden biases that may result in disparities across
different demographic groups and imaging equipment brands. This toolkit
operates on statistical theories, analyzing a large dataset to reveal a model's
fairness. It excels in its versatility to handle various variables
simultaneously, especially in a field as diverse as radiology. The review
explicates essential fairness metrics: Equal and Proportional Parity, False
Positive Rate Parity, False Discovery Rate Parity, False Negative Rate Parity,
and False Omission Rate Parity. Each metric serves unique purposes and offers
different insights. We present hypothetical scenarios to demonstrate their
relevance in disease screening settings, and how disparities can lead to
significant real-world impacts.
|
[
"cs.LG",
"cs.AI",
"cs.CY"
] | false |
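For the radiology-fairness entry above, one of the listed metrics made concrete: false positive rate parity holds when the FPR is equal across demographic groups (a generic computation, not the Aequitas API):

```python
import numpy as np

def fpr_by_group(y_true, y_pred, group):
    """False positive rate per group; parity holds when the rates match."""
    rates = {}
    for g in np.unique(group):
        negatives = (group == g) & (y_true == 0)
        rates[g] = float(np.mean(y_pred[negatives])) if negatives.any() else float("nan")
    return rates

y_true = np.array([0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1])
group = np.array(["A", "A", "A", "B", "B", "B"])
print(fpr_by_group(y_true, y_pred, group))  # {'A': 0.5, 'B': 0.5} -> parity
```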
2306.01381
|
2023-06-02T09:02:09Z
|
Adaptive Message Quantization and Parallelization for Distributed
Full-graph GNN Training
|
[
"Borui Wan",
"Juntao Zhao",
"Chuan Wu"
] |
Distributed full-graph training of Graph Neural Networks (GNNs) over large
graphs is bandwidth-demanding and time-consuming. Frequent exchanges of node
features, embeddings and embedding gradients (all referred to as messages)
across devices bring significant communication overhead for nodes with remote
neighbors on other devices (marginal nodes) and unnecessary waiting time for
nodes without remote neighbors (central nodes) in the training graph. This
paper proposes an efficient GNN training system, AdaQP, to expedite distributed
full-graph GNN training. We stochastically quantize messages transferred across
devices to lower-precision integers for communication traffic reduction and
advocate communication-computation parallelization between marginal nodes and
central nodes. We provide theoretical analysis to prove fast training
convergence (at the rate of $O(T^{-1})$, with $T$ being the total number of
training epochs) and design an adaptive quantization bit-width assignment scheme for
each message based on the analysis, targeting a good trade-off between training
convergence and efficiency. Extensive experiments on mainstream graph datasets
show that AdaQP substantially improves distributed full-graph training's
throughput (up to 3.01$\times$) with negligible accuracy drop (at most 0.30%) or even
accuracy improvement (up to 0.19%) in most cases, showing significant
advantages over the state-of-the-art works.
|
[
"cs.LG",
"cs.AI",
"cs.DC"
] | false |
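For the AdaQP entry above, a sketch of the communication primitive the abstract describes: unbiased stochastic quantization of a message tensor onto a b-bit integer grid (a generic scheme; AdaQP's contribution is the adaptive per-message bit-width assignment and the parallelization):

```python
import numpy as np

def stochastic_quantize(x, bits=4):
    """Unbiased stochastic rounding onto a (2^bits - 1)-level integer grid."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    scaled = (x - lo) / (hi - lo + 1e-12) * levels
    q = np.floor(scaled)
    q += np.random.random(x.shape) < (scaled - q)   # round up w.p. fractional part
    return q.astype(np.uint8), lo, hi               # ship small ints + two floats

def dequantize(q, lo, hi, bits=4):
    return lo + q.astype(np.float64) / (2 ** bits - 1) * (hi - lo)

x = np.random.randn(5)
q, lo, hi = stochastic_quantize(x)
print(x, dequantize(q, lo, hi))   # equal in expectation: E[dequantize(q)] = x
```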
2306.01428
|
2023-06-02T10:34:05Z
|
Improved DeepFake Detection Using Whisper Features
|
[
"Piotr Kawa",
"Marcin Plata",
"Michał Czuba",
"Piotr Szymański",
"Piotr Syga"
] |
With a recent influx of voice generation methods, the threat introduced by
audio DeepFake (DF) is ever-increasing. Several different detection methods
have been presented as a countermeasure. Many methods are based on so-called
front-ends, which, by transforming the raw audio, emphasize features crucial
for assessing the genuineness of the audio sample. Our contribution consists of
investigating the influence of the state-of-the-art Whisper automatic speech
recognition model as a DF detection front-end. We compare various combinations
of Whisper and well-established front-ends by training 3 detection models
(LCNN, SpecRNet, and MesoNet) on a widely used ASVspoof 2021 DF dataset and
later evaluating them on the DF In-The-Wild dataset. We show that using
Whisper-based features improves the detection for each model and outperforms
recent results on the In-The-Wild dataset by reducing Equal Error Rate by 21%.
|
[
"cs.SD",
"cs.LG",
"eess.AS"
] | false |
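For the DeepFake-detection entry above, a sketch of extracting Whisper encoder features to serve as a front-end (assuming the openai-whisper package and a local file sample.wav, both placeholders; detection models such as LCNN are then trained on top of such features):

```python
import whisper  # pip install openai-whisper

model = whisper.load_model("tiny")
audio = whisper.pad_or_trim(whisper.load_audio("sample.wav"))  # 30 s window
mel = whisper.log_mel_spectrogram(audio).to(model.device)
features = model.embed_audio(mel.unsqueeze(0))  # encoder output
print(features.shape)  # (1, frames, d_model), the front-end representation
```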
2306.01431
|
2023-06-02T10:42:47Z
|
On Knowledge Editing in Federated Learning: Perspectives, Challenges,
and Future Directions
|
[
"Leijie Wu",
"Song Guo",
"Junxiao Wang",
"Zicong Hong",
"Jie Zhang",
"Jingren Zhou"
] |
As Federated Learning (FL) has gained increasing attention, it has become
widely acknowledged that straightforwardly applying stochastic gradient descent
(SGD) on the overall framework when learning over a sequence of tasks results
in the phenomenon known as ``catastrophic forgetting''. Consequently, much FL
research has centered on devising federated incremental learning methods to
alleviate forgetting while augmenting knowledge. On the other hand, forgetting
is not always detrimental. Selective amnesia, also known as federated
unlearning, which entails the elimination of specific knowledge, can address
privacy concerns and create additional ``space'' for acquiring new knowledge.
However, there is a scarcity of extensive surveys that encompass recent
advancements and provide a thorough examination of this issue. In this
manuscript, we present an extensive survey on the topic of knowledge editing
(augmentation/removal) in Federated Learning, with the goal of summarizing the
state-of-the-art research and expanding the perspective for various domains.
Initially, we introduce an integrated paradigm, referred to as Federated
Editable Learning (FEL), by reevaluating the entire lifecycle of FL. Secondly,
we provide a comprehensive overview of existing methods, evaluate their
position within the proposed paradigm, and emphasize the current challenges
they face. Lastly, we explore potential avenues for future research and
identify unresolved issues.
|
[
"cs.LG",
"cs.AI",
"cs.DC"
] | false |
2306.01464
|
2023-06-02T11:41:19Z
|
Theoretical Behavior of XAI Methods in the Presence of Suppressor
Variables
|
[
"Rick Wilming",
"Leo Kieslich",
"Benedict Clark",
"Stefan Haufe"
] |
In recent years, the community of 'explainable artificial intelligence' (XAI)
has created a vast body of methods to bridge a perceived gap between model
'complexity' and 'interpretability'. However, a concrete problem to be solved
by XAI methods has not yet been formally stated. As a result, XAI methods are
lacking theoretical and empirical evidence for the 'correctness' of their
explanations, limiting their potential use for quality-control and transparency
purposes. At the same time, Haufe et al. (2014) showed, using simple toy
examples, that even standard interpretations of linear models can be highly
misleading. Specifically, high importance may be attributed to so-called
suppressor variables lacking any statistical relation to the prediction target.
This behavior has been confirmed empirically for a large array of XAI methods
in Wilming et al. (2022). Here, we go one step further by deriving analytical
expressions for the behavior of a variety of popular XAI methods on a simple
two-dimensional binary classification problem involving Gaussian
class-conditional distributions. We show that the majority of the studied
approaches will attribute non-zero importance to a non-class-related suppressor
feature in the presence of correlated noise. This poses important limitations
on the interpretations and conclusions that the outputs of these XAI methods
can afford.
|
[
"cs.LG",
"cs.AI",
"stat.ML"
] | false |
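The classic two-dimensional suppressor construction from Haufe et al. (2014), which the entry above analyzes, fits in a few lines: the label depends only on x1, yet the optimal linear model assigns x2 a large weight because x2 helps cancel the shared noise:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
y = rng.integers(0, 2, size=n)
noise = rng.normal(size=n)
x1 = (2 * y - 1) + noise      # class signal plus noise
x2 = noise                    # suppressor: pure noise, no class information
clf = LogisticRegression().fit(np.column_stack([x1, x2]), y)
print(clf.coef_)              # x2 receives a large negative weight anyway
```

Any importance method that mirrors the model weights will flag x2, which is exactly the failure mode the paper characterizes analytically.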