arxiv_id
stringlengths 10
10
| published
stringlengths 20
20
| titles
stringlengths 9
243
| authors
listlengths 1
389
| abstract
stringlengths 96
3.09k
| categories
listlengths 1
10
| selected
bool 2
classes |
---|---|---|---|---|---|---|
2305.15377
|
2023-05-24T17:37:33Z
|
Uncovering and Quantifying Social Biases in Code Generation
|
[
"Yan Liu",
"Xiaokang Chen",
"Yan Gao",
"Zhe Su",
"Fengji Zhang",
"Daoguang Zan",
"Jian-Guang Lou",
"Pin-Yu Chen",
"Tsung-Yi Ho"
] |
With the popularity of automatic code generation tools, such as Copilot, the
study of the potential hazards of these tools is gaining importance. In this
work, we explore the social bias problem in pre-trained code generation models.
We propose a new paradigm to construct code prompts and successfully uncover
social biases in code generation models. To quantify the severity of social
biases in generated code, we develop a dataset along with three metrics to
evaluate the overall social bias and fine-grained unfairness across different
demographics. Experimental results on three pre-trained code generation models
(Codex, InCoder, and CodeGen) with varying sizes, reveal severe social biases.
Moreover, we conduct analysis to provide useful insights for further choice of
code generation models with low social bias. (This work contains examples that
potentially implicate stereotypes, associations, and other harms that could be
offensive to individuals in certain social groups.)
|
[
"cs.CL"
] | false |
2305.15380
|
2023-05-24T17:40:20Z
|
Sentiment Analysis Using Aligned Word Embeddings for Uralic Languages
|
[
"Khalid Alnajjar",
"Mika Hämäläinen",
"Jack Rueter"
] |
In this paper, we present an approach for translating word embeddings from a
majority language into 4 minority languages: Erzya, Moksha, Udmurt and
Komi-Zyrian. Furthermore, we align these word embeddings and present a novel
neural network model that is trained on English data to conduct sentiment
analysis and then applied on endangered language data through the aligned word
embeddings. To test our model, we annotated a small sentiment analysis corpus
for the 4 endangered languages and Finnish. Our method reached at least 56\%
accuracy for each endangered language. The models and the sentiment corpus will
be released together with this paper. Our research shows that state-of-the-art
neural models can be used with endangered languages with the only requirement
being a dictionary between the endangered language and a majority language.
|
[
"cs.CL"
] | false |
2305.15389
|
2023-05-24T17:51:44Z
|
Comparing Humans and Models on a Similar Scale: Towards Cognitive Gender
Bias Evaluation in Coreference Resolution
|
[
"Gili Lior",
"Gabriel Stanovsky"
] |
Spurious correlations were found to be an important factor explaining model
performance in various NLP tasks (e.g., gender or racial artifacts), often
considered to be ''shortcuts'' to the actual task. However, humans tend to
similarly make quick (and sometimes wrong) predictions based on societal and
cognitive presuppositions. In this work we address the question: can we
quantify the extent to which model biases reflect human behaviour? Answering
this question will help shed light on model performance and provide meaningful
comparisons against humans. We approach this question through the lens of the
dual-process theory for human decision-making. This theory differentiates
between an automatic unconscious (and sometimes biased) ''fast system'' and a
''slow system'', which when triggered may revisit earlier automatic reactions.
We make several observations from two crowdsourcing experiments of gender bias
in coreference resolution, using self-paced reading to study the ''fast''
system, and question answering to study the ''slow'' system under a constrained
time setting. On real-world data humans make $\sim$3\% more gender-biased
decisions compared to models, while on synthetic data models are $\sim$12\%
more biased.
|
[
"cs.CL"
] | false |
2305.15501
|
2023-05-24T18:42:45Z
|
Deriving Language Models from Masked Language Models
|
[
"Lucas Torroba Hennigen",
"Yoon Kim"
] |
Masked language models (MLM) do not explicitly define a distribution over
language, i.e., they are not language models per se. However, recent work has
implicitly treated them as such for the purposes of generation and scoring.
This paper studies methods for deriving explicit joint distributions from MLMs,
focusing on distributions over two tokens, which makes it possible to calculate
exact distributional properties. We find that an approach based on identifying
joints whose conditionals are closest to those of the MLM works well and
outperforms existing Markov random field-based approaches. We further find that
this derived model's conditionals can even occasionally outperform the original
MLM's conditionals.
|
[
"cs.CL"
] | false |
2305.15516
|
2023-05-24T19:14:57Z
|
Free Lunch for Efficient Textual Commonsense Integration in Language
Models
|
[
"Wanyun Cui",
"Xingran Chen"
] |
Recent years have witnessed the emergence of textual commonsense knowledge
bases, aimed at providing more nuanced and context-rich knowledge. The
integration of external commonsense into language models has been shown to be a
key enabler in advancing the state-of-the-art for a wide range of NLP tasks.
However, incorporating textual commonsense descriptions is computationally
expensive, as compared to encoding conventional symbolic knowledge. In this
paper, we propose a method to improve its efficiency without modifying the
model. We group training samples with similar commonsense descriptions into a
single batch, thus reusing the encoded description across multiple samples. One
key observation is that the upper bound of batch partitioning can be reduced to
the classic {\it graph k-cut problem}. Consequently, we propose a spectral
clustering-based algorithm to solve this problem. Extensive experiments
illustrate that the proposed batch partitioning approach effectively reduces
the computational cost while preserving performance. The efficiency improvement
is more pronounced on larger datasets and on devices with more memory capacity,
attesting to its practical utility for large-scale applications.
|
[
"cs.CL"
] | false |
2305.15520
|
2023-05-24T19:17:13Z
|
Exploring Automatically Perturbed Natural Language Explanations in
Relation Extraction
|
[
"Wanyun Cui",
"Xingran Chen"
] |
Previous research has demonstrated that natural language explanations provide
valuable inductive biases that guide models, thereby improving the
generalization ability and data efficiency. In this paper, we undertake a
systematic examination of the effectiveness of these explanations. Remarkably,
we find that corrupted explanations with diminished inductive biases can
achieve competitive or superior performance compared to the original
explanations. Our findings furnish novel insights into the characteristics of
natural language explanations in the following ways: (1) the impact of
explanations varies across different training styles and datasets, with
previously believed improvements primarily observed in frozen language models.
(2) While previous research has attributed the effect of explanations solely to
their inductive biases, our study shows that the effect persists even when the
explanations are completely corrupted. We propose that the main effect is due
to the provision of additional context space. (3) Utilizing the proposed
automatic perturbed context, we were able to attain comparable results to
annotated explanations, but with a significant increase in computational
efficiency, 20-30 times faster.
|
[
"cs.CL"
] | false |
2305.15533
|
2023-05-24T19:37:23Z
|
Automated Refugee Case Analysis: An NLP Pipeline for Supporting Legal
Practitioners
|
[
"Claire Barale",
"Michael Rovatsos",
"Nehal Bhuta"
] |
In this paper, we introduce an end-to-end pipeline for retrieving,
processing, and extracting targeted information from legal cases. We
investigate an under-studied legal domain with a case study on refugee law in
Canada. Searching case law for past similar cases is a key part of legal work
for both lawyers and judges, the potential end-users of our prototype. While
traditional named-entity recognition labels such as dates provide meaningful
information in legal work, we propose to extend existing models and retrieve a
total of 19 useful categories of items from refugee cases. After creating a
novel data set of cases, we perform information extraction based on
state-of-the-art neural named-entity recognition (NER). We test different
architectures including two transformer models, using contextual and
non-contextual embeddings, and compare general purpose versus domain-specific
pre-training. The results demonstrate that models pre-trained on legal data
perform best despite their smaller size, suggesting that domain matching had a
larger effect than network architecture. We achieve a F1 score above 90% on
five of the targeted categories and over 80% on four further categories.
|
[
"cs.CL"
] | false |
2305.15582
|
2023-05-24T21:36:15Z
|
Balancing Effect of Training Dataset Distribution of Multiple Styles for
Multi-Style Text Transfer
|
[
"Debarati Das",
"David Ma",
"Dongyeop Kang"
] |
Text style transfer is an exciting task within the field of natural language
generation that is often plagued by the need for high-quality paired datasets.
Furthermore, training a model for multi-attribute text style transfer requires
datasets with sufficient support across all combinations of the considered
stylistic attributes, adding to the challenges of training a style transfer
model. This paper explores the impact of training data input diversity on the
quality of the generated text from the multi-style transfer model. We construct
a pseudo-parallel dataset by devising heuristics to adjust the style
distribution in the training samples. We balance our training dataset using
marginal and joint distributions to train our style transfer models. We observe
that a balanced dataset produces more effective control effects over multiple
styles than an imbalanced or skewed one. Through quantitative analysis, we
explore the impact of multiple style distributions in training data on
style-transferred output. These findings will better inform the design of
style-transfer datasets.
|
[
"cs.CL"
] | false |
2305.15605
|
2023-05-24T22:34:01Z
|
Revisiting Sentence Union Generation as a Testbed for Text Consolidation
|
[
"Eran Hirsch",
"Valentina Pyatkin",
"Ruben Wolhandler",
"Avi Caciularu",
"Asi Shefer",
"Ido Dagan"
] |
Tasks involving text generation based on multiple input texts, such as
multi-document summarization, long-form question answering and contemporary
dialogue applications, challenge models for their ability to properly
consolidate partly-overlapping multi-text information. However, these tasks
entangle the consolidation phase with the often subjective and ill-defined
content selection requirement, impeding proper assessment of models'
consolidation capabilities. In this paper, we suggest revisiting the sentence
union generation task as an effective well-defined testbed for assessing text
consolidation capabilities, decoupling the consolidation challenge from
subjective content selection. To support research on this task, we present
refined annotation methodology and tools for crowdsourcing sentence union,
create the largest union dataset to date and provide an analysis of its rich
coverage of various consolidation aspects. We then propose a comprehensive
evaluation protocol for union generation, including both human and automatic
evaluation. Finally, as baselines, we evaluate state-of-the-art language models
on the task, along with a detailed analysis of their capacity to address
multi-text consolidation challenges and their limitations.
|
[
"cs.CL"
] | false |
2305.14590
|
2023-05-24T00:07:40Z
|
RE$^2$: Region-Aware Relation Extraction from Visually Rich Documents
|
[
"Pritika Ramu",
"Sijia Wang",
"Lalla Mouatadid",
"Joy Rimchala",
"Lifu Huang"
] |
Current research in form understanding predominantly relies on large
pre-trained language models, necessitating extensive data for pre-training.
However, the importance of layout structure (i.e., the spatial relationship
between the entity blocks in the visually rich document) to relation extraction
has been overlooked. In this paper, we propose REgion-Aware Relation Extraction
(RE$^2$) that leverages region-level spatial structure among the entity blocks
to improve their relation prediction. We design an edge-aware graph attention
network to learn the interaction between entities while considering their
spatial relationship defined by their region-level representations. We also
introduce a constraint objective to regularize the model towards consistency
with the inherent constraints of the relation extraction task. Extensive
experiments across various datasets, languages and domains demonstrate the
superiority of our proposed approach.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.14592
|
2023-05-24T00:17:36Z
|
Instruction Tuning with Lexicons for Zero-Shot Style Classification
|
[
"Ruohao Guo",
"Wei Xu",
"Alan Ritter"
] |
Style is used to convey authors' intentions and attitudes. Despite the
success of large pre-trained language models on style classification, prior
work relies on fine-tuning with labeled examples. Prompting large language
models to classify style without fine-tuning is challenging because language
styles can be difficult to define. In this study, we investigate the
effectiveness of style lexicons as a means for instructing language models how
to identify new styles that are unseen during training. Our experiments show
that lexicon-based instructions improve transfer zero-shot performance
significantly. We will release our code and data.
|
[
"cs.CL",
"cs.LG"
] | false |
2305.14618
|
2023-05-24T01:35:10Z
|
Abductive Commonsense Reasoning Exploiting Mutually Exclusive
Explanations
|
[
"Wenting Zhao",
"Justin T. Chiu",
"Claire Cardie",
"Alexander M. Rush"
] |
Abductive reasoning aims to find plausible explanations for an event. This
style of reasoning is critical for commonsense tasks where there are often
multiple plausible explanations. Existing approaches for abductive reasoning in
natural language processing (NLP) often rely on manually generated annotations
for supervision; however, such annotations can be subjective and biased.
Instead of using direct supervision, this work proposes an approach for
abductive commonsense reasoning that exploits the fact that only a subset of
explanations is correct for a given context. The method uses posterior
regularization to enforce a mutual exclusion constraint, encouraging the model
to learn the distinction between fluent explanations and plausible ones. We
evaluate our approach on a diverse set of abductive reasoning datasets;
experimental results show that our approach outperforms or is comparable to
directly applying pretrained language models in a zero-shot manner and other
knowledge-augmented zero-shot methods.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.14622
|
2023-05-24T01:40:57Z
|
EXnet: Efficient In-context Learning for Data-less Text classification
|
[
"Debaditya Shome",
"Kuldeep Yadav"
] |
Large pre-trained language models (PLMs) have made significant progress in
encoding world knowledge and spawned a new set of learning paradigms including
zero-shot, few-shot, and in-context learning. Many language tasks can be
modeled as a set of prompts (for example, is this text about geography?) and
language models can provide binary answers, i.e., Yes or No. There is evidence
to suggest that the next-word prediction used by many PLMs does not align well
with zero-shot paradigms. Therefore, PLMs are fine-tuned as a
question-answering system. In-context learning extends zero-shot learning by
incorporating prompts and examples, resulting in increased task accuracy. Our
paper presents EXnet, a model specifically designed to perform in-context
learning without any limitations on the number of examples. We argue that
in-context learning is an effective method to increase task accuracy, and
providing examples facilitates cross-task generalization, especially when it
comes to text classification tasks. With extensive experiments, we show that
even our smallest model (15M parameters) generalizes to several unseen
classification tasks and domains.
|
[
"cs.CL",
"cs.LG"
] | false |
2305.14688
|
2023-05-24T03:51:31Z
|
ExpertPrompting: Instructing Large Language Models to be Distinguished
Experts
|
[
"Benfeng Xu",
"An Yang",
"Junyang Lin",
"Quan Wang",
"Chang Zhou",
"Yongdong Zhang",
"Zhendong Mao"
] |
The answering quality of an aligned large language model (LLM) can be
drastically improved if treated with proper crafting of prompts. In this paper,
we propose ExpertPrompting to elicit the potential of LLMs to answer as
distinguished experts. We first utilize In-Context Learning to automatically
synthesize detailed and customized descriptions of the expert identity for each
specific instruction, and then ask LLMs to provide answer conditioned on such
agent background. Based on this augmented prompting strategy, we produce a new
set of instruction-following data using GPT-3.5, and train a competitive
open-source chat assistant called ExpertLLaMA. We employ GPT4-based evaluation
to show that 1) the expert data is of significantly higher quality than vanilla
answers, and 2) ExpertLLaMA outperforms existing open-source opponents and
achieves 96\% of the original ChatGPT's capability. All data and the
ExpertLLaMA model will be made publicly available at
\url{https://github.com/OFA-Sys/ExpertLLaMA}.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.14693
|
2023-05-24T03:53:43Z
|
Have Large Language Models Developed a Personality?: Applicability of
Self-Assessment Tests in Measuring Personality in LLMs
|
[
"Xiaoyang Song",
"Akshat Gupta",
"Kiyan Mohebbizadeh",
"Shujie Hu",
"Anant Singh"
] |
Have Large Language Models (LLMs) developed a personality? The short answer
is a resounding "We Don't Know!". In this paper, we show that we do not yet
have the right tools to measure personality in language models. Personality is
an important characteristic that influences behavior. As LLMs emulate
human-like intelligence and performance in various tasks, a natural question to
ask is whether these models have developed a personality. Previous works have
evaluated machine personality through self-assessment personality tests, which
are a set of multiple-choice questions created to evaluate personality in
humans. A fundamental assumption here is that human personality tests can
accurately measure personality in machines. In this paper, we investigate the
emergence of personality in five LLMs of different sizes ranging from 1.5B to
30B. We propose the Option-Order Symmetry property as a necessary condition for
the reliability of these self-assessment tests. Under this condition, the
answer to self-assessment questions is invariant to the order in which the
options are presented. We find that many LLMs personality test responses do not
preserve option-order symmetry. We take a deeper look at LLMs test responses
where option-order symmetry is preserved to find that in these cases, LLMs do
not take into account the situational statement being tested and produce the
exact same answer irrespective of the situation being tested. We also identify
the existence of inherent biases in these LLMs which is the root cause of the
aforementioned phenomenon and makes self-assessment tests unreliable. These
observations indicate that self-assessment tests are not the correct tools to
measure personality in LLMs. Through this paper, we hope to draw attention to
the shortcomings of current literature in measuring personality in LLMs and
call for developing tools for machine personality measurement.
|
[
"cs.CL",
"cs.LG"
] | false |
2305.14701
|
2023-05-24T04:11:59Z
|
Modeling rapid language learning by distilling Bayesian priors into
artificial neural networks
|
[
"R. Thomas McCoy",
"Thomas L. Griffiths"
] |
Humans can learn languages from remarkably little experience. Developing
computational models that explain this ability has been a major challenge in
cognitive science. Bayesian models that build in strong inductive biases -
factors that guide generalization - have been successful at explaining how
humans might generalize from few examples in controlled settings but are
usually too restrictive to be tractably applied to more naturalistic data. By
contrast, neural networks have flexible representations that allow them to
learn well from naturalistic data but require many more examples than humans
receive. We show that learning from limited naturalistic data is possible with
an approach that combines the strong inductive biases of a Bayesian model with
the flexible representations of a neural network. This approach works by
distilling a Bayesian model's biases into a neural network. Like a Bayesian
model, the resulting system can learn formal linguistic patterns from a small
number of examples. Like a neural network, it can also learn aspects of English
syntax from a corpus of natural language - and it outperforms a standard neural
network at acquiring the linguistic phenomena of recursion and priming.
Bridging the divide between Bayesian models and neural networks makes it
possible to handle a broader range of learning scenarios than either approach
can handle on its own.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.14717
|
2023-05-24T04:38:29Z
|
Exploiting Correlations Between Contexts and Definitions with Multiple
Definition Modeling
|
[
"Linhan Zhang",
"Qian Chen",
"Wen Wang",
"Yuxin Jiang",
"Bing Li",
"Wei Wang",
"Xin Cao"
] |
Definition modeling is an important task in advanced natural language
applications such as understanding and conversation. Since its introduction, it
focus on generating one definition for a target word or phrase in a given
context, which we refer to as Single Definition Modeling (SDM). However, this
approach does not adequately model the correlations and patterns among
different contexts and definitions of words. In addition, the creation of a
training dataset for SDM requires significant human expertise and effort. In
this paper, we carefully design a new task called Multiple Definition Modeling
(MDM) that pool together all contexts and definition of target words. We
demonstrate the ease of creating a model as well as multiple training sets
automatically. % In the experiments, we demonstrate and analyze the benefits of
MDM, including improving SDM's performance by using MDM as the pretraining task
and its comparable performance in the zero-shot setting.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.14751
|
2023-05-24T05:53:38Z
|
DialogVCS: Robust Natural Language Understanding in Dialogue System
Upgrade
|
[
"Zefan Cai",
"Xin Zheng",
"Tianyu Liu",
"Xu Wang",
"Haoran Meng",
"Jiaqi Han",
"Gang Yuan",
"Binghuai Lin",
"Baobao Chang",
"Yunbo Cao"
] |
In the constant updates of the product dialogue systems, we need to retrain
the natural language understanding (NLU) model as new data from the real users
would be merged into the existent data accumulated in the last updates. Within
the newly added data, new intents would emerge and might have semantic
entanglement with the existing intents, e.g. new intents that are semantically
too specific or generic are actually subset or superset of some existing
intents in the semantic space, thus impairing the robustness of the NLU model.
As the first attempt to solve this problem, we setup a new benchmark consisting
of 4 Dialogue Version Control dataSets (DialogVCS). We formulate the intent
detection with imperfect data in the system update as a multi-label
classification task with positive but unlabeled intents, which asks the models
to recognize all the proper intents, including the ones with semantic
entanglement, in the inference. We also propose comprehensive baseline models
and conduct in-depth analyses for the benchmark, showing that the semantically
entangled intents can be effectively recognized with an automatic workflow.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.14842
|
2023-05-24T07:48:41Z
|
Exploring Sentiment Analysis Techniques in Natural Language Processing:
A Comprehensive Review
|
[
"Karthick Prasad Gunasekaran"
] |
Sentiment analysis (SA) is the automated process of detecting and
understanding the emotions conveyed through written text. Over the past decade,
SA has gained significant popularity in the field of Natural Language
Processing (NLP). With the widespread use of social media and online platforms,
SA has become crucial for companies to gather customer feedback and shape their
marketing strategies. Additionally, researchers rely on SA to analyze public
sentiment on various topics. In this particular research study, a comprehensive
survey was conducted to explore the latest trends and techniques in SA. The
survey encompassed a wide range of methods, including lexicon-based,
graph-based, network-based, machine learning, deep learning, ensemble-based,
rule-based, and hybrid techniques. The paper also addresses the challenges and
opportunities in SA, such as dealing with sarcasm and irony, analyzing
multi-lingual data, and addressing ethical concerns. To provide a practical
case study, Twitter was chosen as one of the largest online social media
platforms. Furthermore, the researchers shed light on the diverse application
areas of SA, including social media, healthcare, marketing, finance, and
politics. The paper also presents a comparative and comprehensive analysis of
existing trends and techniques, datasets, and evaluation metrics. The ultimate
goal is to offer researchers and practitioners a systematic review of SA
techniques, identify existing gaps, and suggest possible improvements. This
study aims to enhance the efficiency and accuracy of SA processes, leading to
smoother and error-free outcomes.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.14891
|
2023-05-24T08:41:23Z
|
Extracting Psychological Indicators Using Question Answering
|
[
"Luka Pavlović"
] |
In this work, we propose a method for extracting text spans that may indicate
one of the BIG5 psychological traits using a question-answering task with
examples that have no answer for the asked question. We utilized the RoBERTa
model fine-tuned on SQuAD 2.0 dataset. The model was further fine-tuned
utilizing comments from Reddit. We examined the effect of the percentage of
examples with no answer in the training dataset on the overall performance. The
results obtained in this study are in line with the SQuAD 2.0 benchmark and
present a good baseline for further research.
|
[
"cs.CL",
"cs.CY"
] | false |
2305.14917
|
2023-05-24T09:04:18Z
|
Structural Ambiguity and its Disambiguation in Language Model Based
Parsers: the Case of Dutch Clause Relativization
|
[
"Gijs Wijnholds",
"Michael Moortgat"
] |
This paper addresses structural ambiguity in Dutch relative clauses. By
investigating the task of disambiguation by grounding, we study how the
presence of a prior sentence can resolve relative clause ambiguities. We apply
this method to two parsing architectures in an attempt to demystify the parsing
and language model components of two present-day neural parsers. Results show
that a neurosymbolic parser, based on proof nets, is more open to data bias
correction than an approach based on universal dependencies, although both
setups suffer from a comparable initial data bias.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.15024
|
2023-05-24T11:06:23Z
|
ChatAgri: Exploring Potentials of ChatGPT on Cross-linguistic
Agricultural Text Classification
|
[
"Biao Zhao",
"Weiqiang Jin",
"Javier Del Ser",
"Guang Yang"
] |
In the era of sustainable smart agriculture, a massive amount of agricultural
news text is being posted on the Internet, in which massive agricultural
knowledge has been accumulated. In this context, it is urgent to explore
effective text classification techniques for users to access the required
agricultural knowledge with high efficiency. Mainstream deep learning
approaches employing fine-tuning strategies on pre-trained language models
(PLMs), have demonstrated remarkable performance gains over the past few years.
Nonetheless, these methods still face many drawbacks that are complex to solve,
including: 1. Limited agricultural training data due to the expensive-cost and
labour-intensive annotation; 2. Poor domain transferability, especially of
cross-linguistic ability; 3. Complex and expensive large models
deployment.Inspired by the extraordinary success brought by the recent ChatGPT
(e.g. GPT-3.5, GPT-4), in this work, we systematically investigate and explore
the capability and utilization of ChatGPT applying to the agricultural
informatization field. ....(shown in article).... Code has been released on
Github
https://github.com/albert-jin/agricultural_textual_classification_ChatGPT.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.15048
|
2023-05-24T11:38:39Z
|
Ranger: A Toolkit for Effect-Size Based Multi-Task Evaluation
|
[
"Mete Sertkan",
"Sophia Althammer",
"Sebastian Hofstätter"
] |
In this paper, we introduce Ranger - a toolkit to facilitate the easy use of
effect-size-based meta-analysis for multi-task evaluation in NLP and IR. We
observed that our communities often face the challenge of aggregating results
over incomparable metrics and scenarios, which makes conclusions and take-away
messages less reliable. With Ranger, we aim to address this issue by providing
a task-agnostic toolkit that combines the effect of a treatment on multiple
tasks into one statistical evaluation, allowing for comparison of metrics and
computation of an overall summary effect. Our toolkit produces
publication-ready forest plots that enable clear communication of evaluation
results over multiple tasks. Our goal with the ready-to-use Ranger toolkit is
to promote robust, effect-size-based evaluation and improve evaluation
standards in the community. We provide two case studies for common IR and NLP
settings to highlight Ranger's benefits.
|
[
"cs.CL",
"cs.IR"
] | false |
2305.15053
|
2023-05-24T11:43:40Z
|
Decomposing Complex Queries for Tip-of-the-tongue Retrieval
|
[
"Kevin Lin",
"Kyle Lo",
"Joseph E. Gonzalez",
"Dan Klein"
] |
When re-finding items, users who forget or are uncertain about identifying
details often rely on creative strategies for expressing their information
needs -- complex queries that describe content elements (e.g., book characters
or events), information beyond the document text (e.g., descriptions of book
covers), or personal context (e.g., when they read a book). This retrieval
setting, called tip of the tongue (TOT), is especially challenging for models
heavily reliant on lexical and semantic overlap between query and document
text. In this work, we introduce a simple yet effective framework for handling
such complex queries by decomposing the query into individual clues, routing
those as sub-queries to specialized retrievers, and ensembling the results.
This approach allows us to take advantage of off-the-shelf retrievers (e.g.,
CLIP for retrieving images of book covers) or incorporate retriever-specific
logic (e.g., date constraints). We show that our framework incorportating query
decompositions into retrievers can improve gold book recall up to 7% relative
again for Recall@5 on a new collection of 14,441 real-world query-book pairs
from an online community for resolving TOT inquiries.
|
[
"cs.CL",
"cs.IR"
] | false |
2305.15075
|
2023-05-24T11:56:01Z
|
HuatuoGPT, towards Taming Language Model to Be a Doctor
|
[
"Hongbo Zhang",
"Junying Chen",
"Feng Jiang",
"Fei Yu",
"Zhihong Chen",
"Jianquan Li",
"Guiming Chen",
"Xiangbo Wu",
"Zhiyi Zhang",
"Qingying Xiao",
"Xiang Wan",
"Benyou Wang",
"Haizhou Li"
] |
In this paper, we present HuatuoGPT, a large language model (LLM) for medical
consultation. The core recipe of HuatuoGPT is to leverage both
\textit{distilled data from ChatGPT} and \textit{real-world data from doctors}
in the supervised fine-tuned stage. The responses of ChatGPT are usually
detailed, well-presented and informative while it cannot perform like a doctor
in many aspects, e.g. for integrative diagnosis. We argue that real-world data
from doctors would be complementary to distilled data in the sense the former
could tame a distilled language model to perform like doctors. To better
leverage the strengths of both data, we train a reward model to align the
language model with the merits that both data bring, following an RLAIF
(reinforced learning from AI feedback) fashion. To evaluate and benchmark the
models, we propose a comprehensive evaluation scheme (including automatic and
manual metrics). Experimental results demonstrate that HuatuoGPT achieves
state-of-the-art results in performing medical consultation among open-source
LLMs in GPT-4 evaluation, human evaluation, and medical benchmark datasets. It
is worth noting that by using additional real-world data and RLAIF, the
distilled language model (i.e., HuatuoGPT) outperforms its teacher model
ChatGPT in most cases. Our code, data, and models are publicly available at
\url{https://github.com/FreedomIntelligence/HuatuoGPT}. The online demo is
available at \url{https://www.HuatuoGPT.cn/}.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.15130
|
2023-05-24T13:25:20Z
|
On Degrees of Freedom in Defining and Testing Natural Language
Understanding
|
[
"Saku Sugawara",
"Shun Tsugita"
] |
Natural language understanding (NLU) studies often exaggerate or
underestimate the capabilities of systems, thereby limiting the reproducibility
of their findings. These erroneous evaluations can be attributed to the
difficulty of defining and testing NLU adequately. In this position paper, we
reconsider this challenge by identifying two types of researcher degrees of
freedom. We revisit Turing's original interpretation of the Turing test and
indicate that an NLU test does not provide an operational definition; it merely
provides inductive evidence that the test subject understands the language
sufficiently well to meet stakeholder objectives. In other words, stakeholders
are free to arbitrarily define NLU through their objectives. To use the test
results as inductive evidence, stakeholders must carefully assess if the
interpretation of test scores is valid or not. However, designing and using NLU
tests involve other degrees of freedom, such as specifying target skills and
defining evaluation metrics. As a result, achieving consensus among
stakeholders becomes difficult. To resolve this issue, we propose a validity
argument, which is a framework comprising a series of validation criteria
across test components. By demonstrating that current practices in NLU studies
can be associated with those criteria and organizing them into a comprehensive
checklist, we prove that the validity argument can serve as a coherent
guideline for designing credible test sets and facilitating scientific
communication.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.15186
|
2023-05-24T14:26:30Z
|
SciReviewGen: A Large-scale Dataset for Automatic Literature Review
Generation
|
[
"Tetsu Kasanishi",
"Masaru Isonuma",
"Junichiro Mori",
"Ichiro Sakata"
] |
Automatic literature review generation is one of the most challenging tasks
in natural language processing. Although large language models have tackled
literature review generation, the absence of large-scale datasets has been a
stumbling block to the progress. We release SciReviewGen, consisting of over
10,000 literature reviews and 690,000 papers cited in the reviews. Based on the
dataset, we evaluate recent transformer-based summarization models on the
literature review generation task, including Fusion-in-Decoder extended for
literature review generation. Human evaluation results show that some
machine-generated summaries are comparable to human-written reviews, while
revealing the challenges of automatic literature review generation such as
hallucinations and a lack of detailed information. Our dataset and code are
available at https://github.com/tetsu9923/SciReviewGen.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.15233
|
2023-05-24T15:14:49Z
|
Boosting Cross-lingual Transferability in Multilingual Models via
In-Context Learning
|
[
"Sunkyoung Kim",
"Dayeon Ki",
"Yireun Kim",
"Jinsik Lee"
] |
Existing cross-lingual transfer (CLT) prompting methods are only concerned
with monolingual demonstration examples in the source language. In this paper,
we propose In-CLT, a novel cross-lingual transfer prompting method that
leverages both source and target languages to construct the demonstration
examples. We conduct comprehensive evaluations on multilingual benchmarks,
focusing on question answering tasks. Experiment results show that In-CLT
prompt not only improves multilingual models' cross-lingual transferability,
but also demonstrates remarkable unseen language generalization ability. In-CLT
prompting, in particular, improves model performance by 10 to 20\% points on
average when compared to prior cross-lingual transfer approaches. We also
observe the surprising performance gain on the other multilingual benchmarks,
especially in reasoning tasks. Furthermore, we investigate the relationship
between lexical similarity and pre-training corpora in terms of the
cross-lingual transfer gap.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.15268
|
2023-05-24T15:55:40Z
|
EvEval: A Comprehensive Evaluation of Event Semantics for Large Language
Models
|
[
"Zhengwei Tao",
"Zhi Jin",
"Xiaoying Bai",
"Haiyan Zhao",
"Yanlin Feng",
"Jia Li",
"Wenpeng Hu"
] |
Events serve as fundamental units of occurrence within various contexts. The
processing of event semantics in textual information forms the basis of
numerous natural language processing (NLP) applications. Recent studies have
begun leveraging large language models (LLMs) to address event semantic
processing. However, the extent that LLMs can effectively tackle these
challenges remains uncertain. Furthermore, the lack of a comprehensive
evaluation framework for event semantic processing poses a significant
challenge in evaluating these capabilities. In this paper, we propose an
overarching framework for event semantic processing, encompassing
understanding, reasoning, and prediction, along with their fine-grained
aspects. To comprehensively evaluate the event semantic processing abilities of
models, we introduce a novel benchmark called EVEVAL. We collect 8 datasets
that cover all aspects of event semantic processing. Extensive experiments are
conducted on EVEVAL, leading to several noteworthy findings based on the
obtained results.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.15321
|
2023-05-24T16:37:35Z
|
Towards Foundation Models for Relational Databases [Vision Paper]
|
[
"Liane Vogel",
"Benjamin Hilprecht",
"Carsten Binnig"
] |
Tabular representation learning has recently gained a lot of attention.
However, existing approaches only learn a representation from a single table,
and thus ignore the potential to learn from the full structure of relational
databases, including neighboring tables that can contain important information
for a contextualized representation. Moreover, current models are significantly
limited in scale, which prevents that they learn from large databases. In this
paper, we thus introduce our vision of relational representation learning, that
can not only learn from the full relational structure, but also can scale to
larger database sizes that are commonly found in real-world. Moreover, we also
discuss opportunities and challenges we see along the way to enable this vision
and present initial very promising results. Overall, we argue that this
direction can lead to foundation models for relational databases that are today
only available for text and images.
|
[
"cs.DB",
"cs.CL"
] | false |
2305.15334
|
2023-05-24T16:48:11Z
|
Gorilla: Large Language Model Connected with Massive APIs
|
[
"Shishir G. Patil",
"Tianjun Zhang",
"Xin Wang",
"Joseph E. Gonzalez"
] |
Large Language Models (LLMs) have seen an impressive wave of advances
recently, with models now excelling in a variety of tasks, such as mathematical
reasoning and program synthesis. However, their potential to effectively use
tools via API calls remains unfulfilled. This is a challenging task even for
today's state-of-the-art LLMs such as GPT-4, largely due to their inability to
generate accurate input arguments and their tendency to hallucinate the wrong
usage of an API call. We release Gorilla, a finetuned LLaMA-based model that
surpasses the performance of GPT-4 on writing API calls. When combined with a
document retriever, Gorilla demonstrates a strong capability to adapt to
test-time document changes, enabling flexible user updates or version changes.
It also substantially mitigates the issue of hallucination, commonly
encountered when prompting LLMs directly. To evaluate the model's ability, we
introduce APIBench, a comprehensive dataset consisting of HuggingFace,
TorchHub, and TensorHub APIs. The successful integration of the retrieval
system with Gorilla demonstrates the potential for LLMs to use tools more
accurately, keep up with frequently updated documentation, and consequently
increase the reliability and applicability of their outputs. Gorilla's code,
model, data, and demo are available at https://gorilla.cs.berkeley.edu
|
[
"cs.CL",
"cs.AI"
] | false |
2305.15338
|
2023-05-24T16:50:36Z
|
Measuring and Mitigating Constraint Violations of In-Context Learning
for Utterance-to-API Semantic Parsing
|
[
"Shufan Wang",
"Sebastien Jean",
"Sailik Sengupta",
"James Gung",
"Nikolaos Pappas",
"Yi Zhang"
] |
In executable task-oriented semantic parsing, the system aims to translate
users' utterances in natural language to machine-interpretable programs (API
calls) that can be executed according to pre-defined API specifications. With
the popularity of Large Language Models (LLMs), in-context learning offers a
strong baseline for such scenarios, especially in data-limited regimes.
However, LLMs are known to hallucinate and therefore pose a formidable
challenge in constraining generated content. Thus, it remains uncertain if LLMs
can effectively perform task-oriented utterance-to-API generation where
respecting API's structural and task-specific constraints is crucial.
In this work, we seek to measure, analyze and mitigate such constraints
violations. First, we identify the categories of various constraints in
obtaining API-semantics from task-oriented utterances, and define fine-grained
metrics that complement traditional ones. Second, we leverage these metrics to
conduct a detailed error analysis of constraints violations seen in
state-of-the-art LLMs, which motivates us to investigate two mitigation
strategies: Semantic-Retrieval of Demonstrations (SRD) and API-aware
Constrained Decoding (API-CD). Our experiments show that these strategies are
effective at reducing constraints violations and improving the quality of the
generated API calls, but require careful consideration given their
implementation complexity and latency.
|
[
"cs.AI",
"cs.CL"
] | false |
2305.15344
|
2023-05-24T16:57:04Z
|
Learning Answer Generation using Supervision from Automatic Question
Answering Evaluators
|
[
"Matteo Gabburo",
"Siddhant Garg",
"Rik Koncel-Kedziorski",
"Alessandro Moschitti"
] |
Recent studies show that sentence-level extractive QA, i.e., based on Answer
Sentence Selection (AS2), is outperformed by Generation-based QA (GenQA)
models, which generate answers using the top-k answer sentences ranked by AS2
models (a la retrieval-augmented generation style). In this paper, we propose a
novel training paradigm for GenQA using supervision from automatic QA
evaluation models (GAVA). Specifically, we propose three strategies to transfer
knowledge from these QA evaluation models to a GenQA model: (i) augmenting
training data with answers generated by the GenQA model and labelled by GAVA
(either statically, before training, or (ii) dynamically, at every training
epoch); and (iii) using the GAVA score for weighting the generator loss during
the learning of the GenQA model. We evaluate our proposed methods on two
academic and one industrial dataset, obtaining a significant improvement in
answering accuracy over the previous state of the art.
|
[
"cs.CL",
"cs.LG"
] | false |
2305.15358
|
2023-05-24T17:10:45Z
|
Context-Aware Transformer Pre-Training for Answer Sentence Selection
|
[
"Luca Di Liello",
"Siddhant Garg",
"Alessandro Moschitti"
] |
Answer Sentence Selection (AS2) is a core component for building an accurate
Question Answering pipeline. AS2 models rank a set of candidate sentences based
on how likely they answer a given question. The state of the art in AS2
exploits pre-trained transformers by transferring them on large annotated
datasets, while using local contextual information around the candidate
sentence. In this paper, we propose three pre-training objectives designed to
mimic the downstream fine-tuning task of contextual AS2. This allows for
specializing LMs when fine-tuning for contextual AS2. Our experiments on three
public and two large-scale industrial datasets show that our pre-training
approaches (applied to RoBERTa and ELECTRA) can improve baseline contextual AS2
accuracy by up to 8% on some datasets.
|
[
"cs.CL",
"cs.LG"
] | false |
2305.15387
|
2023-05-24T17:48:40Z
|
Peek Across: Improving Multi-Document Modeling via Cross-Document
Question-Answering
|
[
"Avi Caciularu",
"Matthew E. Peters",
"Jacob Goldberger",
"Ido Dagan",
"Arman Cohan"
] |
The integration of multi-document pre-training objectives into language
models has resulted in remarkable improvements in multi-document downstream
tasks. In this work, we propose extending this idea by pre-training a generic
multi-document model from a novel cross-document question answering
pre-training objective. To that end, given a set (or cluster) of
topically-related documents, we systematically generate semantically-oriented
questions from a salient sentence in one document and challenge the model,
during pre-training, to answer these questions while "peeking" into other
topically-related documents. In a similar manner, the model is also challenged
to recover the sentence from which the question was generated, again while
leveraging cross-document information. This novel multi-document QA formulation
directs the model to better recover cross-text informational relations, and
introduces a natural augmentation that artificially increases the pre-training
data. Further, unlike prior multi-document models that focus on either
classification or summarization tasks, our pre-training objective formulation
enables the model to perform tasks that involve both short text generation
(e.g., QA) and long text generation (e.g., summarization). Following this
scheme, we pre-train our model -- termed QAmden -- and evaluate its performance
across several multi-document tasks, including multi-document QA,
summarization, and query-focused summarization, yielding improvements of up to
7%, and significantly outperforms zero-shot GPT-3.5 and GPT-4.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.15507
|
2023-05-24T18:54:39Z
|
The Larger They Are, the Harder They Fail: Language Models do not
Recognize Identifier Swaps in Python
|
[
"Antonio Valerio Miceli-Barone",
"Fazl Barez",
"Ioannis Konstas",
"Shay B. Cohen"
] |
Large Language Models (LLMs) have successfully been applied to code
generation tasks, raising the question of how well these models understand
programming. Typical programming languages have invariances and equivariances
in their semantics that human programmers intuitively understand and exploit,
such as the (near) invariance to the renaming of identifiers. We show that LLMs
not only fail to properly generate correct Python code when default function
names are swapped, but some of them even become more confident in their
incorrect predictions as the model size increases, an instance of the recently
discovered phenomenon of Inverse Scaling, which runs contrary to the commonly
observed trend of increasing prediction quality with increasing model size. Our
findings indicate that, despite their astonishing typical-case performance,
LLMs still lack a deep, abstract understanding of the content they manipulate,
making them unsuitable for tasks that statistically deviate from their training
data, and that mere scaling is not enough to achieve such capability.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.15525
|
2023-05-24T19:25:16Z
|
Large Language Models are Few-Shot Health Learners
|
[
"Xin Liu",
"Daniel McDuff",
"Geza Kovacs",
"Isaac Galatzer-Levy",
"Jacob Sunshine",
"Jiening Zhan",
"Ming-Zher Poh",
"Shun Liao",
"Paolo Di Achille",
"Shwetak Patel"
] |
Large language models (LLMs) can capture rich representations of concepts
that are useful for real-world tasks. However, language alone is limited. While
existing LLMs excel at text-based inferences, health applications require that
models be grounded in numerical data (e.g., vital signs, laboratory values in
clinical domains; steps, movement in the wellness domain) that is not easily or
readily expressed as text in existing training corpus. We demonstrate that with
only few-shot tuning, a large language model is capable of grounding various
physiological and behavioral time-series data and making meaningful inferences
on numerous health tasks for both clinical and wellness contexts. Using data
from wearable and medical sensor recordings, we evaluate these capabilities on
the tasks of cardiac signal analysis, physical activity recognition, metabolic
calculation (e.g., calories burned), and estimation of stress reports and
mental health screeners.
|
[
"cs.CL",
"cs.LG"
] | false |
2305.15541
|
2023-05-24T19:59:51Z
|
Harnessing the Power of Large Language Models for Natural Language to
First-Order Logic Translation
|
[
"Yuan Yang",
"Siheng Xiong",
"Ali Payani",
"Ehsan Shareghi",
"Faramarz Fekri"
] |
Translating natural language sentences to first-order logic (NL-FOL
translation) is a longstanding challenge in the NLP and formal logic
literature. This paper introduces LogicLLaMA, a LLaMA-7B model fine-tuned for
NL-FOL translation using LoRA on a single GPU. LogicLLaMA is capable of
directly translating natural language into FOL rules, which outperforms
GPT-3.5. LogicLLaMA is also equipped to correct FOL rules predicted by GPT-3.5,
and can achieve similar performance as GPT-4 with a fraction of the cost. This
correction ability was achieved by a novel supervised fine-tuning (SFT) +
reinforcement learning with human feedback (RLHF) framework, which initially
trains on synthetically perturbed NL-FOL pairs to encourage chain-of-thought
reasoning and then fine-tunes with RLHF on GPT-3.5 outputs using a FOL verifier
as the reward model.
To train LogicLLaMA, we present MALLS (large language $\textbf{M}$odel
gener$\textbf{A}$ted N$\textbf{L}$-FO$\textbf{L}$ pair$\textbf{S}$), a dataset
of 34K high-quality and diverse sentence-level NL-FOL pairs collected from
GPT-4. The dataset was created by implementing a pipeline that prompts GPT-4
for pairs, and dynamically adjusts the prompts to ensure the collection of
pairs with rich and diverse contexts at different levels of complexity, and
verifies the validity of the generated FOL rules. Codes, weights, and data are
available at $\href{https://github.com/gblackout/LogicLLaMA}{{\small
\text{https://github.com/gblackout/LogicLLaMA}}}$.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.15587
|
2023-05-24T21:52:13Z
|
How do humans perceive adversarial text? A reality check on the validity
and naturalness of word-based adversarial attacks
|
[
"Salijona Dyrmishi",
"Salah Ghamizi",
"Maxime Cordy"
] |
Natural Language Processing (NLP) models based on Machine Learning (ML) are
susceptible to adversarial attacks -- malicious algorithms that imperceptibly
modify input text to force models into making incorrect predictions. However,
evaluations of these attacks ignore the property of imperceptibility or study
it under limited settings. This entails that adversarial perturbations would
not pass any human quality gate and do not represent real threats to
human-checked NLP systems. To bypass this limitation and enable proper
assessment (and later, improvement) of NLP model robustness, we have surveyed
378 human participants about the perceptibility of text adversarial examples
produced by state-of-the-art methods. Our results underline that existing text
attacks are impractical in real-world scenarios where humans are involved. This
contrasts with previous smaller-scale human studies, which reported overly
optimistic conclusions regarding attack success. Through our work, we hope to
position human perceptibility as a first-class success criterion for text
attacks, and provide guidance for research to build effective attack algorithms
and, in turn, design appropriate defence mechanisms.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.16343
|
2023-05-24T10:05:59Z
|
A Distributed Automatic Domain-Specific Multi-Word Term Recognition
Architecture using Spark Ecosystem
|
[
"Ciprian-Octavian Truică",
"Neculai-Ovidiu Istrate",
"Elena-Simona Apostol"
] |
Automatic Term Recognition is used to extract domain-specific terms that
belong to a given domain. In order to be accurate, these corpus and
language-dependent methods require large volumes of textual data that need to
be processed to extract candidate terms that are afterward scored according to
a given metric. To improve text preprocessing and candidate terms extraction
and scoring, we propose a distributed Spark-based architecture to automatically
extract domain-specific terms. The main contributions are as follows: (1)
propose a novel distributed automatic domain-specific multi-word term
recognition architecture built on top of the Spark ecosystem; (2) perform an
in-depth analysis of our architecture in terms of accuracy and scalability; (3)
design an easy-to-integrate Python implementation that enables the use of Big
Data processing in fields such as Computational Linguistics and Natural
Language Processing. We prove empirically the feasibility of our architecture
by performing experiments on two real-world datasets.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.04657
|
2023-05-24T10:25:12Z
|
Improving Empathetic Dialogue Generation by Dynamically Infusing
Commonsense Knowledge
|
[
"Hua Cai",
"Xuli Shen",
"Qing Xu",
"Weilin Shen",
"Xiaomei Wang",
"Weifeng Ge",
"Xiaoqing Zheng",
"Xiangyang Xue"
] |
In empathetic conversations, individuals express their empathy towards
others. Previous work has mainly focused on generating empathetic responses by
utilizing the speaker's emotion. Besides, external commonsense knowledge has
been applied to enhance the system's understandings of the speaker's situation.
However, given an event, commonsense knowledge base contains various relations,
potentially leading to confusion for the dialogue system. Consequently,
inconsistencies arise among the emotion, generated response and speaker's
contextual information. To this end, we propose a novel approach for empathetic
response generation, which incorporates an adaptive module for commonsense
knowledge selection to ensure consistency between the generated empathetic
responses and the speaker's situation. This selected knowledge is used to
refine the commonsense cognition and empathy expression for generated
responses. Experimental results show that our approach significantly
outperforms baseline models in both automatic and human evaluations, exhibiting
the generation of more coherent and empathetic responses. Moreover, case
studies highlight the interpretability of knowledge selection in the responses
and the effectiveness of adaptive module in our model. Code:
https://github.com/Hanscal/DCKS.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.14597
|
2023-05-24T00:40:49Z
|
Voices of Her: Analyzing Gender Differences in the AI Publication World
|
[
"Yiwen Ding",
"Jiarui Liu",
"Zhiheng Lyu",
"Kun Zhang",
"Bernhard Schoelkopf",
"Zhijing Jin",
"Rada Mihalcea"
] |
While several previous studies have analyzed gender bias in research, we are
still missing a comprehensive analysis of gender differences in the AI
community, covering diverse topics and different development trends. Using the
AI Scholar dataset of 78K researchers in the field of AI, we identify several
gender differences: (1) Although female researchers tend to have fewer overall
citations than males, this citation difference does not hold for all
academic-age groups; (2) There exist large gender homophily in co-authorship on
AI papers; (3) Female first-authored papers show distinct linguistic styles,
such as longer text, more positive emotion words, and more catchy titles than
male first-authored papers. Our analysis provides a window into the current
demographic trends in our AI community, and encourages more gender equality and
diversity in the future. Our code and data are at
https://github.com/causalNLP/ai-scholar-gender.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.14775
|
2023-05-24T06:26:11Z
|
Measuring the Knowledge Acquisition-Utilization Gap in Pretrained
Language Models
|
[
"Amirhossein Kazemnejad",
"Mehdi Rezagholizadeh",
"Prasanna Parthasarathi",
"Sarath Chandar"
] |
While pre-trained language models (PLMs) have shown evidence of acquiring
vast amounts of knowledge, it remains unclear how much of this parametric
knowledge is actually usable in performing downstream tasks. We propose a
systematic framework to measure parametric knowledge utilization in PLMs. Our
framework first extracts knowledge from a PLM's parameters and subsequently
constructs a downstream task around this extracted knowledge. Performance on
this task thus depends exclusively on utilizing the model's possessed
knowledge, avoiding confounding factors like insufficient signal. As an
instantiation, we study factual knowledge of PLMs and measure utilization
across 125M to 13B parameter PLMs. We observe that: (1) PLMs exhibit two gaps -
in acquired vs. utilized knowledge, (2) they show limited robustness in
utilizing knowledge under distribution shifts, and (3) larger models close the
acquired knowledge gap but the utilized knowledge gap remains. Overall, our
study provides insights into PLMs' capabilities beyond their acquired
knowledge.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.14784
|
2023-05-24T06:39:45Z
|
Anthropomorphization of AI: Opportunities and Risks
|
[
"Ameet Deshpande",
"Tanmay Rajpurohit",
"Karthik Narasimhan",
"Ashwin Kalyan"
] |
Anthropomorphization is the tendency to attribute human-like traits to
non-human entities. It is prevalent in many social contexts -- children
anthropomorphize toys, adults do so with brands, and it is a literary device.
It is also a versatile tool in science, with behavioral psychology and
evolutionary biology meticulously documenting its consequences. With widespread
adoption of AI systems, and the push from stakeholders to make it human-like
through alignment techniques, human voice, and pictorial avatars, the tendency
for users to anthropomorphize it increases significantly. We take a dyadic
approach to understanding this phenomenon with large language models (LLMs) by
studying (1) the objective legal implications, as analyzed through the lens of
the recent blueprint of AI bill of rights and the (2) subtle psychological
aspects customization and anthropomorphization. We find that anthropomorphized
LLMs customized for different user bases violate multiple provisions in the
legislative blueprint. In addition, we point out that anthropomorphization of
LLMs affects the influence they can have on their users, thus having the
potential to fundamentally change the nature of human-AI interaction, with
potential for manipulation and negative influence. With LLMs being
hyper-personalized for vulnerable groups like children and patients among
others, our work is a timely and important contribution. We propose a
conservative strategy for the cautious use of anthropomorphization to improve
trustworthiness of AI systems.
|
[
"cs.AI",
"cs.CL",
"cs.CY",
"cs.LG"
] | false |
2305.14888
|
2023-05-24T08:37:27Z
|
Privacy Implications of Retrieval-Based Language Models
|
[
"Yangsibo Huang",
"Samyak Gupta",
"Zexuan Zhong",
"Kai Li",
"Danqi Chen"
] |
Retrieval-based language models (LMs) have demonstrated improved
interpretability, factuality, and adaptability compared to their parametric
counterparts, by incorporating retrieved text from external datastores. While
it is well known that parametric models are prone to leaking private data, it
remains unclear how the addition of a retrieval datastore impacts model
privacy. In this work, we present the first study of privacy risks in
retrieval-based LMs, particularly $k$NN-LMs. Our goal is to explore the optimal
design and training procedure in domains where privacy is of concern, aiming to
strike a balance between utility and privacy. Crucially, we find that $k$NN-LMs
are more susceptible to leaking private information from their private
datastore than parametric models. We further explore mitigations of privacy
risks. When privacy information is targeted and readily detected in the text,
we find that a simple sanitization step would completely eliminate the risks,
while decoupling query and key encoders achieves an even better utility-privacy
trade-off. Otherwise, we consider strategies of mixing public and private data
in both datastore and encoder training. While these methods offer modest
improvements, they leave considerable room for future work. Together, our
findings provide insights for practitioners to better understand and mitigate
privacy risks in retrieval-based LMs. Our code is available at:
https://github.com/Princeton-SysML/kNNLM_privacy .
|
[
"cs.CL",
"cs.CR",
"cs.LG"
] | false |
2305.14904
|
2023-05-24T08:56:35Z
|
Identifying Informational Sources in News Articles
|
[
"Alexander Spangher",
"Nanyun Peng",
"Jonathan May",
"Emilio Ferrara"
] |
News articles are driven by the informational sources journalists use in
reporting. Modeling when, how and why sources get used together in stories can
help us better understand the information we consume and even help journalists
with the task of producing it. In this work, we take steps toward this goal by
constructing the largest and widest-ranging annotated dataset, to date, of
informational sources used in news writing. We show that our dataset can be
used to train high-performing models for information detection and source
attribution. We further introduce a novel task, source prediction, to study the
compositionality of sources in news articles. We show good performance on this
task, which we argue is an important proof for narrative science exploring the
internal structure of news articles and aiding in planning-based language
generation, and an important step towards a source-recommendation system to aid
journalists.
|
[
"cs.CL",
"cs.AI",
"cs.CY"
] | false |
2305.14981
|
2023-05-24T10:15:17Z
|
Improving Factuality of Abstractive Summarization without Sacrificing
Summary Quality
|
[
"Tanay Dixit",
"Fei Wang",
"Muhao Chen"
] |
Improving factual consistency of abstractive summarization has been a widely
studied topic. However, most of the prior works on training factuality-aware
models have ignored the negative effect it has on summary quality. We propose
EFACTSUM (i.e., Effective Factual Summarization), a candidate summary
generation and ranking technique to improve summary factuality without
sacrificing summary quality. We show that using a contrastive learning
framework with our refined candidate summaries leads to significant gains on
both factuality and similarity-based metrics. Specifically, we propose a
ranking strategy in which we effectively combine two metrics, thereby
preventing any conflict during training. Models trained using our approach show
up to 6 points of absolute improvement over the base model with respect to
FactCC on XSUM and 11 points on CNN/DM, without negatively affecting either
similarity-based metrics or absractiveness.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.15008
|
2023-05-24T10:48:05Z
|
Are Chatbots Ready for Privacy-Sensitive Applications? An Investigation
into Input Regurgitation and Prompt-Induced Sanitization
|
[
"Aman Priyanshu",
"Supriti Vijay",
"Ayush Kumar",
"Rakshit Naidu",
"Fatemehsadat Mireshghallah"
] |
LLM-powered chatbots are becoming widely adopted in applications such as
healthcare, personal assistants, industry hiring decisions, etc. In many of
these cases, chatbots are fed sensitive, personal information in their prompts,
as samples for in-context learning, retrieved records from a database, or as
part of the conversation. The information provided in the prompt could directly
appear in the output, which might have privacy ramifications if there is
sensitive information there. As such, in this paper, we aim to understand the
input copying and regurgitation capabilities of these models during inference
and how they can be directly instructed to limit this copying by complying with
regulations such as HIPAA and GDPR, based on their internal knowledge of them.
More specifically, we find that when ChatGPT is prompted to summarize cover
letters of a 100 candidates, it would retain personally identifiable
information (PII) verbatim in 57.4% of cases, and we find this retention to be
non-uniform between different subgroups of people, based on attributes such as
gender identity. We then probe ChatGPT's perception of privacy-related policies
and privatization mechanisms by directly instructing it to provide compliant
outputs and observe a significant omission of PII from output.
|
[
"cs.CL",
"cs.AI",
"cs.CY"
] | false |
2305.15032
|
2023-05-24T11:16:09Z
|
How to Distill your BERT: An Empirical Study on the Impact of Weight
Initialisation and Distillation Objectives
|
[
"Xinpeng Wang",
"Leonie Weissweiler",
"Hinrich Schütze",
"Barbara Plank"
] |
Recently, various intermediate layer distillation (ILD) objectives have been
shown to improve compression of BERT models via Knowledge Distillation (KD).
However, a comprehensive evaluation of the objectives in both task-specific and
task-agnostic settings is lacking. To the best of our knowledge, this is the
first work comprehensively evaluating distillation objectives in both settings.
We show that attention transfer gives the best performance overall. We also
study the impact of layer choice when initializing the student from the teacher
layers, finding a significant impact on the performance in task-specific
distillation. For vanilla KD and hidden states transfer, initialisation with
lower layers of the teacher gives a considerable improvement over higher
layers, especially on the task of QNLI (up to an absolute percentage change of
17.8 in accuracy). Attention transfer behaves consistently under different
initialisation settings. We release our code as an efficient transformer-based
model distillation framework for further studies.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.15138
|
2023-05-24T13:35:08Z
|
Topic-Guided Self-Introduction Generation for Social Media Users
|
[
"Chunpu Xu",
"Jing Li",
"Piji Li",
"Min Yang"
] |
Millions of users are active on social media. To allow users to better
showcase themselves and network with others, we explore the auto-generation of
social media self-introduction, a short sentence outlining a user's personal
interests. While most prior work profiles users with tags (e.g., ages), we
investigate sentence-level self-introductions to provide a more natural and
engaging way for users to know each other. Here we exploit a user's tweeting
history to generate their self-introduction. The task is non-trivial because
the history content may be lengthy, noisy, and exhibit various personal
interests. To address this challenge, we propose a novel unified topic-guided
encoder-decoder (UTGED) framework; it models latent topics to reflect salient
user interest, whose topic mixture then guides encoding a user's history and
topic words control decoding their self-introduction. For experiments, we
collect a large-scale Twitter dataset, and extensive results show the
superiority of our UTGED to the advanced encoder-decoder models without topic
modeling.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.15222
|
2023-05-24T15:05:53Z
|
Neural Summarization of Electronic Health Records
|
[
"Koyena Pal",
"Seyed Ali Bahrainian",
"Laura Mercurio",
"Carsten Eickhoff"
] |
Hospital discharge documentation is among the most essential, yet
time-consuming documents written by medical practitioners. The objective of
this study was to automatically generate hospital discharge summaries using
neural network summarization models. We studied various data preparation and
neural network training techniques that generate discharge summaries. Using
nursing notes and discharge summaries from the MIMIC-III dataset, we studied
the viability of the automatic generation of various sections of a discharge
summary using four state-of-the-art neural network summarization models (BART,
T5, Longformer and FLAN-T5). Our experiments indicated that training
environments including nursing notes as the source, and discrete sections of
the discharge summary as the target output (e.g. "History of Present Illness")
improve language model efficiency and text quality. According to our findings,
the fine-tuned BART model improved its ROUGE F1 score by 43.6% against its
standard off-the-shelf version. We also found that fine-tuning the baseline
BART model with other setups caused different degrees of improvement (up to 80%
relative improvement). We also observed that a fine-tuned T5 generally achieves
higher ROUGE F1 scores than other fine-tuned models and a fine-tuned FLAN-T5
achieves the highest ROUGE score overall, i.e., 45.6. For majority of the
fine-tuned language models, summarizing discharge summary report sections
separately outperformed the summarization the entire report quantitatively. On
the other hand, fine-tuning language models that were previously instruction
fine-tuned showed better performance in summarizing entire reports. This study
concludes that a focused dataset designed for the automatic generation of
discharge summaries by a language model can produce coherent Discharge Summary
sections.
|
[
"cs.CL",
"cs.AI",
"cs.IR"
] | false |
2305.15374
|
2023-05-24T17:32:58Z
|
ASPER: Answer Set Programming Enhanced Neural Network Models for Joint
Entity-Relation Extraction
|
[
"Trung Hoang Le",
"Huiping Cao",
"Tran Cao Son"
] |
A plethora of approaches have been proposed for joint entity-relation (ER)
extraction. Most of these methods largely depend on a large amount of manually
annotated training data. However, manual data annotation is time consuming,
labor intensive, and error prone. Human beings learn using both data (through
induction) and knowledge (through deduction). Answer Set Programming (ASP) has
been a widely utilized approach for knowledge representation and reasoning that
is elaboration tolerant and adept at reasoning with incomplete information.
This paper proposes a new approach, ASP-enhanced Entity-Relation extraction
(ASPER), to jointly recognize entities and relations by learning from both data
and domain knowledge. In particular, ASPER takes advantage of the factual
knowledge (represented as facts in ASP) and derived knowledge (represented as
rules in ASP) in the learning process of neural network models. We have
conducted experiments on two real datasets and compare our method with three
baselines. The results show that our ASPER model consistently outperforms the
baselines.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.15403
|
2023-05-24T17:59:03Z
|
AV-TranSpeech: Audio-Visual Robust Speech-to-Speech Translation
|
[
"Rongjie Huang",
"Huadai Liu",
"Xize Cheng",
"Yi Ren",
"Linjun Li",
"Zhenhui Ye",
"Jinzheng He",
"Lichao Zhang",
"Jinglin Liu",
"Xiang Yin",
"Zhou Zhao"
] |
Direct speech-to-speech translation (S2ST) aims to convert speech from one
language into another, and has demonstrated significant progress to date.
Despite the recent success, current S2ST models still suffer from distinct
degradation in noisy environments and fail to translate visual speech (i.e.,
the movement of lips and teeth). In this work, we present AV-TranSpeech, the
first audio-visual speech-to-speech (AV-S2ST) translation model without relying
on intermediate text. AV-TranSpeech complements the audio stream with visual
information to promote system robustness and opens up a host of practical
applications: dictation or dubbing archival films. To mitigate the data
scarcity with limited parallel AV-S2ST data, we 1) explore self-supervised
pre-training with unlabeled audio-visual data to learn contextual
representation, and 2) introduce cross-modal distillation with S2ST models
trained on the audio-only corpus to further reduce the requirements of visual
data. Experimental results on two language pairs demonstrate that AV-TranSpeech
outperforms audio-only models under all settings regardless of the type of
noise. With low-resource audio-visual data (10h, 30h), cross-modal distillation
yields an improvement of 7.6 BLEU on average compared with baselines. Audio
samples are available at https://AV-TranSpeech.github.io
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2305.15498
|
2023-05-24T18:40:43Z
|
Large Language Models for User Interest Journeys
|
[
"Konstantina Christakopoulou",
"Alberto Lalama",
"Cj Adams",
"Iris Qu",
"Yifat Amir",
"Samer Chucri",
"Pierce Vollucci",
"Fabio Soldo",
"Dina Bseiso",
"Sarah Scodel",
"Lucas Dixon",
"Ed H. Chi",
"Minmin Chen"
] |
Large language models (LLMs) have shown impressive capabilities in natural
language understanding and generation. Their potential for deeper user
understanding and improved personalized user experience on recommendation
platforms is, however, largely untapped. This paper aims to address this gap.
Recommender systems today capture users' interests through encoding their
historical activities on the platforms. The generated user representations are
hard to examine or interpret. On the other hand, if we were to ask people about
interests they pursue in their life, they might talk about their hobbies, like
I just started learning the ukulele, or their relaxation routines, e.g., I like
to watch Saturday Night Live, or I want to plant a vertical garden. We argue,
and demonstrate through extensive experiments, that LLMs as foundation models
can reason through user activities, and describe their interests in nuanced and
interesting ways, similar to how a human would.
We define interest journeys as the persistent and overarching user interests,
in other words, the non-transient ones. These are the interests that we believe
will benefit most from the nuanced and personalized descriptions. We introduce
a framework in which we first perform personalized extraction of interest
journeys, and then summarize the extracted journeys via LLMs, using techniques
like few-shot prompting, prompt-tuning and fine-tuning. Together, our results
in prompting LLMs to name extracted user journeys in a large-scale industrial
platform demonstrate great potential of these models in providing deeper, more
interpretable, and controllable user understanding. We believe LLM powered user
understanding can be a stepping stone to entirely new user experiences on
recommendation platforms that are journey-aware, assistive, and enabling
frictionless conversation down the line.
|
[
"cs.CL",
"cs.AI",
"cs.IR"
] | false |
2305.15594
|
2023-05-24T22:06:08Z
|
Flocks of Stochastic Parrots: Differentially Private Prompt Learning for
Large Language Models
|
[
"Haonan Duan",
"Adam Dziedzic",
"Nicolas Papernot",
"Franziska Boenisch"
] |
Large language models (LLMs) are excellent in-context learners. However, the
sensitivity of data contained in prompts raises privacy concerns. Our work
first shows that these concerns are valid: we instantiate a simple but highly
effective membership inference attack against the data used to prompt LLMs. To
address this vulnerability, one could forego prompting and resort to
fine-tuning LLMs with known algorithms for private gradient descent. However,
this comes at the expense of the practicality and efficiency offered by
prompting. Therefore, we propose to privately learn to prompt. We first show
that soft prompts can be obtained privately through gradient descent on
downstream data. However, this is not the case for discrete prompts. Thus, we
orchestrate a noisy vote among an ensemble of LLMs presented with different
prompts, i.e., a flock of stochastic parrots. The vote privately transfers the
flock's knowledge into a single public prompt. We show that LLMs prompted with
our private algorithms closely match the non-private baselines. For example,
using GPT3 as the base model, we achieve a downstream accuracy of 92.7% on the
sst2 dataset with ($\epsilon=0.147, \delta=10^{-6}$)-differential privacy vs.
95.2% for the non-private baseline. Through our experiments, we also show that
our prompt-based approach is easily deployed with existing commercial APIs.
|
[
"cs.LG",
"cs.CL",
"cs.CR"
] | false |
2305.15597
|
2023-05-24T22:09:35Z
|
Text-Augmented Open Knowledge Graph Completion via Pre-Trained Language
Models
|
[
"Pengcheng Jiang",
"Shivam Agarwal",
"Bowen Jin",
"Xuan Wang",
"Jimeng Sun",
"Jiawei Han"
] |
The mission of open knowledge graph (KG) completion is to draw new findings
from known facts. Existing works that augment KG completion require either (1)
factual triples to enlarge the graph reasoning space or (2) manually designed
prompts to extract knowledge from a pre-trained language model (PLM),
exhibiting limited performance and requiring expensive efforts from experts. To
this end, we propose TAGREAL that automatically generates quality query prompts
and retrieves support information from large text corpora to probe knowledge
from PLM for KG completion. The results show that TAGREAL achieves
state-of-the-art performance on two benchmark datasets. We find that TAGREAL
has superb performance even with limited training data, outperforming existing
embedding-based, graph-based, and PLM-based methods.
|
[
"cs.CL",
"cs.AI",
"cs.IR"
] | false |
2305.16338
|
2023-05-24T01:20:22Z
|
Think Before You Act: Decision Transformers with Internal Working Memory
|
[
"Jikun Kang",
"Romain Laroche",
"Xindi Yuan",
"Adam Trischler",
"Xue Liu",
"Jie Fu"
] |
Large language model (LLM)-based decision-making agents have shown the
ability to generalize across multiple tasks. However, their performance relies
on massive data and compute. We argue that this inefficiency stems from the
forgetting phenomenon, in which a model memorizes its behaviors in parameters
throughout training. As a result, training on a new task may deteriorate the
model's performance on previous tasks. In contrast to LLMs' implicit memory
mechanism, the human brain utilizes distributed memory storage, which helps
manage and organize multiple skills efficiently, mitigating the forgetting
phenomenon. Thus inspired, we propose an internal working memory module to
store, blend, and retrieve information for different downstream tasks.
Evaluation results show that the proposed method improves training efficiency
and generalization in both Atari games and meta-world object manipulation
tasks. Moreover, we demonstrate that memory fine-tuning further enhances the
adaptability of the proposed architecture.
|
[
"cs.LG",
"cs.AI",
"cs.CL"
] | true |
2305.16349
|
2023-05-24T19:10:46Z
|
Lexinvariant Language Models
|
[
"Qian Huang",
"Eric Zelikman",
"Sarah Li Chen",
"Yuhuai Wu",
"Gregory Valiant",
"Percy Liang"
] |
Token embeddings, a mapping from discrete lexical symbols to continuous
vectors, are at the heart of any language model (LM). However, lexical symbol
meanings can also be determined and even redefined by their structural role in
a long context. In this paper, we ask: is it possible for a language model to
be performant without \emph{any} fixed token embeddings? Such a language model
would have to rely entirely on the co-occurence and repetition of tokens in the
context rather than the \textit{a priori} identity of any token. To answer
this, we study \textit{lexinvariant}language models that are invariant to
lexical symbols and therefore do not need fixed token embeddings in practice.
First, we prove that we can construct a lexinvariant LM to converge to the true
language model at a uniform rate that is polynomial in terms of the context
length, with a constant factor that is sublinear in the vocabulary size.
Second, to build a lexinvariant LM, we simply encode tokens using random
Gaussian vectors, such that each token maps to the same representation within
each sequence but different representations across sequences. Empirically, we
demonstrate that it can indeed attain perplexity comparable to that of a
standard language model, given a sufficiently long context. We further explore
two properties of the lexinvariant language models: First, given text generated
from a substitution cipher of English, it implicitly implements Bayesian
in-context deciphering and infers the mapping to the underlying real tokens
with high accuracy. Second, it has on average 4X better accuracy over synthetic
in-context reasoning tasks. Finally, we discuss regularizing standard language
models towards lexinvariance and potential practical applications.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | true |
2305.18330
|
2023-05-24T07:10:56Z
|
#REVAL: a semantic evaluation framework for hashtag recommendation
|
[
"Areej Alsini",
"Du Q. Huynh",
"Amitava Datta"
] |
Automatic evaluation of hashtag recommendation models is a fundamental task
in many online social network systems. In the traditional evaluation method,
the recommended hashtags from an algorithm are firstly compared with the ground
truth hashtags for exact correspondences. The number of exact matches is then
used to calculate the hit rate, hit ratio, precision, recall, or F1-score. This
way of evaluating hashtag similarities is inadequate as it ignores the semantic
correlation between the recommended and ground truth hashtags. To tackle this
problem, we propose a novel semantic evaluation framework for hashtag
recommendation, called #REval. This framework includes an internal module
referred to as BERTag, which automatically learns the hashtag embeddings. We
investigate on how the #REval framework performs under different word embedding
methods and different numbers of synonyms and hashtags in the recommendation
using our proposed #REval-hit-ratio measure. Our experiments of the proposed
framework on three large datasets show that #REval gave more meaningful hashtag
synonyms for hashtag recommendation evaluation. Our analysis also highlights
the sensitivity of the framework to the word embedding technique, with #REval
based on BERTag more superior over #REval based on FastText and Word2Vec.
|
[
"cs.IR",
"cs.AI",
"cs.CL",
"I.2.7"
] | false |
2305.14655
|
2023-05-24T02:51:29Z
|
Learning Survival Distribution with Implicit Survival Function
|
[
"Yu Ling",
"Weimin Tan",
"Bo Yan"
] |
Survival analysis aims at modeling the relationship between covariates and
event occurrence with some untracked (censored) samples. In implementation,
existing methods model the survival distribution with strong assumptions or in
a discrete time space for likelihood estimation with censorship, which leads to
weak generalization. In this paper, we propose Implicit Survival Function (ISF)
based on Implicit Neural Representation for survival distribution estimation
without strong assumptions,and employ numerical integration to approximate the
cumulative distribution function for prediction and optimization. Experimental
results show that ISF outperforms the state-of-the-art methods in three public
datasets and has robustness to the hyperparameter controlling estimation
precision.
|
[
"cs.LG"
] | false |
2305.14712
|
2023-05-24T04:27:57Z
|
On the Generalization of Diffusion Model
|
[
"Mingyang Yi",
"Jiacheng Sun",
"Zhenguo Li"
] |
The diffusion probabilistic generative models are widely used to generate
high-quality data. Though they can synthetic data that does not exist in the
training set, the rationale behind such generalization is still unexplored. In
this paper, we formally define the generalization of the generative model,
which is measured by the mutual information between the generated data and the
training set. The definition originates from the intuition that the model which
generates data with less correlation to the training set exhibits better
generalization ability. Meanwhile, we show that for the empirical optimal
diffusion model, the data generated by a deterministic sampler are all highly
related to the training set, thus poor generalization. This result contradicts
the observation of the trained diffusion model's (approximating empirical
optima) extrapolation ability (generating unseen data). To understand this
contradiction, we empirically verify the difference between the sufficiently
trained diffusion model and the empirical optima. We found, though obtained
through sufficient training, there still exists a slight difference between
them, which is critical to making the diffusion model generalizable. Moreover,
we propose another training objective whose empirical optimal solution has no
potential generalization problem. We empirically show that the proposed
training objective returns a similar model to the original one, which further
verifies the generalization ability of the trained diffusion model.
|
[
"cs.LG"
] | false |
2305.14745
|
2023-05-24T05:39:46Z
|
Applications of Machine Learning in Detecting Afghan Fake Banknotes
|
[
"Hamida Ashna",
"Ziaullah Momand"
] |
Fake currency, unauthorized imitation money lacking government approval,
constitutes a form of fraud. Particularly in Afghanistan, the prevalence of
fake currency poses significant challenges and detrimentally impacts the
economy. While banks and commercial establishments employ authentication
machines, the public lacks access to such systems, necessitating a program that
can detect counterfeit banknotes accessible to all. This paper introduces a
method using image processing to identify counterfeit Afghan banknotes by
analyzing specific security features. Extracting first and second order
statistical features from input images, the WEKA machine learning tool was
employed to construct models and perform classification with Random Forest,
PART, and Na\"ive Bayes algorithms. The Random Forest algorithm achieved
exceptional accuracy of 99% in detecting fake Afghan banknotes, indicating the
efficacy of the proposed method as a solution for identifying counterfeit
currency.
|
[
"cs.LG"
] | false |
2305.15174
|
2023-05-24T14:06:02Z
|
Simultaneous identification of models and parameters of scientific
simulators
|
[
"Cornelius Schröder",
"Jakob H. Macke"
] |
Many scientific models are composed of multiple discrete components, and
scien tists often make heuristic decisions about which components to include.
Bayesian inference provides a mathematical framework for systematically
selecting model components, but defining prior distributions over model
components and developing associated inference schemes has been challenging. We
approach this problem in an amortized simulation-based inference framework: We
define implicit model priors over a fixed set of candidate components and train
neural networks to infer joint probability distributions over both, model
components and associated parameters from simulations. To represent
distributions over model components, we introduce a conditional mixture of
multivariate binary distributions in the Grassmann formalism. Our approach can
be applied to any compositional stochastic simulator without requiring access
to likelihood evaluations. We first illustrate our method on a simple time
series model with redundant components and show that it can retrieve joint
posterior distribution over a set of symbolic expressions and their parameters
while accurately capturing redundancy with strongly correlated posteriors. We
then apply our approach to drift-diffusion models, a commonly used model class
in cognitive neuroscience. After validating the method on synthetic data, we
show that our approach explains experimental data as well as previous methods,
but that our fully probabilistic approach can help to discover multiple
data-consistent model configurations, as well as reveal non-identifiable model
components and parameters. Our method provides a powerful tool for data-driven
scientific inquiry which will allow scientists to systematically identify
essential model components and make uncertainty-informed modelling decisions.
|
[
"cs.LG"
] | false |
2305.15563
|
2023-05-24T20:54:48Z
|
Fantastic DNN Classifiers and How to Identify them without Data
|
[
"Nathaniel Dean",
"Dilip Sarkar"
] |
Current algorithms and architecture can create excellent DNN classifier
models from example data. In general, larger training datasets result in better
model estimations, which improve test performance. Existing methods for
predicting generalization performance are based on hold-out test examples. To
the best of our knowledge, at present no method exists that can estimate the
quality of a trained DNN classifier without test data. In this paper, we show
that the quality of a trained DNN classifier can be assessed without any
example data. We consider DNNs to be composed of a feature extractor and a
feature classifier; the feature extractor's output is fed to the classifier.
The proposed method iteratively creates class prototypes in the input space for
each class by minimizing a cross-entropy loss function at the output of the
network. We use these prototypes and their feature relationships to reveal the
quality of the classifier. We have developed two metrics: one using the
features of the prototypes and the other using adversarial examples
corresponding to each prototype. Empirical evaluations show that accuracy
obtained from test examples is directly proportional to quality measures
obtained from the proposed metrics. We report our observations for ResNet18
with Tiny ImageNet, CIFAR100, and CIFAR10 datasets. The proposed metrics can be
used to compare performances of two or more classifiers without test examples.
|
[
"cs.LG",
"I.5.1"
] | false |
2305.15591
|
2023-05-24T21:58:19Z
|
Lightweight Learner for Shared Knowledge Lifelong Learning
|
[
"Yunhao Ge",
"Yuecheng Li",
"Di Wu",
"Ao Xu",
"Adam M. Jones",
"Amanda Sofie Rios",
"Iordanis Fostiropoulos",
"Shixian Wen",
"Po-Hsuan Huang",
"Zachary William Murdock",
"Gozde Sahin",
"Shuo Ni",
"Kiran Lekkala",
"Sumedh Anand Sontakke",
"Laurent Itti"
] |
In Lifelong Learning (LL), agents continually learn as they encounter new
conditions and tasks. Most current LL is limited to a single agent that learns
tasks sequentially. Dedicated LL machinery is then deployed to mitigate the
forgetting of old tasks as new tasks are learned. This is inherently slow. We
propose a new Shared Knowledge Lifelong Learning (SKILL) challenge, which
deploys a decentralized population of LL agents that each sequentially learn
different tasks, with all agents operating independently and in parallel. After
learning their respective tasks, agents share and consolidate their knowledge
over a decentralized communication network, so that, in the end, all agents can
master all tasks. We present one solution to SKILL which uses Lightweight
Lifelong Learning (LLL) agents, where the goal is to facilitate efficient
sharing by minimizing the fraction of the agent that is specialized for any
given task. Each LLL agent thus consists of a common task-agnostic immutable
part, where most parameters are, and individual task-specific modules that
contain fewer parameters but are adapted to each task. Agents share their
task-specific modules, plus summary information ("task anchors") representing
their tasks in the common task-agnostic latent space of all agents. Receiving
agents register each received task-specific module using the corresponding
anchor. Thus, every agent improves its ability to solve new tasks each time new
task-specific modules and anchors are received. On a new, very challenging
SKILL-102 dataset with 102 image classification tasks (5,033 classes in total,
2,041,225 training, 243,464 validation, and 243,464 test images), we achieve
much higher (and SOTA) accuracy over 8 LL baselines, while also achieving near
perfect parallelization. Code and data can be found at
https://github.com/gyhandy/Shared-Knowledge-Lifelong-Learning
|
[
"cs.LG"
] | false |
2305.15621
|
2023-05-24T23:49:06Z
|
Matrix Estimation for Offline Reinforcement Learning with Low-Rank
Structure
|
[
"Xumei Xi",
"Christina Lee Yu",
"Yudong Chen"
] |
We consider offline Reinforcement Learning (RL), where the agent does not
interact with the environment and must rely on offline data collected using a
behavior policy. Previous works provide policy evaluation guarantees when the
target policy to be evaluated is covered by the behavior policy, that is,
state-action pairs visited by the target policy must also be visited by the
behavior policy. We show that when the MDP has a latent low-rank structure,
this coverage condition can be relaxed. Building on the connection to weighted
matrix completion with non-uniform observations, we propose an offline policy
evaluation algorithm that leverages the low-rank structure to estimate the
values of uncovered state-action pairs. Our algorithm does not require a known
feature representation, and our finite-sample error bound involves a novel
discrepancy measure quantifying the discrepancy between the behavior and target
policies in the spectral space. We provide concrete examples where our
algorithm achieves accurate estimation while existing coverage conditions are
not satisfied. Building on the above evaluation algorithm, we further design an
offline policy optimization algorithm and provide non-asymptotic performance
guarantees.
|
[
"cs.LG"
] | false |
2305.16348
|
2023-05-24T18:55:54Z
|
Machine learning-based characterization of hydrochar from biomass:
Implications for sustainable energy and material production
|
[
"Alireza Shafizadeh",
"Hossein Shahbeik",
"Shahin Rafiee",
"Aysooda Moradi",
"Mohammadreza Shahbaz",
"Meysam Madadi",
"Cheng Li",
"Wanxi Peng",
"Meisam Tabatabaei",
"Mortaza Aghbashlo"
] |
Hydrothermal carbonization (HTC) is a process that converts biomass into
versatile hydrochar without the need for prior drying. The physicochemical
properties of hydrochar are influenced by biomass properties and processing
parameters, making it challenging to optimize for specific applications through
trial-and-error experiments. To save time and money, machine learning can be
used to develop a model that characterizes hydrochar produced from different
biomass sources under varying reaction processing parameters. Thus, this study
aims to develop an inclusive model to characterize hydrochar using a database
covering a range of biomass types and reaction processing parameters. The
quality and quantity of hydrochar are predicted using two models (decision tree
regression and support vector regression). The decision tree regression model
outperforms the support vector regression model in terms of forecast accuracy
(R2 > 0.88, RMSE < 6.848, and MAE < 4.718). Using an evolutionary algorithm,
optimum inputs are identified based on cost functions provided by the selected
model to optimize hydrochar for energy production, soil amendment, and
pollutant adsorption, resulting in hydrochar yields of 84.31%, 84.91%, and
80.40%, respectively. The feature importance analysis reveals that biomass
ash/carbon content and operating temperature are the primary factors affecting
hydrochar production in the HTC process.
|
[
"cs.LG"
] | false |
2305.16350
|
2023-05-24T19:59:21Z
|
Using evolutionary machine learning to characterize and optimize
co-pyrolysis of biomass feedstocks and polymeric wastes
|
[
"Hossein Shahbeik",
"Alireza Shafizadeh",
"Mohammad Hossein Nadian",
"Dorsa Jeddi",
"Seyedali Mirjalili",
"Yadong Yang",
"Su Shiung Lam",
"Junting Pan",
"Meisam Tabatabaei",
"Mortaza Aghbashlo"
] |
Co-pyrolysis of biomass feedstocks with polymeric wastes is a promising
strategy for improving the quantity and quality parameters of the resulting
liquid fuel. Numerous experimental measurements are typically conducted to find
the optimal operating conditions. However, performing co-pyrolysis experiments
is highly challenging due to the need for costly and lengthy procedures.
Machine learning (ML) provides capabilities to cope with such issues by
leveraging on existing data. This work aims to introduce an evolutionary ML
approach to quantify the (by)products of the biomass-polymer co-pyrolysis
process. A comprehensive dataset covering various biomass-polymer mixtures
under a broad range of process conditions is compiled from the qualified
literature. The database was subjected to statistical analysis and mechanistic
discussion. The input features are constructed using an innovative approach to
reflect the physics of the process. The constructed features are subjected to
principal component analysis to reduce their dimensionality. The obtained
scores are introduced into six ML models. Gaussian process regression model
tuned by particle swarm optimization algorithm presents better prediction
performance (R2 > 0.9, MAE < 0.03, and RMSE < 0.06) than other developed
models. The multi-objective particle swarm optimization algorithm successfully
finds optimal independent parameters.
|
[
"cs.LG"
] | false |
2305.14606
|
2023-05-24T01:10:58Z
|
Taylor Learning
|
[
"James Schmidt"
] |
Empirical risk minimization stands behind most optimization in supervised
machine learning. Under this scheme, labeled data is used to approximate an
expected cost (risk), and a learning algorithm updates model-defining
parameters in search of an empirical risk minimizer, with the aim of thereby
approximately minimizing expected cost. Parameter update is often done by some
sort of gradient descent. In this paper, we introduce a learning algorithm to
construct models for real analytic functions using neither gradient descent nor
empirical risk minimization. Observing that such functions are defined by local
information, we situate familiar Taylor approximation methods in the context of
sampling data from a distribution, and prove a nonuniform learning result.
|
[
"stat.ML",
"cs.LG"
] | false |
2305.14608
|
2023-05-24T01:12:08Z
|
Inverse Reinforcement Learning with the Average Reward Criterion
|
[
"Feiyang Wu",
"Jingyang Ke",
"Anqi Wu"
] |
We study the problem of Inverse Reinforcement Learning (IRL) with an
average-reward criterion. The goal is to recover an unknown policy and a reward
function when the agent only has samples of states and actions from an
experienced agent. Previous IRL methods assume that the expert is trained in a
discounted environment, and the discount factor is known. This work alleviates
this assumption by proposing an average-reward framework with efficient
learning algorithms. We develop novel stochastic first-order methods to solve
the IRL problem under the average-reward setting, which requires solving an
Average-reward Markov Decision Process (AMDP) as a subproblem. To solve the
subproblem, we develop a Stochastic Policy Mirror Descent (SPMD) method under
general state and action spaces that needs $\mathcal{{O}}(1/\varepsilon)$ steps
of gradient computation. Equipped with SPMD, we propose the Inverse Policy
Mirror Descent (IPMD) method for solving the IRL problem with a
$\mathcal{O}(1/\varepsilon^2)$ complexity. To the best of our knowledge, the
aforementioned complexity results are new in IRL. Finally, we corroborate our
analysis with numerical experiments using the MuJoCo benchmark and additional
control tasks.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.14644
|
2023-05-24T02:27:34Z
|
KARNet: Kalman Filter Augmented Recurrent Neural Network for Learning
World Models in Autonomous Driving Tasks
|
[
"Hemanth Manjunatha",
"Andrey Pak",
"Dimitar Filev",
"Panagiotis Tsiotras"
] |
Autonomous driving has received a great deal of attention in the automotive
industry and is often seen as the future of transportation. The development of
autonomous driving technology has been greatly accelerated by the growth of
end-to-end machine learning techniques that have been successfully used for
perception, planning, and control tasks. An important aspect of autonomous
driving planning is knowing how the environment evolves in the immediate future
and taking appropriate actions. An autonomous driving system should effectively
use the information collected from the various sensors to form an abstract
representation of the world to maintain situational awareness. For this
purpose, deep learning models can be used to learn compact latent
representations from a stream of incoming data. However, most deep learning
models are trained end-to-end and do not incorporate any prior knowledge (e.g.,
from physics) of the vehicle in the architecture. In this direction, many works
have explored physics-infused neural network (PINN) architectures to infuse
physics models during training. Inspired by this observation, we present a
Kalman filter augmented recurrent neural network architecture to learn the
latent representation of the traffic flow using front camera images only. We
demonstrate the efficacy of the proposed model in both imitation and
reinforcement learning settings using both simulated and real-world datasets.
The results show that incorporating an explicit model of the vehicle (states
estimated using Kalman filtering) in the end-to-end learning significantly
increases performance.
|
[
"cs.LG",
"cs.RO"
] | false |
2305.14709
|
2023-05-24T04:26:21Z
|
Regret Matching+: (In)Stability and Fast Convergence in Games
|
[
"Gabriele Farina",
"Julien Grand-Clément",
"Christian Kroer",
"Chung-Wei Lee",
"Haipeng Luo"
] |
Regret Matching+ (RM+) and its variants are important algorithms for solving
large-scale games. However, a theoretical understanding of their success in
practice is still a mystery. Moreover, recent advances on fast convergence in
games are limited to no-regret algorithms such as online mirror descent, which
satisfy stability. In this paper, we first give counterexamples showing that
RM+ and its predictive version can be unstable, which might cause other players
to suffer large regret. We then provide two fixes: restarting and chopping off
the positive orthant that RM+ works in. We show that these fixes are sufficient
to get $O(T^{1/4})$ individual regret and $O(1)$ social regret in normal-form
games via RM+ with predictions. We also apply our stabilizing techniques to
clairvoyant updates in the uncoupled learning setting for RM+ and prove
desirable results akin to recent works for Clairvoyant online mirror descent.
Our experiments show the advantages of our algorithms over vanilla RM+-based
algorithms in matrix and extensive-form games.
|
[
"cs.GT",
"cs.LG"
] | false |
2305.14765
|
2023-05-24T06:16:11Z
|
Masked Bayesian Neural Networks : Theoretical Guarantee and its
Posterior Inference
|
[
"Insung Kong",
"Dongyoon Yang",
"Jongjin Lee",
"Ilsang Ohn",
"Gyuseung Baek",
"Yongdai Kim"
] |
Bayesian approaches for learning deep neural networks (BNN) have been
received much attention and successfully applied to various applications.
Particularly, BNNs have the merit of having better generalization ability as
well as better uncertainty quantification. For the success of BNN, search an
appropriate architecture of the neural networks is an important task, and
various algorithms to find good sparse neural networks have been proposed. In
this paper, we propose a new node-sparse BNN model which has good theoretical
properties and is computationally feasible. We prove that the posterior
concentration rate to the true model is near minimax optimal and adaptive to
the smoothness of the true model. In particular the adaptiveness is the first
of its kind for node-sparse BNNs. In addition, we develop a novel MCMC
algorithm which makes the Bayesian inference of the node-sparse BNN model
feasible in practice.
|
[
"stat.ML",
"cs.LG"
] | false |
2305.14814
|
2023-05-24T07:09:53Z
|
What functions can Graph Neural Networks compute on random graphs? The
role of Positional Encoding
|
[
"Nicolas Keriven",
"Samuel Vaiter"
] |
We aim to deepen the theoretical understanding of Graph Neural Networks
(GNNs) on large graphs, with a focus on their expressive power. Existing
analyses relate this notion to the graph isomorphism problem, which is mostly
relevant for graphs of small sizes, or studied graph classification or
regression tasks, while prediction tasks on nodes are far more relevant on
large graphs. Recently, several works showed that, on very general random
graphs models, GNNs converge to certains functions as the number of nodes
grows. In this paper, we provide a more complete and intuitive description of
the function space generated by equivariant GNNs for node-tasks, through
general notions of convergence that encompass several previous examples. We
emphasize the role of input node features, and study the impact of node
Positional Encodings (PEs), a recent line of work that has been shown to yield
state-of-the-art results in practice. Through the study of several examples of
PEs on large random graphs, we extend previously known universality results to
significantly more general models. Our theoretical results hint at some
normalization tricks, which is shown numerically to have a positive impact on
GNN generalization on synthetic and real data. Our proofs contain new
concentration inequalities of independent interest.
|
[
"cs.LG",
"stat.ML"
] | false |
2305.14826
|
2023-05-24T07:34:15Z
|
Building Transportation Foundation Model via Generative Graph
Transformer
|
[
"Xuhong Wang",
"Ding Wang",
"Liang Chen",
"Yilun Lin"
] |
Efficient traffic management is crucial for maintaining urban mobility,
especially in densely populated areas where congestion, accidents, and delays
can lead to frustrating and expensive commutes. However, existing prediction
methods face challenges in terms of optimizing a single objective and
understanding the complex composition of the transportation system. Moreover,
they lack the ability to understand the macroscopic system and cannot
efficiently utilize big data. In this paper, we propose a novel approach,
Transportation Foundation Model (TFM), which integrates the principles of
traffic simulation into traffic prediction. TFM uses graph structures and
dynamic graph generation algorithms to capture the participatory behavior and
interaction of transportation system actors. This data-driven and model-free
simulation method addresses the challenges faced by traditional systems in
terms of structural complexity and model accuracy and provides a foundation for
solving complex transportation problems with real data. The proposed approach
shows promising results in accurately predicting traffic outcomes in an urban
transportation setting.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.14852
|
2023-05-24T08:01:49Z
|
SWAMP: Sparse Weight Averaging with Multiple Particles for Iterative
Magnitude Pruning
|
[
"Moonseok Choi",
"Hyungi Lee",
"Giung Nam",
"Juho Lee"
] |
Given the ever-increasing size of modern neural networks, the significance of
sparse architectures has surged due to their accelerated inference speeds and
minimal memory demands. When it comes to global pruning techniques, Iterative
Magnitude Pruning (IMP) still stands as a state-of-the-art algorithm despite
its simple nature, particularly in extremely sparse regimes. In light of the
recent finding that the two successive matching IMP solutions are linearly
connected without a loss barrier, we propose Sparse Weight Averaging with
Multiple Particles (SWAMP), a straightforward modification of IMP that achieves
performance comparable to an ensemble of two IMP solutions. For every
iteration, we concurrently train multiple sparse models, referred to as
particles, using different batch orders yet the same matching ticket, and then
weight average such models to produce a single mask. We demonstrate that our
method consistently outperforms existing baselines across different sparsities
through extensive experiments on various data and neural network structures.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.14984
|
2023-05-24T10:18:45Z
|
Adversarial robustness of amortized Bayesian inference
|
[
"Manuel Glöckler",
"Michael Deistler",
"Jakob H. Macke"
] |
Bayesian inference usually requires running potentially costly inference
procedures separately for every new observation. In contrast, the idea of
amortized Bayesian inference is to initially invest computational cost in
training an inference network on simulated data, which can subsequently be used
to rapidly perform inference (i.e., to return estimates of posterior
distributions) for new observations. This approach has been applied to many
real-world models in the sciences and engineering, but it is unclear how robust
the approach is to adversarial perturbations in the observed data. Here, we
study the adversarial robustness of amortized Bayesian inference, focusing on
simulation-based estimation of multi-dimensional posterior distributions. We
show that almost unrecognizable, targeted perturbations of the observations can
lead to drastic changes in the predicted posterior and highly unrealistic
posterior predictive samples, across several benchmark tasks and a real-world
example from neuroscience. We propose a computationally efficient
regularization scheme based on penalizing the Fisher information of the
conditional density estimator, and show how it improves the adversarial
robustness of amortized Bayesian inference.
|
[
"cs.LG",
"stat.ML"
] | false |
2305.15042
|
2023-05-24T11:30:33Z
|
Test like you Train in Implicit Deep Learning
|
[
"Zaccharie Ramzi",
"Pierre Ablin",
"Gabriel Peyré",
"Thomas Moreau"
] |
Implicit deep learning has recently gained popularity with applications
ranging from meta-learning to Deep Equilibrium Networks (DEQs). In its general
formulation, it relies on expressing some components of deep learning pipelines
implicitly, typically via a root equation called the inner problem. In
practice, the solution of the inner problem is approximated during training
with an iterative procedure, usually with a fixed number of inner iterations.
During inference, the inner problem needs to be solved with new data. A popular
belief is that increasing the number of inner iterations compared to the one
used during training yields better performance. In this paper, we question such
an assumption and provide a detailed theoretical analysis in a simple setting.
We demonstrate that overparametrization plays a key role: increasing the number
of iterations at test time cannot improve performance for overparametrized
networks. We validate our theory on an array of implicit deep-learning
problems. DEQs, which are typically overparametrized, do not benefit from
increasing the number of iterations at inference while meta-learning, which is
typically not overparametrized, benefits from it.
|
[
"cs.LG",
"stat.ML"
] | false |
2305.15167
|
2023-05-24T13:59:03Z
|
Explaining the Uncertain: Stochastic Shapley Values for Gaussian Process
Models
|
[
"Siu Lun Chau",
"Krikamol Muandet",
"Dino Sejdinovic"
] |
We present a novel approach for explaining Gaussian processes (GPs) that can
utilize the full analytical covariance structure present in GPs. Our method is
based on the popular solution concept of Shapley values extended to stochastic
cooperative games, resulting in explanations that are random variables. The GP
explanations generated using our approach satisfy similar favorable axioms to
standard Shapley values and possess a tractable covariance function across
features and data observations. This covariance allows for quantifying
explanation uncertainties and studying the statistical dependencies between
explanations. We further extend our framework to the problem of predictive
explanation, and propose a Shapley prior over the explanation function to
predict Shapley values for new data based on previously computed ones. Our
extensive illustrations demonstrate the effectiveness of the proposed approach.
|
[
"stat.ML",
"cs.LG"
] | false |
2305.15228
|
2023-05-24T15:09:41Z
|
Short and Straight: Geodesics on Differentiable Manifolds
|
[
"Daniel Kelshaw",
"Luca Magri"
] |
Manifolds discovered by machine learning models provide a compact
representation of the underlying data. Geodesics on these manifolds define
locally length-minimising curves and provide a notion of distance, which are
key for reduced-order modelling, statistical inference, and interpolation. In
this work, we first analyse existing methods for computing length-minimising
geodesics. We find that these are not suitable for obtaining valid paths, and
thus, geodesic distances. We remedy these shortcomings by leveraging numerical
tools from differential geometry, which provide the means to obtain
Hamiltonian-conserving geodesics. Second, we propose a model-based
parameterisation for distance fields and geodesic flows on continuous
manifolds. Our approach exploits a manifold-aware extension to the Eikonal
equation, eliminating the need for approximations or discretisation. Finally,
we develop a curvature-based training mechanism, sampling and scaling points in
regions of the manifold exhibiting larger values of the Ricci scalar. This
sampling and scaling approach ensures that we capture regions of the manifold
subject to higher degrees of geodesic deviation. Our proposed methods provide
principled means to compute valid geodesics and geodesic distances on
manifolds. This work opens opportunities for latent-space interpolation,
optimal control, and distance computation on differentiable manifolds.
|
[
"cs.LG",
"cs.CG"
] | false |
2305.15234
|
2023-05-24T15:18:46Z
|
On the road to more accurate mobile cellular traffic predictions
|
[
"Natalia Vassileva Vesselinova"
] |
The main contribution reported in the paper is a novel paradigm through which
mobile cellular traffic forecasting is made substantially more accurate.
Specifically, by incorporating freely available road metrics we characterise
the data generation process and spatial dependencies. Therefore, this provides
a means for improving the forecasting estimates. We employ highway flow and
average speed variables together with a cellular network traffic metric in a
light learning structure to predict the short-term future load on a cell
covering a segment of a highway. This is in sharp contrast to prior art that
mainly studies urban scenarios (with pedestrian and limited vehicular speeds)
and develops machine learning approaches that use exclusively network metrics
and meta information to make mid-term and long-term predictions. The learning
structure can be used at a cell or edge level, and can find application in both
federated and centralised learning.
|
[
"cs.LG",
"cs.NI"
] | false |
2305.15242
|
2023-05-24T15:27:04Z
|
Machine Unlearning: its nature, scope, and importance for a "delete
culture"
|
[
"Luciano Floridi"
] |
The article explores the cultural shift from recording to deleting
information in the digital age and its implications on privacy, intellectual
property (IP), and Large Language Models like ChatGPT. It begins by defining a
delete culture where information, in principle legal, is made unavailable or
inaccessible because unacceptable or undesirable, especially but not only due
to its potential to infringe on privacy or IP. Then it focuses on two
strategies in this context: deleting, to make information unavailable; and
blocking, to make it inaccessible. The article argues that both strategies have
significant implications, particularly for machine learning (ML) models where
information is not easily made unavailable. However, the emerging research area
of Machine Unlearning (MU) is highlighted as a potential solution. MU, still in
its infancy, seeks to remove specific data points from ML models, effectively
making them 'forget' completely specific information. If successful, MU could
provide a feasible means to manage the overabundance of information and ensure
a better protection of privacy and IP. However, potential ethical risks, such
as misuse, overuse, and underuse of MU, should be systematically studied to
devise appropriate policies.
|
[
"cs.CY",
"cs.LG"
] | false |
2305.15254
|
2023-05-24T15:38:43Z
|
Attention to Mean-Fields for Particle Cloud Generation
|
[
"Benno Käch",
"Isabell Melzer-Pellmann"
] |
The generation of collider data using machine learning has emerged as a
prominent research topic in particle physics due to the increasing
computational challenges associated with traditional Monte Carlo simulation
methods, particularly for future colliders with higher luminosity. Although
generating particle clouds is analogous to generating point clouds, accurately
modelling the complex correlations between the particles presents a
considerable challenge. Additionally, variable particle cloud sizes further
exacerbate these difficulties, necessitating more sophisticated models. In this
work, we propose a novel model that utilizes an attention-based aggregation
mechanism to address these challenges. The model is trained in an adversarial
training paradigm, ensuring that both the generator and critic exhibit
permutation equivariance/invariance with respect to their input. A novel
feature matching loss in the critic is introduced to stabilize the training.
The proposed model performs competitively to the state-of-art whilst having
significantly fewer parameters.
|
[
"hep-ex",
"cs.LG"
] | false |
2305.15276
|
2023-05-24T16:02:28Z
|
Robust Sparse Mean Estimation via Incremental Learning
|
[
"Jianhao Ma",
"Rui Ray Chen",
"Yinghui He",
"Salar Fattahi",
"Wei Hu"
] |
In this paper, we study the problem of robust sparse mean estimation, where
the goal is to estimate a $k$-sparse mean from a collection of partially
corrupted samples drawn from a heavy-tailed distribution. Existing estimators
face two critical challenges in this setting. First, they are limited by a
conjectured computational-statistical tradeoff, implying that any
computationally efficient algorithm needs $\tilde\Omega(k^2)$ samples, while
its statistically-optimal counterpart only requires $\tilde O(k)$ samples.
Second, the existing estimators fall short of practical use as they scale
poorly with the ambient dimension. This paper presents a simple mean estimator
that overcomes both challenges under moderate conditions: it runs in
near-linear time and memory (both with respect to the ambient dimension) while
requiring only $\tilde O(k)$ samples to recover the true mean. At the core of
our method lies an incremental learning phenomenon: we introduce a simple
nonconvex framework that can incrementally learn the top-$k$ nonzero elements
of the mean while keeping the zero elements arbitrarily small. Unlike existing
estimators, our method does not need any prior knowledge of the sparsity level
$k$. We prove the optimality of our estimator by providing a matching
information-theoretic lower bound. Finally, we conduct a series of simulations
to corroborate our theoretical findings. Our code is available at
https://github.com/huihui0902/Robust_mean_estimation.
|
[
"cs.LG",
"stat.ML"
] | false |
2305.15331
|
2023-05-24T16:43:21Z
|
No-Regret Online Prediction with Strategic Experts
|
[
"Omid Sadeghi",
"Maryam Fazel"
] |
We study a generalization of the online binary prediction with expert advice
framework where at each round, the learner is allowed to pick $m\geq 1$ experts
from a pool of $K$ experts and the overall utility is a modular or submodular
function of the chosen experts. We focus on the setting in which experts act
strategically and aim to maximize their influence on the algorithm's
predictions by potentially misreporting their beliefs about the events. Among
others, this setting finds applications in forecasting competitions where the
learner seeks not only to make predictions by aggregating different forecasters
but also to rank them according to their relative performance. Our goal is to
design algorithms that satisfy the following two requirements: 1)
$\textit{Incentive-compatible}$: Incentivize the experts to report their
beliefs truthfully, and 2) $\textit{No-regret}$: Achieve sublinear regret with
respect to the true beliefs of the best fixed set of $m$ experts in hindsight.
Prior works have studied this framework when $m=1$ and provided
incentive-compatible no-regret algorithms for the problem. We first show that a
simple reduction of our problem to the $m=1$ setting is neither efficient nor
effective. Then, we provide algorithms that utilize the specific structure of
the utility functions to achieve the two desired goals.
|
[
"cs.LG",
"cs.GT"
] | false |
2305.15333
|
2023-05-24T16:45:38Z
|
Breaking the Curse of Quality Saturation with User-Centric Ranking
|
[
"Zhuokai Zhao",
"Yang Yang",
"Wenyu Wang",
"Chihuang Liu",
"Yu Shi",
"Wenjie Hu",
"Haotian Zhang",
"Shuang Yang"
] |
A key puzzle in search, ads, and recommendation is that the ranking model can
only utilize a small portion of the vastly available user interaction data. As
a result, increasing data volume, model size, or computation FLOPs will quickly
suffer from diminishing returns. We examined this problem and found that one of
the root causes may lie in the so-called ``item-centric'' formulation, which
has an unbounded vocabulary and thus uncontrolled model complexity. To mitigate
quality saturation, we introduce an alternative formulation named
``user-centric ranking'', which is based on a transposed view of the dyadic
user-item interaction data. We show that this formulation has a promising
scaling property, enabling us to train better-converged models on substantially
larger data sets.
|
[
"cs.IR",
"cs.LG"
] | false |
2305.15337
|
2023-05-24T16:50:05Z
|
A Deep Generative Model for Interactive Data Annotation through Direct
Manipulation in Latent Space
|
[
"Hannes Kath",
"Thiago S. Gouvêa",
"Daniel Sonntag"
] |
The impact of machine learning (ML) in many fields of application is
constrained by lack of annotated data. Among existing tools for ML-assisted
data annotation, one little explored tool type relies on an analogy between the
coordinates of a graphical user interface and the latent space of a neural
network for interaction through direct manipulation. In the present work, we 1)
expand the paradigm by proposing two new analogies: time and force as
reflecting iterations and gradients of network training; 2) propose a network
model for learning a compact graphical representation of the data that takes
into account both its internal structure and user provided annotations; and 3)
investigate the impact of model hyperparameters on the learned graphical
representations of the data, identifying candidate model variants for a future
user study.
|
[
"cs.LG",
"cs.HC"
] | false |
2305.15348
|
2023-05-24T16:59:41Z
|
READ: Recurrent Adaptation of Large Transformers
|
[
"Sid Wang",
"John Nguyen",
"Ke Li",
"Carole-Jean Wu"
] |
Fine-tuning large-scale Transformers has led to the explosion of many AI
applications across Natural Language Processing and Computer Vision tasks.
However, fine-tuning all pre-trained model parameters becomes impractical as
the model size and number of tasks increase. Parameter-efficient transfer
learning (PETL) methods aim to address these challenges. While effective in
reducing the number of trainable parameters, PETL methods still require
significant energy and computational resources to fine-tune. In this paper, we
introduce \textbf{RE}current \textbf{AD}aption (READ) -- a lightweight and
memory-efficient fine-tuning method -- to overcome the limitations of the
current PETL approaches. Specifically, READ inserts a small RNN network
alongside the backbone model so that the model does not have to back-propagate
through the large backbone network. Through comprehensive empirical evaluation
of the GLUE benchmark, we demonstrate READ can achieve a $56\%$ reduction in
the training memory consumption and an $84\%$ reduction in the GPU energy usage
while retraining high model quality compared to full-tuning. Additionally, the
model size of READ does not grow with the backbone model size, making it a
highly scalable solution for fine-tuning large Transformers.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.15353
|
2023-05-24T17:06:59Z
|
A Virtual Reality Tool for Representing, Visualizing and Updating Deep
Learning Models
|
[
"Hannes Kath",
"Bengt Lüers",
"Thiago S. Gouvêa",
"Daniel Sonntag"
] |
Deep learning is ubiquitous, but its lack of transparency limits its impact
on several potential application areas. We demonstrate a virtual reality tool
for automating the process of assigning data inputs to different categories. A
dataset is represented as a cloud of points in virtual space. The user explores
the cloud through movement and uses hand gestures to categorise portions of the
cloud. This triggers gradual movements in the cloud: points of the same
category are attracted to each other, different groups are pushed apart, while
points are globally distributed in a way that utilises the entire space. The
space, time, and forces observed in virtual reality can be mapped to
well-defined machine learning concepts, namely the latent space, the training
epochs and the backpropagation. Our tool illustrates how the inner workings of
deep neural networks can be made tangible and transparent. We expect this
approach to accelerate the autonomous development of deep learning applications
by end users in novel areas.
|
[
"cs.HC",
"cs.LG"
] | false |
2305.15529
|
2023-05-24T19:35:42Z
|
Editable Graph Neural Network for Node Classifications
|
[
"Zirui Liu",
"Zhimeng Jiang",
"Shaochen Zhong",
"Kaixiong Zhou",
"Li Li",
"Rui Chen",
"Soo-Hyun Choi",
"Xia Hu"
] |
Despite Graph Neural Networks (GNNs) have achieved prominent success in many
graph-based learning problem, such as credit risk assessment in financial
networks and fake news detection in social networks. However, the trained GNNs
still make errors and these errors may cause serious negative impact on
society. \textit{Model editing}, which corrects the model behavior on wrongly
predicted target samples while leaving model predictions unchanged on unrelated
samples, has garnered significant interest in the fields of computer vision and
natural language processing. However, model editing for graph neural networks
(GNNs) is rarely explored, despite GNNs' widespread applicability. To fill the
gap, we first observe that existing model editing methods significantly
deteriorate prediction accuracy (up to $50\%$ accuracy drop) in GNNs while a
slight accuracy drop in multi-layer perception (MLP). The rationale behind this
observation is that the node aggregation in GNNs will spread the editing effect
throughout the whole graph. This propagation pushes the node representation far
from its original one. Motivated by this observation, we propose
\underline{E}ditable \underline{G}raph \underline{N}eural \underline{N}etworks
(EGNN), a neighbor propagation-free approach to correct the model prediction on
misclassified nodes. Specifically, EGNN simply stitches an MLP to the
underlying GNNs, where the weights of GNNs are frozen during model editing. In
this way, EGNN disables the propagation during editing while still utilizing
the neighbor propagation scheme for node prediction to obtain satisfactory
results. Experiments demonstrate that EGNN outperforms existing baselines in
terms of effectiveness (correcting wrong predictions with lower accuracy drop),
generalizability (correcting wrong predictions for other similar nodes), and
efficiency (low training time and memory) on various graph datasets.
|
[
"cs.LG",
"cs.SI"
] | false |
2305.15536
|
2023-05-24T19:45:56Z
|
RAND: Robustness Aware Norm Decay For Quantized Seq2seq Models
|
[
"David Qiu",
"David Rim",
"Shaojin Ding",
"Oleg Rybakov",
"Yanzhang He"
] |
With the rapid increase in the size of neural networks, model compression has
become an important area of research. Quantization is an effective technique at
decreasing the model size, memory access, and compute load of large models.
Despite recent advances in quantization aware training (QAT) technique, most
papers present evaluations that are focused on computer vision tasks, which
have different training dynamics compared to sequence tasks. In this paper, we
first benchmark the impact of popular techniques such as straight through
estimator, pseudo-quantization noise, learnable scale parameter, clipping, etc.
on 4-bit seq2seq models across a suite of speech recognition datasets ranging
from 1,000 hours to 1 million hours, as well as one machine translation dataset
to illustrate its applicability outside of speech.
Through the experiments, we report that noise based QAT suffers when there is
insufficient regularization signal flowing back to the quantization scale. We
propose low complexity changes to the QAT process to improve model accuracy
(outperforming popular learnable scale and clipping methods). With the improved
accuracy, it opens up the possibility to exploit some of the other benefits of
noise based QAT: 1) training a single model that performs well in mixed
precision mode and 2) improved generalization on long form speech recognition.
|
[
"eess.AS",
"cs.LG"
] | false |
2305.15574
|
2023-05-24T21:15:23Z
|
Deep Stochastic Processes via Functional Markov Transition Operators
|
[
"Jin Xu",
"Emilien Dupont",
"Kaspar Märtens",
"Tom Rainforth",
"Yee Whye Teh"
] |
We introduce Markov Neural Processes (MNPs), a new class of Stochastic
Processes (SPs) which are constructed by stacking sequences of neural
parameterised Markov transition operators in function space. We prove that
these Markov transition operators can preserve the exchangeability and
consistency of SPs. Therefore, the proposed iterative construction adds
substantial flexibility and expressivity to the original framework of Neural
Processes (NPs) without compromising consistency or adding restrictions. Our
experiments demonstrate clear advantages of MNPs over baseline models on a
variety of tasks.
|
[
"stat.ML",
"cs.LG"
] | false |
2305.15603
|
2023-05-24T22:26:38Z
|
Learning Lagrangian Fluid Mechanics with E($3$)-Equivariant Graph Neural
Networks
|
[
"Artur P. Toshev",
"Gianluca Galletti",
"Johannes Brandstetter",
"Stefan Adami",
"Nikolaus A. Adams"
] |
We contribute to the vastly growing field of machine learning for engineering
systems by demonstrating that equivariant graph neural networks have the
potential to learn more accurate dynamic-interaction models than their
non-equivariant counterparts. We benchmark two well-studied fluid-flow systems,
namely 3D decaying Taylor-Green vortex and 3D reverse Poiseuille flow, and
evaluate the models based on different performance measures, such as kinetic
energy or Sinkhorn distance. In addition, we investigate different embedding
methods of physical-information histories for equivariant models. We find that
while currently being rather slow to train and evaluate, equivariant models
with our proposed history embeddings learn more accurate physical interactions.
|
[
"cs.LG",
"physics.flu-dyn"
] | false |
2305.16341
|
2023-05-24T08:08:56Z
|
TaxoKnow: Taxonomy as Prior Knowledge in the Loss Function of
Multi-class Classification
|
[
"Mohsen Pourvali",
"Yao Meng",
"Chen Sheng",
"Yangzhou Du"
] |
In this paper, we investigate the effectiveness of integrating a hierarchical
taxonomy of labels as prior knowledge into the learning algorithm of a flat
classifier. We introduce two methods to integrate the hierarchical taxonomy as
an explicit regularizer into the loss function of learning algorithms. By
reasoning on a hierarchical taxonomy, a neural network alleviates its output
distributions over the classes, allowing conditioning on upper concepts for a
minority class. We limit ourselves to the flat classification task and provide
our experimental results on two industrial in-house datasets and two public
benchmarks, RCV1 and Amazon product reviews. Our obtained results show the
significant effect of a taxonomy in increasing the performance of a learner in
semisupervised multi-class classification and the considerable results obtained
in a fully supervised fashion.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.16346
|
2023-05-24T14:45:54Z
|
Artificial Intelligence-Based Methods for Precision Medicine: Diabetes
Risk Prediction
|
[
"Farida Mohsen",
"Hamada R. H. Al-Absi",
"Noha A. Yousri",
"Nady El Hajj",
"Zubair Shah"
] |
The rising prevalence of type 2 diabetes mellitus (T2DM) necessitates the
development of predictive models for T2DM risk assessment. Artificial
intelligence (AI) models are being extensively used for this purpose, but a
comprehensive review of their advancements and challenges is lacking. This
scoping review analyzes existing literature on AI-based models for T2DM risk
prediction. Forty studies were included, mainly published in the past four
years. Traditional machine learning models were more prevalent than deep
learning models. Electronic health records were the most commonly used data
source. Unimodal AI models relying on EHR data were prominent, while only a few
utilized multimodal models. Both unimodal and multimodal models showed
promising performance, with the latter outperforming the former. Internal
validation was common, while external validation was limited. Interpretability
methods were reported in half of the studies. Few studies reported novel
biomarkers, and open-source code availability was limited. This review provides
insights into the current state and limitations of AI-based T2DM risk
prediction models and highlights challenges for their development and clinical
implementation.
|
[
"cs.LG",
"cs.AI"
] | false |
2310.11470
|
2023-05-24T13:38:38Z
|
Classic machine learning methods
|
[
"Johann Faouzi",
"Olivier Colliot"
] |
In this chapter, we present the main classic machine learning methods. A
large part of the chapter is devoted to supervised learning techniques for
classification and regression, including nearest-neighbor methods, linear and
logistic regressions, support vector machines and tree-based algorithms. We
also describe the problem of overfitting as well as strategies to overcome it.
We finally provide a brief overview of unsupervised learning methods, namely
for clustering and dimensionality reduction.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.14656
|
2023-05-24T02:51:45Z
|
RSRM: Reinforcement Symbolic Regression Machine
|
[
"Yilong Xu",
"Yang Liu",
"Hao Sun"
] |
In nature, the behaviors of many complex systems can be described by
parsimonious math equations. Automatically distilling these equations from
limited data is cast as a symbolic regression process which hitherto remains a
grand challenge. Keen efforts in recent years have been placed on tackling this
issue and demonstrated success in symbolic regression. However, there still
exist bottlenecks that current methods struggle to break when the discrete
search space tends toward infinity and especially when the underlying math
formula is intricate. To this end, we propose a novel Reinforcement Symbolic
Regression Machine (RSRM) that masters the capability of uncovering complex
math equations from only scarce data. The RSRM model is composed of three key
modules: (1) a Monte Carlo tree search (MCTS) agent that explores optimal math
expression trees consisting of pre-defined math operators and variables, (2) a
Double Q-learning block that helps reduce the feasible search space of MCTS via
properly understanding the distribution of reward, and (3) a modulated sub-tree
discovery block that heuristically learns and defines new math operators to
improve representation ability of math expression trees. Biding of these
modules yields the state-of-the-art performance of RSRM in symbolic regression
as demonstrated by multiple sets of benchmark examples. The RSRM model shows
clear superiority over several representative baseline models.
|
[
"cs.LG",
"cs.AI",
"cs.SC"
] | false |
2305.14689
|
2023-05-24T03:52:48Z
|
Under-Parameterized Double Descent for Ridge Regularized Least Squares
Denoising of Data on a Line
|
[
"Rishi Sonthalia",
"Xinyue Li",
"Bochao Gu"
] |
The relationship between the number of training data points, the number of
parameters in a statistical model, and the generalization capabilities of the
model has been widely studied. Previous work has shown that double descent can
occur in the over-parameterized regime, and believe that the standard
bias-variance trade-off holds in the under-parameterized regime. In this paper,
we present a simple example that provably exhibits double descent in the
under-parameterized regime. For simplicity, we look at the ridge regularized
least squares denoising problem with data on a line embedded in high-dimension
space. By deriving an asymptotically accurate formula for the generalization
error, we observe sample-wise and parameter-wise double descent with the peak
in the under-parameterized regime rather than at the interpolation point or in
the over-parameterized regime.
Further, the peak of the sample-wise double descent curve corresponds to a
peak in the curve for the norm of the estimator, and adjusting $\mu$, the
strength of the ridge regularization, shifts the location of the peak. We
observe that parameter-wise double descent occurs for this model for small
$\mu$. For larger values of $\mu$, we observe that the curve for the norm of
the estimator has a peak but that this no longer translates to a peak in the
generalization error. Moreover, we study the training error for this problem.
The considered problem setup allows for studying the interaction between two
regularizers. We provide empirical evidence that the model implicitly favors
using the ridge regularizer over the input data noise regularizer. Thus, we
show that even though both regularizers regularize the same quantity, i.e., the
norm of the estimator, they are not equivalent.
|
[
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH"
] | false |
2305.14752
|
2023-05-24T05:54:10Z
|
A New Era in Software Security: Towards Self-Healing Software via Large
Language Models and Formal Verification
|
[
"Yiannis Charalambous",
"Norbert Tihanyi",
"Ridhi Jain",
"Youcheng Sun",
"Mohamed Amine Ferrag",
"Lucas C. Cordeiro"
] |
In this paper we present a novel solution that combines the capabilities of
Large Language Models (LLMs) with Formal Verification strategies to verify and
automatically repair software vulnerabilities. Initially, we employ Bounded
Model Checking (BMC) to locate the software vulnerability and derive a
counterexample. The counterexample provides evidence that the system behaves
incorrectly or contains a vulnerability. The counterexample that has been
detected, along with the source code, are provided to the LLM engine. Our
approach involves establishing a specialized prompt language for conducting
code debugging and generation to understand the vulnerability's root cause and
repair the code. Finally, we use BMC to verify the corrected version of the
code generated by the LLM. As a proof of concept, we create ESBMC-AI based on
the Efficient SMT-based Context-Bounded Model Checker (ESBMC) and a pre-trained
Transformer model, specifically gpt-3.5-turbo, to detect and fix errors in C
programs. Our experimentation involved generating a dataset comprising 1000 C
code samples, each consisting of 20 to 50 lines of code. Notably, our proposed
method achieved an impressive success rate of up to 80% in repairing vulnerable
code encompassing buffer overflow and pointer dereference failures. We assert
that this automated approach can effectively incorporate into the software
development lifecycle's continuous integration and deployment (CI/CD) process.
|
[
"cs.SE",
"cs.AI",
"cs.FL",
"cs.LG"
] | false |
2305.15157
|
2023-05-24T13:52:18Z
|
Towards More Suitable Personalization in Federated Learning via
Decentralized Partial Model Training
|
[
"Yifan Shi",
"Yingqi Liu",
"Yan Sun",
"Zihao Lin",
"Li Shen",
"Xueqian Wang",
"Dacheng Tao"
] |
Personalized federated learning (PFL) aims to produce the greatest
personalized model for each client to face an insurmountable problem--data
heterogeneity in real FL systems. However, almost all existing works have to
face large communication burdens and the risk of disruption if the central
server fails. Only limited efforts have been used in a decentralized way but
still suffers from inferior representation ability due to sharing the full
model with its neighbors. Therefore, in this paper, we propose a personalized
FL framework with a decentralized partial model training called DFedAlt. It
personalizes the "right" components in the modern deep models by alternately
updating the shared and personal parameters to train partially personalized
models in a peer-to-peer manner. To further promote the shared parameters
aggregation process, we propose DFedSalt integrating the local Sharpness Aware
Minimization (SAM) optimizer to update the shared parameters. It adds proper
perturbation in the direction of the gradient to overcome the shared model
inconsistency across clients. Theoretically, we provide convergence analysis of
both algorithms in the general non-convex setting for decentralized partial
model training in PFL. Our experiments on several real-world data with various
data partition settings demonstrate that (i) decentralized training is more
suitable for partial personalization, which results in state-of-the-art (SOTA)
accuracy compared with the SOTA PFL baselines; (ii) the shared parameters with
proper perturbation make partial personalized FL more suitable for
decentralized training, where DFedSalt achieves most competitive performance.
|
[
"cs.LG",
"cs.DC",
"math.OC"
] | false |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.