arxiv_id (string, length 10) | published (string, length 20) | titles (string, 9-243 chars) | authors (list, 1-389 items) | abstract (string, 96-3.09k chars) | categories (list, 1-10 items) | selected (bool, 2 classes) |
---|---|---|---|---|---|---|
2306.02871
|
2023-06-05T13:45:45Z
|
Text-To-KG Alignment: Comparing Current Methods on Classification Tasks
|
[
"Sondre Wold",
"Lilja Øvrelid",
"Erik Velldal"
] |
In contrast to large text corpora, knowledge graphs (KG) provide dense and
structured representations of factual information. This makes them attractive
for systems that supplement or ground the knowledge found in pre-trained
language models with an external knowledge source. This has especially been the
case for classification tasks, where recent work has focused on creating
pipeline models that retrieve information from KGs like ConceptNet as
additional context. Many of these models consist of multiple components, and
although they differ in the number and nature of these parts, they all have in
common that for some given text query, they attempt to identify and retrieve a
relevant subgraph from the KG. Due to the noise and idiosyncrasies often found
in KGs, it is not known how current methods compare to a scenario where the
aligned subgraph is completely relevant to the query. In this work, we try to
bridge this knowledge gap by reviewing current approaches to text-to-KG
alignment and evaluating them on two datasets where manually created graphs are
available, providing insights into the effectiveness of current methods.
|
[
"cs.CL"
] | false |
2306.02873
|
2023-06-05T13:46:31Z
|
DecompX: Explaining Transformers Decisions by Propagating Token
Decomposition
|
[
"Ali Modarressi",
"Mohsen Fayyaz",
"Ehsan Aghazadeh",
"Yadollah Yaghoobzadeh",
"Mohammad Taher Pilehvar"
] |
An emerging solution for explaining Transformer-based models is to use
vector-based analysis on how the representations are formed. However, providing
a faithful vector-based explanation for a multi-layer model could be
challenging in three aspects: (1) Incorporating all components into the
analysis, (2) Aggregating the layer dynamics to determine the information flow
and mixture throughout the entire model, and (3) Identifying the connection
between the vector-based analysis and the model's predictions. In this paper,
we present DecompX to tackle these challenges. DecompX is based on the
construction of decomposed token representations and their successive
propagation throughout the model without mixing them in between layers.
Additionally, our proposal provides multiple advantages over existing solutions
due to its inclusion of all encoder components (especially nonlinear feed-forward
networks) and the classification head. The former allows acquiring precise
vectors while the latter transforms the decomposition into meaningful
prediction-based values, eliminating the need for norm- or summation-based
vector aggregation. According to the standard faithfulness evaluations, DecompX
consistently outperforms existing gradient-based and vector-based approaches on
various datasets. Our code is available at
https://github.com/mohsenfayyaz/DecompX.
|
[
"cs.CL"
] | false |
2306.02920
|
2023-06-05T14:32:41Z
|
Second Language Acquisition of Neural Language Models
|
[
"Miyu Oba",
"Tatsuki Kuribayashi",
"Hiroki Ouchi",
"Taro Watanabe"
] |
With the success of neural language models (LMs), their language acquisition
has gained much attention. This work sheds light on the second language (L2)
acquisition of LMs, while previous work has typically explored their first
language (L1) acquisition. Specifically, we trained bilingual LMs with a
scenario similar to human L2 acquisition and analyzed their cross-lingual
transfer from linguistic perspectives. Our exploratory experiments demonstrated
that the L1 pretraining accelerated their linguistic generalization in L2, and
language transfer configurations (e.g., the L1 choice, and presence of parallel
texts) substantially affected their generalizations. These findings clarify
their (non-)human-like L2 acquisition in particular aspects.
|
[
"cs.CL"
] | false |
2306.03024
|
2023-06-05T16:44:27Z
|
PokemonChat: Auditing ChatGPT for Pokémon Universe Knowledge
|
[
"Laura Cabello",
"Jiaang Li",
"Ilias Chalkidis"
] |
The recently released ChatGPT model demonstrates unprecedented capabilities
in zero-shot question-answering. In this work, we probe ChatGPT for its
conversational understanding and introduce a conversational framework
(protocol) that can be adopted in future studies. The Pokémon universe serves
as an ideal testing ground for auditing ChatGPT's reasoning capabilities due to
its closed world assumption. After bringing ChatGPT's background knowledge (on
the Pokémon universe) to light, we test its reasoning process when using
these concepts in battle scenarios. We then evaluate its ability to acquire new
knowledge and include it in its reasoning process. Our ultimate goal is to
assess ChatGPT's ability to generalize, combine features, and to acquire and
reason over newly introduced knowledge from human feedback. We find that
ChatGPT has prior knowledge of the Pokémon universe, which it can reason upon in
battle scenarios to a great extent, even when new information is introduced.
The model performs better with collaborative feedback and if there is an
initial phase of information retrieval, but also hallucinates occasionally and
is susceptible to adversarial attacks.
|
[
"cs.CL"
] | true |
2306.03055
|
2023-06-05T17:27:48Z
|
Analyzing Syntactic Generalization Capacity of Pre-trained Language
Models on Japanese Honorific Conversion
|
[
"Ryo Sekizawa",
"Hitomi Yanaka"
] |
Using Japanese honorifics is challenging because it requires not only
knowledge of the grammatical rules but also contextual information, such as
social relationships. It remains unclear whether pre-trained large language
models (LLMs) can flexibly handle Japanese honorifics like humans. To analyze
this, we introduce an honorific conversion task that considers social
relationships among people mentioned in a conversation. We construct a Japanese
honorifics dataset from problem templates of various sentence structures to
investigate the syntactic generalization capacity of GPT-3, one of the leading
LLMs, on this task under two settings: fine-tuning and prompt learning. Our
results showed that the fine-tuned GPT-3 performed better in a context-aware
honorific conversion task than the prompt-based one. The fine-tuned model
demonstrated overall syntactic generalizability towards compound honorific
sentences, except when tested with the data involving direct speech.
|
[
"cs.CL"
] | false |
2306.03079
|
2023-06-05T17:53:41Z
|
Machine Learning and Statistical Approaches to Measuring Similarity of
Political Parties
|
[
"Daria Boratyn",
"Damian Brzyski",
"Beata Kosowska-Gąstoł",
"Jan Rybicki",
"Wojciech Słomczyński",
"Dariusz Stolicki"
] |
Mapping political party systems to metric policy spaces is one of the major
methodological problems in political science. At present, in most political
science projects this task is performed by domain experts relying on purely
qualitative assessments, with all the attendant problems of subjectivity and
labor intensiveness. We consider how advances in natural language processing,
including large transformer-based language models, can be applied to solve that
issue. We apply a number of text similarity measures to party political
programs, analyze how they correlate with each other, and -- in the absence of
a satisfactory benchmark -- evaluate them against other measures, including
those based on expert surveys, voting records, electoral patterns, and
candidate networks. Finally, we consider the prospects of relying on those
methods to correct, supplement, and eventually replace expert judgments.
|
[
"cs.CL",
"91F10 (Primary) 68T50 (Secondary)",
"J.4; I.2.7"
] | false |
2306.03189
|
2023-06-05T19:00:25Z
|
Easy-to-Read in Germany: A Survey on its Current State and Available
Resources
|
[
"Margot Madina",
"Itziar Gonzalez-Dios",
"Melanie Siegel"
] |
Easy-to-Read Language (E2R) is a controlled language variant that makes any
written text more accessible through the use of clear, direct and simple
language. It is mainly aimed at people with cognitive or intellectual
disabilities, among other target users. Plain Language (PL), on the other hand,
is a variant of a given language, which aims to promote the use of simple
language to communicate information. German has Leichte Sprache (LS),
its version of E2R, and Einfache Sprache (ES), its version of PL. In recent
years, important developments have been made in the field of LS. This
paper offers an updated overview of the existing Natural Language Processing
(NLP) tools and resources for LS. It also aims to set out the
situation with regard to LS and ES in Germany.
|
[
"cs.CL"
] | false |
2306.03264
|
2023-06-05T21:33:04Z
|
shs-nlp at RadSum23: Domain-Adaptive Pre-training of Instruction-tuned
LLMs for Radiology Report Impression Generation
|
[
"Sanjeev Kumar Karn",
"Rikhiya Ghosh",
"Kusuma P",
"Oladimeji Farri"
] |
Instruction-tuned generative large language models (LLMs) like ChatGPT and
Bloomz possess excellent generalization abilities, but they face limitations in
understanding radiology reports, particularly in the task of generating the
IMPRESSIONS section from the FINDINGS section. They tend to generate either
verbose or incomplete IMPRESSIONS, mainly due to insufficient exposure to
medical text data during training. We present a system which leverages
large-scale medical text data for domain-adaptive pre-training of
instruction-tuned LLMs to enhance their medical knowledge and performance on
specific medical tasks. We show that this system performs better in a zero-shot
setting than a number of pretrain-and-finetune adaptation methods on the
IMPRESSIONS generation task, and ranks 1st among participating systems in Task
1B: Radiology Report Summarization at the BioNLP 2023 workshop.
|
[
"cs.CL"
] | false |
2306.03316
|
2023-06-05T23:58:40Z
|
CoSiNES: Contrastive Siamese Network for Entity Standardization
|
[
"Jiaqing Yuan",
"Michele Merler",
"Mihir Choudhury",
"Raju Pavuluri",
"Munindar P. Singh",
"Maja Vukovic"
] |
Entity standardization maps noisy mentions from free-form text to standard
entities in a knowledge base. The unique challenge of this task relative to
other entity-related tasks is the lack of surrounding context and numerous
variations in the surface form of the mentions, especially when it comes to
generalization across domains where labeled data is scarce. Previous research
mostly focuses on developing models either heavily relying on context, or
dedicated solely to a specific domain. In contrast, we propose CoSiNES, a
generic and adaptable framework with Contrastive Siamese Network for Entity
Standardization that effectively adapts a pretrained language model to capture
the syntax and semantics of the entities in a new domain.
We construct a new dataset in the technology domain, which contains 640
technical stack entities and 6,412 mentions collected from industrial content
management systems. We demonstrate that CoSiNES yields higher accuracy and
faster runtime than baselines derived from leading methods in this domain.
CoSiNES also achieves competitive performance in four standard datasets from
the chemistry, medicine, and biomedical domains, demonstrating its cross-domain
applicability.
|
[
"cs.CL"
] | false |
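To make the contrastive Siamese idea in the CoSiNES abstract concrete, here is a minimal sketch (not the authors' released code) of entity standardization with a shared encoder, in-batch-negative contrastive training, and nearest-neighbour lookup; the character n-gram hashing encoder, the toy mention/entity pairs, and all hyperparameters are illustrative assumptions.

```python
"""Illustrative sketch of a contrastive Siamese setup for entity
standardization, loosely following the idea in the CoSiNES abstract
(not the authors' implementation). The character-n-gram hashing encoder
and all hyperparameters are assumptions made for brevity."""
import torch
import torch.nn as nn
import torch.nn.functional as F

def char_ngrams(text, n=3, buckets=5000):
    text = f"#{text.lower()}#"
    return [hash(text[i:i + n]) % buckets for i in range(len(text) - n + 1)]

class SiameseEncoder(nn.Module):
    # One encoder shared by both sides of the Siamese pair.
    def __init__(self, buckets=5000, dim=128):
        super().__init__()
        self.emb = nn.EmbeddingBag(buckets, dim, mode="mean")

    def forward(self, texts):
        flat, offsets = [], []
        for t in texts:
            offsets.append(len(flat))
            flat.extend(char_ngrams(t))
        x = self.emb(torch.tensor(flat), torch.tensor(offsets))
        return F.normalize(x, dim=-1)

def contrastive_loss(mention_vecs, entity_vecs, temperature=0.07):
    # In-batch negatives: the i-th mention should match the i-th entity.
    logits = mention_vecs @ entity_vecs.T / temperature
    targets = torch.arange(len(mention_vecs))
    return F.cross_entropy(logits, targets)

# Toy usage: noisy mentions paired with their standard entities.
pairs = [("postgres db", "PostgreSQL"), ("k8s", "Kubernetes"), ("py torch", "PyTorch")]
encoder = SiameseEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for _ in range(50):
    m = encoder([p[0] for p in pairs])
    e = encoder([p[1] for p in pairs])
    loss = contrastive_loss(m, e)
    opt.zero_grad(); loss.backward(); opt.step()

# Standardization = nearest catalogue entity in embedding space.
catalogue = ["PostgreSQL", "Kubernetes", "PyTorch"]
with torch.no_grad():
    scores = encoder(["postgre sql"]) @ encoder(catalogue).T
print(catalogue[scores.argmax().item()])
```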
2306.04459
|
2023-06-05T06:46:53Z
|
Uncertainty in Natural Language Processing: Sources, Quantification, and
Applications
|
[
"Mengting Hu",
"Zhen Zhang",
"Shiwan Zhao",
"Minlie Huang",
"Bingzhe Wu"
] |
As a main field of artificial intelligence, natural language processing (NLP)
has achieved remarkable success via deep neural networks. Many NLP tasks
have been addressed in a unified manner, with various tasks being associated
with each other through sharing the same paradigm. However, neural networks are
black boxes that rely on probability computation, so making mistakes is
inevitable. Estimating the reliability and trustworthiness (in other words,
uncertainty) of neural networks has therefore become a key research direction,
which plays a crucial role in reducing models' risks and making better
decisions. In this survey, we provide a comprehensive review of
uncertainty-related works in the NLP field. Considering the characteristics of
the data and paradigms, we first categorize the sources of uncertainty in
natural language into three types: input, system, and output. Then, we
systematically review uncertainty
quantification approaches and the main applications. Finally, we discuss the
challenges of uncertainty estimation in NLP and discuss potential future
directions, taking into account recent trends in the field. Though there have
been a few surveys about uncertainty estimation, our work is the first to
review uncertainty from the NLP perspective.
|
[
"cs.CL"
] | false |
2306.05431
|
2023-06-05T08:42:59Z
|
LexGPT 0.1: pre-trained GPT-J models with Pile of Law
|
[
"Jieh-Sheng Lee"
] |
This research aims to build generative language models specialized for the
legal domain. The manuscript presents the development of LexGPT models based on
GPT-J models and pre-trained with Pile of Law. The foundation model built in
this manuscript is the initial step for the development of future applications
in the legal domain, such as further training with reinforcement learning from
human feedback. Another objective of this manuscript is to assist legal
professionals in utilizing language models through the "No Code" approach. By
fine-tuning models with specialized data and without modifying any source code,
legal professionals can create custom language models for downstream tasks with
minimum effort and technical knowledge. The downstream task in this manuscript
is to turn a LexGPT model into a classifier, although the performance is
notably lower than the state-of-the-art result. How to enhance downstream task
performance without modifying the model or its source code is a research topic
for future exploration.
|
[
"cs.CL"
] | false |
2306.02553
|
2023-06-05T03:00:10Z
|
Learning to Relate to Previous Turns in Conversational Search
|
[
"Fengran Mo",
"Jian-Yun Nie",
"Kaiyu Huang",
"Kelong Mao",
"Yutao Zhu",
"Peng Li",
"Yang Liu"
] |
Conversational search allows a user to interact with a search system in
multiple turns. A query is strongly dependent on the conversation context. An
effective way to improve retrieval effectiveness is to expand the current query
with historical queries. However, not all previous queries are related to, or
useful for expanding, the current query. In this paper, we propose a new
method to select relevant historical queries that are useful for the current
query. To cope with the lack of labeled training data, we use a pseudo-labeling
approach to annotate useful historical queries based on their impact on the
retrieval results. The pseudo-labeled data are used to train a selection model.
We further propose a multi-task learning framework to jointly train the
selector and the retriever during fine-tuning, allowing us to mitigate the
possible inconsistency between the pseudo labels and the changed retriever.
Extensive experiments on four conversational search datasets demonstrate the
effectiveness and broad applicability of our method compared with several
strong baselines.
|
[
"cs.IR",
"cs.CL"
] | false |
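A minimal sketch of the pseudo-labeling step described in the abstract above: a historical turn is marked useful if expanding the current query with it does not hurt the rank of a passage known to be relevant. The toy TF-IDF retriever, corpus, and usefulness criterion are assumptions for illustration, not the paper's setup.

```python
"""Self-contained sketch of pseudo-labeling historical turns by their impact
on retrieval (toy retriever and threshold are assumptions, not the paper's)."""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the eiffel tower is located in paris france",
    "python is a popular programming language",
    "the louvre museum in paris holds the mona lisa",
]
relevant_doc = 2          # index of the passage known to answer the query
current_query = "where is the mona lisa"
history = ["tell me about paris", "what is python used for"]

vec = TfidfVectorizer().fit(corpus)
doc_matrix = vec.transform(corpus)

def rank_of_relevant(query: str) -> int:
    """Rank (1 = best) of the relevant passage under the toy retriever."""
    sims = cosine_similarity(vec.transform([query]), doc_matrix)[0]
    order = sims.argsort()[::-1]
    return int((order == relevant_doc).nonzero()[0][0]) + 1

base_rank = rank_of_relevant(current_query)
pseudo_labels = {}
for turn in history:
    expanded_rank = rank_of_relevant(turn + " " + current_query)
    # Useful iff expansion does not hurt the rank of the relevant passage.
    pseudo_labels[turn] = expanded_rank <= base_rank
print(pseudo_labels)
```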
2306.02612
|
2023-06-05T06:01:00Z
|
Building Resilient SMEs: Harnessing Large Language Models for Cyber
Security in Australia
|
[
"Benjamin Kereopa-Yorke"
] |
The escalating digitalisation of our lives and enterprises has led to a
parallel growth in the complexity and frequency of cyber-attacks. Small and
medium-sized enterprises (SMEs), particularly in Australia, are experiencing
increased vulnerability to cyber threats, posing a significant challenge to the
nation's cyber security landscape. Embracing transformative technologies such
as Artificial Intelligence (AI), Machine Learning (ML) and Large Language
Models (LLMs) can potentially strengthen cyber security policies for Australian
SMEs. However, their practical application, advantages, and limitations remain
underexplored, with prior research mainly focusing on large corporations. This
study aims to address this gap by providing a comprehensive understanding of
the potential role of LLMs in enhancing cyber security policies for Australian
SMEs. Employing a mixed-methods study design, this research includes a
literature review, qualitative analysis of SME case studies, and a quantitative
assessment of LLM performance metrics in cyber security applications. The
findings highlight the promising potential of LLMs across various performance
criteria, including relevance, accuracy, and applicability, though gaps remain
in areas such as completeness and clarity. The study underlines the importance
of integrating human expertise with LLM technology and refining model
development to address these limitations. By proposing a robust conceptual
framework guiding the effective adoption of LLMs, this research aims to
contribute to a safer and more resilient cyber environment for Australian SMEs,
enabling sustainable growth and competitiveness in the digital era.
|
[
"cs.CR",
"cs.CL"
] | false |
2306.02679
|
2023-06-05T08:11:59Z
|
Joint Pre-training and Local Re-training: Transferable Representation
Learning on Multi-source Knowledge Graphs
|
[
"Zequn Sun",
"Jiacheng Huang",
"Jinghao Lin",
"Xiaozhou Xu",
"Qijin Chen",
"Wei Hu"
] |
In this paper, we present the "joint pre-training and local re-training"
framework for learning and applying multi-source knowledge graph (KG)
embeddings. We are motivated by the fact that different KGs contain
complementary information to improve KG embeddings and downstream tasks. We
pre-train a large teacher KG embedding model over linked multi-source KGs and
distill knowledge to train a student model for a task-specific KG. To enable
knowledge transfer across different KGs, we use entity alignment to build a
linked subgraph for connecting the pre-trained KGs and the target KG. The
linked subgraph is re-trained for three-level knowledge distillation from the
teacher to the student, i.e., feature knowledge distillation, network knowledge
distillation, and prediction knowledge distillation, to generate more
expressive embeddings. The teacher model can be reused for different target KGs
and tasks without having to train from scratch. We conduct extensive
experiments to demonstrate the effectiveness and efficiency of our framework.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.02682
|
2023-06-05T08:18:01Z
|
End-to-End Word-Level Pronunciation Assessment with MASK Pre-training
|
[
"Yukang Liang",
"Kaitao Song",
"Shaoguang Mao",
"Huiqiang Jiang",
"Luna Qiu",
"Yuqing Yang",
"Dongsheng Li",
"Linli Xu",
"Lili Qiu"
] |
Pronunciation assessment is a major challenge in the computer-aided
pronunciation training system, especially at the word (phoneme)-level. To
obtain word (phoneme)-level scores, current methods usually rely on aligning
components to obtain acoustic features of each word (phoneme), which limits the
performance of assessment to the accuracy of alignments. Therefore, to address
this problem, we propose a simple yet effective method, namely
Masked pre-training for Pronunciation Assessment (MPA). Specifically, by
incorporating a mask-predict
strategy, our MPA supports end-to-end training without leveraging any aligning
components and can solve misalignment issues to a large extent during
prediction. Furthermore, we design two evaluation strategies to enable our
model to conduct assessments in both unsupervised and supervised settings.
Experimental results on SpeechOcean762 dataset demonstrate that MPA could
achieve better performance than previous methods, without any explicit
alignment. In spite of this, MPA still has some limitations, such as requiring
more inference time and reference text, which we expect to be addressed in
future work.
|
[
"cs.CL",
"eess.AS"
] | false |
2306.02707
|
2023-06-05T08:58:39Z
|
Orca: Progressive Learning from Complex Explanation Traces of GPT-4
|
[
"Subhabrata Mukherjee",
"Arindam Mitra",
"Ganesh Jawahar",
"Sahaj Agarwal",
"Hamid Palangi",
"Ahmed Awadallah"
] |
Recent research has focused on enhancing the capability of smaller models
through imitation learning, drawing on the outputs generated by large
foundation models (LFMs). A number of issues impact the quality of these
models, ranging from limited imitation signals from shallow LFM outputs; small
scale homogeneous training data; and most notably a lack of rigorous evaluation
resulting in overestimating the small models' capability, as they tend to learn
to imitate the style, but not the reasoning process of LFMs. To address these
challenges, we develop Orca (We are working with our legal team to publicly
release a diff of the model weights in accordance with LLaMA's release policy
to be published at https://aka.ms/orca-lm), a 13-billion parameter model that
learns to imitate the reasoning process of LFMs. Orca learns from rich signals
from GPT-4 including explanation traces; step-by-step thought processes; and
other complex instructions, guided by teacher assistance from ChatGPT. To
promote this progressive learning, we tap into large-scale and diverse
imitation data with judicious sampling and selection. Orca surpasses
conventional state-of-the-art instruction-tuned models such as Vicuna-13B by
more than 100% in complex zero-shot reasoning benchmarks like Big-Bench Hard
(BBH) and 42% on AGIEval. Moreover, Orca reaches parity with ChatGPT on the BBH
benchmark and shows competitive performance (4 pts gap with optimized system
message) in professional and academic examinations like the SAT, LSAT, GRE, and
GMAT, both in zero-shot settings without CoT; while trailing behind GPT-4. Our
research indicates that learning from step-by-step explanations, whether these
are generated by humans or more advanced AI models, is a promising direction to
improve model capabilities and skills.
|
[
"cs.CL",
"cs.LG"
] | true |
2306.02790
|
2023-06-05T11:35:40Z
|
Exploring the Relationship between Alignment and Cross-lingual Transfer
in Multilingual Transformers
|
[
"Félix Gaschi",
"Patricio Cerda",
"Parisa Rastin",
"Yannick Toussaint"
] |
Without any explicit cross-lingual training data, multilingual language
models can achieve cross-lingual transfer. One common way to improve this
transfer is to perform realignment steps before fine-tuning, i.e., to train the
model to build similar representations for pairs of words from translated
sentences. But such realignment methods were found to not always improve
results across languages and tasks, which raises the question of whether
aligned representations are truly beneficial for cross-lingual transfer. We
provide evidence that alignment is actually significantly correlated with
cross-lingual transfer across languages, models and random seeds. We show that
fine-tuning can have a significant impact on alignment, depending mainly on the
downstream task and the model. Finally, we show that realignment can, in some
instances, improve cross-lingual transfer, and we identify conditions in which
realignment methods provide significant improvements. Namely, we find that
realignment works better on tasks for which alignment is correlated with
cross-lingual transfer when generalizing to a distant language and with smaller
models, as well as when using a bilingual dictionary rather than FastAlign to
extract realignment pairs. For example, for POS-tagging, between English and
Arabic, realignment can bring a +15.8 accuracy improvement on distilmBERT, even
outperforming XLM-R Large by 1.7. We thus advocate for further research on
realignment methods for smaller multilingual models as an alternative to
scaling.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.02840
|
2023-06-05T12:44:18Z
|
Learning to Substitute Spans towards Improving Compositional
Generalization
|
[
"Zhaoyi Li",
"Ying Wei",
"Defu Lian"
] |
Despite the rising prevalence of neural sequence models, recent empirical
evidence suggests their deficiency in compositional generalization. One of the
current de-facto solutions to this problem is compositional data augmentation,
aiming to incur additional compositional inductive bias. Nonetheless, the
improvement offered by existing handcrafted augmentation strategies is limited
when successful systematic generalization of neural sequence models requires
multi-grained compositional bias (i.e., not limited to either lexical or
structural biases only) or differentiation of training sequences in an
imbalanced difficulty distribution. To address the two challenges, we first
propose a novel compositional augmentation strategy dubbed Span Substitution
(SpanSub) that enables multi-grained composition of substantial substructures
in the whole training set. Over and above that, we introduce the Learning to
Substitute Span (L2S2) framework, which empowers the learning of span
substitution probabilities in SpanSub in an end-to-end manner by maximizing the
loss of neural sequence models, so as to outweigh those challenging
compositions with elusive concepts and novel surroundings. Our empirical
results on three standard compositional generalization benchmarks, including
SCAN, COGS and GeoQuery (with an improvement of at most 66.5%, 10.3%, and 1.2%,
respectively), demonstrate the superiority of SpanSub, the learning framework
L2S2, and their combination.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.02842
|
2023-06-05T12:48:56Z
|
Improving Conversational Recommendation Systems via Counterfactual Data
Simulation
|
[
"Xiaolei Wang",
"Kun Zhou",
"Xinyu Tang",
"Wayne Xin Zhao",
"Fan Pan",
"Zhao Cao",
"Ji-Rong Wen"
] |
Conversational recommender systems (CRSs) aim to provide recommendation
services via natural language conversations. Although a number of approaches
have been proposed for developing capable CRSs, they typically rely on
sufficient training data. Since it is difficult to annotate
recommendation-oriented dialogue datasets, existing CRS approaches often suffer
from the issue of insufficient training due to the scarcity of training data.
To address this issue, in this paper, we propose a CounterFactual data
simulation approach for CRS, named CFCRS, to alleviate the issue of data
scarcity in CRSs. Our approach is developed based on the framework of
counterfactual data augmentation, which gradually incorporates rewriting of
the user preference from a real dialogue without interfering with the entire
conversation flow. To develop our approach, we characterize user preference and
organize the conversation flow by the entities involved in the dialogue, and
design a multi-stage recommendation dialogue simulator based on a conversation
flow language model. Under the guidance of the learned user preference and
dialogue schema, the flow language model can produce reasonable, coherent
conversation flows, which can be further realized into complete dialogues.
Based on the simulator, we perform the intervention at the representations of
the interacted entities of target users, and design an adversarial training
method with a curriculum schedule that can gradually optimize the data
augmentation strategy. Extensive experiments show that our approach can
consistently boost the performance of several competitive CRSs, and outperform
other data augmentation methods, especially when the training data is limited.
Our code is publicly available at https://github.com/RUCAIBox/CFCRS.
|
[
"cs.CL",
"cs.IR"
] | false |
2306.02907
|
2023-06-05T14:12:46Z
|
SelfEvolve: A Code Evolution Framework via Large Language Models
|
[
"Shuyang Jiang",
"Yuhao Wang",
"Yu Wang"
] |
Large language models (LLMs) have already revolutionized code generation,
after being pretrained on publicly available code data. However, while various
methods have been proposed to augment LLMs with retrieved knowledge and enhance
the quality of code generation, the performance of these retrieval-based
methods is limited by the strength of the retrievers used. In addition, while
LLMs show great emergent ability, they still struggle to produce the correct
code in one turn. To address these challenges, we propose a novel two-step
pipeline, called SelfEvolve, that leverages LLMs as both knowledge providers and
self-reflective programmers. Unlike retrieval-based methods, SelfEvolve obtains
the knowledge from input prompts and generates intermediate code based on the
generated knowledge. After that, SelfEvolve asks the LLM to act as an expert
programmer to perform debugging for the generated code. This is achieved by
receiving the error message from the interpreter, without requiring special
test cases for correctness verification. We evaluate SelfEvolve on three code
generation datasets, including DS-1000 for data science code, HumanEval for
software engineering code, and TransCoder for C++-to-Python translation. Our
empirical experiments show that SelfEvolve outperforms strong baselines by a
significant margin on all datasets. We also conduct exhaustive analytical
experiments to validate the effectiveness of the two stages of SelfEvolve, and
find that both are superior to other prompting-based methods. Further
scalability analysis demonstrates that SelfEvolve can be adapted to other more
advanced models, such as GPT-4, and bring consistent efficacy improvements.
|
[
"cs.CL",
"cs.SE"
] | false |
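The two-step pipeline described in the SelfEvolve abstract (knowledge prompting, then interpreter-driven self-debugging) can be sketched as follows; `call_llm` is a placeholder for any chat-completion client, and the prompts and retry budget are assumptions, not the paper's implementation.

```python
"""Sketch of a generate-then-self-debug loop in the spirit of the SelfEvolve
abstract. `call_llm` is a placeholder stub, not part of any released code."""
import traceback

def call_llm(prompt: str) -> str:
    # Placeholder: plug in an actual LLM client here.
    raise NotImplementedError

def self_evolve(task: str, max_debug_rounds: int = 3) -> str:
    # Step 1: ask the model for background knowledge, then for code that uses it.
    knowledge = call_llm(f"List APIs and facts needed to solve:\n{task}")
    code = call_llm(f"Using this knowledge:\n{knowledge}\nWrite Python code for:\n{task}")
    # Step 2: run the code; on failure, feed the interpreter error back for repair.
    for _ in range(max_debug_rounds):
        try:
            exec(compile(code, "<generated>", "exec"), {})
            return code                      # runs without raising: accept it
        except Exception:
            error = traceback.format_exc()
            code = call_llm(f"The code:\n{code}\nfailed with:\n{error}\nReturn a fixed version.")
    return code
```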
2306.02955
|
2023-06-05T15:23:55Z
|
A Simple and Flexible Modeling for Mental Disorder Detection by Learning
from Clinical Questionnaires
|
[
"Hoyun Song",
"Jisu Shin",
"Huije Lee",
"Jong C. Park"
] |
Social media is one of the most highly sought resources for analyzing
characteristics of the language of its users. In particular, many researchers
utilized various linguistic features of mental health problems from social
media. However, existing approaches to detecting mental disorders face critical
challenges, such as the scarcity of high-quality data or the trade-off between
addressing the complexity of models and presenting interpretable results
grounded in expert domain knowledge. To address these challenges, we design a
simple but flexible model that preserves domain-based interpretability. We
propose a novel approach that captures the semantic meanings directly from the
text and compares them to symptom-related descriptions. Experimental results
demonstrate that our model outperforms relevant baselines on various mental
disorder detection tasks. Our detailed analysis shows that the proposed model
is effective at leveraging domain knowledge, transferable to other mental
disorders, and providing interpretable detection results.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.02978
|
2023-06-05T15:50:57Z
|
Which Argumentative Aspects of Hate Speech in Social Media can be
reliably identified?
|
[
"Damián Furman",
"Pablo Torres",
"José A. Rodríguez",
"Diego Letzen",
"Vanina Martínez",
"Laura Alonso Alemany"
] |
With the increasing diversity of use cases of large language models, a more
informative treatment of texts seems necessary. An argumentative analysis could
foster a more reasoned usage of chatbots, text completion mechanisms or other
applications. However, it is unclear which aspects of argumentation can be
reliably identified and integrated in language models. In this paper, we
present an empirical assessment of the reliability with which different
argumentative aspects can be automatically identified in hate speech in social
media. We have enriched the Hateval corpus (Basile et al. 2019) with a manual
annotation of some argumentative components, adapted from Wagemans (2016)'s
Periodic Table of Arguments. We show that some components can be identified
with reasonable reliability. For those that present a high error ratio, we
analyze the patterns of disagreement between expert annotators and errors in
automatic procedures, and we propose adaptations of those categories that can
be more reliably reproduced.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.02980
|
2023-06-05T15:51:58Z
|
KNOW How to Make Up Your Mind! Adversarially Detecting and Alleviating
Inconsistencies in Natural Language Explanations
|
[
"Myeongjun Jang",
"Bodhisattwa Prasad Majumder",
"Julian McAuley",
"Thomas Lukasiewicz",
"Oana-Maria Camburu"
] |
While recent works have been considerably improving the quality of the
natural language explanations (NLEs) generated by a model to justify its
predictions, there is very limited research in detecting and alleviating
inconsistencies among generated NLEs. In this work, we leverage external
knowledge bases to significantly improve on an existing adversarial attack for
detecting inconsistent NLEs. We apply our attack to high-performing NLE models
and show that models with higher NLE quality do not necessarily generate fewer
inconsistencies. Moreover, we propose an off-the-shelf mitigation method to
alleviate inconsistencies by grounding the model into external background
knowledge. Our method decreases the inconsistencies of previous high-performing
NLE models as detected by our attack.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.03067
|
2023-06-05T17:43:53Z
|
Interactive Editing for Text Summarization
|
[
"Yujia Xie",
"Xun Wang",
"Si-Qing Chen",
"Wayne Xiong",
"Pengcheng He"
] |
Summarizing lengthy documents is a common and essential task in our daily
lives. Although recent advancements in neural summarization models can assist
in crafting general-purpose summaries, human writers often have specific
requirements that call for a more customized approach. To address this need, we
introduce REVISE (Refinement and Editing via Iterative Summarization
Enhancement), an innovative framework designed to facilitate iterative editing
and refinement of draft summaries by human writers. Within our framework,
writers can effortlessly modify unsatisfactory segments at any location or
length and provide optional starting phrases -- our system will generate
coherent alternatives that seamlessly integrate with the existing summary. At
its core, REVISE incorporates a modified fill-in-the-middle model with the
encoder-decoder architecture while developing novel evaluation metrics tailored
for the summarization task. In essence, our framework empowers users to create
high-quality, personalized summaries by effectively harnessing both human
expertise and AI capabilities, ultimately transforming the summarization
process into a truly collaborative and adaptive experience.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.03078
|
2023-06-05T17:53:28Z
|
SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight
Compression
|
[
"Tim Dettmers",
"Ruslan Svirschevski",
"Vage Egiazarian",
"Denis Kuznedelev",
"Elias Frantar",
"Saleh Ashkboos",
"Alexander Borzunov",
"Torsten Hoefler",
"Dan Alistarh"
] |
Recent advances in large language model (LLM) pretraining have led to
high-quality LLMs with impressive abilities. By compressing such LLMs via
quantization to 3-4 bits per parameter, they can fit into memory-limited
devices such as laptops and mobile phones, enabling personalized use. However,
quantization down to 3-4 bits per parameter usually leads to moderate-to-high
accuracy losses, especially for smaller models in the 1-10B parameter range,
which are well-suited for edge deployments. To address this accuracy issue, we
introduce the Sparse-Quantized Representation (SpQR), a new compressed format
and quantization technique which enables for the first time near-lossless
compression of LLMs across model scales, while reaching similar compression
levels to previous methods. SpQR works by identifying and isolating outlier
weights, which cause particularly-large quantization errors, and storing them
in higher precision, while compressing all other weights to 3-4 bits, and
achieves relative accuracy losses of less than 1% in perplexity for
highly-accurate LLaMA and Falcon LLMs. This makes it possible to run a 33B
parameter LLM on a single 24 GB consumer GPU without any performance
degradation, at a 15% speedup, thus making powerful LLMs available to consumers
without any downsides. SpQR comes with efficient algorithms for both encoding
weights into its format, as well as decoding them efficiently at runtime.
Specifically, we provide an efficient GPU inference algorithm for SpQR which
yields faster inference than 16-bit baselines at similar accuracy, while
enabling memory compression gains of more than 4x.
|
[
"cs.CL",
"cs.LG"
] | false |
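A toy numpy illustration of the outlier-isolation idea in the SpQR abstract: quantize weights in small groups, then keep the few weights with the largest quantization error in full precision. Group size, bit width, and the outlier fraction below are illustrative assumptions rather than the paper's configuration.

```python
"""Toy outlier-aware quantization sketch (assumed parameters, not SpQR itself)."""
import numpy as np

def quantize_group(w, bits=3):
    """Uniform asymmetric quantization of one group; returns dequantized values."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / (2**bits - 1) or 1.0
    q = np.round((w - lo) / scale)
    return q * scale + lo

def spqr_like(weights, bits=3, group_size=16, outlier_frac=0.01):
    w = weights.ravel().astype(np.float32)
    dense = np.empty_like(w)
    for start in range(0, len(w), group_size):
        dense[start:start + group_size] = quantize_group(w[start:start + group_size], bits)
    # Keep the weights with the largest quantization error in full precision.
    err = np.abs(w - dense)
    k = max(1, int(outlier_frac * len(w)))
    outlier_idx = np.argpartition(err, -k)[-k:]
    dense[outlier_idx] = w[outlier_idx]          # stored sparsely in practice
    return dense.reshape(weights.shape), outlier_idx

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)
W[3, 5] = 8.0                                    # an artificial outlier weight
W_hat, outliers = spqr_like(W)
print("mean abs error:", np.abs(W - W_hat).mean(), "| outliers kept:", len(outliers))
```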
2306.03090
|
2023-06-05T17:59:21Z
|
Is ChatGPT a Good Teacher Coach? Measuring Zero-Shot Performance For
Scoring and Providing Actionable Insights on Classroom Instruction
|
[
"Rose E. Wang",
"Dorottya Demszky"
] |
Coaching, which involves classroom observation and expert feedback, is a
widespread and fundamental part of teacher training. However, the majority of
teachers do not have access to consistent, high quality coaching due to limited
resources and access to expertise. We explore whether generative AI could
become a cost-effective complement to expert feedback by serving as an
automated teacher coach. In doing so, we propose three teacher coaching tasks
for generative AI: (A) scoring transcript segments based on classroom
observation instruments, (B) identifying highlights and missed opportunities
for good instructional strategies, and (C) providing actionable suggestions for
eliciting more student reasoning. We recruit expert math teachers to evaluate
the zero-shot performance of ChatGPT on each of these tasks for elementary math
classroom transcripts. Our results reveal that ChatGPT generates responses that
are relevant to improving instruction, but they are often not novel or
insightful. For example, 82% of the model's suggestions point to places in the
transcript where the teacher is already implementing that suggestion. Our work
highlights the challenges of producing insightful, novel and truthful feedback
for teachers while paving the way for future research to address these
obstacles and improve the capacity of generative AI to coach teachers.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.03166
|
2023-06-05T18:20:27Z
|
Unsupervised Dense Retrieval with Relevance-Aware Contrastive
Pre-Training
|
[
"Yibin Lei",
"Liang Ding",
"Yu Cao",
"Changtong Zan",
"Andrew Yates",
"Dacheng Tao"
] |
Dense retrievers have achieved impressive performance, but their demand for
abundant training data limits their application scenarios. Contrastive
pre-training, which constructs pseudo-positive examples from unlabeled data,
has shown great potential to solve this problem. However, the pseudo-positive
examples crafted by data augmentations can be irrelevant. To this end, we
propose relevance-aware contrastive learning. It takes the intermediate-trained
model itself as an imperfect oracle to estimate the relevance of positive pairs
and adaptively weighs the contrastive loss of different pairs according to the
estimated relevance. Our method consistently improves the SOTA unsupervised
Contriever model on the BEIR and open-domain QA retrieval benchmarks. Further
exploration shows that our method can not only beat BM25 after further
pre-training on the target corpus but also serves as a good few-shot learner.
Our code is publicly available at https://github.com/Yibin-Lei/ReContriever.
|
[
"cs.IR",
"cs.CL"
] | false |
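A minimal sketch of a relevance-weighted contrastive loss in the spirit of the abstract above: the model's own positive-pair similarity down-weights pseudo-positive pairs that look irrelevant. The specific weighting scheme (a softmax over in-batch positive similarities) is an assumption made for brevity, not the paper's exact formulation.

```python
"""Relevance-weighted InfoNCE sketch (weighting scheme is an assumption)."""
import torch
import torch.nn.functional as F

def relevance_weighted_infonce(q, p, temperature=0.05):
    # q, p: (batch, dim) L2-normalized embeddings of query/pseudo-positive pairs.
    logits = q @ p.T / temperature                      # in-batch negatives
    targets = torch.arange(q.size(0))
    per_pair_loss = F.cross_entropy(logits, targets, reduction="none")
    with torch.no_grad():
        pos_sim = (q * p).sum(-1)                       # model's relevance estimate
        weights = torch.softmax(pos_sim / temperature, dim=0) * q.size(0)
    return (weights * per_pair_loss).mean()

q = F.normalize(torch.randn(8, 32), dim=-1)
p = F.normalize(torch.randn(8, 32), dim=-1)
print(relevance_weighted_infonce(q, p))
```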
2306.03197
|
2023-06-05T19:16:37Z
|
AutoScrum: Automating Project Planning Using Large Language Models
|
[
"Martin Schroder"
] |
Recent advancements in the field of large language models have made it
possible to use language models for advanced reasoning. In this paper we
leverage this ability for designing complex project plans based only on knowing
the current state and the desired state. Two approaches are demonstrated - a
scrum based approach and a shortcut plan approach. The scrum based approach
executes an automated process of requirements gathering, user story mapping,
feature identification, task decomposition and finally generates questions and
search terms for seeking out domain specific information to assist with task
completion. The shortcut approach looks at the most recent snapshot of the current
and desired state and generates the next most reasonable task to do in order to
get to the desired state as quickly as possible. In this paper we automate
everything using a novel concept of "Language Programs". These are programs
written in natural language designed to process input data through the language
model. Guidance language is used for all LLM programs. All demo source code for
this paper is available at https://github.com/autoscrum/autoscrum
|
[
"cs.AI",
"cs.CL"
] | false |
2306.03203
|
2023-06-05T19:23:34Z
|
A Static Evaluation of Code Completion by Large Language Models
|
[
"Hantian Ding",
"Varun Kumar",
"Yuchen Tian",
"Zijian Wang",
"Rob Kwiatkowski",
"Xiaopeng Li",
"Murali Krishna Ramanathan",
"Baishakhi Ray",
"Parminder Bhatia",
"Sudipta Sengupta",
"Dan Roth",
"Bing Xiang"
] |
Large language models trained on code have shown great potential to increase
productivity of software developers. Several execution-based benchmarks have
been proposed to evaluate functional correctness of model-generated code on
simple programming problems. Nevertheless, it is expensive to perform the same
evaluation on complex real-world projects considering the execution cost. On
the contrary, static analysis tools such as linters, which can detect errors
without running the program, haven't been well explored for evaluating code
generation models. In this work, we propose a static evaluation framework to
quantify static errors in Python code completions, by leveraging Abstract
Syntax Trees. Compared with execution-based evaluation, our method is not only
more efficient, but also applicable to code in the wild. For experiments, we
collect code context from open source repos to generate one million function
bodies using public models. Our static analysis reveals that Undefined Name and
Unused Variable are the most common errors among others made by language
models. Through extensive studies, we also show the impact of sampling
temperature, model size, and context on static errors in code completions.
|
[
"cs.CL",
"cs.SE"
] | true |
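A rough sketch of AST-based static checking of generated Python, in the spirit of the abstract above; only two illustrative checks are shown (syntax errors and unused variables), whereas the paper's framework is considerably more complete.

```python
"""Minimal AST-based static checks over a generated function body (illustrative)."""
import ast

def static_check(source: str) -> list[str]:
    issues = []
    try:
        tree = ast.parse(source)
    except SyntaxError as e:
        return [f"syntax error: {e.msg} (line {e.lineno})"]
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)
    for name in sorted(assigned - used):
        issues.append(f"unused variable: {name}")
    return issues

completion = """
def mean(xs):
    total = sum(xs)
    count = len(xs)
    return total / len(xs)
"""
print(static_check(completion))   # flags 'count' as unused
```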
2306.03208
|
2023-06-05T19:30:41Z
|
NLU on Data Diets: Dynamic Data Subset Selection for NLP Classification
Tasks
|
[
"Jean-Michel Attendu",
"Jean-Philippe Corbeil"
] |
Finetuning large language models inflates the costs of NLU applications and
remains the bottleneck of development cycles. Recent works in computer vision
use data pruning to reduce training time. Pruned data selection with static
methods is based on a score calculated for each training example prior to
finetuning, which involves important computational overhead. Moreover, the
score may not necessarily be representative of sample importance throughout the
entire training duration. We propose to address these issues with a refined
version of dynamic data pruning, a curriculum which periodically scores and
discards unimportant examples during finetuning. Our method leverages an EL2N
metric that we extend to the joint intent and slot classification task, and an
initial finetuning phase on the full train set. Our results on the GLUE
benchmark and four joint NLU datasets show a better time-accuracy trade-off
compared to static methods. Our method preserves full accuracy while training
on 50% of the data points and reduces computational times by up to 41%. If we
tolerate instead a minor drop of accuracy of 1%, we can prune 80% of the
training examples for a reduction in finetuning time reaching 66%.
|
[
"cs.CL",
"cs.AI"
] | false |
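The EL2N score mentioned in the abstract above has a simple closed form (the L2 norm of the softmax error vector); below is a hedged sketch of computing it and keeping only the hardest examples for the next finetuning phase. The keep ratio and the scheduling are illustrative assumptions, not the paper's settings.

```python
"""EL2N scoring and a single dynamic-pruning step (keep ratio is an assumption)."""
import torch
import torch.nn.functional as F

def el2n_scores(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # EL2N = || softmax(logits) - one_hot(label) ||_2, computed per example.
    probs = F.softmax(logits, dim=-1)
    one_hot = F.one_hot(labels, num_classes=logits.size(-1)).float()
    return (probs - one_hot).norm(dim=-1)

def prune_indices(logits, labels, keep_ratio=0.5):
    scores = el2n_scores(logits, labels)
    k = max(1, int(keep_ratio * len(scores)))
    return torch.topk(scores, k).indices        # keep the hardest examples

# Toy usage: pretend these are the model's current predictions on the train set.
logits = torch.randn(1000, 7)
labels = torch.randint(0, 7, (1000,))
kept = prune_indices(logits, labels, keep_ratio=0.5)
print(f"training continues on {len(kept)} of {len(labels)} examples")
```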
2306.03313
|
2023-06-05T23:55:09Z
|
A Scalable and Adaptive System to Infer the Industry Sectors of
Companies: Prompt + Model Tuning of Generative Language Models
|
[
"Lele Cao",
"Vilhelm von Ehrenheim",
"Astrid Berghult",
"Cecilia Henje",
"Richard Anselmo Stahl",
"Joar Wandborg",
"Sebastian Stan",
"Armin Catovic",
"Erik Ferm",
"Hannes Ingelhag"
] |
Private Equity (PE) firms operate investment funds by acquiring and
managing companies to achieve a high return upon selling. Many PE funds are
thematic, meaning investment professionals aim to identify trends by covering
as many industry sectors as possible, and picking promising companies within
these sectors. So, inferring sectors for companies is critical to the success
of thematic PE funds. In this work, we standardize the sector framework and
discuss the typical challenges; we then introduce our sector inference system
addressing these challenges. Specifically, our system is built on a
medium-sized generative language model, finetuned with a prompt + model tuning
procedure. The deployed model demonstrates superior performance compared to
common baselines. The system has been serving many PE professionals for over a
year, showing great scalability to data volume and adaptability to any change
in sector framework and/or annotation.
|
[
"cs.CL",
"cs.AI",
"68T50, 68T05",
"I.2.7; I.2.1"
] | false |
2306.03315
|
2023-06-05T23:57:52Z
|
Few Shot Rationale Generation using Self-Training with Dual Teachers
|
[
"Aditya Srikanth Veerubhotla",
"Lahari Poddar",
"Jun Yin",
"György Szarvas",
"Sharanya Eswaran"
] |
Self-rationalizing models that also generate a free-text explanation for
their predicted labels are an important tool to build trustworthy AI
applications. Since generating explanations for annotated labels is a laborious
and costly process, recent models rely on large pretrained language models
(PLMs) as their backbone and few-shot learning. In this work we explore a
self-training approach leveraging both labeled and unlabeled data to further
improve few-shot models, under the assumption that neither human written
rationales nor annotated task labels are available at scale. We introduce a
novel dual-teacher learning framework, which learns two specialized teacher
models for task prediction and rationalization using self-training and distills
their knowledge into a multi-tasking student model that can jointly generate
the task label and rationale. Furthermore, we formulate a new loss function,
Masked Label Regularization (MLR) which promotes explanations to be strongly
conditioned on predicted labels. Evaluation on three public datasets
demonstrates that the proposed methods are effective in modeling task labels and
generating faithful rationales.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.02534
|
2023-06-05T01:55:33Z
|
Incorporating L2 Phonemes Using Articulatory Features for Robust Speech
Recognition
|
[
"Jisung Wang",
"Haram Lee",
"Myungwoo Oh"
] |
The limited availability of non-native speech datasets presents a major
challenge in automatic speech recognition (ASR) to narrow the performance gap
between native and non-native speakers. To address this, the focus of this
study is on the efficient incorporation of the L2 phonemes, which in this work
refer to Korean phonemes, through articulatory feature analysis. This not only
enables accurate modeling of pronunciation variants but also allows for the
utilization of both native Korean and English speech datasets. We employ the
lattice-free maximum mutual information (LF-MMI) objective in an end-to-end
manner, to train the acoustic model to align and predict one of multiple
pronunciation candidates. Experimental results show that the proposed method
improves ASR accuracy for Korean L2 speech by training solely on L1 speech
data. Furthermore, fine-tuning on L2 speech improves recognition accuracy for
both L1 and L2 speech without performance trade-offs.
|
[
"cs.CL",
"cs.LG",
"cs.SD",
"eess.AS"
] | false |
2306.02549
|
2023-06-05T02:52:54Z
|
Evaluation of AI Chatbots for Patient-Specific EHR Questions
|
[
"Alaleh Hamidi",
"Kirk Roberts"
] |
This paper investigates the use of artificial intelligence chatbots for
patient-specific question answering (QA) from clinical notes using several
large language model (LLM) based systems: ChatGPT (versions 3.5 and 4), Google
Bard, and Claude. We evaluate the accuracy, relevance, comprehensiveness, and
coherence of the answers generated by each model using a 5-point Likert scale
on a set of patient-specific questions.
|
[
"cs.CL",
"cs.AI",
"cs.IR"
] | false |
2306.02579
|
2023-06-05T04:10:04Z
|
Cross-Lingual Transfer Learning for Phrase Break Prediction with
Multilingual Language Model
|
[
"Hoyeon Lee",
"Hyun-Wook Yoon",
"Jong-Hwan Kim",
"Jae-Min Kim"
] |
Phrase break prediction is a crucial task for improving the prosody
naturalness of a text-to-speech (TTS) system. However, most proposed phrase
break prediction models are monolingual, trained exclusively on a large amount
of labeled data. In this paper, we address this issue for low-resource
languages with limited labeled data using cross-lingual transfer. We
investigate the effectiveness of zero-shot and few-shot cross-lingual transfer
for phrase break prediction using a pre-trained multilingual language model. We
use manually collected datasets in four Indo-European languages: one
high-resource language and three with limited resources. Our findings
demonstrate that cross-lingual transfer learning can be a particularly
effective approach, especially in the few-shot setting, for improving
performance in low-resource languages. This suggests that cross-lingual
transfer can be inexpensive and effective for developing TTS front-end in
resource-poor languages.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2306.02592
|
2023-06-05T04:46:44Z
|
Graph-Aware Language Model Pre-Training on a Large Graph Corpus Can Help
Multiple Graph Applications
|
[
"Han Xie",
"Da Zheng",
"Jun Ma",
"Houyu Zhang",
"Vassilis N. Ioannidis",
"Xiang Song",
"Qing Ping",
"Sheng Wang",
"Carl Yang",
"Yi Xu",
"Belinda Zeng",
"Trishul Chilimbi"
] |
Model pre-training on large text corpora has been demonstrated effective for
various downstream applications in the NLP domain. In the graph mining domain,
a similar analogy can be drawn for pre-training graph models on large graphs in
the hope of benefiting downstream graph applications, which has also been
explored by several recent studies. However, no existing study has ever
investigated the pre-training of text plus graph models on large heterogeneous
graphs with abundant textual information (a.k.a. large graph corpora) and then
fine-tuning the model on different related downstream applications with
different graph schemas. To address this problem, we propose a framework of
graph-aware language model pre-training (GALM) on a large graph corpus, which
incorporates large language models and graph neural networks, and a variety of
fine-tuning methods on downstream applications. We conduct extensive
experiments on Amazon's real internal datasets and large public datasets.
Comprehensive empirical results and in-depth analysis demonstrate the
effectiveness of our proposed methods along with lessons learned.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2306.02622
|
2023-06-05T06:50:09Z
|
What Makes Entities Similar? A Similarity Flooding Perspective for
Multi-sourced Knowledge Graph Embeddings
|
[
"Zequn Sun",
"Jiacheng Huang",
"Xiaozhou Xu",
"Qijin Chen",
"Weijun Ren",
"Wei Hu"
] |
Joint representation learning over multi-sourced knowledge graphs (KGs)
yields transferable and expressive embeddings that improve downstream tasks.
Entity alignment (EA) is a critical step in this process. Despite recent
considerable research progress in embedding-based EA, how it works remains to
be explored. In this paper, we provide a similarity flooding perspective to
explain existing translation-based and aggregation-based EA models. We prove
that the embedding learning process of these models actually seeks a fixpoint
of pairwise similarities between entities. We also provide experimental
evidence to support our theoretical analysis. We propose two simple but
effective methods inspired by the fixpoint computation in similarity flooding,
and demonstrate their effectiveness on benchmark datasets. Our work bridges the
gap between recent embedding-based models and the conventional similarity
flooding algorithm. It would improve our understanding of and increase our
faith in embedding-based EA.
|
[
"cs.LG",
"cs.AI",
"cs.CL"
] | false |
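A small numeric sketch of the similarity-flooding fixpoint view mentioned in the abstract above: pairwise entity similarities are propagated over neighbours until they stop changing. The toy graphs, propagation weight, and tolerance are assumptions, not the paper's construction.

```python
"""Toy similarity-flooding iteration toward a fixpoint of pairwise similarities."""
import numpy as np

# Adjacency matrices of two tiny KGs (entities 0..2 in each).
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A2 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)

S = np.full((3, 3), 1.0 / 3)          # initial pairwise similarities
alpha = 0.5
for _ in range(100):
    propagated = A1 @ S @ A2.T        # neighbours of similar entities become similar
    propagated /= propagated.max() or 1.0
    S_next = (1 - alpha) * S + alpha * propagated
    if np.abs(S_next - S).max() < 1e-9:
        break                          # (approximate) fixpoint reached
    S = S_next
print(np.round(S, 3))
```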
2306.02680
|
2023-06-05T08:12:17Z
|
BeAts: Bengali Speech Acts Recognition using Multimodal Attention Fusion
|
[
"Ahana Deb",
"Sayan Nag",
"Ayan Mahapatra",
"Soumitri Chattopadhyay",
"Aritra Marik",
"Pijush Kanti Gayen",
"Shankha Sanyal",
"Archi Banerjee",
"Samir Karmakar"
] |
Spoken languages often utilise intonation, rhythm, intensity, and structure
to communicate intention, which can be interpreted differently depending on the
rhythm of speech of the utterance. These speech acts provide the foundation
of communication and are unique in expression to the language. Recent
advancements in attention-based models, demonstrating their ability to learn
powerful representations from multilingual datasets, have performed well in
speech tasks and are ideal to model specific tasks in low resource languages.
Here, we develop a novel multimodal approach combining two models, wav2vec2.0
for audio and MarianMT for text translation, by using multimodal attention
fusion to predict speech acts in our prepared Bengali speech corpus. We also
show that our model BeAts (Bengali speech acts recognition using Multimodal
Attention Fusion) significantly outperforms both the unimodal
baseline using only speech data and a simpler bimodal fusion using both speech
and text data. Project page: https://soumitri2001.github.io/BeAts
|
[
"cs.CL",
"cs.LG",
"cs.SD",
"eess.AS"
] | false |
2306.02771
|
2023-06-05T10:55:15Z
|
Identifying the style by a qualified reader on a short fragment of
generated poetry
|
[
"Boris Orekhov"
] |
Style is an important concept in today's challenges in natural language
generation. After the success of image style transfer, the task of
text style transfer has become relevant and attractive. Researchers are also
interested in reproducing style when generating poetic text.
Evaluating style reproduction in natural poetry generation remains a problem.
I used three character-based LSTM models to assess style reproduction.
All three models were trained on the corpus of texts by famous Russian-speaking
poets. Samples were shown to assessors, who were offered four answer options
for which poet's style each sample reproduces. In addition, the assessors were
asked how familiar they were with the work of the poet they had named.
The assessors were students of literary history, and 94 answers were
received. It turned out that the accuracy of style identification increases if
the assessor can quote the poet by heart. Each model showed at least 0.7
macro-average accuracy. The experiment showed that it is better to involve a
professional rather than a naive reader in the evaluation of style in poetry
generation tasks, while LSTM models are good at reproducing the style of
Russian poets even on a limited training corpus.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2306.02902
|
2023-06-05T14:09:25Z
|
N-Shot Benchmarking of Whisper on Diverse Arabic Speech Recognition
|
[
"Bashar Talafha",
"Abdul Waheed",
"Muhammad Abdul-Mageed"
] |
Whisper, the recently developed multilingual weakly supervised model, is
reported to perform well on multiple speech recognition benchmarks in both
monolingual and multilingual settings. However, it is not clear how Whisper
would fare under diverse conditions even on languages it was evaluated on such
as Arabic. In this work, we address this gap by comprehensively evaluating
Whisper on several varieties of Arabic speech for the ASR task. Our evaluation
covers most publicly available Arabic speech data and is performed under n-shot
(zero-, few-, and full) finetuning. We also investigate the robustness of
Whisper under completely novel conditions, such as in dialect-accented standard
Arabic and in unseen dialects for which we develop evaluation data. Our
experiments show that although Whisper zero-shot outperforms fully finetuned
XLS-R models on all datasets, its performance deteriorates significantly in the
zero-shot setting for five unseen dialects (i.e., Algeria, Jordan, Palestine,
UAE, and Yemen).
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2306.02543
|
2023-06-05T02:28:19Z
|
Addressing Budget Allocation and Revenue Allocation in Data Market
Environments Using an Adaptive Sampling Algorithm
|
[
"Boxin Zhao",
"Boxiang Lyu",
"Raul Castro Fernandez",
"Mladen Kolar"
] |
High-quality machine learning models are dependent on access to high-quality
training data. When the data are not already available, it is tedious and
costly to obtain them. Data markets help with identifying valuable training
data: model consumers pay to train a model, the market uses that budget to
identify data and train the model (the budget allocation problem), and finally
the market compensates data providers according to their data contribution
(revenue allocation problem). For example, a bank could pay the data market to
access data from other financial institutions to train a fraud detection model.
Compensating data contributors requires understanding data's contribution to
the model; recent efforts to solve this revenue allocation problem based on the
Shapley value are too inefficient to lead to practical data markets.
In this paper, we introduce a new algorithm to solve budget allocation and
revenue allocation problems simultaneously in linear time. The new algorithm
employs an adaptive sampling process that selects data from those providers who
are contributing the most to the model. Better data means that the algorithm
accesses those providers more often, and more frequent accesses correspond to
higher compensation. Furthermore, the algorithm can be deployed in both
centralized and federated scenarios, boosting its applicability. We provide
theoretical guarantees for the algorithm that show the budget is used
efficiently and the properties of revenue allocation are similar to Shapley's.
Finally, we conduct an empirical evaluation to show the performance of the
algorithm in practical scenarios and when compared to other baselines. Overall,
we believe that the new algorithm paves the way for the implementation of
practical data markets.
|
[
"cs.LG"
] | false |
2306.02731
|
2023-06-05T09:27:28Z
|
Enhanced Distribution Modelling via Augmented Architectures For Neural
ODE Flows
|
[
"Etrit Haxholli",
"Marco Lorenzi"
] |
While the neural ODE formulation of normalizing flows such as in FFJORD
enables us to calculate the determinants of free form Jacobians in O(D) time,
the flexibility of the transformation underlying neural ODEs has been shown to
be suboptimal. In this paper, we present AFFJORD, a neural ODE-based
normalizing flow which enhances the representation power of FFJORD by defining
the neural ODE through special augmented transformation dynamics which preserve
the topology of the space. Furthermore, we derive the Jacobian determinant of
the general augmented form by generalizing the chain rule in the continuous
sense into the Cable Rule, which expresses the forward sensitivity of ODEs with
respect to their initial conditions. The Cable Rule gives an explicit
expression for the Jacobian of a neural ODE transformation, and provides an
elegant proof of the instantaneous change of variables formula. Our experimental results
on density estimation in synthetic and high dimensional data, such as MNIST,
CIFAR-10 and CelebA 32x32, show that AFFJORD outperforms the baseline FFJORD
through the improved flexibility of the underlying vector field.
|
[
"cs.LG"
] | false |
2306.02807
|
2023-06-05T11:58:25Z
|
On Tail Decay Rate Estimation of Loss Function Distributions
|
[
"Etrit Haxholli",
"Marco Lorenzi"
] |
The study of loss function distributions is critical to characterize a
model's behaviour on a given machine learning problem. For example, while the
quality of a model is commonly determined by the average loss assessed on a
testing set, this quantity does not reflect the existence of the true mean of
the loss distribution. Indeed, the finiteness of the statistical moments of the
loss distribution is related to the thickness of its tails, which are generally
unknown. Since typical cross-validation schemes determine a family of testing
loss distributions conditioned on the training samples, the total loss
distribution must be recovered by marginalizing over the space of training
sets. As we show in this work, the finiteness of the sampling procedure
negatively affects the reliability and efficiency of classical tail estimation
methods from the Extreme Value Theory, such as the Peaks-Over-Threshold
approach. In this work we tackle this issue by developing a novel general
theory for estimating the tails of marginal distributions, when there exists a
large variability between locations of the individual conditional distributions
underlying the marginal. To this end, we demonstrate that under some regularity
conditions, the shape parameter of the marginal distribution is the maximum
tail shape parameter of the family of conditional distributions. We term this
estimation approach as Cross Tail Estimation (CTE). We test cross-tail
estimation in a series of experiments on simulated and real data, showing the
improved robustness and quality of tail estimation as compared to classical
approaches, and providing evidence for the relationship between overfitting and
loss distribution tail thickness.
|
[
"cs.LG"
] | false |
2306.02824
|
2023-06-05T12:21:42Z
|
COMET: Learning Cardinality Constrained Mixture of Experts with Trees
and Local Search
|
[
"Shibal Ibrahim",
"Wenyu Chen",
"Hussein Hazimeh",
"Natalia Ponomareva",
"Zhe Zhao",
"Rahul Mazumder"
] |
The sparse Mixture-of-Experts (Sparse-MoE) framework efficiently scales up
model capacity in various domains, such as natural language processing and
vision. Sparse-MoEs select a subset of the "experts" (thus, only a portion of
the overall network) for each input sample using a sparse, trainable gate.
Existing sparse gates are prone to convergence and performance issues when
training with first-order optimization methods. In this paper, we introduce two
improvements to current MoE approaches. First, we propose a new sparse gate:
COMET, which relies on a novel tree-based mechanism. COMET is differentiable,
can exploit sparsity to speed up computation, and outperforms state-of-the-art
gates. Second, due to the challenging combinatorial nature of sparse expert
selection, first-order methods are typically prone to low-quality solutions. To
deal with this challenge, we propose a novel, permutation-based local search
method that can complement first-order methods in training any sparse gate,
e.g., Hash routing, Top-k, DSelect-k, and COMET. We show that local search can
help networks escape bad initializations or solutions. We performed large-scale
experiments on various domains, including recommender systems, vision, and
natural language processing. On standard vision and recommender systems
benchmarks, COMET+ (COMET with local search) achieves up to 13% improvement in
ROC AUC over popular gates, e.g., Hash routing and Top-k, and up to 9% over
prior differentiable gates e.g., DSelect-k. When Top-k and Hash gates are
combined with local search, we see up to $100\times$ reduction in the budget
needed for hyperparameter tuning. Moreover, for language modeling, our approach
improves over the state-of-the-art MoEBERT model for distilling BERT on 5/7
GLUE benchmarks as well as the SQuAD dataset.
|
[
"cs.LG"
] | false |
2306.02859
|
2023-06-05T13:24:03Z
|
Local Boosting for Weakly-Supervised Learning
|
[
"Rongzhi Zhang",
"Yue Yu",
"Jiaming Shen",
"Xiquan Cui",
"Chao Zhang"
] |
Boosting is a commonly used technique to enhance the performance of a set of
base models by combining them into a strong ensemble model. Though widely
adopted, boosting is typically used in supervised learning where the data is
labeled accurately. However, in weakly supervised learning, where most of the
data is labeled through weak and noisy sources, it remains nontrivial to design
effective boosting approaches. In this work, we show that the standard
implementation of the convex combination of base learners can hardly work due
to the presence of noisy labels. Instead, we propose $\textit{LocalBoost}$, a
novel framework for weakly-supervised boosting. LocalBoost iteratively boosts
the ensemble model from two dimensions, i.e., intra-source and inter-source.
The intra-source boosting introduces locality to the base learners and enables
each base learner to focus on a particular feature regime by training new base
learners on granularity-varying error regions. For the inter-source boosting,
we leverage a conditional function to indicate the weak source where the sample
is more likely to appear. To account for the weak labels, we further design an
estimate-then-modify approach to compute the model weights. Experiments on
seven datasets show that our method significantly outperforms vanilla boosting
methods and other weakly-supervised methods.
|
[
"cs.LG"
] | false |
2306.03052
|
2023-06-05T17:23:26Z
|
Forecasting Crude Oil Prices Using Reservoir Computing Models
|
[
"Kaushal Kumar"
] |
Accurate crude oil price prediction is crucial for financial decision-making.
We propose a novel reservoir computing model for forecasting crude oil prices.
It outperforms popular deep learning methods in most scenarios, as demonstrated
through rigorous evaluation using daily closing price data from major stock
market indices. Our model's competitive advantage is further validated by
comparing it with recent deep-learning approaches. This study introduces
innovative reservoir computing models for predicting crude oil prices, with
practical implications for financial practitioners. By leveraging advanced
techniques, market participants can enhance decision-making and gain valuable
insights into crude oil market dynamics.
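As a hedged illustration of the reservoir-computing family the abstract refers to, the sketch below implements a minimal echo state network in Python: a fixed random recurrent reservoir driven by the input series, with only a ridge-regression readout trained. The synthetic series, hyperparameters, and one-step-ahead setup are illustrative assumptions, not the paper's actual model or data.

```python
import numpy as np

def fit_esn(series, n_reservoir=200, spectral_radius=0.9, ridge=1e-6, seed=0):
    """Minimal echo state network for one-step-ahead forecasting.

    Only the linear readout (ridge regression) is trained; the recurrent
    reservoir weights are random and fixed, which is the defining trait of
    reservoir computing.
    """
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, size=n_reservoir)
    W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

    # Drive the reservoir with the series and collect its states.
    x = np.zeros(n_reservoir)
    states = []
    for u in series[:-1]:
        x = np.tanh(W_in * u + W @ x)
        states.append(x.copy())
    S = np.array(states)                 # (T-1, n_reservoir)
    targets = series[1:]                 # one-step-ahead targets

    # Ridge-regression readout.
    W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_reservoir), S.T @ targets)
    return W_in, W, W_out

# Toy usage on a synthetic "price-like" series (trend + seasonality + noise).
rng = np.random.default_rng(1)
t = np.arange(1200)
series = 50 + 0.01 * t + 5 * np.sin(2 * np.pi * t / 60) + rng.normal(scale=0.5, size=t.size)
series = (series - series.mean()) / series.std()
W_in, W, W_out = fit_esn(series[:1000])

# One-step-ahead predictions; the last part of the series was never used for training.
x = np.zeros(W.shape[0])
preds = []
for u in series[:-1]:
    x = np.tanh(W_in * u + W @ x)
    preds.append(x @ W_out)
rmse = np.sqrt(np.mean((np.array(preds[1000:]) - series[1001:]) ** 2))
print("held-out one-step RMSE (standardised units):", round(float(rmse), 4))
```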
|
[
"cs.LG"
] | false |
2306.03074
|
2023-06-05T17:50:29Z
|
A General Perspective on Objectives of Reinforcement Learning
|
[
"Long Yang"
] |
In this lecture, we present a general perspective on reinforcement learning
(RL) objectives, where we show three versions of objectives. The first version
is the standard definition of objective in RL literature. Then we extend the
standard definition to the $\lambda$-return version, which unifies the standard
definition of objective. Finally, we propose a general objective that unifies
the previous two versions. The last version provides a high-level understanding
of RL's objective: it shows a fundamental formulation that connects some
widely used RL techniques (e.g., TD$(\lambda)$ and GAE), and this objective can
potentially be applied to a wide range of RL algorithms.
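Since the abstract pivots on the $\lambda$-return, here is a small, self-contained sketch of how $\lambda$-returns are computed for one episode via the standard backward recursion that underlies TD$(\lambda)$ and GAE-style targets; the lecture's own general objective is not reproduced here, and the toy numbers are arbitrary.

```python
import numpy as np

def lambda_returns(rewards, values, gamma=0.99, lam=0.95):
    """Backward recursion for lambda-returns over one episode:
        G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1}),
    which interpolates between the one-step TD target (lam = 0) and the
    discounted Monte-Carlo return (lam = 1).

    `values` has length len(rewards) + 1, with values[t] = V(s_t) and
    values[-1] the bootstrap (or terminal, i.e. zero) value of the last state.
    """
    T = len(rewards)
    G = np.zeros(T)
    next_return = values[-1]
    for t in reversed(range(T)):
        G[t] = rewards[t] + gamma * ((1 - lam) * values[t + 1] + lam * next_return)
        next_return = G[t]
    return G

rewards = np.array([1.0, 0.0, 0.0, 1.0])
values = np.array([0.5, 0.4, 0.3, 0.2, 0.0])
print(lambda_returns(rewards, values, lam=0.0))   # one-step TD targets
print(lambda_returns(rewards, values, lam=1.0))   # full discounted returns
```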
|
[
"cs.LG"
] | false |
2306.03209
|
2023-06-05T19:34:36Z
|
End-to-end Differentiable Clustering with Associative Memories
|
[
"Bishwajit Saha",
"Dmitry Krotov",
"Mohammed J. Zaki",
"Parikshit Ram"
] |
Clustering is a widely used unsupervised learning technique involving an
intensive discrete optimization problem. Associative Memory models or AMs are
differentiable neural networks defining a recursive dynamical system, which
have been integrated with various deep learning architectures. We uncover a
novel connection between the AM dynamics and the inherent discrete assignment
necessary in clustering to propose a novel unconstrained continuous relaxation
of the discrete clustering problem, enabling end-to-end differentiable
clustering with AM, dubbed ClAM. Leveraging the pattern completion ability of
AMs, we further develop a novel self-supervised clustering loss. Our
evaluations on varied datasets demonstrate that ClAM benefits from the
self-supervision, and significantly improves upon both the traditional Lloyd's
k-means algorithm, and more recent continuous clustering relaxations (by up to
60% in terms of the Silhouette Coefficient).
|
[
"cs.LG"
] | false |
2306.03240
|
2023-06-05T20:50:36Z
|
Improving Accelerated Federated Learning with Compression and Importance
Sampling
|
[
"Michał Grudzień",
"Grigory Malinovsky",
"Peter Richtárik"
] |
Federated Learning is a collaborative training framework that leverages
heterogeneous data distributed across a vast number of clients. Since it is
practically infeasible to request and process all clients during the
aggregation step, partial participation must be supported. In this setting, the
communication between the server and clients poses a major bottleneck. To
reduce communication loads, there are two main approaches: compression and
local steps. Recent work by Mishchenko et al. [2022] introduced the new
ProxSkip method, which achieves an accelerated rate using the local steps
technique. Follow-up works successfully combined local steps acceleration with
partial participation [Grudzień et al., 2023; Condat et al., 2023] and
gradient compression [Condat et al., 2022]. In this paper, we finally present a
complete method for Federated Learning that incorporates all necessary
ingredients: Local Training, Compression, and Partial Participation. We obtain
state-of-the-art convergence guarantees in the considered setting. Moreover, we
analyze the general sampling framework for partial participation and derive an
importance sampling scheme, which leads to even better performance. We
experimentally demonstrate the advantages of the proposed method in practice.
|
[
"cs.LG"
] | false |
2306.03782
|
2023-06-05T02:24:59Z
|
Non-parametric Probabilistic Time Series Forecasting via Innovations
Representation
|
[
"Xinyi Wang",
"Meijen Lee",
"Qing Zhao",
"Lang Tong"
] |
Probabilistic time series forecasting predicts the conditional probability
distributions of the time series at a future time given past realizations. Such
techniques are critical in risk-based decision-making and planning under
uncertainties. Existing approaches are primarily based on parametric or
semi-parametric time-series models that are restrictive, difficult to validate,
and challenging to adapt to varying conditions. This paper proposes a
nonparametric method based on the classic notion of {\em innovations} pioneered
by Norbert Wiener and Gopinath Kallianpur that causally transforms a
nonparametric random process into an independent and identically distributed
uniform {\em innovations process}. We present a machine-learning
architecture and a learning algorithm that circumvent two limitations of the
original Wiener-Kallianpur innovations representation: (i) the need for known
probability distributions of the time series and (ii) the existence of a causal
decoder that reproduces the original time series from the innovations
representation. We develop a deep-learning approach and a Monte Carlo sampling
technique to obtain a generative model for the predicted conditional
probability distribution of the time series based on a weak notion of
Wiener-Kallianpur innovations representation. The efficacy of the proposed
probabilistic forecasting technique is demonstrated on a variety of electricity
price datasets, showing marked improvement over leading benchmarks of
probabilistic forecasting techniques.
|
[
"cs.LG"
] | false |
2306.02508
|
2023-06-05T00:01:17Z
|
Graph Fourier MMD for Signals on Graphs
|
[
"Samuel Leone",
"Aarthi Venkat",
"Guillaume Huguet",
"Alexander Tong",
"Guy Wolf",
"Smita Krishnaswamy"
] |
While numerous methods have been proposed for computing distances between
probability distributions in Euclidean space, relatively little attention has
been given to computing such distances for distributions on graphs. However,
there has been a marked increase in data that either lies on graph (such as
protein interaction networks) or can be modeled as a graph (single cell data),
particularly in the biomedical sciences. Thus, it becomes important to find
ways to compare signals defined on such graphs. Here, we propose Graph Fourier
MMD (GFMMD), a novel distance between distributions and signals on graphs.
GFMMD is defined via an optimal witness function that is both smooth on the
graph and maximizes difference in expectation between the pair of distributions
on the graph. We find an analytical solution to this optimization problem as
well as an embedding of distributions that results from this method. We also
prove several properties of this method including scale invariance and
applicability to disconnected graphs. We showcase it on graph benchmark
datasets as well as on single-cell RNA-sequencing data analysis. In the latter, we
use the GFMMD-based gene embeddings to find meaningful gene clusters. We also
propose a novel type of score for gene selection called "gene localization
score" which helps select genes for cellular state space characterization.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.02516
|
2023-06-05T00:43:37Z
|
SamToNe: Improving Contrastive Loss for Dual Encoder Retrieval Models
with Same Tower Negatives
|
[
"Fedor Moiseev",
"Gustavo Hernandez Abrego",
"Peter Dornbach",
"Imed Zitouni",
"Enrique Alfonseca",
"Zhe Dong"
] |
Dual encoders have been used for retrieval tasks and representation learning
with good results. A standard way to train dual encoders is using a contrastive
loss with in-batch negatives. In this work, we propose an improved contrastive
learning objective by adding queries or documents from the same encoder towers
to the negatives, which we name "contrastive loss with SAMe TOwer
NEgatives" (SamToNe). By evaluating on question answering retrieval benchmarks
from MS MARCO and MultiReQA, and heterogeneous zero-shot information retrieval
benchmarks (BEIR), we demonstrate that SamToNe can effectively improve the
retrieval quality for both symmetric and asymmetric dual encoders. By directly
probing the embedding spaces of the two encoding towers via the t-SNE algorithm
(van der Maaten and Hinton, 2008), we observe that SamToNe ensures the
alignment between the embedding spaces from the two encoder towers. Based on
the analysis of the embedding distance distributions of the top-$1$ retrieved
results, we further explain the efficacy of the method from the perspective of
regularisation.
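A minimal numpy sketch of the idea described above: a softmax contrastive loss whose negatives are the usual in-batch documents plus embeddings from the same (query) tower. The dot-product similarity, temperature, and query-side-only form are illustrative assumptions; the exact SamToNe formulation is not given in the abstract.

```python
import numpy as np

def samtone_like_loss(q, d, tau=0.05):
    """Contrastive loss with in-batch negatives plus same-tower negatives.

    q, d: (B, D) L2-normalised query / document embeddings, where
    (q[i], d[i]) is the positive pair.  For query i the negatives are all
    d[j], j != i (standard in-batch negatives) plus all q[j], j != i
    (same-tower negatives).
    """
    B = q.shape[0]
    sim_qd = q @ d.T / tau                    # query-document similarities
    sim_qq = q @ q.T / tau                    # query-query (same tower)
    np.fill_diagonal(sim_qq, -np.inf)         # a query is not its own negative
    logits = np.concatenate([sim_qd, sim_qq], axis=1)      # (B, 2B)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(B), np.arange(B)].mean()   # positive = column i

# Toy usage with random unit-norm embeddings.
rng = np.random.default_rng(0)
q = rng.normal(size=(8, 16)); q /= np.linalg.norm(q, axis=1, keepdims=True)
d = rng.normal(size=(8, 16)); d /= np.linalg.norm(d, axis=1, keepdims=True)
print(samtone_like_loss(q, d))
```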
|
[
"cs.LG",
"cs.IR"
] | false |
2306.02527
|
2023-06-05T01:23:49Z
|
Searching for Optimal Per-Coordinate Step-sizes with Multidimensional
Backtracking
|
[
"Frederik Kunstner",
"Victor S. Portella",
"Mark Schmidt",
"Nick Harvey"
] |
The backtracking line-search is an effective technique to automatically tune
the step-size in smooth optimization. It guarantees similar performance to
using the theoretically optimal step-size. Many approaches have been developed
to instead tune per-coordinate step-sizes, also known as diagonal
preconditioners, but none of the existing methods are provably competitive with
the optimal per-coordinate stepsizes. We propose multidimensional backtracking,
an extension of the backtracking line-search to find good diagonal
preconditioners for smooth convex problems. Our key insight is that the
gradient with respect to the step-sizes, also known as hypergradients, yields
separating hyperplanes that let us search for good preconditioners using
cutting-plane methods. As black-box cutting-plane approaches like the ellipsoid
method are computationally prohibitive, we develop an efficient algorithm
tailored to our setting. Multidimensional backtracking is provably competitive
with the best diagonal preconditioner and requires no manual tuning.
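For reference, below is the standard scalar backtracking (Armijo) line-search that the paper generalises to per-coordinate step-sizes; the multidimensional, cutting-plane version itself is not reproduced here, and the hyperparameters and toy quadratic are illustrative.

```python
import numpy as np

def backtracking_step(f, grad_f, x, eta0=1.0, beta=0.5, c=0.5):
    """One gradient step with scalar backtracking (Armijo) line-search:
    halve the step-size until the sufficient-decrease condition
        f(x - eta * g) <= f(x) - c * eta * ||g||^2
    holds."""
    g = grad_f(x)
    eta = eta0
    while f(x - eta * g) > f(x) - c * eta * g @ g:
        eta *= beta
    return x - eta * g, eta

# Toy usage on a badly scaled quadratic, where a per-coordinate step-size
# (the object the paper searches for) would help far more than a scalar one.
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad_f = lambda x: A @ x
x = np.array([1.0, 1.0])
for _ in range(20):
    x, eta = backtracking_step(f, grad_f, x)
print(x, eta)
```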
|
[
"math.OC",
"cs.LG"
] | false |
2306.02533
|
2023-06-05T01:45:22Z
|
On Emergence of Clean-Priority Learning in Early Stopped Neural Networks
|
[
"Chaoyue Liu",
"Amirhesam Abedsoltan",
"Mikhail Belkin"
] |
When random label noise is added to a training dataset, the prediction error
of a neural network on a label-noise-free test dataset initially improves
during early training but eventually deteriorates, following a U-shaped
dependence on training time. This behaviour is believed to be a result of
neural networks learning the pattern of clean data first and fitting the noise
later in the training, a phenomenon that we refer to as clean-priority
learning. In this study, we aim to explore the learning dynamics underlying
this phenomenon. We theoretically demonstrate that, in the early stage of
training, the update direction of gradient descent is determined by the clean
subset of training data, while the noisy subset has minimal to no impact,
resulting in a prioritization of learning on the clean data. Moreover, we show,
both theoretically and experimentally, that as clean-priority learning goes on,
the dominance of the gradients of clean samples over those of noisy samples
diminishes, which eventually results in the termination of clean-priority
learning and the fitting of the noisy samples.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.02556
|
2023-06-05T03:08:29Z
|
Improved Active Multi-Task Representation Learning via Lasso
|
[
"Yiping Wang",
"Yifang Chen",
"Kevin Jamieson",
"Simon S. Du"
] |
To leverage the copious amount of data from source tasks and overcome the
scarcity of the target task samples, representation learning based on
multi-task pretraining has become a standard approach in many applications.
However, up until now, most existing works design a source task selection
strategy from a purely empirical perspective. Recently, \citet{chen2022active}
gave the first active multi-task representation learning (A-MTRL) algorithm
which adaptively samples from source tasks and can provably reduce the total
sample complexity using the L2-regularized-target-source-relevance parameter
$\nu^2$. But their work is theoretically suboptimal in terms of total source
sample complexity and is less practical in some real-world scenarios where
sparse training source task selection is desired. In this paper, we address
both issues. Specifically, we show the strict dominance of the
L1-regularized-relevance-based ($\nu^1$-based) strategy by giving a lower bound
for the $\nu^2$-based strategy. When $\nu^1$ is unknown, we propose a practical
algorithm that uses the LASSO program to estimate $\nu^1$. Our algorithm
successfully recovers the optimal result in the known case. In addition to our
sample complexity results, we also characterize the potential of our
$\nu^1$-based strategy in sample-cost-sensitive settings. Finally, we provide
experiments on real-world computer vision datasets to illustrate the
effectiveness of our proposed method.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.02565
|
2023-06-05T03:36:31Z
|
Coupled Variational Autoencoder
|
[
"Xiaoran Hao",
"Patrick Shafto"
] |
Variational auto-encoders are powerful probabilistic models for generative
tasks but suffer from generating low-quality samples, which is caused by
holes in the prior. We propose the Coupled Variational Auto-Encoder (C-VAE),
which formulates the VAE problem as one of Optimal Transport (OT) between the
prior and data distributions. The C-VAE allows greater flexibility in priors
and natural resolution of the prior hole problem by enforcing coupling between
the prior and the data distribution and enables flexible optimization through
the primal, dual, and semi-dual formulations of entropic OT. Simulations on
synthetic and real data show that the C-VAE outperforms alternatives including
VAE, WAE, and InfoVAE in fidelity to the data, quality of the latent
representation, and in quality of generated samples.
|
[
"stat.ML",
"cs.LG"
] | false |
2306.02568
|
2023-06-05T03:47:59Z
|
Latent Optimal Paths by Gumbel Propagation for Variational Bayesian
Dynamic Programming
|
[
"Xinlei Niu",
"Christian Walder",
"Jing Zhang",
"Charles Patrick Martin"
] |
We propose a unified approach to obtain structured sparse optimal paths in
the latent space of a variational autoencoder (VAE) using dynamic programming
and Gumbel propagation. We solve the classical optimal path problem by a
probability softening solution, called the stochastic optimal path, and
transform a wide range of DP problems into directed acyclic graphs in which all
possible paths follow a Gibbs distribution. We show the equivalence of the
Gibbs distribution to a message-passing algorithm by the properties of the
Gumbel distribution and give all the ingredients required for variational
Bayesian inference. Our approach to obtaining latent optimal paths enables
end-to-end training for generative tasks in which models rely on the
information of unobserved structural features. We validate the behavior of our
approach and showcase its applicability in two real-world applications:
text-to-speech and singing voice synthesis.
|
[
"stat.ML",
"cs.LG"
] | false |
2306.02570
|
2023-06-05T03:51:14Z
|
When Decentralized Optimization Meets Federated Learning
|
[
"Hongchang Gao",
"My T. Thai",
"Jie Wu"
] |
Federated learning is a new learning paradigm for extracting knowledge from
distributed data. Due to its favorable properties in preserving privacy and
saving communication costs, it has been extensively studied and widely applied
to numerous data analysis applications. However, most existing federated
learning approaches concentrate on the centralized setting, which is vulnerable
to a single-point failure. An alternative strategy for addressing this issue is
the decentralized communication topology. In this article, we systematically
investigate the challenges and opportunities when renovating decentralized
optimization for federated learning. In particular, we discuss them from the
model, data, and communication sides, respectively, which can deepen our
understanding of decentralized federated learning.
|
[
"cs.LG",
"math.OC"
] | false |
2306.02587
|
2023-06-05T04:28:04Z
|
Jammer classification with Federated Learning
|
[
"Peng Wu",
"Helena Calatrava",
"Tales Imbiriba",
"Pau Closas"
] |
Jamming signals can jeopardize the operation of GNSS receivers, up to
completely denying it. Given their ubiquity, jamming mitigation and localization
techniques are of crucial importance, for which jammer classification is of
help. Data-driven models have been proven useful in detecting these threats,
while their training using crowdsourced data still poses challenges when it
comes to private data sharing. This article investigates the use of federated
learning to train jamming signal classifiers locally on each device, with model
updates aggregated and averaged at the central server. This allows for
privacy-preserving training procedures that do not require centralized data
storage or access to client local data. The FedAvg framework is assessed on a
dataset consisting of spectrogram images of simulated jammed GNSS signals. Six
different jammer types are effectively classified, with results comparable to a
fully centralized solution, which would require vast amounts of data
communication and raise privacy concerns.
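A minimal sketch of the FedAvg loop assessed above: each client runs a few epochs of local training and the server averages the resulting parameters weighted by local dataset size, so raw data never leaves a client. Logistic regression and the synthetic clients stand in for the paper's spectrogram classifier and GNSS data.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few epochs of local logistic-regression gradient steps on one client's data."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fedavg(clients, d, rounds=20):
    """Minimal FedAvg: clients train locally, the server averages the
    parameters weighted by local dataset size (no raw data leaves a client)."""
    w_global = np.zeros(d)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_sgd(w_global.copy(), X, y))
            sizes.append(len(y))
        w_global = np.average(updates, axis=0, weights=np.array(sizes, dtype=float))
    return w_global

# Toy usage: three clients with differently shifted data.
rng = np.random.default_rng(0)
w_true = rng.normal(size=8)
clients = []
for shift in (-1.0, 0.0, 1.0):
    X = rng.normal(loc=shift, size=(200, 8))
    y = (X @ w_true > 0).astype(float)
    clients.append((X, y))
w = fedavg(clients, d=8)
print("per-client accuracy:", [float(((X @ w > 0) == y).mean()) for X, y in clients])
```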
|
[
"cs.LG",
"cs.CR"
] | false |
2306.02595
|
2023-06-05T04:58:41Z
|
Explore and Exploit the Diverse Knowledge in Model Zoo for Domain
Generalization
|
[
"Yimeng Chen",
"Tianyang Hu",
"Fengwei Zhou",
"Zhenguo Li",
"Zhiming Ma"
] |
The proliferation of pretrained models, as a result of advancements in
pretraining techniques, has led to the emergence of a vast zoo of publicly
available models. Effectively utilizing these resources to obtain models with
robust out-of-distribution generalization capabilities for downstream tasks has
become a crucial area of research. Previous research has primarily focused on
identifying the most powerful models within the model zoo, neglecting to fully
leverage the diverse inductive biases contained within. This paper argues that
the knowledge contained in weaker models is valuable and presents a method for
leveraging the diversity within the model zoo to improve out-of-distribution
generalization capabilities. Specifically, we investigate the behaviors of
various pretrained models across different domains of downstream tasks by
characterizing the variations in their encoded representations in terms of two
dimensions: diversity shift and correlation shift. This characterization
enables us to propose a new algorithm for integrating diverse pretrained
models, not limited to the strongest models, in order to achieve enhanced
out-of-distribution generalization performance. Our proposed method
demonstrates state-of-the-art empirical results on a variety of datasets, thus
validating the benefits of utilizing diverse knowledge.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.02628
|
2023-06-05T06:55:39Z
|
Active Ranking of Experts Based on their Performances in Many Tasks
|
[
"El Mehdi Saad",
"Nicolas Verzelen",
"Alexandra Carpentier"
] |
We consider the problem of ranking n experts based on their performances on d
tasks. We make a monotonicity assumption stating that for each pair of experts,
one outperforms the other on all tasks. We consider the sequential setting
where in each round, the learner has access to a noisy evaluation of an
actively chosen expert-task pair, given the information available up to the
actual round. Given a confidence parameter $\delta \in (0, 1)$, we provide
strategies that allow recovering the correct ranking of experts, and develop a
bound on the total number of queries made by our algorithm that holds with
probability at least $1 - \delta$. We show that our strategy is adaptive to
the complexity of the problem (our bounds are instance dependent), and develop
matching lower bounds up to a poly-logarithmic factor. Finally, we adapt our
strategy to the relaxed problem of best expert identification and provide
numerical simulation consistent with our theoretical results.
|
[
"stat.ML",
"cs.LG"
] | false |
2306.02639
|
2023-06-05T07:15:54Z
|
Evaluating robustness of support vector machines with the Lagrangian
dual approach
|
[
"Yuting Liu",
"Hong Gu",
"Pan Qin"
] |
Adversarial examples bring a considerable security threat to support vector
machines (SVMs), especially those used in safety-critical applications. Thus,
robustness verification is an essential issue for SVMs, which can provide
provable robustness against various kinds of adversarial attacks. The evaluation
results obtained through robustness verification can provide a safety
guarantee for the use of SVMs. However, existing verification methods often do
not perform well in verifying SVMs with nonlinear kernels. To this end, we propose
a method to improve the verification performance for SVMs with nonlinear
kernels. We first formalize the adversarial robustness evaluation of SVMs as an
optimization problem. Then a lower bound of the original problem is obtained by
solving the Lagrangian dual problem of the original problem. Finally, the
adversarial robustness of SVMs is evaluated concerning the lower bound. We
evaluate the adversarial robustness of SVMs with linear and nonlinear kernels
on the MNIST and Fashion-MNIST datasets. The experimental results show that the
percentage of provable robustness obtained by our method on the test set is
better than that of the state-of-the-art.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.02658
|
2023-06-05T07:47:30Z
|
Faster Training of Diffusion Models and Improved Density Estimation via
Parallel Score Matching
|
[
"Etrit Haxholli",
"Marco Lorenzi"
] |
In Diffusion Probabilistic Models (DPMs), the task of modeling the score
evolution via a single time-dependent neural network necessitates extended
training periods and may potentially impede modeling flexibility and capacity.
To counteract these challenges, we propose leveraging the independence of
learning tasks at different time points inherent to DPMs. More specifically, we
partition the learning task by utilizing independent networks, each dedicated
to learning the evolution of scores within a specific time sub-interval.
Further, inspired by residual flows, we extend this strategy to its logical
conclusion by employing separate networks to independently model the score at
each individual time point. As empirically demonstrated on synthetic and image
datasets, our approach not only significantly accelerates the training process
by introducing an additional layer of parallelization atop data
parallelization, but it also enhances density estimation performance when
compared to the conventional training methodology for DPMs.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.02677
|
2023-06-05T08:11:44Z
|
A Privacy-Preserving Federated Learning Approach for Kernel methods
|
[
"Anika Hannemann",
"Ali Burak Ünal",
"Arjhun Swaminathan",
"Erik Buchmann",
"Mete Akgün"
] |
It is challenging to implement kernel methods if the data sources are
distributed and cannot be joined at a trusted third party for privacy reasons.
It is even more challenging, if the use case rules out privacy-preserving
approaches that introduce noise. An example for such a use case is machine
learning on clinical data. To realize exact privacy preserving computation of
kernel methods, we propose FLAKE, a Federated Learning Approach for KErnel
methods on horizontally distributed data. With FLAKE, the data sources mask
their data so that a centralized instance can compute a Gram matrix without
compromising privacy. The Gram matrix makes it possible to calculate many kernel matrices,
which can be used to train kernel-based machine learning algorithms such as
Support Vector Machines. We prove that FLAKE prevents an adversary from
learning the input data or the number of input features under a semi-honest
threat model. Experiments on clinical and synthetic data confirm that FLAKE
outperforms comparable methods in accuracy and efficiency. The time
needed to mask the data and to compute the Gram matrix is several orders of
magnitude less than the time a Support Vector Machine needs to be trained.
Thus, FLAKE can be applied to many use cases.
|
[
"cs.LG",
"cs.CR",
"I.2; I.2; K.6.5; E.3"
] | false |
2306.02701
|
2023-06-05T08:45:44Z
|
Unlocking the Potential of Federated Learning for Deeper Models
|
[
"Haolin Wang",
"Xuefeng Liu",
"Jianwei Niu",
"Shaojie Tang",
"Jiaxing Shen"
] |
Federated learning (FL) is a new paradigm for distributed machine learning
that allows a global model to be trained across multiple clients without
compromising their privacy. Although FL has demonstrated remarkable success in
various scenarios, recent studies mainly utilize shallow and small neural
networks. In our research, we discover a significant performance decline when
applying the existing FL framework to deeper neural networks, even when client
data are independently and identically distributed (i.i.d.). Our further
investigation shows that the decline is due to the continuous accumulation of
dissimilarities among client models during the layer-by-layer back-propagation
process, which we refer to as "divergence accumulation." As deeper models
involve a longer chain of divergence accumulation, they tend to manifest
greater divergence, subsequently leading to performance decline. Both
theoretical derivations and empirical evidence are proposed to support the
existence of divergence accumulation and its amplified effects in deeper
models. To address this issue, we propose several technical guidelines based on
reducing divergence, such as using wider models and reducing the receptive
field. These approaches can greatly improve the accuracy of FL on deeper
models. For example, the application of these guidelines can boost the
ResNet101 model's performance by as much as 43\% on the Tiny-ImageNet dataset.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.02704
|
2023-06-05T08:55:50Z
|
Calibrated Stackelberg Games: Learning Optimal Commitments Against
Calibrated Agents
|
[
"Nika Haghtalab",
"Chara Podimata",
"Kunhe Yang"
] |
In this paper, we introduce a generalization of the standard Stackelberg
Games (SGs) framework: Calibrated Stackelberg Games (CSGs). In CSGs, a
principal repeatedly interacts with an agent who (contrary to standard SGs)
does not have direct access to the principal's action but instead best-responds
to calibrated forecasts about it. CSG is a powerful modeling tool that goes
beyond assuming that agents use ad hoc and highly specified algorithms for
interacting in strategic settings and thus more robustly addresses real-life
applications that SGs were originally intended to capture. Along with CSGs, we
also introduce a stronger notion of calibration, termed adaptive calibration,
that provides fine-grained any-time calibration guarantees against adversarial
sequences. We give a general approach for obtaining adaptive calibration
algorithms and specialize them for finite CSGs. In our main technical result,
we show that in CSGs, the principal can achieve utility that converges to the
optimum Stackelberg value of the game both in finite and continuous settings,
and that no higher utility is achievable. Two prominent and immediate
applications of our results are the settings of learning in Stackelberg
Security Games and strategic classification, both against calibrated agents.
|
[
"cs.GT",
"cs.LG"
] | false |
2306.02732
|
2023-06-05T09:28:03Z
|
Conformal Prediction with Missing Values
|
[
"Margaux Zaffran",
"Aymeric Dieuleveut",
"Julie Josse",
"Yaniv Romano"
] |
Conformal prediction is a theoretically grounded framework for constructing
predictive intervals. We study conformal prediction with missing values in the
covariates -- a setting that brings new challenges to uncertainty
quantification. We first show that the marginal coverage guarantee of conformal
prediction holds on imputed data for any missingness distribution and almost
all imputation functions. However, we emphasize that the average coverage
varies depending on the pattern of missing values: conformal methods tend to
construct prediction intervals that under-cover the response conditionally to
some missing patterns. This motivates our novel generalized conformalized
quantile regression framework, missing data augmentation, which yields
prediction intervals that are valid conditionally to the patterns of missing
values, despite their exponential number. We then show that a universally
consistent quantile regression algorithm trained on the imputed data is Bayes
optimal for the pinball risk, thus achieving valid coverage conditionally to
any given data point. Moreover, we examine the case of a linear model, which
demonstrates the importance of our proposal in overcoming the
heteroskedasticity induced by missing values. Using synthetic data and data
from critical care, we corroborate our theory and report the improved
performance of our methods.
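A minimal impute-then-predict split-conformal sketch in the spirit of the first result above (marginal coverage on imputed data); the missing-data-augmentation method that fixes pattern-conditional coverage is not reproduced. The synthetic data, mean imputation, and random-forest regressor are illustrative choices.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic regression data with values missing completely at random.
n, d = 3000, 5
X = rng.normal(size=(n, d))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.0]) + rng.normal(size=n)
X[rng.random((n, d)) < 0.2] = np.nan

train, cal, test = np.split(rng.permutation(n), [1500, 2250])

# Impute, fit a regressor, and calibrate a symmetric interval width on the
# absolute residuals of the calibration set (standard split conformal).
imputer = SimpleImputer(strategy="mean").fit(X[train])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(
    imputer.transform(X[train]), y[train])

alpha = 0.1
cal_scores = np.abs(y[cal] - model.predict(imputer.transform(X[cal])))
q = np.quantile(cal_scores, np.ceil((1 - alpha) * (len(cal) + 1)) / len(cal))

pred = model.predict(imputer.transform(X[test]))
covered = (y[test] >= pred - q) & (y[test] <= pred + q)
print("empirical marginal coverage:", covered.mean())
```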
|
[
"stat.ML",
"cs.LG"
] | false |
2306.02798
|
2023-06-05T11:51:04Z
|
Enhancing naive classifier for positive unlabeled data based on logistic
regression approach
|
[
"Mateusz Płatek",
"Jan Mielniczuk"
] |
We argue that for the analysis of Positive Unlabeled (PU) data under the
Selected Completely At Random (SCAR) assumption, it is fruitful to view the
problem as the fitting of a misspecified model to the data. Namely, we show
that results on misspecified fit imply that, when the posterior probability of
the response is modelled by logistic regression, fitting the logistic
regression to the observable PU data, which {\it does not} follow this model,
still yields a vector of estimated parameters that is approximately collinear
with the true parameter vector. This observation, together with choosing the
intercept of the classifier by optimising an analogue of the F1 measure, yields
a classifier which performs on par with or better than its competitors on the
several real data sets considered.
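A small synthetic sketch of the two-step recipe described above, assuming SCAR: fit ordinary logistic regression to the PU labels (its coefficients come out roughly collinear with the true ones) and then pick the decision threshold by maximising a PU-computable analogue of F1. The particular analogue used here (recall squared over the predicted-positive rate) is an assumption; the paper's exact criterion may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic PU data under SCAR: true labels y, but only s is observed,
# where a random fraction c of the positives is labeled.
n, d, c = 5000, 5, 0.3
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(int)
s = y * (rng.random(n) < c)                      # observed PU label

# Step 1: naive fit of logistic regression to the PU labels s.
clf = LogisticRegression().fit(X, s)
scores = clf.decision_function(X)

# Step 2: pick the threshold by maximising a PU-computable analogue of F1.
def pu_f1(t):
    pred = scores > t
    recall = pred[s == 1].mean()
    return recall ** 2 / max(pred.mean(), 1e-12)

thresholds = np.quantile(scores, np.linspace(0.05, 0.95, 50))
best_t = max(thresholds, key=pu_f1)

cosine = clf.coef_.ravel() @ w_true / (np.linalg.norm(clf.coef_) * np.linalg.norm(w_true))
print("cosine(estimated w, true w):", cosine)
print("accuracy vs. true labels   :", ((scores > best_t) == y).mean())
```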
|
[
"stat.ML",
"cs.LG"
] | false |
2306.02806
|
2023-06-05T11:58:07Z
|
A Data-driven Region Generation Framework for Spatiotemporal
Transportation Service Management
|
[
"Liyue Chen",
"Jiangyi Fang",
"Zhe Yu",
"Yongxin Tong",
"Shaosheng Cao",
"Leye Wang"
] |
MAUP (modifiable areal unit problem) is a fundamental problem for spatial
data management and analysis. As an instantiation of MAUP in online
transportation platforms, region generation (i.e., specifying the areal unit
for service operations) is the first and vital step for supporting
spatiotemporal transportation services such as ride-sharing and freight
transport. Most existing region generation methods are manually specified
(e.g., fixed-size grids), suffering from poor spatial semantic meaning and
inflexibility to meet service operation requirements. In this paper, we propose
RegionGen, a data-driven region generation framework that can specify regions
with key characteristics (e.g., good spatial semantic meaning and
predictability) by modeling region generation as a multi-objective optimization
problem. First, to obtain good spatial semantic meaning, RegionGen segments the
whole city into atomic spatial elements based on road networks and obstacles
(e.g., rivers). Then, it clusters the atomic spatial elements into regions by
maximizing various operation characteristics, which is formulated as a
multi-objective optimization problem. For this optimization problem, we propose
a multi-objective co-optimization algorithm. Extensive experiments verify that
RegionGen can generate more suitable regions than traditional methods for
spatiotemporal service management.
|
[
"cs.LG",
"cs.DB"
] | false |
2306.02808
|
2023-06-05T12:00:12Z
|
Deep Active Learning with Structured Neural Depth Search
|
[
"Xiaoyun Zhang",
"Xieyi Ping",
"Jianwei Zhang"
] |
Previous work optimizes traditional active learning (AL) processes with
incremental neural network architecture search (Active-iNAS) based on data
complexity change, which improves the accuracy and learning efficiency.
However, Active-iNAS trains several models and selects the model with the best
generalization performance for querying the subsequent samples after each
active learning cycle. The independent training processes lead to an
unaffordable computational budget, which is significantly inefficient and
limits search flexibility and final performance. To address this issue, we
propose a novel active strategy based on a method called structured variational
inference (SVI), or structured neural depth search (SNDS), whereby we can use
gradient descent for neural network depth search during AL processes.
At the same time, we theoretically demonstrate that the current VI-based
methods based on the mean-field assumption could lead to poor performance. We
apply our strategy using three querying techniques and three datasets and show
that our strategy outperforms current methods.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.02831
|
2023-06-05T12:27:22Z
|
MM-DAG: Multi-task DAG Learning for Multi-modal Data -- with Application
for Traffic Congestion Analysis
|
[
"Tian Lan",
"Ziyue Li",
"Zhishuai Li",
"Lei Bai",
"Man Li",
"Fugee Tsung",
"Wolfgang Ketter",
"Rui Zhao",
"Chen Zhang"
] |
This paper proposes to learn Multi-task, Multi-modal Directed Acyclic Graphs
(MM-DAGs), which are commonly observed in complex systems, e.g., traffic,
manufacturing, and weather systems, whose variables are multi-modal with
scalars, vectors, and functions. This paper takes the traffic congestion
analysis as a concrete case, where a traffic intersection is usually regarded
as a DAG. In a road network of multiple intersections, different intersections
can only have some overlapping and distinct variables observed. For example, a
signalized intersection has traffic light-related variables, whereas
unsignalized ones do not. This encourages the multi-task design: with each DAG
as a task, the MM-DAG tries to learn the multiple DAGs jointly so that their
consensus and consistency are maximized. To this end, we innovatively propose a
multi-modal regression for linear causal relationship description of different
variables. Then we develop a novel Causality Difference (CD) measure and its
differentiable approximator. Compared with existing SOTA measures, CD can
penalize the causal structural difference among DAGs with distinct nodes and
can better consider the uncertainty of causal orders. We rigorously prove our
design's topological interpretation and consistency properties. We conduct
thorough simulations and one case study to show the effectiveness of our
MM-DAG. The code is available under https://github.com/Lantian72/MM-DAG
|
[
"stat.ML",
"cs.LG"
] | false |
2306.02834
|
2023-06-05T12:29:34Z
|
Computational Complexity of Detecting Proximity to Losslessly
Compressible Neural Network Parameters
|
[
"Matthew Farrugia-Roberts"
] |
To better understand complexity in neural networks, we theoretically
investigate the idealised phenomenon of lossless network compressibility,
whereby an identical function can be implemented with a smaller network. We
give an efficient formal algorithm for optimal lossless compression in the
setting of single-hidden-layer hyperbolic tangent networks. To measure lossless
compressibility, we define the rank of a parameter as the minimum number of
hidden units required to implement the same function. Losslessly compressible
parameters are atypical, but their existence has implications for nearby
parameters. We define the proximate rank of a parameter as the rank of the most
compressible parameter within a small $L^\infty$ neighbourhood. Unfortunately,
detecting nearby losslessly compressible parameters is not so easy: we show
that bounding the proximate rank is an NP-complete problem, using a reduction
from Boolean satisfiability via a geometric problem involving covering points
in the plane with small squares. These results underscore the computational
complexity of measuring neural network complexity, laying a foundation for
future theoretical and empirical work in this direction.
|
[
"cs.LG",
"cs.CC"
] | false |
2306.02931
|
2023-06-05T14:51:05Z
|
Causal Discovery using Bayesian Model Selection
|
[
"Anish Dhir",
"Mark van der Wilk"
] |
With only observational data on two variables, and without other assumptions,
it is not possible to infer which one causes the other. Much of the causal
literature has focused on guaranteeing identifiability of causal direction in
statistical models for datasets where strong assumptions hold, such as additive
noise or restrictions on parameter count. These methods are then subsequently
tested on realistic datasets, most of which violate their assumptions. Building
on previous attempts, we show how to use causal assumptions within the Bayesian
framework. This allows us to specify models with realistic assumptions, while
also encoding independent causal mechanisms, leading to an asymmetry between
the causal directions. Identifying causal direction then becomes a Bayesian
model selection problem. We analyse why Bayesian model selection works for
known identifiable cases and flexible model classes, while also providing
correctness guarantees about its behaviour. To demonstrate our approach, we
construct a Bayesian non-parametric model that can flexibly model the joint
distribution. We then outperform previous methods on a wide range of benchmark
datasets with varying data-generating assumptions, showing the usefulness of
our method.
|
[
"stat.ML",
"cs.LG"
] | false |
2306.02972
|
2023-06-05T15:35:19Z
|
Simultaneous or Sequential Training? How Speech Representations
Cooperate in a Multi-Task Self-Supervised Learning System
|
[
"Khazar Khorrami",
"María Andrea Cruz Blandón",
"Tuomas Virtanen",
"Okko Räsänen"
] |
Speech representation learning with self-supervised algorithms has resulted
in notable performance boosts in many downstream tasks. Recent work combined
self-supervised learning (SSL) and visually grounded speech (VGS) processing
mechanisms for representation learning. The joint training with SSL and VGS
mechanisms provides the opportunity to utilize both unlabeled speech and
speech-related visual information based on data availability. This has been shown to
enhance the quality of learned representations, especially at encoding
semantic- and lexical-level knowledge. In this work, we further study the joint
optimization of wav2vec 2.0-based SSL and transformer-based VGS as a multi-task
learning system. We explore a set of training scenarios to understand how
speech representations are shared or transferred between the two tasks, and
what is the optimal training strategy for cross-modal semantic retrieval and
phoneme discrimination performance. As a result, we find that sequential
training with wav2vec 2.0 first and VGS next provides higher performance on
audio-visual retrieval compared to simultaneous optimization of both learning
mechanisms. However, the parallel SSL-VGS training reduces the effects of
catastrophic forgetting when switching between optimization criteria. Moreover,
the results suggest that phonemic representations learned through the VGS
mechanism may generalize better across datasets compared to those learned with
SSL.
|
[
"eess.AS",
"cs.LG"
] | false |
2306.02996
|
2023-06-05T16:06:39Z
|
Over-the-Air Federated Learning in Satellite systems
|
[
"Edward Akito Carlos",
"Raphael Pinard",
"Mitra Hassani"
] |
Federated learning in satellites offers several advantages. Firstly, it
ensures data privacy and security, as sensitive data remains on the satellites
and is not transmitted to a central location. This is particularly important
when dealing with sensitive or classified information. Secondly, federated
learning allows satellites to collectively learn from a diverse set of data
sources, benefiting from the distributed knowledge across the satellite
network. Lastly, the use of federated learning reduces the communication
bandwidth requirements between satellites and the central server, as only model
updates are exchanged instead of raw data. By leveraging federated learning,
satellites can collaborate and continuously improve their machine learning
models while preserving data privacy and minimizing communication overhead.
This enables the development of more intelligent and efficient satellite
systems for various applications, such as Earth observation, weather
forecasting, and space exploration.
|
[
"cs.LG",
"eess.IV"
] | false |
2306.03018
|
2023-06-05T16:35:01Z
|
Quantification of Uncertainties in Deep Learning-based Environment
Perception
|
[
"Marco Braun",
"Moritz Luszek",
"Jan Siegemund",
"Kevin Kollek",
"Anton Kummert"
] |
In this work, we introduce a novel Deep Learning-based method to perceive the
environment of a vehicle based on radar scans while accounting for
uncertainties in its predictions. The environment of the host vehicle is
segmented into equally sized grid cells which are classified individually.
Complementary to the segmentation output, our Deep Learning-based algorithm is
capable of differentiating uncertainties in its predictions as being related to
an inadequate model (epistemic uncertainty) or noisy data (aleatoric
uncertainty). To this end, weights are described as probability distributions
accounting for uncertainties in the model parameters. Distributions are learned
in a supervised fashion using gradient descent. We prove that uncertainties in
the model output correlate with the precision of its predictions. Compared to
previous concepts, we show the superior performance of our approach in reliably
perceiving the environment of a vehicle.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.03054
|
2023-06-05T17:25:45Z
|
Discriminative Adversarial Privacy: Balancing Accuracy and Membership
Privacy in Neural Networks
|
[
"Eugenio Lomurno",
"Alberto Archetti",
"Francesca Ausonio",
"Matteo Matteucci"
] |
The remarkable proliferation of deep learning across various industries has
underscored the importance of data privacy and security in AI pipelines. As the
evolution of sophisticated Membership Inference Attacks (MIAs) threatens the
secrecy of individual-specific information used for training deep learning
models, Differential Privacy (DP) has emerged as one of the most utilized
techniques to protect models against malicious attacks. However, despite its
proven theoretical properties, DP can significantly hamper model performance
and increase training time, making its use impractical in real-world scenarios.
Tackling this issue, we present Discriminative Adversarial Privacy (DAP), a
novel learning technique designed to address the limitations of DP by achieving
a balance between model performance, speed, and privacy. DAP relies on
adversarial training based on a novel loss function able to minimise the
prediction error while maximising the MIA's error. In addition, we introduce a
novel metric named Accuracy Over Privacy (AOP) to capture the
performance-privacy trade-off. Finally, to validate our claims, we compare DAP
with diverse DP scenarios, providing an analysis of the results from
performance, time, and privacy preservation perspectives.
|
[
"cs.CR",
"cs.LG"
] | false |
2306.03076
|
2023-06-05T17:52:44Z
|
Sensitivity-Aware Finetuning for Accuracy Recovery on Deep Learning
Hardware
|
[
"Lakshmi Nair",
"Darius Bunandar"
] |
Existing methods to recover model accuracy on analog-digital hardware in the
presence of quantization and analog noise include noise-injection training.
However, it can be slow in practice, incurring high computational costs, even
when starting from pretrained models. We introduce the Sensitivity-Aware
Finetuning (SAFT) approach that identifies noise sensitive layers in a model,
and uses the information to freeze specific layers for noise-injection
training. Our results show that SAFT achieves comparable accuracy to
noise-injection training and is 2x to 8x faster.
|
[
"cs.LG",
"cs.AR"
] | false |
2306.03186
|
2023-06-05T18:56:48Z
|
Flipping Coins to Estimate Pseudocounts for Exploration in Reinforcement
Learning
|
[
"Sam Lobel",
"Akhil Bagaria",
"George Konidaris"
] |
We propose a new method for count-based exploration in high-dimensional state
spaces. Unlike previous work which relies on density models, we show that
counts can be derived by averaging samples from the Rademacher distribution (or
coin flips). This insight is used to set up a simple supervised learning
objective which, when optimized, yields a state's visitation count. We show
that our method is significantly more effective at deducing ground-truth
visitation counts than previous work; when used as an exploration bonus for a
model-free reinforcement learning algorithm, it outperforms existing approaches
on most of 9 challenging exploration tasks, including the Atari game
Montezuma's Revenge.
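The counting identity behind the method can be checked in a few lines: if a state was visited $n$ times and each visit drew an independent Rademacher coin, the average coin $\bar{y}$ satisfies $\mathbb{E}[\bar{y}^2] = 1/n$, so $1/\bar{y}^2$ recovers the count. The tabular Monte Carlo below only illustrates this identity; the paper learns the average with a supervised regression model in high-dimensional state spaces.

```python
import numpy as np

rng = np.random.default_rng(0)

# If a state has been visited n times and each visit drew an independent
# Rademacher coin c_i in {-1, +1}, then y = mean(c_i) has E[y^2] = 1/n,
# so 1 / E[y^2] recovers the visitation count.
for n in [1, 5, 20, 100, 1000]:
    coins = rng.choice([-1.0, 1.0], size=(5000, n))   # 5000 independent trials
    avg = coins.mean(axis=1)                          # one averaged coin per trial
    est_count = 1.0 / np.mean(avg ** 2)               # pseudocount estimate
    print(f"true n = {n:5d}   estimated n = {est_count:8.1f}")
```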
|
[
"cs.LG",
"cs.AI"
] | false |
2306.03191
|
2023-06-05T19:06:18Z
|
Personalized Federated Domain Adaptation for Item-to-Item Recommendation
|
[
"Ziwei Fan",
"Hao Ding",
"Anoop Deoras",
"Trong Nghia Hoang"
] |
Item-to-Item (I2I) recommendation is an important function in most
recommendation systems, which generates replacement or complement suggestions
for a particular item based on its semantic similarities to other cataloged
items. Given that subsets of items in a recommendation system might be
co-interacted with by the same set of customers, graph-based models, such as
graph neural networks (GNNs), provide a natural framework to combine, ingest
and extract valuable insights from such high-order relational interactions
between cataloged items, as well as their metadata features, as has been shown
in many recent studies. However, learning GNNs effectively for I2I requires
ingesting a large amount of relational data, which might not always be
available, especially in new, emerging market segments. To mitigate this data
bottleneck, we postulate that recommendation patterns learned from existing
mature market segments (with private data) could be adapted to build effective
warm-start models for emerging ones. To achieve this, we propose and
investigate a personalized federated modeling framework based on GNNs to
summarize, assemble and adapt recommendation patterns across market segments
with heterogeneous customer behaviors into effective local models. Our key
contribution is a personalized graph adaptation model that bridges the gap
between recent literature on federated GNNs and (non-graph) personalized
federated learning, which either does not optimize for the adaptability of the
federated model or is restricted to local models with homogeneous
parameterization, excluding GNNs with heterogeneous local graphs.
|
[
"cs.IR",
"cs.LG"
] | false |
2306.03235
|
2023-06-05T20:40:05Z
|
Information Flow Control in Machine Learning through Modular Model
Architecture
|
[
"Trishita Tiwari",
"Suchin Gururangan",
"Chuan Guo",
"Weizhe Hua",
"Sanjay Kariyappa",
"Udit Gupta",
"Wenjie Xiong",
"Kiwan Maeng",
"Hsien-Hsin S. Lee",
"G. Edward Suh"
] |
In today's machine learning (ML) models, any part of the training data can
affect its output. This lack of control over the information flow from training data
to model output is a major obstacle in training models on sensitive data when
access control only allows individual users to access a subset of data. To
enable secure machine learning for access controlled data, we propose the
notion of information flow control for machine learning, and develop a secure
Transformer-based language model based on the Mixture-of-Experts (MoE)
architecture. The secure MoE architecture controls information flow by limiting
the influence of training data from each security domain to a single expert
module, and only enabling a subset of experts at inference time based on an
access control policy. The evaluation using a large corpus of text data shows
that the proposed MoE architecture has minimal (1.9%) performance overhead and
can significantly improve model accuracy (up to 37%) by enabling training on
access-controlled data.
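A toy sketch of the control idea described above: one expert per security domain, trained only on that domain's data, and an inference step that may combine only the experts permitted by the caller's access policy. Linear least-squares experts and the averaging gate are stand-ins for the paper's Transformer MoE modules and routing.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_expert(X, y):
    """Each security domain trains its own expert on its own data only."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Synthetic per-domain datasets (stand-ins for access-controlled corpora).
d = 4
domains = {}
for name in ["public", "team_a", "team_b"]:
    X = rng.normal(size=(200, d))
    w = rng.normal(size=d)
    domains[name] = (X, X @ w + 0.1 * rng.normal(size=200))

experts = {name: train_expert(X, y) for name, (X, y) in domains.items()}

def predict(x, allowed):
    """Inference under an access-control policy: only experts from domains the
    caller may read contribute, so no information flows from other domains'
    training data into the output."""
    return float(np.mean([x @ experts[name] for name in allowed]))

x = rng.normal(size=d)
print("cleared for public only     :", predict(x, ["public"]))
print("cleared for public + team_a :", predict(x, ["public", "team_a"]))
```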
|
[
"cs.LG",
"cs.CR"
] | false |
2306.03256
|
2023-06-05T21:17:48Z
|
Explaining and Adapting Graph Conditional Shift
|
[
"Qi Zhu",
"Yizhu Jiao",
"Natalia Ponomareva",
"Jiawei Han",
"Bryan Perozzi"
] |
Graph Neural Networks (GNNs) have shown remarkable performance on
graph-structured data. However, recent empirical studies suggest that GNNs are
very susceptible to distribution shift. There is still significant ambiguity
about why graph-based models seem more vulnerable to these shifts. In this
work, we provide a thorough theoretical analysis of this phenomenon by
quantifying the magnitude
of conditional shift between the input features and the output label. Our
findings show that both graph heterophily and model architecture exacerbate
conditional shifts, leading to performance degradation. To address this, we
propose an approach that involves estimating and minimizing the conditional
shift for unsupervised domain adaptation on graphs. In our controlled synthetic
experiments, our algorithm demonstrates robustness towards distribution shift,
resulting in up to 10% absolute ROC AUC improvement versus the second-best
algorithm. Furthermore, comprehensive experiments on both node classification
and graph classification show its robust performance under various distribution
shifts.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.03262
|
2023-06-05T21:26:12Z
|
Has the Machine Learning Review Process Become More Arbitrary as the
Field Has Grown? The NeurIPS 2021 Consistency Experiment
|
[
"Alina Beygelzimer",
"Yann N. Dauphin",
"Percy Liang",
"Jennifer Wortman Vaughan"
] |
We present the NeurIPS 2021 consistency experiment, a larger-scale variant of
the 2014 NeurIPS experiment in which 10% of conference submissions were
reviewed by two independent committees to quantify the randomness in the review
process. We observe that the two committees disagree on their accept/reject
recommendations for 23% of the papers and that, consistent with the results
from 2014, approximately half of the list of accepted papers would change if
the review process were randomly rerun. Our analysis suggests that making the
conference more selective would increase the arbitrariness of the process.
Taken together with previous research, our results highlight the inherent
difficulty of objectively measuring the quality of research, and suggest that
authors should not be excessively discouraged by rejected work.
|
[
"cs.LG",
"cs.DL"
] | false |
2306.03273
|
2023-06-05T21:45:23Z
|
Under-Counted Tensor Completion with Neural Incorporation of Attributes
|
[
"Shahana Ibrahim",
"Xiao Fu",
"Rebecca Hutchinson",
"Eugene Seo"
] |
Systematic under-counting effects are observed in data collected across many
disciplines, e.g., epidemiology and ecology. Under-counted tensor completion
(UC-TC) is well-motivated for many data analytics tasks, e.g., inferring the
case numbers of infectious diseases at unobserved locations from under-counted
case numbers in neighboring regions. However, existing methods for similar
problems often lack theoretical support, making it hard to understand the
underlying principles and conditions beyond empirical successes. In this work,
a low-rank Poisson tensor model with an expressive unknown nonlinear side
information extractor is proposed for under-counted multi-aspect data. A joint
low-rank tensor completion and neural network learning algorithm is designed to
recover the model. Moreover, the UC-TC formulation is supported by theoretical
analysis showing that the fully counted entries of the tensor and each entry's
under-counting probability can be provably recovered from partial observations
-- under reasonable conditions. To the best of our knowledge, this result is the
first to offer theoretical support for under-counted multi-aspect data completion.
Simulations and real-data experiments corroborate the theoretical claims.
|
[
"cs.LG",
"eess.SP"
] | false |
2306.03284
|
2023-06-05T22:09:06Z
|
Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion
Generative Models
|
[
"Sriram Ravula",
"Brett Levac",
"Ajil Jalal",
"Jonathan I. Tamir",
"Alexandros G. Dimakis"
] |
Diffusion-based generative models have been used as powerful priors for
magnetic resonance imaging (MRI) reconstruction. We present a learning method
to optimize sub-sampling patterns for compressed sensing multi-coil MRI that
leverages pre-trained diffusion generative models. Crucially, during training
we use a single-step reconstruction based on the posterior mean estimate given
by the diffusion model and the MRI measurement process. Experiments across
varying anatomies, acceleration factors, and pattern types show that sampling
operators learned with our method lead to competitive, and in the case of 2D
patterns, improved reconstructions compared to baseline patterns. Our method
requires as few as five training images to learn effective sampling patterns.
|
[
"cs.LG",
"eess.IV"
] | false |
2306.03311
|
2023-06-05T23:38:31Z
|
Learning Embeddings for Sequential Tasks Using Population of Agents
|
[
"Mridul Mahajan",
"Georgios Tzannetos",
"Goran Radanovic",
"Adish Singla"
] |
We present an information-theoretic framework to learn fixed-dimensional
embeddings for tasks in reinforcement learning. We leverage the idea that two
tasks are similar to each other if observing an agent's performance on one task
reduces our uncertainty about its performance on the other. This intuition is
captured by our information-theoretic criterion which uses a diverse population
of agents to measure similarity between tasks in sequential decision-making
settings. In addition to qualitative assessment, we empirically demonstrate the
effectiveness of our techniques based on task embeddings by quantitative
comparisons against strong baselines on two application scenarios: predicting
an agent's performance on a test task by observing its performance on a small
quiz of tasks, and selecting tasks with desired characteristics from a given
set of options.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.03938
|
2023-06-05T13:11:33Z
|
Learning Causal Mechanisms through Orthogonal Neural Networks
|
[
"Peyman Sheikholharam Mashhadi",
"Slawomir Nowaczyk"
] |
A fundamental feature of human intelligence is the ability to infer
high-level abstractions from low-level sensory data. An essential component of
such inference is the ability to discover modularized generative mechanisms.
Despite many efforts to use statistical learning and pattern recognition for
finding disentangled factors, arguably human intelligence remains unmatched in
this area.
In this paper, we investigate a problem of learning, in a fully unsupervised
manner, the inverse of a set of independent mechanisms from distorted data
points. We postulate, and justify this claim with experimental results, that an
important weakness of existing machine learning solutions lies in the
insufficiency of cross-module diversification. Addressing this crucial
discrepancy between human and machine intelligence is an important challenge
for pattern recognition systems.
To this end, our work proposes an unsupervised method that discovers and
disentangles a set of independent mechanisms from unlabeled data, and learns
how to invert them. A number of experts compete against each other for
individual data points in an adversarial setting: the one that best inverts
the (unknown) generative mechanism is the winner. We demonstrate that introducing
an orthogonalization layer into the expert architectures enforces additional
diversity in the outputs, leading to significantly better separability.
Moreover, we propose a procedure for relocating data points between experts to
further prevent any single expert from claiming multiple mechanisms. We experimentally
illustrate that these techniques allow discovery and modularization of much
less pronounced transformations, in addition to considerably faster
convergence.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.05373
|
2023-06-05T13:52:56Z
|
A Computational Analysis of Oral Argument in the Supreme Court
|
[
"Gregory M. Dickinson"
] |
As the most public component of the Supreme Court's decision-making process,
oral argument receives an out-sized share of attention in the popular media.
Despite its prominence, however, the basic function and operation of oral
argument as an institution remain poorly understood, as political scientists
and legal scholars continue to debate even the most fundamental questions about
its role.
Past study of oral argument has tended to focus on discrete, quantifiable
attributes of oral argument, such as the number of questions asked to each
advocate, the party of the Justices' appointing president, or the ideological
implications of the case on appeal. Such studies allow broad generalizations
about oral argument and judicial decision making: Justices tend to vote in
accordance with their ideological preferences, and they tend to ask more
questions when they are skeptical of a party's position. But they tell us
little about the actual goings on at oral argument -- the running dialog
between Justice and advocate that is the heart of the institution.
This Article fills that void, using machine learning techniques to, for the
first time, construct predictive models of judicial decision making based not
on oral argument's superficial features or on factors external to oral
argument, such as where the case falls on a liberal-conservative spectrum, but
on the actual content of the oral argument itself -- the Justices' questions to
each side. The resultant models offer an important new window into aspects of
oral argument that have long resisted empirical study, including the Justices'
individual questioning styles, how each expresses skepticism, and which of the
Justices' questions are most central to oral argument dialog.
|
[
"cs.CY",
"cs.LG"
] | false |
2306.06119
|
2023-06-05T14:15:39Z
|
Doubly Stochastic Graph-based Non-autoregressive Reaction Prediction
|
[
"Ziqiao Meng",
"Peilin Zhao",
"Yang Yu",
"Irwin King"
] |
Organic reaction prediction is a critical task in drug discovery. Recently,
researchers have achieved non-autoregressive reaction prediction by modeling
the redistribution of electrons, resulting in state-of-the-art top-1 accuracy,
and enabling parallel sampling. However, the current non-autoregressive decoder
does not satisfy two essential rules of electron redistribution modeling
simultaneously: the electron-counting rule and the symmetry rule. This
violation of the physical constraints of chemical reactions impairs model
performance. In this work, we propose a new framework that combines two
doubly stochastic self-attention mappings to obtain electron redistribution
predictions that follow both constraints. We further extend our solution to a
general multi-head attention mechanism with augmented constraints. To achieve
this, we apply Sinkhorn's algorithm to iteratively update self-attention
mappings, which imposes doubly conservative constraints as additional
informative priors on electron redistribution modeling. We theoretically
demonstrate that our framework can simultaneously satisfy both rules, which the current
decoder mechanism cannot do. Empirical results show that our approach
consistently improves the predictive performance of non-autoregressive models
and does not incur prohibitive additional computational cost.
|
[
"physics.chem-ph",
"cs.LG"
] | false |
2306.10028
|
2023-06-05T07:04:34Z
|
Graph Based Long-Term And Short-Term Interest Model for Click-Through
Rate Prediction
|
[
"Huinan Sun",
"Guangliang Yu",
"Pengye Zhang",
"Bo Zhang",
"Xingxing Wang",
"Dong Wang"
] |
Click-through rate (CTR) prediction aims to predict the probability that the
user will click an item, which has been one of the key tasks in online
recommender and advertising systems. In such systems, rich user behavior (viz.
long- and short-term) has been proved to be of great value in capturing user
interests. Both industry and academy have paid much attention to this topic and
propose different approaches to modeling with long-term and short-term user
behavior data. But some issues remain unresolved. More specifically, (1) rule-
and truncation-based methods for extracting information from long-term behavior
tend to cause information loss, and (2) using a single feedback behavior,
regardless of scenario, to extract information from short-term behavior leads
to information confusion and noise. To fill this gap, we propose a Graph based
Long-term and Short-term interest Model, termed GLSM. It consists of a
multi-interest graph structure for capturing long-term user behavior, a
multi-scenario heterogeneous sequence model for modeling short-term
information, and an adaptive fusion mechanism to fuse information from
long-term and short-term behaviors. In comprehensive experiments on real-world
datasets, GLSM achieved SOTA scores on offline metrics. At the same time, the
GLSM algorithm has been deployed in our industrial application, bringing 4.9%
CTR and 4.3% GMV lift, which is significant to the business.
|
[
"cs.IR",
"cs.LG"
] | false |
2306.02532
|
2023-06-05T01:41:23Z
|
R-Mixup: Riemannian Mixup for Biological Networks
|
[
"Xuan Kan",
"Zimu Li",
"Hejie Cui",
"Yue Yu",
"Ran Xu",
"Shaojun Yu",
"Zilong Zhang",
"Ying Guo",
"Carl Yang"
] |
Biological networks are commonly used in biomedical and healthcare domains to
effectively model the structure of complex biological systems with interactions
linking biological entities. However, due to their characteristics of high
dimensionality and low sample size, directly applying deep learning models on
biological networks usually faces severe overfitting. In this work, we propose
R-MIXUP, a Mixup-based data augmentation technique that suits the symmetric
positive definite (SPD) property of adjacency matrices from biological networks
with optimized training efficiency. The interpolation process in R-MIXUP
leverages the log-Euclidean distance metrics from the Riemannian manifold,
effectively addressing the swelling effect and arbitrarily incorrect label
issues of vanilla Mixup. We demonstrate the effectiveness of R-MIXUP with five
real-world biological network datasets on both regression and classification
tasks. Besides, we derive a commonly ignored necessary condition for
identifying the SPD matrices of biological networks and empirically study its
influence on the model performance. The code implementation can be found in
Appendix E.
|
[
"cs.LG",
"cs.AI",
"q-bio.QM",
"68T07, 68T05",
"I.2.6; J.3"
] | false |
2306.02555
|
2023-06-05T03:05:43Z
|
Barriers for the performance of graph neural networks (GNN) in discrete
random structures. A comment
on~\cite{schuetz2022combinatorial},\cite{angelini2023modern},\cite{schuetz2023reply}
|
[
"David Gamarnik"
] |
Recently graph neural network (GNN) based algorithms were proposed to solve a
variety of combinatorial optimization problems, including Maximum Cut problem,
Maximum Independent Set problem and similar other
problems~\cite{schuetz2022combinatorial},\cite{schuetz2022graph}.
The publication~\cite{schuetz2022combinatorial} stirred a debate whether GNN
based method was adequately benchmarked against best prior methods. In
particular, critical commentaries~\cite{angelini2023modern}
and~\cite{boettcher2023inability} point out that simple greedy algorithm
performs better than GNN in the setting of random graphs, and in fact stronger
algorithmic performance can be reached with more sophisticated methods. A
response from the authors~\cite{schuetz2023reply} pointed out that GNN
performance can be improved further by tuning up the parameters better.
We do not intend to discuss the merits of arguments and counter-arguments
in~\cite{schuetz2022combinatorial},\cite{angelini2023modern},\cite{boettcher2023inability},\cite{schuetz2023reply}.
Rather in this note we establish a fundamental limitation for running GNN on
random graphs considered in these references, for a broad range of choices of
GNN architecture. These limitations arise from the presence of the Overlap Gap
Property (OGP) phase transition, which is a barrier for many algorithms, both
classical and quantum. As we demonstrate in this paper, it is also a barrier to
GNN due to its local structure. We note that at the same time known algorithms
ranging from simple greedy algorithms to more sophisticated algorithms based on
message passing, provide best results for these problems \emph{up to} the OGP
phase transition. This leaves very little space for GNN to outperform the known
algorithms, and based on this we side with the conclusions made
in~\cite{angelini2023modern} and~\cite{boettcher2023inability}.
|
[
"cs.LG",
"cs.AI",
"cs.DM"
] | false |
2306.02563
|
2023-06-05T03:33:26Z
|
Large-Scale Distributed Learning via Private On-Device
Locality-Sensitive Hashing
|
[
"Tahseen Rabbani",
"Marco Bornstein",
"Furong Huang"
] |
Locality-sensitive hashing (LSH) based frameworks have been used efficiently
to select weight vectors in a dense hidden layer with high cosine similarity to
an input, enabling dynamic pruning. While this type of scheme has been shown to
improve computational training efficiency, existing algorithms require repeated
randomized projection of the full layer weight, which is impractical for
computational- and memory-constrained devices. In a distributed setting,
deferring LSH analysis to a centralized host is (i) slow if the device cluster
is large and (ii) requires access to input data which is forbidden in a
federated context. Using a new family of hash functions, we develop one of the
first private, personalized, and memory-efficient on-device LSH frameworks. Our
framework enables privacy and personalization by allowing each device to
generate hash tables, without the help of a central host, using device-specific
hashing hyper-parameters (e.g. number of hash tables or hash length). Hash
tables are generated with a compressed set of the full weights, and can be
serially generated and discarded if the process is memory-intensive. This
allows devices to avoid maintaining (i) the fully-sized model and (ii) large
amounts of hash tables in local memory for LSH analysis. We prove several
statistical and sensitivity properties of our hash functions, and
experimentally demonstrate that our framework is competitive in training
large-scale recommender networks compared to other LSH frameworks which assume
unrestricted on-device capacity.
|
[
"cs.LG",
"cs.CR",
"cs.DC"
] | false |
2306.02572
|
2023-06-05T03:55:26Z
|
Introduction to Latent Variable Energy-Based Models: A Path Towards
Autonomous Machine Intelligence
|
[
"Anna Dawid",
"Yann LeCun"
] |
Current automated systems have crucial limitations that need to be addressed
before artificial intelligence can reach human-like levels and bring new
technological revolutions. Among others, our societies still lack Level 5
self-driving cars, domestic robots, and virtual assistants that learn reliable
world models, reason, and plan complex action sequences. In these notes, we
summarize the main ideas behind the architecture of autonomous intelligence of
the future proposed by Yann LeCun. In particular, we introduce energy-based and
latent variable models and combine their advantages in the building block of
LeCun's proposal, that is, in the hierarchical joint embedding predictive
architecture (H-JEPA).
|
[
"cs.LG",
"cond-mat.dis-nn",
"stat.ML"
] | false |
2306.02601
|
2023-06-05T05:21:01Z
|
Aiming towards the minimizers: fast convergence of SGD for
overparametrized problems
|
[
"Chaoyue Liu",
"Dmitriy Drusvyatskiy",
"Mikhail Belkin",
"Damek Davis",
"Yi-An Ma"
] |
Modern machine learning paradigms, such as deep learning, occur in or close
to the interpolation regime, wherein the number of model parameters is much
larger than the number of data samples. In this work, we propose a regularity
condition within the interpolation regime which endows the stochastic gradient
method with the same worst-case iteration complexity as the deterministic
gradient method, while using only a single sampled gradient (or a minibatch) in
each iteration. In contrast, all existing guarantees require the stochastic
gradient method to take small steps, thereby resulting in a much slower linear
rate of convergence. Finally, we demonstrate that our condition holds when
training sufficiently wide feedforward neural networks with a linear output
layer.
|
[
"cs.LG",
"math.OC",
"stat.ML"
] | false |
2306.02816
|
2023-06-05T12:12:59Z
|
MultiAdam: Parameter-wise Scale-invariant Optimizer for Multiscale
Training of Physics-informed Neural Networks
|
[
"Jiachen Yao",
"Chang Su",
"Zhongkai Hao",
"Songming Liu",
"Hang Su",
"Jun Zhu"
] |
Physics-informed Neural Networks (PINNs) have recently achieved remarkable
progress in solving Partial Differential Equations (PDEs) in various fields by
minimizing a weighted sum of PDE loss and boundary loss. However, there are
several critical challenges in the training of PINNs, including the lack of
theoretical frameworks and the imbalance between PDE loss and boundary loss. In
this paper, we present an analysis of second-order non-homogeneous PDEs, which
are classified into three categories and applicable to various common problems.
We also characterize the connections between the training loss and actual
error, guaranteeing convergence under mild conditions. The theoretical analysis
inspires us to further propose MultiAdam, a scale-invariant optimizer that
leverages gradient momentum to parameter-wisely balance the loss terms.
Extensive experiment results on multiple problems from different physical
domains demonstrate that our MultiAdam solver can improve the predictive
accuracy by 1-2 orders of magnitude compared with strong baselines.
|
[
"cs.LG",
"cs.NA",
"math.NA"
] | false |
2306.02833
|
2023-06-05T12:29:13Z
|
The $L^\infty$ Learnability of Reproducing Kernel Hilbert Spaces
|
[
"Hongrui Chen",
"Jihao Long",
"Lei Wu"
] |
In this work, we analyze the learnability of reproducing kernel Hilbert
spaces (RKHS) under the $L^\infty$ norm, which is critical for understanding
the performance of kernel methods and random feature models in safety- and
security-critical applications. Specifically, we relate the $L^\infty$
learnability of an RKHS to the spectral decay of the associated kernel, and
establish both lower and upper bounds on the sample complexity. In
particular, for dot-product kernels on the sphere, we identify conditions when
the $L^\infty$ learning can be achieved with polynomial samples. Let $d$ denote
the input dimension and assume the kernel spectrum roughly decays as
$\lambda_k\sim k^{-1-\beta}$ with $\beta>0$. We prove that if $\beta$ is
independent of the input dimension $d$, then functions in the RKHS can be
learned efficiently under the $L^\infty$ norm, i.e., the sample complexity
depends polynomially on $d$. In contrast, if $\beta=1/\mathrm{poly}(d)$, then
the $L^\infty$ learning requires exponentially many samples.
|
[
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH"
] | false |
2306.02984
|
2023-06-05T15:53:56Z
|
A Deep Learning Approach Utilizing Covariance Matrix Analysis for the
ISBI Edited MRS Reconstruction Challenge
|
[
"Julian P. Merkofer",
"Dennis M. J. van de Sande",
"Sina Amirrajab",
"Gerhard S. Drenthen",
"Mitko Veta",
"Jacobus F. A. Jansen",
"Marcel Breeuwer",
"Ruud J. G. van Sloun"
] |
This work proposes a method to accelerate the acquisition of high-quality
edited magnetic resonance spectroscopy (MRS) scans using machine learning
models taking the sample covariance matrix as input. The method is invariant to
the number of transients and robust to noisy input data for both synthetic as
well as in-vivo scenarios.
|
[
"physics.med-ph",
"cs.LG",
"eess.IV"
] | false |
2306.02990
|
2023-06-05T16:01:33Z
|
Integrated Sensing, Computation, and Communication for UAV-assisted
Federated Edge Learning
|
[
"Yao Tang",
"Guangxu Zhu",
"Wei Xu",
"Man Hon Cheung",
"Tat-Ming Lok",
"Shuguang Cui"
] |
Federated edge learning (FEEL) enables privacy-preserving model training
through periodic communication between edge devices and the server. Unmanned
Aerial Vehicle (UAV)-mounted edge devices are particularly advantageous for
FEEL due to their flexibility and mobility in efficient data collection. In
UAV-assisted FEEL, sensing, computation, and communication are coupled and
compete for limited onboard resources, and UAV deployment also affects sensing
and communication performance. Therefore, the joint design of UAV deployment
and resource allocation is crucial to achieving the optimal training
performance. In this paper, we address the problem of joint UAV deployment
design and resource allocation for FEEL via a concrete case study of human
motion recognition based on wireless sensing. We first analyze the impact of
UAV deployment on the sensing quality and identify a threshold value for the
sensing elevation angle that guarantees a satisfactory quality of data samples.
Due to the non-ideal sensing channels, we consider the probabilistic sensing
model, where the successful sensing probability of each UAV is determined by
its position. Then, we derive the upper bound of the FEEL training loss as a
function of the sensing probability. Theoretical results suggest that the
convergence rate can be improved if UAVs have a uniform successful sensing
probability. Based on this analysis, we formulate a training time minimization
problem by jointly optimizing UAV deployment, integrated sensing, computation,
and communication (ISCC) resources under a desirable optimality gap constraint.
To solve this challenging mixed-integer non-convex problem, we apply the
alternating optimization technique, and propose the bandwidth, batch size, and
position optimization (BBPO) scheme to optimize these three decision variables
alternately.
|
[
"cs.IT",
"cs.LG",
"eess.SP",
"math.IT"
] | false |
2306.03009
|
2023-06-05T16:19:48Z
|
Using Sequences of Life-events to Predict Human Lives
|
[
"Germans Savcisens",
"Tina Eliassi-Rad",
"Lars Kai Hansen",
"Laust Mortensen",
"Lau Lilleholt",
"Anna Rogers",
"Ingo Zettler",
"Sune Lehmann"
] |
Over the past decade, machine learning has revolutionized computers' ability
to analyze text through flexible computational models. Due to their structural
similarity to written language, transformer-based architectures have also shown
promise as tools to make sense of a range of multi-variate sequences from
protein-structures, music, electronic health records to weather-forecasts. We
can also represent human lives in a way that shares this structural similarity
to language. From one perspective, lives are simply sequences of events: People
are born, visit the pediatrician, start school, move to a new location, get
married, and so on. Here, we exploit this similarity to adapt innovations from
natural language processing to examine the evolution and predictability of
human lives based on detailed event sequences. We do this by drawing on
arguably the most comprehensive registry data in existence, available for an
entire nation of more than six million individuals across decades. Our data
include information about life-events related to health, education, occupation,
income, address, and working hours, recorded with day-to-day resolution. We
create embeddings of life-events in a single vector space showing that this
embedding space is robust and highly structured. Our models allow us to predict
diverse outcomes ranging from early mortality to personality nuances,
outperforming state-of-the-art models by a wide margin. Using methods for
interpreting deep learning models, we probe the algorithm to understand the
factors that enable our predictions. Our framework allows researchers to
identify new potential mechanisms that impact life outcomes and associated
possibilities for personalized interventions.
|
[
"stat.ML",
"cs.LG",
"stat.AP"
] | false |