arxiv_id | published | titles | authors | abstract | categories | selected
---|---|---|---|---|---|---
2306.06147
|
2023-06-09T12:07:10Z
|
SentiGOLD: A Large Bangla Gold Standard Multi-Domain Sentiment Analysis
Dataset and its Evaluation
|
[
"Md. Ekramul Islam",
"Labib Chowdhury",
"Faisal Ahamed Khan",
"Shazzad Hossain",
"Sourave Hossain",
"Mohammad Mamun Or Rashid",
"Nabeel Mohammed",
"Mohammad Ruhul Amin"
] |
This study introduces SentiGOLD, a Bangla multi-domain sentiment analysis
dataset. Comprising 70,000 samples, it was created from diverse sources and
annotated by a gender-balanced team of linguists. SentiGOLD adheres to
established linguistic conventions agreed upon by the Government of Bangladesh
and a Bangla linguistics committee. Unlike English and other languages, Bangla
lacks standard sentiment analysis datasets due to the absence of a national
linguistics framework. The dataset incorporates data from online video
comments, social media posts, blogs, news, and other sources while maintaining
domain and class distribution rigorously. It spans 30 domains (e.g., politics,
entertainment, sports) and includes 5 sentiment classes (strongly negative,
weakly negative, neutral, weakly positive, and strongly positive). The
annotation scheme,
approved by the national linguistics committee, ensures a robust Inter
Annotator Agreement (IAA) with a Fleiss' kappa score of 0.88. Intra- and
cross-dataset evaluation protocols are applied to establish a standard
classification system. Cross-dataset evaluation on the noisy SentNoB dataset
presents a challenging test scenario. Additionally, zero-shot experiments
demonstrate the generalizability of SentiGOLD. The top model achieves a macro
f1 score of 0.62 (intra-dataset) across 5 classes, setting a benchmark, and
0.61 (cross-dataset from SentNoB) across 3 classes, comparable to the
state-of-the-art. The fine-tuned sentiment analysis model can be accessed at
https://sentiment.bangla.gov.bd.
|
[
"cs.CL",
"cs.AI"
] | false |
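The Inter Annotator Agreement metric cited in the SentiGOLD abstract, Fleiss' kappa, can be computed directly from per-item annotation counts. Below is a minimal sketch in plain Python; the toy counts are illustrative only and are not taken from SentiGOLD:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a list of per-item category counts.

    counts[i][j] = number of annotators assigning item i to category j;
    every item must be rated by the same number of annotators n.
    """
    N = len(counts)                      # number of items
    n = sum(counts[0])                   # annotators per item
    k = len(counts[0])                   # number of categories

    # P_bar: mean per-item agreement
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N
    # p_j: overall proportion of assignments to category j
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)       # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 5 items, 3 categories, 4 annotators each (illustrative only)
toy = [
    [4, 0, 0],
    [0, 4, 0],
    [3, 1, 0],
    [0, 0, 4],
    [2, 2, 0],
]
print(round(fleiss_kappa(toy), 3))  # → 0.633
```

A score of 0.88, as reported for SentiGOLD, indicates near-perfect agreement on conventional interpretation scales.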
2306.06199
|
2023-06-09T19:07:31Z
|
Reliability Check: An Analysis of GPT-3's Response to Sensitive Topics
and Prompt Wording
|
[
"Aisha Khatun",
"Daniel G. Brown"
] |
Large language models (LLMs) have become mainstream technology with their
versatile use cases and impressive performance. Despite the countless
out-of-the-box applications, LLMs are still not reliable. A lot of work is
being done to improve the factual accuracy, consistency, and ethical standards
of these models through fine-tuning, prompting, and Reinforcement Learning with
Human Feedback (RLHF), but no systematic analysis of the responses of these
models to different categories of statements, or of their potential
vulnerabilities to simple prompting changes, is available. In this work, we
analyze what confuses GPT-3: how the model responds to certain sensitive topics
and what effects the prompt wording has on the model response. We find that
GPT-3 correctly disagrees with obvious Conspiracies and Stereotypes but makes
mistakes with common Misconceptions and Controversies. The model responses are
inconsistent across prompts and settings, highlighting GPT-3's unreliability.
The dataset and code of our analysis are available at
https://github.com/tanny411/GPT3-Reliability-Check.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.06234
|
2023-06-09T20:08:48Z
|
Using Foundation Models to Detect Policy Violations with Minimal
Supervision
|
[
"Sid Mittal",
"Vineet Gupta",
"Frederick Liu",
"Mukund Sundararajan"
] |
Foundation models, i.e. large neural networks pre-trained on large text
corpora, have revolutionized NLP. They can be instructed directly (e.g.
(arXiv:2005.14165)) - this is called hard prompting - and they can be tuned
using very little data (e.g. (arXiv:2104.08691)) - this technique is called
soft prompting. We seek to leverage their capabilities to detect policy
violations. Our contributions are: We identify a hard prompt that adapts
chain-of-thought prompting to policy violation tasks. This prompt produces
policy violation classifications, along with extractive explanations that
justify the classification. We compose the hard-prompts with soft prompt tuning
to produce a classifier that attains high accuracy with very little
supervision; the same classifier also produces explanations. Though the
supervision only acts on the classifications, we find that the modified
explanations remain consistent with the (tuned) model's response. Along the
way, we identify several unintuitive aspects of foundation models. For
instance, adding an example from a specific class can actually reduce
predictions of that class; separately, tokenization can affect scoring in
unintuitive ways. Based on our technical results, we identify a simple workflow for
product teams to quickly develop effective policy violation detectors.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.06264
|
2023-06-09T21:25:48Z
|
Measuring and Modifying Factual Knowledge in Large Language Models
|
[
"Pouya Pezeshkpour"
] |
Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework for estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
|
[
"cs.CL",
"cs.LG"
] | false |
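The entropy and KL-divergence measurements described in the abstract above can be sketched in a few lines. The toy distributions below stand in for an LLM's prediction probabilities over candidate answers before and after instilling a target fact; they are illustrative values, not results from the paper:

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """KL(p || q); assumes q > 0 wherever p > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy prediction distributions over 4 candidate answers (illustrative):
before = [0.25, 0.25, 0.25, 0.25]   # model is uncertain
after  = [0.85, 0.05, 0.05, 0.05]   # after instilling the target fact

# An entropy drop and a nonzero KL-divergence both signal that the
# instilled knowledge changed the model's belief state.
print(entropy(before) - entropy(after) > 0)   # → True
print(kl_divergence(after, before))
```

The paper compares such information-theoretic measurements against ranking-based baselines; this sketch only shows the two metrics themselves.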
2306.06297
|
2023-06-09T23:23:26Z
|
Protect Your Prompts: Protocols for IP Protection in LLM Applications
|
[
"M. A. van Wyk",
"M. Bekker",
"X. L. Richards",
"K. J. Nixon"
] |
With the rapid adoption of AI in the form of large language models (LLMs),
the potential value of carefully engineered prompts has become significant.
However, to realize this potential, prompts should be tradable on an open
market. Since prompts are, at present, generally economically non-excludable,
by virtue of their nature as text, no general competitive market has yet been
established. This note discusses two protocols intended to provide protection
of prompts, elevating their status as intellectual property, thus confirming
the intellectual property rights of prompt engineers, and potentially
supporting the flourishing of an open market for LLM prompts.
|
[
"cs.CL",
"cs.AI",
"91D10, 68T10, 03D40",
"I.2.6; K.6.5; F.3.2"
] | false |
2306.07401
|
2023-06-09T17:53:19Z
|
Implementing BERT and fine-tuned RoBERTa to detect AI generated news by
ChatGPT
|
[
"Zecong Wang",
"Jiaxi Cheng",
"Chen Cui",
"Chenhao Yu"
] |
The abundance of information on social media has increased the necessity of
accurate real-time rumour detection. Manual techniques of identifying and
verifying fake news generated by AI tools are impracticable and time-consuming
given the enormous volume of information generated every day. This has sparked
an increase in interest in creating automated systems to find fake news on the
Internet. The studies in this research demonstrate that the BERT and RoBERTa
models with fine-tuning had the best success in detecting AI-generated news.
With a precision score of 98%, the fine-tuned RoBERTa model in particular
performed excellently. In conclusion, this study has shown that neural networks
can be used to identify AI-generated fake news created by ChatGPT. The RoBERTa
and BERT models' excellent performance indicates that these models can play a
critical role in the fight against misinformation.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.07933
|
2023-06-09T15:44:41Z
|
Understanding Telecom Language Through Large Language Models
|
[
"Lina Bariah",
"Hang Zou",
"Qiyang Zhao",
"Belkacem Mouhouche",
"Faouzi Bader",
"Merouane Debbah"
] |
The recent progress of artificial intelligence (AI) opens up new frontiers in
the possibility of automating many tasks involved in Telecom networks design,
implementation, and deployment. This has been further pushed forward with the
evolution of generative artificial intelligence (AI), including the emergence
of large language models (LLMs), which is believed to be the cornerstone toward
realizing self-governed, interactive AI agents. Motivated by this, in this
paper, we aim to adapt the paradigm of LLMs to the Telecom domain. In
particular, we fine-tune several LLMs including BERT, distilled BERT, RoBERTa
and GPT-2, to the Telecom domain languages, and demonstrate a use case for
identifying the 3rd Generation Partnership Project (3GPP) standard working
groups. We consider training the selected models on 3GPP technical documents
(Tdoc) pertinent to years 2009-2019 and predict the Tdoc categories in years
2020-2023. The results demonstrate that the fine-tuned BERT and RoBERTa models
achieve 84.6% accuracy, while the GPT-2 model achieves 83% in identifying 3GPP
working groups. The distilled BERT model, with around 50% fewer parameters,
achieves performance similar to the others. This corroborates that fine-tuning
pretrained LLMs can effectively identify the categories of Telecom language. The
developed framework is a stepping stone towards realizing intent-driven and
self-evolving wireless networks from Telecom languages, and paves the way for
the implementation of generative AI in the Telecom domain.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.07941
|
2023-06-09T15:47:22Z
|
GPT-Calls: Enhancing Call Segmentation and Tagging by Generating
Synthetic Conversations via Large Language Models
|
[
"Itzik Malkiel",
"Uri Alon",
"Yakir Yehuda",
"Shahar Keren",
"Oren Barkan",
"Royi Ronen",
"Noam Koenigstein"
] |
Transcriptions of phone calls are of significant value across diverse fields,
such as sales, customer service, healthcare, and law enforcement. Nevertheless,
the analysis of these recorded conversations can be an arduous and
time-intensive process, especially when dealing with extended or multifaceted
dialogues. In this work, we propose a novel method, GPT-distilled Calls
Segmentation and Tagging (GPT-Calls), for efficient and accurate call
segmentation and topic extraction. GPT-Calls is composed of offline and online
phases. The offline phase is applied once to a given list of topics and
involves generating a distribution of synthetic sentences for each topic using
a GPT model and extracting anchor vectors. The online phase is applied to every
call separately and scores the similarity between the transcripted conversation
and the topic anchors found in the offline phase. Then, time domain analysis is
applied to the similarity scores to group utterances into segments and tag them
with topics. The proposed paradigm provides an accurate and efficient method
for call segmentation and topic extraction that does not require labeled data,
thus making it a versatile approach applicable to various domains. Our
algorithm operates in production under Dynamics 365 Sales Conversation
Intelligence, and our research is based on real sales conversations gathered
from various Dynamics 365 Sales tenants.
|
[
"cs.CL",
"cs.LG"
] | true |
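The online phase described in the GPT-Calls abstract scores each utterance against per-topic anchor vectors. A bare-bones sketch of that similarity-and-tagging step follows; the 2-D "embeddings", topic names, and threshold are all illustrative assumptions, not values from the paper:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def tag_utterances(utterance_vecs, topic_anchors, threshold=0.8):
    """Assign each utterance its best-matching topic anchor (or None).

    topic_anchors: {topic_name: anchor_vector}, e.g. derived offline from
    embeddings of synthetic sentences generated for that topic.
    """
    tags = []
    for vec in utterance_vecs:
        scores = {t: cosine(vec, a) for t, a in topic_anchors.items()}
        best = max(scores, key=scores.get)
        tags.append(best if scores[best] >= threshold else None)
    return tags

# Toy 2-D "embeddings" (real ones would come from a sentence encoder)
anchors = {"pricing": [1.0, 0.0], "scheduling": [0.0, 1.0]}
utterances = [[0.9, 0.1], [0.2, 0.8], [0.1, 0.1]]
print(tag_utterances(utterances, anchors))  # → ['pricing', 'scheduling', None]
```

In the full method, a time-domain analysis over these per-utterance scores then groups consecutive utterances into tagged segments; that step is omitted here.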
2306.05617
|
2023-06-09T01:43:41Z
|
Low-rank Adaptation Method for Wav2vec2-based Fake Audio Detection
|
[
"Chenglong Wang",
"Jiangyan Yi",
"Xiaohui Zhang",
"Jianhua Tao",
"Le Xu",
"Ruibo Fu"
] |
Self-supervised speech models are a rapidly developing research topic in fake
audio detection. Many pre-trained models can serve as feature extractors,
learning richer and higher-level speech features. However, when fine-tuning
pre-trained models, there is often a challenge of excessively long training
times and high memory consumption, and complete fine-tuning is also very
expensive. To alleviate this problem, we apply low-rank adaptation (LoRA) to the
wav2vec2 model, freezing the pre-trained model weights and injecting a
trainable rank-decomposition matrix into each layer of the transformer
architecture, greatly reducing the number of trainable parameters for
downstream tasks. Compared with fine-tuning the wav2vec2 model, which contains
317M trainable parameters, with Adam, LoRA achieved similar performance while
reducing the number of trainable parameters by a factor of 198.
|
[
"cs.SD",
"cs.CL",
"eess.AS"
] | false |
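The LoRA idea used above (freeze the pre-trained weight W and learn only a rank-r update B·A) can be sketched without any framework. The dimensions and rank below are tiny illustrative values; the 198x reduction reported in the abstract comes from applying the same idea at wav2vec2 scale:

```python
def matmul(A, B):
    """Naive matrix product for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def madd(A, B):
    """Element-wise sum of two matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

class LoRALinear:
    """Effective weight W + B A: W is frozen, only A (r x d_in) and
    B (d_out x r) are trainable, so trainable parameters drop from
    d_out*d_in to r*(d_in + d_out)."""
    def __init__(self, W, r):
        d_out, d_in = len(W), len(W[0])
        self.W = W                                   # frozen pre-trained weight
        self.A = [[0.0] * d_in for _ in range(r)]    # trainable
        self.B = [[0.0] * r for _ in range(d_out)]   # trainable, zero-init so
                                                     # the adapter starts as a no-op

    def effective_weight(self):
        return madd(self.W, matmul(self.B, self.A))

d_in, d_out, r = 8, 8, 2
W = [[float(i == j) for j in range(d_in)] for i in range(d_out)]  # identity
layer = LoRALinear(W, r)
full = d_out * d_in              # parameters of full fine-tuning
lora = r * (d_in + d_out)        # parameters LoRA actually trains
print(full, lora, full / lora)   # → 64 32 2.0
```

At these toy sizes the saving is only 2x; the ratio grows with layer width, which is why it reaches roughly two orders of magnitude on a 317M-parameter model.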
2306.05652
|
2023-06-09T03:37:49Z
|
Privacy Aware Question-Answering System for Online Mental Health Risk
Assessment
|
[
"Prateek Chhikara",
"Ujjwal Pasupulety",
"John Marshall",
"Dhiraj Chaurasia",
"Shweta Kumari"
] |
Social media platforms have enabled individuals suffering from mental
illnesses to share their lived experiences and find the online support
necessary to cope. However, many users fail to receive genuine clinical
support, thus exacerbating their symptoms. Screening users based on what they
post online can aid providers in administering targeted healthcare and minimize
false positives. Pre-trained Language Models (LMs) can assess users' social
media data and classify them in terms of their mental health risk. We propose a
Question-Answering (QA) approach to assess mental health risk using the
Unified-QA model on two large mental health datasets. To protect user data, we
extend Unified-QA by anonymizing the model training process using differential
privacy. Our results demonstrate the effectiveness of modeling risk assessment
as a QA task, specifically for mental health use cases. Furthermore, the
model's performance decreases by less than 1% with the inclusion of
differential privacy. The proposed system's performance is indicative of a
promising research direction that will lead to the development of privacy-aware
diagnostic systems.
|
[
"cs.CL",
"cs.AI",
"cs.HC"
] | false |
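The differential-privacy guarantee invoked in the abstract above rests on adding calibrated noise. The sketch below shows the classic Laplace mechanism for a single statistic, purely to illustrate the concept; the paper applies differential privacy to the model training process itself, not to released statistics:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    Smaller epsilon means stronger privacy and larger expected noise.
    """
    scale = sensitivity / epsilon
    u = rng.random() - 0.5
    # Inverse-CDF sampling of Laplace(0, scale)
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return true_value + noise

# Hypothetical released count with privacy budget epsilon = 0.5
print(laplace_mechanism(42.0, 1.0, 0.5, random.Random(0)))
```

The abstract's finding that accuracy drops by less than 1% under differential privacy reflects this trade-off: enough noise for the privacy guarantee while barely perturbing the learned model.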
2306.05709
|
2023-06-09T07:04:56Z
|
Learning Emotional Representations from Imbalanced Speech Data for
Speech Emotion Recognition and Emotional Text-to-Speech
|
[
"Shijun Wang",
"Jón Guðnason",
"Damian Borth"
] |
Effective speech emotional representations play a key role in Speech Emotion
Recognition (SER) and Emotional Text-To-Speech (TTS) tasks. However, emotional
speech samples are more difficult and expensive to acquire compared with
Neutral style speech, which causes one issue that most related works
unfortunately neglect: imbalanced datasets. Models might overfit to the
majority Neutral class and fail to produce robust and effective emotional
representations. In this paper, we propose an Emotion Extractor to address this
issue. We use augmentation approaches to train the model and enable it to
extract effective and generalizable emotional representations from imbalanced
datasets. Our empirical results show that (1) for the SER task, the proposed
Emotion Extractor surpasses the state-of-the-art baseline on three imbalanced
datasets; (2) the produced representations from our Emotion Extractor benefit
the TTS model, and enable it to synthesize more expressive speech.
|
[
"eess.AS",
"cs.CL",
"cs.SD"
] | false |
2306.05803
|
2023-06-09T10:40:22Z
|
Causality between Sentiment and Cryptocurrency Prices
|
[
"Lubdhak Mondal",
"Udeshya Raj",
"Abinandhan S",
"Began Gowsik S",
"Sarwesh P",
"Abhijeet Chandra"
] |
This study investigates the relationship between narratives conveyed through
microblogging platforms, namely Twitter, and the value of crypto assets. Our
study provides a unique technique to build narratives about cryptocurrency by
combining topic modelling of short texts with sentiment analysis. First, we
used an unsupervised machine learning algorithm to discover the latent topics
within the massive and noisy textual data from Twitter, and then we revealed
4-5 cryptocurrency-related narratives, including financial investment,
technological advancement related to crypto, financial and political
regulations, crypto assets, and media coverage. In a number of situations, we
noticed a strong link between our narratives and crypto prices. Our work
connects the most recent innovation in economics, Narrative Economics, to a new
area of study that combines topic modelling and sentiment analysis to relate
consumer behaviour to narratives.
|
[
"q-fin.CP",
"cs.CL",
"cs.LG",
"I.2.7"
] | false |
2306.05861
|
2023-06-09T12:52:01Z
|
Efficient Encoder-Decoder and Dual-Path Conformer for Comprehensive
Feature Learning in Speech Enhancement
|
[
"Junyu Wang"
] |
Current speech enhancement (SE) research has largely neglected channel
attention and spatial attention, and encoder-decoder architecture-based
networks have not adequately considered how to provide efficient inputs to the
intermediate enhancement layer. To address these issues, this paper proposes a
time-frequency (T-F) domain SE network (DPCFCS-Net) that incorporates improved
densely connected blocks, dual-path modules, convolution-augmented transformers
(conformers), channel attention, and spatial attention. Compared with previous
models, our proposed model has a more efficient encoder-decoder and can learn
comprehensive features. Experimental results on the VCTK+DEMAND dataset
demonstrate that our method outperforms existing techniques in SE performance.
Furthermore, the improved densely connected block and the two-dimensional
attention module developed in this work are highly adaptable and easily
integrated into existing networks.
|
[
"eess.AS",
"cs.CL",
"cs.SD"
] | false |
2306.05887
|
2023-06-09T13:30:27Z
|
An Efficient Speech Separation Network Based on Recurrent Fusion Dilated
Convolution and Channel Attention
|
[
"Junyu Wang"
] |
We present an efficient speech separation neural network, ARFDCN, which
combines dilated convolutions, multi-scale fusion (MSF), and channel attention
to overcome the limited receptive field of convolution-based networks and the
high computational cost of transformer-based networks. The suggested network
architecture is encoder-decoder based. By using dilated convolutions with
gradually increasing dilation value to learn local and global features and
fusing them at adjacent stages, the model can learn rich feature content.
Meanwhile, by adding channel attention modules to the network, the model can
extract channel weights, learn more important features, and thus improve its
expressive power and robustness. Experimental results indicate that the model
achieves a decent balance between performance and computational efficiency,
making it a promising alternative to current mainstream models for practical
applications.
|
[
"eess.AS",
"cs.CL",
"cs.SD"
] | false |
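The abstract above relies on dilated convolutions with gradually increasing dilation to enlarge the receptive field cheaply. The growth is easy to quantify; the kernel size and dilation schedule below are generic assumptions, not the ARFDCN configuration:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of stacked 1-D dilated convolutions (stride 1).

    Each layer with dilation d adds (kernel_size - 1) * d positions.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Illustrative: kernel 3 with dilation doubling per layer covers 31
# time steps in only 4 layers, versus 9 steps without dilation.
print(receptive_field(3, [1, 2, 4, 8]))  # → 31
print(receptive_field(3, [1, 1, 1, 1]))  # → 9
```

This exponential coverage per layer is what lets convolution-based separators approach the global context of transformers at a fraction of the compute.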
2306.06031
|
2023-06-09T16:52:00Z
|
FinGPT: Open-Source Financial Large Language Models
|
[
"Hongyang Yang",
"Xiao-Yang Liu",
"Christina Dan Wang"
] |
Large language models (LLMs) have shown the potential to revolutionize
natural language processing tasks in diverse domains, sparking great interest
in finance. Accessing high-quality financial data is the first challenge for
financial LLMs (FinLLMs). While proprietary models like BloombergGPT have taken
advantage of their unique data accumulation, such privileged access calls for
an open-source alternative to democratize Internet-scale financial data.
In this paper, we present an open-source large language model, FinGPT, for
the finance sector. Unlike proprietary models, FinGPT takes a data-centric
approach, providing researchers and practitioners with accessible and
transparent resources to develop their FinLLMs. We highlight the importance of
an automatic data curation pipeline and the lightweight low-rank adaptation
technique in building FinGPT. Furthermore, we showcase several potential
applications as stepping stones for users, such as robo-advising, algorithmic
trading, and low-code development. Through collaborative efforts within the
open-source AI4Finance community, FinGPT aims to stimulate innovation,
democratize FinLLMs, and unlock new opportunities in open finance. Two
associated code repos are \url{https://github.com/AI4Finance-Foundation/FinGPT}
and \url{https://github.com/AI4Finance-Foundation/FinNLP}
|
[
"q-fin.ST",
"cs.CL",
"cs.LG",
"q-fin.TR"
] | false |
2306.06086
|
2023-06-09T17:48:58Z
|
Developing Speech Processing Pipelines for Police Accountability
|
[
"Anjalie Field",
"Prateek Verma",
"Nay San",
"Jennifer L. Eberhardt",
"Dan Jurafsky"
] |
Police body-worn cameras have the potential to improve accountability and
transparency in policing. Yet in practice, they result in millions of hours of
footage that is never reviewed. We investigate the potential of large
pre-trained speech models for facilitating reviews, focusing on ASR and officer
speech detection in footage from traffic stops. Our proposed pipeline includes
training data alignment and filtering, fine-tuning with resource constraints,
and combining officer speech detection with ASR for a fully automated approach.
We find that (1) fine-tuning strongly improves ASR performance on officer
speech (WER=12-13%), (2) ASR on officer speech is much more accurate than on
community member speech (WER=43.55-49.07%), and (3) domain-specific tasks like
officer speech detection and diarization remain challenging. Our work offers
practical applications for reviewing body camera footage and general guidance
for adapting pre-trained speech models to noisy multi-speaker domains.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2306.06232
|
2023-06-09T20:07:22Z
|
Probing self-supervised speech models for phonetic and phonemic
information: a case study in aspiration
|
[
"Kinan Martin",
"Jon Gauthier",
"Canaan Breiss",
"Roger Levy"
] |
Textless self-supervised speech models have grown in capabilities in recent
years, but the nature of the linguistic information they encode has not yet
been thoroughly examined. We evaluate the extent to which these models' learned
representations align with basic representational distinctions made by humans,
focusing on a set of phonetic (low-level) and phonemic (more abstract)
contrasts instantiated in word-initial stops. We find that robust
representations of both phonetic and phonemic distinctions emerge in early
layers of these models' architectures, and are preserved in the principal
components of deeper layer representations. Our analyses suggest two sources
for this success: some can only be explained by the optimization of the models
on speech data, while some can be attributed to these models' high-dimensional
architectures. Our findings show that speech-trained HuBERT derives a low-noise
and low-dimensional subspace corresponding to abstract phonological
distinctions.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2306.07926
|
2023-06-09T08:12:27Z
|
A Theory of Unsupervised Speech Recognition
|
[
"Liming Wang",
"Mark Hasegawa-Johnson",
"Chang D. Yoo"
] |
Unsupervised speech recognition (ASR-U) is the problem of learning automatic
speech recognition (ASR) systems from unpaired speech-only and text-only
corpora. While various algorithms exist to solve this problem, a theoretical
framework for studying their properties and addressing issues such as
sensitivity to hyperparameters and training instability has been missing. In
this paper, we propose a general theoretical framework to study the properties
of ASR-U systems based on random matrix theory and the theory of neural tangent kernels.
Such a framework allows us to prove various learnability conditions and sample
complexity bounds of ASR-U. Extensive ASR-U experiments on synthetic languages
with three classes of transition graphs provide strong empirical evidence for
our theory (code available at cactuswiththoughts/UnsupASRTheory.git).
|
[
"eess.AS",
"cs.CL",
"cs.LG",
"cs.SD"
] | false |
2307.03687
|
2023-06-09T16:06:02Z
|
Leveraging text data for causal inference using electronic health
records
|
[
"Reagan Mozer",
"Aaron R. Kaufman",
"Leo A. Celi",
"Luke Miratrix"
] |
Text is a ubiquitous component of medical data, containing valuable
information about patient characteristics and care that are often missing from
structured chart data. Despite this richness, it is rarely used in clinical
research, owing partly to its complexity. Using a large database of patient
records and treatment histories accompanied by extensive notes by attendant
physicians and nurses, we show how text data can be used to support causal
inference with electronic health data in all stages, from conception and design
to analysis and interpretation, with minimal additional effort. We focus on
studies using matching for causal inference. We augment a classic matching
analysis by incorporating text in three ways: by using text to supplement a
multiple imputation procedure, we improve the fidelity of imputed values to
handle missing data; by incorporating text in the matching stage, we strengthen
the plausibility of the matching procedure; and by conditioning on text, we can
estimate easily interpretable text-based heterogeneous treatment effects that
may be stronger than those found across categories of structured covariates.
Using these techniques, we hope to expand the scope of secondary analysis of
clinical data to domains where quantitative data is of poor quality or
nonexistent, but where text is available, such as in developing countries.
|
[
"cs.CL",
"stat.AP",
"stat.ME"
] | false |
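The matching stage that the abstract above augments with text can be illustrated with a bare-bones 1:1 greedy nearest-neighbor matcher on covariate vectors. The covariates and distance choice are illustrative assumptions; the paper's analyses use richer, text-augmented matching:

```python
def nearest_neighbor_match(treated, control):
    """1:1 greedy nearest-neighbor matching on covariate vectors.

    Each treated unit is paired with its closest unused control unit
    by Euclidean distance; returns (treated_idx, control_idx) pairs.
    """
    used = set()
    pairs = []
    for i, t in enumerate(treated):
        best_j, best_d = None, float("inf")
        for j, c in enumerate(control):
            if j in used:
                continue
            d = sum((a - b) ** 2 for a, b in zip(t, c)) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        used.add(best_j)
        pairs.append((i, best_j))
    return pairs

# Toy 2-D covariates (illustrative; real studies match on many
# structured covariates, plus text-derived features as proposed above)
treated = [[1.0, 0.0], [0.0, 1.0]]
control = [[0.9, 0.1], [2.0, 2.0], [0.1, 1.1]]
print(nearest_neighbor_match(treated, control))  # → [(0, 0), (1, 2)]
```

Incorporating text, as the paper proposes, amounts to enriching the covariate vectors (or imputing their missing entries) before this pairing step runs.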
2306.05715
|
2023-06-09T07:19:43Z
|
Exploring the Responses of Large Language Models to Beginner
Programmers' Help Requests
|
[
"Arto Hellas",
"Juho Leinonen",
"Sami Sarsa",
"Charles Koutcheme",
"Lilja Kujanpää",
"Juha Sorva"
] |
Background and Context: Over the past year, large language models (LLMs) have
taken the world by storm. In computing education, like in other walks of life,
many opportunities and threats have emerged as a consequence.
Objectives: In this article, we explore such opportunities and threats in a
specific area: responding to student programmers' help requests. More
specifically, we assess how good LLMs are at identifying issues in problematic
code that students request help on.
Method: We collected a sample of help requests and code from an online
programming course. We then prompted two different LLMs (OpenAI Codex and
GPT-3.5) to identify and explain the issues in the students' code and assessed
the LLM-generated answers both quantitatively and qualitatively.
Findings: GPT-3.5 outperforms Codex in most respects. Both LLMs frequently
find at least one actual issue in each student program (GPT-3.5 in 90% of the
cases). Neither LLM excels at finding all the issues (GPT-3.5 finding them 57%
of the time). False positives are common (40% chance for GPT-3.5). The advice
that the LLMs provide on the issues is often sensible. The LLMs perform better
on issues involving program logic rather than on output formatting. Model
solutions are frequently provided even when the LLM is prompted not to. LLM
responses to prompts in a non-English language are only slightly worse than
responses to English prompts.
Implications: Our results continue to highlight the utility of LLMs in
programming education. At the same time, the results highlight the
unreliability of LLMs: LLMs make some of the same mistakes that students do,
perhaps especially when formatting output as required by automated assessment
systems. Our study informs teachers interested in using LLMs as well as future
efforts to customize LLMs for the needs of programming education.
|
[
"cs.CY",
"cs.AI",
"cs.CL",
"cs.HC",
"cs.SE"
] | false |
2306.05741
|
2023-06-09T08:18:58Z
|
Challenges and Opportunities for the Design of Smart Speakers
|
[
"Tao Long",
"Lydia B. Chilton"
] |
Advances in voice technology and voice user interfaces (VUIs) -- such as
Alexa, Siri, and Google Home -- have opened up the potential for many new types
of interaction. However, despite the potential of these devices reflected by
the growing market and body of VUI research, there is a lingering sense that
the technology is still underused. In this paper, we conducted a systematic
literature review of 35 papers to identify and synthesize 127 VUI design
guidelines into five themes. Additionally, we conducted semi-structured
interviews with 15 smart speaker users to understand their use and non-use of
the technology. From the interviews, we distill four design challenges that
contribute the most to non-use. Based on their (non-)use, we identify four
opportunity spaces for designers to explore, such as focusing on information
support while multitasking (cooking, driving, childcare, etc.), incorporating
users' mental models of smart speakers, and integrating calm design
principles.
|
[
"cs.HC",
"cs.AI",
"cs.CL",
"cs.CY",
"cs.RO"
] | false |
2306.05628
|
2023-06-09T02:23:37Z
|
Quantifying the Knowledge in GNNs for Reliable Distillation into MLPs
|
[
"Lirong Wu",
"Haitao Lin",
"Yufei Huang",
"Stan Z. Li"
] |
To bridge the gaps between topology-aware Graph Neural Networks (GNNs) and
inference-efficient Multi-Layer Perceptrons (MLPs), GLNN proposes to distill
knowledge from a well-trained teacher GNN into a student MLP. Despite their
great progress, comparatively little work has been done to explore the
reliability of different knowledge points (nodes) in GNNs, especially their
roles played during distillation. In this paper, we first quantify the
knowledge reliability in GNNs by measuring the invariance of their information
entropy to noise perturbations, from which we observe that different knowledge
points (1) show different distillation speeds (temporally); (2) are
differentially distributed in the graph (spatially). To achieve reliable
distillation, we propose an effective approach, namely Knowledge-inspired
Reliable Distillation (KRD), that models the probability of each node being an
informative and reliable knowledge point, based on which we sample a set of
additional reliable knowledge points as supervision for training student MLPs.
Extensive experiments show that KRD improves over the vanilla MLPs by 12.62%
and outperforms its corresponding teacher GNNs by 2.16% averaged over 7
datasets and 3 GNN architectures.
|
[
"cs.LG"
] | false |
2306.05637
|
2023-06-09T02:47:21Z
|
On the Importance of Feature Decorrelation for Unsupervised
Representation Learning in Reinforcement Learning
|
[
"Hojoon Lee",
"Koanho Lee",
"Dongyoon Hwang",
"Hyunho Lee",
"Byungkun Lee",
"Jaegul Choo"
] |
Recently, unsupervised representation learning (URL) has improved the sample
efficiency of Reinforcement Learning (RL) by pretraining a model from a large
unlabeled dataset. The underlying principle of these methods is to learn
temporally predictive representations by predicting future states in the latent
space. However, an important challenge of this approach is the representational
collapse, where the subspace of the latent representations collapses into a
low-dimensional manifold. To address this issue, we propose a novel URL
framework that causally predicts future states while increasing the dimension
of the latent manifold by decorrelating the features in the latent space.
Through extensive empirical studies, we demonstrate that our framework
effectively learns predictive representations without collapse, which
significantly improves the sample efficiency of state-of-the-art URL methods on
the Atari 100k benchmark. The code is available at
https://github.com/dojeon-ai/SimTPR.
|
[
"cs.LG"
] | false |
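The feature decorrelation central to the abstract above is commonly enforced by penalizing off-diagonal entries of the latent covariance matrix. A generic sketch of such a penalty follows; it illustrates the idea, not SimTPR's exact objective:

```python
def decorrelation_loss(features):
    """Mean squared off-diagonal entry of the feature covariance matrix.

    features: list of feature vectors (one per sample in the batch).
    Minimizing this pushes latent dimensions toward decorrelation,
    counteracting collapse onto a low-dimensional manifold.
    """
    n = len(features)          # batch size
    d = len(features[0])       # feature dimension
    means = [sum(f[k] for f in features) / n for k in range(d)]
    centered = [[f[k] - means[k] for k in range(d)] for f in features]
    loss, count = 0.0, 0
    for i in range(d):
        for j in range(d):
            if i == j:
                continue
            cov = sum(c[i] * c[j] for c in centered) / n
            loss += cov * cov
            count += 1
    return loss / count

# Perfectly correlated features (collapsed) incur a high penalty ...
print(decorrelation_loss([[1.0, 1.0], [-1.0, -1.0]]))   # → 1.0
# ... while decorrelated features incur none.
print(decorrelation_loss([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]))  # → 0.0
```

In practice a term like this is added to the temporal-prediction loss so the latent manifold's dimension grows rather than collapses during pretraining.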
2306.05769
|
2023-06-09T09:17:51Z
|
Self-Paced Absolute Learning Progress as a Regularized Approach to
Curriculum Learning
|
[
"Tobias Niehues",
"Ulla Scheler",
"Pascal Klink"
] |
The usability of Reinforcement Learning is restricted by the large
computation times it requires. Curriculum Reinforcement Learning speeds up
learning by defining a helpful order in which an agent encounters tasks, i.e.
from simple to hard. Curricula based on Absolute Learning Progress (ALP) have
proven successful in different environments, but waste computation on repeating
already learned behaviour in new tasks. We solve this problem by introducing a
new regularization method based on Self-Paced (Deep) Learning, called
Self-Paced Absolute Learning Progress (SPALP). We evaluate our method in three
different environments. Our method achieves performance comparable to original
ALP in all cases, and reaches it quicker than ALP in two of them. We illustrate
possibilities to further improve the efficiency and performance of SPALP.
|
[
"cs.LG"
] | false |
2306.05786
|
2023-06-09T09:57:18Z
|
Two-level histograms for dealing with outliers and heavy tail
distributions
|
[
"Marc Boullé"
] |
Histograms are among the most popular methods used in exploratory analysis to
summarize univariate distributions. In particular, irregular histograms are
good non-parametric density estimators that require very few parameters: the
number of bins with their lengths and frequencies. Many approaches have been
proposed in the literature to infer these parameters, either assuming
hypotheses about the underlying data distributions or exploiting a model
selection approach. In this paper, we focus on the G-Enum histogram method,
which exploits the Minimum Description Length (MDL) principle to build
histograms without any user parameter and achieves state-of-the-art performance
w.r.t. accuracy, parsimony and computation time. We investigate the limits of
this method in the case of outliers or heavy-tailed distributions. We suggest a
two-level heuristic to deal with such cases. The first level exploits a
logarithmic transformation of the data to split the data set into a list of
data subsets with a controlled range of values. The second level builds a
sub-histogram for each data subset and aggregates them to obtain a complete
histogram. Extensive experiments show the benefits of the approach.
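The first level of the two-level heuristic described above could be sketched as follows. This is an illustrative guess at the range-splitting idea only, not the paper's G-Enum implementation; bucketing by order of magnitude is one natural reading of "a controlled range of values", and the function name is invented:

```python
from math import log10, floor

def split_by_magnitude(data):
    """First level of a two-level histogram: group values by order of
    magnitude so each subset spans a controlled range of values.
    Zeros get their own bucket; negatives are kept apart from positives."""
    buckets = {}
    for x in data:
        key = "zero" if x == 0 else (x < 0, floor(log10(abs(x))))
        buckets.setdefault(key, []).append(x)
    return buckets

data = [0.002, 0.003, 3.0, 4.5, 5000.0, -2.0, 0]
# 0.002 and 0.003 share one bucket; 3.0 and 4.5 another; 5000.0, -2.0 and 0
# each land in buckets of their own.
for key, subset in sorted(split_by_magnitude(data).items(), key=str):
    print(key, subset)
```

The second level would then build an MDL-selected sub-histogram per bucket and concatenate them.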
|
[
"cs.LG"
] | false |
2306.05810
|
2023-06-09T10:52:39Z
|
Explaining Reinforcement Learning with Shapley Values
|
[
"Daniel Beechey",
"Thomas M. S. Smith",
"Özgür Şimşek"
] |
For reinforcement learning systems to be widely adopted, their users must
understand and trust them. We present a theoretical analysis of explaining
reinforcement learning using Shapley values, following a principled approach
from game theory for identifying the contribution of individual players to the
outcome of a cooperative game. We call this general framework Shapley Values
for Explaining Reinforcement Learning (SVERL). Our analysis exposes the
limitations of earlier uses of Shapley values in reinforcement learning. We
then develop an approach that uses Shapley values to explain agent performance.
In a variety of domains, SVERL produces meaningful explanations that match and
supplement human intuition.
|
[
"cs.LG"
] | false |
2306.05991
|
2023-06-09T15:59:39Z
|
Approximate information state based convergence analysis of recurrent
Q-learning
|
[
"Erfan Seyedsalehi",
"Nima Akbarzadeh",
"Amit Sinha",
"Aditya Mahajan"
] |
In spite of the large literature on reinforcement learning (RL) algorithms
for partially observable Markov decision processes (POMDPs), a complete
theoretical understanding is still lacking. In a partially observable setting,
the history of data available to the agent increases over time so most
practical algorithms either truncate the history to a finite window or compress
it using a recurrent neural network, leading to an agent state that is
non-Markovian. In this paper, it is shown that in spite of the lack of the
Markov property, recurrent Q-learning (RQL) converges in the tabular setting.
Moreover, it is shown that the quality of the converged limit depends on the
quality of the representation which is quantified in terms of what is known as
an approximate information state (AIS). Based on this characterization of the
approximation error, a variant of RQL with AIS losses is presented. This
variant performs better than a strong baseline for RQL that does not use AIS
losses. It is demonstrated that there is a strong correlation between the
performance of RQL over time and the loss associated with the AIS
representation.
|
[
"cs.LG"
] | false |
2306.06063
|
2023-06-09T17:38:22Z
|
Virtual Node Tuning for Few-shot Node Classification
|
[
"Zhen Tan",
"Ruocheng Guo",
"Kaize Ding",
"Huan Liu"
] |
Few-shot Node Classification (FSNC) is a challenge in graph representation
learning where only a few labeled nodes per class are available for training.
To tackle this issue, meta-learning has been proposed to transfer structural
knowledge from base classes with abundant labels to target novel classes.
However, existing solutions become ineffective or inapplicable when base
classes have no or limited labeled nodes. To address this challenge, we propose
an innovative method dubbed Virtual Node Tuning (VNT). Our approach utilizes a
pretrained graph transformer as the encoder and injects virtual nodes as soft
prompts in the embedding space, which can be optimized with few-shot labels in
novel classes to modulate node embeddings for each specific FSNC task. A unique
feature of VNT is that, by incorporating a Graph-based Pseudo Prompt Evolution
(GPPE) module, VNT-GPPE can handle scenarios with sparse labels in base
classes. Experimental results on four datasets demonstrate the superiority of
the proposed approach in addressing FSNC with unlabeled or sparsely labeled
base classes, outperforming existing state-of-the-art methods and even fully
supervised baselines.
|
[
"cs.LG"
] | false |
2306.06194
|
2023-06-09T18:48:39Z
|
Public Transit Demand Prediction During Highly Dynamic Conditions: A
Meta-Analysis of State-of-the-Art Models and Open-Source Benchmarking
Infrastructure
|
[
"Juan D. Caicedo",
"Marta C. González",
"Joan L. Walker"
] |
Real-time demand prediction is a critical input for dynamic bus routing.
While many researchers have developed numerous complex methods to predict
short-term transit demand, the applications have been limited to short, stable
time frames and a few stations. How these methods perform in highly dynamic
environments has not been studied, nor has their performance been
systematically compared. We built an open-source infrastructure with five
common methodologies, including econometric and deep learning approaches, and
assessed their performance under stable and highly dynamic conditions. We used
a time series from smartcard data to predict demand for the following day for
the BRT system in Bogota, Colombia. The dynamic conditions in the time series
include a month-long protest and the COVID-19 pandemic. Both conditions
triggered drastic shifts in demand. The results reveal that most tested models
perform similarly in stable conditions, with MAAPE varying from 0.08 to 0.12.
The benchmark demonstrated that all models performed significantly worse in
both dynamic conditions compared to the stable conditions. In the month-long
protest, the increased MAAPE ranged from 0.14 to 0.24. Similarly, during the
COVID-19 pandemic, the increased MAAPE ranged from 0.12 to 0.82. Notably, in
the COVID-19 pandemic condition, an LSTM model with adaptive training and a
multi-output design outperformed other models, adapting faster to disruptions.
The prediction error stabilized within approximately 1.5 months, whereas other
models continued to exhibit higher error rates even a year after the start of
the pandemic. The aim of this open-source codebase infrastructure is to lower
the barrier for other researchers to replicate and reproduce models, facilitate
a collective effort within the research community to improve the benchmarking
process and accelerate the advancement of short-term ridership prediction
models.
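The MAAPE figures quoted in the abstract above are, assuming the standard definition (mean arctangent absolute percentage error), the mean of arctan(|(actual - forecast)/actual|), which stays bounded where plain MAPE explodes for near-zero demand. A minimal sketch under that assumption:

```python
from math import atan

def maape(actual, forecast):
    """Mean Arctangent Absolute Percentage Error, bounded in [0, pi/2].

    Assumes all actual values are nonzero.
    """
    errors = [atan(abs((a - f) / a)) for a, f in zip(actual, forecast)]
    return sum(errors) / len(errors)

# Small demand-forecast example: modest errors give a small MAAPE.
print(maape([100, 200, 400], [110, 180, 400]))
```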
|
[
"cs.LG"
] | false |
2306.05622
|
2023-06-09T01:53:56Z
|
Improving Quantum Circuit Synthesis with Machine Learning
|
[
"Mathias Weiden",
"Ed Younis",
"Justin Kalloor",
"John Kubiatowicz",
"Costin Iancu"
] |
In the Noisy Intermediate Scale Quantum (NISQ) era, finding implementations
of quantum algorithms that minimize the number of expensive and error prone
multi-qubit gates is vital to ensure computations produce meaningful outputs.
Unitary synthesis, the process of finding a quantum circuit that implements
some target unitary matrix, is able to solve this problem optimally in many
cases. However, current bottom-up unitary synthesis algorithms are limited by
their exponentially growing run times. We show how applying machine learning to
unitary datasets permits drastic speedups for synthesis algorithms. This paper
presents QSeed, a seeded synthesis algorithm that employs a learned model to
quickly propose resource efficient circuit implementations of unitaries. QSeed
maintains low gate counts and offers a speedup of $3.7\times$ in synthesis time
over the state of the art for a 64 qubit modular exponentiation circuit, a core
component in Shor's factoring algorithm. QSeed's performance improvements also
generalize to families of circuits not seen during the training process.
|
[
"quant-ph",
"cs.LG"
] | false |
2306.05641
|
2023-06-09T03:00:34Z
|
Revisiting Permutation Symmetry for Merging Models between Different
Datasets
|
[
"Masanori Yamada",
"Tomoya Yamashita",
"Shin'ya Yamaguchi",
"Daiki Chijiwa"
] |
Model merging is a new approach to creating a new model by combining the
weights of different trained models. Previous studies report that model merging
works well for models trained on a single dataset with different random seeds,
while model merging between different datasets is difficult. Merging knowledge
from different datasets has practical significance, but it has not been well
investigated. In this paper, we investigate the properties of merging models
between different datasets. Through theoretical and empirical analyses, we find
that the accuracy of the merged model decreases more significantly as the
datasets diverge more and that the different loss landscapes for each dataset
make model merging between different datasets difficult. We also show that
merged models require datasets for merging in order to achieve a high accuracy.
Furthermore, we show that condensed datasets created by dataset condensation
can be used as substitutes for the original datasets when merging models. We
conduct experiments for model merging between different datasets. When merging
between MNIST and Fashion-MNIST models, the accuracy significantly improves by
28% using the dataset and 25% using the condensed dataset compared with not
using the dataset.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.05655
|
2023-06-09T03:51:45Z
|
Communication-Efficient Zeroth-Order Distributed Online Optimization:
Algorithm, Theory, and Applications
|
[
"Ege C. Kaya",
"M. Berk Sahin",
"Abolfazl Hashemi"
] |
This paper focuses on a multi-agent zeroth-order online optimization problem
in a federated learning setting for target tracking. The agents only sense
their current distances to their targets and aim to maintain a minimum safe
distance from each other to prevent collisions. The coordination among the
agents and dissemination of collision-prevention information is managed by a
central server using the federated learning paradigm. The proposed formulation
leads to an instance of distributed online nonconvex optimization problem that
is solved via a group of communication-constrained agents. To deal with the
communication limitations of the agents, an error feedback-based compression
scheme is utilized for agent-to-server communication. The proposed algorithm is
analyzed theoretically for the general class of distributed online nonconvex
optimization problems. We provide non-asymptotic convergence rates that show
the dominant term is independent of the characteristics of the compression
scheme. Our theoretical results feature a new approach that employs
significantly more relaxed assumptions in comparison to standard literature.
The performance of the proposed solution is further analyzed numerically in
terms of tracking errors and collisions between agents in two relevant
applications.
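Error-feedback compression, as mentioned in the abstract above, is a standard trick: the residual that compression discards is stored locally and added back before the next transmission. A minimal sketch with a hypothetical top-k compressor (class and function names are illustrative, not from the paper):

```python
def topk_compress(vec, k):
    """Keep only the k largest-magnitude entries (a common biased compressor)."""
    idx = sorted(range(len(vec)), key=lambda i: -abs(vec[i]))[:k]
    out = [0.0] * len(vec)
    for i in idx:
        out[i] = vec[i]
    return out

class ErrorFeedback:
    """Error feedback for agent-to-server communication: the compression
    residual is remembered and folded into the next message."""
    def __init__(self, dim):
        self.memory = [0.0] * dim

    def transmit(self, grad, k):
        corrected = [g + m for g, m in zip(grad, self.memory)]
        sent = topk_compress(corrected, k)
        self.memory = [c - s for c, s in zip(corrected, sent)]
        return sent

ef = ErrorFeedback(4)
print(ef.transmit([0.5, -0.1, 0.05, 2.0], k=1))  # only the largest entry is sent
print(ef.memory)  # the rest is remembered and resent in later rounds
```

The point of the scheme is that no gradient information is permanently lost, which is what makes the convergence rate insensitive to the compressor's characteristics.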
|
[
"cs.LG",
"math.OC"
] | false |
2306.05680
|
2023-06-09T05:43:06Z
|
Emotion Detection from EEG using Transfer Learning
|
[
"Sidharth Sidharth",
"Ashish Abraham Samuel",
"Ranjana H",
"Jerrin Thomas Panachakel",
"Sana Parveen K"
] |
The detection of emotions using an Electroencephalogram (EEG) is a crucial
area in brain-computer interfaces and has valuable applications in fields such
as rehabilitation and medicine. In this study, we employed transfer learning to
overcome the challenge of limited data availability in EEG-based emotion
detection. The base model used in this study was Resnet50. Additionally, we
employed a novel feature combination in EEG-based emotion detection. The input
to the model was in the form of an image matrix, which comprised Mean Phase
Coherence (MPC) and Magnitude Squared Coherence (MSC) in the upper-triangular
and lower-triangular matrices, respectively. We further improved the technique
by incorporating features obtained from the Differential Entropy (DE) into the
diagonal, which previously held little to no useful information for classifying
emotions. The dataset used in this study, SEED EEG (62 channel EEG), comprises
three classes (Positive, Neutral, and Negative). We calculated both
subject-independent and subject-dependent accuracy. The subject-dependent
accuracy was obtained using a 10-fold cross-validation method and was 93.1%,
while the subject-independent classification was performed by employing the
leave-one-subject-out (LOSO) strategy. The accuracy obtained in
subject-independent classification was 71.6%. Both of these accuracies are at
least twice the chance accuracy of classifying 3 classes. The study
found the use of MSC and MPC promising for EEG-based emotion
classification. The future scope of this work includes the use of data
augmentation techniques, enhanced classifiers, and better features for emotion
classification.
|
[
"eess.SP",
"cs.LG"
] | false |
2306.05698
|
2023-06-09T06:35:14Z
|
JABBERWOCK: A Tool for WebAssembly Dataset Generation and Its
Application to Malicious Website Detection
|
[
"Chika Komiya",
"Naoto Yanai",
"Kyosuke Yamashita",
"Shingo Okamura"
] |
Machine learning is often used for malicious website detection, but an
approach incorporating WebAssembly as a feature has not been explored due to a
limited number of samples, to the best of our knowledge. In this paper, we
propose JABBERWOCK (JAvascript-Based Binary EncodeR by WebAssembly Optimization
paCKer), a tool to generate WebAssembly datasets in a pseudo fashion via
JavaScript. Loosely speaking, JABBERWOCK automatically gathers JavaScript code
in the real world, convert them into WebAssembly, and then outputs vectors of
the WebAssembly as samples for malicious website detection. We also conduct
experimental evaluations of JABBERWOCK in terms of the processing time for
dataset generation, comparison of the generated samples with actual WebAssembly
samples gathered from the Internet, and an application for malicious website
detection. Regarding the processing time, we show that JABBERWOCK can construct
a dataset in 4.5 seconds per sample for any number of samples. Next, comparing
10,000 samples output by JABBERWOCK with 168 gathered WebAssembly samples, we
find that the samples generated by JABBERWOCK are similar to those in the
real world. We then show that JABBERWOCK enables malicious website detection
with a 99% F1-score, which we attribute to the clear gap JABBERWOCK creates
between benign and malicious samples. We also confirm that
JABBERWOCK can be combined with an existing malicious website detection tool to
improve F1-scores. JABBERWOCK is publicly available via GitHub
(https://github.com/c-chocolate/Jabberwock).
|
[
"cs.CR",
"cs.LG"
] | false |
2306.05747
|
2023-06-09T08:24:56Z
|
An End-to-End Reinforcement Learning Approach for Job-Shop Scheduling
Problems Based on Constraint Programming
|
[
"Pierre Tassel",
"Martin Gebser",
"Konstantin Schekotihin"
] |
Constraint Programming (CP) is a declarative programming paradigm that allows
for modeling and solving combinatorial optimization problems, such as the
Job-Shop Scheduling Problem (JSSP). While CP solvers manage to find optimal or
near-optimal solutions for small instances, they do not scale well to large
ones, i.e., they require long computation times or yield low-quality solutions.
Therefore, real-world scheduling applications often resort to fast,
handcrafted, priority-based dispatching heuristics to find a good initial
solution and then refine it using optimization methods.
This paper proposes a novel end-to-end approach to solving scheduling
problems by means of CP and Reinforcement Learning (RL). In contrast to
previous RL methods, tailored for a given problem by including procedural
simulation algorithms, complex feature engineering, or handcrafted reward
functions, our neural-network architecture and training algorithm merely
require a generic CP encoding of some scheduling problem along with a set of
small instances. Our approach leverages existing CP solvers to train an agent
learning a Priority Dispatching Rule (PDR) that generalizes well to large
instances, even from separate datasets. We evaluate our method on seven JSSP
datasets from the literature, showing its ability to find higher-quality
solutions for very large instances than obtained by static PDRs and by a CP
solver within the same time limit.
|
[
"cs.AI",
"cs.LG"
] | false |
2306.05760
|
2023-06-09T08:54:20Z
|
Efficient GNN Explanation via Learning Removal-based Attribution
|
[
"Yao Rong",
"Guanchu Wang",
"Qizhang Feng",
"Ninghao Liu",
"Zirui Liu",
"Enkelejda Kasneci",
"Xia Hu"
] |
As Graph Neural Networks (GNNs) have been widely used in real-world
applications, model explanations are required not only by users but also by
legal regulations. However, simultaneously achieving high fidelity and low
computational costs in generating explanations has been a challenge for current
methods. In this work, we propose a framework of GNN explanation named LeArn
Removal-based Attribution (LARA) to address this problem. Specifically, we
introduce removal-based attribution and demonstrate its substantiated link to
interpretability fidelity theoretically and experimentally. The explainer in
LARA learns to generate removal-based attribution which enables providing
explanations with high fidelity. A strategy of subgraph sampling is designed in
LARA to improve the scalability of the training process. In the deployment,
LARA can efficiently generate the explanation through a feed-forward pass. We
benchmark our approach with other state-of-the-art GNN explanation methods on
six datasets. Results highlight the effectiveness of our framework regarding
both efficiency and fidelity. In particular, LARA is 3.5 times faster and
achieves higher fidelity than the state-of-the-art method on the large dataset
ogbn-arxiv (more than 160K nodes and 1M edges), showing its great potential in
real-world applications. Our source code is available at
https://anonymous.4open.science/r/LARA-10D8/README.md.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.05776
|
2023-06-09T09:42:21Z
|
Weight Re-Mapping for Variational Quantum Algorithms
|
[
"Michael Kölle",
"Alessandro Giovagnoli",
"Jonas Stein",
"Maximilian Balthasar Mansky",
"Julian Hager",
"Tobias Rohe",
"Robert Müller",
"Claudia Linnhoff-Popien"
] |
Inspired by the remarkable success of artificial neural networks across a
broad spectrum of AI tasks, variational quantum circuits (VQCs) have recently
seen an upsurge in quantum machine learning applications. The promising
outcomes shown by VQCs, such as improved generalization and reduced parameter
training requirements, are attributed to the robust algorithmic capabilities of
quantum computing. However, the current gradient-based training approaches for
VQCs do not adequately accommodate the fact that trainable parameters (or
weights) are typically used as angles in rotational gates. To address this, we
extend the concept of weight re-mapping for VQCs, as introduced by Kölle et
al. (2023). This approach unambiguously maps the weights to an interval of
length $2\pi$, mirroring data rescaling techniques in conventional machine
learning that have proven to be highly beneficial in numerous scenarios. In our
study, we employ seven distinct weight re-mapping functions to assess their
impact on eight classification datasets, using variational classifiers as a
representative example. Our results indicate that weight re-mapping can enhance
the convergence speed of the VQC. We assess the efficacy of various re-mapping
functions across all datasets and measure their influence on the VQC's average
performance. Our findings indicate that weight re-mapping not only consistently
accelerates the convergence of VQCs, regardless of the specific re-mapping
function employed, but also significantly increases accuracy in certain cases.
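As a toy illustration of the re-mapping idea described above, a tanh-based map squashes an unbounded trainable weight into an interval of length 2π so it can serve as a rotation angle. This is only one plausible choice; the seven functions actually studied in the paper are not reproduced here, and the function name is ours:

```python
from math import pi, tanh

def remap_tanh(w):
    """Map an unbounded trainable weight into [-pi, pi], so gradient updates
    of any size always land on a meaningful rotation angle."""
    return pi * tanh(w)

for w in (-10.0, -1.0, 0.0, 1.0, 10.0):
    print(w, "->", remap_tanh(w))
```

The remapped value would then be fed to a rotational gate (e.g. an RY rotation) in place of the raw weight.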
|
[
"quant-ph",
"cs.LG"
] | false |
2306.05784
|
2023-06-09T09:55:20Z
|
Quantitative Ink Analysis: Estimating the Number of Inks in Documents
through Hyperspectral Imaging
|
[
"Aneeqa Abrar",
"Hamza Iqbal"
] |
In the field of document forensics, ink analysis plays a crucial role in
determining the authenticity of legal and historic documents and detecting
forgery. Visual examination alone is insufficient for distinguishing visually
similar inks, necessitating the use of advanced scientific techniques. This
paper proposes an ink analysis technique based on hyperspectral imaging, which
enables the examination of documents in hundreds of narrowly spaced spectral
bands, revealing hidden details. The main objective of this study is to
identify the number of distinct inks used in a document. Three clustering
algorithms, namely k-means, Agglomerative, and c-means, are employed to
estimate the number of inks present. The methodology involves data extraction,
ink pixel segmentation, and ink number determination. The results demonstrate
the effectiveness of the proposed technique in identifying ink clusters and
distinguishing between different inks. The analysis of a hyperspectral cube
dataset reveals variations in spectral reflectance across different bands and
distinct spectral responses among the 12 lines, indicating the presence of
multiple inks. The clustering algorithms successfully identify ink clusters,
with k-means clustering showing superior classification performance. These
findings contribute to the development of reliable methodologies for ink
analysis using hyperspectral imaging, enhancing the
|
[
"cs.LG",
"eess.IV"
] | false |
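The k-means step in the preceding ink-analysis abstract (clustering per-pixel spectral signatures to count distinct inks) can be illustrated on toy spectra. This is a self-contained sketch with a deterministic farthest-point initialization; the band values are invented and the code is not the paper's:

```python
def dist2(p, q):
    """Squared Euclidean distance between two spectra."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=50):
    """Minimal k-means for pixel spectra (lists of per-band reflectances)."""
    # Deterministic farthest-point initialization instead of random seeding.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    for _ in range(iters):
        # Assign each spectrum to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: dist2(p, centers[j]))].append(p)
        # Recompute centers as per-band means (keep old center if cluster empty).
        centers = [[sum(band) / len(cl) for band in zip(*cl)] if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

# Two synthetic "inks" with distinct 4-band signatures.
ink_a = [[0.90 + 0.01 * i, 0.10, 0.20, 0.80] for i in range(5)]
ink_b = [[0.20 + 0.01 * i, 0.80, 0.70, 0.10] for i in range(5)]
centers, clusters = kmeans(ink_a + ink_b, k=2)
print([len(c) for c in clusters])  # the two inks are separated: [5, 5]
```

Estimating the *number* of inks would repeat this for several k and pick the best by a cluster-validity criterion.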
2306.05808
|
2023-06-09T10:47:06Z
|
RankFormer: Listwise Learning-to-Rank Using Listwide Labels
|
[
"Maarten Buyl",
"Paul Missault",
"Pierre-Antoine Sondag"
] |
Web applications where users are presented with a limited selection of items
have long employed ranking models to put the most relevant results first. Any
feedback received from users is typically assumed to reflect a relative
judgement on the utility of items, e.g. a user clicking on an item only implies
it is better than items not clicked in the same ranked list. Hence, the
objectives optimized in Learning-to-Rank (LTR) tend to be pairwise or listwise.
Yet, by only viewing feedback as relative, we neglect the user's absolute
feedback on the list's overall quality, e.g. when no items in the selection are
clicked. We thus reconsider the standard LTR paradigm and argue the benefits of
learning from this listwide signal. To this end, we propose the RankFormer as
an architecture that, with a Transformer at its core, can jointly optimize a
novel listwide assessment objective and a traditional listwise LTR objective.
We simulate implicit feedback on public datasets and observe that the
RankFormer succeeds in benefitting from listwide signals. Additionally, we
conduct experiments in e-commerce on Amazon Search data and find the RankFormer
to be superior to all baselines offline. An online experiment shows that
knowledge distillation can be used to find immediate practical use for the
RankFormer.
|
[
"cs.IR",
"cs.LG"
] | false |
2306.05815
|
2023-06-09T11:27:35Z
|
Extending Kernel PCA through Dualization: Sparsity, Robustness and Fast
Algorithms
|
[
"Francesco Tonin",
"Alex Lambert",
"Panagiotis Patrinos",
"Johan A. K. Suykens"
] |
The goal of this paper is to revisit Kernel Principal Component Analysis
(KPCA) through dualization of a difference of convex functions. This allows us to
naturally extend KPCA to multiple objective functions and leads to efficient
gradient-based algorithms avoiding the expensive SVD of the Gram matrix.
Particularly, we consider objective functions that can be written as Moreau
envelopes, demonstrating how to promote robustness and sparsity within the same
framework. The proposed method is evaluated on synthetic and real-world
benchmarks, showing significant speedup in KPCA training time as well as
highlighting the benefits in terms of robustness and sparsity.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.05865
|
2023-06-09T12:58:47Z
|
Faster Discrete Convex Function Minimization with Predictions: The
M-Convex Case
|
[
"Taihei Oki",
"Shinsaku Sakaue"
] |
Recent years have seen a growing interest in accelerating optimization
algorithms with machine-learned predictions. Sakaue and Oki (NeurIPS 2022) have
developed a general framework that warm-starts the L-convex function
minimization method with predictions, revealing the idea's usefulness for
various discrete optimization problems. In this paper, we present a framework
for using predictions to accelerate M-convex function minimization, thus
complementing previous research and extending the range of discrete
optimization algorithms that can benefit from predictions. Our framework is
particularly effective for an important subclass called laminar convex
minimization, which appears in many operations research applications. Our
methods can improve time complexity bounds upon the best worst-case results by
using predictions and even have the potential to go beyond a lower-bound result.
|
[
"cs.LG",
"cs.DS"
] | false |
2306.05905
|
2023-06-09T14:01:26Z
|
TreeDQN: Learning to minimize Branch-and-Bound tree
|
[
"Dmitry Sorokin",
"Alexander Kostin"
] |
Combinatorial optimization problems require an exhaustive search to find the
optimal solution. A convenient approach to solving combinatorial optimization
tasks in the form of Mixed Integer Linear Programs is Branch-and-Bound.
Branch-and-Bound solver splits a task into two parts dividing the domain of an
integer variable, then it solves them recursively, producing a tree of nested
sub-tasks. The efficiency of the solver depends on the branching heuristic
used to select a variable for splitting. In the present work, we propose a
reinforcement learning method that can efficiently learn the branching
heuristic. We view the variable selection task as a tree Markov Decision
Process, prove that the Bellman operator adapted for the tree Markov Decision
Process is contracting in mean, and propose a modified learning objective for
the reinforcement learning agent. Our agent requires less training data and
produces smaller trees compared to previous reinforcement learning methods.
|
[
"cs.LG",
"math.OC"
] | false |
2306.05907
|
2023-06-09T14:02:53Z
|
2DeteCT -- A large 2D expandable, trainable, experimental Computed
Tomography dataset for machine learning
|
[
"Maximilian B. Kiss",
"Sophia B. Coban",
"K. Joost Batenburg",
"Tristan van Leeuwen",
"Felix Lucka"
] |
Recent research in computational imaging largely focuses on developing
machine learning (ML) techniques for image reconstruction, which requires
large-scale training datasets consisting of measurement data and ground-truth
images. However, suitable experimental datasets for X-ray Computed Tomography
(CT) are scarce, and methods are often developed and evaluated only on
simulated data. We fill this gap by providing the community with a versatile,
open 2D fan-beam CT dataset suitable for developing ML techniques for a range
of image reconstruction tasks. To acquire it, we designed a sophisticated,
semi-automatic scan procedure that utilizes a highly-flexible laboratory X-ray
CT setup. A diverse mix of samples with high natural variability in shape and
density was scanned slice-by-slice (5000 slices in total) with high angular and
spatial resolution and three different beam characteristics: A high-fidelity, a
low-dose and a beam-hardening-inflicted mode. In addition, 750
out-of-distribution slices were scanned with sample and beam variations to
accommodate robustness and segmentation tasks. We provide raw projection data,
reference reconstructions and segmentations based on an open-source data
processing pipeline.
|
[
"eess.IV",
"cs.LG"
] | false |
2306.05915
|
2023-06-09T14:11:07Z
|
Speaker Embeddings as Individuality Proxy for Voice Stress Detection
|
[
"Zihan Wu",
"Neil Scheidwasser-Clow",
"Karl El Hajal",
"Milos Cernak"
] |
Since the mental states of the speaker modulate speech, stress introduced by
cognitive or physical loads could be detected in the voice. The existing voice
stress detection benchmark has shown that the audio embeddings extracted from
the Hybrid BYOL-S self-supervised model perform well. However, the benchmark
only evaluates performance separately on each dataset, but does not evaluate
performance across the different types of stress and different languages.
Moreover, previous studies found strong individual differences in stress
susceptibility. This paper presents the design and development of a voice stress
detection model, trained on more than 100 speakers from 9 language groups and five
different types of stress. We address individual variabilities in voice stress
analysis by adding speaker embeddings to the hybrid BYOL-S features. The
proposed method significantly improves voice stress detection performance with
an input audio length of only 3-5 seconds.
|
[
"eess.AS",
"cs.LG"
] | false |
2306.05955
|
2023-06-09T15:11:49Z
|
Path Neural Networks: Expressive and Accurate Graph Neural Networks
|
[
"Gaspard Michel",
"Giannis Nikolentzos",
"Johannes Lutzeyer",
"Michalis Vazirgiannis"
] |
Graph neural networks (GNNs) have recently become the standard approach for
learning with graph-structured data. Prior work has shed light into their
potential, but also their limitations. Unfortunately, it was shown that
standard GNNs are limited in their expressive power. These models are no more
powerful than the 1-dimensional Weisfeiler-Leman (1-WL) algorithm in terms of
distinguishing non-isomorphic graphs. In this paper, we propose Path Neural
Networks (PathNNs), a model that updates node representations by aggregating
paths emanating from nodes. We derive three different variants of the PathNN
model that aggregate single shortest paths, all shortest paths and all simple
paths of length up to K. We prove that two of these variants are strictly more
powerful than the 1-WL algorithm, and we experimentally validate our
theoretical results. We find that PathNNs can distinguish pairs of
non-isomorphic graphs that are indistinguishable by 1-WL, while our most
expressive PathNN variant can even distinguish between 3-WL indistinguishable
graphs. The different PathNN variants are also evaluated on graph
classification and graph regression datasets, where in most cases, they
outperform the baseline methods.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.06135
|
2023-06-09T01:37:32Z
|
Safety and Fairness for Content Moderation in Generative Models
|
[
"Susan Hao",
"Piyush Kumar",
"Sarah Laszlo",
"Shivani Poddar",
"Bhaktipriya Radharapu",
"Renee Shelby"
] |
With significant advances in generative AI, new technologies are rapidly
being deployed with generative components. Generative models are typically
trained on large datasets, resulting in model behaviors that can mimic the
worst of the content in the training data. Responsible deployment of generative
technologies requires content moderation strategies, such as safety input and
output filters. Here, we provide a theoretical framework for conceptualizing
responsible content moderation of text-to-image generative technologies,
including a demonstration of how to empirically measure the constructs we
enumerate. We define and distinguish the concepts of safety, fairness, and
metric equity, and enumerate example harms that can arise in each domain. We
then provide a demonstration of how the defined harms can be quantified. We
conclude with a summary of how the style of harms quantification we demonstrate
enables data-driven content moderation decisions.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.06140
|
2023-06-09T07:01:17Z
|
Null/No Information Rate (NIR): a statistical test to assess if a
classification accuracy is significant for a given problem
|
[
"Manuele Bicego",
"Antonella Mensi"
] |
In many research contexts, especially in the biomedical field, a natural
question arises after studying and developing a classification system: "Is this
accuracy high enough?", or better, "Can we say, with statistically significant
confidence, that our classification system is able to solve the problem?" To
answer this question, we can use the statistical test described in this paper,
which is in some cases referred to as NIR (No Information Rate or Null
Information Rate).
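The test itself can be sketched compactly: the NIR is the accuracy of always predicting the majority class, and the observed number of correct predictions is compared against a Binomial(n, NIR) null with a one-sided exact test. The following stdlib-only illustration is our own sketch of that idea, not code from the paper; the function name and example figures are hypothetical.

```python
from math import comb

def nir_test(n_correct: int, n_total: int, class_counts: list[int]) -> tuple[float, float]:
    """One-sided exact binomial test of observed accuracy against the
    No Information Rate (the relative frequency of the majority class)."""
    nir = max(class_counts) / sum(class_counts)  # accuracy of the majority-class guesser
    # Upper tail P(X >= n_correct) for X ~ Binomial(n_total, nir)
    p_value = sum(
        comb(n_total, k) * nir**k * (1 - nir) ** (n_total - k)
        for k in range(n_correct, n_total + 1)
    )
    return nir, p_value

# e.g. 85 correct out of 100 test samples, with a 60/40 class split (NIR = 0.6)
nir, p = nir_test(85, 100, [60, 40])
```

A small p-value here supports the claim that the classifier performs significantly better than uninformed guessing on an imbalanced problem.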
|
[
"stat.ME",
"cs.LG"
] | false |
2306.06148
|
2023-06-09T12:08:34Z
|
Artificial intelligence and radiation protection. A game changer or an
update?
|
[
"Sylvain Andresz",
"A Zéphir",
"Jeremy Bez",
"Maxime Karst",
"J. Danieli"
] |
Artificial intelligence (AI) is regarded as one of the most disruptive
technologies of the century, with countless applications. What does it mean
for radiation protection? This article describes the fundamentals of machine
learning (ML) based methods and presents the inaugural applications in
different fields of radiation protection. It is foreseen that the usage of AI
will increase in radiation protection. Consequently, this article explores some
of the benefits as well as the potential barriers and questions, including
ethical ones, that may arise. The article proposes that collaboration
between radiation protection professionals and data scientist experts can
accelerate and guide the development of the algorithms for effective scientific
and technological outcomes.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.06152
|
2023-06-09T13:51:33Z
|
EfficientBioAI: Making Bioimaging AI Models Efficient in Energy, Latency
and Representation
|
[
"Yu Zhou",
"Justin Sonneck",
"Sweta Banerjee",
"Stefanie Dörr",
"Anika Grüneboom",
"Kristina Lorenz",
"Jianxu Chen"
] |
Artificial intelligence (AI) is now widely used in bioimage analysis, but the
efficiency of AI models, such as their energy consumption and latency, can no
longer be ignored given the growing model size and complexity, as well as the
fast-growing analysis needs of modern biomedical studies. Just as we compress
large images for efficient storage and sharing, we can also compress AI models
for efficient application and deployment. In this work, we present
EfficientBioAI, a plug-and-play toolbox that compresses given bioimaging AI
models so that they run with significantly reduced energy cost and inference
time on both CPU and GPU, without compromising accuracy. In some
cases, the prediction accuracy could even increase after compression, since the
compression procedure could remove redundant information in the model
representation and therefore reduce over-fitting. From four different bioimage
analysis applications, we observed around 2-5 times speed-up during inference
and 30-80$\%$ saving in energy. Cutting the runtime of large scale bioimage
analysis from days to hours or getting a two-minutes bioimaging AI model
inference done in near real-time will open new doors for method development and
biomedical discoveries. We hope our toolbox will facilitate
resource-constrained bioimaging AI and accelerate large-scale AI-based
quantitative biological studies in an eco-friendly way, as well as stimulate
further research on the efficiency of bioimaging AI.
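The abstract does not spell out which compression techniques the toolbox applies, but post-training quantization is one standard ingredient of such pipelines. The sketch below (our own illustration, not EfficientBioAI code) shows symmetric per-tensor int8 quantization of a weight matrix, the kind of step that trades a small reconstruction error for a 4x reduction in storage and cheaper integer arithmetic.

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor int8 quantization: map the largest magnitude to 127."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)   # a toy weight matrix

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# int8 storage is 4x smaller than float32, while reconstruction error stays small
rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
```

Real toolboxes combine such quantization with calibration data, pruning, and hardware-specific runtimes; this snippet only conveys the core accuracy/size trade-off.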
|
[
"cs.LG",
"eess.IV"
] | false |
2306.06184
|
2023-06-09T18:21:04Z
|
A Unified Model and Dimension for Interactive Estimation
|
[
"Nataly Brukhim",
"Miroslav Dudik",
"Aldo Pacchiano",
"Robert Schapire"
] |
We study an abstract framework for interactive learning called interactive
estimation in which the goal is to estimate a target from its "similarity'' to
points queried by the learner. We introduce a combinatorial measure called
dissimilarity dimension which largely captures learnability in our model. We
present a simple, general, and broadly-applicable algorithm, for which we
obtain both regret and PAC generalization bounds that are polynomial in the new
dimension. We show that our framework subsumes and thereby unifies two classic
learning models: statistical-query learning and structured bandits. We also
delineate how the dissimilarity dimension is related to well-known parameters
for both frameworks, in some cases yielding significantly improved analyses.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.06191
|
2023-06-09T18:43:26Z
|
Open Data on GitHub: Unlocking the Potential of AI
|
[
"Anthony Cintron Roman",
"Kevin Xu",
"Arfon Smith",
"Jehu Torres Vega",
"Caleb Robinson",
"Juan M Lavista Ferres"
] |
GitHub is the world's largest platform for collaborative software
development, with over 100 million users. GitHub is also used extensively for
open data collaboration, hosting more than 800 million open data files,
totaling 142 terabytes of data. This study highlights the potential of open
data on GitHub and demonstrates how it can accelerate AI research. We analyze
the existing landscape of open data on GitHub and the patterns of how users
share datasets. Our findings show that GitHub is one of the largest hosts of
open data in the world and has experienced an accelerated growth of open data
assets over the past four years. By examining the open data landscape on
GitHub, we aim to empower users and organizations to leverage existing open
datasets and improve their discoverability -- ultimately contributing to the
ongoing AI revolution to help address complex societal issues. We release the
three datasets that we have collected to support this analysis as open datasets
at https://github.com/github/open-data-on-github.
|
[
"cs.LG",
"cs.IR"
] | false |
2306.06213
|
2023-06-09T19:27:24Z
|
Robust Twin Parametric Margin Support Vector Machine for Multiclass
Classification
|
[
"Renato De Leone",
"Francesca Maggioni",
"Andrea Spinelli"
] |
In this paper we present a Twin Parametric-Margin Support Vector Machine
(TPMSVM) model to tackle the problem of multiclass classification. In the
spirit of one-versus-all paradigm, for each class we construct a classifier by
solving a TPMSVM-type model. Once all classifiers have been determined, they
are combined into an aggregate decision function. We consider the cases of both
linear and nonlinear kernel-induced classifiers. In addition, we robustify the
proposed approach through robust optimization techniques. Indeed, in real-world
applications observations are subject to measurement errors and noise,
affecting the quality of the solutions. Consequently, data uncertainties need
to be included within the model in order to prevent low accuracies in the
classification process. Preliminary computational experiments on real-world
datasets show the good performance of the proposed approach.
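The one-versus-all aggregation described above can be sketched generically: fit one scorer per class against the rest, then predict with the largest decision value. In the sketch below (ours, illustrative), simple ridge-regression scorers stand in for the per-class TPMSVM subproblems, which require a dedicated QP solver.

```python
import numpy as np

def fit_one_vs_all(X, y, n_classes, lam=1e-2):
    """One scorer per class: +1 targets for the class, -1 for the rest."""
    Xb = np.hstack([X, np.ones((len(X), 1))])          # append bias term
    W = np.zeros((n_classes, Xb.shape[1]))
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    for c in range(n_classes):
        t = np.where(y == c, 1.0, -1.0)
        W[c] = np.linalg.solve(A, Xb.T @ t)            # ridge stand-in for the TPMSVM QP
    return W

def predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W.T, axis=1)                 # aggregate decision: largest score wins

# three well-separated Gaussian clusters as toy data
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in centers])
y = np.repeat(np.arange(3), 50)
W = fit_one_vs_all(X, y, 3)
acc = (predict(W, X) == y).mean()
```

Robustification against measurement noise would then replace each per-class subproblem with its robust-optimization counterpart, leaving the aggregation step unchanged.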
|
[
"cs.LG",
"math.OC"
] | false |
2306.06228
|
2023-06-09T19:53:40Z
|
AVScan2Vec: Feature Learning on Antivirus Scan Data for Production-Scale
Malware Corpora
|
[
"Robert J. Joyce",
"Tirth Patel",
"Charles Nicholas",
"Edward Raff"
] |
When investigating a malicious file, searching for related files is a common
task that malware analysts must perform. Given that production malware corpora
may contain over a billion files and consume petabytes of storage, many feature
extraction and similarity search approaches are computationally infeasible. Our
work explores the potential of antivirus (AV) scan data as a scalable source of
features for malware. This is possible because AV scan reports are widely
available through services such as VirusTotal and are ~100x smaller than the
average malware sample. An AV scan report is rich in information and can
indicate a malicious file's family, behavior, target
operating system, and many other characteristics. We introduce AVScan2Vec, a
language model trained to comprehend the semantics of AV scan data. AVScan2Vec
ingests AV scan data for a malicious file and outputs a meaningful vector
representation. AVScan2Vec vectors are ~3 to 85x smaller than popular
alternatives in use today, enabling faster vector comparisons and lower memory
usage. By incorporating Dynamic Continuous Indexing, we show that
nearest-neighbor queries on AVScan2Vec vectors can scale to even the largest
malware production datasets. We also demonstrate that AVScan2Vec vectors are
superior to other leading malware feature vector representations across nearly
all classification, clustering, and nearest-neighbor lookup algorithms that we
evaluated.
|
[
"cs.CR",
"cs.LG"
] | false |
2306.06252
|
2023-06-09T20:46:55Z
|
Feature Programming for Multivariate Time Series Prediction
|
[
"Alex Reneau",
"Jerry Yao-Chieh Hu",
"Chenwei Xu",
"Weijian Li",
"Ammar Gilani",
"Han Liu"
] |
We introduce the concept of programmable feature engineering for time series
modeling and propose a feature programming framework. This framework generates
large amounts of predictive features for noisy multivariate time series while
allowing users to incorporate their inductive bias with minimal effort. The key
motivation of our framework is to view any multivariate time series as a
cumulative sum of fine-grained trajectory increments, with each increment
governed by a novel spin-gas dynamical Ising model. This fine-grained
perspective motivates the development of a parsimonious set of operators that
summarize multivariate time series in an abstract fashion, serving as the
foundation for large-scale automated feature engineering. Numerically, we
validate the efficacy of our method on several synthetic and real-world noisy
time series datasets.
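The "cumulative sum of fine-grained trajectory increments" view has a direct numerical reading: any series equals the cumulative sum of its first differences. The toy decomposition below is our own illustration of that perspective; the summary operators shown are simple stand-ins, not the paper's operator set.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)).cumsum(axis=0)       # toy multivariate series (T x d)

# fine-grained trajectory increments; prepending zeros makes the first
# increment carry the starting value, so the cumsum recovers X exactly
increments = np.diff(X, axis=0, prepend=np.zeros((1, 3)))
X_rebuilt = increments.cumsum(axis=0)

# a small set of increment-level summary features (illustrative only)
features = np.concatenate([
    increments.mean(axis=0),                       # drift per dimension
    np.abs(increments).mean(axis=0),               # activity per dimension
    (increments > 0).mean(axis=0),                 # fraction of upward moves
])
```

Operating on increments rather than raw levels is what makes such features composable: any operator applied to the increment sequence summarizes local dynamics independently of the series' absolute scale or offset.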
|
[
"cs.LG",
"stat.ML"
] | false |
2306.06262
|
2023-06-09T21:18:44Z
|
Spectral gap-based deterministic tensor completion
|
[
"Kameron Decker Harris",
"Oscar López",
"Angus Read",
"Yizhe Zhu"
] |
Tensor completion is a core machine learning algorithm used in recommender
systems and other domains with missing data. While the matrix case is
well-understood, theoretical results for tensor problems are limited,
particularly when the sampling patterns are deterministic. Here we bound the
generalization error of the solutions of two tensor completion methods, Poisson
loss and atomic norm minimization, providing tighter bounds in terms of the
target tensor rank. If the ground-truth tensor is order $t$ with CP-rank $r$,
the dependence on $r$ is improved from $r^{2(t-1)(t^2-t-1)}$ in
arXiv:1910.10692 to $r^{2(t-1)(3t-5)}$. The error in our bounds is
deterministically controlled by the spectral gap of the sampling sparsity
pattern. We also prove several new properties for the atomic tensor norm,
reducing the rank dependence from $r^{3t-3}$ in arXiv:1711.04965 to $r^{3t-5}$
under random sampling schemes. A limitation is that atomic norm minimization,
while theoretically interesting, leads to inefficient algorithms. However,
numerical experiments illustrate the dependence of the reconstruction error on
the spectral gap for the practical max-quasinorm, ridge penalty, and Poisson
loss minimization algorithms. This view through the spectral gap is a promising
window for further study of tensor algorithms.
|
[
"stat.ML",
"cs.LG"
] | false |
2306.06281
|
2023-06-09T22:11:16Z
|
Energy-Dissipative Evolutionary Deep Operator Neural Networks
|
[
"Jiahao Zhang",
"Shiheng Zhang",
"Jie Shen",
"Guang Lin"
] |
Energy-Dissipative Evolutionary Deep Operator Neural Network is an operator
learning neural network. It is designed to seek numerical solutions for a class
of partial differential equations instead of a single partial differential
equation, such as partial differential equations with different parameters or
different initial conditions. The network consists of two sub-networks, the
Branch net and the Trunk net. For an objective operator G, the Branch net
encodes different input functions u at the same number of sensors, and the
Trunk net evaluates the output function at any location. By minimizing the
error between the evaluated output q and the expected output G(u)(y), DeepONet
generates a good approximation of the operator G. In order to preserve
essential physical properties of PDEs, such as the Energy Dissipation Law, we
adopt a scalar auxiliary variable approach to generate the minimization
problem. It introduces a modified energy and enables unconditional energy
dissipation law at the discrete level. By taking the parameter as a function of
time t, this network can predict the accurate solution at any later time by
feeding in data only at the initial state. The data needed can be generated from the
initial conditions, which are readily available. In order to validate the
accuracy and efficiency of our neural networks, we provide numerical
simulations of several partial differential equations, including heat
equations, parametric heat equations and Allen-Cahn equations.
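The Branch/Trunk structure described above reduces, at evaluation time, to an inner product: G(u)(y) is approximated by the dot product of the branch encoding of u (sampled at the sensors) with the trunk encoding of the query location y. The minimal forward pass below is our own sketch with random single-layer networks standing in for trained ones.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 50, 32                                  # number of sensors, latent width

# untrained single-layer branch and trunk nets (illustrative stand-ins)
W_b = rng.normal(size=(p, m)) / np.sqrt(m)     # branch: encodes u at m sensors
W_t = rng.normal(size=(p, 1))                  # trunk: evaluates at a location y
b_b, b_t = rng.normal(size=p), rng.normal(size=p)

def deeponet(u_sensors: np.ndarray, y: float) -> float:
    branch = np.tanh(W_b @ u_sensors + b_b)    # b(u), shape (p,)
    trunk = np.tanh(W_t[:, 0] * y + b_t)       # t(y), shape (p,)
    return float(branch @ trunk)               # inner product approximates G(u)(y)

u = np.sin(np.linspace(0, np.pi, m))           # an input function sampled at the sensors
q = deeponet(u, 0.5)                           # evaluated output at y = 0.5
```

Training then minimizes the error between q and the expected output G(u)(y) over many (u, y) pairs; the energy-dissipative variant additionally constrains this optimization via the scalar auxiliary variable formulation.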
|
[
"stat.ML",
"cs.LG"
] | false |
2306.06292
|
2023-06-09T22:48:14Z
|
PLPCA: Persistent Laplacian Enhanced-PCA for Microarray Data Analysis
|
[
"Sean Cottrell",
"Rui Wang",
"Guowei Wei"
] |
Over the years, Principal Component Analysis (PCA) has served as the baseline
approach for dimensionality reduction in gene expression data analysis. Its
primary objective is to identify a subset of disease-causing genes from a vast
pool of thousands of genes. However, PCA possesses inherent limitations that
hinder its interpretability, introduce classification ambiguity, and fail to
capture complex geometric structures in the data. Although these limitations
have been partially addressed in the literature by incorporating various
regularizers such as graph Laplacian regularization, existing improved PCA
methods still face challenges related to multiscale analysis and capturing
higher-order interactions in the data. To address these challenges, we propose
a novel approach called Persistent Laplacian-enhanced Principal Component
Analysis (PLPCA). PLPCA amalgamates the advantages of earlier regularized PCA
methods with persistent spectral graph theory, specifically persistent
Laplacians derived from algebraic topology. In contrast to graph Laplacians,
persistent Laplacians enable multiscale analysis through filtration and
incorporate higher-order simplicial complexes to capture higher-order
interactions in the data. We evaluate and validate the performance of PLPCA
using benchmark microarray datasets that involve normal tissue samples and four
different cancer tissues. Our extensive studies demonstrate that PLPCA
outperforms all other state-of-the-art models for classification tasks after
dimensionality reduction.
|
[
"math.AT",
"cs.LG"
] | false |
2306.06296
|
2023-06-09T23:22:49Z
|
Response Time Improves Choice Prediction and Function Estimation for
Gaussian Process Models of Perception and Preferences
|
[
"Michael Shvartsman",
"Benjamin Letham",
"Stephen Keeley"
] |
Models for human choice prediction in preference learning and psychophysics
often consider only binary response data, requiring many samples to accurately
learn preferences or perceptual detection thresholds. The response time (RT) to
make each choice captures additional information about the decision process;
however, existing models incorporating RTs for choice prediction do so in fully
parametric settings or over discrete stimulus sets. This is in part because the
de-facto standard model for choice RTs, the diffusion decision model (DDM),
does not admit tractable, differentiable inference. The DDM thus cannot be
easily integrated with flexible models for continuous, multivariate function
approximation, particularly Gaussian process (GP) models. We propose a novel
differentiable approximation to the DDM likelihood using a family of known,
skewed three-parameter distributions. We then use this new likelihood to
incorporate RTs into GP models for binary choices. Our RT-choice GPs enable
both better latent value estimation and held-out choice prediction relative to
baselines, which we demonstrate on three real-world multivariate datasets
covering both human psychophysics and preference learning applications.
|
[
"q-bio.NC",
"cs.LG"
] | false |
2306.06302
|
2023-06-09T23:40:03Z
|
Multi-Task Knowledge Enhancement for Zero-Shot and Multi-Domain
Recommendation in an AI Assistant Application
|
[
"Elan Markowitz",
"Ziyan Jiang",
"Fan Yang",
"Xing Fan",
"Tony Chen",
"Greg Ver Steeg",
"Aram Galstyan"
] |
Recommender systems have found significant commercial success but still
struggle with integrating new users. Since users often interact with content in
different domains, it is possible to leverage a user's interactions in previous
domains to improve that user's recommendations in a new one (multi-domain
recommendation). A separate research thread on knowledge graph enhancement uses
external knowledge graphs to improve single domain recommendations (knowledge
graph enhancement). Both research threads incorporate related information to
improve predictions in a new domain. We propose in this work to unify these
approaches: Using information from interactions in other domains as well as
external knowledge graphs to make predictions in a new domain that would be
impossible with either information source alone. We apply these ideas to a
dataset derived from millions of users' requests for content across three
domains (videos, music, and books) in a live virtual assistant application. We
demonstrate the advantage of combining knowledge graph enhancement with
previous multi-domain recommendation techniques to provide better overall
recommendations as well as for better recommendations on new users of a domain.
|
[
"cs.IR",
"cs.LG"
] | false |
2306.07290
|
2023-06-09T18:40:55Z
|
Value function estimation using conditional diffusion models for control
|
[
"Bogdan Mazoure",
"Walter Talbott",
"Miguel Angel Bautista",
"Devon Hjelm",
"Alexander Toshev",
"Josh Susskind"
] |
A fairly reliable trend in deep reinforcement learning is that performance
scales with the number of parameters, provided a complementary scaling in the
amount of training data. As the appetite for large models increases, it is
imperative to address, sooner rather than later, the potential problem of
running out of high-quality demonstrations. In this case, instead of collecting
only new data via costly human demonstrations or risking a simulation-to-real
transfer with uncertain effects, it would be beneficial to leverage vast
amounts of readily-available low-quality data. Since classical control
algorithms such as behavior cloning or temporal difference learning cannot be
used on reward-free or action-free data out-of-the-box, this solution warrants
novel training paradigms for continuous control. We propose a simple algorithm
called Diffused Value Function (DVF), which learns a joint multi-step model of
the environment-robot interaction dynamics using a diffusion model. This model
can be efficiently learned from state sequences (i.e., without access to reward
functions nor actions), and subsequently used to estimate the value of each
action out-of-the-box. We show how DVF can be used to efficiently capture the
state visitation measure for multiple controllers, and show promising
qualitative and quantitative results on challenging robotics benchmarks.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.07989
|
2023-06-09T19:01:32Z
|
A Survey on Cross-Architectural IoT Malware Threat Hunting
|
[
"Anandharaju Durai Raju",
"Ibrahim Abualhaol",
"Ronnie Salvador Giagone",
"Yang Zhou",
"Shengqiang Huang"
] |
In recent years, the increase in non-Windows malware threats has drawn the
focus of the cybersecurity community. Research on hunting Windows PE-based
malware is maturing, whereas developments in Linux malware threat hunting are
relatively scarce. With the advent of the Internet of Things (IoT) era, smart
devices that are integrated into human life have become a highway for hackers'
malicious activities. The IoT devices employ
various Unix-based architectures that follow ELF (Executable and Linkable
Format) as their standard binary file specification. This study aims at
providing a comprehensive survey on the latest developments in
cross-architectural IoT malware detection and classification approaches. Aided
by a modern taxonomy, we discuss the feature representations, feature
extraction techniques, and machine learning models employed in the surveyed
works. We further provide more insights on the practical challenges involved in
cross-architectural IoT malware threat hunting and discuss various avenues to
stimulate potential future research.
|
[
"cs.CR",
"cs.LG"
] | false |
2306.13662
|
2023-06-09T12:14:43Z
|
Best Practices for Machine Learning Systems: An Industrial Framework for
Analysis and Optimization
|
[
"Georgios Christos Chouliaras",
"Kornel Kiełczewski",
"Amit Beka",
"David Konopnicki",
"Lucas Bernardi"
] |
In the last few years, the Machine Learning (ML) and Artificial Intelligence
community has developed an increasing interest in Software Engineering (SE) for
ML Systems leading to a proliferation of best practices, rules, and guidelines
aiming at improving the quality of the software of ML Systems. However,
understanding their impact on the overall quality has received less attention.
Practices are usually presented in a prescriptive manner, without an explicit
connection to their overall contribution to software quality. Based on the
observation that different practices influence different aspects of software
quality, and that a single quality aspect might be addressed by several
practices, we propose a framework to analyse sets of best practices with a
focus on quality impact and prioritization of their implementation. We first
introduce a hierarchical Software Quality Model (SQM) specifically tailored for
ML Systems. Relying on expert knowledge, the connection between individual
practices and software quality aspects is explicitly elicited for a large set
of well-established practices. Applying set-function optimization techniques we
can answer questions such as what is the set of practices that maximizes SQM
coverage, what are the most important ones, which practices should be
implemented in order to improve specific quality aspects, among others. We
illustrate the usage of our framework by analyzing well-known sets of
practices.
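Questions like "what set of practices maximizes SQM coverage?" are instances of set-function optimization, for which greedy selection is the textbook approach with a (1 - 1/e) approximation guarantee for coverage objectives. The sketch below is our own toy illustration; the practice names and quality-aspect links are hypothetical, not the paper's elicited mapping.

```python
# Greedy max-coverage over hypothetical practice -> quality-aspect links
practices = {
    "unit tests for feature code": {"correctness", "maintainability"},
    "model versioning": {"traceability", "reproducibility"},
    "data validation": {"correctness", "robustness"},
    "experiment tracking": {"reproducibility", "traceability"},
    "monitoring in production": {"robustness", "operability"},
}

def greedy_coverage(practices: dict[str, set[str]], k: int) -> list[str]:
    """Pick up to k practices, each time taking the largest marginal gain."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(practices, key=lambda p: len(practices[p] - covered))
        if not practices[best] - covered:
            break                                # no remaining marginal gain
        chosen.append(best)
        covered |= practices[best]
    return chosen

top3 = greedy_coverage(practices, 3)
covered = set().union(*(practices[p] for p in top3))
```

The same greedy loop answers prioritization questions directly: the order in which practices are chosen is an implementation ordering ranked by marginal contribution to quality coverage.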
|
[
"cs.SE",
"cs.LG"
] | false |
2307.05390
|
2023-06-09T11:16:01Z
|
CrysMMNet: Multimodal Representation for Crystal Property Prediction
|
[
"Kishalay Das",
"Pawan Goyal",
"Seung-Cheol Lee",
"Satadeep Bhattacharjee",
"Niloy Ganguly"
] |
Machine Learning models have emerged as a powerful tool for fast and accurate
prediction of different crystalline properties. Existing state-of-the-art models
rely on a single modality of crystal data, i.e. the crystal graph structure, where
they construct multi-graph by establishing edges between nearby atoms in 3D
space and apply GNN to learn materials representation. Thereby, they encode
local chemical semantics around the atoms successfully but fail to capture
important global periodic structural information like space group number,
crystal symmetry, rotational information, etc., which influence different
crystal properties. In this work, we leverage textual descriptions of materials
to model global structural information into graph structure and learn a more
robust and enriched representation of crystalline materials. To this effect, we
first curate a textual dataset for crystalline material databases containing
descriptions of each material. Further, we propose CrysMMNet, a simple
multi-modal framework, which fuses both structural and textual representation
together to generate a joint multimodal representation of crystalline
materials. We conduct extensive experiments on two benchmark datasets across
ten different properties to show that CrysMMNet outperforms existing
state-of-the-art baseline methods by a good margin. We also observe that
fusing the textual representation with crystal graph structure provides
consistent improvement for all the SOTA GNN models compared to their own
vanilla versions. We have shared the textual dataset, that we have curated for
both the benchmark material databases, with the community for future use.
|
[
"cond-mat.mtrl-sci",
"cs.LG"
] | false |
2306.05651
|
2023-06-09T03:37:27Z
|
Differentially Private Sharpness-Aware Training
|
[
"Jinseong Park",
"Hoki Kim",
"Yujin Choi",
"Jaewook Lee"
] |
Training deep learning models with differential privacy (DP) results in a
degradation of performance. The training dynamics of models with DP show a
significant difference from standard training, whereas understanding the
geometric properties of private learning remains largely unexplored. In this
paper, we investigate sharpness, a key factor in achieving better
generalization, in private learning. We show that flat minima can help reduce
the negative effects of per-example gradient clipping and the addition of
Gaussian noise. We then verify the effectiveness of Sharpness-Aware
Minimization (SAM) for seeking flat minima in private learning. However, we
also discover that SAM is detrimental to the privacy budget and computational
time due to its two-step optimization. Thus, we propose a new sharpness-aware
training method that mitigates the privacy-optimization trade-off. Our
experimental results demonstrate that the proposed method improves the
performance of deep learning models with DP, both from scratch and with fine-tuning.
Code is available at https://github.com/jinseongP/DPSAT.
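The two-step optimization that the abstract identifies as SAM's cost under DP is easy to make concrete: each update first ascends to a nearby high-loss point, then descends using the gradient computed there, so every step needs two gradient evaluations (and, under DP-SGD, two privacy-charged gradient queries). The toy quadratic below is our own illustration, not the paper's proposed method.

```python
import numpy as np

A = np.diag([10.0, 1.0])          # ill-conditioned quadratic f(w) = 0.5 * w^T A w

def grad(w):
    return A @ w

w = np.array([1.0, 1.0])
rho, lr = 0.05, 0.05              # SAM neighborhood radius and learning rate

for _ in range(200):
    g = grad(w)                                   # 1st gradient: ascent direction
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # perturb toward higher loss
    g_sam = grad(w + eps)                         # 2nd gradient: at perturbed point
    w = w - lr * g_sam                            # descend using the SAM gradient

loss = 0.5 * w @ A @ w
```

The proposed DPSAT method is precisely about avoiding paying for that second gradient query out of the privacy budget; the snippet only shows why vanilla SAM incurs it.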
|
[
"cs.LG",
"cs.AI",
"cs.CR"
] | false |
2306.05666
|
2023-06-09T04:40:38Z
|
QuestEnvSim: Environment-Aware Simulated Motion Tracking from Sparse
Sensors
|
[
"Sunmin Lee",
"Sebastian Starke",
"Yuting Ye",
"Jungdam Won",
"Alexander Winkler"
] |
Replicating a user's pose from only wearable sensors is important for many
AR/VR applications. Most existing methods for motion tracking avoid environment
interaction apart from foot-floor contact due to their complex dynamics and
hard constraints. However, in daily life people regularly interact with their
environment, e.g. by sitting on a couch or leaning on a desk. Using
Reinforcement Learning, we show that headset and controller pose, if combined
with physics simulation and environment observations, can generate realistic
full-body poses even in highly constrained environments. The physics simulation
automatically enforces the various constraints necessary for realistic poses,
instead of manually specifying them as in many kinematic approaches. These hard
constraints allow us to achieve high-quality interaction motions without
typical artifacts such as penetration or contact sliding. We discuss three
features, the environment representation, the contact reward and scene
randomization, crucial to the performance of the method. We demonstrate the
generality of the approach through various examples, such as sitting on chairs,
a couch and boxes, stepping over boxes, rocking a chair and turning an office
chair. We believe these are some of the highest-quality results achieved for
motion tracking from sparse sensors with scene interaction.
|
[
"cs.GR",
"cs.LG",
"cs.RO",
"I.3.6"
] | false |
2306.05706
|
2023-06-09T06:55:15Z
|
Understanding How Consistency Works in Federated Learning via Stage-wise
Relaxed Initialization
|
[
"Yan Sun",
"Li Shen",
"Dacheng Tao"
] |
Federated learning (FL) is a distributed paradigm that coordinates massive
local clients to collaboratively train a global model via stage-wise local
training processes on heterogeneous datasets. Previous works have implicitly
shown that FL suffers from the ``client-drift'' problem, which is caused by
the inconsistent optima across local clients. However, a solid theoretical
analysis explaining the impact of this local inconsistency is still lacking.
To alleviate the negative impact of the ``client drift'' and explore its
substance in FL, in this paper, we first design an efficient FL algorithm
\textit{FedInit}, which allows employing the personalized relaxed
initialization state at the beginning of each local training stage.
Specifically, \textit{FedInit} initializes the local state by moving away from
the current global state towards the reverse direction of the latest local
state. This relaxed initialization helps to revise the local divergence and
enhance the local consistency level. Moreover, to further understand how
inconsistency disrupts performance in FL, we introduce the excess risk analysis
and study the divergence term to investigate the test error of the proposed
\textit{FedInit} method. Our studies show that optimization error is not
sensitive to this local inconsistency, while it mainly affects the
generalization error bound in \textit{FedInit}. Extensive experiments are
conducted to validate this conclusion. Our proposed \textit{FedInit} could
achieve state-of-the-art~(SOTA) results compared to several advanced benchmarks
without any additional costs. Meanwhile, stage-wise relaxed initialization
could also be incorporated into the current advanced algorithms to achieve
higher performance in the FL paradigm.
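One way to read the relaxed initialization described above is: each client starts local training not at the global model, but one step past it, away from that client's latest local state. The simulation below is our own sketch under that reading; the initialization rule init_i = global + beta * (global - last_local_i), the coefficient beta, and the quadratic local objectives are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim, beta, lr = 4, 5, 0.1, 0.1
targets = rng.normal(size=(n_clients, dim))        # heterogeneous local optima
w_global = np.zeros(dim)
w_local = np.tile(w_global, (n_clients, 1))

for _ in range(50):                                # communication rounds
    for i in range(n_clients):
        # relaxed initialization (assumed form): step away from the global
        # state, opposite this client's latest local state
        w = w_global + beta * (w_global - w_local[i])
        for _ in range(5):                         # local SGD on f_i = 0.5*||w - t_i||^2
            w = w - lr * (w - targets[i])
        w_local[i] = w
    w_global = w_local.mean(axis=0)                # server-side averaging

# with quadratic objectives the consistent solution is the mean of local optima
gap = np.linalg.norm(w_global - targets.mean(axis=0))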
|
[
"cs.LG",
"cs.DC",
"math.OC"
] | false |
2306.05764
|
2023-06-09T08:57:14Z
|
Fair yet Asymptotically Equal Collaborative Learning
|
[
"Xiaoqiang Lin",
"Xinyi Xu",
"See-Kiong Ng",
"Chuan-Sheng Foo",
"Bryan Kian Hsiang Low"
] |
In collaborative learning with streaming data, nodes (e.g., organizations)
jointly and continuously learn a machine learning (ML) model by sharing the
latest model updates computed from their latest streaming data. For the more
resourceful nodes to be willing to share their model updates, they need to be
fairly incentivized. This paper explores an incentive design that guarantees
fairness so that nodes receive rewards commensurate to their contributions. Our
approach leverages an explore-then-exploit formulation to estimate the nodes'
contributions (i.e., exploration) for realizing our theoretically guaranteed
fair incentives (i.e., exploitation). However, we observe a "rich get richer"
phenomenon arising from the existing approaches to guarantee fairness and it
discourages the participation of the less resourceful nodes. To remedy this, we
additionally preserve asymptotic equality, i.e., less resourceful nodes achieve
equal performance eventually to the more resourceful/"rich" nodes. We
empirically demonstrate in two settings with real-world streaming data:
federated online incremental learning and federated reinforcement learning,
that our proposed approach outperforms existing baselines in fairness and
learning performance while remaining competitive in preserving equality.
|
[
"cs.LG",
"cs.AI",
"cs.GT",
"cs.MA"
] | false |
2306.05813
|
2023-06-09T11:12:55Z
|
Incorporating Prior Knowledge in Deep Learning Models via Pathway
Activity Autoencoders
|
[
"Pedro Henrique da Costa Avelar",
"Min Wu",
"Sophia Tsoka"
] |
Motivation: Despite advances in the computational analysis of high-throughput
molecular profiling assays (e.g. transcriptomics), a dichotomy exists between
methods that are simple and interpretable, and ones that are complex but with
lower degree of interpretability. Furthermore, very few methods deal with
trying to translate interpretability in biologically relevant terms, such as
known pathway cascades. Biological pathways reflect signalling events or
metabolic conversions. Small improvements or modifications of existing
algorithms will generally not be suitable, unless novel biological results have
been predicted and verified. Determining which pathways are implicated in
disease and incorporating such pathway data as prior knowledge may enhance
predictive modelling and personalised strategies for diagnosis, treatment and
prevention of disease.
Results: We propose a novel prior-knowledge-based deep auto-encoding
framework, PAAE, together with its accompanying generative variant, PAVAE, for
RNA-seq data in cancer. Through comprehensive comparisons among various
learning models, we show that, despite having access to a smaller set of
features, our PAAE and PAVAE models achieve better out-of-set reconstruction
results compared to common methodologies. Furthermore, we compare our model
with equivalent baselines on a classification task and show that they achieve
better results than models which have access to the full input gene set.
Another result is that using vanilla variational frameworks might negatively
impact both reconstruction outputs as well as classification performance.
Finally, our work directly contributes by providing comprehensive
interpretability analyses on our models on top of improving prognostication for
translational medicine.
|
[
"cs.LG",
"cs.AI",
"cs.NE"
] | false |
2306.05862
|
2023-06-09T12:53:24Z
|
Federated Learning You May Communicate Less Often!
|
[
"Milad Sefidgaran",
"Romain Chor",
"Abdellatif Zaidi",
"Yijun Wan"
] |
We investigate the generalization error of statistical learning models in a
Federated Learning (FL) setting. Specifically, we study the evolution of the
generalization error with the number of communication rounds between the
clients and the parameter server, i.e., the effect on the generalization error
of how often the local models as computed by the clients are aggregated at the
parameter server. We establish PAC-Bayes and rate-distortion theoretic bounds
on the generalization error that account explicitly for the effect of the
number of rounds, say $ R \in \mathbb{N}$, in addition to the number of
participating devices $K$ and individual datasets size $n$. The bounds, which
apply in their generality for a large class of loss functions and learning
algorithms, appear to be the first of their kind for the FL setting.
Furthermore, we apply our bounds to FL-type Support Vector Machines (FSVM); and
we derive (more) explicit bounds on the generalization error in this case. In
particular, we show that the generalization error of FSVM increases with $R$,
suggesting that more frequent communication with the parameter server
diminishes the generalization power of such learning algorithms. Combined with
the fact that the empirical risk generally decreases for larger values of $R$, this
indicates that $R$ might be a parameter to optimize in order to minimize the
population risk of FL algorithms. Moreover, specialized to the case $R=1$
(sometimes referred to as "one-shot" FL or distributed learning) our bounds
suggest that the generalization error of the FL setting decreases faster than
that of centralized learning by a factor of $\mathcal{O}(\sqrt{\log(K)/K})$,
thereby generalizing recent findings in this direction to arbitrary loss
functions and algorithms. The results of this paper are also validated on some
experiments.
|
[
"stat.ML",
"cs.IT",
"cs.LG",
"math.IT"
] | false |
2306.05873
|
2023-06-09T13:11:05Z
|
Detecting Adversarial Directions in Deep Reinforcement Learning to Make
Robust Decisions
|
[
"Ezgi Korkmaz",
"Jonah Brown-Cohen"
] |
Learning in MDPs with highly complex state representations is currently
possible due to multiple advancements in reinforcement learning algorithm
design. However, this increase in complexity, and with it the growth in the
dimensionality of the observations, came at the cost of volatility that can be
taken advantage of via adversarial attacks (i.e. moving along worst-case
directions in the observation space). To solve this policy instability problem
we propose a novel method to detect the presence of these non-robust directions
via local quadratic approximation of the deep neural policy loss. Our method
provides a theoretical basis for the fundamental cut-off between safe
observations and adversarial observations. Furthermore, our technique is
computationally efficient, and does not depend on the methods used to produce
the worst-case directions. We conduct extensive experiments in the Arcade
Learning Environment with several different adversarial attack techniques. Most
significantly, we demonstrate the effectiveness of our approach even in the
setting where non-robust directions are explicitly optimized to circumvent our
proposed method.
|
[
"cs.LG",
"cs.AI",
"cs.CR",
"stat.ML"
] | false |
2306.05889
|
2023-06-09T13:35:04Z
|
C(NN)FD -- a deep learning framework for turbomachinery CFD analysis
|
[
"Giuseppe Bruni",
"Sepehr Maleki",
"Senthil K. Krishnababu"
] |
Deep Learning methods have seen a wide range of successful applications
across different industries. Up until now, applications to physical simulations
such as CFD (Computational Fluid Dynamics), have been limited to simple
test-cases of minor industrial relevance. This paper demonstrates the
development of a novel deep learning framework for real-time predictions of the
impact of manufacturing and build variations on the overall performance of
axial compressors in gas turbines, with a focus on tip clearance variations.
The associated scatter in efficiency can significantly increase the $CO_2$
emissions, thus being of great industrial and environmental relevance. The
proposed \textit{C(NN)FD} architecture achieves, in real time, accuracy
comparable to the CFD benchmark. Predicting the flow field and using it to
calculate the corresponding overall performance renders the methodology
generalisable, while filtering only relevant parts of the CFD solution makes
the methodology scalable to industrial applications.
|
[
"cs.LG",
"cs.CE",
"physics.flu-dyn"
] | false |
2306.05937
|
2023-06-09T14:56:06Z
|
Robust Data-driven Prescriptiveness Optimization
|
[
"Mehran Poursoltani",
"Erick Delage",
"Angelos Georghiou"
] |
The abundance of data has led to the emergence of a variety of optimization
techniques that attempt to leverage available side information to provide more
anticipative decisions. The wide range of methods and contexts of application
have motivated the design of a universal unitless measure of performance known
as the coefficient of prescriptiveness. This coefficient was designed to
quantify both the quality of contextual decisions compared to a reference one
and the prescriptive power of side information. To identify policies that
maximize the former in a data-driven context, this paper introduces a
distributionally robust contextual optimization model where the coefficient of
prescriptiveness substitutes for the classical empirical risk minimization
objective. We present a bisection algorithm to solve this model, which relies
on solving a series of linear programs when the distributional ambiguity set
has an appropriate nested form and polyhedral structure. Studying a contextual
shortest path problem, we evaluate the robustness of the resulting policies
against alternative methods when the out-of-sample dataset is subject to
varying amounts of distribution shift.
|
[
"math.OC",
"cs.LG",
"stat.ME"
] | false |
2306.05951
|
2023-06-09T15:05:40Z
|
Prediction of Transportation Index for Urban Patterns in Small and
Medium-sized Indian Cities using Hybrid RidgeGAN Model
|
[
"Rahisha Thottolil",
"Uttam Kumar",
"Tanujit Chakraborty"
] |
The rapid urbanization trend in most developing countries including India is
creating a plethora of civic concerns such as loss of green space, degradation
of environmental health, clean water availability, air pollution, traffic
congestion leading to delays in vehicular transportation, etc. Transportation
and network modeling through transportation indices have been widely used to
understand transportation problems in the recent past. This necessitates
predicting transportation indices to facilitate sustainable urban planning and
traffic management. Recent advancements in deep learning research, in
particular, Generative Adversarial Networks (GANs), and their modifications in
spatial data analysis such as CityGAN, Conditional GAN, and MetroGAN have
enabled urban planners to simulate hyper-realistic urban patterns. These
synthetic urban universes mimic global urban patterns and evaluating their
landscape structures through spatial pattern analysis can aid in comprehending
landscape dynamics, thereby enhancing sustainable urban planning. This research
addresses several challenges in predicting the urban transportation index for
small and medium-sized Indian cities. A hybrid framework based on Kernel Ridge
Regression (KRR) and CityGAN is introduced to predict transportation index
using spatial indicators of human settlement patterns. This paper establishes a
relationship between the transportation index and human settlement indicators
and models it using KRR for the selected 503 Indian cities. The proposed hybrid
pipeline, which we call the RidgeGAN model, can evaluate the sustainability of urban
sprawl associated with infrastructure development and transportation systems in
sprawling cities. Experimental results show that the two-step pipeline approach
outperforms existing benchmarks based on spatial and statistical measures.
|
[
"cs.LG",
"physics.geo-ph",
"stat.AP"
] | false |
2306.05998
|
2023-06-09T16:10:26Z
|
Distributed Consensus Algorithm for Decision-Making in Multi-agent
Multi-armed Bandit
|
[
"Xiaotong Cheng",
"Setareh Maghsudi"
] |
We study a structured multi-agent multi-armed bandit (MAMAB) problem in a
dynamic environment. A graph reflects the information-sharing structure among
agents, and the arms' reward distributions are piecewise-stationary with
several unknown change points. All agents face the same
piecewise-stationary MAB problem. The goal is to develop a decision-making
policy for the agents that minimizes the regret, which is the expected total
loss of not playing the optimal arm at each time step. Our proposed solution,
Restarted Bayesian Online Change Point Detection in Cooperative Upper
Confidence Bound Algorithm (RBO-Coop-UCB), involves an efficient multi-agent
UCB algorithm as its core enhanced with a Bayesian change point detector. We
also develop a simple cooperative restart decision scheme that improves
decision-making. Theoretically, we establish that the expected group regret of
RBO-Coop-UCB is upper bounded by $\mathcal{O}(KNM\log T + K\sqrt{MT\log T})$,
where K is the number of agents, M is the number of arms, and T is the number
of time steps. Numerical experiments on synthetic and real-world datasets
demonstrate that our proposed method outperforms the state-of-the-art
algorithms.
|
[
"cs.LG",
"cs.MA",
"stat.ML"
] | false |
2306.06136
|
2023-06-09T02:26:28Z
|
Robustness Testing for Multi-Agent Reinforcement Learning: State
Perturbations on Critical Agents
|
[
"Ziyuan Zhou",
"Guanjun Liu"
] |
Multi-Agent Reinforcement Learning (MARL) has been widely applied in many
fields such as smart traffic and unmanned aerial vehicles. However, most MARL
algorithms are vulnerable to adversarial perturbations on agent states.
Robustness testing for a trained model is an essential step for confirming the
trustworthiness of the model against unexpected perturbations. This work
proposes a novel Robustness Testing framework for MARL that attacks states of
Critical Agents (RTCA). The RTCA has two innovations: 1) a Differential
Evolution (DE) based method to select critical agents as victims and to advise
the worst-case joint actions on them; and 2) a team cooperation policy
evaluation method employed as the objective function for the optimization of
DE. Then, adversarial state perturbations of the critical agents are generated
based on the worst-case joint actions. This is the first robustness testing
framework with varying victim agents. RTCA demonstrates outstanding performance
in terms of the number of victim agents and destroying cooperation policies.
|
[
"cs.LG",
"cs.AI",
"cs.CR",
"cs.MA"
] | false |
2306.06144
|
2023-06-09T09:10:28Z
|
Bayesian Calibration of MEMS Accelerometers
|
[
"Oliver Dürr",
"Po-Yu Fan",
"Zong-Xian Yin"
] |
This study aims to investigate the utilization of Bayesian techniques for the
calibration of micro-electro-mechanical systems (MEMS) accelerometers. These
devices have garnered substantial interest in various practical applications
and typically require calibration through error-correcting functions. The
parameters of these error-correcting functions are determined during a
calibration process. However, due to various sources of noise, these parameters
cannot be determined with precision, making it desirable to incorporate
uncertainty in the calibration models. Bayesian modeling offers a natural and
complete way of reflecting uncertainty by treating the model parameters as
variables rather than fixed values. Additionally, Bayesian modeling enables the
incorporation of prior knowledge, making it an ideal choice for calibration.
Nevertheless, it is infrequently used in sensor calibration. This study
introduces Bayesian methods for the calibration of MEMS accelerometer data in a
straightforward manner using recent advances in probabilistic programming.
|
[
"eess.SP",
"cs.LG",
"stat.AP"
] | false |
2306.06179
|
2023-06-09T18:07:06Z
|
Hidden symmetries of ReLU networks
|
[
"J. Elisenda Grigsby",
"Kathryn Lindsey",
"David Rolnick"
] |
The parameter space for any fixed architecture of feedforward ReLU neural
networks serves as a proxy during training for the associated class of
functions - but how faithful is this representation? It is known that many
different parameter settings can determine the same function. Moreover, the
degree of this redundancy is inhomogeneous: for some networks, the only
symmetries are permutation of neurons in a layer and positive scaling of
parameters at a neuron, while other networks admit additional hidden
symmetries. In this work, we prove that, for any network architecture where no
layer is narrower than the input, there exist parameter settings with no hidden
symmetries. We also describe a number of mechanisms through which hidden
symmetries can arise, and empirically approximate the functional dimension of
different network architectures at initialization. These experiments indicate
that the probability that a network has no hidden symmetries decreases towards
0 as depth increases, while increasing towards 1 as width and input dimension
increase.
|
[
"cs.LG",
"math.CO",
"math.GT",
"57R70, 57Q99, 52B70, 52C35",
"I.2.6"
] | false |
2306.06265
|
2023-06-09T21:26:57Z
|
Near-optimal Conservative Exploration in Reinforcement Learning under
Episode-wise Constraints
|
[
"Donghao Li",
"Ruiquan Huang",
"Cong Shen",
"Jing Yang"
] |
This paper investigates conservative exploration in reinforcement learning
where the performance of the learning agent is guaranteed to be above a certain
threshold throughout the learning process. It focuses on the tabular episodic
Markov Decision Process (MDP) setting that has finite states and actions. With
the knowledge of an existing safe baseline policy, an algorithm termed as
StepMix is proposed to balance the exploitation and exploration while ensuring
that the conservative constraint is never violated in each episode with high
probability. StepMix features a unique design of a mixture policy that
adaptively and smoothly interpolates between the baseline policy and the
optimistic policy. Theoretical analysis shows that StepMix achieves
near-optimal regret order as in the constraint-free setting, indicating that
obeying the stringent episode-wise conservative constraint does not compromise
the learning performance. Besides, a randomization-based EpsMix algorithm is
also proposed and shown to achieve the same performance as StepMix. The
algorithm design and theoretical analysis are further extended to the setting
where the baseline policy is not given a priori but must be learned from an
offline dataset, and it is proved that similar conservative guarantee and
regret can be achieved if the offline dataset is sufficiently large. Experiment
results corroborate the theoretical analysis and demonstrate the effectiveness
of the proposed conservative exploration strategies.
|
[
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
] | false |
2306.06284
|
2023-06-09T22:24:05Z
|
Everybody Compose: Deep Beats To Music
|
[
"Conghao Shen",
"Violet Z. Yao",
"Yixin Liu"
] |
This project presents a deep learning approach to generate monophonic
melodies based on input beats, allowing even amateurs to create their own music
compositions. Three effective methods - LSTM with Full Attention, LSTM with
Local Attention, and Transformer with Relative Position Representation - are
proposed for this novel task, providing great variation, harmony, and structure
in the generated music. This project allows anyone to compose their own music
by tapping their keyboards or ``recoloring'' beat sequences from existing
works.
|
[
"cs.SD",
"cs.LG",
"cs.MM",
"eess.AS"
] | false |
2306.06291
|
2023-06-09T22:48:13Z
|
Optimal Heterogeneous Collaborative Linear Regression and Contextual
Bandits
|
[
"Xinmeng Huang",
"Kan Xu",
"Donghwan Lee",
"Hamed Hassani",
"Hamsa Bastani",
"Edgar Dobriban"
] |
Large and complex datasets are often collected from several, possibly
heterogeneous sources. Collaborative learning methods improve efficiency by
leveraging commonalities across datasets while accounting for possible
differences among them. Here we study collaborative linear regression and
contextual bandits, where each instance's associated parameters are equal to a
global parameter plus a sparse instance-specific term. We propose a novel
two-stage estimator called MOLAR that leverages this structure by first
constructing an entry-wise median of the instances' linear regression
estimates, and then shrinking the instance-specific estimates towards the
median. MOLAR improves the dependence of the estimation error on the data
dimension, compared to independent least squares estimates. We then apply MOLAR
to develop methods for sparsely heterogeneous collaborative contextual bandits,
which lead to improved regret guarantees compared to independent bandit
methods. We further show that our methods are minimax optimal by providing a
number of lower bounds. Finally, we support the efficiency of our methods by
performing experiments on both synthetic data and the PISA dataset on student
educational outcomes from heterogeneous countries.
|
[
"stat.ML",
"cs.LG",
"stat.ME"
] | false |
2306.07949
|
2023-06-09T03:36:00Z
|
Improving Frame-level Classifier for Word Timings with Non-peaky CTC in
End-to-End Automatic Speech Recognition
|
[
"Xianzhao Chen",
"Yist Y. Lin",
"Kang Wang",
"Yi He",
"Zejun Ma"
] |
End-to-end (E2E) systems have shown comparable performance to hybrid systems
for automatic speech recognition (ASR). Word timings, as a by-product of ASR,
are essential in many applications, especially for subtitling and
computer-aided pronunciation training. In this paper, we improve the
frame-level classifier for word timings in E2E system by introducing label
priors in connectionist temporal classification (CTC) loss, which is adopted
from prior works, and combining low-level Mel-scale filter banks with
high-level ASR encoder output as input features. On an internal Chinese corpus,
the proposed method achieves 95.68%/94.18% on the word timing accuracy metrics,
compared to 93.0%/90.22% for the hybrid system. It also surpasses a previous
E2E approach, with an absolute increase of 4.80%/8.02% on the same metrics
across 7 languages. In addition, we further improve word timing accuracy by delaying CTC
peaks with frame-wise knowledge distillation, though only experimenting on
LibriSpeech.
|
[
"eess.AS",
"cs.AI",
"cs.LG"
] | false |
2306.11503
|
2023-06-09T15:55:10Z
|
The Age of Synthetic Realities: Challenges and Opportunities
|
[
"João Phillipe Cardenuto",
"Jing Yang",
"Rafael Padilha",
"Renjie Wan",
"Daniel Moreira",
"Haoliang Li",
"Shiqi Wang",
"Fernanda Andaló",
"Sébastien Marcel",
"Anderson Rocha"
] |
Synthetic realities are digital creations or augmentations that are
contextually generated through the use of Artificial Intelligence (AI) methods,
leveraging extensive amounts of data to construct new narratives or realities,
regardless of the intent to deceive. In this paper, we delve into the concept
of synthetic realities and their implications for Digital Forensics and society
at large within the rapidly advancing field of AI. We highlight the crucial
need for the development of forensic techniques capable of identifying harmful
synthetic creations and distinguishing them from reality. This is especially
important in scenarios involving the creation and dissemination of fake news,
disinformation, and misinformation. Our focus extends to various forms of
media, such as images, videos, audio, and text, as we examine how synthetic
realities are crafted and explore approaches to detecting these malicious
creations. Additionally, we shed light on the key research challenges that lie
ahead in this area. This study is of paramount importance due to the rapid
progress of AI generative techniques and their impact on the fundamental
principles of Forensic Science.
|
[
"cs.CY",
"cs.AI",
"cs.LG"
] | false |
2306.12432
|
2023-06-09T22:44:46Z
|
Interpretation of immunofluorescence slides by deep learning techniques:
anti-nuclear antibodies case study
|
[
"Oumar Khlelfa",
"Aymen Yahyaoui",
"Mouna Ben Azaiz",
"Anwer Ncibi",
"Ezzedine Gazouani",
"Adel Ammar",
"Wadii Boulila"
] |
Nowadays, diseases are increasing in both number and severity.
Immune diseases, which affected 8\% of the world population in 2017 according to
the World Health Organization (WHO), constitute a field of medicine that merits
attention due to the high rate of disease occurrence in this category.
work presents an up-to-date review of state-of-the-art immune diseases
healthcare solutions. We focus on tackling the issue with modern solutions such
as Deep Learning to detect anomalies in the early stages hence providing health
practitioners with efficient tools. We rely on advanced deep learning
techniques such as Convolutional Neural Networks (CNN) to fulfill our objective
of providing an efficient tool while providing a proficient analysis of this
solution. The proposed solution was tested and evaluated by the immunology
department in the Principal Military Hospital of Instruction of Tunis, which
considered it a very helpful tool.
|
[
"q-bio.QM",
"cs.LG",
"eess.IV"
] | false |
2306.14902
|
2023-06-09T03:04:21Z
|
Molecule Design by Latent Space Energy-Based Modeling and Gradual
Distribution Shifting
|
[
"Deqian Kong",
"Bo Pang",
"Tian Han",
"Ying Nian Wu"
] |
Generation of molecules with desired chemical and biological properties such
as high drug-likeness, high binding affinity to target proteins, is critical
for drug discovery. In this paper, we propose a probabilistic generative model
to capture the joint distribution of molecules and their properties. Our model
assumes an energy-based model (EBM) in the latent space. Conditional on the
latent vector, the molecule and its properties are modeled by a molecule
generation model and a property regression model respectively. To search for
molecules with desired properties, we propose a sampling with gradual
distribution shifting (SGDS) algorithm, so that after learning the model
initially on the training data of existing molecules and their properties, the
proposed algorithm gradually shifts the model distribution towards the region
supported by molecules with desired values of properties. Our experiments show
that our method achieves very strong performance on various molecule design
tasks.
|
[
"q-bio.BM",
"cs.LG",
"stat.ML"
] | false |
2306.15676
|
2023-06-09T03:12:42Z
|
KAPLA: Pragmatic Representation and Fast Solving of Scalable NN
Accelerator Dataflow
|
[
"Zhiyao Li",
"Mingyu Gao"
] |
Dataflow scheduling decisions are of vital importance to neural network (NN)
accelerators. Recent scalable NN accelerators support a rich set of advanced
dataflow techniques. The problems of comprehensively representing and quickly
finding optimized dataflow schemes thus become significantly more complicated
and challenging. In this work, we first propose comprehensive and pragmatic
dataflow representations for temporal and spatial scheduling on scalable
multi-node NN architectures. An informal hierarchical taxonomy highlights the
tight coupling across different levels of the dataflow space as the major
difficulty for fast design exploration. A set of formal tensor-centric
directives accurately express various inter-layer and intra-layer schemes, and
allow for quickly determining their validity and efficiency. We then build a
generic, optimized, and fast dataflow solver, KAPLA, which makes use of the
pragmatic directives to explore the design space with effective validity check
and efficiency estimation. KAPLA decouples the upper inter-layer level for fast
pruning, and solves the lower intra-layer schemes with a novel bottom-up cost
descending method. KAPLA incurs energy overheads of only 2.2% and 7.7% on
the resulting dataflows for training and inference, respectively, compared to the
exhaustively searched optimal schemes. It also outperforms random and
machine-learning-based approaches, with more optimized results and orders of
magnitude faster search speedup.
|
[
"cs.AR",
"cs.AI",
"cs.LG",
"cs.PF",
"C.1.4; C.3"
] | false |
2307.05392
|
2023-06-09T10:10:03Z
|
Simplicial Message Passing for Chemical Property Prediction
|
[
"Hai Lan",
"Xian Wei"
] |
Recently, message-passing neural networks (MPNNs) have provided a promising tool
for dealing with molecular graphs and have achieved remarkable success in
facilitating the discovery and design of materials with desired properties.
However, the classical MPNN methods also suffer from a limitation in capturing
the strong topological information hidden in molecular structures, such as
nonisomorphic graphs. To address this problem, this work proposes a Simplicial
Message Passing (SMP) framework to better capture the topological information
from molecules, which can break through the limitation within the vanilla
message-passing paradigm. In SMP, a generalized message-passing framework is
established for aggregating the information from arbitrary-order simplicial
complexes, and a hierarchical structure is elaborated to allow information
exchange between simplices of different orders. We apply the SMP framework within
deep learning architectures for quantum-chemical properties prediction and
achieve state-of-the-art results. The results show that compared to traditional
MPNN, involving higher-order simplex can better capture the complex structure
of molecules and substantially enhance the performance of tasks. The SMP-based
model can provide a generalized framework for GNNs and aid in the discovery and
design of materials with tailored properties for various applications.
|
[
"cond-mat.mtrl-sci",
"cs.LG",
"physics.chem-ph"
] | false |
2306.05781
|
2023-06-09T09:49:16Z
|
Adaptivity Complexity for Causal Graph Discovery
|
[
"Davin Choo",
"Kirankumar Shiragur"
] |
Causal discovery from interventional data is an important problem, where the
task is to design an interventional strategy that learns the hidden ground
truth causal graph $G(V,E)$ on $|V| = n$ nodes while minimizing the number of
performed interventions. Most prior interventional strategies broadly fall into
two categories: non-adaptive and adaptive. Non-adaptive strategies decide on a
single fixed set of interventions to be performed while adaptive strategies can
decide on which nodes to intervene on sequentially based on past interventions.
While adaptive algorithms may use exponentially fewer interventions than their
non-adaptive counterparts, there are practical concerns that constrain the
amount of adaptivity allowed. Motivated by this trade-off, we study the problem
of $r$-adaptivity, where the algorithm designer recovers the causal graph under
a total of $r$ sequential rounds whilst trying to minimize the total number of
interventions. For this problem, we provide a $r$-adaptive algorithm that
achieves $O(\min\{r,\log n\} \cdot n^{1/\min\{r,\log n\}})$ approximation with
respect to the verification number, a well-known lower bound for adaptive
algorithms. Furthermore, for every $r$, we show that our approximation is
tight. Our definition of $r$-adaptivity interpolates nicely between the
non-adaptive ($r=1$) and fully adaptive ($r=n$) settings where our
approximation simplifies to $O(n)$ and $O(\log n)$ respectively, matching the
best-known approximation guarantees for both extremes. Our results also extend
naturally to bounded-size interventions.
|
[
"cs.LG",
"cs.AI",
"cs.DS",
"stat.ME",
"stat.ML"
] | false |
2306.06087
|
2023-06-09T17:49:56Z
|
Learning Not to Spoof
|
[
"David Byrd"
] |
As intelligent trading agents based on reinforcement learning (RL) gain
prevalence, it becomes more important to ensure that RL agents obey laws,
regulations, and human behavioral expectations. There is substantial literature
concerning the aversion of obvious catastrophes like crashing a helicopter or
bankrupting a trading account, but little around the avoidance of subtle
non-normative behavior for which there are examples, but no programmable
definition. Such behavior may violate legal or regulatory, rather than physical
or monetary, constraints.
In this article, I consider a series of experiments in which an intelligent
stock trading agent maximizes profit but may also inadvertently learn to spoof
the market in which it participates. I first inject a hand-coded spoofing agent
into a multi-agent market simulation and learn to recognize spoofing activity
sequences. Then I replace the hand-coded spoofing trader with a simple
profit-maximizing RL agent and observe that it independently discovers spoofing
as the optimal strategy. Finally, I introduce a method to incorporate the
recognizer as a normative guide, shaping the agent's perceived rewards and
altering its selected actions. The agent remains profitable while avoiding
spoofing behaviors that would result in even higher profit. After presenting
the empirical results, I conclude with some recommendations. The method should
generalize to the reduction of any unwanted behavior for which a recognizer can
be learned.
|
[
"cs.LG",
"cs.AI",
"cs.CE",
"cs.MA",
"q-fin.ST"
] | false |
2306.06174
|
2023-06-09T18:01:14Z
|
Active-Learning-Driven Surrogate Modeling for Efficient Simulation of
Parametric Nonlinear Systems
|
[
"Harshit Kapadia",
"Lihong Feng",
"Peter Benner"
] |
When repeated evaluations for varying parameter configurations of a
high-fidelity physical model are required, surrogate modeling techniques based
on model order reduction are desired. In absence of the governing equations
describing the dynamics, we need to construct the parametric reduced-order
surrogate model in a non-intrusive fashion. In this setting, the usual
residual-based error estimate for optimal parameter sampling associated with
the reduced basis method is not directly available. Our work provides a
non-intrusive optimality criterion to efficiently populate the parameter
snapshots, thereby, enabling us to effectively construct a parametric surrogate
model. We consider separate parameter-specific proper orthogonal decomposition
(POD) subspaces and propose an active-learning-driven surrogate model using
kernel-based shallow neural networks, abbreviated as ActLearn-POD-KSNN
surrogate model. To demonstrate the validity of our proposed ideas, we present
numerical experiments using two physical models, namely Burgers' equation and
shallow water equations. Both the models have mixed -- convective and diffusive
-- effects within their respective parameter domains, with each of them
dominating in certain regions. The proposed ActLearn-POD-KSNN surrogate model
efficiently predicts the solution at new parameter locations, even for a
setting with multiple interacting shock profiles.
|
[
"cs.LG",
"cs.CE",
"cs.NA",
"math.DS",
"math.NA",
"physics.flu-dyn"
] | false |
2306.06339
|
2023-06-10T04:22:13Z
|
Two-Stage Holistic and Contrastive Explanation of Image Classification
|
[
"Weiyan Xie",
"Xiao-Hui Li",
"Zhi Lin",
"Leonard K. M. Poon",
"Caleb Chen Cao",
"Nevin L. Zhang"
] |
The need to explain the output of a deep neural network classifier is now
widely recognized. While previous methods typically explain a single class in
the output, we advocate explaining the whole output, which is a probability
distribution over multiple classes. A whole-output explanation can help a human
user gain an overall understanding of model behaviour instead of only one
aspect of it. It can also provide a natural framework where one can examine the
evidence used to discriminate between competing classes, and thereby obtain
contrastive explanations. In this paper, we propose a contrastive whole-output
explanation (CWOX) method for image classification, and evaluate it using
quantitative metrics and through human subject studies. The source code of CWOX
is available at https://github.com/vaynexie/CWOX.
|
[
"cs.CV"
] | false |
2306.06359
|
2023-06-10T06:39:25Z
|
NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance
Fields against Adversarial Perturbations
|
[
"Yonggan Fu",
"Ye Yuan",
"Souvik Kundu",
"Shang Wu",
"Shunyao Zhang",
"Yingyan Lin"
] |
Generalizable Neural Radiance Fields (GNeRF) are one of the most promising
real-world solutions for novel view synthesis, thanks to their cross-scene
generalization capability and thus the possibility of instant rendering on new
scenes. While adversarial robustness is essential for real-world applications,
little study has been devoted to understanding its implication on GNeRF. We
hypothesize that because GNeRF is implemented by conditioning on the source
views from new scenes, which are often acquired from the Internet or
third-party providers, there are potential new security concerns regarding its
real-world applications. Meanwhile, existing understanding and solutions for
neural networks' adversarial robustness may not be applicable to GNeRF, due to
its 3D nature and uniquely diverse operations. To this end, we present NeRFool,
which to the best of our knowledge is the first work that sets out to
understand the adversarial robustness of GNeRF. Specifically, NeRFool unveils
the vulnerability patterns and important insights regarding GNeRF's adversarial
robustness. Built upon the above insights gained from NeRFool, we further
develop NeRFool+, which integrates two techniques capable of effectively
attacking GNeRF across a wide range of target views, and provide guidelines for
defending against our proposed attacks. We believe that our NeRFool/NeRFool+
lays the initial foundation for future innovations in developing robust
real-world GNeRF solutions. Our codes are available at:
https://github.com/GATECH-EIC/NeRFool.
|
[
"cs.CV"
] | false |
2306.06365
|
2023-06-10T07:01:08Z
|
FalconNet: Factorization for the Light-weight ConvNets
|
[
"Zhicheng Cai",
"Qiu Shen"
] |
Designing light-weight CNN models with few parameters and FLOPs is a
prominent research concern. However, three significant issues persist in the
current light-weight CNNs: i) the lack of architectural consistency leads to
redundancy and hindered capacity comparison, as well as the ambiguity in
causation between architectural choices and performance enhancement; ii) the
utilization of a single-branch depth-wise convolution compromises the model
representational capacity; iii) the depth-wise convolutions account for a large
proportion of parameters and FLOPs, while lacking an efficient method to make
them light-weight. To address these issues, we factorize the four vital
components of light-weight CNNs from coarse to fine and redesign them: i) we
design a light-weight overall architecture termed LightNet, which obtains
better performance by simply implementing the basic blocks of other
light-weight CNNs; ii) we abstract a Meta Light Block, which consists of
spatial operator and channel operator and uniformly describes current basic
blocks; iii) we propose RepSO, which constructs multiple spatial operator
branches to enhance the representational ability; iv) we introduce the concept
of receptive range, guided by which we propose RefCO to sparsely factorize the
channel operator. Based on the above four vital components, we present a novel
light-weight CNN model termed FalconNet. Experimental results validate that
FalconNet can achieve higher accuracy with fewer parameters and FLOPs compared
to existing light-weight CNNs.
|
[
"cs.CV"
] | false |
2306.06370
|
2023-06-10T07:27:00Z
|
AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt
Encoder
|
[
"Tal Shaharabany",
"Aviad Dahan",
"Raja Giryes",
"Lior Wolf"
] |
The recently introduced Segment Anything Model (SAM) combines a clever
architecture and large quantities of training data to obtain remarkable image
segmentation capabilities. However, it fails to reproduce such results for
Out-Of-Distribution (OOD) domains such as medical images. Moreover, while SAM
is conditioned on either a mask or a set of points, it may be desirable to have
a fully automatic solution. In this work, we replace SAM's conditioning with an
encoder that operates on the same input image. By adding this encoder and
without further fine-tuning SAM, we obtain state-of-the-art results on multiple
medical images and video benchmarks. This new encoder is trained via gradients
provided by a frozen SAM. For inspecting the knowledge within it, and providing
a lightweight segmentation solution, we also learn to decode it into a mask by
a shallow deconvolution network.
|
[
"cs.CV"
] | false |
2306.06414
|
2023-06-10T11:20:04Z
|
Revealing Model Biases: Assessing Deep Neural Networks via Recovered
Sample Analysis
|
[
"Mohammad Mahdi Mehmanchi",
"Mahbod Nouri",
"Mohammad Sabokrou"
] |
This paper proposes a straightforward and cost-effective approach to assess
whether a deep neural network (DNN) relies on the primary concepts of training
samples or simply learns discriminative, yet simple and irrelevant features
that can differentiate between classes. The paper highlights that DNNs, as
discriminative classifiers, often find the simplest features to discriminate
between classes, leading to a potential bias towards irrelevant features and
sometimes missing generalization. While a generalization test is one way to
evaluate a trained model's performance, it can be costly and may not cover all
scenarios to ensure that the model has learned the primary concepts.
Furthermore, even after conducting a generalization test, identifying bias in
the model may not be possible. Here, the paper proposes a method that involves
recovering samples from the parameters of the trained model and analyzing the
reconstruction quality. We believe that if the model's weights are optimized to
discriminate based on some features, these features will be reflected in the
reconstructed samples. If the recovered samples contain the primary concepts of
the training data, it can be concluded that the model has learned the essential
and determining features. On the other hand, if the recovered samples contain
irrelevant features, it can be concluded that the model is biased towards these
features. The proposed method does not require any test or generalization
samples, only the parameters of the trained model and the training data that
lie on the margin. Our experiments demonstrate that the proposed method can
determine whether the model has learned the desired features of the training
data. The paper highlights that our understanding of how these models work is
limited, and the proposed approach addresses this issue.
|
[
"cs.CV"
] | false |
2306.06505
|
2023-06-10T18:42:36Z
|
Vista-Morph: Unsupervised Image Registration of Visible-Thermal Facial
Pairs
|
[
"Catherine Ordun",
"Edward Raff",
"Sanjay Purushotham"
] |
For a variety of biometric cross-spectral tasks, Visible-Thermal (VT) facial
pairs are used. However, due to a lack of calibration in the lab, photographic
capture between two different sensors leads to severely misaligned pairs that
can lead to poor results for person re-identification and generative AI. To
solve this problem, we introduce our approach for VT image registration called
Vista Morph. Unlike existing VT facial registration that requires manual,
hand-crafted features for pixel matching and/or a supervised thermal reference,
Vista Morph is completely unsupervised without the need for a reference. By
learning the affine matrix through a Vision Transformer (ViT)-based Spatial
Transformer Network (STN) and Generative Adversarial Networks (GAN), Vista
Morph successfully aligns facial and non-facial VT images. Our approach learns
warps in Hard, No, and Low-light visual settings and is robust to geometric
perturbations and erasure at test time. We conduct a downstream generative AI
task to show that registering training data with Vista Morph improves subject
identity of generated thermal faces when performing V2T image translation.
|
[
"cs.CV"
] | false |
2306.06367
|
2023-06-10T07:14:59Z
|
Shuffled Autoregression For Motion Interpolation
|
[
"Shuo Huang",
"Jia Jia",
"Zongxin Yang",
"Wei Wang",
"Haozhe Wu",
"Yi Yang",
"Junliang Xing"
] |
This work aims to provide a deep-learning solution for the motion
interpolation task. Previous studies solve it with geometric weight functions.
Some other works propose neural networks for different problem settings with
consecutive pose sequences as input. However, motion interpolation is a more
complex problem that takes isolated poses (e.g., only one start pose and one
end pose) as input. When applied to motion interpolation, these deep learning
methods have limited performance since they do not leverage the flexible
dependencies between interpolation frames as the original geometric formulas
do. To realize this interpolation characteristic, we propose a novel framework,
referred to as \emph{Shuffled AutoRegression}, which expands the autoregression
to generate in arbitrary (shuffled) order and models any inter-frame
dependencies as a directed acyclic graph. We further propose an approach to
constructing a particular kind of dependency graph, with three stages assembled
into an end-to-end spatial-temporal motion Transformer. Experimental results on
one of the current largest datasets show that our model generates vivid and
coherent motions from only one start frame to one end frame and outperforms
competing methods by a large margin. The proposed model is also extensible to
multiple keyframes' motion interpolation tasks and other areas' interpolation.
|
[
"cs.CV",
"cs.AI"
] | false |
2306.06378
|
2023-06-10T08:25:16Z
|
An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees
|
[
"Alexandros Gkillas",
"Dimitris Ampeliotis",
"Kostas Berberidis"
] |
In this paper, we propose a novel methodology for addressing the
hyperspectral image deconvolution problem. This problem is highly ill-posed,
and thus, requires proper priors (regularizers) to model the inherent
spectral-spatial correlations of the HSI signals. To this end, a new
optimization problem is formulated, leveraging a learnable regularizer in the
form of a neural network. To tackle this problem, an effective solver is
proposed using the half quadratic splitting methodology. The derived iterative
solver is then expressed as a fixed-point calculation problem within the Deep
Equilibrium (DEQ) framework, resulting in an interpretable architecture, with
clear explainability to its parameters and convergence properties with
practical benefits. The proposed model is a first attempt to handle the
classical HSI degradation problem with different blurring kernels and noise
levels via a single deep equilibrium model with significant computational
efficiency. Extensive numerical experiments validate the superiority of the
proposed methodology over other state-of-the-art methods. This superior
restoration performance is achieved while requiring 99.85\% less computation
time as compared to existing methods.
|
[
"cs.CV",
"eess.IV"
] | false |
2306.06410
|
2023-06-10T11:04:10Z
|
OpenSR: Open-Modality Speech Recognition via Maintaining Multi-Modality
Alignment
|
[
"Xize Cheng",
"Tao Jin",
"Linjun Li",
"Wang Lin",
"Xinyu Duan",
"Zhou Zhao"
] |
Speech Recognition builds a bridge between the multimedia streaming
(audio-only, visual-only or audio-visual) and the corresponding text
transcription. However, training a model for a new domain is often hindered by
the scarcity of new-domain utterances, especially labeled visual utterances. To
break through this restriction, we attempt to achieve
zero-shot modality transfer by maintaining the multi-modality alignment in
phoneme space learned with unlabeled multimedia utterances in the high resource
domain during the pre-training \cite{shi2022learning}, and propose a training
system Open-modality Speech Recognition (\textbf{OpenSR}) that enables the
models trained on a single modality (e.g., audio-only) applicable to more
modalities (e.g., visual-only and audio-visual). Furthermore, we employ a
cluster-based prompt tuning strategy to handle the domain shift for the
scenarios with only common words in the new domain utterances. We demonstrate
that OpenSR enables modality transfer from one to any in three different
settings (zero-, few- and full-shot), and achieves highly competitive zero-shot
performance compared to the existing few-shot and full-shot lip-reading
methods. To the best of our knowledge, OpenSR achieves the state-of-the-art
performance of word error rate in LRS2 on audio-visual speech recognition and
lip-reading with 2.7\% and 25.0\%, respectively. The code and demo are
available at https://github.com/Exgc/OpenSR.
|
[
"cs.CL",
"cs.CV"
] | false |
2306.06434
|
2023-06-10T12:57:57Z
|
Sliding Window Neural Generated Tracking Based on Measurement Model
|
[
"Haya Ejjawi",
"Amal El Fallah Seghrouchni",
"Frederic Barbaresco",
"Raed Abu Zitar"
] |
In the pursuit of further advancement in the field of target tracking, this
paper explores the efficacy of a feedforward neural network in predicting
drone tracks, aiming to eventually compare the tracks created by the
well-known Kalman filter and the ones created by our proposed neural network.
The unique feature of our proposed neural network tracker is that it is using
only a measurement model to estimate the next states of the track. Object model
selection and linearization are among the challenges always faced in the
tracking process. The neural network uses a sliding window to incorporate the
history of measurements when applying estimations of the track values. The
testing results are comparable to the ones generated by the Kalman filter,
especially for the cases where there is low measurement covariance. The
complexity of linearization is avoided when using this proposed model.
|
[
"cs.CV",
"eess.SP"
] | false |
2306.06441
|
2023-06-10T13:41:02Z
|
Image Vectorization: a Review
|
[
"Maria Dziuba",
"Ivan Jarsky",
"Valeria Efimova",
"Andrey Filchenkov"
] |
Nowadays, there are many diffusion and autoregressive models that show
impressive results for generating images from text and other input domains.
However, these methods are not intended for ultra-high-resolution image
synthesis. Vector graphics are devoid of this disadvantage, so the generation
of images in this format looks very promising. Instead of generating vector
images directly, you can first synthesize a raster image and then apply
vectorization. Vectorization is the process of converting a raster image into a
similar vector image using primitive shapes. Besides being similar, the
generated vector image is also required to contain the minimum number of shapes
for
rendering. In this paper, we focus specifically on machine learning-compatible
vectorization methods. We are considering Mang2Vec, Deep Vectorization of
Technical Drawings, DiffVG, and LIVE models. We also provide a brief overview
of existing online methods. We also recall other algorithmic methods, the
Im2Vec and ClipGEN models, but they do not participate in the comparison, since
there is no open implementation of these methods, or their official
implementations do not work correctly. Our research shows that despite the
ability to directly
specify the number and type of shapes, existing machine learning methods take a
very long time to run and do not accurately recreate the original image. We
believe that there is no fast universal automatic approach and human control is
required for every method.
|
[
"cs.CV",
"cs.GR"
] | false |