arxiv_id
stringlengths 10
10
| published
stringlengths 20
20
| titles
stringlengths 9
243
| authors
listlengths 1
389
| abstract
stringlengths 96
3.09k
| categories
listlengths 1
10
| selected
bool 2
classes |
---|---|---|---|---|---|---|
2305.15872
|
2023-05-25T09:07:04Z
|
Jointprop: Joint Semi-supervised Learning for Entity and Relation
Extraction with Heterogeneous Graph-based Propagation
|
[
"Yandan Zheng",
"Anran Hao",
"Anh Tuan Luu"
] |
Semi-supervised learning has been an important approach to address challenges
in extracting entities and relations from limited data. However, current
semi-supervised works handle the two tasks (i.e., Named Entity Recognition and
Relation Extraction) separately and ignore the cross-correlation of entity and
relation instances as well as the existence of similar instances across
unlabeled data. To alleviate the issues, we propose Jointprop, a Heterogeneous
Graph-based Propagation framework for joint semi-supervised entity and relation
extraction, which captures the global structure information between individual
tasks and exploits interactions within unlabeled data. Specifically, we
construct a unified span-based heterogeneous graph from entity and relation
candidates and propagate class labels based on confidence scores. We then
employ a propagation learning scheme to leverage the affinities between
labelled and unlabeled samples. Experiments on benchmark datasets show that our
framework outperforms the state-of-the-art semi-supervised approaches on NER
and RE tasks. We show that the joint semi-supervised learning of the two tasks
benefits from their codependency and validates the importance of utilizing the
shared information between unlabeled data.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.15894
|
2023-05-25T09:48:50Z
|
Private Meeting Summarization Without Performance Loss
|
[
"Seolhwa Lee",
"Anders Søgaard"
] |
Meeting summarization has an enormous business potential, but in addition to
being a hard problem, roll-out is challenged by privacy concerns. We explore
the problem of meeting summarization under differential privacy constraints and
find, to our surprise, that while differential privacy leads to slightly lower
performance on in-sample data, differential privacy improves performance when
evaluated on unseen meeting types. Since meeting summarization systems will
encounter a great variety of meeting types in practical employment scenarios,
this observation makes safe meeting summarization seem much more feasible. We
perform extensive error analysis and identify potential risks in meeting
summarization under differential privacy, including a faithfulness analysis.
|
[
"cs.CL",
"cs.CR"
] | false |
2305.16000
|
2023-05-25T12:43:29Z
|
Do You Hear The People Sing? Key Point Analysis via Iterative Clustering
and Abstractive Summarisation
|
[
"Hao Li",
"Viktor Schlegel",
"Riza Batista-Navarro",
"Goran Nenadic"
] |
Argument summarisation is a promising but currently under-explored field.
Recent work has aimed to provide textual summaries in the form of concise and
salient short texts, i.e., key points (KPs), in a task known as Key Point
Analysis (KPA). One of the main challenges in KPA is finding high-quality key
point candidates from dozens of arguments even in a small corpus. Furthermore,
evaluating key points is crucial in ensuring that the automatically generated
summaries are useful. Although automatic methods for evaluating summarisation
have considerably advanced over the years, they mainly focus on sentence-level
comparison, making it difficult to measure the quality of a summary (a set of
KPs) as a whole. Aggravating this problem is the fact that human evaluation is
costly and unreproducible. To address the above issues, we propose a two-step
abstractive summarisation framework based on neural topic modelling with an
iterative clustering procedure, to generate key points which are aligned with
how humans identify key points. Our experiments show that our framework
advances the state of the art in KPA, with performance improvement of up to 14
(absolute) percentage points, in terms of both ROUGE and our own proposed
evaluation metrics. Furthermore, we evaluate the generated summaries using a
novel set-based evaluation toolkit. Our quantitative analysis demonstrates the
effectiveness of our proposed evaluation metrics in assessing the quality of
generated KPs. Human evaluation further demonstrates the advantages of our
approach and validates that our proposed evaluation metric is more consistent
with human judgment than ROUGE scores.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.16048
|
2023-05-25T13:25:49Z
|
UFO: Unified Fact Obtaining for Commonsense Question Answering
|
[
"Zhifeng Li",
"Yifan Fan",
"Bowei Zou",
"Yu Hong"
] |
Leveraging external knowledge to enhance the reasoning ability is crucial for
commonsense question answering. However, the existing knowledge bases heavily
rely on manual annotation which unavoidably causes deficiency in coverage of
world-wide commonsense knowledge. Accordingly, the knowledge bases fail to be
flexible enough to support the reasoning over diverse questions. Recently,
large-scale language models (LLMs) have dramatically improved the intelligence
in capturing and leveraging knowledge, which opens up a new way to address the
issue of eliciting knowledge from language models. We propose a Unified Facts
Obtaining (UFO) approach. UFO turns LLMs into knowledge sources and produces
relevant facts (knowledge statements) for the given question. We first develop
a unified prompt consisting of demonstrations that cover different aspects of
commonsense and different question styles. On this basis, we instruct the LLMs
to generate question-related supporting facts for various commonsense questions
via prompting. After facts generation, we apply a dense retrieval-based fact
selection strategy to choose the best-matched fact. This kind of facts will be
fed into the answer inference model along with the question. Notably, due to
the design of unified prompts, UFO can support reasoning in various commonsense
aspects (including general commonsense, scientific commonsense, and social
commonsense). Extensive experiments on CommonsenseQA 2.0, OpenBookQA, QASC, and
Social IQA benchmarks show that UFO significantly improves the performance of
the inference model and outperforms manually constructed knowledge sources.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.16057
|
2023-05-25T13:42:08Z
|
Fake News Detection and Behavioral Analysis: Case of COVID-19
|
[
"Chih-Yuan Li",
"Navya Martin Kollapally",
"Soon Ae Chun",
"James Geller"
] |
While the world has been combating COVID-19 for over three years, an ongoing
"Infodemic" due to the spread of fake news regarding the pandemic has also been
a global issue. The existence of the fake news impact different aspect of our
daily lives, including politics, public health, economic activities, etc.
Readers could mistake fake news for real news, and consequently have less
access to authentic information. This phenomenon will likely cause confusion of
citizens and conflicts in society. Currently, there are major challenges in
fake news research. It is challenging to accurately identify fake news data in
social media posts. In-time human identification is infeasible as the amount of
the fake news data is overwhelming. Besides, topics discussed in fake news are
hard to identify due to their similarity to real news. The goal of this paper
is to identify fake news on social media to help stop the spread. We present
Deep Learning approaches and an ensemble approach for fake news detection. Our
detection models achieved higher accuracy than previous studies. The ensemble
approach further improved the detection performance. We discovered feature
differences between fake news and real news items. When we added them into the
sentence embeddings, we found that they affected the model performance. We
applied a hybrid method and built models for recognizing topics from posts. We
found half of the identified topics were overlapping in fake news and real
news, which could increase confusion in the population.
|
[
"cs.LG",
"cs.CL",
"68"
] | false |
2305.16195
|
2023-05-25T15:55:42Z
|
Abstractive Summary Generation for the Urdu Language
|
[
"Ali Raza",
"Hadia Sultan Raja",
"Usman Maratib"
] |
Abstractive summary generation is a challenging task that requires the model
to comprehend the source text and generate a concise and coherent summary that
captures the essential information. In this paper, we explore the use of an
encoder/decoder approach for abstractive summary generation in the Urdu
language. We employ a transformer-based model that utilizes self-attention
mechanisms to encode the input text and generate a summary. Our experiments
show that our model can produce summaries that are grammatically correct and
semantically meaningful. We evaluate our model on a publicly available dataset
and achieve state-of-the-art results in terms of Rouge scores. We also conduct
a qualitative analysis of our model's output to assess its effectiveness and
limitations. Our findings suggest that the encoder/decoder approach is a
promising method for abstractive summary generation in Urdu and can be extended
to other languages with suitable modifications.
|
[
"cs.CL",
"cs.AI",
"68T50 (Primary) 03B65, 91F20 (Secondary)",
"I.2; I.7"
] | false |
2305.16470
|
2023-05-25T21:01:00Z
|
Measuring the Effect of Influential Messages on Varying Personas
|
[
"Chenkai Sun",
"Jinning Li",
"Hou Pong Chan",
"ChengXiang Zhai",
"Heng Ji"
] |
Predicting how a user responds to news events enables important applications
such as allowing intelligent agents or content producers to estimate the effect
on different communities and revise unreleased messages to prevent unexpected
bad outcomes such as social conflict and moral injury. We present a new task,
Response Forecasting on Personas for News Media, to estimate the response a
persona (characterizing an individual or a group) might have upon seeing a news
message. Compared to the previous efforts which only predict generic comments
to news, the proposed task not only introduces personalization in the modeling
but also predicts the sentiment polarity and intensity of each response. This
enables more accurate and comprehensive inference on the mental state of the
persona. Meanwhile, the generated sentiment dimensions make the evaluation and
application more reliable. We create the first benchmark dataset, which
consists of 13,357 responses to 3,847 news headlines from Twitter. We further
evaluate the SOTA neural language models with our dataset. The empirical
results suggest that the included persona attributes are helpful for the
performance of all response dimensions. Our analysis shows that the
best-performing models are capable of predicting responses that are consistent
with the personas, and as a byproduct, the task formulation also enables many
interesting applications in the analysis of social network groups and their
opinions, such as the discovery of extreme opinion groups.
|
[
"cs.CL",
"cs.LG"
] | false |
2305.16521
|
2023-05-25T22:55:32Z
|
Label Agnostic Pre-training for Zero-shot Text Classification
|
[
"Christopher Clarke",
"Yuzhao Heng",
"Yiping Kang",
"Krisztian Flautner",
"Lingjia Tang",
"Jason Mars"
] |
Conventional approaches to text classification typically assume the existence
of a fixed set of predefined labels to which a given text can be classified.
However, in real-world applications, there exists an infinite label space for
describing a given text. In addition, depending on the aspect (sentiment,
topic, etc.) and domain of the text (finance, legal, etc.), the interpretation
of the label can vary greatly. This makes the task of text classification,
particularly in the zero-shot scenario, extremely challenging. In this paper,
we investigate the task of zero-shot text classification with the aim of
improving the ability of pre-trained language models (PLMs) to generalize to
both seen and unseen data across varying aspects and domains. To solve this we
introduce two new simple yet effective pre-training strategies, Implicit and
Explicit pre-training. These methods inject aspect-level understanding into the
model at train time with the goal of conditioning the model to build task-level
understanding. To evaluate this, we construct and release UTCD, a new benchmark
dataset for evaluating text classification in zero-shot settings. Experimental
results on UTCD show that our approach achieves improved zero-shot
generalization on a suite of challenging datasets across an array of zero-shot
formalizations.
|
[
"cs.CL",
"cs.LG"
] | false |
2305.15663
|
2023-05-25T02:16:32Z
|
Mixture-of-Expert Conformer for Streaming Multilingual ASR
|
[
"Ke Hu",
"Bo Li",
"Tara N. Sainath",
"Yu Zhang",
"Francoise Beaufays"
] |
End-to-end models with large capacity have significantly improved
multilingual automatic speech recognition, but their computation cost poses
challenges for on-device applications. We propose a streaming truly
multilingual Conformer incorporating mixture-of-expert (MoE) layers that learn
to only activate a subset of parameters in training and inference. The MoE
layer consists of a softmax gate which chooses the best two experts among many
in forward propagation. The proposed MoE layer offers efficient inference by
activating a fixed number of parameters as the number of experts increases. We
evaluate the proposed model on a set of 12 languages, and achieve an average
11.9% relative improvement in WER over the baseline. Compared to an adapter
model using ground truth information, our MoE model achieves similar WER and
activates similar number of parameters but without any language information. We
further show around 3% relative WER improvement by multilingual shallow fusion.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2305.15760
|
2023-05-25T06:20:29Z
|
Svarah: Evaluating English ASR Systems on Indian Accents
|
[
"Tahir Javed",
"Sakshi Joshi",
"Vignesh Nagarajan",
"Sai Sundaresan",
"Janki Nawale",
"Abhigyan Raman",
"Kaushal Bhogale",
"Pratyush Kumar",
"Mitesh M. Khapra"
] |
India is the second largest English-speaking country in the world with a
speaker base of roughly 130 million. Thus, it is imperative that automatic
speech recognition (ASR) systems for English should be evaluated on Indian
accents. Unfortunately, Indian speakers find a very poor representation in
existing English ASR benchmarks such as LibriSpeech, Switchboard, Speech Accent
Archive, etc. In this work, we address this gap by creating Svarah, a benchmark
that contains 9.6 hours of transcribed English audio from 117 speakers across
65 geographic locations throughout India, resulting in a diverse range of
accents. Svarah comprises both read speech and spontaneous conversational data,
covering various domains, such as history, culture, tourism, etc., ensuring a
diverse vocabulary. We evaluate 6 open source ASR models and 2 commercial ASR
systems on Svarah and show that there is clear scope for improvement on Indian
accents. Svarah as well as all our code will be publicly available.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2305.15853
|
2023-05-25T08:44:11Z
|
Sequential Integrated Gradients: a simple but effective method for
explaining language models
|
[
"Joseph Enguehard"
] |
Several explanation methods such as Integrated Gradients (IG) can be
characterised as path-based methods, as they rely on a straight line between
the data and an uninformative baseline. However, when applied to language
models, these methods produce a path for each word of a sentence
simultaneously, which could lead to creating sentences from interpolated words
either having no clear meaning, or having a significantly different meaning
compared to the original sentence. In order to keep the meaning of these
sentences as close as possible to the original one, we propose Sequential
Integrated Gradients (SIG), which computes the importance of each word in a
sentence by keeping fixed every other words, only creating interpolations
between the baseline and the word of interest. Moreover, inspired by the
training procedure of several language models, we also propose to replace the
baseline token "pad" with the trained token "mask". While being a simple
improvement over the original IG method, we show on various models and datasets
that SIG proves to be a very effective method for explaining language models.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.15867
|
2023-05-25T08:59:36Z
|
Extracting Text Representations for Terms and Phrases in Technical
Domains
|
[
"Francesco Fusco",
"Diego Antognini"
] |
Extracting dense representations for terms and phrases is a task of great
importance for knowledge discovery platforms targeting highly-technical fields.
Dense representations are used as features for downstream components and have
multiple applications ranging from ranking results in search to summarization.
Common approaches to create dense representations include training
domain-specific embeddings with self-supervised setups or using sentence
encoder models trained over similarity tasks. In contrast to static embeddings,
sentence encoders do not suffer from the out-of-vocabulary (OOV) problem, but
impose significant computational costs. In this paper, we propose a fully
unsupervised approach to text encoding that consists of training small
character-based models with the objective of reconstructing large pre-trained
embedding matrices. Models trained with this approach can not only match the
quality of sentence encoders in technical domains, but are 5 times smaller and
up to 10 times faster, even on high-end GPUs.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.15904
|
2023-05-25T10:06:08Z
|
MTCue: Learning Zero-Shot Control of Extra-Textual Attributes by
Leveraging Unstructured Context in Neural Machine Translation
|
[
"Sebastian Vincent",
"Robert Flynn",
"Carolina Scarton"
] |
Efficient utilisation of both intra- and extra-textual context remains one of
the critical gaps between machine and human translation. Existing research has
primarily focused on providing individual, well-defined types of context in
translation, such as the surrounding text or discrete external variables like
the speaker's gender. This work introduces MTCue, a novel neural machine
translation (NMT) framework that interprets all context (including discrete
variables) as text. MTCue learns an abstract representation of context,
enabling transferability across different data settings and leveraging similar
attributes in low-resource scenarios. With a focus on a dialogue domain with
access to document and metadata context, we extensively evaluate MTCue in four
language pairs in both translation directions. Our framework demonstrates
significant improvements in translation quality over a parameter-matched
non-contextual baseline, as measured by BLEU (+0.88) and Comet (+1.58).
Moreover, MTCue significantly outperforms a "tagging" baseline at translating
English text. Analysis reveals that the context encoder of MTCue learns a
representation space that organises context based on specific attributes, such
as formality, enabling effective zero-shot control. Pre-training on context
embeddings also improves MTCue's few-shot performance compared to the "tagging"
baseline. Finally, an ablation study conducted on model components and
contextual variables further supports the robustness of MTCue for context-based
NMT.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.15937
|
2023-05-25T11:05:54Z
|
Visually grounded few-shot word acquisition with fewer shots
|
[
"Leanne Nortje",
"Benjamin van Niekerk",
"Herman Kamper"
] |
We propose a visually grounded speech model that acquires new words and their
visual depictions from just a few word-image example pairs. Given a set of test
images and a spoken query, we ask the model which image depicts the query word.
Previous work has simplified this problem by either using an artificial setting
with digit word-image pairs or by using a large number of examples per class.
We propose an approach that can work on natural word-image pairs but with less
examples, i.e. fewer shots. Our approach involves using the given word-image
example pairs to mine new unsupervised word-image training pairs from large
collections of unlabelled speech and images. Additionally, we use a
word-to-image attention mechanism to determine word-image similarity. With this
new model, we achieve better performance with fewer shots than any existing
approach.
|
[
"cs.CL",
"cs.AI",
"eess.AS"
] | false |
2305.16051
|
2023-05-25T13:34:09Z
|
What about em? How Commercial Machine Translation Fails to Handle
(Neo-)Pronouns
|
[
"Anne Lauscher",
"Debora Nozza",
"Archie Crowley",
"Ehm Miltersen",
"Dirk Hovy"
] |
As 3rd-person pronoun usage shifts to include novel forms, e.g., neopronouns,
we need more research on identity-inclusive NLP. Exclusion is particularly
harmful in one of the most popular NLP applications, machine translation (MT).
Wrong pronoun translations can discriminate against marginalized groups, e.g.,
non-binary individuals (Dev et al., 2021). In this ``reality check'', we study
how three commercial MT systems translate 3rd-person pronouns. Concretely, we
compare the translations of gendered vs. gender-neutral pronouns from English
to five other languages (Danish, Farsi, French, German, Italian), and vice
versa, from Danish to English. Our error analysis shows that the presence of a
gender-neutral pronoun often leads to grammatical and semantic translation
errors. Similarly, gender neutrality is often not preserved. By surveying the
opinions of affected native speakers from diverse languages, we provide
recommendations to address the issue in future MT research.
|
[
"cs.CL",
"cs.AI",
"cs.CY"
] | false |
2305.16107
|
2023-05-25T14:39:47Z
|
VioLA: Unified Codec Language Models for Speech Recognition, Synthesis,
and Translation
|
[
"Tianrui Wang",
"Long Zhou",
"Ziqiang Zhang",
"Yu Wu",
"Shujie Liu",
"Yashesh Gaur",
"Zhuo Chen",
"Jinyu Li",
"Furu Wei"
] |
Recent research shows a big convergence in model architecture, training
objectives, and inference methods across various tasks for different
modalities. In this paper, we propose VioLA, a single auto-regressive
Transformer decoder-only network that unifies various cross-modal tasks
involving speech and text, such as speech-to-text, text-to-text,
text-to-speech, and speech-to-speech tasks, as a conditional codec language
model task via multi-task learning framework. To accomplish this, we first
convert all the speech utterances to discrete tokens (similar to the textual
data) using an offline neural codec encoder. In such a way, all these tasks are
converted to token-based sequence conversion problems, which can be naturally
handled with one conditional language model. We further integrate task IDs
(TID) and language IDs (LID) into the proposed model to enhance the modeling
capability of handling different languages and tasks. Experimental results
demonstrate that the proposed VioLA model can support both single-modal and
cross-modal tasks well, and the decoder-only model achieves a comparable and
even better performance than the strong baselines.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2305.16162
|
2023-05-25T15:25:34Z
|
Feature Collapse
|
[
"Thomas Laurent",
"James H. von Brecht",
"Xavier Bresson"
] |
We formalize and study a phenomenon called feature collapse that makes
precise the intuitive idea that entities playing a similar role in a learning
task receive similar representations. As feature collapse requires a notion of
task, we leverage a simple but prototypical NLP task to study it. We start by
showing experimentally that feature collapse goes hand in hand with
generalization. We then prove that, in the large sample limit, distinct words
that play identical roles in this NLP task receive identical local feature
representations in a neural network. This analysis reveals the crucial role
that normalization mechanisms, such as LayerNorm, play in feature collapse and
in generalization.
|
[
"cs.LG",
"cs.AI",
"cs.CL"
] | false |
2305.16353
|
2023-05-25T02:54:29Z
|
Betray Oneself: A Novel Audio DeepFake Detection Model via
Mono-to-Stereo Conversion
|
[
"Rui Liu",
"Jinhua Zhang",
"Guanglai Gao",
"Haizhou Li"
] |
Audio Deepfake Detection (ADD) aims to detect the fake audio generated by
text-to-speech (TTS), voice conversion (VC) and replay, etc., which is an
emerging topic. Traditionally we take the mono signal as input and focus on
robust feature extraction and effective classifier design. However, the
dual-channel stereo information in the audio signal also includes important
cues for deepfake, which has not been studied in the prior work. In this paper,
we propose a novel ADD model, termed as M2S-ADD, that attempts to discover
audio authenticity cues during the mono-to-stereo conversion process. We first
projects the mono to a stereo signal using a pretrained stereo synthesizer,
then employs a dual-branch neural architecture to process the left and right
channel signals, respectively. In this way, we effectively reveal the artifacts
in the fake audio, thus improve the ADD performance. The experiments on the
ASVspoof2019 database show that M2S-ADD outperforms all baselines that input
mono. We release the source code at \url{https://github.com/AI-S2-Lab/M2S-ADD}.
|
[
"cs.SD",
"cs.AI",
"cs.CL"
] | false |
2305.16366
|
2023-05-25T11:35:52Z
|
Decomposing the Enigma: Subgoal-based Demonstration Learning for Formal
Theorem Proving
|
[
"Xueliang Zhao",
"Wenda Li",
"Lingpeng Kong"
] |
Large language models~(LLMs) present an intriguing avenue of exploration in
the domain of formal theorem proving. Nonetheless, the full utilization of
these models, particularly in terms of demonstration formatting and
organization, remains an underexplored area. In an endeavor to enhance the
efficacy of LLMs, we introduce a subgoal-based demonstration learning
framework, consisting of two primary elements: Firstly, drawing upon the
insights of subgoal learning from the domains of reinforcement learning and
robotics, we propose the construction of distinct subgoals for each
demonstration example and refine these subgoals in accordance with the
pertinent theories of subgoal learning. Secondly, we build upon recent advances
in diffusion models to predict the optimal organization, simultaneously
addressing two intricate issues that persist within the domain of demonstration
organization: subset selection and order determination. Through the integration
of subgoal-based learning methodologies, we have successfully increased the
prevailing proof accuracy from 38.9\% to 44.3\% on the miniF2F benchmark.
Furthermore, the adoption of diffusion models for demonstration organization
can lead to an additional enhancement in accuracy to 45.5\%, or a $5\times$
improvement in sampling efficiency compared with the long-standing
state-of-the-art method. Our code is available at
\url{https://github.com/HKUNLP/subgoal-theorem-prover}.
|
[
"cs.CL",
"cs.AI",
"cs.LG",
"cs.LO"
] | false |
2305.16367
|
2023-05-25T11:36:52Z
|
Role-Play with Large Language Models
|
[
"Murray Shanahan",
"Kyle McDonell",
"Laria Reynolds"
] |
As dialogue agents become increasingly human-like in their performance, it is
imperative that we develop effective ways to describe their behaviour in
high-level terms without falling into the trap of anthropomorphism. In this
paper, we foreground the concept of role-play. Casting dialogue agent behaviour
in terms of role-play allows us to draw on familiar folk psychological terms,
without ascribing human characteristics to language models they in fact lack.
Two important cases of dialogue agent behaviour are addressed this way, namely
(apparent) deception and (apparent) self-awareness.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | true |
2305.16371
|
2023-05-25T13:06:01Z
|
INTapt: Information-Theoretic Adversarial Prompt Tuning for Enhanced
Non-Native Speech Recognition
|
[
"Eunseop Yoon",
"Hee Suk Yoon",
"John Harvill",
"Mark Hasegawa-Johnson",
"Chang D. Yoo"
] |
Automatic Speech Recognition (ASR) systems have attained unprecedented
performance with large speech models pre-trained based on self-supervised
speech representation learning. However, these pre-trained speech models suffer
from representational bias as they tend to better represent those prominent
accents (i.e., native (L1) English accent) in the pre-training speech corpus
than less represented accents, resulting in a deteriorated performance for
non-native (L2) English accents. Although there have been some approaches to
mitigate this issue, all of these methods require updating the pre-trained
model weights. In this paper, we propose Information Theoretic Adversarial
Prompt Tuning (INTapt), which introduces prompts concatenated to the original
input that can re-modulate the attention of the pre-trained model such that the
corresponding input resembles a native (L1) English speech without updating the
backbone weights. INTapt is trained simultaneously in the following two
manners: (1) adversarial training to reduce accent feature dependence between
the original input and the prompt-concatenated input and (2) training to
minimize CTC loss for improving ASR performance to a prompt-concatenated input.
Experimental results show that INTapt improves the performance of L2 English
and increases feature similarity between L2 and L1 accents.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2305.16433
|
2023-05-25T19:15:06Z
|
Neural Machine Translation for Mathematical Formulae
|
[
"Felix Petersen",
"Moritz Schubotz",
"Andre Greiner-Petter",
"Bela Gipp"
] |
We tackle the problem of neural machine translation of mathematical formulae
between ambiguous presentation languages and unambiguous content languages.
Compared to neural machine translation on natural language, mathematical
formulae have a much smaller vocabulary and much longer sequences of symbols,
while their translation requires extreme precision to satisfy mathematical
information needs. In this work, we perform the tasks of translating from LaTeX
to Mathematica as well as from LaTeX to semantic LaTeX. While recurrent,
recursive, and transformer networks struggle with preserving all contained
information, we find that convolutional sequence-to-sequence networks achieve
95.1% and 90.7% exact matches, respectively.
|
[
"cs.CL",
"cs.SC",
"stat.AP"
] | false |
2305.16504
|
2023-05-25T22:10:20Z
|
On the Tool Manipulation Capability of Open-source Large Language Models
|
[
"Qiantong Xu",
"Fenglu Hong",
"Bo Li",
"Changran Hu",
"Zhengyu Chen",
"Jian Zhang"
] |
Recent studies on software tool manipulation with large language models
(LLMs) mostly rely on closed model APIs. The industrial adoption of these
models is substantially constrained due to the security and robustness risks in
exposing information to closed LLM API services. In this paper, we ask can we
enhance open-source LLMs to be competitive to leading closed LLM APIs in tool
manipulation, with practical amount of human supervision. By analyzing common
tool manipulation failures, we first demonstrate that open-source LLMs may
require training with usage examples, in-context demonstration and generation
style regulation to resolve failures. These insights motivate us to revisit
classical methods in LLM literature, and demonstrate that we can adapt them as
model alignment with programmatic data generation, system prompts and
in-context demonstration retrievers to enhance open-source LLMs for tool
manipulation. To evaluate these techniques, we create the ToolBench, a tool
manipulation benchmark consisting of diverse software tools for real-world
tasks. We demonstrate that our techniques can boost leading open-source LLMs by
up to 90% success rate, showing capabilities competitive to OpenAI GPT-4 in 4
out of 8 ToolBench tasks. We show that such enhancement typically requires
about one developer day to curate data for each tool, rendering a recipe with
practical amount of human supervision.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2306.03823
|
2023-05-25T17:35:57Z
|
Transformative Effects of ChatGPT on Modern Education: Emerging Era of
AI Chatbots
|
[
"Sukhpal Singh Gill",
"Minxian Xu",
"Panos Patros",
"Huaming Wu",
"Rupinder Kaur",
"Kamalpreet Kaur",
"Stephanie Fuller",
"Manmeet Singh",
"Priyansh Arora",
"Ajith Kumar Parlikad",
"Vlado Stankovski",
"Ajith Abraham",
"Soumya K. Ghosh",
"Hanan Lutfiyya",
"Salil S. Kanhere",
"Rami Bahsoon",
"Omer Rana",
"Schahram Dustdar",
"Rizos Sakellariou",
"Steve Uhlig",
"Rajkumar Buyya"
] |
ChatGPT, an AI-based chatbot, was released to provide coherent and useful
replies based on analysis of large volumes of data. In this article, leading
scientists, researchers and engineers discuss the transformative effects of
ChatGPT on modern education. This research seeks to improve our knowledge of
ChatGPT capabilities and its use in the education sector, identifying potential
concerns and challenges. Our preliminary evaluation concludes that ChatGPT
performed differently in each subject area including finance, coding and maths.
While ChatGPT has the ability to help educators by creating instructional
content, offering suggestions and acting as an online educator to learners by
answering questions and promoting group work, there are clear drawbacks in its
use, such as the possibility of producing inaccurate or false data and
circumventing duplicate content (plagiarism) detectors where originality is
essential. The often reported hallucinations within Generative AI in general,
and also relevant for ChatGPT, can render its use of limited benefit where
accuracy is essential. What ChatGPT lacks is a stochastic measure to help
provide sincere and sensitive communication with its users. Academic
regulations and evaluation practices used in educational institutions need to
be updated, should ChatGPT be used as a tool in education. To address the
transformative effects of ChatGPT on the learning environment, educating
teachers and students alike about its capabilities and limitations will be
crucial.
|
[
"cs.CY",
"cs.AI",
"cs.CL"
] | false |
2306.09194
|
2023-05-25T02:57:16Z
|
Undetectable Watermarks for Language Models
|
[
"Miranda Christ",
"Sam Gunn",
"Or Zamir"
] |
Recent advances in the capabilities of large language models such as GPT-4
have spurred increasing concern about our ability to detect AI-generated text.
Prior works have suggested methods of embedding watermarks in model outputs, by
noticeably altering the output distribution. We ask: Is it possible to
introduce a watermark without incurring any detectable change to the output
distribution?
To this end we introduce a cryptographically-inspired notion of undetectable
watermarks for language models. That is, watermarks can be detected only with
the knowledge of a secret key; without the secret key, it is computationally
intractable to distinguish watermarked outputs from those of the original
model. In particular, it is impossible for a user to observe any degradation in
the quality of the text. Crucially, watermarks should remain undetectable even
when the user is allowed to adaptively query the model with arbitrarily chosen
prompts. We construct undetectable watermarks based on the existence of one-way
functions, a standard assumption in cryptography.
|
[
"cs.CR",
"cs.CL",
"cs.LG"
] | false |
2305.16263
|
2023-05-25T17:18:37Z
|
Unified Modeling of Multi-Talker Overlapped Speech Recognition and
Diarization with a Sidecar Separator
|
[
"Lingwei Meng",
"Jiawen Kang",
"Mingyu Cui",
"Haibin Wu",
"Xixin Wu",
"Helen Meng"
] |
Multi-talker overlapped speech poses a significant challenge for speech
recognition and diarization. Recent research indicated that these two tasks are
inter-dependent and complementary, motivating us to explore a unified modeling
method to address them in the context of overlapped speech. A recent study
proposed a cost-effective method to convert a single-talker automatic speech
recognition (ASR) system into a multi-talker one, by inserting a Sidecar
separator into the frozen well-trained ASR model. Extending on this, we
incorporate a diarization branch into the Sidecar, allowing for unified
modeling of both ASR and diarization with a negligible overhead of only 768
parameters. The proposed method yields better ASR results compared to the
baseline on LibriMix and LibriSpeechMix datasets. Moreover, without
sophisticated customization on the diarization task, our method achieves
acceptable diarization results on the two-speaker subset of CALLHOME with only
a few adaptation steps.
|
[
"cs.SD",
"cs.AI",
"cs.CL",
"cs.LG",
"eess.AS"
] | false |
2305.15641
|
2023-05-25T01:39:51Z
|
A Robust Classifier Under Missing-Not-At-Random Sample Selection Bias
|
[
"Huy Mai",
"Wen Huang",
"Wei Du",
"Xintao Wu"
] |
The shift between the training and testing distributions is commonly due to
sample selection bias, a type of bias caused by non-random sampling of examples
to be included in the training set. Although there are many approaches proposed
to learn a classifier under sample selection bias, few address the case where a
subset of labels in the training set are missing-not-at-random (MNAR) as a
result of the selection process. In statistics, Greene's method formulates this
type of sample selection with logistic regression as the prediction model.
However, we find that simply integrating this method into a robust
classification framework is not effective for this bias setting. In this paper,
we propose BiasCorr, an algorithm that improves on Greene's method by modifying
the original training set in order for a classifier to learn under MNAR sample
selection bias. We provide theoretical guarantee for the improvement of
BiasCorr over Greene's method by analyzing its bias. Experimental results on
real-world datasets demonstrate that BiasCorr produces robust classifiers and
can be extended to outperform state-of-the-art classifiers that have been
proposed to train under sample selection bias.
|
[
"cs.LG"
] | false |
2305.15696
|
2023-05-25T04:05:09Z
|
Detecting Dataset Drift and Non-IID Sampling via k-Nearest Neighbors
|
[
"Jesse Cummings",
"Elías Snorrason",
"Jonas Mueller"
] |
We present a straightforward statistical test to detect certain violations of
the assumption that the data are Independent and Identically Distributed (IID).
The specific form of violation considered is common across real-world
applications: whether the examples are ordered in the dataset such that almost
adjacent examples tend to have more similar feature values (e.g. due to
distributional drift, or attractive interactions between datapoints). Based on
a k-Nearest Neighbors estimate, our approach can be used to audit any
multivariate numeric data as well as other data types (image, text, audio,
etc.) that can be numerically represented, perhaps with model embeddings.
Compared with existing methods to detect drift or auto-correlation, our
approach is both applicable to more types of data and also able to detect a
wider variety of IID violations in practice. Code:
https://github.com/cleanlab/cleanlab
|
[
"cs.LG"
] | false |
2305.15706
|
2023-05-25T04:25:55Z
|
pFedSim: Similarity-Aware Model Aggregation Towards Personalized
Federated Learning
|
[
"Jiahao Tan",
"Yipeng Zhou",
"Gang Liu",
"Jessie Hui Wang",
"Shui Yu"
] |
The federated learning (FL) paradigm emerges to preserve data privacy during
model training by only exposing clients' model parameters rather than original
data. One of the biggest challenges in FL lies in the non-IID (not identical
and independently distributed) data (a.k.a., data heterogeneity) distributed on
clients. To address this challenge, various personalized FL (pFL) methods are
proposed such as similarity-based aggregation and model decoupling. The former
one aggregates models from clients of a similar data distribution. The later
one decouples a neural network (NN) model into a feature extractor and a
classifier. Personalization is captured by classifiers which are obtained by
local training. To advance pFL, we propose a novel pFedSim (pFL based on model
similarity) algorithm in this work by combining these two kinds of methods.
More specifically, we decouple a NN model into a personalized feature
extractor, obtained by aggregating models from similar clients, and a
classifier, which is obtained by local training and used to estimate client
similarity. Compared with the state-of-the-art baselines, the advantages of
pFedSim include: 1) significantly improved model accuracy; 2) low communication
and computation overhead; 3) a low risk of privacy leakage; 4) no requirement
for any external public information. To demonstrate the superiority of pFedSim,
extensive experiments are conducted on real datasets. The results validate the
superb performance of our algorithm which can significantly outperform
baselines under various heterogeneous data settings.
|
[
"cs.LG"
] | false |
2305.15822
|
2023-05-25T08:06:42Z
|
Towards Label Position Bias in Graph Neural Networks
|
[
"Haoyu Han",
"Xiaorui Liu",
"Feng Shi",
"MohamadAli Torkamani",
"Charu C. Aggarwal",
"Jiliang Tang"
] |
Graph Neural Networks (GNNs) have emerged as a powerful tool for
semi-supervised node classification tasks. However, recent studies have
revealed various biases in GNNs stemming from both node features and graph
topology. In this work, we uncover a new bias - label position bias, which
indicates that the node closer to the labeled nodes tends to perform better. We
introduce a new metric, the Label Proximity Score, to quantify this bias, and
find that it is closely related to performance disparities. To address the
label position bias, we propose a novel optimization framework for learning a
label position unbiased graph structure, which can be applied to existing GNNs.
Extensive experiments demonstrate that our proposed method not only outperforms
backbone methods but also significantly mitigates the issue of label position
bias in GNNs.
|
[
"cs.LG"
] | false |
2305.15850
|
2023-05-25T08:42:25Z
|
Stochastic Modified Equations and Dynamics of Dropout Algorithm
|
[
"Zhongwang Zhang",
"Yuqing Li",
"Tao Luo",
"Zhi-Qin John Xu"
] |
Dropout is a widely utilized regularization technique in the training of
neural networks, nevertheless, its underlying mechanism and its impact on
achieving good generalization abilities remain poorly understood. In this work,
we derive the stochastic modified equations for analyzing the dynamics of
dropout, where its discrete iteration process is approximated by a class of
stochastic differential equations. In order to investigate the underlying
mechanism by which dropout facilitates the identification of flatter minima, we
study the noise structure of the derived stochastic modified equation for
dropout. By drawing upon the structural resemblance between the Hessian and
covariance through several intuitive approximations, we empirically demonstrate
the universal presence of the inverse variance-flatness relation and the
Hessian-variance relation, throughout the training process of dropout. These
theoretical and empirical findings make a substantial contribution to our
understanding of the inherent tendency of dropout to locate flatter minima.
|
[
"cs.LG"
] | false |
2305.15907
|
2023-05-25T10:13:19Z
|
Double Descent of Discrepancy: A Task-, Data-, and Model-Agnostic
Phenomenon
|
[
"Yifan Luo",
"Bin Dong"
] |
In this paper, we studied two identically-trained neural networks (i.e.
networks with the same architecture, trained on the same dataset using the same
algorithm, but with different initialization) and found that their outputs
discrepancy on the training dataset exhibits a "double descent" phenomenon. We
demonstrated through extensive experiments across various tasks, datasets, and
network architectures that this phenomenon is prevalent. Leveraging this
phenomenon, we proposed a new early stopping criterion and developed a new
method for data quality assessment. Our results show that a phenomenon-driven
approach can benefit deep learning research both in theoretical understanding
and practical applications.
|
[
"cs.LG"
] | false |
2305.15924
|
2023-05-25T10:50:30Z
|
Sample and Predict Your Latent: Modality-free Sequential Disentanglement
via Contrastive Estimation
|
[
"Ilan Naiman",
"Nimrod Berman",
"Omri Azencot"
] |
Unsupervised disentanglement is a long-standing challenge in representation
learning. Recently, self-supervised techniques achieved impressive results in
the sequential setting, where data is time-dependent. However, the latter
methods employ modality-based data augmentations and random sampling or solve
auxiliary tasks. In this work, we propose to avoid that by generating,
sampling, and comparing empirical distributions from the underlying variational
model. Unlike existing work, we introduce a self-supervised sequential
disentanglement framework based on contrastive estimation with no external
signals, while using common batch sizes and samples from the latent space
itself. In practice, we propose a unified, efficient, and easy-to-code sampling
strategy for semantically similar and dissimilar views of the data. We evaluate
our approach on video, audio, and time series benchmarks. Our method presents
state-of-the-art results in comparison to existing techniques. The code is
available at https://github.com/azencot-group/SPYL.
|
[
"cs.LG"
] | false |
2305.16143
|
2023-05-25T15:16:28Z
|
Condensed Prototype Replay for Class Incremental Learning
|
[
"Jiangtao Kong",
"Zhenyu Zong",
"Tianyi Zhou",
"Huajie Shao"
] |
Incremental learning (IL) suffers from catastrophic forgetting of old tasks
when learning new tasks. This can be addressed by replaying previous tasks'
data stored in a memory, which however is usually prone to size limits and
privacy leakage. Recent studies store only class centroids as prototypes and
augment them with Gaussian noises to create synthetic data for replay. However,
they cannot effectively avoid class interference near their margins that leads
to forgetting. Moreover, the injected noises distort the rich structure between
real data and prototypes, hence even detrimental to IL. In this paper, we
propose YONO that You Only Need to replay One condensed prototype per class,
which for the first time can even outperform memory-costly exemplar-replay
methods. To this end, we develop a novel prototype learning method that (1)
searches for more representative prototypes in high-density regions by an
attentional mean-shift algorithm and (2) moves samples in each class to their
prototype to form a compact cluster distant from other classes. Thereby, the
class margins are maximized, which effectively reduces interference causing
future forgetting. In addition, we extend YONO to YONO+, which creates
synthetic replay data by random sampling in the neighborhood of each prototype
in the representation space. We show that the synthetic data can further
improve YONO. Extensive experiments on IL benchmarks demonstrate the advantages
of YONO/YONO+ over existing IL methods in terms of both accuracy and
forgetting.
|
[
"cs.LG"
] | false |
2305.16239
|
2023-05-25T16:49:40Z
|
Persistent Laplacian-enhanced Algorithm for Scarcely Labeled Data
Classification
|
[
"Gokul Bhusal",
"Ekaterina Merkurjev",
"Guo-Wei Wei"
] |
The success of many machine learning (ML) methods depends crucially on having
large amounts of labeled data. However, obtaining enough labeled data can be
expensive, time-consuming, and subject to ethical constraints for many
applications. One approach that has shown tremendous value in addressing this
challenge is semi-supervised learning (SSL); this technique utilizes both
labeled and unlabeled data during training, often with much less labeled data
than unlabeled data, which is often relatively easy and inexpensive to obtain.
In fact, SSL methods are particularly useful in applications where the cost of
labeling data is especially expensive, such as medical analysis, natural
language processing (NLP), or speech recognition. A subset of SSL methods that
have achieved great success in various domains involves algorithms that
integrate graph-based techniques. These procedures are popular due to the vast
amount of information provided by the graphical framework and the versatility
of their applications. In this work, we propose an algebraic topology-based
semi-supervised method called persistent Laplacian-enhanced graph MBO (PL-MBO)
by integrating persistent spectral graph theory with the classical
Merriman-Bence- Osher (MBO) scheme. Specifically, we use a filtration procedure
to generate a sequence of chain complexes and associated families of simplicial
complexes, from which we construct a family of persistent Laplacians. Overall,
it is a very efficient procedure that requires much less labeled data to
perform well compared to many ML techniques, and it can be adapted for both
small and large datasets. We evaluate the performance of the proposed method on
data classification, and the results indicate that the proposed technique
outperforms other existing semi-supervised algorithms.
|
[
"cs.LG"
] | false |
2305.16296
|
2023-05-25T17:50:28Z
|
A Guide Through the Zoo of Biased SGD
|
[
"Yury Demidovich",
"Grigory Malinovsky",
"Igor Sokolov",
"Peter Richtárik"
] |
Stochastic Gradient Descent (SGD) is arguably the most important single
algorithm in modern machine learning. Although SGD with unbiased gradient
estimators has been studied extensively over at least half a century, SGD
variants relying on biased estimators are rare. Nevertheless, there has been an
increased interest in this topic in recent years. However, existing literature
on SGD with biased estimators (BiasedSGD) lacks coherence since each new paper
relies on a different set of assumptions, without any clear understanding of
how they are connected, which may lead to confusion. We address this gap by
establishing connections among the existing assumptions, and presenting a
comprehensive map of the underlying relationships. Additionally, we introduce a
new set of assumptions that is provably weaker than all previous assumptions,
and use it to present a thorough analysis of BiasedSGD in both convex and
non-convex settings, offering advantages over previous results. We also provide
examples where biased estimators outperform their unbiased counterparts or
where unbiased versions are simply not available. Finally, we demonstrate the
effectiveness of our framework through experimental results that validate our
theoretical findings.
|
[
"cs.LG"
] | false |
2305.16308
|
2023-05-25T17:57:46Z
|
Rectifying Group Irregularities in Explanations for Distribution Shift
|
[
"Adam Stein",
"Yinjun Wu",
"Eric Wong",
"Mayur Naik"
] |
It is well-known that real-world changes constituting distribution shift
adversely affect model performance. How to characterize those changes in an
interpretable manner is poorly understood. Existing techniques to address this
problem take the form of shift explanations that elucidate how to map samples
from the original distribution toward the shifted one by reducing the disparity
between these two distributions. However, these methods can introduce group
irregularities, leading to explanations that are less feasible and robust. To
address these issues, we propose Group-aware Shift Explanations (GSE), a method
that produces interpretable explanations by leveraging worst-group optimization
to rectify group irregularities. We demonstrate how GSE not only maintains
group structures, such as demographic and hierarchical subpopulations, but also
enhances feasibility and robustness in the resulting explanations in a wide
range of tabular, language, and image settings.
|
[
"cs.LG"
] | false |
2305.16505
|
2023-05-25T22:13:37Z
|
Reward-Machine-Guided, Self-Paced Reinforcement Learning
|
[
"Cevahir Koprulu",
"Ufuk Topcu"
] |
Self-paced reinforcement learning (RL) aims to improve the data efficiency of
learning by automatically creating sequences, namely curricula, of probability
distributions over contexts. However, existing techniques for self-paced RL
fail in long-horizon planning tasks that involve temporally extended behaviors.
We hypothesize that taking advantage of prior knowledge about the underlying
task structure can improve the effectiveness of self-paced RL. We develop a
self-paced RL algorithm guided by reward machines, i.e., a type of finite-state
machine that encodes the underlying task structure. The algorithm integrates
reward machines in 1) the update of the policy and value functions obtained by
any RL algorithm of choice, and 2) the update of the automated curriculum that
generates context distributions. Our empirical results evidence that the
proposed algorithm achieves optimal behavior reliably even in cases in which
existing baselines cannot make any meaningful progress. It also decreases the
curriculum length and reduces the variance in the curriculum generation process
by up to one-fourth and four orders of magnitude, respectively.
|
[
"cs.LG"
] | false |
2305.16509
|
2023-05-25T22:32:45Z
|
RoLA: A Real-Time Online Lightweight Anomaly Detection System for
Multivariate Time Series
|
[
"Ming-Chang Lee",
"Jia-Chun Lin"
] |
A multivariate time series refers to observations of two or more variables
taken from a device or a system simultaneously over time. There is an
increasing need to monitor multivariate time series and detect anomalies in
real time to ensure proper system operation and good service quality. It is
also highly desirable to have a lightweight anomaly detection system that
considers correlations between different variables, adapts to changes in the
pattern of the multivariate time series, offers immediate responses, and
provides supportive information regarding detection results based on
unsupervised learning and online model training. In the past decade, many
multivariate time series anomaly detection approaches have been introduced.
However, they are unable to offer all the above-mentioned features. In this
paper, we propose RoLA, a real-time online lightweight anomaly detection system
for multivariate time series based on a divide-and-conquer strategy, parallel
processing, and the majority rule. RoLA employs multiple lightweight anomaly
detectors to monitor multivariate time series in parallel, determine the
correlations between variables dynamically on the fly, and then jointly detect
anomalies based on the majority rule in real time. To demonstrate the
performance of RoLA, we conducted an experiment based on a public dataset
provided by the FerryBox of the One Ocean Expedition. The results show that
RoLA provides satisfactory detection accuracy and lightweight performance.
|
[
"cs.LG"
] | false |
2305.15629
|
2023-05-25T00:49:27Z
|
Patient Outcome Predictions Improve Operations at a Large Hospital
Network
|
[
"Liangyuan Na",
"Kimberly Villalobos Carballo",
"Jean Pauphilet",
"Ali Haddad-Sisakht",
"Daniel Kombert",
"Melissa Boisjoli-Langlois",
"Andrew Castiglione",
"Maram Khalifa",
"Pooja Hebbal",
"Barry Stein",
"Dimitris Bertsimas"
] |
Problem definition: Access to accurate predictions of patients' outcomes can
enhance medical staff's decision-making, which ultimately benefits all
stakeholders in the hospitals. A large hospital network in the US has been
collaborating with academics and consultants to predict short-term and
long-term outcomes for all inpatients across their seven hospitals.
Methodology/results: We develop machine learning models that predict the
probabilities of next 24-hr/48-hr discharge and intensive care unit transfers,
end-of-stay mortality and discharge dispositions. All models achieve high
out-of-sample AUC (75.7%-92.5%) and are well calibrated. In addition, combining
48-hr discharge predictions with doctors' predictions simultaneously enables
more patient discharges (10%-28.7%) and fewer 7-day/30-day readmissions
($p$-value $<0.001$). We implement an automated pipeline that extracts data and
updates predictions every morning, as well as user-friendly software and a
color-coded alert system to communicate these patient-level predictions
(alongside explanations) to clinical teams. Managerial implications: Since we
have been gradually deploying the tool, and training medical staff, over 200
doctors, nurses, and case managers across seven hospitals use it in their daily
patient review process. We observe a significant reduction in the average
length of stay (0.67 days per patient) following its adoption and anticipate
substantial financial benefits (between \$55 and \$72 million annually) for the
healthcare system.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.15670
|
2023-05-25T02:40:52Z
|
Interpretable Machine Learning based on Functional ANOVA Framework:
Algorithms and Comparisons
|
[
"Linwei Hu",
"Vijayan N. Nair",
"Agus Sudjianto",
"Aijun Zhang",
"Jie Chen"
] |
In the early days of machine learning (ML), the emphasis was on developing
complex algorithms to achieve best predictive performance. To understand and
explain the model results, one had to rely on post hoc explainability
techniques, which are known to have limitations. Recently, with the recognition
that interpretability is just as important, researchers are compromising on
small increases in predictive performance to develop algorithms that are
inherently interpretable. While doing so, the ML community has rediscovered the
use of low-order functional ANOVA (fANOVA) models that have been known in the
statistical literature for some time. This paper starts with a description of
challenges with post hoc explainability and reviews the fANOVA framework with a
focus on main effects and second-order interactions. This is followed by an
overview of two recently developed techniques: Explainable Boosting Machines or
EBM (Lou et al., 2013) and GAMI-Net (Yang et al., 2021b). The paper proposes a
new algorithm, called GAMI-Lin-T, that also uses trees like EBM, but it does
linear fits instead of piecewise constants within the partitions. There are
many other differences, including the development of a new interaction
filtering algorithm. Finally, the paper uses simulated and real datasets to
compare selected ML algorithms. The results show that GAMI-Lin-T and GAMI-Net
have comparable performances, and both are generally better than EBM.
|
[
"stat.ML",
"cs.LG"
] | false |
2305.15745
|
2023-05-25T05:50:38Z
|
Robust Ante-hoc Graph Explainer using Bilevel Optimization
|
[
"Mert Kosan",
"Arlei Silva",
"Ambuj Singh"
] |
Explaining the decisions made by machine learning models for high-stakes
applications is critical for increasing transparency and guiding improvements
to these decisions. This is particularly true in the case of models for graphs,
where decisions often depend on complex patterns combining rich structural and
attribute data. While recent work has focused on designing so-called post-hoc
explainers, the question of what constitutes a good explanation remains open.
One intuitive property is that explanations should be sufficiently informative
to enable humans to approximately reproduce the predictions given the data.
However, we show that post-hoc explanations do not achieve this goal as their
explanations are highly dependent on fixed model parameters (e.g., learned GNN
weights). To address this challenge, this paper proposes RAGE (Robust Ante-hoc
Graph Explainer), a novel and flexible ante-hoc explainer designed to discover
explanations for a broad class of graph neural networks using bilevel
optimization. RAGE is able to efficiently identify explanations that contain
the full information needed for prediction while still enabling humans to rank
these explanations based on their influence. Our experiments, based on graph
classification and regression, show that RAGE explanations are more robust than
existing post-hoc and ante-hoc approaches and often achieve similar or better
accuracy than state-of-the-art models.
|
[
"cs.LG",
"cs.SI"
] | false |
2305.15746
|
2023-05-25T05:52:05Z
|
Assessing the Spatial Structure of the Association between Attendance at
Preschool and Childrens Developmental Vulnerabilities in Queensland Australia
|
[
"wala Draidi Areed",
"Aiden Price",
"Kathryn Arnett",
"Helen Thompson",
"Reid Malseed",
"Kerrie Mengersen"
] |
The research explores the influence of preschool attendance (one year before
full-time school) on the development of children during their first year of
school. Using data collected by the Australian Early Development Census, the
findings show that areas with high proportions of preschool attendance tended
to have lower proportions of children with at least one developmental
vulnerability. Developmental vulnerablities include not being able to cope with
the school day (tired, hungry, low energy), unable to get along with others or
aggressive behaviour, trouble with reading/writing or numbers. These findings,
of course, vary by region. Using Data Analysis and Machine Learning, the
researchers were able to identify three distinct clusters within Queensland,
each characterised by different socio-demographic variables influencing the
relationship between preschool attendance and developmental vulnerability.
These analyses contribute to understanding regions with high vulnerability and
the potential need for tailored policies or investments
|
[
"stat.ML",
"cs.LG"
] | false |
2305.15770
|
2023-05-25T06:27:45Z
|
TLNets: Transformation Learning Networks for long-range time-series
prediction
|
[
"Wei Wang",
"Yang Liu",
"Hao Sun"
] |
Time series prediction is a prevalent issue across various disciplines, such
as meteorology, traffic surveillance, investment, and energy production and
consumption. Many statistical and machine-learning strategies have been
developed to tackle this problem. However, these approaches either lack
explainability or exhibit less satisfactory performance when the prediction
horizon increases. To this end, we propose a novel plan for the designing of
networks' architecture based on transformations, possessing the potential to
achieve an enhanced receptive field in learning which brings benefits to fuse
features across scales. In this context, we introduce four different
transformation mechanisms as bases to construct the learning model including
Fourier Transform (FT), Singular Value Decomposition (SVD), matrix
multiplication and Conv block. Hence, we develop four learning models based on
the above building blocks, namely, FT-Matrix, FT-SVD, FT-Conv, and Conv-SVD.
Note that the FT and SVD blocks are capable of learning global information,
while the Conv blocks focus on learning local information. The matrix block is
sparsely designed to learn both global and local information simultaneously.
The above Transformation Learning Networks (TLNets) have been extensively
tested and compared with multiple baseline models based on several real-world
datasets and showed clear potential in long-range time-series forecasting.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.15792
|
2023-05-25T07:16:00Z
|
IDEA: Invariant Causal Defense for Graph Adversarial Robustness
|
[
"Shuchang Tao",
"Qi Cao",
"Huawei Shen",
"Yunfan Wu",
"Bingbing Xu",
"Xueqi Cheng"
] |
Graph neural networks (GNNs) have achieved remarkable success in various
tasks, however, their vulnerability to adversarial attacks raises concerns for
the real-world applications. Existing defense methods can resist some attacks,
but suffer unbearable performance degradation under other unknown attacks. This
is due to their reliance on either limited observed adversarial examples to
optimize (adversarial training) or specific heuristics to alter graph or model
structures (graph purification or robust aggregation). In this paper, we
propose an Invariant causal DEfense method against adversarial Attacks (IDEA),
providing a new perspective to address this issue. The method aims to learn
causal features that possess strong predictability for labels and invariant
predictability across attacks, to achieve graph adversarial robustness. Through
modeling and analyzing the causal relationships in graph adversarial attacks,
we design two invariance objectives to learn the causal features. Extensive
experiments demonstrate that our IDEA significantly outperforms all the
baselines under both poisoning and evasion attacks on five benchmark datasets,
highlighting the strong and invariant predictability of IDEA. The
implementation of IDEA is available at
https://anonymous.4open.science/r/IDEA_repo-666B.
|
[
"cs.LG",
"cs.CR"
] | false |
2305.15801
|
2023-05-25T07:33:17Z
|
Lucy-SKG: Learning to Play Rocket League Efficiently Using Deep
Reinforcement Learning
|
[
"Vasileios Moschopoulos",
"Pantelis Kyriakidis",
"Aristotelis Lazaridis",
"Ioannis Vlahavas"
] |
A successful tactic that is followed by the scientific community for
advancing AI is to treat games as problems, which has been proven to lead to
various breakthroughs. We adapt this strategy in order to study Rocket League,
a widely popular but rather under-explored 3D multiplayer video game with a
distinct physics engine and complex dynamics that pose a significant challenge
in developing efficient and high-performance game-playing agents. In this
paper, we present Lucy-SKG, a Reinforcement Learning-based model that learned
how to play Rocket League in a sample-efficient manner, outperforming by a
notable margin the two highest-ranking bots in this game, namely Necto (2022
bot champion) and its successor Nexto, thus becoming a state-of-the-art agent.
Our contributions include: a) the development of a reward analysis and
visualization library, b) novel parameterizable reward shape functions that
capture the utility of complex reward types via our proposed Kinesthetic Reward
Combination (KRC) technique, and c) design of auxiliary neural architectures
for training on reward prediction and state representation tasks in an
on-policy fashion for enhanced efficiency in learning speed and performance. By
performing thorough ablation studies for each component of Lucy-SKG, we showed
their independent effectiveness in overall performance. In doing so, we
demonstrate the prospects and challenges of using sample-efficient
Reinforcement Learning techniques for controlling complex dynamical systems
under competitive team-based multiplayer conditions.
|
[
"cs.LG",
"cs.AI",
"I.2.1"
] | false |
2305.15821
|
2023-05-25T08:05:19Z
|
Market Making with Deep Reinforcement Learning from Limit Order Books
|
[
"Hong Guo",
"Jianwu Lin",
"Fanlin Huang"
] |
Market making (MM) is an important research topic in quantitative finance,
the agent needs to continuously optimize ask and bid quotes to provide
liquidity and make profits. The limit order book (LOB) contains information on
all active limit orders, which is an essential basis for decision-making. The
modeling of evolving, high-dimensional and low signal-to-noise ratio LOB data
is a critical challenge. Traditional MM strategy relied on strong assumptions
such as price process, order arrival process, etc. Previous reinforcement
learning (RL) works handcrafted market features, which is insufficient to
represent the market. This paper proposes a RL agent for market making with LOB
data. We leverage a neural network with convolutional filters and attention
mechanism (Attn-LOB) for feature extraction from LOB. We design a new
continuous action space and a hybrid reward function for the MM task. Finally,
we conduct comprehensive experiments on latency and interpretability, showing
that our agent has good applicability.
|
[
"q-fin.CP",
"cs.LG"
] | false |
2305.15843
|
2023-05-25T08:33:48Z
|
TabGSL: Graph Structure Learning for Tabular Data Prediction
|
[
"Jay Chiehen Liao",
"Cheng-Te Li"
] |
This work presents a novel approach to tabular data prediction leveraging
graph structure learning and graph neural networks. Despite the prevalence of
tabular data in real-world applications, traditional deep learning methods
often overlook the potentially valuable associations between data instances.
Such associations can offer beneficial insights for classification tasks, as
instances may exhibit similar patterns of correlations among features and
target labels. This information can be exploited by graph neural networks,
necessitating robust graph structures. However, existing studies primarily
focus on improving graph structure from noisy data, largely neglecting the
possibility of deriving graph structures from tabular data. We present a novel
solution, Tabular Graph Structure Learning (TabGSL), to enhance tabular data
prediction by simultaneously learning instance correlation and feature
interaction within a unified framework. This is achieved through a proposed
graph contrastive learning module, along with transformer-based feature
extractor and graph neural network. Comprehensive experiments conducted on 30
benchmark tabular datasets demonstrate that TabGSL markedly outperforms both
tree-based models and recent deep learning-based tabular models. Visualizations
of the learned instance embeddings further substantiate the effectiveness of
TabGSL.
|
[
"cs.LG",
"cs.SI"
] | false |
2305.15858
|
2023-05-25T08:47:16Z
|
LLHR: Low Latency and High Reliability CNN Distributed Inference for
Resource-Constrained UAV Swarms
|
[
"Marwan Dhuheir",
"Aiman Erbad",
"Sinan Sabeeh"
] |
Recently, Unmanned Aerial Vehicles (UAVs) have shown impressive performance
in many critical applications, such as surveillance, search and rescue
operations, environmental monitoring, etc. In many of these applications, the
UAVs capture images as well as other sensory data and then send the data
processing requests to remote servers. Nevertheless, this approach is not
always practical in real-time-based applications due to unstable connections,
limited bandwidth, limited energy, and strict end-to-end latency. One promising
solution is to divide the inference requests into subtasks that can be
distributed among UAVs in a swarm based on the available resources. Moreover,
these tasks create intermediate results that need to be transmitted reliably as
the swarm moves to cover the area. Our system model deals with real-time
requests, aiming to find the optimal transmission power that guarantees higher
reliability and low latency. We formulate the Low Latency and High-Reliability
(LLHR) distributed inference as an optimization problem, and due to the
complexity of the problem, we divide it into three subproblems. In the first
subproblem, we find the optimal transmit power of the connected UAVs with
guaranteed transmission reliability. The second subproblem aims to find the
optimal positions of the UAVs in the grid, while the last subproblem finds the
optimal placement of the CNN layers in the available UAVs. We conduct extensive
simulations and compare our work to two baseline models demonstrating that our
model outperforms the competing models.
|
[
"cs.DC",
"cs.LG"
] | false |
2305.15961
|
2023-05-25T11:59:42Z
|
Quantifying the Intrinsic Usefulness of Attributional Explanations for
Graph Neural Networks with Artificial Simulatability Studies
|
[
"Jonas Teufel",
"Luca Torresi",
"Pascal Friederich"
] |
Despite the increasing relevance of explainable AI, assessing the quality of
explanations remains a challenging issue. Due to the high costs associated with
human-subject experiments, various proxy metrics are often used to
approximately quantify explanation quality. Generally, one possible
interpretation of the quality of an explanation is its inherent value for
teaching a related concept to a student. In this work, we extend artificial
simulatability studies to the domain of graph neural networks. Instead of
costly human trials, we use explanation-supervisable graph neural networks to
perform simulatability studies to quantify the inherent usefulness of
attributional graph explanations. We perform an extensive ablation study to
investigate the conditions under which the proposed analyses are most
meaningful. We additionally validate our methods applicability on real-world
graph classification and regression datasets. We find that relevant
explanations can significantly boost the sample efficiency of graph neural
networks and analyze the robustness towards noise and bias in the explanations.
We believe that the notion of usefulness obtained from our proposed
simulatability analysis provides a dimension of explanation quality that is
largely orthogonal to the common practice of faithfulness and has great
potential to expand the toolbox of explanation quality assessments,
specifically for graph explanations.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.15997
|
2023-05-25T12:39:45Z
|
SING: A Plug-and-Play DNN Learning Technique
|
[
"Adrien Courtois",
"Damien Scieur",
"Jean-Michel Morel",
"Pablo Arias",
"Thomas Eboli"
] |
We propose SING (StabIlized and Normalized Gradient), a plug-and-play
technique that improves the stability and generalization of the Adam(W)
optimizer. SING is straightforward to implement and has minimal computational
overhead, requiring only a layer-wise standardization of the gradients fed to
Adam(W) without introducing additional hyper-parameters. We support the
effectiveness and practicality of the proposed approach by showing improved
results on a wide range of architectures, problems (such as image
classification, depth estimation, and natural language processing), and in
combination with other optimizers. We provide a theoretical analysis of the
convergence of the method, and we show that by virtue of the standardization,
SING can escape local minima narrower than a threshold that is inversely
proportional to the network's depth.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.16013
|
2023-05-25T12:53:17Z
|
Online and Streaming Algorithms for Constrained $k$-Submodular
Maximization
|
[
"Fabian Spaeh",
"Alina Ene",
"Huy L. Nguyen"
] |
Constrained $k$-submodular maximization is a general framework that captures
many discrete optimization problems such as ad allocation, influence
maximization, personalized recommendation, and many others. In many of these
applications, datasets are large or decisions need to be made in an online
manner, which motivates the development of efficient streaming and online
algorithms. In this work, we develop single-pass streaming and online
algorithms for constrained $k$-submodular maximization with both monotone and
general (possibly non-monotone) objectives subject to cardinality and knapsack
constraints. Our algorithms achieve provable constant-factor approximation
guarantees which improve upon the state of the art in almost all settings.
Moreover, they are combinatorial and very efficient, and have optimal space and
running time. We experimentally evaluate our algorithms on instances for ad
allocation and other applications, where we observe that our algorithms are
efficient and scalable, and construct solutions that are comparable in value to
offline greedy algorithms.
|
[
"cs.DS",
"cs.LG"
] | false |
2305.16035
|
2023-05-25T13:14:58Z
|
Detecting Adversarial Data by Probing Multiple Perturbations Using
Expected Perturbation Score
|
[
"Shuhai Zhang",
"Feng Liu",
"Jiahao Yang",
"Yifan Yang",
"Changsheng Li",
"Bo Han",
"Mingkui Tan"
] |
Adversarial detection aims to determine whether a given sample is an
adversarial one based on the discrepancy between natural and adversarial
distributions. Unfortunately, estimating or comparing two data distributions is
extremely difficult, especially in high-dimension spaces. Recently, the
gradient of log probability density (a.k.a., score) w.r.t. the sample is used
as an alternative statistic to compute. However, we find that the score is
sensitive in identifying adversarial samples due to insufficient information
with one sample only. In this paper, we propose a new statistic called expected
perturbation score (EPS), which is essentially the expected score of a sample
after various perturbations. Specifically, to obtain adequate information
regarding one sample, we perturb it by adding various noises to capture its
multi-view observations. We theoretically prove that EPS is a proper statistic
to compute the discrepancy between two samples under mild conditions. In
practice, we can use a pre-trained diffusion model to estimate EPS for each
sample. Last, we propose an EPS-based adversarial detection (EPS-AD) method, in
which we develop EPS-based maximum mean discrepancy (MMD) as a metric to
measure the discrepancy between the test sample and natural samples. We also
prove that the EPS-based MMD between natural and adversarial samples is larger
than that among natural samples. Extensive experiments show the superior
adversarial detection performance of our EPS-AD.
|
[
"cs.LG",
"cs.CR"
] | false |
2305.16056
|
2023-05-25T13:38:53Z
|
Markov Decision Process with an External Temporal Process
|
[
"Ranga Shaarad Ayyagari",
"Ambedkar Dukkipati"
] |
Most reinforcement learning algorithms treat the context under which they
operate as a stationary, isolated and undisturbed environment. However, in the
real world, the environment is constantly changing due to a variety of external
influences. To address this problem, we study Markov Decision Processes (MDP)
under the influence of an external temporal process. We formalize this notion
and discuss conditions under which the problem becomes tractable with suitable
solutions. We propose a policy iteration algorithm to solve this problem and
theoretically analyze its performance.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.16094
|
2023-05-25T14:26:36Z
|
On Influence Functions, Classification Influence, Relative Influence,
Memorization and Generalization
|
[
"Michael Kounavis",
"Ousmane Dia",
"Ilqar Ramazanli"
] |
Machine learning systems such as large scale recommendation systems or
natural language processing systems are usually trained on billions of training
points and are associated with hundreds of billions or trillions of parameters.
Improving the learning process in such a way that both the training load is
reduced and the model accuracy improved is highly desired. In this paper we
take a first step toward solving this problem, studying influence functions
from the perspective of simplifying the computations they involve. We discuss
assumptions, under which influence computations can be performed on
significantly fewer parameters. We also demonstrate that the sign of the
influence value can indicate whether a training point is to memorize, as
opposed to generalize upon. For this purpose we formally define what
memorization means for a training point, as opposed to generalization. We
conclude that influence functions can be made practical, even for large scale
machine learning systems, and that influence values can be taken into account
by algorithms that selectively remove training points, as part of the learning
process.
|
[
"cs.LG",
"stat.ML"
] | false |
2305.16114
|
2023-05-25T14:48:00Z
|
Fascinating Supervisory Signals and Where to Find Them: Deep Anomaly
Detection with Scale Learning
|
[
"Hongzuo Xu",
"Yijie Wang",
"Juhui Wei",
"Songlei Jian",
"Yizhou Li",
"Ning Liu"
] |
Due to the unsupervised nature of anomaly detection, the key to fueling deep
models is finding supervisory signals. Different from current
reconstruction-guided generative models and transformation-based contrastive
models, we devise novel data-driven supervision for tabular data by introducing
a characteristic -- scale -- as data labels. By representing varied sub-vectors
of data instances, we define scale as the relationship between the
dimensionality of original sub-vectors and that of representations. Scales
serve as labels attached to transformed representations, thus offering ample
labeled data for neural network training. This paper further proposes a scale
learning-based anomaly detection method. Supervised by the learning objective
of scale distribution alignment, our approach learns the ranking of
representations converted from varied subspaces of each data instance. Through
this proxy task, our approach models inherent regularities and patterns within
data, which well describes data "normality". Abnormal degrees of testing
instances are obtained by measuring whether they fit these learned patterns.
Extensive experiments show that our approach leads to significant improvement
over state-of-the-art generative/contrastive anomaly detection methods.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.16196
|
2023-05-25T15:55:59Z
|
Optimization and Interpretability of Graph Attention Networks for Small
Sparse Graph Structures in Automotive Applications
|
[
"Marion Neumeier",
"Andreas Tollkühn",
"Sebastian Dorn",
"Michael Botsch",
"Wolfgang Utschick"
] |
For automotive applications, the Graph Attention Network (GAT) is a
prominently used architecture to include relational information of a traffic
scenario during feature embedding. As shown in this work, however, one of the
most popular GAT realizations, namely GATv2, has potential pitfalls that hinder
an optimal parameter learning. Especially for small and sparse graph structures
a proper optimization is problematic. To surpass limitations, this work
proposes architectural modifications of GATv2. In controlled experiments, it is
shown that the proposed model adaptions improve prediction performance in a
node-level regression task and make it more robust to parameter initialization.
This work aims for a better understanding of the attention mechanism and
analyzes its interpretability of identifying causal importance.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.16230
|
2023-05-25T16:37:13Z
|
Topological gap protocol based machine learning optimization of Majorana
hybrid wires
|
[
"Matthias Thamm",
"Bernd Rosenow"
] |
Majorana zero modes in superconductor-nanowire hybrid structures are a
promising candidate for topologically protected qubits with the potential to be
used in scalable structures. Currently, disorder in such Majorana wires is a
major challenge, as it can destroy the topological phase and thus reduce the
yield in the fabrication of Majorana devices. We study machine learning
optimization of a gate array in proximity to a grounded Majorana wire, which
allows us to reliably compensate even strong disorder. We propose a metric for
optimization that is inspired by the topological gap protocol, and which can be
implemented based on measurements of the non-local conductance through the
wire.
|
[
"cond-mat.mes-hall",
"cs.LG"
] | false |
2305.16242
|
2023-05-25T16:52:26Z
|
Two-timescale Extragradient for Finding Local Minimax Points
|
[
"Jiseok Chae",
"Kyuwon Kim",
"Donghwan Kim"
] |
Minimax problems are notoriously challenging to optimize. However, we
demonstrate that the two-timescale extragradient can be a viable solution. By
utilizing dynamical systems theory, we show that it converges to points that
satisfy the second-order necessary condition of local minimax points, under a
mild condition. This work surpasses all previous results as we eliminate a
crucial assumption that the Hessian, with respect to the maximization variable,
is nondegenerate.
|
[
"math.OC",
"cs.LG"
] | false |
2305.16370
|
2023-05-25T13:00:46Z
|
Stecformer: Spatio-temporal Encoding Cascaded Transformer for
Multivariate Long-term Time Series Forecasting
|
[
"Zheng Sun",
"Yi Wei",
"Wenxiao Jia",
"Long Yu"
] |
Multivariate long-term time series forecasting is of great application across
many domains, such as energy consumption and weather forecasting. With the
development of transformer-based methods, the performance of multivariate
long-term time series forecasting has been significantly improved, however, the
study of spatial features extracting in transformer-based model is rare and the
consistency of different prediction periods is unsatisfactory due to the large
span. In this work, we propose a complete solution to address these problems in
terms of feature extraction and target prediction. For extraction, we design an
efficient spatio-temporal encoding extractor including a semi-adaptive graph to
acquire sufficient spatio-temporal information. For prediction, we propose a
Cascaded Decoding Predictor (CDP) to strengthen the correlation between
different intervals, which can also be utilized as a generic component to
improve the performance of transformer-based methods. The proposed method,
termed as Spatio-temporal Encoding Cascaded Transformer (Stecformer), achieving
a notable gap over the baseline model and is comparable with the
state-of-the-art performance of transformer-based methods on five benchmark
datasets. We hope our attempt will serve as a regular configuration in
multivariate long-term time series forecasting in the future.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.16392
|
2023-05-25T18:00:15Z
|
Using neural networks to model Main Belt Asteroid albedos as a function
of their proper orbital elements
|
[
"Zachary Murray"
] |
Asteroid diameters are traditionally difficult to estimate. When a direct
measurement of the diameter cannot be made through either occultation or direct
radar observation, the most common method is to approximate the diameter from
infrared observations. Once the diameter is known, a comparison with visible
light observations can be used to find the visible geometric albedo of the
body. One of the largest datasets of asteroid albedos comes from the NEOWISE
mission, which measured asteroid albedos both in the visible and infrared. We
model these albedos as a function of proper elements available from the
Asteroid Families Portal using an ensemble of neural networks. We find that
both the visible and infrared geometric albedos are significantly correlated
with asteroid position in the belt and occur in both asteroid families and in
the background belt. We find that the ensemble's prediction reduces the average
error in albedo by about 37% compared to a model that simply adopts an average
albedo, with no regard for the dynamical state of the body. We then use this
model to predict albedos for the half million main belt asteroids with proper
elements available in the Asteroid Families Portal and provide the results in a
catalog. Finally, we show that several presently categorized asteroid families
exist within much larger groups of asteroids of similar albedos - this may
suggest that further improvements in family identification can be made.
|
[
"astro-ph.EP",
"cs.LG"
] | false |
2305.16396
|
2023-05-25T18:01:38Z
|
ADLER -- An efficient Hessian-based strategy for adaptive learning rate
|
[
"Dario Balboni",
"Davide Bacciu"
] |
We derive a sound positive semi-definite approximation of the Hessian of deep
models for which Hessian-vector products are easily computable. This enables us
to provide an adaptive SGD learning rate strategy based on the minimization of
the local quadratic approximation, which requires just twice the computation of
a single SGD run, but performs comparably with grid search on SGD learning
rates on different model architectures (CNN with and without residual
connections) on classification tasks. We also compare the novel approximation
with the Gauss-Newton approximation.
|
[
"cs.LG",
"math.OC"
] | false |
2305.16484
|
2023-05-25T21:33:56Z
|
Batch Model Consolidation: A Multi-Task Model Consolidation Framework
|
[
"Iordanis Fostiropoulos",
"Jiaye Zhu",
"Laurent Itti"
] |
In Continual Learning (CL), a model is required to learn a stream of tasks
sequentially without significant performance degradation on previously learned
tasks. Current approaches fail for a long sequence of tasks from diverse
domains and difficulties. Many of the existing CL approaches are difficult to
apply in practice due to excessive memory cost or training time, or are tightly
coupled to a single device. With the intuition derived from the widely applied
mini-batch training, we propose Batch Model Consolidation ($\textbf{BMC}$) to
support more realistic CL under conditions where multiple agents are exposed to
a range of tasks. During a $\textit{regularization}$ phase, BMC trains multiple
$\textit{expert models}$ in parallel on a set of disjoint tasks. Each expert
maintains weight similarity to a $\textit{base model}$ through a
$\textit{stability loss}$, and constructs a $\textit{buffer}$ from a fraction
of the task's data. During the $\textit{consolidation}$ phase, we combine the
learned knowledge on 'batches' of $\textit{expert models}$ using a
$\textit{batched consolidation loss}$ in $\textit{memory}$ data that aggregates
all buffers. We thoroughly evaluate each component of our method in an ablation
study and demonstrate the effectiveness on standardized benchmark datasets
Split-CIFAR-100, Tiny-ImageNet, and the Stream dataset composed of 71 image
classification tasks from diverse domains and difficulties. Our method
outperforms the next best CL approach by 70% and is the only approach that can
maintain performance at the end of 71 tasks; Our benchmark can be accessed at
https://github.com/fostiropoulos/stream_benchmark
|
[
"cs.LG",
"cs.AI"
] | false |
2305.16513
|
2023-05-25T22:37:40Z
|
Sliding Window Sum Algorithms for Deep Neural Networks
|
[
"Roman Snytsar"
] |
Sliding window sums are widely used for string indexing, hashing and time
series analysis. We have developed a family of the generic vectorized sliding
sum algorithms that provide speedup of O(P/w) for window size $w$ and number of
processors P. For a sum with a commutative operator the speedup is improved to
O(P/log(w)). Even more important, our algorithms exhibit efficient memory
access patterns. In this paper we study the application of the sliding sum
algorithms to the training and inference of the Deep Neural Networks. We
demonstrate how both pooling and convolution primitives could be expressed as
sliding sums and evaluated by the compute kernels with the shared structure. We
show that the sliding sum convolution kernels are more efficient than the
commonly used GEMM kernels on the CPU, and could even outperform their GPU
counterparts.
|
[
"cs.LG",
"cs.DS"
] | false |
2305.16541
|
2023-05-25T23:44:31Z
|
Privacy-aware Gaussian Process Regression
|
[
"Rui Tuo",
"Raktim Bhattacharya"
] |
We propose the first theoretical and methodological framework for Gaussian
process regression subject to privacy constraints. The proposed method can be
used when a data owner is unwilling to share a high-fidelity supervised
learning model built from their data with the public due to privacy concerns.
The key idea of the proposed method is to add synthetic noise to the data until
the predictive variance of the Gaussian process model reaches a prespecified
privacy level. The optimal covariance matrix of the synthetic noise is
formulated in terms of semi-definite programming. We also introduce the
formulation of privacy-aware solutions under continuous privacy constraints
using kernel-based approaches, and study their theoretical properties. The
proposed method is illustrated by considering a model that tracks the
trajectories of satellites.
|
[
"cs.LG",
"cs.CR"
] | false |
2305.18089
|
2023-05-25T02:15:25Z
|
Inverse Protein Folding Using Deep Bayesian Optimization
|
[
"Natalie Maus",
"Yimeng Zeng",
"Daniel Allen Anderson",
"Phillip Maffettone",
"Aaron Solomon",
"Peyton Greenside",
"Osbert Bastani",
"Jacob R. Gardner"
] |
Inverse protein folding -- the task of predicting a protein sequence from its
backbone atom coordinates -- has surfaced as an important problem in the "top
down", de novo design of proteins. Contemporary approaches have cast this
problem as a conditional generative modelling problem, where a large generative
model over protein sequences is conditioned on the backbone. While these
generative models very rapidly produce promising sequences, independent draws
from generative models may fail to produce sequences that reliably fold to the
correct backbone. Furthermore, it is challenging to adapt pure generative
approaches to other settings, e.g., when constraints exist. In this paper, we
cast the problem of improving generated inverse folds as an optimization
problem that we solve using recent advances in "deep" or "latent space"
Bayesian optimization. Our approach consistently produces protein sequences
with greatly reduced structural error to the target backbone structure as
measured by TM score and RMSD while using fewer computational resources.
Additionally, we demonstrate other advantages of an optimization-based approach
to the problem, such as the ability to handle constraints.
|
[
"q-bio.BM",
"cs.LG"
] | false |
2305.18227
|
2023-05-25T20:05:47Z
|
Online Dynamic Acknowledgement with Learned Predictions
|
[
"Sungjin Im",
"Benjamin Moseley",
"Chenyang Xu",
"Ruilong Zhang"
] |
We revisit the online dynamic acknowledgment problem. In the problem, a
sequence of requests arrive over time to be acknowledged, and all outstanding
requests can be satisfied simultaneously by one acknowledgement. The goal of
the problem is to minimize the total request delay plus acknowledgement cost.
This elegant model studies the trade-off between acknowledgement cost and
waiting experienced by requests. The problem has been well studied and the
tight competitive ratios have been determined. For this well-studied problem,
we focus on how to effectively use machine-learned predictions to have better
performance.
We develop algorithms that perform arbitrarily close to the optimum with
accurate predictions while concurrently having the guarantees arbitrarily close
to what the best online algorithms can offer without access to predictions,
thereby achieving simultaneous optimum consistency and robustness. This new
result is enabled by our novel prediction error measure. No error measure was
defined for the problem prior to our work, and natural measures failed due to
the challenge that requests with different arrival times have different effects
on the objective. We hope our ideas can be used for other online problems with
temporal aspects that have been resisting proper error measures.
|
[
"cs.DS",
"cs.LG"
] | false |
2306.06107
|
2023-05-25T12:05:18Z
|
Adversarial Attacks on Leakage Detectors in Water Distribution Networks
|
[
"Paul Stahlhofen",
"André Artelt",
"Luca Hermes",
"Barbara Hammer"
] |
Many Machine Learning models are vulnerable to adversarial attacks: There
exist methodologies that add a small (imperceptible) perturbation to an input
such that the model comes up with a wrong prediction. Better understanding of
such attacks is crucial in particular for models used in security-critical
domains, such as monitoring of water distribution networks, in order to devise
counter-measures enhancing model robustness and trustworthiness.
We propose a taxonomy for adversarial attacks against machine learning based
leakage detectors in water distribution networks. Following up on this, we
focus on a particular type of attack: an adversary searching the least
sensitive point, that is, the location in the water network where the largest
possible undetected leak could occur. Based on a mathematical formalization of
the least sensitive point problem, we use three different algorithmic
approaches to find a solution. Results are evaluated on two benchmark water
distribution networks.
|
[
"cs.CR",
"cs.LG"
] | false |
2306.06108
|
2023-05-25T18:36:54Z
|
Demystifying Fraudulent Transactions and Illicit Nodes in the Bitcoin
Network for Financial Forensics
|
[
"Youssef Elmougy",
"Ling Liu"
] |
Blockchain provides the unique and accountable channel for financial
forensics by mining its open and immutable transaction data. A recent surge has
been witnessed by training machine learning models with cryptocurrency
transaction data for anomaly detection, such as money laundering and other
fraudulent activities. This paper presents a holistic applied data science
approach to fraud detection in the Bitcoin network with two original
contributions. First, we contribute the Elliptic++ dataset, which extends the
Elliptic transaction dataset to include over 822k Bitcoin wallet addresses
(nodes), each with 56 features, and 1.27M temporal interactions. This enables
both the detection of fraudulent transactions and the detection of illicit
addresses (actors) in the Bitcoin network by leveraging four types of graph
data: (i) the transaction-to-transaction graph, representing the money flow in
the Bitcoin network, (ii) the address-to-address interaction graph, capturing
the types of transaction flows between Bitcoin addresses, (iii) the
address-transaction graph, representing the bi-directional money flow between
addresses and transactions (BTC flow from input address to one or more
transactions and BTC flow from a transaction to one or more output addresses),
and (iv) the user entity graph, capturing clusters of Bitcoin addresses
representing unique Bitcoin users. Second, we perform fraud detection tasks on
all four graphs by using diverse machine learning algorithms. We show that
adding enhanced features from the address-to-address and the
address-transaction graphs not only assists in effectively detecting both
illicit transactions and illicit addresses, but also assists in gaining
in-depth understanding of the root cause of money laundering vulnerabilities in
cryptocurrency transactions and the strategies for fraud detection and
prevention. Released at github.com/git-disl/EllipticPlusPlus.
|
[
"cs.CR",
"cs.LG"
] | false |
2305.15622
|
2023-05-25T00:03:22Z
|
GFairHint: Improving Individual Fairness for Graph Neural Networks via
Fairness Hint
|
[
"Paiheng Xu",
"Yuhang Zhou",
"Bang An",
"Wei Ai",
"Furong Huang"
] |
Given the growing concerns about fairness in machine learning and the
impressive performance of Graph Neural Networks (GNNs) on graph data learning,
algorithmic fairness in GNNs has attracted significant attention. While many
existing studies improve fairness at the group level, only a few works promote
individual fairness, which renders similar outcomes for similar individuals. A
desirable framework that promotes individual fairness should (1) balance
between fairness and performance, (2) accommodate two commonly-used individual
similarity measures (externally annotated and computed from input features),
(3) generalize across various GNN models, and (4) be computationally efficient.
Unfortunately, none of the prior work achieves all the desirables. In this
work, we propose a novel method, GFairHint, which promotes individual fairness
in GNNs and achieves all aforementioned desirables. GFairHint learns fairness
representations through an auxiliary link prediction task, and then
concatenates the representations with the learned node embeddings in original
GNNs as a "fairness hint". Through extensive experimental investigations on
five real-world graph datasets under three prevalent GNN models covering both
individual similarity measures above, GFairHint achieves the best fairness
results in almost all combinations of datasets with various backbone models,
while generating comparable utility results, with much less computational cost
compared to the previous state-of-the-art (SoTA) method.
|
[
"cs.LG",
"cs.CY",
"cs.SI"
] | false |
2305.15643
|
2023-05-25T01:43:29Z
|
Federated Composite Saddle Point Optimization
|
[
"Site Bai",
"Brian Bullins"
] |
Federated learning (FL) approaches for saddle point problems (SPP) have
recently gained in popularity due to the critical role they play in machine
learning (ML). Existing works mostly target smooth unconstrained objectives in
Euclidean space, whereas ML problems often involve constraints or non-smooth
regularization, which results in a need for composite optimization. Addressing
these issues, we propose Federated Dual Extrapolation (FeDualEx), an extra-step
primal-dual algorithm, which is the first of its kind that encompasses both
saddle point optimization and composite objectives under the FL paradigm. Both
the convergence analysis and the empirical evaluation demonstrate the
effectiveness of FeDualEx in these challenging settings. In addition, even for
the sequential version of FeDualEx, we provide rates for the stochastic
composite saddle point setting which, to our knowledge, are not found in prior
literature.
|
[
"cs.LG",
"math.OC",
"stat.ML"
] | false |
2305.15669
|
2023-05-25T02:40:32Z
|
PROTO: Iterative Policy Regularized Offline-to-Online Reinforcement
Learning
|
[
"Jianxiong Li",
"Xiao Hu",
"Haoran Xu",
"Jingjing Liu",
"Xianyuan Zhan",
"Ya-Qin Zhang"
] |
Offline-to-online reinforcement learning (RL), by combining the benefits of
offline pretraining and online finetuning, promises enhanced sample efficiency
and policy performance. However, existing methods, effective as they are,
suffer from suboptimal performance, limited adaptability, and unsatisfactory
computational efficiency. We propose a novel framework, PROTO, which overcomes
the aforementioned limitations by augmenting the standard RL objective with an
iteratively evolving regularization term. Performing a trust-region-style
update, PROTO yields stable initial finetuning and optimal final performance by
gradually evolving the regularization term to relax the constraint strength. By
adjusting only a few lines of code, PROTO can bridge any offline policy
pretraining and standard off-policy RL finetuning to form a powerful
offline-to-online RL pathway, birthing great adaptability to diverse methods.
Simple yet elegant, PROTO imposes minimal additional computation and enables
highly efficient online finetuning. Extensive experiments demonstrate that
PROTO achieves superior performance over SOTA baselines, offering an adaptable
and efficient offline-to-online RL framework.
|
[
"cs.LG",
"cs.AI",
"cs.RO"
] | false |
2305.15719
|
2023-05-25T05:02:35Z
|
Efficient Neural Music Generation
|
[
"Max W. Y. Lam",
"Qiao Tian",
"Tang Li",
"Zongyu Yin",
"Siyuan Feng",
"Ming Tu",
"Yuliang Ji",
"Rui Xia",
"Mingbo Ma",
"Xuchen Song",
"Jitong Chen",
"Yuping Wang",
"Yuxuan Wang"
] |
Recent progress in music generation has been remarkably advanced by the
state-of-the-art MusicLM, which comprises a hierarchy of three LMs,
respectively, for semantic, coarse acoustic, and fine acoustic modelings. Yet,
sampling with the MusicLM requires processing through these LMs one by one to
obtain the fine-grained acoustic tokens, making it computationally expensive
and prohibitive for a real-time generation. Efficient music generation with a
quality on par with MusicLM remains a significant challenge. In this paper, we
present MeLoDy (M for music; L for LM; D for diffusion), an LM-guided diffusion
model that generates music audios of state-of-the-art quality meanwhile
reducing 95.7% or 99.6% forward passes in MusicLM, respectively, for sampling
10s or 30s music. MeLoDy inherits the highest-level LM from MusicLM for
semantic modeling, and applies a novel dual-path diffusion (DPD) model and an
audio VAE-GAN to efficiently decode the conditioning semantic tokens into
waveform. DPD is proposed to simultaneously model the coarse and fine acoustics
by incorporating the semantic information into segments of latents effectively
via cross-attention at each denoising step. Our experimental results suggest
the superiority of MeLoDy, not only in its practical advantages on sampling
speed and infinitely continuable generation, but also in its state-of-the-art
musicality, audio quality, and text correlation.
Our samples are available at https://Efficient-MeLoDy.github.io/.
|
[
"cs.SD",
"cs.AI",
"cs.LG",
"eess.AS"
] | true |
2305.15723
|
2023-05-25T05:11:40Z
|
Learning across Data Owners with Joint Differential Privacy
|
[
"Yangsibo Huang",
"Haotian Jiang",
"Daogao Liu",
"Mohammad Mahdian",
"Jieming Mao",
"Vahab Mirrokni"
] |
In this paper, we study the setting in which data owners train machine
learning models collaboratively under a privacy notion called joint
differential privacy [Kearns et al., 2018]. In this setting, the model trained
for each data owner $j$ uses $j$'s data without privacy consideration and other
owners' data with differential privacy guarantees. This setting was initiated
in [Jain et al., 2021] with a focus on linear regressions. In this paper, we
study this setting for stochastic convex optimization (SCO). We present an
algorithm that is a variant of DP-SGD [Song et al., 2013; Abadi et al., 2016]
and provides theoretical bounds on its population loss. We compare our
algorithm to several baselines and discuss for what parameter setups our
algorithm is more preferred. We also empirically study joint differential
privacy in the multi-class classification problem over two public datasets. Our
empirical findings are well-connected to the insights from our theoretical
results.
|
[
"cs.LG",
"cs.CR",
"math.OC"
] | false |
2305.16566
|
2023-05-26T01:18:52Z
|
Integrating Listwise Ranking into Pairwise-based Image-Text Retrieval
|
[
"Zheng Li",
"Caili Guo",
"Xin Wang",
"Zerun Feng",
"Yanjun Wang"
] |
Image-Text Retrieval (ITR) is essentially a ranking problem. Given a query
caption, the goal is to rank candidate images by relevance, from large to
small. The current ITR datasets are constructed in a pairwise manner.
Image-text pairs are annotated as positive or negative. Correspondingly, ITR
models mainly use pairwise losses, such as triplet loss, to learn to rank.
Pairwise-based ITR increases positive pair similarity while decreasing negative
pair similarity indiscriminately. However, the relevance between dissimilar
negative pairs is different. Pairwise annotations cannot reflect this
difference in relevance. In the current datasets, pairwise annotations miss
many correlations. There are many potential positive pairs among the pairs
labeled as negative. Pairwise-based ITR can only rank positive samples before
negative samples, but cannot rank negative samples by relevance. In this paper,
we integrate listwise ranking into conventional pairwise-based ITR. Listwise
ranking optimizes the entire ranking list based on relevance scores.
Specifically, we first propose a Relevance Score Calculation (RSC) module to
calculate the relevance score of the entire ranked list. Then we choose the
ranking metric, Normalized Discounted Cumulative Gain (NDCG), as the
optimization objective. We transform the non-differentiable NDCG into a
differentiable listwise loss, named Smooth-NDCG (S-NDCG). Our listwise ranking
approach can be plug-and-play integrated into current pairwise-based ITR
models. Experiments on ITR benchmarks show that integrating listwise ranking
can improve the performance of current ITR models and provide more
user-friendly retrieval results. The code is available at
https://github.com/AAA-Zheng/Listwise_ITR.
|
[
"cs.CV"
] | false |
2305.16602
|
2023-05-26T03:21:30Z
|
Discovering Novel Actions in an Open World with Object-Grounded Visual
Commonsense Reasoning
|
[
"Sathyanarayanan N. Aakur",
"Sanjoy Kundu",
"Shubham Trehan"
] |
Learning to infer labels in an open world, i.e., in an environment where the
target ``labels'' are unknown, is an important characteristic for achieving
autonomy. Foundation models pre-trained on enormous amounts of data have shown
remarkable generalization skills through prompting, particularly in zero-shot
inference. However, their performance is restricted to the correctness of the
target label's search space. In an open world where these labels are unknown,
the search space can be exceptionally large. It can require reasoning over
several combinations of elementary concepts to arrive at an inference, which
severely restricts the performance of such models. To tackle this challenging
problem, we propose a neuro-symbolic framework called ALGO - novel Action
Learning with Grounded Object recognition that can use symbolic knowledge
stored in large-scale knowledge bases to infer activities (verb-noun
combinations) in egocentric videos with limited supervision using two steps.
First, we propose a novel neuro-symbolic prompting approach that uses
object-centric vision-language foundation models as a noisy oracle to ground
objects in the video through evidence-based reasoning. Second, driven by prior
commonsense knowledge, we discover plausible activities through an energy-based
symbolic pattern theory framework and learn to ground knowledge-based action
(verb) concepts in the video. Extensive experiments on two publicly available
datasets (GTEA Gaze and GTEA Gaze Plus) demonstrate its performance on
open-world activity inference and its generalization to unseen actions in an
unknown search space. We show that ALGO can be extended to zero-shot settings
and demonstrate its competitive performance to multimodal foundation models.
|
[
"cs.CV"
] | false |
2305.16682
|
2023-05-26T07:04:00Z
|
Sharpend Cosine Similarity based Neural Network for Hyperspectral Image
Classification
|
[
"Muhammad Ahmad"
] |
Hyperspectral Image Classification (HSIC) is a difficult task due to high
inter and intra-class similarity and variability, nested regions, and
overlapping. 2D Convolutional Neural Networks (CNN) emerged as a viable network
whereas, 3D CNNs are a better alternative due to accurate classification.
However, 3D CNNs are highly computationally complex due to their volume and
spectral dimensions. Moreover, down-sampling and hierarchical filtering (high
frequency) i.e., texture features need to be smoothed during the forward pass
which is crucial for accurate HSIC. Furthermore, CNN requires tons of tuning
parameters which increases the training time. Therefore, to overcome the
aforesaid issues, Sharpened Cosine Similarity (SCS) concept as an alternative
to convolutions in a Neural Network for HSIC is introduced. SCS is
exceptionally parameter efficient due to skipping the non-linear activation
layers, normalization, and dropout after the SCS layer. Use of MaxAbsPool
instead of MaxPool which selects the element with the highest magnitude of
activity, even if it's negative. Experimental results on publicly available HSI
datasets proved the performance of SCS as compared to the convolutions in
Neural Networks.
|
[
"cs.CV"
] | false |
2305.16685
|
2023-05-26T07:12:35Z
|
S4M: Generating Radiology Reports by A Single Model for Multiple Body
Parts
|
[
"Qi Chen",
"Yutong Xie",
"Biao Wu",
"Minh-Son To",
"James Ang",
"Qi Wu"
] |
In this paper, we seek to design a report generation model that is able to
generate reasonable reports even given different images of various body parts.
We start by directly merging multiple datasets and training a single report
generation model on this one. We, however, observe that the reports generated
in such a simple way only obtain comparable performance compared with that
trained separately on each specific dataset. We suspect that this is caused by
the dilemma between the diversity of body parts and the limited availability of
medical data. To develop robust and generalizable models, it is important to
consider a diverse range of body parts and medical conditions. However,
collecting a sufficiently large dataset for each specific body part can be
difficult due to various factors, such as data availability and privacy
concerns. Thus, rather than striving for more data, we propose a
single-for-multiple (S4M) framework, which seeks to facilitate the learning of
the report generation model with two auxiliary priors: an explicit prior (\ie,
feeding radiology-informed knowledge) and an implicit prior (\ie, guided by
cross-modal features). Specifically, based on the conventional encoder-decoder
report generation framework, we incorporate two extra branches: a
Radiology-informed Knowledge Aggregation (RadKA) branch and an Implicit Prior
Guidance (IPG) branch. We conduct the experiments on our merged dataset which
consists of a public dataset (\ie, IU-Xray) and five private datasets, covering
six body parts: chest, abdomen, knee, hip, wrist and shoulder. Our S4M model
outperforms all the baselines, regardless of whether they are trained on
separate or merged datasets. Code is available at:
\url{https://github.com/YtongXie/S4M}.
|
[
"cs.CV"
] | false |
2305.16804
|
2023-05-26T10:34:58Z
|
Towards Open-World Segmentation of Parts
|
[
"Tai-Yu Pan",
"Qing Liu",
"Wei-Lun Chao",
"Brian Price"
] |
Segmenting object parts such as cup handles and animal bodies is important in
many real-world applications but requires more annotation effort. The largest
dataset nowadays contains merely two hundred object categories, implying the
difficulty to scale up part segmentation to an unconstrained setting. To
address this, we propose to explore a seemingly simplified but empirically
useful and scalable task, class-agnostic part segmentation. In this problem, we
disregard the part class labels in training and instead treat all of them as a
single part class. We argue and demonstrate that models trained without part
classes can better localize parts and segment them on objects unseen in
training. We then present two further improvements. First, we propose to make
the model object-aware, leveraging the fact that parts are "compositions",
whose extents are bounded by the corresponding objects and whose appearances
are by nature not independent but bundled. Second, we introduce a novel
approach to improve part segmentation on unseen objects, inspired by an
interesting finding -- for unseen objects, the pixel-wise features extracted by
the model often reveal high-quality part segments. To this end, we propose a
novel self-supervised procedure that iterates between pixel clustering and
supervised contrastive learning that pulls pixels closer or pushes them away.
Via extensive experiments on PartImageNet and Pascal-Part, we show notable and
consistent gains by our approach, essentially a critical step towards
open-world part segmentation.
|
[
"cs.CV"
] | false |
2305.16807
|
2023-05-26T10:41:08Z
|
Negative-prompt Inversion: Fast Image Inversion for Editing with
Text-guided Diffusion Models
|
[
"Daiki Miyake",
"Akihiro Iohara",
"Yu Saito",
"Toshiyuki Tanaka"
] |
In image editing employing diffusion models, it is crucial to preserve the
reconstruction quality of the original image while changing its style. Although
existing methods ensure reconstruction quality through optimization, a drawback
of these is the significant amount of time required for optimization. In this
paper, we propose negative-prompt inversion, a method capable of achieving
equivalent reconstruction solely through forward propagation without
optimization, thereby enabling much faster editing processes. We experimentally
demonstrate that the reconstruction quality of our method is comparable to that
of existing methods, allowing for inversion at a resolution of 512 pixels and
with 50 sampling steps within approximately 5 seconds, which is more than 30
times faster than null-text inversion. Reduction of the computation time by the
proposed method further allows us to use a larger number of sampling steps in
diffusion models to improve the reconstruction quality with a moderate increase
in computation time.
|
[
"cs.CV"
] | false |
2305.16936
|
2023-05-26T13:52:57Z
|
CRoSS: Diffusion Model Makes Controllable, Robust and Secure Image
Steganography
|
[
"Jiwen Yu",
"Xuanyu Zhang",
"Youmin Xu",
"Jian Zhang"
] |
Current image steganography techniques are mainly focused on cover-based
methods, which commonly have the risk of leaking secret images and poor
robustness against degraded container images. Inspired by recent developments
in diffusion models, we discovered that two properties of diffusion models, the
ability to achieve translation between two images without training, and
robustness to noisy data, can be used to improve security and natural
robustness in image steganography tasks. For the choice of diffusion model, we
selected Stable Diffusion, a type of conditional diffusion model, and fully
utilized the latest tools from open-source communities, such as LoRAs and
ControlNets, to improve the controllability and diversity of container images.
In summary, we propose a novel image steganography framework, named
Controllable, Robust and Secure Image Steganography (CRoSS), which has
significant advantages in controllability, robustness, and security compared to
cover-based image steganography methods. These benefits are obtained without
additional training. To our knowledge, this is the first work to introduce
diffusion models to the field of image steganography. In the experimental
section, we conducted detailed experiments to demonstrate the advantages of our
proposed CRoSS framework in controllability, robustness, and security.
|
[
"cs.CV"
] | false |
2305.16968
|
2023-05-26T14:22:03Z
|
Linear Object Detection in Document Images using Multiple Object
Tracking
|
[
"Philippe Bernet",
"Joseph Chazalon",
"Edwin Carlinet",
"Alexandre Bourquelot",
"Elodie Puybareau"
] |
Linear objects convey substantial information about document structure, but
are challenging to detect accurately because of degradation (curved, erased) or
decoration (doubled, dashed). Many approaches can recover some vector
representation, but only one closed-source technique introduced in 1994, based
on Kalman filters (a particular case of Multiple Object Tracking algorithm),
can perform a pixel-accurate instance segmentation of linear objects and enable
to selectively remove them from the original image. We aim at re-popularizing
this approach and propose: 1. a framework for accurate instance segmentation of
linear objects in document images using Multiple Object Tracking (MOT); 2.
document image datasets and metrics which enable both vector- and pixel-based
evaluation of linear object detection; 3. performance measures of MOT
approaches against modern segment detectors; 4. performance measures of various
tracking strategies, exhibiting alternatives to the original Kalman filters
approach; and 5. an open-source implementation of a detector which can
discriminate instances of curved, erased, dashed, intersecting and/or
overlapping linear objects.
|
[
"cs.CV"
] | false |
2305.17007
|
2023-05-26T15:05:19Z
|
Improving Knowledge Distillation via Regularizing Feature Norm and
Direction
|
[
"Yuzhu Wang",
"Lechao Cheng",
"Manni Duan",
"Yongheng Wang",
"Zunlei Feng",
"Shu Kong"
] |
Knowledge distillation (KD) exploits a large well-trained model (i.e.,
teacher) to train a small student model on the same dataset for the same task.
Treating teacher features as knowledge, prevailing methods of knowledge
distillation train student by aligning its features with the teacher's, e.g.,
by minimizing the KL-divergence between their logits or L2 distance between
their intermediate features. While it is natural to believe that better
alignment of student features to the teacher better distills teacher knowledge,
simply forcing this alignment does not directly contribute to the student's
performance, e.g., classification accuracy. In this work, we propose to align
student features with class-mean of teacher features, where class-mean
naturally serves as a strong classifier. To this end, we explore baseline
techniques such as adopting the cosine distance based loss to encourage the
similarity between student features and their corresponding class-means of the
teacher. Moreover, we train the student to produce large-norm features,
inspired by other lines of work (e.g., model pruning and domain adaptation),
which find the large-norm features to be more significant. Finally, we propose
a rather simple loss term (dubbed ND loss) to simultaneously (1) encourage
student to produce large-\emph{norm} features, and (2) align the
\emph{direction} of student features and teacher class-means. Experiments on
standard benchmarks demonstrate that our explored techniques help existing KD
methods achieve better performance, i.e., higher classification accuracy on
ImageNet and CIFAR100 datasets, and higher detection precision on COCO dataset.
Importantly, our proposed ND loss helps the most, leading to the
state-of-the-art performance on these benchmarks. The source code is available
at \url{https://github.com/WangYZ1608/Knowledge-Distillation-via-ND}.
|
[
"cs.CV"
] | false |
2305.17011
|
2023-05-26T15:13:44Z
|
SOC: Semantic-Assisted Object Cluster for Referring Video Object
Segmentation
|
[
"Zhuoyan Luo",
"Yicheng Xiao",
"Yong Liu",
"Shuyan Li",
"Yitong Wang",
"Yansong Tang",
"Xiu Li",
"Yujiu Yang"
] |
This paper studies referring video object segmentation (RVOS) by boosting
video-level visual-linguistic alignment. Recent approaches model the RVOS task
as a sequence prediction problem and perform multi-modal interaction as well as
segmentation for each frame separately. However, the lack of a global view of
video content leads to difficulties in effectively utilizing inter-frame
relationships and understanding textual descriptions of object temporal
variations. To address this issue, we propose Semantic-assisted Object Cluster
(SOC), which aggregates video content and textual guidance for unified temporal
modeling and cross-modal alignment. By associating a group of frame-level
object embeddings with language tokens, SOC facilitates joint space learning
across modalities and time steps. Moreover, we present multi-modal contrastive
supervision to help construct well-aligned joint space at the video level. We
conduct extensive experiments on popular RVOS benchmarks, and our method
outperforms state-of-the-art competitors on all benchmarks by a remarkable
margin. Besides, the emphasis on temporal coherence enhances the segmentation
stability and adaptability of our method in processing text expressions with
temporal variations. Code will be available.
|
[
"cs.CV"
] | false |
2305.17024
|
2023-05-26T15:32:22Z
|
Contouring by Unit Vector Field Regression
|
[
"Amir Jamaludin",
"Sarim Ather",
"Timor Kadir",
"Rhydian Windsor"
] |
This work introduces a simple deep-learning based method to delineate
contours by `walking' along learnt unit vector fields. We demonstrate the
effectiveness of our pipeline on the unique case of open contours on the task
of delineating the sacroiliac joints (SIJs) in spinal MRIs. We show that: (i)
95% of the time the average root mean square error of the predicted contour
against the original ground truth is below 4.5 pixels (2.5mm for a standard
T1-weighted SIJ MRI), and (ii) the proposed method is better than the baseline
of regressing vertices or landmarks of contours.
|
[
"cs.CV"
] | false |
2305.17091
|
2023-05-26T17:02:42Z
|
SSSegmenation: An Open Source Supervised Semantic Segmentation Toolbox
Based on PyTorch
|
[
"Zhenchao Jin"
] |
This paper presents SSSegmenation, which is an open source supervised
semantic image segmentation toolbox based on PyTorch. The design of this
toolbox is motivated by MMSegmentation while it is easier to use because of
fewer dependencies and achieves superior segmentation performance under a
comparable training and testing setup. Moreover, the toolbox also provides
plenty of trained weights for popular and contemporary semantic segmentation
methods, including Deeplab, PSPNet, OCRNet, MaskFormer, \emph{etc}. We expect
that this toolbox can contribute to the future development of semantic
segmentation. Codes and model zoos are available at
\href{https://github.com/SegmentationBLWX/sssegmentation/}{SSSegmenation}.
|
[
"cs.CV"
] | false |
2305.17096
|
2023-05-26T17:10:24Z
|
GRAtt-VIS: Gated Residual Attention for Auto Rectifying Video Instance
Segmentation
|
[
"Tanveer Hannan",
"Rajat Koner",
"Maximilian Bernhard",
"Suprosanna Shit",
"Bjoern Menze",
"Volker Tresp",
"Matthias Schubert",
"Thomas Seidl"
] |
Recent trends in Video Instance Segmentation (VIS) have seen a growing
reliance on online methods to model complex and lengthy video sequences.
However, the degradation of representation and noise accumulation of the online
methods, especially during occlusion and abrupt changes, pose substantial
challenges. Transformer-based query propagation provides promising directions
at the cost of quadratic memory attention. However, they are susceptible to the
degradation of instance features due to the above-mentioned challenges and
suffer from cascading effects. The detection and rectification of such errors
remain largely underexplored. To this end, we introduce \textbf{GRAtt-VIS},
\textbf{G}ated \textbf{R}esidual \textbf{Att}ention for \textbf{V}ideo
\textbf{I}nstance \textbf{S}egmentation. Firstly, we leverage a
Gumbel-Softmax-based gate to detect possible errors in the current frame. Next,
based on the gate activation, we rectify degraded features from its past
representation. Such a residual configuration alleviates the need for dedicated
memory and provides a continuous stream of relevant instance features.
Secondly, we propose a novel inter-instance interaction using gate activation
as a mask for self-attention. This masking strategy dynamically restricts the
unrepresentative instance queries in the self-attention and preserves vital
information for long-term tracking. We refer to this novel combination of Gated
Residual Connection and Masked Self-Attention as \textbf{GRAtt} block, which
can easily be integrated into the existing propagation-based framework.
Further, GRAtt blocks significantly reduce the attention overhead and simplify
dynamic temporal modeling. GRAtt-VIS achieves state-of-the-art performance on
YouTube-VIS and the highly challenging OVIS dataset, significantly improving
over previous methods. Code is available at
\url{https://github.com/Tanveer81/GRAttVIS}.
|
[
"cs.CV"
] | false |
2305.17207
|
2023-05-26T18:58:56Z
|
Building One-class Detector for Anything: Open-vocabulary Zero-shot OOD
Detection Using Text-image Models
|
[
"Yunhao Ge",
"Jie Ren",
"Jiaping Zhao",
"Kaifeng Chen",
"Andrew Gallagher",
"Laurent Itti",
"Balaji Lakshminarayanan"
] |
We focus on the challenge of out-of-distribution (OOD) detection in deep
learning models, a crucial aspect in ensuring reliability. Despite considerable
effort, the problem remains significantly challenging in deep learning models
due to their propensity to output over-confident predictions for OOD inputs. We
propose a novel one-class open-set OOD detector that leverages text-image
pre-trained models in a zero-shot fashion and incorporates various descriptions
of in-domain and OOD. Our approach is designed to detect anything not in-domain
and offers the flexibility to detect a wide variety of OOD, defined via fine-
or coarse-grained labels, or even in natural language. We evaluate our approach
on challenging benchmarks including large-scale datasets containing
fine-grained, semantically similar classes, distributionally shifted images,
and multi-object images containing a mixture of in-domain and OOD objects. Our
method shows superior performance over previous methods on all benchmarks. Code
is available at https://github.com/gyhandy/One-Class-Anything
|
[
"cs.CV"
] | false |
2305.17252
|
2023-05-26T20:42:52Z
|
Generalizable Pose Estimation Using Implicit Scene Representations
|
[
"Vaibhav Saxena",
"Kamal Rahimi Malekshan",
"Linh Tran",
"Yotto Koga"
] |
6-DoF pose estimation is an essential component of robotic manipulation
pipelines. However, it usually suffers from a lack of generalization to new
instances and object types. Most widely used methods learn to infer the object
pose in a discriminative setup where the model filters useful information to
infer the exact pose of the object. While such methods offer accurate poses,
the model does not store enough information to generalize to new objects. In
this work, we address the generalization capability of pose estimation using
models that contain enough information about the object to render it in
different poses. We follow the line of work that inverts neural renderers to
infer the pose. We propose i-$\sigma$SRN to maximize the information flowing
from the input pose to the rendered scene and invert them to infer the pose
given an input image. Specifically, we extend Scene Representation Networks
(SRNs) by incorporating a separate network for density estimation and introduce
a new way of obtaining a weighted scene representation. We investigate several
ways of initial pose estimates and losses for the neural renderer. Our final
evaluation shows a significant improvement in inference performance and speed
compared to existing approaches.
|
[
"cs.CV"
] | false |
2305.17305
|
2023-05-26T23:43:21Z
|
DynaShare: Task and Instance Conditioned Parameter Sharing for
Multi-Task Learning
|
[
"Elahe Rahimian",
"Golara Javadi",
"Frederick Tung",
"Gabriel Oliveira"
] |
Multi-task networks rely on effective parameter sharing to achieve robust
generalization across tasks. In this paper, we present a novel parameter
sharing method for multi-task learning that conditions parameter sharing on
both the task and the intermediate feature representations at inference time.
In contrast to traditional parameter sharing approaches, which fix or learn a
deterministic sharing pattern during training and apply the same pattern to all
examples during inference, we propose to dynamically decide which parts of the
network to activate based on both the task and the input instance. Our approach
learns a hierarchical gating policy consisting of a task-specific policy for
coarse layer selection and gating units for individual input instances, which
work together to determine the execution path at inference time. Experiments on
the NYU v2, Cityscapes and MIMIC-III datasets demonstrate the potential of the
proposed approach and its applicability across problem domains.
|
[
"cs.CV"
] | false |
2305.18547
|
2023-05-26T07:35:49Z
|
Learning from Multi-Perception Features for Real-Word Image
Super-resolution
|
[
"Axi Niu",
"Kang Zhang",
"Trung X. Pham",
"Pei Wang",
"Jinqiu Sun",
"In So Kweon",
"Yanning Zhang"
] |
Currently, there are two popular approaches for addressing real-world image
super-resolution problems: degradation-estimation-based and blind-based
methods. However, degradation-estimation-based methods may be inaccurate in
estimating the degradation, making them less applicable to real-world LR
images. On the other hand, blind-based methods are often limited by their fixed
single perception information, which hinders their ability to handle diverse
perceptual characteristics. To overcome this limitation, we propose a novel SR
method called MPF-Net that leverages multiple perceptual features of input
images. Our method incorporates a Multi-Perception Feature Extraction (MPFE)
module to extract diverse perceptual information and a series of newly-designed
Cross-Perception Blocks (CPB) to combine this information for effective
super-resolution reconstruction. Additionally, we introduce a contrastive
regularization term (CR) that improves the model's learning capability by using
newly generated HR and LR images as positive and negative samples for ground
truth HR. Experimental results on challenging real-world SR datasets
demonstrate that our approach significantly outperforms existing
state-of-the-art methods in both qualitative and quantitative measures.
|
[
"cs.CV"
] | false |
2305.16567
|
2023-05-26T01:22:35Z
|
Structured Latent Variable Models for Articulated Object Interaction
|
[
"Emily Liu",
"Michael Noseworthy",
"Nicholas Roy"
] |
In this paper, we investigate a scenario in which a robot learns a
low-dimensional representation of a door given a video of the door opening or
closing. This representation can be used to infer door-related parameters and
predict the outcomes of interacting with the door. Current machine learning
based approaches in the doors domain are based primarily on labelled datasets.
However, the large quantity of available door data suggests the feasibility of
a semisupervised approach based on pretraining. To exploit the hierarchical
structure of the dataset where each door has multiple associated images, we
pretrain with a structured latent variable model known as a neural
statistician. The neural satsitician enforces separation between shared
context-level variables (common across all images associated with the same
door) and instance-level variables (unique to each individual image). We first
demonstrate that the neural statistician is able to learn an embedding that
enables reconstruction and sampling of realistic door images. Then, we evaluate
the correspondence of the learned embeddings to human-interpretable parameters
in a series of supervised inference tasks. It was found that a pretrained
neural statistician encoder outperformed analogous context-free baselines when
predicting door handedness, size, angle location, and configuration from door
images. Finally, in a visual bandit door-opening task with a variety of door
configuration, we found that neural statistician embeddings achieve lower
regret than context-free baselines.
|
[
"cs.LG",
"cs.CV"
] | false |
2305.16642
|
2023-05-26T05:30:04Z
|
Improving Position Encoding of Transformers for Multivariate Time Series
Classification
|
[
"Navid Mohammadi Foumani",
"Chang Wei Tan",
"Geoffrey I. Webb",
"Mahsa Salehi"
] |
Transformers have demonstrated outstanding performance in many applications
of deep learning. When applied to time series data, transformers require
effective position encoding to capture the ordering of the time series data.
The efficacy of position encoding in time series analysis is not well-studied
and remains controversial, e.g., whether it is better to inject absolute
position encoding or relative position encoding, or a combination of them. In
order to clarify this, we first review existing absolute and relative position
encoding methods when applied in time series classification. We then proposed a
new absolute position encoding method dedicated to time series data called time
Absolute Position Encoding (tAPE). Our new method incorporates the series
length and input embedding dimension in absolute position encoding.
Additionally, we propose computationally Efficient implementation of Relative
Position Encoding (eRPE) to improve generalisability for time series. We then
propose a novel multivariate time series classification (MTSC) model combining
tAPE/eRPE and convolution-based input encoding named ConvTran to improve the
position and data embedding of time series data. The proposed absolute and
relative position encoding methods are simple and efficient. They can be easily
integrated into transformer blocks and used for downstream tasks such as
forecasting, extrinsic regression, and anomaly detection. Extensive experiments
on 32 multivariate time-series datasets show that our model is significantly
more accurate than state-of-the-art convolution and transformer-based models.
Code and models are open-sourced at
\url{https://github.com/Navidfoumani/ConvTran}.
|
[
"cs.LG",
"cs.CV"
] | false |
2305.16657
|
2023-05-26T06:02:31Z
|
Higher Order Gauge Equivariant CNNs on Riemannian Manifolds and
Applications
|
[
"Gianfranco Cortes",
"Yue Yu",
"Robin Chen",
"Melissa Armstrong",
"David Vaillancourt",
"Baba C. Vemuri"
] |
With the advent of group equivariant convolutions in deep networks
literature, spherical CNNs with $\mathsf{SO}(3)$-equivariant layers have been
developed to cope with data that are samples of signals on the sphere $S^2$.
One can implicitly obtain $\mathsf{SO}(3)$-equivariant convolutions on $S^2$
with significant efficiency gains by explicitly requiring gauge equivariance
w.r.t. $\mathsf{SO}(2)$. In this paper, we build on this fact by introducing a
higher order generalization of the gauge equivariant convolution, whose
implementation is dubbed a gauge equivariant Volterra network (GEVNet). This
allows us to model spatially extended nonlinear interactions within a given
receptive field while still maintaining equivariance to global isometries. We
prove theoretical results regarding the equivariance and construction of higher
order gauge equivariant convolutions. Then, we empirically demonstrate the
parameter efficiency of our model, first on computer vision benchmark data
(e.g. spherical MNIST), and then in combination with a convolutional kernel
network (CKN) on neuroimaging data. In the neuroimaging data experiments, the
resulting two-part architecture (CKN + GEVNet) is used to automatically
discriminate between patients with Lewy Body Disease (DLB), Alzheimer's Disease
(AD) and Parkinson's Disease (PD) from diffusion magnetic resonance images
(dMRI). The GEVNet extracts micro-architectural features within each voxel,
while the CKN extracts macro-architectural features across voxels. This
compound architecture is uniquely poised to exploit the intra- and inter-voxel
information contained in the dMRI data, leading to improved performance over
the classification results obtained from either of the individual components.
|
[
"cs.CV",
"cs.LG"
] | false |
2305.16661
|
2023-05-26T06:16:47Z
|
Gender, Smoking History and Age Prediction from Laryngeal Images
|
[
"Tianxiao Zhang",
"Andrés M. Bur",
"Shannon Kraft",
"Hannah Kavookjian",
"Bryan Renslo",
"Xiangyu Chen",
"Bo Luo",
"Guanghui Wang"
] |
Flexible laryngoscopy is commonly performed by otolaryngologists to detect
laryngeal diseases and to recognize potentially malignant lesions. Recently,
researchers have introduced machine learning techniques to facilitate automated
diagnosis using laryngeal images and achieved promising results. Diagnostic
performance can be improved when patients' demographic information is
incorporated into models. However, manual entry of patient data is time
consuming for clinicians. In this study, we made the first endeavor to employ
deep learning models to predict patient demographic information to improve
detector model performance. The overall accuracy for gender, smoking history,
and age was 85.5%, 65.2%, and 75.9%, respectively. We also created a new
laryngoscopic image set for machine learning study and benchmarked the
performance of 8 classical deep learning models based on CNNs and Transformers.
The results can be integrated into current learning models to improve their
performance by incorporating the patient's demographic information.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.16687
|
2023-05-26T07:17:24Z
|
Balanced Supervised Contrastive Learning for Few-Shot Class-Incremental
Learning
|
[
"In-Ug Yoon",
"Tae-Min Choi",
"Young-Min Kim",
"Jong-Hwan Kim"
] |
Few-shot class-incremental learning (FSCIL) presents the primary challenge of
balancing underfitting to a new session's task and forgetting the tasks from
previous sessions. To address this challenge, we develop a simple yet powerful
learning scheme that integrates effective methods for each core component of
the FSCIL network, including the feature extractor, base session classifiers,
and incremental session classifiers. In feature extractor training, our goal is
to obtain balanced generic representations that benefit both current viewable
and unseen or past classes. To achieve this, we propose a balanced supervised
contrastive loss that effectively balances these two objectives. In terms of
classifiers, we analyze and emphasize the importance of unifying initialization
methods for both the base and incremental session classifiers. Our method
demonstrates outstanding ability for new task learning and preventing
forgetting on CUB200, CIFAR100, and miniImagenet datasets, with significant
improvements over previous state-of-the-art methods across diverse metrics. We
conduct experiments to analyze the significance and rationale behind our
approach and visualize the effectiveness of our representations on new tasks.
Furthermore, we conduct diverse ablation studies to analyze the effects of each
module.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.16811
|
2023-05-26T10:43:42Z
|
Improved Visual Story Generation with Adaptive Context Modeling
|
[
"Zhangyin Feng",
"Yuchen Ren",
"Xinmiao Yu",
"Xiaocheng Feng",
"Duyu Tang",
"Shuming Shi",
"Bing Qin"
] |
Diffusion models developed on top of powerful text-to-image generation models
like Stable Diffusion achieve remarkable success in visual story generation.
However, the best-performing approach considers historically generated results
as flattened memory cells, ignoring the fact that not all preceding images
contribute equally to the generation of the characters and scenes at the
current stage. To address this, we present a simple method that improves the
leading system with adaptive context modeling, which is not only incorporated
in the encoder but also adopted as additional guidance in the sampling stage to
boost the global consistency of the generated story. We evaluate our model on
PororoSV and FlintstonesSV datasets and show that our approach achieves
state-of-the-art FID scores on both story visualization and continuation
scenarios. We conduct detailed model analysis and show that our model excels at
generating semantically consistent images for stories.
|
[
"cs.CV",
"cs.CL"
] | false |
2305.16922
|
2023-05-26T13:34:14Z
|
Fast refacing of MR images with a generative neural network lowers
re-identification risk and preserves volumetric consistency
|
[
"Nataliia Molchanova",
"Bénédicte Maréchal",
"Jean-Philippe Thiran",
"Tobias Kober",
"Till Huelnhagen",
"Jonas Richiardi"
] |
With the rise of open data, identifiability of individuals based on 3D
renderings obtained from routine structural magnetic resonance imaging (MRI)
scans of the head has become a growing privacy concern. To protect subject
privacy, several algorithms have been developed to de-identify imaging data
using blurring, defacing or refacing. Completely removing facial structures
provides the best re-identification protection but can significantly impact
post-processing steps, like brain morphometry. As an alternative, refacing
methods that replace individual facial structures with generic templates have a
lower effect on the geometry and intensity distribution of original scans, and
are able to provide more consistent post-processing results by the price of
higher re-identification risk and computational complexity. In the current
study, we propose a novel method for anonymised face generation for defaced 3D
T1-weighted scans based on a 3D conditional generative adversarial network. To
evaluate the performance of the proposed de-identification tool, a comparative
study was conducted between several existing defacing and refacing tools, with
two different segmentation algorithms (FAST and Morphobox). The aim was to
evaluate (i) impact on brain morphometry reproducibility, (ii)
re-identification risk, (iii) balance between (i) and (ii), and (iv) the
processing time. The proposed method takes 9 seconds for face generation and is
suitable for recovering consistent post-processing results after defacing.
|
[
"eess.IV",
"cs.CV"
] | false |
2305.17006
|
2023-05-26T15:04:20Z
|
Zero-shot Visual Question Answering with Language Model Feedback
|
[
"Yifan Du",
"Junyi Li",
"Tianyi Tang",
"Wayne Xin Zhao",
"Ji-Rong Wen"
] |
In this paper, we propose a novel language model guided captioning approach,
LAMOC, for knowledge-based visual question answering (VQA). Our approach
employs the generated captions by a captioning model as the context of an
answer prediction model, which is a Pre-trained Language model (PLM). As the
major contribution, we leverage the guidance and feedback of the prediction
model to improve the capability of the captioning model. In this way, the
captioning model can become aware of the task goal and information need from
the PLM. To develop our approach, we design two specific training stages, where
the first stage adapts the captioning model to the prediction model (selecting
more suitable caption propositions for training) and the second stage tunes the
captioning model according to the task goal (learning from feedback of the
PLM). Extensive experiments demonstrate the effectiveness of the proposed
approach on the knowledge-based VQA task. Specifically, on the challenging
A-OKVQA dataset, LAMOC outperforms several competitive zero-shot methods and
even achieves comparable results to a fine-tuned VLP model. Our code is
publicly available at https://github.com/RUCAIBox/LAMOC.
|
[
"cs.CV",
"cs.CL"
] | false |
2305.17105
|
2023-05-26T17:16:22Z
|
Random-Access Neural Compression of Material Textures
|
[
"Karthik Vaidyanathan",
"Marco Salvi",
"Bartlomiej Wronski",
"Tomas Akenine-Möller",
"Pontus Ebelin",
"Aaron Lefohn"
] |
The continuous advancement of photorealism in rendering is accompanied by a
growth in texture data and, consequently, increasing storage and memory
demands. To address this issue, we propose a novel neural compression technique
specifically designed for material textures. We unlock two more levels of
detail, i.e., 16x more texels, using low bitrate compression, with image
quality that is better than advanced image compression techniques, such as AVIF
and JPEG XL. At the same time, our method allows on-demand, real-time
decompression with random access similar to block texture compression on GPUs,
enabling compression on disk and memory. The key idea behind our approach is
compressing multiple material textures and their mipmap chains together, and
using a small neural network, that is optimized for each material, to
decompress them. Finally, we use a custom training implementation to achieve
practical compression speeds, whose performance surpasses that of general
frameworks, like PyTorch, by an order of magnitude.
|
[
"cs.GR",
"cs.CV",
"I.3"
] | false |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.