arxiv_id (string, 10 chars) | published (string, 20 chars) | titles (string, 9-243 chars) | authors (list, 1-389 items) | abstract (string, 96-3.09k chars) | categories (list, 1-10 items) | selected (bool, 2 classes)
---|---|---|---|---|---|---|
2306.00007
|
2023-05-29T18:27:10Z
|
Datasets for Portuguese Legal Semantic Textual Similarity: Comparing
weak supervision and an annotation process approaches
|
[
"Daniel da Silva Junior",
"Paulo Roberto dos S. Corval",
"Aline Paes",
"Daniel de Oliveira"
] |
The Brazilian judiciary has a large workload, resulting in a long time to
finish legal proceedings. The Brazilian National Council of Justice has established,
in Resolution 469/2022, formal guidance for document and process digitalization,
opening up the possibility of using automatic techniques to help with everyday
tasks in the legal field, particularly with the large number of texts produced in
routine legal procedures. Notably, Artificial Intelligence (AI) techniques
allow for processing and extracting useful information from textual data,
potentially speeding up the process. However, datasets from the legal domain
required by several AI techniques are scarce and difficult to obtain as they
need labels from experts. To address this challenge, this article contributes
four datasets from the legal domain: two with documents and metadata but
unlabeled, and two labeled with a heuristic, aimed at use in
textual semantic similarity tasks. Also, to evaluate the effectiveness of the
proposed heuristic label process, this article presents a small ground truth
dataset generated from domain expert annotations. The analysis of ground truth
labels highlights that semantic analysis of domain text can be challenging even
for domain experts. Also, the comparison between ground truth and heuristic
labels shows that heuristic labels are useful.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.00008
|
2023-05-29T18:42:01Z
|
Brainformers: Trading Simplicity for Efficiency
|
[
"Yanqi Zhou",
"Nan Du",
"Yanping Huang",
"Daiyi Peng",
"Chang Lan",
"Da Huang",
"Siamak Shakeri",
"David So",
"Andrew Dai",
"Yifeng Lu",
"Zhifeng Chen",
"Quoc Le",
"Claire Cui",
"James Laundon",
"Jeff Dean"
] |
Transformers are central to recent successes in natural language processing
and computer vision. Transformers have a mostly uniform backbone where layers
alternate between feed-forward and self-attention in order to build a deep
network. Here we investigate this design choice and find that more complex
blocks that have different permutations of layer primitives can be more
efficient. Using this insight, we develop a complex block, named Brainformer,
that consists of a diverse set of layers, such as sparsely gated feed-forward
layers, dense feed-forward layers, attention layers, and various forms of layer
normalization and activation functions. Brainformer consistently outperforms
the state-of-the-art dense and sparse Transformers, in terms of both quality
and efficiency. A Brainformer model with 8 billion activated parameters per
token demonstrates 2x faster training convergence and 5x faster step time
compared to its GLaM counterpart. In downstream task evaluation, Brainformer
also demonstrates a 3% higher SuperGLUE score with fine-tuning compared to GLaM
with a similar number of activated parameters. Finally, Brainformer largely
outperforms a Primer dense model derived with NAS, with similar computation per
token, on few-shot evaluations.
|
[
"cs.LG",
"cs.CL"
] | true |
2305.17846
|
2023-05-29T02:10:13Z
|
Retraining-free Customized ASR for Enharmonic Words Based on a
Named-Entity-Aware Model and Phoneme Similarity Estimation
|
[
"Yui Sudo",
"Kazuya Hata",
"Kazuhiro Nakadai"
] |
End-to-end automatic speech recognition (E2E-ASR) has the potential to
improve performance, but a specific issue that needs to be addressed is the
difficulty it has in handling enharmonic words: named entities (NEs) with the
same pronunciation and part of speech that are spelled differently. This often
occurs with Japanese personal names that have the same pronunciation but
different Kanji characters. Since such NE words tend to be important keywords,
ASR easily loses user trust if it misrecognizes them. To solve these problems,
this paper proposes a novel retraining-free customized method for E2E-ASRs
based on a named-entity-aware E2E-ASR model and phoneme similarity estimation.
Experimental results show that the proposed method improves the target NE
character error rate by 35.7% on average relative to the conventional E2E-ASR
model when selecting personal names as a target NE.
|
[
"cs.SD",
"cs.CL",
"eess.AS"
] | false |
2305.17878
|
2023-05-29T04:19:35Z
|
Ask an Expert: Leveraging Language Models to Improve Strategic Reasoning
in Goal-Oriented Dialogue Models
|
[
"Qiang Zhang",
"Jason Naradowsky",
"Yusuke Miyao"
] |
Existing dialogue models may encounter scenarios which are not
well-represented in the training data, and as a result generate responses that
are unnatural, inappropriate, or unhelpful. We propose the "Ask an Expert"
framework in which the model is trained with access to an "expert" which it can
consult at each turn. Advice is solicited via a structured dialogue with the
expert, and the model is optimized to selectively utilize (or ignore) it given
the context and dialogue history. In this work the expert takes the form of an
LLM. We evaluate this framework in a mental health support domain, where the
structure of the expert conversation is outlined by pre-specified prompts which
reflect a reasoning strategy taught to practitioners in the field. Blenderbot
models utilizing "Ask an Expert" show quality improvements across all expert
sizes, including those with fewer parameters than the dialogue model itself.
Our best model provides a $\sim 10\%$ improvement over baselines, approaching
human-level scores on "engagingness" and "helpfulness" metrics.
|
[
"cs.CL",
"cs.AI",
"cs.HC"
] | false |
2305.17984
|
2023-05-29T09:47:36Z
|
minOffense: Inter-Agreement Hate Terms for Stable Rules, Concepts,
Transitivities, and Lattices
|
[
"Animesh Chaturvedi",
"Rajesh Sharma"
] |
Hate speech classification has become an important problem due to the spread
of hate speech on social media platforms. For a given set of Hate Terms lists
(HTs-lists) and Hate Speech data (HS-data), it is challenging to understand
which hate term contributes the most for hate speech classification. This paper
contributes two approaches to quantitatively measure and qualitatively
visualise the relationship between co-occurring Hate Terms (HTs). Firstly, we
propose an approach for the classification of hate-speech by producing a Severe
Hate Terms list (Severe HTs-list) from existing HTs-lists. To achieve our goal,
we proposed three metrics (Hatefulness, Relativeness, and Offensiveness) to
measure the severity of HTs. These metrics help create an Inter-agreement
HTs-list, which explains the contribution of an individual hate term toward
hate speech classification. Then, we used the Offensiveness metric values of
HTs above a proposed threshold minimum Offense (minOffense) to generate a new
Severe HTs-list. To evaluate our approach, we used three hate speech datasets
and six hate terms lists. Our approach showed an improvement from 0.845 to 0.923
(best) compared to the baseline. Secondly, we also propose Stable Hate Rule
(SHR) mining to provide ordered co-occurrence of various HTs with minimum
Stability (minStab). The SHR mining detects frequently co-occurring HTs to form
Stable Hate Rules and Concepts. These rules and concepts are used to visualise
the graphs of Transitivities and Lattices formed by HTs.
|
[
"cs.CL",
"cs.AI",
"cs.SI",
"https://www.youtube.com/watch?v=iRGXiJGp3Cc&list=PLtvWi5o3JBnF3yxcjGdT4KCDLxRBIpsyR"
] | false |
2305.18011
|
2023-05-29T11:04:13Z
|
Can We Trust Explainable AI Methods on ASR? An Evaluation on Phoneme
Recognition
|
[
"Xiaoliang Wu",
"Peter Bell",
"Ajitha Rajan"
] |
Explainable AI (XAI) techniques have been widely used to help explain and
understand the output of deep learning models in fields such as image
classification and Natural Language Processing. Interest in using XAI
techniques to explain deep learning-based automatic speech recognition (ASR) is
emerging, but there is not enough evidence on whether these explanations can be
trusted. To address this, we adapt a state-of-the-art XAI technique from the
image classification domain, Local Interpretable Model-Agnostic Explanations
(LIME), to a model trained for a TIMIT-based phoneme recognition task. This
simple task provides a controlled setting for evaluation while also providing
expert-annotated ground truth to assess the quality of explanations. We find that a
variant of LIME based on time-partitioned audio segments, which we propose in
this paper, produces the most reliable explanations, containing the ground
truth 96% of the time in its top three audio segments.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2305.18028
|
2023-05-29T11:39:01Z
|
ADAPTERMIX: Exploring the Efficacy of Mixture of Adapters for
Low-Resource TTS Adaptation
|
[
"Ambuj Mehrish",
"Abhinav Ramesh Kashyap",
"Li Yingting",
"Navonil Majumder",
"Soujanya Poria"
] |
There are significant challenges for speaker adaptation in text-to-speech for
languages that are not widely spoken or for speakers with accents or dialects
that are not well-represented in the training data. To address this issue, we
propose the use of the "mixture of adapters" method. This approach involves
adding multiple adapters within a backbone-model layer to learn the unique
characteristics of different speakers. Our approach outperforms the baseline,
with a noticeable improvement of 5% observed in speaker preference tests when
using only one minute of data for each new speaker. Moreover, following the
adapter paradigm, we fine-tune only the adapter parameters (11% of the total
model parameters). This is a significant achievement in parameter-efficient
speaker adaptation, and one of the first models of its kind. Overall, our
proposed approach offers a promising solution for speech synthesis,
particularly for adapting to speakers from diverse backgrounds.
|
[
"cs.SD",
"cs.AI",
"cs.CL",
"eess.AS"
] | false |
2305.18176
|
2023-05-29T16:09:58Z
|
Perceived Trustworthiness of Natural Language Generators
|
[
"Beatriz Cabrero-Daniel",
"Andrea Sanagustín Cabrero"
] |
Natural Language Generation tools, such as chatbots that can generate
human-like conversational text, are becoming more common both for personal and
professional use. However, there are concerns about their trustworthiness and
ethical implications. The paper addresses the problem of understanding how
different users (e.g., linguists, engineers) perceive and adopt these tools and
their perception of machine-generated text quality. It also discusses the
perceived advantages and limitations of Natural Language Generation tools, as
well as users' beliefs on governance strategies. The main findings of this
study include the impact of users' field and level of expertise on the
perceived trust and adoption of Natural Language Generation tools, the users'
assessment of the accuracy, fluency, and potential biases of machine-generated
text in comparison to human-written text, and an analysis of the advantages and
ethical risks associated with these tools as identified by the participants.
Moreover, this paper discusses the potential implications of these findings for
enhancing the AI development process. The paper sheds light on how different
user characteristics shape their beliefs on the quality and overall
trustworthiness of machine-generated text. Furthermore, it examines the
benefits and risks of these tools from the perspectives of different users.
|
[
"cs.HC",
"cs.AI",
"cs.CL"
] | false |
2305.18189
|
2023-05-29T16:29:22Z
|
Marked Personas: Using Natural Language Prompts to Measure Stereotypes
in Language Models
|
[
"Myra Cheng",
"Esin Durmus",
"Dan Jurafsky"
] |
To recognize and mitigate harms from large language models (LLMs), we need to
understand the prevalence and nuances of stereotypes in LLM outputs. Toward
this end, we present Marked Personas, a prompt-based method to measure
stereotypes in LLMs for intersectional demographic groups without any lexicon
or data labeling. Grounded in the sociolinguistic concept of markedness (which
characterizes explicitly linguistically marked categories versus unmarked
defaults), our proposed method is twofold: 1) prompting an LLM to generate
personas, i.e., natural language descriptions, of the target demographic group
alongside personas of unmarked, default groups; 2) identifying the words that
significantly distinguish personas of the target group from corresponding
unmarked ones. We find that the portrayals generated by GPT-3.5 and GPT-4
contain higher rates of racial stereotypes than human-written portrayals using
the same prompts. The words distinguishing personas of marked (non-white,
non-male) groups reflect patterns of othering and exoticizing these
demographics. An intersectional lens further reveals tropes that dominate
portrayals of marginalized groups, such as tropicalism and the
hypersexualization of minoritized women. These representational harms have
concerning implications for downstream applications like story generation.
|
[
"cs.CL",
"cs.AI",
"cs.CY"
] | false |
2305.18265
|
2023-05-29T17:39:22Z
|
Check-COVID: Fact-Checking COVID-19 News Claims with Scientific Evidence
|
[
"Gengyu Wang",
"Kate Harwood",
"Lawrence Chillrud",
"Amith Ananthram",
"Melanie Subbiah",
"Kathleen McKeown"
] |
We present a new fact-checking benchmark, Check-COVID, that requires systems
to verify claims about COVID-19 from news using evidence from scientific
articles. This approach to fact-checking is particularly challenging as it
requires checking internet text written in everyday language against evidence
from journal articles written in formal academic language. Check-COVID contains
1,504 expert-annotated news claims about the coronavirus paired with
sentence-level evidence from scientific journal articles and veracity labels.
It includes both extracted (journalist-written) and composed
(annotator-written) claims. Experiments using both a fact-checking specific
system and GPT-3.5, which respectively achieve F1 scores of 76.99 and 69.90 on
this task, reveal the difficulty of automatically fact-checking both claim
types and the importance of in-domain data for good performance. Our data and
models are released publicly at https://github.com/posuer/Check-COVID.
|
[
"cs.CL",
"cs.AI",
"cs.CY"
] | false |
2305.18278
|
2023-05-29T17:50:32Z
|
Mathematical Structure of Syntactic Merge
|
[
"Matilde Marcolli",
"Noam Chomsky",
"Robert Berwick"
] |
The syntactic Merge operation of the Minimalist Program in linguistics can be
described mathematically in terms of Hopf algebras, with a formalism similar to
the one arising in the physics of renormalization. This mathematical
formulation of Merge has good descriptive power, as phenomena empirically
observed in linguistics can be justified from simple mathematical arguments. It
also provides a possible mathematical model for externalization and for the
role of syntactic parameters.
|
[
"cs.CL",
"math.QA",
"math.RA",
"68Q70, 16T05"
] | false |
2305.18281
|
2023-05-29T17:53:04Z
|
HyperConformer: Multi-head HyperMixer for Efficient Speech Recognition
|
[
"Florian Mai",
"Juan Zuluaga-Gomez",
"Titouan Parcollet",
"Petr Motlicek"
] |
State-of-the-art ASR systems have achieved promising results by modeling
local and global interactions separately. While the former can be computed
efficiently, global interactions are usually modeled via attention mechanisms,
which are expensive for long input sequences. Here, we address this by
extending HyperMixer, an efficient alternative to attention exhibiting linear
complexity, to the Conformer architecture for speech recognition, leading to
HyperConformer. In particular, multi-head HyperConformer achieves comparable or
higher recognition performance while being more efficient than Conformer in
terms of inference speed, memory, parameter count, and available training data.
HyperConformer achieves a word error rate of 2.9% on Librispeech test-clean
with less than 8M neural parameters and a peak memory during training of 5.7GB,
hence trainable with accessible hardware. The encoder is between 38% (on
mid-length speech) and 56% (on long speech) faster than an equivalent Conformer.
(The HyperConformer recipe is publicly available in:
https://github.com/speechbrain/speechbrain/tree/develop/recipes/LibriSpeech/ASR/transformer/)
|
[
"cs.CL",
"cs.AI",
"cs.LG",
"eess.AS"
] | false |
2305.18283
|
2023-05-29T17:53:35Z
|
CommonAccent: Exploring Large Acoustic Pretrained Models for Accent
Classification Based on Common Voice
|
[
"Juan Zuluaga-Gomez",
"Sara Ahmed",
"Danielius Visockas",
"Cem Subakan"
] |
Despite the recent advancements in Automatic Speech Recognition (ASR), the
recognition of accented speech still remains a dominant problem. In order to
create more inclusive ASR systems, research has shown that the integration of
accent information, as part of a larger ASR framework, can lead to the
mitigation of accented speech errors. We address multilingual accent
classification through the ECAPA-TDNN and Wav2Vec 2.0/XLSR architectures which
have been proven to perform well on a variety of speech-related downstream
tasks. We introduce a simple-to-follow recipe aligned to the SpeechBrain
toolkit for accent classification based on Common Voice 7.0 (English) and
Common Voice 11.0 (Italian, German, and Spanish). Furthermore, we establish a new
state of the art for English accent classification with as high as 95%
accuracy. We also study the internal categorization of the Wav2Vec 2.0
embeddings through t-SNE, noting that there is a level of clustering based on
phonological similarity. (Our recipe is open-source in the SpeechBrain toolkit,
see: https://github.com/speechbrain/speechbrain/tree/develop/recipes)
|
[
"cs.CL",
"cs.AI",
"cs.LG",
"eess.AS"
] | false |
2305.18503
|
2023-05-29T14:55:20Z
|
From Adversarial Arms Race to Model-centric Evaluation: Motivating a
Unified Automatic Robustness Evaluation Framework
|
[
"Yangyi Chen",
"Hongcheng Gao",
"Ganqu Cui",
"Lifan Yuan",
"Dehan Kong",
"Hanlu Wu",
"Ning Shi",
"Bo Yuan",
"Longtao Huang",
"Hui Xue",
"Zhiyuan Liu",
"Maosong Sun",
"Heng Ji"
] |
Textual adversarial attacks can discover models' weaknesses by adding
semantics-preserving but misleading perturbations to the inputs. The long-lasting
adversarial attack-and-defense arms race in Natural Language Processing (NLP)
is algorithm-centric, providing valuable techniques for automatic robustness
evaluation. However, the existing practice of robustness evaluation may exhibit
issues of incomprehensive evaluation, impractical evaluation protocol, and
invalid adversarial samples. In this paper, we aim to set up a unified
automatic robustness evaluation framework, shifting towards model-centric
evaluation to further exploit the advantages of adversarial attacks. To address
the above challenges, we first determine robustness evaluation dimensions based
on model capabilities and specify the reasonable algorithm to generate
adversarial samples for each dimension. Then we establish the evaluation
protocol, including evaluation settings and metrics, under realistic demands.
Finally, we use the perturbation degree of adversarial samples to control the
sample validity. We implement a toolkit RobTest that realizes our automatic
robustness evaluation framework. In our experiments, we conduct a robustness
evaluation of RoBERTa models to demonstrate the effectiveness of our evaluation
framework, and further show the rationality of each component in the framework.
The code will be made public at \url{https://github.com/thunlp/RobTest}.
|
[
"cs.CL",
"cs.CR",
"cs.LG"
] | false |
2305.18596
|
2023-05-29T20:24:14Z
|
Building Accurate Low Latency ASR for Streaming Voice Search
|
[
"Abhinav Goyal",
"Nikesh Garera"
] |
Automatic Speech Recognition (ASR) plays a crucial role in voice-based
applications. For applications requiring real-time feedback like Voice Search,
streaming capability becomes vital. While LSTM/RNN and CTC based ASR systems
are commonly employed for low-latency streaming applications, they often
exhibit lower accuracy compared to state-of-the-art models due to a lack of
future audio frames. In this work, we focus on developing accurate LSTM,
attention, and CTC based streaming ASR models for large-scale Hinglish (a blend
of Hindi and English) Voice Search. We investigate various modifications in
vanilla LSTM training which enhance the system's accuracy while preserving its
streaming capabilities. We also address the critical requirement of
end-of-speech (EOS) detection in streaming applications. We present a simple
training and inference strategy for end-to-end CTC models that enables joint
ASR and EOS detection. The evaluation of our model on Flipkart's Voice Search,
which handles substantial traffic of approximately 6 million queries per day,
demonstrates significant performance gains over the vanilla LSTM-CTC model. Our
model achieves a word error rate (WER) of 3.69% without EOS and 4.78% with EOS
while also reducing the search latency by approximately 1300 ms (a
46.64% reduction) when compared to an independent voice activity detection
(VAD) model.
|
[
"cs.SD",
"cs.CL",
"cs.LG",
"eess.AS"
] | false |
2305.18599
|
2023-05-29T20:32:22Z
|
Improving Generalization for Multimodal Fake News Detection
|
[
"Sahar Tahmasebi",
"Sherzod Hakimov",
"Ralph Ewerth",
"Eric Müller-Budack"
] |
The increasing proliferation of misinformation and its alarming impact have
motivated both industry and academia to develop approaches for fake news
detection. However, state-of-the-art approaches are usually trained on datasets
of smaller size or with a limited set of specific topics. As a consequence,
these models lack generalization capabilities and are not applicable to
real-world data. In this paper, we propose three models that adopt and
fine-tune state-of-the-art multimodal transformers for multimodal fake news
detection. We conduct an in-depth analysis by manipulating the input data, aiming
to explore model performance in realistic use cases on social media. Our study
across multiple models demonstrates that these systems suffer significant
performance drops against manipulated data. To reduce the bias and improve
model generalization, we suggest training data augmentation to conduct more
meaningful experiments for fake news detection on social media. The proposed
data augmentation techniques enable models to generalize better and yield
improved state-of-the-art results.
|
[
"cs.CL",
"cs.IR",
"cs.LG",
"cs.MM"
] | false |
2305.18602
|
2023-05-29T20:37:06Z
|
From `Snippet-lects' to Doculects and Dialects: Leveraging Neural
Representations of Speech for Placing Audio Signals in a Language Landscape
|
[
"Séverine Guillaume",
"Guillaume Wisniewski",
"Alexis Michaud"
] |
XLSR-53, a multilingual model of speech, builds a vector representation from
audio, which allows for a range of computational treatments. The experiments
reported here use this neural representation to estimate the degree of
closeness between audio files, ultimately aiming to extract relevant linguistic
properties. We use max-pooling to aggregate the neural representations from a
"snippet-lect" (the speech in a 5-second audio snippet) to a "doculect" (the
speech in a given resource), then to dialects and languages. We use data from
corpora of 11 dialects belonging to 5 less-studied languages. Similarity
measurements between the 11 corpora bring out the greatest closeness between those
that are known to be dialects of the same language. The findings suggest that
(i) dialect/language can emerge among the various parameters characterizing
audio files and (ii) estimates of overall phonetic/phonological closeness can
be obtained for a little-resourced or fully unknown language. The findings help
shed light on the type of information captured by neural representations of
speech and how it can be extracted from these representations.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2305.18449
|
2023-05-29T03:58:33Z
|
Taming AI Bots: Controllability of Neural States in Large Language
Models
|
[
"Stefano Soatto",
"Paulo Tabuada",
"Pratik Chaudhari",
"Tian Yu Liu"
] |
We tackle the question of whether an agent can, by suitable choice of
prompts, control an AI bot to any state. To that end, we first introduce a
formal definition of ``meaning'' that is amenable to analysis. Then, we
characterize ``meaningful data'' on which large language models (LLMs) are
ostensibly trained, and ``well-trained LLMs'' through conditions that are
largely met by today's LLMs. While a well-trained LLM constructs an embedding
space of meanings that is Euclidean, meanings themselves do not form a vector
(linear) subspace, but rather a quotient space within. We then characterize the
subset of meanings that can be reached by the state of the LLMs for some input
prompt, and show that a well-trained bot can reach any meaning albeit with
small probability. We then introduce a stronger notion of controllability as
{\em almost certain reachability}, and show that, when restricted to the space
of meanings, an AI bot is controllable. We do so after introducing a functional
characterization of attentive AI bots, and finally derive necessary and
sufficient conditions for controllability. The fact that AI bots are
controllable means that an adversary could steer them towards any state.
However, the sampling process can be designed to counteract adverse actions and
avoid reaching undesirable regions of state space before their boundary is
crossed.
|
[
"cs.AI",
"cs.CL",
"cs.LG",
"cs.SY",
"eess.SY"
] | false |
2305.18161
|
2023-05-29T15:44:47Z
|
VA-learning as a more efficient alternative to Q-learning
|
[
"Yunhao Tang",
"Rémi Munos",
"Mark Rowland",
"Michal Valko"
] |
In reinforcement learning, the advantage function is critical for policy
improvement, but is often extracted from a learned Q-function. A natural
question is: Why not learn the advantage function directly? In this work, we
introduce VA-learning, which directly learns the advantage function and value
function using bootstrapping, without explicit reference to Q-functions.
VA-learning learns off-policy and enjoys similar theoretical guarantees as
Q-learning. Thanks to the direct learning of the advantage and value
functions, VA-learning improves the sample efficiency over Q-learning both in
tabular implementations and deep RL agents on Atari-57 games. We also identify
a close connection between VA-learning and the dueling architecture, which
partially explains why a simple architectural change to DQN agents tends to
improve performance.
|
[
"cs.LG"
] | false |
2305.18440
|
2023-05-29T01:42:32Z
|
Black-Box Anomaly Attribution
|
[
"Tsuyoshi Idé",
"Naoki Abe"
] |
When the prediction of a black-box machine learning model deviates from the
true observation, what can be said about the reason behind that deviation? This
is a fundamental and ubiquitous question that the end user in a business or
industrial AI application often asks. The deviation may be due to a sub-optimal
black-box model, or it may be simply because the sample in question is an
outlier. In either case, one would ideally wish to obtain some form of
attribution score -- a value indicative of the extent to which an input
variable is responsible for the anomaly.
In the present paper we address this task of ``anomaly attribution,''
particularly in the setting in which the model is black-box and the training
data are not available. Specifically, we propose a novel likelihood-based
attribution framework we call the ``likelihood compensation (LC),'' in which
the responsibility score is equated with the correction on each input variable
needed to attain the highest possible likelihood. We begin by showing formally
why mainstream model-agnostic explanation methods, such as the local linear
surrogate modeling and Shapley values, are not designed to explain anomalies.
In particular, we show that they are ``deviation-agnostic,'' namely, that their
explanations are blind to the fact that there is a deviation in the model
prediction for the sample of interest. We do this by positioning these existing
methods under the unified umbrella of a function family we call the
``integrated gradient family.'' We validate the effectiveness of the proposed
LC approach using publicly available data sets. We also conduct a case study
with a real-world building energy prediction task and confirm its usefulness in
practice based on expert feedback.
|
[
"cs.LG"
] | false |
2305.18443
|
2023-05-29T03:25:22Z
|
Off-Policy RL Algorithms Can be Sample-Efficient for Continuous Control
via Sample Multiple Reuse
|
[
"Jiafei Lyu",
"Le Wan",
"Zongqing Lu",
"Xiu Li"
] |
Sample efficiency is one of the most critical issues for online reinforcement
learning (RL). Existing methods achieve higher sample efficiency by adopting
model-based methods, Q-ensemble, or better exploration mechanisms. We, instead,
propose to train an off-policy RL agent via updating on a fixed sampled batch
multiple times, thus reusing these samples and better exploiting them within a
single optimization loop. We name our method sample multiple reuse (SMR). We
theoretically show the properties of Q-learning with SMR, e.g., convergence.
Furthermore, we incorporate SMR with off-the-shelf off-policy RL algorithms and
conduct experiments on a variety of continuous control benchmarks. Empirical
results show that SMR significantly boosts the sample efficiency of the base
methods across most of the evaluated tasks without any hyperparameter tuning or
additional tricks.
|
[
"cs.LG"
] | false |
2305.18448
|
2023-05-29T03:55:39Z
|
Neural Network Reduction with Guided Regularizers
|
[
"Ali Haisam Muhammad Rafid",
"Adrian Sandu"
] |
Regularization techniques such as $\mathcal{L}_1$ and $\mathcal{L}_2$
regularizers are effective in sparsifying neural networks (NNs). However, to
remove a certain neuron or channel in NNs, all weight elements related to that
neuron or channel need to be prunable, which is not guaranteed by traditional
regularization. This paper proposes a simple new approach named "Guided
Regularization" that prioritizes the weights of certain NN units more than
others during training, which renders some of the units less important and
thus, prunable. This is different from the scattered sparsification of
$\mathcal{L}_1$ and $\mathcal{L}_2$ regularizers, where the components of a
weight matrix that are zeroed out can be located anywhere. The proposed
approach offers a natural reduction of NN in the sense that a model is being
trained while also neutralizing unnecessary units. We empirically demonstrate
that our proposed method is effective in pruning NNs while maintaining
performance.
|
[
"cs.LG"
] | false |
2305.18457
|
2023-05-29T04:51:09Z
|
Learning Strong Graph Neural Networks with Weak Information
|
[
"Yixin Liu",
"Kaize Ding",
"Jianling Wang",
"Vincent Lee",
"Huan Liu",
"Shirui Pan"
] |
Graph Neural Networks (GNNs) have exhibited impressive performance in many
graph learning tasks. Nevertheless, the performance of GNNs can deteriorate
when the input graph data suffer from weak information, i.e., incomplete
structure, incomplete features, and insufficient labels. Most prior studies,
which attempt to learn from the graph data with a specific type of weak
information, are far from effective in dealing with the scenario where diverse
data deficiencies exist and mutually affect each other. To fill the gap, in
this paper, we aim to develop an effective and principled approach to the
problem of graph learning with weak information (GLWI). Based on the findings
from our empirical analysis, we derive two design focal points for solving the
problem of GLWI, i.e., enabling long-range propagation in GNNs and allowing
information propagation to those stray nodes isolated from the largest
connected component. Accordingly, we propose D$^2$PT, a dual-channel GNN
framework that performs long-range information propagation not only on the
input graph with incomplete structure, but also on a global graph that encodes
global semantic similarities. We further develop a prototype contrastive
alignment algorithm that aligns the class-level prototypes learned from two
channels, such that the two different information propagation processes can
mutually benefit from each other and the finally learned model can well handle
the GLWI problem. Extensive experiments on eight real-world benchmark datasets
demonstrate the effectiveness and efficiency of our proposed methods in various
GLWI scenarios.
|
[
"cs.LG"
] | false |
2305.18458
|
2023-05-29T05:20:18Z
|
Conditional Support Alignment for Domain Adaptation with Label Shift
|
[
"Anh T Nguyen",
"Lam Tran",
"Anh Tong",
"Tuan-Duy H. Nguyen",
"Toan Tran"
] |
Unsupervised domain adaptation (UDA) refers to a domain adaptation framework
in which a learning model is trained based on labeled samples in the source
domain and unlabeled ones in the target domain. The dominant existing methods
in the field that rely on the classical covariate shift assumption to learn
domain-invariant feature representation have yielded suboptimal performance
under the label distribution shift between source and target domains. In this
paper, we propose a novel conditional adversarial support alignment (CASA)
whose aim is to minimize the conditional symmetric support divergence between
the source's and target domain's feature representation distributions, aiming
at a more helpful representation for the classification task. We also introduce
a novel theoretical target risk bound, which justifies the merits of aligning
the supports of conditional feature distributions compared to the existing
marginal support alignment approach in the UDA settings. We then provide a
complete training process for learning in which the objective optimization
functions are precisely based on the proposed target risk bound. Our empirical
results demonstrate that CASA outperforms other state-of-the-art methods on
different UDA benchmark tasks under label shift conditions.
|
[
"cs.LG"
] | false |
2305.18478
|
2023-05-29T11:08:04Z
|
Forward and Inverse Approximation Theory for Linear Temporal
Convolutional Networks
|
[
"Haotian Jiang",
"Qianxiao Li"
] |
We present a theoretical analysis of the approximation properties of
convolutional architectures when applied to the modeling of temporal sequences.
Specifically, we prove an approximation rate estimate (Jackson-type result) and
an inverse approximation theorem (Bernstein-type result), which together
provide a comprehensive characterization of the types of sequential
relationships that can be efficiently captured by a temporal convolutional
architecture. The rate estimate improves upon a previous result via the
introduction of a refined complexity measure, whereas the inverse approximation
theorem is new.
|
[
"cs.LG"
] | false |
2305.18483
|
2023-05-29T12:04:55Z
|
Bringing regularized optimal transport to lightspeed: a splitting method
adapted for GPUs
|
[
"Jacob Lindbäck",
"Zesen Wang",
"Mikael Johansson"
] |
We present an efficient algorithm for regularized optimal transport. In
contrast to previous methods, we use the Douglas-Rachford splitting technique
to develop an efficient solver that can handle a broad class of regularizers.
The algorithm has strong global convergence guarantees, low per-iteration cost,
and can exploit GPU parallelization, making it considerably faster than the
state-of-the-art for many problems. We illustrate its competitiveness in
several applications, including domain adaptation and learning of generative
models.
|
[
"cs.LG"
] | false |
2305.18490
|
2023-05-29T13:29:31Z
|
SANE: The phases of gradient descent through Sharpness Adjusted Number
of Effective parameters
|
[
"Lawrence Wang",
"Stephen J. Roberts"
] |
Modern neural networks are undeniably successful. Numerous studies have
investigated how the curvature of loss landscapes can affect the quality of
solutions. In this work we consider the Hessian matrix during network training.
We reiterate the connection between the number of "well-determined" or
"effective" parameters and the generalisation performance of neural nets, and
we demonstrate its use as a tool for model comparison. By considering the local
curvature, we propose Sharpness Adjusted Number of Effective parameters (SANE),
a measure of effective dimensionality for the quality of solutions. We show
that SANE is robust to large learning rates, which represent learning regimes
that are attractive but (in)famously unstable. We provide evidence and
characterise the Hessian shifts across "loss basins" at large learning rates.
Finally, extending our analysis to deeper neural networks, we provide an
approximation to the full-network Hessian, exploiting the natural ordering of
neural weights, and use this approximation to provide extensive empirical
evidence for our claims.
|
[
"cs.LG"
] | false |
2305.18491
|
2023-05-29T13:34:40Z
|
Towards a Better Understanding of Representation Dynamics under
TD-learning
|
[
"Yunhao Tang",
"Rémi Munos"
] |
TD-learning is a foundation reinforcement learning (RL) algorithm for value
prediction. Critical to the accuracy of value predictions is the quality of
state representations. In this work, we consider the question: how does
end-to-end TD-learning impact the representation over time? Complementary to
prior work, we provide a set of analyses that shed further light on the
representation dynamics under TD-learning. We first show that when the
environments are reversible, end-to-end TD-learning strictly decreases the
value approximation error over time. Under further assumptions on the
environments, we can connect the representation dynamics with spectral
decomposition over the transition matrix. This latter finding establishes
fitting multiple value functions from randomly generated rewards as a useful
auxiliary task for representation learning, as we empirically validate on both
tabular and Atari game suites.
|
[
"cs.LG"
] | false |
2305.18501
|
2023-05-29T14:36:51Z
|
DoMo-AC: Doubly Multi-step Off-policy Actor-Critic Algorithm
|
[
"Yunhao Tang",
"Tadashi Kozuno",
"Mark Rowland",
"Anna Harutyunyan",
"Rémi Munos",
"Bernardo Ávila Pires",
"Michal Valko"
] |
Multi-step learning applies lookahead over multiple time steps and has proved
valuable in policy evaluation settings. However, in the optimal control case,
the impact of multi-step learning has been relatively limited despite a number
of prior efforts. Fundamentally, this might be because multi-step policy
improvements require operations that cannot be approximated by stochastic
samples, hence hindering the widespread adoption of such methods in practice.
To address such limitations, we introduce doubly multi-step off-policy VI
(DoMo-VI), a novel oracle algorithm that combines multi-step policy
improvements and policy evaluations. DoMo-VI enjoys guaranteed convergence
speed-up to the optimal policy and is applicable in general off-policy learning
settings. We then propose doubly multi-step off-policy actor-critic (DoMo-AC),
a practical instantiation of the DoMo-VI algorithm. DoMo-AC introduces a
bias-variance trade-off that ensures improved policy gradient estimates. When
combined with the IMPALA architecture, DoMo-AC has shown improvements over the
baseline algorithm on Atari-57 game benchmarks.
|
[
"cs.LG"
] | false |
2305.18504
|
2023-05-29T14:57:38Z
|
Generalized Disparate Impact for Configurable Fairness Solutions in ML
|
[
"Luca Giuliani",
"Eleonora Misino",
"Michele Lombardi"
] |
We make two contributions in the field of AI fairness over continuous
protected attributes. First, we show that the Hirschfeld-Gebelein-Renyi (HGR)
indicator (the only one currently available for such a case) is valuable but
subject to a few crucial limitations regarding semantics, interpretability, and
robustness. Second, we introduce a family of indicators that are: 1)
complementary to HGR in terms of semantics; 2) fully interpretable and
transparent; 3) robust over finite samples; 4) configurable to suit specific
applications. Our approach also allows us to define fine-grained constraints to
permit certain types of dependence and forbid others selectively. By expanding
the available options for continuous protected attributes, our approach
represents a significant contribution to the area of fair artificial
intelligence.
|
[
"cs.LG"
] | false |
2305.19290
|
2023-05-29T20:57:17Z
|
Global Layers: Non-IID Tabular Federated Learning
|
[
"Yazan Obeidi"
] |
Data heterogeneity between clients remains a key challenge in Federated
Learning (FL), particularly in the case of tabular data. This work presents
Global Layers (GL), a novel partial model personalization method robust in the
presence of joint distribution $P(X,Y)$ shift and mixed input/output spaces $X
\times Y$ across clients. To the best of our knowledge, GL is the first method
capable of supporting both client-exclusive features and classes. We introduce
two new benchmark experiments for tabular FL naturally partitioned from
existing real world datasets: i) UCI Covertype split into 4 clients by
"wilderness area" feature, and ii) UCI Heart Disease, SAHeart, UCI Heart
Failure, each as clients. Empirical results in these experiments in the
full-participant setting show that GL achieves better outcomes than Federated
Averaging (FedAvg) and local-only training, with some clients even performing
better than their centralized baseline.
|
[
"cs.LG"
] | false |
2306.00009
|
2023-05-29T19:25:32Z
|
Graph Exploration Matters: Improving both individual-level and
system-level diversity in WeChat Feed Recommender
|
[
"Shuai Yang",
"Lixin Zhang",
"Feng Xia",
"Leyu Lin"
] |
There are roughly three stages in real industrial recommendation systems:
candidate generation (retrieval), ranking, and reranking. Individual-level
diversity and system-level diversity are both important for industrial
recommender systems. The former focuses on each individual user's experience, while
the latter focuses on the differences among users. Graph-based retrieval
strategies are inevitably hijacked by heavy users and popular items, leading to
the convergence of candidates for users and the lack of system-level diversity.
Meanwhile, in the reranking phase, Determinantal Point Process (DPP) is
deployed to increase individual-level diversity. Heavily relying on the
semantic information of items, DPP suffers from clickbait and inaccurate
attributes. Besides, most studies only focus on one of the two levels of
diversity, and ignore the mutual influence among different stages in real
recommender systems. We argue that individual-level diversity and system-level
diversity should be viewed as an integrated problem, and we provide an
efficient and deployable solution for web-scale recommenders. Generally, we
propose to employ the retrieval graph information in diversity-based reranking,
by which to weaken the hidden similarity of items exposed to users, and
consequently gain more graph explorations to improve the system-level
diversity. Besides, we argue that users' propensity for diversity changes over
time in content feed recommendation. Therefore, with the explored graph, we
also propose to capture the user's real-time personalized propensity for
diversity. We implement and deploy the combined system in WeChat App's Top
Stories used by hundreds of millions of users. Offline simulations and online
A/B tests show our solution can effectively improve both user engagement and
system revenue.
|
[
"cs.LG"
] | false |
2305.18150
|
2023-05-29T15:25:06Z
|
Understanding the Helpfulness of Stale Bot for Pull-based Development:
An Empirical Study of 20 Large Open-Source Projects
|
[
"SayedHassan Khatoonabadi",
"Diego Elias Costa",
"Suhaib Mujahid",
"Emad Shihab"
] |
Pull Requests (PRs) that are neither progressed nor resolved clutter the list
of PRs, making it difficult for the maintainers to manage and prioritize
unresolved PRs. To automatically track, follow up, and close such inactive PRs,
Stale bot was introduced by GitHub. Despite its increasing adoption, there are
ongoing debates on whether using Stale bot alleviates or exacerbates the
problem of inactive PRs. To better understand if and how Stale bot helps
projects in their pull-based development workflow, we perform an empirical
study of 20 large and popular open-source projects. We find that Stale bot can
help deal with a backlog of unresolved PRs as the projects closed more PRs
within the first few months of adoption. Moreover, Stale bot can help improve
the efficiency of the PR review process as the projects reviewed PRs that ended
up merged and resolved PRs that ended up closed faster after the adoption.
However, Stale bot can also negatively affect the contributors as the projects
experienced a considerable decrease in their number of active contributors
after the adoption. Therefore, relying solely on Stale bot to deal with
inactive PRs may lead to decreased community engagement and an increased
probability of contributor abandonment.
|
[
"cs.SE",
"cs.LG"
] | false |
2305.18442
|
2023-05-29T02:54:31Z
|
Improved Projection-free Online Continuous Submodular Maximization
|
[
"Yucheng Liao",
"Yuanyu Wan",
"Chang Yao",
"Mingli Song"
] |
We investigate the problem of online learning with monotone and continuous
DR-submodular reward functions, which has received great attention recently. To
efficiently handle this problem, especially in the case with complicated
decision sets, previous studies have proposed an efficient projection-free
algorithm called Mono-Frank-Wolfe (Mono-FW) using $O(T)$ gradient evaluations
and linear optimization steps in total. However, it only attains a
$(1-1/e)$-regret bound of $O(T^{4/5})$. In this paper, we propose an improved
projection-free algorithm, namely POBGA, which reduces the regret bound to
$O(T^{3/4})$ while keeping the same computational complexity as Mono-FW.
Instead of modifying Mono-FW, our key idea is to make a novel combination of a
projection-based algorithm called online boosting gradient ascent, an
infeasible projection technique, and a blocking technique. Furthermore, we
consider the decentralized setting and develop a variant of POBGA, which not
only reduces the current best regret bound of efficient projection-free
algorithms for this setting from $O(T^{4/5})$ to $O(T^{3/4})$, but also reduces
the total communication complexity from $O(T)$ to $O(\sqrt{T})$.
|
[
"cs.LG",
"math.OC"
] | false |
2305.18464
|
2023-05-29T07:51:00Z
|
Privileged Knowledge Distillation for Sim-to-Real Policy Generalization
|
[
"Haoran He",
"Chenjia Bai",
"Hang Lai",
"Lingxiao Wang",
"Weinan Zhang"
] |
Reinforcement Learning (RL) has recently achieved remarkable success in
robotic control. However, most RL methods operate in simulated environments
where privileged knowledge (e.g., dynamics, surroundings, terrains) is readily
available. Conversely, in real-world scenarios, robot agents usually rely
solely on local states (e.g., proprioceptive feedback of robot joints) to
select actions, leading to a significant sim-to-real gap. Existing methods
address this gap by either gradually reducing the reliance on privileged
knowledge or performing a two-stage policy imitation. However, we argue that
these methods are limited in their ability to fully leverage the privileged
knowledge, resulting in suboptimal performance. In this paper, we propose a
novel single-stage privileged knowledge distillation method called the
Historical Information Bottleneck (HIB) to narrow the sim-to-real gap. In
particular, HIB learns a privileged knowledge representation from historical
trajectories by capturing the underlying changeable dynamic information.
Theoretical analysis shows that the learned privileged knowledge representation
helps reduce the value discrepancy between the oracle and learned policies.
Empirical experiments on both simulated and real-world tasks demonstrate that
HIB yields improved generalizability compared to previous methods.
|
[
"cs.LG",
"cs.RO"
] | false |
2305.18469
|
2023-05-29T09:02:05Z
|
Reducing Communication for Split Learning by Randomized Top-k
Sparsification
|
[
"Fei Zheng",
"Chaochao Chen",
"Lingjuan Lyu",
"Binhui Yao"
] |
Split learning is a simple solution for Vertical Federated Learning (VFL),
which has drawn substantial attention in both research and application due to
its simplicity and efficiency. However, communication efficiency is still a
crucial issue for split learning. In this paper, we investigate multiple
communication reduction methods for split learning, including cut layer size
reduction, top-k sparsification, quantization, and L1 regularization. Through
analysis of the cut layer size reduction and top-k sparsification, we further
propose randomized top-k sparsification, to make the model generalize and
converge better. This is done by selecting top-k elements with a large
probability while also having a small probability to select non-top-k elements.
Empirical results show that compared with other communication-reduction
methods, our proposed randomized top-k sparsification achieves a better model
performance under the same compression level.
|
[
"cs.LG",
"cs.DC"
] | false |
2305.18472
|
2023-05-29T10:17:13Z
|
Deep Predictive Coding with Bi-directional Propagation for
Classification and Reconstruction
|
[
"Senhui Qiu",
"Saugat Bhattacharyya",
"Damien Coyle",
"Shirin Dora"
] |
This paper presents a new learning algorithm, termed Deep Bi-directional
Predictive Coding (DBPC) that allows developing networks to simultaneously
perform classification and reconstruction tasks using the same weights.
Predictive Coding (PC) has emerged as a prominent theory underlying information
processing in the brain. The general concept for learning in PC is that each
layer learns to predict the activities of neurons in the previous layer which
enables local computation of error and in-parallel learning across layers. In
this paper, we extend existing PC approaches by developing a network which
supports both feedforward and feedback propagation of information. Each layer
in a network trained using DBPC learns to predict the activities of neurons
in the previous and next layers, which allows the network to simultaneously
perform classification and reconstruction tasks using feedforward and feedback
propagation, respectively. DBPC also relies on locally available information
for learning, thus enabling in-parallel learning across all layers in the
network. The proposed approach has been developed for training both, fully
connected networks and convolutional neural networks. The performance of DBPC
has been evaluated on both, classification and reconstruction tasks using the
MNIST and FashionMNIST datasets. The classification and the reconstruction
performance of networks trained using DBPC is similar to that of other approaches used
for comparison, but DBPC uses a significantly smaller network. Further, the
significant benefit of DBPC is its ability to achieve this performance using
locally available information and in-parallel learning mechanisms which results
in an efficient training protocol. These results clearly indicate that DBPC is a
much more efficient approach for developing networks that can simultaneously
perform both classification and reconstruction.
|
[
"cs.LG",
"cs.NE"
] | false |
2305.18481
|
2023-05-29T11:49:20Z
|
A Hybrid Framework of Reinforcement Learning and Convex Optimization for
UAV-Based Autonomous Metaverse Data Collection
|
[
"Peiyuan Si",
"Liangxin Qian",
"Jun Zhao",
"Kwok-Yan Lam"
] |
Unmanned aerial vehicles (UAVs) are promising for providing communication
services due to their advantages in cost and mobility, especially in the
context of the emerging Metaverse and Internet of Things (IoT). This paper
considers a UAV-assisted Metaverse network, in which UAVs extend the coverage
of the base station (BS) to collect the Metaverse data generated at roadside
units (RSUs). Specifically, to improve the data collection efficiency, resource
allocation and trajectory control are integrated into the system model. The
time-dependent nature of the optimization problem makes it non-trivial to be
solved by traditional convex optimization methods. Based on the proposed
UAV-assisted Metaverse network system model, we design a hybrid framework with
reinforcement learning and convex optimization to cooperatively solve the
time-sequential optimization problem. Simulation results show that the proposed
framework is able to reduce the mission completion time with a given
transmission power resource.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.18492
|
2023-05-29T13:45:49Z
|
DMS: Differentiable Mean Shift for Dataset Agnostic Task Specific
Clustering Using Side Information
|
[
"Michael A. Hobley",
"Victor A. Prisacariu"
] |
We present a novel approach, in which we learn to cluster data directly from
side information, in the form of a small set of pairwise examples. Unlike
previous methods, with or without side information, we do not need to know the
number of clusters, their centers or any kind of distance metric for
similarity. Our method is able to divide the same data points in various ways
dependent on the needs of a specific task, defined by the side information.
Contrastingly, other work generally finds only the intrinsic, most obvious,
clusters. Inspired by the mean shift algorithm, we implement our new clustering
approach using a custom iterative neural network to create Differentiable Mean
Shift (DMS), a state-of-the-art, dataset-agnostic clustering method. We found
that it was possible to train a strong cluster definition without enforcing a
constraint that each cluster must be presented during training. DMS outperforms
current methods in both the intrinsic and non-intrinsic dataset tasks.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.18494
|
2023-05-29T13:50:16Z
|
Adapting Learned Sparse Retrieval for Long Documents
|
[
"Thong Nguyen",
"Sean MacAvaney",
"Andrew Yates"
] |
Learned sparse retrieval (LSR) is a family of neural retrieval methods that
transform queries and documents into sparse weight vectors aligned with a
vocabulary. While LSR approaches like Splade work well for short passages, it
is unclear how well they handle longer documents. We investigate existing
aggregation approaches for adapting LSR to longer documents and find that
proximal scoring is crucial for LSR to handle long documents. To leverage this
property, we propose two adaptations of the Sequential Dependence Model (SDM)
to LSR: ExactSDM and SoftSDM. ExactSDM assumes only exact query term
dependence, while SoftSDM uses potential functions that model the dependence of
query terms and their expansion terms (i.e., terms identified using a
transformer's masked language modeling head).
Experiments on the MSMARCO Document and TREC Robust04 datasets demonstrate
that both ExactSDM and SoftSDM outperform existing LSR aggregation approaches
for different document length constraints. Surprisingly, SoftSDM does not
provide any performance benefits over ExactSDM. This suggests that soft
proximity matching is not necessary for modeling term dependence in LSR.
Overall, this study provides insights into handling long documents with LSR,
proposing adaptations that improve its performance.
|
[
"cs.IR",
"cs.LG"
] | false |
2305.18495
|
2023-05-29T13:55:02Z
|
Hardware-aware Training Techniques for Improving Robustness of Ex-Situ
Neural Network Transfer onto Passive TiO2 ReRAM Crossbars
|
[
"Philippe Drolet",
"Raphaël Dawant",
"Victor Yon",
"Pierre-Antoine Mouny",
"Matthieu Valdenaire",
"Javier Arias Zapata",
"Pierre Gliech",
"Sean U. N. Wood",
"Serge Ecoffey",
"Fabien Alibart",
"Yann Beilliard",
"Dominique Drouin"
] |
Passive resistive random access memory (ReRAM) crossbar arrays, a promising
emerging technology used for analog matrix-vector multiplications, are far
superior to their active (1T1R) counterparts in terms of the integration
density. However, current transfers of neural network weights into the
conductance state of the memory devices in the crossbar architecture are
accompanied by significant losses in precision due to hardware variabilities
such as sneak path currents, biasing scheme effects and conductance tuning
imprecision. In this work, training approaches that adapt techniques such as
dropout, the reparametrization trick and regularization to TiO2 crossbar
variabilities are proposed in order to generate models that are better adapted
to their hardware transfers. The viability of this approach is demonstrated by
comparing the outputs and precision of the proposed hardware-aware network with
those of a regular fully connected network over a few thousand weight transfers
using the half moons dataset in a simulation based on experimental data. For
the neural network trained using the proposed hardware-aware method, 79.5% of
the test set's data points can be classified with an accuracy of 95% or higher,
while only 18.5% of the test set's data points can be classified with this
accuracy by the regularly trained neural network.
|
[
"cs.AR",
"cs.LG"
] | false |
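The hardware-aware training described above adapts dropout, the reparametrization trick, and regularization to TiO2 crossbar variabilities. The snippet below is only a hedged sketch of one such ingredient: injecting Gaussian weight noise via the reparametrization trick during training so that the learned weights tolerate transfer imprecision. The layer sizes, noise scale, and toy data are placeholders and do not reflect measured device statistics.

import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    def __init__(self, in_f, out_f, noise_std=0.05):
        super().__init__(in_f, out_f)
        self.noise_std = noise_std

    def forward(self, x):
        if self.training:
            # w_noisy = w + sigma * eps, reparametrized so gradients flow to w
            w = self.weight + self.noise_std * torch.randn_like(self.weight)
        else:
            w = self.weight
        return nn.functional.linear(x, w, self.bias)

model = nn.Sequential(NoisyLinear(2, 16), nn.ReLU(), NoisyLinear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = torch.randn(64, 2), torch.randint(0, 2, (64,))
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()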
2305.18506
|
2023-05-29T15:01:13Z
|
Generalization Ability of Wide Residual Networks
|
[
"Jianfa Lai",
"Zixiong Yu",
"Songtao Tian",
"Qian Lin"
] |
In this paper, we study the generalization ability of the wide residual
network on $\mathbb{S}^{d-1}$ with the ReLU activation function. We first show
that as the width $m\rightarrow\infty$, the residual network kernel (RNK)
uniformly converges to the residual neural tangent kernel (RNTK). This uniform
convergence further guarantees that the generalization error of the residual
network converges to that of the kernel regression with respect to the RNTK. As
direct corollaries, we then show $i)$ the wide residual network with the early
stopping strategy can achieve the minimax rate provided that the target
regression function falls in the reproducing kernel Hilbert space (RKHS)
associated with the RNTK; $ii)$ the wide residual network can not generalize
well if it is trained till overfitting the data. We finally illustrate some
experiments to reconcile the contradiction between our theoretical result and
the widely observed ``benign overfitting phenomenon''.
|
[
"stat.ML",
"cs.LG",
"62G08 (Primary), 68T07, 46E22 (secondary)",
"G.3"
] | false |
2305.18550
|
2023-05-29T18:26:51Z
|
Meta-Regression Analysis of Errors in Short-Term Electricity Load
Forecasting
|
[
"Konstantin Hopf",
"Hannah Hartstang",
"Thorsten Staake"
] |
Forecasting electricity demand plays a critical role in ensuring reliable and
cost-efficient operation of the electricity supply. With the global transition
to distributed renewable energy sources and the electrification of heating and
transportation, accurate load forecasts become even more important. While
numerous empirical studies and a handful of review articles exist, there is
surprisingly little quantitative analysis of the literature, most notably none
that identifies the impact of factors on forecasting performance across the
entirety of empirical studies. In this article, we therefore present a
Meta-Regression Analysis (MRA) that examines factors that influence the
accuracy of short-term electricity load forecasts. We use data from 421
forecast models published in 59 studies. While the grid level (esp. individual
vs. aggregated vs. system), the forecast granularity, and the algorithms used
seem to have a significant impact on the MAPE, bibliometric data, dataset
sizes, and prediction horizon show no significant effect. We found the LSTM
approach and a combination of neural networks with other approaches to be the
best forecasting methods. The results help practitioners and researchers to
make meaningful model choices. Yet, this paper calls for further MRA in the
field of load forecasting to close the blind spots in research and practice of
load forecasting.
|
[
"cs.LG",
"stat.AP"
] | false |
2305.18552
|
2023-05-29T18:29:11Z
|
Learning Linear Groups in Neural Networks
|
[
"Emmanouil Theodosis",
"Karim Helwani",
"Demba Ba"
] |
Employing equivariance in neural networks leads to greater parameter
efficiency and improved generalization performance through the encoding of
domain knowledge in the architecture; however, the majority of existing
approaches require an a priori specification of the desired symmetries. We
present a neural network architecture, Linear Group Networks (LGNs), for
learning linear groups acting on the weight space of neural networks. Linear
groups are desirable due to their inherent interpretability, as they can be
represented as finite matrices. LGNs learn groups without any supervision or
knowledge of the hidden symmetries in the data and the groups can be mapped to
well known operations in machine learning. We use LGNs to learn groups on
multiple datasets while considering different downstream tasks; we demonstrate
that the linear group structure depends on both the data distribution and the
considered task.
|
[
"cs.LG",
"cs.NE"
] | false |
2305.18558
|
2023-05-29T18:42:03Z
|
DelBugV: Delta-Debugging Neural Network Verifiers
|
[
"Raya Elsaleh",
"Guy Katz"
] |
Deep neural networks (DNNs) are becoming a key component in diverse systems
across the board. However, despite their success, they often err miserably; and
this has triggered significant interest in formally verifying them.
Unfortunately, DNN verifiers are intricate tools, and are themselves
susceptible to soundness bugs. Due to the complexity of DNN verifiers, as well
as the sizes of the DNNs being verified, debugging such errors is a daunting
task. Here, we present a novel tool, named DelBugV, that uses automated delta
debugging techniques on DNN verifiers. Given a malfunctioning DNN verifier and
a correct verifier as a point of reference (or, in some cases, just a single,
malfunctioning verifier), DelBugV can produce much simpler DNN verification
instances that still trigger undesired behavior -- greatly facilitating the
task of debugging the faulty verifier. Our tool is modular and extensible, and
can easily be enhanced with additional network simplification methods and
strategies. For evaluation purposes, we ran DelBugV on 4 DNN verification
engines, which were observed to produce incorrect results at the 2021 neural
network verification competition (VNN-COMP'21). We were able to simplify many
of the verification queries that trigger these faulty behaviors, by as much as
99%. We regard our work as a step towards the ultimate goal of producing
reliable and trustworthy DNN-based software.
|
[
"cs.LO",
"cs.LG"
] | false |
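DelBugV applies delta debugging to DNN verification queries. Purely as a generic illustration of the delta-debugging idea (not the tool's algorithm or interface), the sketch below greedily keeps any simplification of an instance that still makes the faulty verifier misbehave; the reproduction check and the layer-size instance are hypothetical stand-ins.

def delta_debug(instance, simplify_steps, still_fails):
    """Greedily apply simplification steps while the bug keeps reproducing."""
    changed = True
    while changed:
        changed = False
        for step in simplify_steps:
            candidate = step(instance)
            if candidate != instance and still_fails(candidate):
                instance = candidate  # keep the simpler failing instance
                changed = True
    return instance

# Toy usage: the instance is a list of layer sizes; the (hypothetical) bug
# reproduces as long as a layer of size 7 is still present.
drop_last = lambda inst: inst[:-1] if len(inst) > 1 else inst
halve_all = lambda inst: [max(1, s // 2) for s in inst]
reproduces = lambda inst: 7 in inst
print(delta_debug([64, 32, 7, 16, 8], [drop_last, halve_all], reproduces))  # -> [64, 32, 7]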
2305.18632
|
2023-05-29T21:48:19Z
|
Graph Rewriting for Graph Neural Networks
|
[
"Adam Machowczyk",
"Reiko Heckel"
] |
Given graphs as input, Graph Neural Networks (GNNs) support the inference of
nodes, edges, attributes, or graph properties. Graph Rewriting investigates the
rule-based manipulation of graphs to model complex graph transformations. We
propose that, therefore, (i) graph rewriting subsumes GNNs and could serve as
a formal model to study and compare them, and (ii) the representation of GNNs as
graph rewrite systems can help to design and analyse GNNs, their architectures
and algorithms. Hence we propose Graph Rewriting Neural Networks (GReNN) as
both novel semantic foundation and engineering discipline for GNNs. We develop
a case study reminiscent of a Message Passing Neural Network realised as a
Groove graph rewriting model and explore its incremental operation in response
to dynamic updates.
|
[
"cs.LG",
"cs.NE"
] | false |
2305.18646
|
2023-05-29T22:51:40Z
|
Deep Equilibrium Models Meet Federated Learning
|
[
"Alexandros Gkillas",
"Dimitris Ampeliotis",
"Kostas Berberidis"
] |
In this study, the problem of Federated Learning (FL) is explored from a new
perspective by utilizing Deep Equilibrium (DEQ) models instead of
conventional deep learning networks. We claim that incorporating DEQ models
into the federated learning framework naturally addresses several open problems
in FL, such as the communication overhead due to sharing large models and
the ability to incorporate heterogeneous edge devices with significantly
different computation capabilities. Additionally, a weighted average fusion
rule is proposed at the server-side of the FL framework to account for the
different qualities of models from heterogeneous edge devices. To the best of
our knowledge, this study is the first to establish a connection between DEQ
models and federated learning, contributing to the development of an efficient
and effective FL framework. Finally, promising initial experimental results are
presented, demonstrating the potential of this approach in addressing
challenges of FL.
|
[
"cs.LG",
"cs.DC"
] | false |
2305.19132
|
2023-05-29T00:21:56Z
|
Full High-Dimensional Intelligible Learning In 2-D Lossless
Visualization Space
|
[
"Boris Kovalerchuk",
"Hoang Phan"
] |
This study explores a new methodology for machine learning classification
tasks in 2-dimensional visualization space (2-D ML) using Visual knowledge
Discovery in lossless General Line Coordinates. It is shown that this is a full
machine learning approach that does not require processing n-dimensional data
in an abstract n-dimensional space. It enables discovering n-D patterns in 2-D
space without loss of n-D information using graph representations of n-D data
in 2-D. Specifically, this study shows that it can be done with static and
dynamic In-line Based Coordinates in different modifications, which are a
category of General Line Coordinates. Based on these inline coordinates,
classification and regression methods were developed. The viability of the
strategy was shown by two case studies based on benchmark datasets (Wisconsin
Breast Cancer and Page Block Classification datasets). The characteristics of
page block classification data led to the development of an algorithm for
imbalanced high-resolution data with multiple classes, which exploits
decision trees as a model design facilitator, producing a model that is more
general than a decision tree. This work accelerates the ongoing consolidation
of an emerging field of full 2-D machine learning and its methodology. Within
this methodology the end users can discover models and justify them as
self-service. Providing interpretable ML models is another benefit of this
approach.
|
[
"cs.LG",
"cs.GR"
] | false |
2305.18090
|
2023-05-29T14:43:24Z
|
ChatGPT-powered Conversational Drug Editing Using Retrieval and Domain
Feedback
|
[
"Shengchao Liu",
"Jiongxiao Wang",
"Yijin Yang",
"Chengpeng Wang",
"Ling Liu",
"Hongyu Guo",
"Chaowei Xiao"
] |
Recent advancements in conversational large language models (LLMs), such as
ChatGPT, have demonstrated remarkable promise in various domains, including
drug discovery. However, existing works mainly focus on investigating the
capabilities of conversational LLMs on chemical reaction and retrosynthesis,
while drug editing, a critical task in the drug discovery pipeline, remains
largely unexplored. To bridge this gap, we propose ChatDrug, a framework to
facilitate the systematic investigation of drug editing using LLMs. ChatDrug
jointly leverages a prompt module, a retrieval and domain feedback (ReDF)
module, and a conversation module to streamline effective drug editing. We
empirically show that ChatDrug reaches the best performance on 33 out of 39
drug editing tasks, encompassing small molecules, peptides, and proteins. We
further demonstrate, through 10 case studies, that ChatDrug can successfully
identify the key substructures (e.g., the molecule functional groups, peptide
motifs, and protein structures) for manipulation, generating diverse and valid
suggestions for drug editing. Promisingly, we also show that ChatDrug can offer
insightful explanations from a domain-specific perspective, enhancing
interpretability and enabling informed decision-making. This research sheds
light on the potential of ChatGPT and conversational LLMs for drug editing. It
paves the way for a more efficient and collaborative drug discovery pipeline,
contributing to the advancement of pharmaceutical research and development.
|
[
"q-bio.BM",
"cs.AI",
"cs.LG"
] | false |
2305.18143
|
2023-05-29T15:13:46Z
|
Reason to explain: Interactive contrastive explanations (REASONX)
|
[
"Laura State",
"Salvatore Ruggieri",
"Franco Turini"
] |
Many high-performing machine learning models are not interpretable. As they
are increasingly used in decision scenarios that can critically affect
individuals, it is necessary to develop tools to better understand their
outputs. Popular explanation methods include contrastive explanations. However,
they suffer from several shortcomings, among them an insufficient incorporation of
background knowledge and a lack of interactivity. While (dialogue-like)
interactivity is important to better communicate an explanation, background
knowledge has the potential to significantly improve their quality, e.g., by
adapting the explanation to the needs of the end-user. To close this gap, we
present REASONX, an explanation tool based on Constraint Logic Programming
(CLP). REASONX provides interactive contrastive explanations that can be
augmented by background knowledge, and allows operation under a setting of
under-specified information, leading to increased flexibility in the provided
explanations. REASONX computes factual and contrastive decision rules, as well
as closest contrastive examples. It provides explanations for decision trees,
which can be the ML models under analysis, or global/local surrogate models of
any ML model. While the core part of REASONX is built on CLP, we also provide a
program layer that allows computing the explanations via Python, making the
tool accessible to a wider audience. We illustrate the capability of REASONX on
a synthetic data set, and on a well-developed example in the credit domain.
In both cases, we can show how REASONX can be flexibly used and tailored to the
needs of the user.
|
[
"cs.AI",
"cs.CY",
"cs.LG",
"cs.SC"
] | false |
2305.18188
|
2023-05-29T16:25:55Z
|
Understanding Predictive Coding as an Adaptive Trust-Region Method
|
[
"Francesco Innocenti",
"Ryan Singh",
"Christopher L. Buckley"
] |
Predictive coding (PC) is a brain-inspired local learning algorithm that has
recently been suggested to provide advantages over backpropagation (BP) in
biologically relevant scenarios. While theoretical work has mainly focused on
showing how PC can approximate BP in various limits, the putative benefits of
"natural" PC are less understood. Here we develop a theory of PC as an adaptive
trust-region (TR) algorithm that uses second-order information. We show that
the learning dynamics of PC can be interpreted as interpolating between BP's
loss gradient direction and a TR direction found by the PC inference dynamics.
Our theory suggests that PC should escape saddle points faster than BP, a
prediction which we prove in a shallow linear model and support with
experiments on deeper networks. This work lays a foundation for understanding
PC in deep and wide networks.
|
[
"cs.NE",
"cs.AI",
"cs.LG"
] | false |
2305.18285
|
2023-05-29T17:54:50Z
|
Partially Personalized Federated Learning: Breaking the Curse of Data
Heterogeneity
|
[
"Konstantin Mishchenko",
"Rustem Islamov",
"Eduard Gorbunov",
"Samuel Horváth"
] |
We present a partially personalized formulation of Federated Learning (FL)
that strikes a balance between the flexibility of personalization and
cooperativeness of global training. In our framework, we split the variables
into global parameters, which are shared across all clients, and individual
local parameters, which are kept private. We prove that under the right split
of parameters, it is possible to find global parameters that allow each client
to fit their data perfectly; we refer to the obtained problem as
overpersonalized. For instance, the shared global parameters can be used to
learn good data representations, whereas the personalized layers are fine-tuned
for a specific client. Moreover, we present a simple algorithm for the
partially personalized formulation that offers significant benefits to all
clients. In particular, it breaks the curse of data heterogeneity in several
settings, such as training with local steps, asynchronous training, and
Byzantine-robust training.
|
[
"cs.LG",
"cs.AI",
"math.OC",
"stat.ML"
] | false |
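The partially personalized formulation above splits the variables into shared global parameters and private local parameters. The toy NumPy sketch below illustrates that split on a linear model with a private per-client bias, where the server averages only the shared part; the model, step sizes, and data are assumptions for illustration and not the paper's algorithm.

import numpy as np

def client_update(global_w, local_b, X, y, lr=0.1, steps=20):
    # Least squares on y ~ X @ global_w + local_b, where local_b stays private.
    for _ in range(steps):
        err = X @ global_w + local_b - y
        global_w -= lr * X.T @ err / len(y)  # gradient w.r.t. shared parameters
        local_b -= lr * err.mean()           # gradient w.r.t. the private parameter
    return global_w, local_b

rng = np.random.default_rng(0)
n_clients, d = 3, 5
global_w, local_bs = np.zeros(d), [0.0] * n_clients
datasets = [(rng.normal(size=(40, d)), rng.normal(size=40) + c) for c in range(n_clients)]

for _ in range(10):  # communication rounds
    shared_updates = []
    for c, (X, y) in enumerate(datasets):
        gw, local_bs[c] = client_update(global_w.copy(), local_bs[c], X, y)
        shared_updates.append(gw)
    global_w = np.mean(shared_updates, axis=0)  # server averages the shared part only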
2305.18441
|
2023-05-29T02:25:03Z
|
DeCoR: Defy Knowledge Forgetting by Predicting Earlier Audio Codes
|
[
"Xilin Jiang",
"Yinghao Aaron Li",
"Nima Mesgarani"
] |
Lifelong audio feature extraction involves learning new sound classes
incrementally, which is essential for adapting to new data distributions over
time. However, optimizing the model only on new data can lead to catastrophic
forgetting of previously learned tasks, which undermines the model's ability to
perform well over the long term. This paper introduces a new approach to
continual audio representation learning called DeCoR. Unlike other methods that
store previous data, features, or models, DeCoR indirectly distills knowledge
from an earlier model to the latest by predicting quantization indices from a
delayed codebook. We demonstrate that DeCoR improves acoustic scene
classification accuracy and integrates well with continual self-supervised
representation learning. Our approach introduces minimal storage and
computation overhead, making it a lightweight and efficient solution for
continual learning.
|
[
"eess.AS",
"cs.LG",
"cs.SD"
] | false |
2305.18454
|
2023-05-29T04:14:47Z
|
PubChemQC B3LYP/6-31G*//PM6 dataset: the Electronic Structures of 86
Million Molecules using B3LYP/6-31G* calculations
|
[
"Maho Nakata",
"Toshiyuki Maeda"
] |
This article presents the "PubChemQC B3LYP/6-31G*//PM6" dataset, containing
electronic properties of 85,938,443 molecules. It includes orbitals, orbital
energies, total energies, dipole moments, and other relevant properties. The
dataset encompasses a wide range of molecules, from essential compounds to
biomolecules up to 1000 molecular weight, covering 94.0% of the original
PubChem Compound catalog (as of August 29, 2016). The electronic properties
were calculated using the B3LYP/6-31G* and PM6 methods. The dataset is
available in three formats: (i) GAMESS quantum chemistry program files, (ii)
selected JSON output files, and (iii) a PostgreSQL database, enabling
researchers to query molecular properties. Five sub-datasets offer more
specific data. The first two subsets include molecules with C, H, O, and N,
under 300 and 500 molecular weight respectively. The third and fourth subsets
contain C, H, N, O, P, S, F, and Cl, under 300 and 500 molecular weight
respectively. The fifth subset includes C, H, N, O, P, S, F, Cl, Na, K, Mg, and
Ca, under 500 molecular weight. Coefficients of determination ranged from 0.892
(CHON500) to 0.803 (whole) for the HOMO-LUMO energy gap. These findings
represent extensive investigations and can be utilized for drug discovery,
material science, and other applications. The datasets are available under the
Creative Commons Attribution 4.0 International license at
https://nakatamaho.riken.jp/pubchemqc.riken.jp/b3lyp_pm6_datasets.html.
|
[
"physics.chem-ph",
"cs.LG",
"q-bio.BM"
] | false |
2305.18456
|
2023-05-29T04:26:16Z
|
Baselines for Identifying Watermarked Large Language Models
|
[
"Leonard Tang",
"Gavin Uberti",
"Tom Shlomi"
] |
We consider the emerging problem of identifying the presence and use of
watermarking schemes in widely used, publicly hosted, closed source large
language models (LLMs). We introduce a suite of baseline algorithms for
identifying watermarks in LLMs that rely on analyzing distributions of output
tokens and logits generated by watermarked and unmarked LLMs. Notably,
watermarked LLMs tend to produce distributions that diverge qualitatively and
identifiably from standard models. Furthermore, we investigate the
identifiability of watermarks at varying strengths and consider the tradeoffs
of each of our identification mechanisms with respect to the watermarking scenario.
Along the way, we formalize the specific problem of identifying watermarks in
LLMs, as well as LLM watermarks and watermark detection in general, providing a
framework and foundations for studying them.
|
[
"cs.LG",
"cs.AI",
"cs.CR",
"cs.CY"
] | false |
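The baselines above analyze distributions of output tokens from watermarked and unmarked LLMs. As a loose, hypothetical illustration of that idea only, the snippet below compares two synthetic token-count distributions with a chi-square test; the "green list"-style bias and all counts are fabricated for the example and do not come from the paper.

import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(0)
vocab = 50
unmarked = rng.multinomial(10_000, np.ones(vocab) / vocab)  # reference model's token counts
biased_p = np.ones(vocab)
biased_p[: vocab // 2] *= 1.3                               # assumed watermark-style bias
biased_p /= biased_p.sum()
marked = rng.multinomial(10_000, biased_p)                  # suspected watermarked model

stat, pval = chisquare(marked, f_exp=unmarked / unmarked.sum() * marked.sum())
print(f"chi-square p-value: {pval:.2e}")  # a tiny p-value flags a shifted distribution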
2305.18474
|
2023-05-29T10:41:28Z
|
Make-An-Audio 2: Temporal-Enhanced Text-to-Audio Generation
|
[
"Jiawei Huang",
"Yi Ren",
"Rongjie Huang",
"Dongchao Yang",
"Zhenhui Ye",
"Chen Zhang",
"Jinglin Liu",
"Xiang Yin",
"Zejun Ma",
"Zhou Zhao"
] |
Large diffusion models have been successful in text-to-audio (T2A) synthesis
tasks, but they often suffer from common issues such as semantic misalignment
and poor temporal consistency due to limited natural language understanding and
data scarcity. Additionally, 2D spatial structures widely used in T2A works
lead to unsatisfactory audio quality when generating variable-length audio
samples since they do not adequately prioritize temporal information. To
address these challenges, we propose Make-an-Audio 2, a latent diffusion-based
T2A method that builds on the success of Make-an-Audio. Our approach includes
several techniques to improve semantic alignment and temporal consistency:
Firstly, we use pre-trained large language models (LLMs) to parse the text into
structured <event & order> pairs for better temporal information capture. We
also introduce another structured-text encoder to aid in learning semantic
alignment during the diffusion denoising process. To improve the performance of
variable length generation and enhance the temporal information extraction, we
design a feed-forward Transformer-based diffusion denoiser. Finally, we use
LLMs to augment and transform a large amount of audio-label data into
audio-text datasets to alleviate the problem of scarcity of temporal data.
Extensive experiments show that our method outperforms baseline models in both
objective and subjective metrics, and achieves significant gains in temporal
information understanding, semantic consistency, and sound quality.
|
[
"cs.SD",
"cs.LG",
"cs.MM",
"eess.AS"
] | true |
2305.18488
|
2023-05-29T12:41:32Z
|
A Bayesian sparse factor model with adaptive posterior concentration
|
[
"Ilsang Ohn",
"Lizhen Lin",
"Yongdai Kim"
] |
In this paper, we propose a new Bayesian inference method for a
high-dimensional sparse factor model that allows both the factor dimensionality
and the sparse structure of the loading matrix to be inferred. The novelty is
to introduce a certain dependence between the sparsity level and the factor
dimensionality, which leads to adaptive posterior concentration while keeping
computational tractability. We show that the posterior distribution
asymptotically concentrates on the true factor dimensionality, and more
importantly, this posterior consistency is adaptive to the sparsity level of
the true loading matrix and the noise variance. We also prove that the proposed
Bayesian model attains the optimal detection rate of the factor dimensionality
in a more general situation than those found in the literature. Moreover, we
obtain a near-optimal posterior concentration rate of the covariance matrix.
Numerical studies are conducted and show the superiority of the proposed method
compared with other competitors.
|
[
"stat.ML",
"cs.LG",
"stat.ME"
] | false |
2305.18493
|
2023-05-29T13:47:51Z
|
Insights from the Design Space Exploration of Flow-Guided Nanoscale
Localization
|
[
"Filip Lemic",
"Gerard Calvo Bartra",
"Arnau Brosa López",
"Jorge Torres Gómez",
"Jakob Struye",
"Falko Dressler",
"Sergi Abadal",
"Xavier Costa Perez"
] |
Nanodevices with Terahertz (THz)-based wireless communication capabilities
are providing a primer for flow-guided localization within the human
bloodstreams. Such localization allows for associating the locations of
sensed events with the events themselves, providing benefits in precision
medicine along the lines of early and precise diagnostics, and reduced costs
and invasiveness. Flow-guided localization is still in a rudimentary phase,
with only a handful of works targeting the problem. Nonetheless, the
performance assessments of the proposed solutions are already carried out in a
non-standardized way, usually along a single performance metric, and ignoring
various aspects that are relevant at such a scale (e.g., nanodevices' limited
energy) and for such a challenging environment (e.g., extreme attenuation of
in-body THz propagation). As such, these assessments feature low levels of
realism and cannot be compared in an objective way. Toward addressing this
issue, we account for the environmental and scale-related peculiarities of the
scenario and assess the performance of two state-of-the-art flow-guided
localization approaches along a set of heterogeneous performance metrics such
as the accuracy and reliability of localization.
|
[
"cs.NI",
"cs.LG",
"eess.SP"
] | false |
2305.18508
|
2023-05-29T15:25:48Z
|
On the Variance, Admissibility, and Stability of Empirical Risk
Minimization
|
[
"Gil Kur",
"Eli Putterman",
"Alexander Rakhlin"
] |
It is well known that Empirical Risk Minimization (ERM) with squared loss may
attain minimax suboptimal error rates (Birg\'e and Massart, 1993). The key
message of this paper is that, under mild assumptions, the suboptimality of ERM
must be due to large bias rather than variance. More precisely, in the
bias-variance decomposition of the squared error of the ERM, the variance term
necessarily enjoys the minimax rate. In the case of fixed design, we provide an
elementary proof of this fact using the probabilistic method. Then, we prove
this result for various models in the random design setting. In addition, we
provide a simple proof of Chatterjee's admissibility theorem (Chatterjee, 2014,
Theorem 1.4), which states that ERM cannot be ruled out as an optimal method,
in the fixed design setting, and extend this result to the random design
setting. We also show that our estimates imply stability of ERM, complementing
the main result of Caponnetto and Rakhlin (2006) for non-Donsker classes.
Finally, we show that for non-Donsker classes, there are functions close to the
ERM, yet far from being almost-minimizers of the empirical loss, highlighting
the somewhat irregular nature of the loss landscape.
|
[
"math.ST",
"cs.LG",
"stat.ML",
"stat.TH"
] | false |
2305.18577
|
2023-05-29T19:37:28Z
|
Towards Constituting Mathematical Structures for Learning to Optimize
|
[
"Jialin Liu",
"Xiaohan Chen",
"Zhangyang Wang",
"Wotao Yin",
"HanQin Cai"
] |
Learning to Optimize (L2O), a technique that utilizes machine learning to
learn an optimization algorithm automatically from data, has gained arising
attention in recent years. A generic L2O approach parameterizes the iterative
update rule and learns the update direction as a black-box network. While the
generic approach is widely applicable, the learned model can overfit and may
not generalize well to out-of-distribution test sets. In this paper, we derive
the basic mathematical conditions that successful update rules commonly
satisfy. Consequently, we propose a novel L2O model with a mathematics-inspired
structure that is broadly applicable and generalizes well to
out-of-distribution problems. Numerical simulations validate our theoretical
findings and demonstrate the superior empirical performance of the proposed L2O
model.
|
[
"cs.LG",
"math.OC",
"stat.ML"
] | false |
2305.18578
|
2023-05-29T19:37:48Z
|
Quick Adaptive Ternary Segmentation: An Efficient Decoding Procedure For
Hidden Markov Models
|
[
"Alexandre Mösching",
"Housen Li",
"Axel Munk"
] |
Hidden Markov models (HMMs) are characterized by an unobservable (hidden)
Markov chain and an observable process, which is a noisy version of the hidden
chain. Decoding the original signal (i.e., hidden chain) from the noisy
observations is one of the main goals in nearly all HMM based data analyses.
Existing decoding algorithms such as the Viterbi algorithm have computational
complexity at best linear in the length of the observed sequence, and
sub-quadratic in the size of the state space of the Markov chain. We present
Quick Adaptive Ternary Segmentation (QATS), a divide-and-conquer procedure
which decodes the hidden sequence in polylogarithmic computational complexity
in the length of the sequence, and cubic in the size of the state space, hence
particularly suited for large scale HMMs with relatively few states. The
procedure also suggests an effective way of data storage as specific cumulative
sums. In essence, the estimated sequence of states sequentially maximizes local
likelihood scores among all local paths with at most three segments. The
maximization is performed only approximately using an adaptive search
procedure. The resulting sequence is admissible in the sense that all
transitions occur with positive probability. To complement formal results
justifying our approach, we present Monte-Carlo simulations which demonstrate
the speedups provided by QATS in comparison to Viterbi, along with a precision
analysis of the returned sequences. An implementation of QATS in C++ is
provided in the R-package QATS and is available from GitHub.
|
[
"stat.ME",
"cs.LG",
"stat.ML",
"62M05"
] | false |
2305.18584
|
2023-05-29T19:57:36Z
|
Coeditor: Leveraging Contextual Changes for Multi-round Code
Auto-editing
|
[
"Jiayi Wei",
"Greg Durrett",
"Isil Dillig"
] |
Developers often dedicate significant time to maintaining and refactoring
existing code. However, most prior work on generative models for code focuses
solely on creating new code, neglecting the unique requirements of editing
existing code. In this work, we explore a multi-round code auto-editing
setting, aiming to predict edits to a code region based on recent changes
within the same codebase. Our model, Coeditor, is a fine-tuned CodeT5 model
with enhancements specifically designed for code editing tasks. We encode code
changes using a line diff format and employ static analysis to form large
customized model contexts, ensuring appropriate information for prediction. We
collect a code editing dataset from the commit histories of 1650 open-source
Python projects for training and evaluation. In a simplified single-round,
single-edit task, Coeditor significantly outperforms the best code completion
approach -- nearly doubling its exact-match accuracy, despite using a much
smaller model -- demonstrating the benefits of incorporating editing history
for code completion. In a multi-round, multi-edit setting, we observe
substantial gains by iteratively prompting the model with additional user
edits. We open-source our code, data, and model weights to encourage future
research and release a VSCode extension powered by our model for interactive
usage.
|
[
"cs.SE",
"cs.LG",
"cs.PL"
] | false |
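Coeditor encodes code changes with a line diff format. As a small illustration of what a line-level diff encoding of an edit can look like (using Python's standard difflib; the exact format used by Coeditor may differ):

import difflib

before = ["def area(r):", "    return 3.14 * r * r"]
after = ["import math", "", "def area(r):", "    return math.pi * r ** 2"]
diff = difflib.unified_diff(before, after, fromfile="a.py", tofile="b.py", lineterm="")
print("\n".join(diff))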
2305.18627
|
2023-05-29T21:32:15Z
|
Global-QSGD: Practical Floatless Quantization for Distributed Learning
with Theoretical Guarantees
|
[
"Jihao Xin",
"Marco Canini",
"Peter Richtárik",
"Samuel Horváth"
] |
Efficient distributed training is a principal driver of recent advances in
deep learning. However, communication often proves costly and becomes the
primary bottleneck in these systems. As a result, there is a demand for the
design of efficient communication mechanisms that can empirically boost
throughput while providing theoretical guarantees. In this work, we introduce
Global-QSGD, a novel family of quantization operators, engineered to accelerate
distributed training based on global scaling. We demonstrate that Global-QSGD
is the first theoretically rigorous Allreduce-compatible compression mechanism
that achieves a provable speed-up by striking a balance between compression
error and communication savings. Importantly, Global-QSGD does not rely on
costly error feedback due to its inherent unbiasedness and offers up to
$O(\sqrt{n})$ additional compression ratio compared to the popular QSGD
quantization ($n$ represents the number of workers). To obtain theoretical
guarantees, we generalize the notion of standard unbiased compression operators
to incorporate Global-QSGD. We show that this wider class permits standard
analysis for unbiased compressors and thus ensures convergence for popular
optimization algorithms (e.g., distributed SGD) under typical settings. For the
empirical component of our work, we carry out a performance modeling analysis
to determine if Global-QSGD can enhance training throughput under specific
hardware configurations. We also conduct extensive empirical evaluations on
various tasks, testing our theory on both NVLink and PCIe connections as well
as a large-scale cloud system.
|
[
"cs.LG",
"cs.DC",
"stat.ML"
] | false |
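Global-QSGD quantizes gradients with a global scale so the compressed tensors stay Allreduce-compatible. The snippet below is only a hedged sketch of unbiased stochastic quantization with one scale shared across workers, in the spirit of the abstract; it is not the paper's operator, and the level count and synthetic gradients are illustrative assumptions.

import numpy as np

def stochastic_quantize(g, scale, levels=127):
    # Round g / scale * levels up or down at random so the result is unbiased.
    x = g / scale * levels
    lower = np.floor(x)
    q = lower + (np.random.rand(*x.shape) < (x - lower))
    return q.astype(np.int32)

def dequantize(q, scale, levels=127):
    return q.astype(np.float64) * scale / levels

rng = np.random.default_rng(0)
grads = [rng.normal(size=1000) for _ in range(8)]            # 8 workers
scale = max(np.abs(g).max() for g in grads)                   # one global scale for everyone
summed = sum(stochastic_quantize(g, scale) for g in grads)    # Allreduce-style integer sum
approx = dequantize(summed, scale) / len(grads)
print(np.abs(approx - np.mean(grads, axis=0)).mean())         # small quantization error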
2305.18630
|
2023-05-29T21:41:35Z
|
Identification of stormwater control strategies and their associated
uncertainties using Bayesian Optimization
|
[
"Abhiram Mullapudi",
"Branko Kerkez"
] |
Dynamic control is emerging as an effective methodology for operating
stormwater systems under stress from rapidly evolving weather patterns.
Informed by rainfall predictions and real-time sensor measurements, control
assets in the stormwater network can be dynamically configured to tune the
behavior of the stormwater network to reduce the risk of urban flooding,
equalize flows to the water reclamation facilities, and protect the receiving
water bodies. However, developing such control strategies requires significant
human and computational resources, and a methodology does not yet exist for
quantifying the risks associated with implementing these control strategies. To
address these challenges, in this paper, we introduce a Bayesian
Optimization-based approach for identifying stormwater control strategies and
estimating the associated uncertainties. We evaluate the efficacy of this
approach in identifying viable control strategies in a simulated environment on
real-world inspired combined and separated stormwater networks. We demonstrate
the computational efficiency of the proposed approach by comparing it against a
Genetic algorithm. Furthermore, we extend the Bayesian Optimization-based
approach to quantify the uncertainty associated with the identified control
strategies and evaluate it on a synthetic stormwater network. To our knowledge,
this is the first-ever stormwater control methodology that quantifies
uncertainty associated with the identified control actions. This Bayesian
optimization-based stormwater control methodology is an off-the-shelf control
approach that can be applied to control any stormwater network as long as we
have access to rainfall predictions and there exists a model for simulating the
behavior of the stormwater network.
|
[
"cs.LG",
"cs.SY",
"eess.SY"
] | false |
2305.19291
|
2023-05-29T21:22:08Z
|
Perimeter Control Using Deep Reinforcement Learning: A Model-free
Approach towards Homogeneous Flow Rate Optimization
|
[
"Xiaocan Li",
"Ray Coden Mercurius",
"Ayal Taitler",
"Xiaoyu Wang",
"Mohammad Noaeen",
"Scott Sanner",
"Baher Abdulhai"
] |
Perimeter control maintains high traffic efficiency within protected regions
by controlling transfer flows among regions to ensure that their traffic
densities are below critical values. Existing approaches can be categorized as
either model-based or model-free, depending on whether they rely on network
transmission models (NTMs) and macroscopic fundamental diagrams (MFDs).
Although model-based approaches are more data efficient and have performance
guarantees, they are inherently prone to model bias and inaccuracy. For
example, NTMs often become imprecise for a large number of protected regions,
and MFDs can exhibit scatter and hysteresis that are not captured in existing
model-based works. Moreover, no existing studies have employed reinforcement
learning for homogeneous flow rate optimization in microscopic simulation,
where spatial characteristics, vehicle-level information, and metering
realizations -- often overlooked in macroscopic simulations -- are taken into
account. To circumvent issues of model-based approaches and macroscopic
simulation, we propose a model-free deep reinforcement learning approach that
optimizes the flow rate homogeneously at the perimeter at the microscopic
level. Results demonstrate that our model-free reinforcement learning approach
without any knowledge of NTMs or MFDs can compete and match the performance of
a model-based approach, and exhibits enhanced generalizability and scalability.
|
[
"cs.LG",
"cs.AI",
"cs.SY",
"eess.SY"
] | false |
2305.18447
|
2023-05-29T03:53:40Z
|
Unleashing the Power of Randomization in Auditing Differentially Private
ML
|
[
"Krishna Pillutla",
"Galen Andrew",
"Peter Kairouz",
"H. Brendan McMahan",
"Alina Oprea",
"Sewoong Oh"
] |
We present a rigorous methodology for auditing differentially private machine
learning algorithms by adding multiple carefully designed examples called
canaries. We take a first principles approach based on three key components.
First, we introduce Lifted Differential Privacy (LiDP) that expands the
definition of differential privacy to handle randomized datasets. This gives us
the freedom to design randomized canaries. Second, we audit LiDP by trying to
distinguish between the model trained with $K$ canaries versus $K - 1$ canaries
in the dataset, leaving one canary out. By drawing the canaries i.i.d., LiDP
can leverage the symmetry in the design and reuse each privately trained model
to run multiple statistical tests, one for each canary. Third, we introduce
novel confidence intervals that take advantage of the multiple test statistics
by adapting to the empirical higher-order correlations. Together, this new
recipe demonstrates significant improvements in sample complexity, both
theoretically and empirically, using synthetic and real data. Further, recent
advances in designing stronger canaries can be readily incorporated into the
new framework.
|
[
"cs.LG",
"cs.CR",
"cs.IT",
"math.IT",
"math.ST",
"stat.TH"
] | false |
2305.18676
|
2023-05-30T01:26:41Z
|
LayerDiffusion: Layered Controlled Image Editing with Diffusion Models
|
[
"Pengzhi Li",
"QInxuan Huang",
"Yikang Ding",
"Zhiheng Li"
] |
Text-guided image editing has recently experienced rapid development.
However, simultaneously performing multiple editing actions on a single image,
such as background replacement and specific subject attribute changes, while
maintaining consistency between the subject and the background remains
challenging. In this paper, we propose LayerDiffusion, a semantic-based layered
controlled image editing method. Our method enables non-rigid editing and
attribute modification of specific subjects while preserving their unique
characteristics and seamlessly integrating them into new backgrounds. We
leverage a large-scale text-to-image model and employ a layered controlled
optimization strategy combined with layered diffusion training. During the
diffusion process, an iterative guidance strategy is used to generate a final
image that aligns with the textual description. Experimental results
demonstrate the effectiveness of our method in generating highly coherent
images that closely align with the given textual description. The edited images
maintain a high similarity to the features of the input image and surpass the
performance of current leading image editing methods. LayerDiffusion opens up
new possibilities for controllable image editing.
|
[
"cs.CV"
] | false |
2305.18680
|
2023-05-30T01:38:54Z
|
Improving Deep Representation Learning via Auxiliary Learnable Target
Coding
|
[
"Kangjun Liu",
"Ke Chen",
"Yaowei Wang",
"Kui Jia"
] |
Deep representation learning is a subfield of machine learning that focuses
on learning meaningful and useful representations of data through deep neural
networks. However, existing methods for semantic classification typically
employ pre-defined target codes such as the one-hot and the Hadamard codes,
which can either fail to model inter-class correlation or be less flexible in doing so. In
light of this, this paper introduces a novel learnable target coding as an
auxiliary regularization of deep representation learning, which can not only
incorporate latent dependency across classes but also impose geometric
properties of target codes into representation space. Specifically, a
margin-based triplet loss and a correlation consistency loss on the proposed
target codes are designed to encourage more discriminative representations
owing to enlarging between-class margins in representation space and favoring
equal semantic correlation of learnable target codes respectively. Experimental
results on several popular visual classification and retrieval benchmarks can
demonstrate the effectiveness of our method on improving representation
learning, especially for imbalanced data.
|
[
"cs.CV"
] | false |
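One ingredient described above is a margin-based triplet loss on the learnable target codes. The sketch below shows one plausible reading of that idea, pulling features toward their class code and away from another class's code; the margin, code dimension, and batch are assumed values rather than the paper's settings.

import torch
import torch.nn.functional as F

num_classes, code_dim = 10, 64
target_codes = torch.nn.Parameter(torch.randn(num_classes, code_dim))  # learnable target codes

feats = torch.randn(32, code_dim, requires_grad=True)        # toy batch of representations
labels = torch.randint(0, num_classes, (32,))

positive = target_codes[labels]                               # code of the true class
neg_labels = (labels + torch.randint(1, num_classes, (32,))) % num_classes
negative = target_codes[neg_labels]                           # code of a different class
loss = F.triplet_margin_loss(feats, positive, negative, margin=0.5)
loss.backward()                                               # gradients reach both features and codes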
2305.18684
|
2023-05-30T01:53:34Z
|
ShuffleMix: Improving Representations via Channel-Wise Shuffle of
Interpolated Hidden States
|
[
"Kangjun Liu",
"Ke Chen",
"Lihua Guo",
"Yaowei Wang",
"Kui Jia"
] |
Mixup style data augmentation algorithms have been widely adopted in various
tasks as implicit network regularization on representation learning to improve
model generalization, which can be achieved by a linear interpolation of
labeled samples in input or feature space as well as target space. Inspired by
the good robustness of alternative dropout strategies against over-fitting on
limited patterns of training samples, this paper introduces a novel concept of
ShuffleMix -- Shuffle of Mixed hidden features, which can be interpreted as a
kind of dropout operation in feature space. Specifically, our ShuffleMix method
favors a simple linear shuffle of randomly selected feature channels for
feature mixup in-between training samples to leverage semantic interpolated
supervision signals, which can be extended to a generalized shuffle operation
via additionally combining linear interpolations of intra-channel features.
Compared to its direct feature augmentation competitor -- Manifold
Mixup -- the proposed ShuffleMix can gain superior generalization, owing to
imposing more flexible and smooth constraints on generating samples and
achieving regularization effects of channel-wise feature dropout. Experimental
results on several public benchmarking datasets of single-label and multi-label
visual classification tasks can confirm the effectiveness of our method on
consistently improving representations over the state-of-the-art mixup
augmentation.
|
[
"cs.CV"
] | false |
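The following is a rough sketch of the channel-wise shuffle idea as we read it from the abstract: a random subset of feature channels is replaced by the permuted batch, and the targets are mixed with the same proportion. The exact ShuffleMix operation, its generalized intra-channel interpolation, and its label handling may differ; the mix ratio and tensor shapes here are placeholders.

import torch

def shufflemix(features, targets, mix_ratio=0.3):
    # features: (B, C, H, W); targets: (B, num_classes) one-hot or soft labels.
    B, C = features.shape[:2]
    perm = torch.randperm(B)
    mask = (torch.rand(C) < mix_ratio).view(1, C, 1, 1).to(features.dtype)
    mixed = features * (1 - mask) + features[perm] * mask     # swap the selected channels
    lam = mask.mean()                                          # fraction of channels swapped
    mixed_targets = (1 - lam) * targets + lam * targets[perm]
    return mixed, mixed_targets

x = torch.randn(8, 16, 4, 4)
y = torch.nn.functional.one_hot(torch.randint(0, 10, (8,)), 10).float()
mx, my = shufflemix(x, y)
print(mx.shape, my.shape)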
2305.18706
|
2023-05-30T03:03:11Z
|
HQDec: Self-Supervised Monocular Depth Estimation Based on a
High-Quality Decoder
|
[
"Fei Wang",
"Jun Cheng"
] |
Decoders play significant roles in recovering scene depths. However, the
decoders used in previous works ignore the propagation of multilevel lossless
fine-grained information, cannot adaptively capture local and global
information in parallel, and cannot perform sufficient global statistical
analyses on the final output disparities. In addition, the process of mapping
from a low-resolution feature space to a high-resolution feature space is a
one-to-many problem that may have multiple solutions. Therefore, the quality of
the recovered depth map is low. To this end, we propose a high-quality decoder
(HQDec), with which multilevel near-lossless fine-grained information, obtained
by the proposed adaptive axial-normalized position-embedded channel attention
sampling module (AdaAxialNPCAS), can be adaptively incorporated into a
low-resolution feature map with high-level semantics utilizing the proposed
adaptive information exchange scheme. In the HQDec, we leverage the proposed
adaptive refinement module (AdaRM) to model the local and global dependencies
between pixels in parallel and utilize the proposed disparity attention module
to model the distribution characteristics of disparity values from a global
perspective. To recover fine-grained high-resolution features with maximal
accuracy, we adaptively fuse the high-frequency information obtained by
constraining the upsampled solution space utilizing the local and global
dependencies between pixels into the high-resolution feature map generated from
the nonlearning method. Extensive experiments demonstrate that each proposed
component improves the quality of the depth estimation results over the
baseline results, and the developed approach achieves state-of-the-art results
on the KITTI and DDAD datasets. The code and models will be publicly available
at \href{https://github.com/fwucas/HQDec}{HQDec}.
|
[
"cs.CV"
] | false |
2305.18710
|
2023-05-30T03:30:24Z
|
High-Performance Inference Graph Convolutional Networks for
Skeleton-Based Action Recognition
|
[
"Ziao Li",
"Junyi Wang",
"Guhong Nie"
] |
Recently, significant achievements have been made in skeleton-based human
action recognition with the emergence of graph convolutional networks (GCNs).
However, the state-of-the-art (SOTA) models used for this task focus on
constructing more complex higher-order connections between joint nodes to
describe skeleton information, which leads to complex inference processes and
high computational costs, resulting in reduced practicality of the models. To address
the slow inference speed caused by overly complex model structures, we
introduce re-parameterization and over-parameterization techniques to GCNs, and
propose two novel high-performance inference graph convolutional networks,
namely HPI-GCN-RP and HPI-GCN-OP. HPI-GCN-RP applies the re-parameterization
technique to GCNs to achieve a higher inference speed with competitive model
performance. HPI-GCN-OP further utilizes the over-parameterization technique to
bring a significant performance improvement with a slight decrease in inference speed.
Experimental results on the two skeleton-based action recognition datasets
demonstrate the effectiveness of our approach. Our HPI-GCN-OP achieves an
accuracy of 93% on the cross-subject split of the NTU-RGB+D 60 dataset, and
90.1% on the cross-subject benchmark of the NTU-RGB+D 120 dataset and is 4.5
times faster than HD-GCN at the same accuracy.
|
[
"cs.CV"
] | false |
2305.18714
|
2023-05-30T03:39:53Z
|
Align, Perturb and Decouple: Toward Better Leverage of Difference
Information for RSI Change Detection
|
[
"Supeng Wang",
"Yuxi Li",
"Ming Xie",
"Mingmin Chi",
"Yabiao Wang",
"Chengjie Wang",
"Wenbing Zhu"
] |
Change detection is a widely adopted technique in remote sense imagery (RSI)
analysis in the discovery of long-term geomorphic evolution. To highlight the
areas of semantic changes, previous efforts mostly pay attention to learning
representative feature descriptors of a single image, while the difference
information is either modeled with simple difference operations or implicitly
embedded via feature interactions. Nevertheless, such difference modeling can
be noisy since it suffers from non-semantic changes and lacks explicit guidance
from image content or context. In this paper, we revisit the importance of
feature difference for change detection in RSI, and propose a series of
operations to fully exploit the difference information: Alignment, Perturbation
and Decoupling (APD). Firstly, alignment leverages contextual similarity to
compensate for the non-semantic difference in feature space. Next, a difference
module trained with semantic-wise perturbation is adopted to learn more
generalized change estimators, which reversely bootstraps feature extraction
and prediction. Finally, a decoupled dual-decoder structure is designed to
predict semantic changes in both content-aware and content-agnostic manners.
Extensive experiments are conducted on benchmarks of LEVIR-CD, WHU-CD and
DSIFN-CD, demonstrating our proposed operations bring significant improvement
and achieve competitive results under similar comparative conditions. Code is
available at https://github.com/wangsp1999/CD-Research/tree/main/openAPD
|
[
"cs.CV"
] | false |
2305.18726
|
2023-05-30T04:07:07Z
|
Diffusion-Stego: Training-free Diffusion Generative Steganography via
Message Projection
|
[
"Daegyu Kim",
"Chaehun Shin",
"Jooyoung Choi",
"Dahuin Jung",
"Sungroh Yoon"
] |
Generative steganography is the process of hiding secret messages in
generated images instead of cover images. Existing studies on generative
steganography use GAN or Flow models to obtain a high message hiding capacity and
anti-detection ability over cover images. However, they create relatively
unrealistic stego images because of the inherent limitations of generative
models. We propose Diffusion-Stego, a generative steganography approach based
on diffusion models which outperform other generative models in image
generation. Diffusion-Stego projects secret messages into latent noise of
diffusion models and generates stego images with an iterative denoising
process. Since the naive hiding of secret messages into noise increases visual
degradation and decreases extracted message accuracy, we introduce message
projection, which hides messages into noise space while addressing these
issues. We suggest three options for message projection to adjust the trade-off
between extracted message accuracy, anti-detection ability, and image quality.
Diffusion-Stego is a training-free approach, so we can apply it to pre-trained
diffusion models which generate high-quality images, or even large-scale
text-to-image models, such as Stable Diffusion. Diffusion-Stego achieved a high
capacity of messages (3.0 bpp of binary messages with 98% accuracy, and 6.0 bpp
with 90% accuracy) as well as high quality (with a FID score of 2.77 for 1.0
bpp on the FFHQ 64$\times$64 dataset) that makes it challenging to distinguish
from real images in the PNG format.
|
[
"cs.CV"
] | false |
2305.18782
|
2023-05-30T06:29:04Z
|
VVC Extension Scheme for Object Detection Using Contrast Reduction
|
[
"Takahiro Shindo",
"Taiju Watanabe",
"Kein Yamada",
"Hiroshi Watanabe"
] |
In recent years, video analysis using Artificial Intelligence (AI) has been
widely used, due to the remarkable development of image recognition technology
using deep learning. In 2019, the Moving Picture Experts Group (MPEG) has
started standardization of Video Coding for Machines (VCM) as a video coding
technology for image recognition. In the framework of VCM, both higher image
recognition accuracy and video compression performance are required. In this
paper, we propose an extension scheme of video coding for object detection
using Versatile Video Coding (VVC). Unlike video for human vision, video used
for object detection does not require a large image size or high contrast.
Downsampling the image reduces the amount of information to be transmitted,
and the decrease in image contrast makes the entropy of the image smaller.
Therefore, in our proposed scheme, the original image is
reduced in size and contrast, then coded with VVC encoder to achieve high
compression performance. Then, the output image from the VVC decoder is
restored to its original image size using the bicubic method. Experimental
results show that the proposed video coding scheme achieves better coding
performance than regular VVC in terms of object detection accuracy.
|
[
"cs.CV"
] | false |
2305.18830
|
2023-05-30T08:23:07Z
|
Semi-supervised Pathological Image Segmentation via Cross Distillation
of Multiple Attentions
|
[
"Lanfeng Zhong",
"Xin Liao",
"Shaoting Zhang",
"Guotai Wang"
] |
Segmentation of pathological images is a crucial step for accurate cancer
diagnosis. However, acquiring dense annotations of such images for training is
labor-intensive and time-consuming. To address this issue, Semi-Supervised
Learning (SSL) has the potential for reducing the annotation cost, but it is
challenged by a large number of unlabeled training images. In this paper, we
propose a novel SSL method based on Cross Distillation of Multiple Attentions
(CDMA) to effectively leverage unlabeled images. Firstly, we propose a
Multi-attention Tri-branch Network (MTNet) that consists of an encoder and a
three-branch decoder, with each branch using a different attention mechanism
that calibrates features in different aspects to generate diverse outputs.
Secondly, we introduce Cross Decoder Knowledge Distillation (CDKD) between the
three decoder branches, allowing them to learn from each other's soft labels to
mitigate the negative impact of incorrect pseudo labels in training.
Additionally, uncertainty minimization is applied to the average prediction of
the three branches, which further regularizes predictions on unlabeled images
and encourages inter-branch consistency. Our proposed CDMA was compared with
eight state-of-the-art SSL methods on the public DigestPath dataset, and the
experimental results showed that our method outperforms the other approaches
under different annotation ratios. The code is available at
\href{https://github.com/HiLab-git/CDMA}{https://github.com/HiLab-git/CDMA.}
|
[
"cs.CV"
] | false |
2305.18947
|
2023-05-30T11:26:18Z
|
A Probabilistic Rotation Representation for Symmetric Shapes With an
Efficiently Computable Bingham Loss Function
|
[
"Hiroya Sato",
"Takuya Ikeda",
"Koichi Nishiwaki"
] |
In recent years, a deep learning framework has been widely used for object
pose estimation. While quaternion is a common choice for rotation
representation, it cannot represent the ambiguity of the observation. In order
to handle the ambiguity, the Bingham distribution is one promising solution.
However, it requires complicated calculations when computing the negative
log-likelihood (NLL) loss. An alternative easy-to-implement loss function has
been proposed to avoid complex computations but has difficulty expressing
symmetric distribution. In this paper, we introduce a fast-computable and
easy-to-implement NLL loss function for Bingham distribution. We also create
the inference network and show that our loss function can capture the symmetric
property of target objects from their point clouds.
|
[
"cs.CV"
] | false |
2305.18953
|
2023-05-30T11:37:41Z
|
Sit Back and Relax: Learning to Drive Incrementally in All Weather
Conditions
|
[
"Stefan Leitner",
"M. Jehanzeb Mirza",
"Wei Lin",
"Jakub Micorek",
"Marc Masana",
"Mateusz Kozinski",
"Horst Possegger",
"Horst Bischof"
] |
In autonomous driving scenarios, current object detection models show strong
performance when tested in clear weather. However, their performance
deteriorates significantly when tested in degrading weather conditions. In
addition, even when adapted to perform robustly in a sequence of different
weather conditions, they are often unable to perform well in all of them and
suffer from catastrophic forgetting. To efficiently mitigate forgetting, we
propose Domain-Incremental Learning through Activation Matching (DILAM), which
employs unsupervised feature alignment to adapt only the affine parameters of a
clear weather pre-trained network to different weather conditions. We propose
to store these affine parameters as a memory bank for each weather condition
and plug-in their weather-specific parameters during driving (i.e. test time)
when the respective weather conditions are encountered. Our memory bank is
extremely lightweight, since affine parameters account for less than 2% of a
typical object detector. Furthermore, contrary to previous domain-incremental
learning approaches, we do not require the weather label when testing and
propose to automatically infer the weather condition by a majority voting
linear classifier.
|
[
"cs.CV"
] | false |
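DILAM stores the affine parameters of a clear-weather pre-trained network per weather condition and plugs them in at test time. The snippet below is a hedged sketch of such a memory bank over BatchNorm affine parameters; the tiny model, the condition names, and the omitted adaptation step are placeholders rather than the paper's setup.

import torch
import torch.nn as nn

def get_affine(model):
    return {n: (m.weight.detach().clone(), m.bias.detach().clone())
            for n, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}

def set_affine(model, bank):
    for n, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            w, b = bank[n]
            m.weight.data.copy_(w)
            m.bias.data.copy_(b)

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
memory_bank = {}
for condition in ["clear", "fog", "rain", "snow"]:
    # ... adapt only the affine parameters to this condition (training omitted) ...
    memory_bank[condition] = get_affine(model)

set_affine(model, memory_bank["fog"])  # plug in the fog-specific parameters at test time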
2305.18969
|
2023-05-30T12:06:35Z
|
MS-DETR: Natural Language Video Localization with Sampling Moment-Moment
Interaction
|
[
"Jing Wang",
"Aixin Sun",
"Hao Zhang",
"Xiaoli Li"
] |
Given a query, the task of Natural Language Video Localization (NLVL) is to
localize a temporal moment in an untrimmed video that semantically matches the
query. In this paper, we adopt a proposal-based solution that generates
proposals (i.e., candidate moments) and then selects the best matching proposal.
On top of modeling the cross-modal interaction between candidate moments and
the query, our proposed Moment Sampling DETR (MS-DETR) enables efficient
moment-moment relation modeling. The core idea is to sample a subset of moments
guided by the learnable templates with an adopted DETR (DEtection TRansformer)
framework. To achieve this, we design a multi-scale visual-linguistic encoder,
and an anchor-guided moment decoder paired with a set of learnable templates.
Experimental results on three public datasets demonstrate the superior
performance of MS-DETR.
|
[
"cs.CV"
] | false |
2305.18993
|
2023-05-30T12:45:49Z
|
ConES: Concept Embedding Search for Parameter Efficient Tuning Large
Vision Language Models
|
[
"Huahui Yi",
"Ziyuan Qin",
"Wei Xu",
"Miaotian Guo",
"Kun Wang",
"Shaoting Zhang",
"Kang Li",
"Qicheng Lao"
] |
Large pre-trained vision-language models have shown great prominence in
transferring pre-acquired knowledge to various domains and downstream tasks
with appropriate prompting or tuning. Existing prevalent tuning methods can be
generally categorized into three genres: 1) prompt engineering by creating
suitable prompt texts, which is time-consuming and requires domain expertise;
2) simply fine-tuning the whole model, which is extremely inefficient; 3)
prompt tuning through parameterized prompt embeddings with the text encoder.
Nevertheless, all methods rely on the text encoder for bridging the modality
gap between vision and language. In this work, we question the necessity of the
cumbersome text encoder for a more lightweight and efficient tuning paradigm as
well as more representative prompt embeddings closer to the image
representations. To achieve this, we propose a Concept Embedding Search (ConES)
approach by optimizing prompt embeddings -- without the need of the text
encoder -- to capture the 'concept' of the image modality through a variety of
task objectives. By dropping the text encoder, we are able to significantly
speed up the learning process, e.g., from about an hour to just ten minutes in
our experiments for personalized text-to-image generation without impairing the
generation quality. Moreover, our proposed approach is orthogonal to
existing tuning methods, since the searched concept embeddings can be further
utilized in the next stage of fine-tuning the pre-trained large models for
boosting performance. Extensive experiments show that our approach can beat the
prompt tuning and textual inversion methods in a variety of downstream tasks
including object detection, instance segmentation, and image generation. Our
approach also shows better generalization capability for unseen concepts in
specialized domains, such as the medical domain.
|
[
"cs.CV"
] | false |
2305.19021
|
2023-05-30T13:21:12Z
|
Using Data Analytics to Derive Business Intelligence: A Case Study
|
[
"Ugochukwu Orji",
"Ezugwu Obianuju",
"Modesta Ezema",
"Chikodili Ugwuishiwu",
"Elochukwu Ukwandu",
"Uchechukwu Agomuo"
] |
The data revolution experienced in recent times has thrown up new challenges
and opportunities for businesses of all sizes in diverse industries. Big data
analytics is already at the forefront of innovations to help make meaningful
business decisions from the abundance of raw data available today. Business
intelligence and analytics (BIA) has become a huge trend in today's IT world as
companies of all sizes are looking to improve their business processes and
scale up using data-driven solutions. This paper aims to demonstrate the data
analytical process of deriving business intelligence via the historical data of
a fictional bike share company seeking to find innovative ways to convert their
casual riders to annual paying registered members. The dataset used is freely
available as Chicago Divvy Bicycle Sharing Data on Kaggle. The authors used the
RTidyverse library in RStudio to analyse the data and followed the six data
analysis steps of ask, prepare, process, analyse, share, and act to recommend
some actionable approaches the company could adopt to convert casual riders to
paying annual members. The findings from this research serve as a valuable case
example of a real-world deployment of BIA technologies in industry, and a
demonstration of the data analysis cycle for data practitioners, researchers,
and other potential users.
|
[
"cs.CV"
] | false |
2305.19088
|
2023-05-30T14:51:58Z
|
TrueDeep: A systematic approach of crack detection with less data
|
[
"Ram Krishna Pandey",
"Akshit Achara"
] |
Supervised and semi-supervised semantic segmentation algorithms require
a significant amount of annotated data to achieve good performance. In many
situations, the data is either not available or the annotation is expensive.
The objective of this work is to show that by incorporating domain knowledge
along with deep learning architectures, we can achieve similar performance with
less data. We have used publicly available crack segmentation datasets and
shown that selecting the input images using knowledge can significantly boost
the performance of deep-learning based architectures. Our proposed approaches
have manifold advantages, such as low annotation and training costs and lower
energy consumption. We have measured the performance of our algorithm
quantitatively in terms of mean intersection over union (mIoU) and F score. Our
algorithms, developed with 23% of the overall data, have a similar performance
on the test data and significantly better performance on multiple blind
datasets.
|
[
"cs.CV"
] | false |
2305.19107
|
2023-05-30T15:12:52Z
|
Voxel2Hemodynamics: An End-to-end Deep Learning Method for Predicting
Coronary Artery Hemodynamics
|
[
"Ziyu Ni",
"Linda Wei",
"Lijian Xu",
"Simon Yu",
"Qing Xia",
"Hongsheng Li",
"Shaoting Zhang"
] |
Local hemodynamic forces play an important role in determining the functional
significance of coronary arterial stenosis and understanding the mechanism of
coronary disease progression. Computational fluid dynamics (CFD) has been
widely used to simulate hemodynamics non-invasively from coronary computed
tomography angiography (CCTA) images. However, accurate computational analysis
is still limited by the complex construction of patient-specific modeling and
time-consuming computation. In this work, we proposed an end-to-end deep
learning framework, which could predict the coronary artery hemodynamics from
CCTA images. The model was trained on the hemodynamic data obtained from 3D
simulations of synthetic and real datasets. Extensive experiments demonstrated
that the hemodynamic distributions predicted by our method agreed well with the
CFD-derived results. Quantitatively, the proposed method has the capability of
predicting the fractional flow reserve with an average error of 0.5% and 2.5%
for the synthetic dataset and real dataset, respectively. Particularly, our
method achieved much better accuracy for the real dataset compared to
PointNet++ with the point cloud input. This study demonstrates the feasibility
and great potential of our end-to-end deep learning method as a fast and
accurate approach for hemodynamic analysis.
|
[
"cs.CV"
] | false |
2305.19108
|
2023-05-30T15:13:17Z
|
DisCLIP: Open-Vocabulary Referring Expression Generation
|
[
"Lior Bracha",
"Eitan Shaar",
"Aviv Shamsian",
"Ethan Fetaya",
"Gal Chechik"
] |
Referring Expressions Generation (REG) aims to produce textual descriptions
that unambiguously identify specific objects within a visual scene.
Traditionally, this has been achieved through supervised learning methods,
which perform well on specific data distributions but often struggle to
generalize to new images and concepts. To address this issue, we present a
novel approach for REG, named DisCLIP, short for discriminative CLIP. We build
on CLIP, a large-scale visual-semantic model, to guide an LLM to generate a
contextual description of a target concept in an image while avoiding other
distracting concepts. Notably, this optimization happens at inference time and
does not require additional training or tuning of learned parameters. We
measure the quality of the generated text by evaluating the capability of a
receiver model to accurately identify the described object within the scene. To
achieve this, we use a frozen zero-shot comprehension module as a critique of
our generated referring expressions. We evaluate DisCLIP on multiple referring
expression benchmarks through human evaluation and show that it significantly
outperforms previous methods on out-of-domain datasets. Our results highlight
the potential of using pre-trained visual-semantic models for generating
high-quality contextual descriptions.
|
[
"cs.CV"
] | false |
2305.19112
|
2023-05-30T15:15:50Z
|
DENTEX: An Abnormal Tooth Detection with Dental Enumeration and
Diagnosis Benchmark for Panoramic X-rays
|
[
"Ibrahim Ethem Hamamci",
"Sezgin Er",
"Enis Simsar",
"Atif Emre Yuksel",
"Sadullah Gultekin",
"Serife Damla Ozdemir",
"Kaiyuan Yang",
"Hongwei Bran Li",
"Sarthak Pati",
"Bernd Stadlinger",
"Albert Mehl",
"Mustafa Gundogar",
"Bjoern Menze"
] |
Panoramic X-rays are frequently used in dentistry for treatment planning, but
their interpretation can be both time-consuming and prone to error. Artificial
intelligence (AI) has the potential to aid in the analysis of these X-rays,
thereby improving the accuracy of dental diagnoses and treatment plans.
Nevertheless, designing automated algorithms for this purpose poses significant
challenges, mainly due to the scarcity of annotated data and variations in
anatomical structure. To address these issues, the Dental Enumeration and
Diagnosis on Panoramic X-rays Challenge (DENTEX) has been organized in
association with the International Conference on Medical Image Computing and
Computer-Assisted Intervention (MICCAI) in 2023. This challenge aims to promote
the development of algorithms for multi-label detection of abnormal teeth,
using three types of hierarchically annotated data: partially annotated
quadrant data, partially annotated quadrant-enumeration data, and fully
annotated quadrant-enumeration-diagnosis data, inclusive of four different
diagnoses. In this paper, we present the results of evaluating participant
algorithms on the fully annotated data, additionally investigating performance
variation for quadrant, enumeration, and diagnosis labels in the detection of
abnormal teeth. The provision of this annotated dataset, alongside the results
of this challenge, may lay the groundwork for the creation of AI-powered tools
that can offer more precise and efficient diagnosis and treatment planning in
the field of dentistry. The evaluation code and datasets can be accessed at
https://github.com/ibrahimethemhamamci/DENTEX
|
[
"cs.CV"
] | false |
2305.19124
|
2023-05-30T15:34:45Z
|
Calliffusion: Chinese Calligraphy Generation and Style Transfer with
Diffusion Modeling
|
[
"Qisheng Liao",
"Gus Xia",
"Zhinuo Wang"
] |
In this paper, we propose Calliffusion, a system for generating high-quality
Chinese calligraphy using diffusion models. Our model architecture is based on
DDPM (Denoising Diffusion Probabilistic Models), and it is capable of
generating common characters in five different scripts and mimicking the styles
of famous calligraphers. Experiments demonstrate that our model can generate
calligraphy that is difficult to distinguish from real artworks and that our
controls for characters, scripts, and styles are effective. Moreover, we
demonstrate one-shot transfer learning, using LoRA (Low-Rank Adaptation) to
transfer Chinese calligraphy art styles to unseen characters and even
out-of-domain symbols such as English letters and digits.
|
[
"cs.CV"
] | false |
2305.19135
|
2023-05-30T15:46:25Z
|
Context-Preserving Two-Stage Video Domain Translation for Portrait
Stylization
|
[
"Doyeon Kim",
"Eunji Ko",
"Hyunsu Kim",
"Yunji Kim",
"Junho Kim",
"Dongchan Min",
"Junmo Kim",
"Sung Ju Hwang"
] |
Portrait stylization, which translates a real human face image into an
artistically stylized image, has attracted considerable interest and many prior
works have shown impressive quality in recent years. However, despite their
remarkable performances in the image-level translation tasks, prior methods
show unsatisfactory results when they are applied to the video domain. To
address the issue, we propose a novel two-stage video translation framework
with an objective function that enforces the model to generate a temporally
coherent stylized video while preserving context in the source video.
Furthermore, our model runs in real time with a latency of 0.011 seconds per
frame and requires only 5.6M parameters, and thus is widely applicable to
practical real-world applications.
|
[
"cs.CV"
] | false |
2305.19160
|
2023-05-30T16:03:12Z
|
Recognizing People by Body Shape Using Deep Networks of Images and Words
|
[
"Blake A. Myers",
"Lucas Jaggernauth",
"Thomas M. Metz",
"Matthew Q. Hill",
"Veda Nandan Gandi",
"Carlos D. Castillo",
"Alice J. O'Toole"
] |
Common and important applications of person identification occur at distances
and viewpoints in which the face is not visible or is not sufficiently resolved
to be useful. We examine body shape as a biometric across distance and
viewpoint variation. We propose an approach that combines standard object
classification networks with representations based on linguistic (word-based)
descriptions of bodies. Algorithms with and without linguistic training were
compared on their ability to identify people from body shape in images captured
across a large range of distances/views (close-range, 100m, 200m, 270m, 300m,
370m, 400m, 490m, 500m, 600m, and at elevated pitch in images taken by an
unmanned aerial vehicle [UAV]). Accuracy, as measured by identity-match ranking
and false accept errors in an open-set test, was surprisingly good. For
identity-ranking, linguistic models were more accurate for close-range images,
whereas non-linguistic models fared better at intermediate distances. Fusion of
the linguistic and non-linguistic embeddings improved performance at all but
the farthest distance. Although the non-linguistic model yielded fewer false
accepts at all distances, fusion of the linguistic and non-linguistic models
decreased false accepts for all but the UAV images. We conclude that
linguistic and non-linguistic representations of body shape can offer
complementary identity information for bodies that can improve identification
in applications of interest.
|
[
"cs.CV"
] | false |
2305.19193
|
2023-05-30T16:39:00Z
|
Video ControlNet: Towards Temporally Consistent Synthetic-to-Real Video
Translation Using Conditional Image Diffusion Models
|
[
"Ernie Chu",
"Shuo-Yen Lin",
"Jun-Cheng Chen"
] |
In this study, we present an efficient and effective approach for achieving
temporally consistent synthetic-to-real video translation in videos of varying
lengths. Our method leverages off-the-shelf conditional image diffusion models,
allowing us to perform multiple synthetic-to-real image generations in
parallel. By utilizing the available optical flow information from the
synthetic videos, our approach seamlessly enforces temporal consistency among
corresponding pixels across frames. This is achieved through joint noise
optimization, effectively minimizing spatial and temporal discrepancies. To the
best of our knowledge, our proposed method is the first to accomplish diverse
and temporally consistent synthetic-to-real video translation using conditional
image diffusion models. Furthermore, our approach does not require any training
or fine-tuning of the diffusion models. Extensive experiments conducted on
various benchmarks for synthetic-to-real video translation demonstrate the
effectiveness of our approach, both quantitatively and qualitatively. Finally,
we show that our method outperforms other baseline methods in terms of both
temporal consistency and visual quality.
|
[
"cs.CV"
] | false |
2305.19245
|
2023-05-30T17:32:12Z
|
AlteredAvatar: Stylizing Dynamic 3D Avatars with Fast Style Adaptation
|
[
"Thu Nguyen-Phuoc",
"Gabriel Schwartz",
"Yuting Ye",
"Stephen Lombardi",
"Lei Xiao"
] |
This paper presents a method that can quickly adapt dynamic 3D avatars to
arbitrary text descriptions of novel styles. Among existing approaches for
avatar stylization, direct optimization methods can produce excellent results
for arbitrary styles but they are unpleasantly slow. Furthermore, they require
redoing the optimization process from scratch for every new input. Fast
approximation methods using feed-forward networks trained on a large dataset of
style images can generate results for new inputs quickly, but tend not to
generalize well to novel styles and fall short in quality. We therefore
investigate a new approach, AlteredAvatar, that combines those two approaches
using the meta-learning framework. In the inner loop, the model learns to
optimize to match a single target style well; while in the outer loop, the
model learns to stylize efficiently across many styles. After training,
AlteredAvatar learns an initialization that can quickly adapt within a small
number of update steps to a novel style, which can be given using texts, a
reference image, or a combination of both. We show that AlteredAvatar can
achieve a good balance between speed, flexibility and quality, while
maintaining consistency across a wide range of novel views and facial
expressions.
|
[
"cs.CV"
] | true |
2305.19327
|
2023-05-30T18:00:06Z
|
Cones 2: Customizable Image Synthesis with Multiple Subjects
|
[
"Zhiheng Liu",
"Yifei Zhang",
"Yujun Shen",
"Kecheng Zheng",
"Kai Zhu",
"Ruili Feng",
"Yu Liu",
"Deli Zhao",
"Jingren Zhou",
"Yang Cao"
] |
Synthesizing images with user-specified subjects has received growing
attention due to its practical applications. Despite the recent success in
single subject customization, existing algorithms suffer from high training
cost and a low success rate as the number of subjects increases. Towards
controllable image synthesis with multiple subjects as the constraints, this
work studies how to efficiently represent a particular subject as well as how
to appropriately compose different subjects. We find that the text embedding
regarding the subject token already serves as a simple yet effective
representation that supports arbitrary combinations without any model tuning.
Through learning a residual on top of the base embedding, we manage to robustly
shift the raw subject to the customized subject given various text conditions.
We then propose to employ layout, a very abstract and easy-to-obtain prior, as
the spatial guidance for subject arrangement. By rectifying the activations in
the cross-attention map, the layout appoints and separates the location of
different subjects in the image, significantly alleviating the interference
across them. Both qualitative and quantitative experimental results demonstrate
our superiority over state-of-the-art alternatives under a variety of settings
for multi-subject customization.
|
[
"cs.CV"
] | false |
2305.19343
|
2023-05-30T18:12:13Z
|
Budget-Aware Graph Convolutional Network Design using Probabilistic
Magnitude Pruning
|
[
"Hichem Sahbi"
] |
Graph convolutional networks (GCNs) are becoming mainstream in
solving many image processing tasks including skeleton-based recognition. Their
general recipe consists in learning convolutional and attention layers that
maximize classification performances. With multi-head attention, GCNs are
highly accurate but oversized, and their deployment on edge devices requires
their pruning. Among existing methods, magnitude pruning (MP) is relatively
effective but its design is clearly suboptimal as network topology selection
and weight retraining are achieved independently. In this paper, we devise a
novel lightweight GCN design dubbed as Probabilistic Magnitude Pruning (PMP)
that jointly trains network topology and weights. Our method is variational and
proceeds by aligning the weight distribution of the learned networks with an a
priori distribution. This allows implementing any fixed pruning rate, and also
enhancing the generalization performances of the designed lightweight GCNs.
Extensive experiments conducted on the challenging task of skeleton-based
recognition show a substantial gain of our lightweight GCNs particularly at
very high pruning regimes.
|
[
"cs.CV"
] | false |
2306.08073
|
2023-05-30T01:11:05Z
|
Dynamic Clustering Transformer Network for Point Cloud Segmentation
|
[
"Dening Lu",
"Jun Zhou",
"Kyle Yilin Gao",
"Dilong Li",
"Jing Du",
"Linlin Xu",
"Jonathan Li"
] |
Point cloud segmentation is one of the most important tasks in computer
vision with widespread scientific, industrial, and commercial applications. The
research thereof has resulted in many breakthroughs in 3D object and scene
understanding. Previous methods typically utilized hierarchical architectures
for feature representation. However, the commonly used sampling and grouping
methods in hierarchical networks are only based on point-wise three-dimensional
coordinates, ignoring local semantic homogeneity of point clusters.
Additionally, the prevalent Farthest Point Sampling (FPS) method is often a
computational bottleneck. To address these issues, we propose a novel 3D point
cloud representation network, called Dynamic Clustering Transformer Network
(DCTNet). It has an encoder-decoder architecture, allowing for both local and
global feature learning. Specifically, we propose novel semantic feature-based
dynamic sampling and clustering methods in the encoder, which enable the model
to be aware of local semantic homogeneity for local feature aggregation.
Furthermore, in the decoder, we propose an efficient semantic feature-guided
upsampling method. Our method was evaluated on an object-based dataset
(ShapeNet), an urban navigation dataset (Toronto-3D), and a multispectral LiDAR
dataset, verifying the performance of DCTNet across a wide variety of practical
engineering applications. The inference speed of DCTNet is 3.8-16.8 times
faster than existing State-of-the-Art (SOTA) models on the ShapeNet dataset,
while achieving an instance-wise mIoU of 86.6%, the current top score. Our
method similarly outperforms previous methods on the other datasets, verifying
it as the new State-of-the-Art in point cloud segmentation.
|
[
"cs.CV"
] | false |
2305.18708
|
2023-05-30T03:24:09Z
|
Wide & deep learning for spatial & intensity adaptive image restoration
|
[
"Yadong Wang",
"Xiangzhi Bai"
] |
Most existing deep learning-based image restoration methods aim to
remove degradation with uniform spatial distribution and constant intensity,
making insufficient use of degradation prior knowledge. Here we bootstrap the
deep neural networks to suppress complex image degradation whose intensity is
spatially variable, by utilizing prior knowledge from degraded images.
Specifically, we propose an ingenious and efficient multi-frame image
restoration network (DparNet) with wide & deep architecture, which integrates
degraded images and prior knowledge of degradation to reconstruct images with
ideal clarity and stability. The degradation prior is directly learned from
degraded images in form of key degradation parameter matrix, with no
requirement of any off-site knowledge. The wide & deep architecture in DparNet
enables the learned parameters to directly modulate the final restoring
results, boosting spatial & intensity adaptive image restoration. We
demonstrate the proposed method on two representative image restoration
applications: image denoising and suppression of atmospheric turbulence effects
in images. Two large datasets, containing 109,536 and 49,744 images
respectively, were constructed to support our experiments. The experimental
results show that our DparNet significantly outperforms SoTA methods in
restoration performance and network efficiency. More importantly, by utilizing
the learned degradation parameters via wide & deep learning, we can improve the
PSNR of image restoration by 0.6-1.1 dB with less than a 2% increase in model
parameter numbers and computational complexity. Our work suggests that degraded
images may hide key information of the degradation process, which can be
utilized to boost spatial & intensity adaptive image restoration.
|
[
"cs.CV",
"eess.IV"
] | false |
2305.18752
|
2023-05-30T05:27:21Z
|
GPT4Tools: Teaching Large Language Model to Use Tools via
Self-instruction
|
[
"Rui Yang",
"Lin Song",
"Yanwei Li",
"Sijie Zhao",
"Yixiao Ge",
"Xiu Li",
"Ying Shan"
] |
This paper aims to efficiently enable Large Language Models (LLMs) to use
multimodal tools. Advanced proprietary LLMs, such as ChatGPT and GPT-4, have
shown great potential for tool usage through sophisticated prompt engineering.
Nevertheless, these models typically rely on prohibitive computational costs
and publicly inaccessible data. To address these challenges, we propose the
GPT4Tools based on self-instruct to enable open-source LLMs, such as LLaMA and
OPT, to use tools. It generates an instruction-following dataset by prompting
an advanced teacher with various multi-modal contexts. By using the Low-Rank
Adaptation (LoRA) optimization, our approach facilitates the open-source LLMs
to solve a range of visual problems, including visual comprehension and image
generation. Moreover, we provide a benchmark to evaluate the ability of LLMs to
use tools, which is performed in both zero-shot and fine-tuning ways. Extensive
experiments demonstrate the effectiveness of our method on various language
models, which not only significantly improves the accuracy of invoking seen
tools, but also enables the zero-shot capacity for unseen tools. The code and
demo are available at https://github.com/StevenGrove/GPT4Tools.
|
[
"cs.CV",
"cs.CL"
] | true |
2305.18756
|
2023-05-30T05:40:37Z
|
VSTAR: A Video-grounded Dialogue Dataset for Situated Semantic
Understanding with Scene and Topic Transitions
|
[
"Yuxuan Wang",
"Zilong Zheng",
"Xueliang Zhao",
"Jinpeng Li",
"Yueqian Wang",
"Dongyan Zhao"
] |
Video-grounded dialogue understanding is a challenging problem that requires
a machine to perceive, parse, and reason over situated semantics extracted from
weakly aligned video and dialogues. Most existing benchmarks treat both
modalities the same as a frame-independent visual understanding task, while
neglecting the intrinsic attributes in multimodal dialogues, such as scene and
topic transitions. In this paper, we present Video-grounded Scene&Topic AwaRe
dialogue (VSTAR) dataset, a large-scale video-grounded dialogue understanding
dataset based on 395 TV series. Based on VSTAR, we propose two benchmarks for
video-grounded dialogue understanding: scene segmentation and topic
segmentation, and one benchmark for video-grounded dialogue generation.
Comprehensive experiments are performed on these benchmarks to demonstrate the
importance of multimodal information and segments in video-grounded dialogue
understanding and generation.
|
[
"cs.CV",
"cs.CL"
] | false |
2305.18769
|
2023-05-30T06:04:30Z
|
DualVAE: Controlling Colours of Generated and Real Images
|
[
"Keerth Rathakumar",
"David Liebowitz",
"Christian Walder",
"Kristen Moore",
"Salil S. Kanhere"
] |
Colour controlled image generation and manipulation are of interest to
artists and graphic designers. Vector Quantised Variational AutoEncoders
(VQ-VAEs) with an autoregressive (AR) prior are able to produce high-quality
images, but lack an explicit representation mechanism to control colour
attributes. We introduce DualVAE, a hybrid representation model that provides
such control by learning disentangled representations for colour and geometry.
The geometry is represented by an image intensity mapping that identifies
structural features. The disentangled representation is obtained by two novel
mechanisms: (i) a dual branch architecture that separates image colour attributes from
geometric attributes, and (ii) a new ELBO that trains the combined colour and
geometry representations. DualVAE can control the colour of generated images,
and recolour existing images by transferring the colour latent representation
obtained from an exemplar image. We demonstrate that DualVAE generates images
with FID nearly two times better than VQ-GAN on a diverse collection of
datasets, including animated faces, logos and artistic landscapes.
|
[
"cs.CV",
"cs.LG"
] | false |
2305.18810
|
2023-05-30T07:53:25Z
|
Scene restoration from scaffold occlusion using deep learning-based
methods
|
[
"Yuexiong Ding",
"Muyang Liu",
"Xiaowei Luo"
] |
The occlusion issues of computer vision (CV) applications in construction
have attracted significant attention, especially those caused by the
wide-coverage, crisscrossed, and immovable scaffold. Intuitively, removing the
scaffold and restoring the occluded visual information can provide CV agents
with clearer site views and thus help them better understand the construction
scenes. Therefore, this study proposes a novel two-step method combining
pixel-level segmentation and image inpainting for restoring construction scenes
from scaffold occlusion. A low-cost data synthesis method based only on
unlabeled data is developed to address the shortage dilemma of labeled data.
Experiments on the synthesized test data show that the proposed method achieves
performances of 92% mean intersection over union (MIoU) for scaffold
segmentation and over 82% structural similarity (SSIM) for scene restoration
from scaffold occlusion.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.18812
|
2023-05-30T07:59:23Z
|
DiffSketching: Sketch Control Image Synthesis with Diffusion Models
|
[
"Qiang Wang",
"Di Kong",
"Fengyin Lin",
"Yonggang Qi"
] |
Creative sketch is a universal way of visual expression, but translating
images from an abstract sketch is very challenging. Traditionally, creating a
deep learning model for sketch-to-image synthesis needs to overcome the
distorted input sketch without visual details, and requires to collect
large-scale sketch-image datasets. We first study this task by using diffusion
models. Our model matches sketches through the cross domain constraints, and
uses a classifier to guide the image synthesis more accurately. Extensive
experiments confirmed that our method can not only be faithful to users' input
sketches, but also maintain the diversity and imagination of synthetic image
results. Our model can beat GAN-based methods in terms of generation quality and
human evaluation, and does not rely on massive sketch-image datasets.
Additionally, we present applications of our method in image editing and
interpolation.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.18865
|
2023-05-30T08:57:31Z
|
Elongated Physiological Structure Segmentation via Spatial and Scale
Uncertainty-aware Network
|
[
"Yinglin Zhang",
"Ruiling Xi",
"Huazhu Fu",
"Dave Towey",
"RuiBin Bai",
"Risa Higashita",
"Jiang Liu"
] |
Robust and accurate segmentation for elongated physiological structures is
challenging, especially in the ambiguous region, such as the corneal
endothelium microscope image with uneven illumination or the fundus image with
disease interference. In this paper, we present a spatial and scale
uncertainty-aware network (SSU-Net) that fully uses both spatial and scale
uncertainty to highlight ambiguous regions and integrate hierarchical structure
contexts. First, we estimate epistemic and aleatoric spatial uncertainty maps
using Monte Carlo dropout to approximate Bayesian networks. Based on these
spatial uncertainty maps, we propose the gated soft uncertainty-aware (GSUA)
module to guide the model to focus on ambiguous regions. Second, we extract the
uncertainty under different scales and propose the multi-scale
uncertainty-aware (MSUA) fusion module to integrate structure contexts from
hierarchical predictions, strengthening the final prediction. Finally, we
visualize the uncertainty map of the final prediction, providing interpretability
for segmentation results. Experimental results show that SSU-Net performs
best on cornea endothelial cell and retinal vessel segmentation tasks.
Moreover, compared with counterpart uncertainty-based methods, SSU-Net is more
accurate and robust.
|
[
"eess.IV",
"cs.CV"
] | false |
2305.18890
|
2023-05-30T09:44:12Z
|
Sensitivity of Slot-Based Object-Centric Models to their Number of Slots
|
[
"Roland S. Zimmermann",
"Sjoerd van Steenkiste",
"Mehdi S. M. Sajjadi",
"Thomas Kipf",
"Klaus Greff"
] |
Self-supervised methods for learning object-centric representations have
recently been applied successfully to various datasets. This progress is
largely fueled by slot-based methods, whose ability to cluster visual scenes
into meaningful objects holds great promise for compositional generalization
and downstream learning. In these methods, the number of slots (clusters) $K$
is typically chosen to match the number of ground-truth objects in the data,
even though this quantity is unknown in real-world settings. Indeed, the
sensitivity of slot-based methods to $K$, and how this affects their learned
correspondence to objects in the data has largely been ignored in the
literature. In this work, we address this issue through a systematic study of
slot-based methods. We propose using analogs to precision and recall based on
the Adjusted Rand Index to accurately quantify model behavior over a large
range of $K$. We find that, especially during training, incorrect choices of
$K$ do not yield the desired object decomposition and, in fact, cause
substantial oversegmentation or merging of separate objects
(undersegmentation). We demonstrate that the choice of the objective function
and incorporating instance-level annotations can moderately mitigate this
behavior while still falling short of fully resolving this issue. Indeed, we
show how this issue persists across multiple methods and datasets and stress
its importance for future slot-based models.
|
[
"cs.CV",
"cs.LG"
] | false |