Dataset schema (one record per paper; the rows below list these fields in this order):
arxiv_id: string, length 10
published: string, length 20 (UTC timestamp)
titles: string, length 9 to 243
authors: list, 1 to 389 entries
abstract: string, length 96 to 3.09k characters
categories: list, 1 to 10 entries
selected: bool, 2 classes
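For orientation, here is a minimal sketch of how records with this schema could be loaded and filtered in Python; the JSON Lines layout and the file name "papers.jsonl" are assumptions, not part of the dataset description above.

```python
# Minimal sketch, assuming the records below are stored as JSON Lines with the
# fields from the schema; the file name "papers.jsonl" is hypothetical.
import json

with open("papers.jsonl", "r", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

# Example query: cs.CL papers that were not marked as selected.
cl_unselected = [r for r in records
                 if "cs.CL" in r["categories"] and not r["selected"]]
print(len(cl_unselected), "cs.CL records with selected == False")
```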
2305.17448
2023-05-27T11:21:32Z
Measuring Your ASTE Models in The Wild: A Diversified Multi-domain Dataset For Aspect Sentiment Triplet Extraction
[ "Ting Xu", "Huiyun Yang", "Zhen Wu", "Jiaze Chen", "Fei Zhao", "Xinyu Dai" ]
Aspect Sentiment Triplet Extraction (ASTE) is widely used in various applications. However, existing ASTE datasets are limited in their ability to represent real-world scenarios, hindering the advancement of research in this area. In this paper, we introduce a new dataset, named DMASTE, which is manually annotated to better fit real-world scenarios by providing more diverse and realistic reviews for the task. The dataset includes various lengths, diverse expressions, more aspect types, and more domains than existing datasets. We conduct extensive experiments on DMASTE in multiple settings to evaluate previous ASTE approaches. Empirical results demonstrate that DMASTE is a more challenging ASTE dataset. Further analyses of in-domain and cross-domain settings provide promising directions for future research. Our code and dataset are available at https://github.com/NJUNLP/DMASTE.
[ "cs.CL" ]
false
2305.17458
2023-05-27T12:19:21Z
A Diffusion Model for Event Skeleton Generation
[ "Fangqi Zhu", "Lin Zhang", "Jun Gao", "Bing Qin", "Ruifeng Xu", "Haiqin Yang" ]
Event skeleton generation, aiming to induce an event schema skeleton graph with abstracted event nodes and their temporal relations from a set of event instance graphs, is a critical step in the temporal complex event schema induction task. Existing methods effectively address this task from a graph generation perspective but suffer from noise sensitivity and error accumulation, e.g., the inability to correct errors while generating the schema. We, therefore, propose a novel Diffusion Event Graph Model (DEGM) to address these issues. Our DEGM is the first workable diffusion model for event skeleton generation, where embedding and rounding techniques with a custom edge-based loss are introduced to transform a discrete event graph into a learnable latent representation. Furthermore, we propose a denoising training process to maintain the model's robustness. Consequently, DEGM derives the final schema, where error correction is guaranteed by iteratively refining the latent representation during the schema generation process. Experimental results on three IED bombing datasets demonstrate that our DEGM achieves better results than other state-of-the-art baselines. Our code and data are available at https://github.com/zhufq00/EventSkeletonGeneration.
[ "cs.CL" ]
false
2305.17491
2023-05-27T15:00:45Z
FERMAT: An Alternative to Accuracy for Numerical Reasoning
[ "Jasivan Alex Sivakumar", "Nafise Sadat Moosavi" ]
While pre-trained language models achieve impressive performance on various NLP benchmarks, they still struggle with tasks that require numerical reasoning. Recent advances in improving numerical reasoning are mostly achieved using very large language models that contain billions of parameters and are not accessible to everyone. In addition, numerical reasoning is measured using a single score on existing datasets. As a result, we do not have a clear understanding of the strengths and shortcomings of existing models on different numerical reasoning aspects and therefore, potential ways to improve them apart from scaling them up. Inspired by CheckList (Ribeiro et al., 2020), we introduce a multi-view evaluation set for numerical reasoning in English, called FERMAT. Instead of reporting a single score on a whole dataset, FERMAT evaluates models on various key numerical reasoning aspects such as number understanding, mathematical operations, and training dependency. Apart from providing a comprehensive evaluation of models on different numerical reasoning aspects, FERMAT enables a systematic and automated generation of an arbitrarily large training or evaluation set for each aspect. The datasets and codes are publicly available to generate further multi-view data for other tasks and languages.
[ "cs.CL" ]
false
2305.17529
2023-05-27T17:09:25Z
MeetingBank: A Benchmark Dataset for Meeting Summarization
[ "Yebowen Hu", "Tim Ganter", "Hanieh Deilamsalehy", "Franck Dernoncourt", "Hassan Foroosh", "Fei Liu" ]
As the number of recorded meetings increases, it becomes increasingly important to utilize summarization technology to create useful summaries of these recordings. However, there is a crucial lack of annotated meeting corpora for developing this technology, as it can be hard to collect meetings, especially when the topics discussed are confidential. Furthermore, meeting summaries written by experienced writers are scarce, making it hard for abstractive summarizers to produce sensible output without a reliable reference. This lack of annotated corpora has hindered the development of meeting summarization technology. In this paper, we present MeetingBank, a new benchmark dataset of city council meetings over the past decade. MeetingBank is unique among other meeting corpora due to its divide-and-conquer approach, which involves dividing professionally written meeting minutes into shorter passages and aligning them with specific segments of the meeting. This breaks down the process of summarizing a lengthy meeting into smaller, more manageable tasks. The dataset provides a new testbed of various meeting summarization systems and also allows the public to gain insight into how council decisions are made. We make the collection, including meeting video links, transcripts, reference summaries, agenda, and other metadata, publicly available to facilitate the development of better meeting summarization techniques. Our dataset can be accessed at: https://meetingbank.github.io
[ "cs.CL" ]
false
2305.17561
2023-05-27T19:31:41Z
Grounding Characters and Places in Narrative Texts
[ "Sandeep Soni", "Amanpreet Sihra", "Elizabeth F. Evans", "Matthew Wilkens", "David Bamman" ]
Tracking characters and locations throughout a story can help improve the understanding of its plot structure. Prior research has analyzed characters and locations from text independently without grounding characters to their locations in narrative time. Here, we address this gap by proposing a new spatial relationship categorization task. The objective of the task is to assign a spatial relationship category for every character and location co-mention within a window of text, taking into consideration linguistic context, narrative tense, and temporal scope. To this end, we annotate spatial relationships in approximately 2500 book excerpts and train a model using contextual embeddings as features to predict these relationships. When applied to a set of books, this model allows us to test several hypotheses on mobility and domestic space, revealing that protagonists are more mobile than non-central characters and that women as characters tend to occupy more interior space than men. Overall, our work is the first step towards joint modeling and analysis of characters and places in narrative text.
[ "cs.CL" ]
false
2305.17580
2023-05-27T21:04:26Z
ArPanEmo: An Open-Source Dataset for Fine-Grained Emotion Recognition in Arabic Online Content during COVID-19 Pandemic
[ "Maha Jarallah Althobaiti" ]
Emotion recognition is a crucial task in Natural Language Processing (NLP) that enables machines to comprehend the feelings conveyed in the text. The applications of emotion recognition are diverse, including mental health diagnosis, student support, and the detection of online suspicious behavior. Despite the substantial amount of literature available on emotion recognition in various languages, Arabic emotion recognition has received relatively little attention, leading to a scarcity of emotion-annotated corpora. This paper presents the ArPanEmo dataset, a novel dataset for fine-grained emotion recognition of online posts in Arabic. The dataset comprises 11,128 online posts manually labeled for ten emotion categories or neutral, with Fleiss' kappa of 0.71. It targets a specific Arabic dialect and addresses topics related to the COVID-19 pandemic, making it the first and largest of its kind. Python's packages were utilized to collect online posts related to the COVID-19 pandemic from three sources: Twitter, YouTube, and online newspaper comments between March 2020 and March 2022. Upon collection of the online posts, each one underwent a semi-automatic classification process using a lexicon of emotion-related terms to determine whether it belonged to the neutral or emotional category. Subsequently, manual labeling was conducted to further categorize the emotional data into fine-grained emotion categories.
[ "cs.CL" ]
false
2306.00005
2023-05-27T17:25:13Z
A Two-Stage Decoder for Efficient ICD Coding
[ "Thanh-Tung Nguyen", "Viktor Schlegel", "Abhinav Kashyap", "Stefan Winkler" ]
Clinical notes in healthcare facilities are tagged with the International Classification of Diseases (ICD) code; a list of classification codes for medical diagnoses and procedures. ICD coding is a challenging multilabel text classification problem due to noisy clinical document inputs and long-tailed label distribution. Recent automated ICD coding efforts improve performance by encoding medical notes and codes with additional data and knowledge bases. However, most of them do not reflect how human coders generate the code: first, the coders select general code categories and then look for specific subcategories that are relevant to a patient's condition. Inspired by this, we propose a two-stage decoding mechanism to predict ICD codes. Our model uses the hierarchical properties of the codes to split the prediction into two steps: At first, we predict the parent code and then predict the child code based on the previous prediction. Experiments on the public MIMIC-III data set show that our model performs well in single-model settings without external data or knowledge.
[ "cs.CL" ]
false
2305.17331
2023-05-27T02:26:52Z
Augmentation-Adapted Retriever Improves Generalization of Language Models as Generic Plug-In
[ "Zichun Yu", "Chenyan Xiong", "Shi Yu", "Zhiyuan Liu" ]
Retrieval augmentation can aid language models (LMs) in knowledge-intensive tasks by supplying them with external information. Prior works on retrieval augmentation usually jointly fine-tune the retriever and the LM, making them closely coupled. In this paper, we explore the scheme of generic retrieval plug-in: the retriever is to assist target LMs that may not be known beforehand or are unable to be fine-tuned together. To retrieve useful documents for unseen target LMs, we propose augmentation-adapted retriever (AAR), which learns LM's preferences obtained from a known source LM. Experiments on the MMLU and PopQA datasets demonstrate that our AAR trained with a small source LM is able to significantly improve the zero-shot generalization of larger target LMs ranging from 250M Flan-T5 to 175B InstructGPT. Further analysis indicates that the preferences of different LMs overlap, enabling AAR trained with a single source LM to serve as a generic plug-in for various target LMs. Our code is open-sourced at https://github.com/OpenMatch/Augmentation-Adapted-Retriever.
[ "cs.CL", "cs.LG" ]
false
2305.17337
2023-05-27T02:38:46Z
Benchmarking Diverse-Modal Entity Linking with Generative Models
[ "Sijia Wang", "Alexander Hanbo Li", "Henry Zhu", "Sheng Zhang", "Chung-Wei Hang", "Pramuditha Perera", "Jie Ma", "William Wang", "Zhiguo Wang", "Vittorio Castelli", "Bing Xiang", "Patrick Ng" ]
Entities can be expressed in diverse formats, such as texts, images, or column names and cell values in tables. While existing entity linking (EL) models work well on per modality configuration, such as text-only EL, visual grounding, or schema linking, it is more challenging to design a unified model for diverse modality configurations. To bring various modality configurations together, we constructed a benchmark for diverse-modal EL (DMEL) from existing EL datasets, covering all three modalities including text, image, and table. To approach the DMEL task, we proposed a generative diverse-modal model (GDMM) following a multimodal-encoder-decoder paradigm. Pre-training GDMM with rich corpora builds a solid foundation for DMEL without storing the entire KB for inference. Fine-tuning GDMM builds a stronger DMEL baseline, outperforming state-of-the-art task-specific EL models by 8.51 F1 score on average. Additionally, extensive error analyses are conducted to highlight the challenges of DMEL, facilitating future research on this task.
[ "cs.CL", "cs.AI" ]
false
2305.17373
2023-05-27T05:36:46Z
Zero- and Few-Shot Event Detection via Prompt-Based Meta Learning
[ "Zhenrui Yue", "Huimin Zeng", "Mengfei Lan", "Heng Ji", "Dong Wang" ]
With emerging online topics as a source for numerous new events, detecting unseen / rare event types presents an elusive challenge for existing event detection methods, where only limited data access is provided for training. To address the data scarcity problem in event detection, we propose MetaEvent, a meta learning-based framework for zero- and few-shot event detection. Specifically, we sample training tasks from existing event types and perform meta training to search for optimal parameters that quickly adapt to unseen tasks. In our framework, we propose to use the cloze-based prompt and a trigger-aware soft verbalizer to efficiently project output to unseen event types. Moreover, we design a contrastive meta objective based on maximum mean discrepancy (MMD) to learn class-separating features. As such, the proposed MetaEvent can perform zero-shot event detection by mapping features to event types without any prior knowledge. In our experiments, we demonstrate the effectiveness of MetaEvent in both zero-shot and few-shot scenarios, where the proposed method achieves state-of-the-art performance in extensive experiments on benchmark datasets FewEvent and MAVEN.
[ "cs.CL", "cs.AI" ]
false
2305.17378
2023-05-27T06:09:03Z
Improving Generalization in Language Model-Based Text-to-SQL Semantic Parsing: Two Simple Semantic Boundary-Based Techniques
[ "Daking Rai", "Bailin Wang", "Yilun Zhou", "Ziyu Yao" ]
Compositional and domain generalization present significant challenges in semantic parsing, even for state-of-the-art semantic parsers based on pre-trained language models (LMs). In this study, we empirically investigate improving an LM's generalization in semantic parsing with two simple techniques: at the token level, we introduce a token preprocessing method to preserve the semantic boundaries of tokens produced by LM tokenizers; at the sequence level, we propose to use special tokens to mark the boundaries of components aligned between input and output. Our experimental results on two text-to-SQL semantic parsing datasets show that our token preprocessing, although simple, can substantially improve the LM performance on both types of generalization, and our component boundary marking method is particularly helpful for compositional generalization.
[ "cs.CL", "cs.AI", "I.2.7" ]
false
2305.17542
2023-05-27T18:13:17Z
Non-Sequential Graph Script Induction via Multimedia Grounding
[ "Yu Zhou", "Sha Li", "Manling Li", "Xudong Lin", "Shih-Fu Chang", "Mohit Bansal", "Heng Ji" ]
Online resources such as WikiHow compile a wide range of scripts for performing everyday tasks, which can assist models in learning to reason about procedures. However, the scripts are always presented in a linear manner, which does not reflect the flexibility displayed by people executing tasks in real life. For example, in the CrossTask Dataset, 64.5% of consecutive step pairs are also observed in the reverse order, suggesting their ordering is not fixed. In addition, each step has an average of 2.56 frequent next steps, demonstrating "branching". In this paper, we propose the new challenging task of non-sequential graph script induction, aiming to capture optional and interchangeable steps in procedural planning. To automate the induction of such graph scripts for given tasks, we propose to take advantage of loosely aligned videos of people performing the tasks. In particular, we design a multimodal framework to ground procedural videos to WikiHow textual steps and thus transform each video into an observed step path on the latent ground truth graph script. This key transformation enables us to train a script knowledge model capable of both generating explicit graph scripts for learnt tasks and predicting future steps given a partial step sequence. Our best model outperforms the strongest pure text/vision baselines by 17.52% absolute gains on F1@3 for next step prediction and 13.8% absolute gains on Acc@1 for partial sequence completion. Human evaluation shows our model outperforming the WikiHow linear baseline by 48.76% absolute gains in capturing sequential and non-sequential step relationships.
[ "cs.CL", "cs.MM" ]
false
2305.17311
2023-05-27T00:07:17Z
Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models
[ "Yuhui Zhang", "Michihiro Yasunaga", "Zhengping Zhou", "Jeff Z. HaoChen", "James Zou", "Percy Liang", "Serena Yeung" ]
Language models have been shown to exhibit positive scaling, where performance improves as models are scaled up in terms of size, compute, or data. In this work, we introduce NeQA, a dataset consisting of questions with negation in which language models do not exhibit straightforward positive scaling. We show that this task can exhibit inverse scaling, U-shaped scaling, or positive scaling, and the three scaling trends shift in this order as we use more powerful prompting methods or model families. We hypothesize that solving NeQA depends on two subtasks: question answering (task 1) and negation understanding (task 2). We find that task 1 has linear scaling, while task 2 has sigmoid-shaped scaling with an emergent transition point, and composing these two scaling trends yields the final scaling trend of NeQA. Our work reveals and provides a way to analyze the complex scaling trends of language models.
[ "cs.CL", "cs.AI", "cs.LG" ]
false
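As an aside on the record above (2305.17311): the claimed composition of a linear question-answering trend with a sigmoid-shaped negation-understanding trend can be illustrated with a toy calculation. The functional forms, constants, and the binary-answer assumption below are illustrative choices, not values from the paper.

```python
# Toy illustration (assumed forms): compose a linearly improving QA subtask
# with a sigmoid-shaped negation-understanding subtask. If negation is
# misunderstood, the model effectively answers the un-negated question, so in
# a two-way answer space the prediction flips.
import numpy as np

scale = np.linspace(0.0, 1.0, 11)                        # normalized model scale
p_qa = 0.5 + 0.45 * scale                                # linear scaling (task 1)
p_neg = 1.0 / (1.0 + np.exp(-12.0 * (scale - 0.6)))      # sigmoid scaling (task 2)

acc_neqa = p_neg * p_qa + (1.0 - p_neg) * (1.0 - p_qa)   # composed accuracy
for s, a in zip(scale, acc_neqa):
    print(f"scale={s:.1f}  acc={a:.3f}")                 # dips, then recovers (U-shape)
```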
2305.17364
2023-05-27T04:34:58Z
An Investigation of Evaluation Metrics for Automated Medical Note Generation
[ "Asma Ben Abacha", "Wen-wai Yim", "George Michalopoulos", "Thomas Lin" ]
Recent studies on automatic note generation have shown that doctors can save significant amounts of time when using automatic clinical note generation (Knoll et al., 2022). Summarization models have been used for this task to generate clinical notes as summaries of doctor-patient conversations (Krishna et al., 2021; Cai et al., 2022). However, assessing which model would best serve clinicians in their daily practice is still a challenging task due to the large set of possible correct summaries, and the potential limitations of automatic evaluation metrics. In this paper, we study evaluation methods and metrics for the automatic generation of clinical notes from medical conversations. In particular, we propose new task-specific metrics and we compare them to SOTA evaluation metrics in text summarization and generation, including: (i) knowledge-graph embedding-based metrics, (ii) customized model-based metrics, (iii) domain-adapted/fine-tuned metrics, and (iv) ensemble metrics. To study the correlation between the automatic metrics and manual judgments, we evaluate automatic notes/summaries by comparing the system and reference facts and computing the factual correctness, and the hallucination and omission rates for critical medical facts. This study relied on seven datasets manually annotated by domain experts. Our experiments show that automatic evaluation metrics can have substantially different behaviors on different types of clinical notes datasets. However, the results highlight one stable subset of metrics as the most correlated with human judgments with a relevant aggregation of different evaluation criteria.
[ "cs.CL", "cs.AI", "cs.LG" ]
false
2305.17444
2023-05-27T11:00:15Z
Query-Efficient Black-Box Red Teaming via Bayesian Optimization
[ "Deokjae Lee", "JunYeong Lee", "Jung-Woo Ha", "Jin-Hwa Kim", "Sang-Woo Lee", "Hwaran Lee", "Hyun Oh Song" ]
The deployment of large-scale generative models is often restricted by their potential risk of causing harm to users in unpredictable ways. We focus on the problem of black-box red teaming, where a red team generates test cases and interacts with the victim model to discover a diverse set of failures with limited query access. Existing red teaming methods construct test cases based on human supervision or language model (LM) and query all test cases in a brute-force manner without incorporating any information from past evaluations, resulting in a prohibitively large number of queries. To this end, we propose Bayesian red teaming (BRT), novel query-efficient black-box red teaming methods based on Bayesian optimization, which iteratively identify diverse positive test cases leading to model failures by utilizing the pre-defined user input pool and the past evaluations. Experimental results on various user input pools demonstrate that our method consistently finds a significantly larger number of diverse positive test cases under the limited query budget than the baseline methods. The source code is available at https://github.com/snu-mllab/Bayesian-Red-Teaming.
[ "cs.AI", "cs.CL", "cs.CR", "cs.LG" ]
false
2305.17457
2023-05-27T12:19:13Z
Financial misstatement detection: a realistic evaluation
[ "Elias Zavitsanos", "Dimitris Mavroeidis", "Konstantinos Bougiatiotis", "Eirini Spyropoulou", "Lefteris Loukas", "Georgios Paliouras" ]
In this work, we examine the evaluation process for the task of detecting financial reports with a high risk of containing a misstatement. This task is often referred to, in the literature, as "misstatement detection in financial reports". We provide an extensive review of the related literature. We propose a new, realistic evaluation framework for the task which, unlike a large part of the previous work: (a) focuses on the misstatement class and its rarity, (b) considers the dimension of time when splitting data into training and test and (c) considers the fact that misstatements can take a long time to detect. Most importantly, we show that the evaluation process significantly affects system performance, and we analyze the performance of different models and feature types in the new realistic framework.
[ "cs.CL", "cs.LG", "q-fin.CP" ]
false
2305.17499
2023-05-27T15:39:13Z
CIF-PT: Bridging Speech and Text Representations for Spoken Language Understanding via Continuous Integrate-and-Fire Pre-Training
[ "Linhao Dong", "Zhecheng An", "Peihao Wu", "Jun Zhang", "Lu Lu", "Zejun Ma" ]
Speech or text representation generated by pre-trained models contains modal-specific information that could be combined for benefiting spoken language understanding (SLU) tasks. In this work, we propose a novel pre-training paradigm termed Continuous Integrate-and-Fire Pre-Training (CIF-PT). It relies on a simple but effective frame-to-token alignment: continuous integrate-and-fire (CIF) to bridge the representations between speech and text. It jointly performs speech-to-text training and language model distillation through CIF as the pre-training (PT). Evaluated on SLU benchmark SLURP dataset, CIF-PT outperforms the state-of-the-art model by 1.94% of accuracy and 2.71% of SLU-F1 on the tasks of intent classification and slot filling, respectively. We also observe the cross-modal representation extracted by CIF-PT obtains better performance than other neural interfaces for the tasks of SLU, including the dominant speech representation learned from self-supervised pre-training.
[ "cs.CL", "cs.MM", "eess.AS" ]
false
2305.17534
2023-05-27T17:34:36Z
Unsupervised Selective Rationalization with Noise Injection
[ "Adam Storek", "Melanie Subbiah", "Kathleen McKeown" ]
A major issue with using deep learning models in sensitive applications is that they provide no explanation for their output. To address this problem, unsupervised selective rationalization produces rationales alongside predictions by chaining two jointly-trained components, a rationale generator and a predictor. Although this architecture guarantees that the prediction relies solely on the rationale, it does not ensure that the rationale contains a plausible explanation for the prediction. We introduce a novel training technique that effectively limits generation of implausible rationales by injecting noise between the generator and the predictor. Furthermore, we propose a new benchmark for evaluating unsupervised selective rationalization models using movie reviews from existing datasets. We achieve sizeable improvements in rationale plausibility and task accuracy over the state-of-the-art across a variety of tasks, including our new benchmark, while maintaining or improving model faithfulness.
[ "cs.CL", "cs.AI", "cs.LG" ]
false
2305.17315
2023-05-27T00:36:30Z
Automatic Roof Type Classification Through Machine Learning for Regional Wind Risk Assessment
[ "Shuochuan Meng", "Mohammad Hesam Soleimani-Babakamali", "Ertugrul Taciroglu" ]
Roof type is one of the most critical building characteristics for wind vulnerability modeling. It is also the most frequently missing building feature from publicly available databases. An automatic roof classification framework is developed herein to generate high-resolution roof-type data using machine learning. A Convolutional Neural Network (CNN) was trained to classify roof types using building-level satellite images. The model achieved an F1 score of 0.96 on predicting roof types for 1,000 test buildings. The CNN model was then used to predict roof types for 161,772 single-family houses in New Hanover County, NC, and Miami-Dade County, FL. The distribution of roof type in city and census tract scales was presented. A high variance was observed in the dominant roof type among census tracts. To improve the completeness of the roof-type data, imputation algorithms were developed to populate missing roof data due to low-quality images, using critical building attributes and neighborhood-level roof characteristics.
[ "cs.LG" ]
false
2305.17409
2023-05-27T08:22:34Z
On the special role of class-selective neurons in early training
[ "Omkar Ranadive", "Nikhil Thakurdesai", "Ari S Morcos", "Matthew Leavitt", "Stéphane Deny" ]
It is commonly observed that deep networks trained for classification exhibit class-selective neurons in their early and intermediate layers. Intriguingly, recent studies have shown that these class-selective neurons can be ablated without deteriorating network function. But if class-selective neurons are not necessary, why do they exist? We attempt to answer this question in a series of experiments on ResNet-50s trained on ImageNet. We first show that class-selective neurons emerge during the first few epochs of training, before receding rapidly but not completely; this suggests that class-selective neurons found in trained networks are in fact vestigial remains of early training. With single-neuron ablation experiments, we then show that class-selective neurons are important for network function in this early phase of training. We also observe that the network is close to a linear regime in this early phase; we thus speculate that class-selective neurons appear early in training as quasi-linear shortcut solutions to the classification task. Finally, in causal experiments where we regularize against class selectivity at different points in training, we show that the presence of class-selective neurons early in training is critical to the successful training of the network; in contrast, class-selective neurons can be suppressed later in training with little effect on final accuracy. It remains to be understood by which mechanism the presence of class-selective neurons in the early phase of training contributes to the successful training of networks.
[ "cs.LG" ]
false
2305.17428
2023-05-27T09:37:58Z
Choosing the Right Weights: Balancing Value, Strategy, and Noise in Recommender Systems
[ "Smitha Milli", "Emma Pierson", "Nikhil Garg" ]
Many recommender systems are based on optimizing a linear weighting of different user behaviors, such as clicks, likes, shares, etc. Though the choice of weights can have a significant impact, there is little formal study or guidance on how to choose them. We analyze the optimal choice of weights from the perspectives of both users and content producers who strategically respond to the weights. We consider three aspects of user behavior: value-faithfulness (how well a behavior indicates whether the user values the content), strategy-robustness (how hard it is for producers to manipulate the behavior), and noisiness (how much estimation error there is in predicting the behavior). Our theoretical results show that for users, upweighting more value-faithful and less noisy behaviors leads to higher utility, while for producers, upweighting more value-faithful and strategy-robust behaviors leads to higher welfare (and the impact of noise is non-monotonic). Finally, we discuss how our results can help system designers select weights in practice.
[ "cs.LG" ]
false
2305.17492
2023-05-27T15:09:56Z
Dynamic User Segmentation and Usage Profiling
[ "Animesh Mitra", "Saswata Sahoo", "Soumyabrata Dey" ]
Usage data of a group of users distributed across a number of categories, such as songs, movies, webpages, links, regular household products, mobile apps, games, etc. can be ultra-high dimensional and massive in size. More often this kind of data is categorical and sparse in nature making it even more difficult to interpret any underlying hidden patterns such as clusters of users. However, if this information can be estimated accurately, it will have huge impacts in different business areas such as user recommendations for apps, songs, movies, and other similar products, health analytics using electronic health record (EHR) data, and driver profiling for insurance premium estimation or fleet management. In this work, we propose a clustering strategy of such categorical big data, utilizing the hidden sparsity of the dataset. Most traditional clustering methods fail to give proper clusters for such data and end up giving one big cluster with small clusters around it irrespective of the true structure of the data clusters. We propose a feature transformation, which maps the binary-valued usage vector to a lower dimensional continuous feature space in terms of groups of usage categories, termed as covariate classes. The lower dimensional feature representations in terms of covariate classes can be used for clustering. We implemented the proposed strategy and applied it to a large-sized, very high-dimensional song playlist dataset for performance validation. The results are impressive as we achieved similar-sized user clusters with minimal between-cluster overlap in the feature space (8% on average). As the proposed strategy has a very generic framework, it can be utilized as the analytic engine of many of the above-mentioned business use cases allowing an intelligent and dynamic personal recommendation system or a support system for smart business decision-making.
[ "cs.LG" ]
false
2305.17528
2023-05-27T17:06:17Z
Two Heads are Better than One: Towards Better Adversarial Robustness by Combining Transduction and Rejection
[ "Nils Palumbo", "Yang Guo", "Xi Wu", "Jiefeng Chen", "Yingyu Liang", "Somesh Jha" ]
Both transduction and rejection have emerged as important techniques for defending against adversarial perturbations. A recent work by Tramèr showed that, in the rejection-only case (no transduction), a strong rejection-solution can be turned into a strong (but computationally inefficient) non-rejection solution. This detector-to-classifier reduction has been mostly applied to give evidence that certain claims of strong selective-model solutions are susceptible, leaving the benefits of rejection unclear. On the other hand, a recent work by Goldwasser et al. showed that rejection combined with transduction can give provable guarantees (for certain problems) that cannot be achieved otherwise. Nevertheless, under recent strong adversarial attacks (GMSA, which has been shown to be much more effective than AutoAttack against transduction), Goldwasser et al.'s work was shown to have low performance in a practical deep-learning setting. In this paper, we take a step towards realizing the promise of transduction+rejection in more realistic scenarios. Theoretically, we show that a novel application of Tramèr's classifier-to-detector technique in the transductive setting can give significantly improved sample-complexity for robust generalization. While our theoretical construction is computationally inefficient, it guides us to identify an efficient transductive algorithm to learn a selective model. Extensive experiments using state-of-the-art attacks (AutoAttack, GMSA) show that our solutions provide significantly better robust accuracy.
[ "cs.LG" ]
false
2305.17346
2023-05-27T03:01:27Z
Input-Aware Dynamic Timestep Spiking Neural Networks for Efficient In-Memory Computing
[ "Yuhang Li", "Abhishek Moitra", "Tamar Geller", "Priyadarshini Panda" ]
Spiking Neural Networks (SNNs) have recently attracted widespread research interest as an efficient alternative to traditional Artificial Neural Networks (ANNs) because of their capability to process sparse and binary spike information and avoid expensive multiplication operations. Although the efficiency of SNNs can be realized on the In-Memory Computing (IMC) architecture, we show that the energy cost and latency of SNNs scale linearly with the number of timesteps used on IMC hardware. Therefore, in order to maximize the efficiency of SNNs, we propose input-aware Dynamic Timestep SNN (DT-SNN), a novel algorithmic solution to dynamically determine the number of timesteps during inference on an input-dependent basis. By calculating the entropy of the accumulated output after each timestep, we can compare it to a predefined threshold and decide if the information processed at the current timestep is sufficient for a confident prediction. We deploy DT-SNN on an IMC architecture and show that it incurs negligible computational overhead. We demonstrate that our method only uses 1.46 average timesteps to achieve the accuracy of a 4-timestep static SNN while reducing the energy-delay-product by 80%.
[ "cs.NE", "cs.LG" ]
false
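A rough sketch of the entropy-based stopping rule described in the record above (2305.17346); the threshold value and the step() interface are assumptions for illustration, not details from the paper.

```python
# Sketch of input-aware dynamic timesteps: accumulate the SNN output over
# timesteps and stop once the entropy of the softmax-normalized accumulation
# drops below a threshold. Threshold and step() interface are assumed.
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def entropy(p):
    return float(-np.sum(p * np.log(p + 1e-12)))

def dynamic_timestep_inference(step, max_timesteps=4, threshold=0.5):
    """step(t) -> per-class output at timestep t (hypothetical interface)."""
    accumulated = None
    for t in range(max_timesteps):
        out = step(t)
        accumulated = out if accumulated is None else accumulated + out
        if entropy(softmax(accumulated)) < threshold:
            return int(accumulated.argmax()), t + 1   # confident: stop early
    return int(accumulated.argmax()), max_timesteps
```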
2305.17386
2023-05-27T06:35:23Z
HyperFormer: Learning Expressive Sparse Feature Representations via Hypergraph Transformer
[ "Kaize Ding", "Albert Jiongqian Liang", "Bryan Perrozi", "Ting Chen", "Ruoxi Wang", "Lichan Hong", "Ed H. Chi", "Huan Liu", "Derek Zhiyuan Cheng" ]
Learning expressive representations for high-dimensional yet sparse features has been a longstanding problem in information retrieval. Though recent deep learning methods can partially solve the problem, they often fail to handle the numerous sparse features, particularly those tail feature values with infrequent occurrences in the training data. Worse still, existing methods cannot explicitly leverage the correlations among different instances to help further improve the representation learning on sparse features since such relational prior knowledge is not provided. To address these challenges, in this paper, we tackle the problem of representation learning on feature-sparse data from a graph learning perspective. Specifically, we propose to model the sparse features of different instances using hypergraphs where each node represents a data instance and each hyperedge denotes a distinct feature value. By passing messages on the constructed hypergraphs based on our Hypergraph Transformer (HyperFormer), the learned feature representations capture not only the correlations among different instances but also the correlations among features. Our experiments demonstrate that the proposed approach can effectively improve feature representation learning on sparse features.
[ "cs.IR", "cs.LG" ]
false
2305.17408
2023-05-27T08:22:12Z
AdaptGear: Accelerating GNN Training via Adaptive Subgraph-Level Kernels on GPUs
[ "Yangjie Zhou", "Yaoxu Song", "Jingwen Leng", "Zihan Liu", "Weihao Cui", "Zhendong Zhang", "Cong Guo", "Quan Chen", "Li Li", "Minyi Guo" ]
Graph neural networks (GNNs) are powerful tools for exploring and learning from graph structures and features. As such, achieving high-performance execution for GNNs becomes crucially important. Prior works have proposed to explore the sparsity (i.e., low density) in the input graph to accelerate GNNs, which uses the full-graph-level or block-level sparsity format. We show that they fail to balance the sparsity benefit and kernel execution efficiency. In this paper, we propose a novel system, referred to as AdaptGear, that addresses the challenge of optimizing GNNs performance by leveraging kernels tailored to the density characteristics at the subgraph level. Meanwhile, we also propose a method that dynamically chooses the optimal set of kernels for a given input graph. Our evaluation shows that AdaptGear can achieve a significant performance improvement, up to $6.49 \times$ ($1.87 \times$ on average), over the state-of-the-art works on two mainstream NVIDIA GPUs across various datasets.
[ "cs.DC", "cs.LG" ]
false
2305.17437
2023-05-27T10:24:22Z
GIMM: InfoMin-Max for Automated Graph Contrastive Learning
[ "Xin Xiong", "Furao Shen", "Xiangyu Wang", "Jian Zhao" ]
Graph contrastive learning (GCL) shows great potential in unsupervised graph representation learning. Data augmentation plays a vital role in GCL, and its optimal choice heavily depends on the downstream task. Many GCL methods with automated data augmentation face the risk of insufficient information as they fail to preserve the essential information necessary for the downstream task. To solve this problem, we propose InfoMin-Max for automated Graph contrastive learning (GIMM), which prevents GCL from encoding redundant information and losing essential information. GIMM consists of two major modules: (1) automated graph view generator, which acquires the approximation of InfoMin's optimal views through adversarial training without requiring task-relevant information; (2) view comparison, which learns an excellent encoder by applying InfoMax to view representations. To the best of our knowledge, GIMM is the first method that combines the InfoMin and InfoMax principles in GCL. Besides, GIMM introduces randomness to augmentation, thus stabilizing the model against perturbations. Extensive experiments on unsupervised and semi-supervised learning for node and graph classification demonstrate the superiority of our GIMM over state-of-the-art GCL methods with automated and manual data augmentation.
[ "cs.LG", "cs.AI" ]
false
2305.17476
2023-05-27T13:46:08Z
Toward Understanding Generative Data Augmentation
[ "Chenyu Zheng", "Guoqiang Wu", "Chongxuan Li" ]
Generative data augmentation, which scales datasets by obtaining fake labeled examples from a trained conditional generative model, boosts classification performance in various learning tasks including (semi-)supervised learning, few-shot learning, and adversarially robust learning. However, little work has theoretically investigated the effect of generative data augmentation. To fill this gap, we establish a general stability bound in this not independently and identically distributed (non-i.i.d.) setting, where the learned distribution is dependent on the original train set and generally not the same as the true distribution. Our theoretical result includes the divergence between the learned distribution and the true distribution. It shows that generative data augmentation can enjoy a faster learning rate when the order of the divergence term is $o\left(\max\left(\log(m)\beta_m, 1/\sqrt{m}\right)\right)$, where $m$ is the train set size and $\beta_m$ is the corresponding stability constant. We further specify the learning setup to the Gaussian mixture model and generative adversarial nets. We prove that in both cases, though generative data augmentation does not enjoy a faster learning rate, it can improve the learning guarantees at a constant level when the train set is small, which is significant when severe overfitting occurs. Simulation results on the Gaussian mixture model and empirical results on generative adversarial nets support our theoretical conclusions. Our code is available at https://github.com/ML-GSAI/Understanding-GDA.
[ "cs.LG", "stat.ML" ]
false
2305.17482
2023-05-27T14:23:14Z
Federated Empirical Risk Minimization via Second-Order Method
[ "Song Bian", "Zhao Song", "Junze Yin" ]
Many convex optimization problems with important applications in machine learning are formulated as empirical risk minimization (ERM). There are several examples: linear and logistic regression, LASSO, kernel regression, quantile regression, $p$-norm regression, support vector machines (SVM), and mean-field variational inference. To improve data privacy, federated learning is proposed in machine learning as a framework for training deep learning models on the network edge without sharing data between participating nodes. In this work, we present an interior point method (IPM) to solve a general ERM problem under the federated learning setting. We show that the communication complexity of each iteration of our IPM is $\tilde{O}(d^{3/2})$, where $d$ is the dimension (i.e., number of features) of the dataset.
[ "cs.LG", "cs.DC" ]
false
2305.17498
2023-05-27T15:38:53Z
A Model-Based Method for Minimizing CVaR and Beyond
[ "Si Yi Meng", "Robert M. Gower" ]
We develop a variant of the stochastic prox-linear method for minimizing the Conditional Value-at-Risk (CVaR) objective. CVaR is a risk measure focused on minimizing worst-case performance, defined as the average of the top quantile of the losses. In machine learning, such a risk measure is useful to train more robust models. Although the stochastic subgradient method (SGM) is a natural choice for minimizing the CVaR objective, we show that our stochastic prox-linear (SPL+) algorithm can better exploit the structure of the objective, while still providing a convenient closed form update. Our SPL+ method also adapts to the scaling of the loss function, which allows for easier tuning. We then specialize a general convergence theorem for SPL+ to our setting, and show that it allows for a wider selection of step sizes compared to SGM. We support this theoretical finding experimentally.
[ "math.OC", "cs.LG" ]
false
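The record above (2305.17498) defines CVaR as the average of the top quantile of the losses; a minimal empirical version of that definition is sketched below, with the quantile level chosen arbitrarily.

```python
# Empirical CVaR: mean of the worst (1 - alpha) fraction of losses, matching
# "the average of the top quantile of the losses". alpha = 0.9 is an
# illustrative choice, not a value from the paper.
import numpy as np

def cvar(losses, alpha=0.9):
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)      # Value-at-Risk at level alpha
    return losses[losses >= var].mean()   # average of the worst-case tail

rng = np.random.default_rng(0)
print(cvar(rng.exponential(size=1000)))   # noticeably larger than the mean loss
```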
2305.17523
2023-05-27T16:38:18Z
A Comparative Analysis of Portfolio Optimization Using Mean-Variance, Hierarchical Risk Parity, and Reinforcement Learning Approaches on the Indian Stock Market
[ "Jaydip Sen", "Aditya Jaiswal", "Anshuman Pathak", "Atish Kumar Majee", "Kushagra Kumar", "Manas Kumar Sarkar", "Soubhik Maji" ]
This paper presents a comparative analysis of the performances of three portfolio optimization approaches. Three approaches of portfolio optimization that are considered in this work are the mean-variance portfolio (MVP), hierarchical risk parity (HRP) portfolio, and reinforcement learning-based portfolio. The portfolios are trained and tested over several stock data and their performances are compared on their annual returns, annual risks, and Sharpe ratios. In the reinforcement learning-based portfolio design approach, the deep Q learning technique has been utilized. Due to the large number of possible states, the construction of the Q-table is done using a deep neural network. The historical prices of the 50 premier stocks from the Indian stock market, known as the NIFTY50 stocks, and several stocks from 10 important sectors of the Indian stock market are used to create the environment for training the agent.
[ "cs.LG", "q-fin.PM" ]
false
2305.17568
2023-05-27T20:08:35Z
Scalable Primal-Dual Actor-Critic Method for Safe Multi-Agent RL with General Utilities
[ "Donghao Ying", "Yunkai Zhang", "Yuhao Ding", "Alec Koppel", "Javad Lavaei" ]
We investigate safe multi-agent reinforcement learning, where agents seek to collectively maximize an aggregate sum of local objectives while satisfying their own safety constraints. The objective and constraints are described by general utilities, i.e., nonlinear functions of the long-term state-action occupancy measure, which encompass broader decision-making goals such as risk, exploration, or imitations. The exponential growth of the state-action space size with the number of agents presents challenges for global observability, further exacerbated by the global coupling arising from agents' safety constraints. To tackle this issue, we propose a primal-dual method utilizing shadow reward and $\kappa$-hop neighbor truncation under a form of correlation decay property, where $\kappa$ is the communication radius. In the exact setting, our algorithm converges to a first-order stationary point (FOSP) at the rate of $\mathcal{O}\left(T^{-2/3}\right)$. In the sample-based setting, we demonstrate that, with high probability, our algorithm requires $\widetilde{\mathcal{O}}\left(\epsilon^{-3.5}\right)$ samples to achieve an $\epsilon$-FOSP with an approximation error of $\mathcal{O}(\phi_0^{2\kappa})$, where $\phi_0\in (0,1)$. Finally, we demonstrate the effectiveness of our model through extensive numerical experiments.
[ "cs.LG", "math.OC" ]
false
2305.17589
2023-05-27T22:26:27Z
Graph Inductive Biases in Transformers without Message Passing
[ "Liheng Ma", "Chen Lin", "Derek Lim", "Adriana Romero-Soriano", "Puneet K. Dokania", "Mark Coates", "Philip Torr", "Ser-Nam Lim" ]
Transformers for graph data are increasingly widely studied and successful in numerous learning tasks. Graph inductive biases are crucial for Graph Transformers, and previous works incorporate them using message-passing modules and/or positional encodings. However, Graph Transformers that use message-passing inherit known issues of message-passing, and differ significantly from Transformers used in other domains, thus making transfer of research advances more difficult. On the other hand, Graph Transformers without message-passing often perform poorly on smaller datasets, where inductive biases are more crucial. To bridge this gap, we propose the Graph Inductive bias Transformer (GRIT) -- a new Graph Transformer that incorporates graph inductive biases without using message passing. GRIT is based on several architectural changes that are each theoretically and empirically justified, including: learned relative positional encodings initialized with random walk probabilities, a flexible attention mechanism that updates node and node-pair representations, and injection of degree information in each layer. We prove that GRIT is expressive -- it can express shortest path distances and various graph propagation matrices. GRIT achieves state-of-the-art empirical performance across a variety of graph datasets, thus showing the power that Graph Transformers without message-passing can deliver.
[ "cs.LG", "cs.AI" ]
false
2305.17592
2023-05-27T22:53:37Z
Approximation-Generalization Trade-offs under (Approximate) Group Equivariance
[ "Mircea Petrache", "Shubhendu Trivedi" ]
The explicit incorporation of task-specific inductive biases through symmetry has emerged as a general design precept in the development of high-performance machine learning models. For example, group equivariant neural networks have demonstrated impressive performance across various domains and applications such as protein and drug design. A prevalent intuition about such models is that the integration of relevant symmetry results in enhanced generalization. Moreover, it is posited that when the data and/or the model may only exhibit approximate or partial symmetry, the optimal or best-performing model is one where the model symmetry aligns with the data symmetry. In this paper, we conduct a formal unified investigation of these intuitions. To begin, we present general quantitative bounds that demonstrate how models capturing task-specific symmetries lead to improved generalization. In fact, our results do not require the transformations to be finite or even form a group and can work with partial or approximate equivariance. Utilizing this quantification, we examine the more general question of model mis-specification, i.e. when the model symmetries don't align with the data symmetries. We establish, for a given symmetry group, a quantitative comparison between the approximate/partial equivariance of the model and that of the data distribution, precisely connecting model equivariance error and data equivariance error. Our result delineates conditions under which the model equivariance error is optimal, thereby yielding the best-performing model for the given task and data.
[ "cs.LG", "stat.ML" ]
false
2305.17593
2023-05-27T23:03:41Z
Data Minimization at Inference Time
[ "Cuong Tran", "Ferdinando Fioretto" ]
In domains with high stakes such as law, recruitment, and healthcare, learning models frequently rely on sensitive user data for inference, necessitating the complete set of features. This not only poses significant privacy risks for individuals but also demands substantial human effort from organizations to verify information accuracy. This paper asks whether it is necessary to use all input features for accurate predictions at inference time. The paper demonstrates that, in a personalized setting, individuals may only need to disclose a small subset of their features without compromising decision-making accuracy. The paper also provides an efficient sequential algorithm to determine the appropriate attributes for each individual to provide. Evaluations across various learning tasks show that individuals can potentially report as little as 10% of their information while maintaining the same accuracy level as a model that employs the full set of user information.
[ "cs.LG", "cs.AI" ]
false
2305.18372
2023-05-27T23:30:27Z
Assumption Generation for the Verification of Learning-Enabled Autonomous Systems
[ "Corina Pasareanu", "Ravi Mangal", "Divya Gopinath", "Huafeng Yu" ]
Providing safety guarantees for autonomous systems is difficult as these systems operate in complex environments that require the use of learning-enabled components, such as deep neural networks (DNNs) for visual perception. DNNs are hard to analyze due to their size (they can have thousands or millions of parameters), lack of formal specifications (DNNs are typically learnt from labeled data, in the absence of any formal requirements), and sensitivity to small changes in the environment. We present an assume-guarantee style compositional approach for the formal verification of system-level safety properties of such autonomous systems. Our insight is that we can analyze the system in the absence of the DNN perception components by automatically synthesizing assumptions on the DNN behaviour that guarantee the satisfaction of the required safety properties. The synthesized assumptions are the weakest in the sense that they characterize the output sequences of all the possible DNNs that, plugged into the autonomous system, guarantee the required safety properties. The assumptions can be leveraged as run-time monitors over a deployed DNN to guarantee the safety of the overall system; they can also be mined to extract local specifications for use during training and testing of DNNs. We illustrate our approach on a case study taken from the autonomous airplanes domain that uses a complex DNN for perception.
[ "cs.AI", "cs.LG" ]
false
2307.10177
2023-05-27T20:12:15Z
Bayesian Spike Train Inference via Non-Local Priors
[ "Abhisek Chakraborty" ]
Advances in neuroscience have enabled researchers to measure the activities of large numbers of neurons simultaneously in behaving animals. We have access to the fluorescence of each of the neurons which provides a first-order approximation of the neural activity over time. Determining the exact spike of a neuron from this fluorescence trace constitutes an active area of research within the field of computational neuroscience. We propose a novel Bayesian approach based on a mixture of half-non-local prior densities and point masses for this task. Instead of a computationally expensive MCMC algorithm, we adopt a stochastic search-based approach that is capable of taking advantage of modern computing environments often equipped with multiple processors, to explore all possible arrangements of spikes and lack thereof in an observed spike train. It then reports the highest posterior probability arrangement of spikes and posterior probability for a spike at each location of the spike train. Our proposals lead to substantial improvements over existing proposals based on L1 regularization, and enjoy comparable estimation accuracy to the state-of-the-art L0 proposal, in simulations, and on recent calcium imaging data sets. Notably, contrary to optimization-based frequentist approaches, our methodology yields automatic uncertainty quantification associated with the spike-train inference.
[ "q-bio.NC", "cs.LG" ]
false
2305.17332
2023-05-27T02:27:27Z
Learning Capacity: A Measure of the Effective Dimensionality of a Model
[ "Daiwei Chen", "Weikai Chang", "Pratik Chaudhari" ]
We exploit a formal correspondence between thermodynamics and inference, where the number of samples can be thought of as the inverse temperature, to define a "learning capacity" which is a measure of the effective dimensionality of a model. We show that the learning capacity is a tiny fraction of the number of parameters for many deep networks trained on typical datasets, depends upon the number of samples used for training, and is numerically consistent with notions of capacity obtained from the PAC-Bayesian framework. The test error as a function of the learning capacity does not exhibit double descent. We show that the learning capacity of a model saturates at very small and very large sample sizes; this provides guidelines, as to whether one should procure more data or whether one should search for new architectures, to improve performance. We show how the learning capacity can be used to understand the effective dimensionality, even for non-parametric models such as random forests and $k$-nearest neighbor classifiers.
[ "cs.LG", "cs.IT", "math.IT" ]
false
2305.17352
2023-05-27T03:15:24Z
Is Centralized Training with Decentralized Execution Framework Centralized Enough for MARL?
[ "Yihe Zhou", "Shunyu Liu", "Yunpeng Qing", "Kaixuan Chen", "Tongya Zheng", "Yanhao Huang", "Jie Song", "Mingli Song" ]
Centralized Training with Decentralized Execution (CTDE) has recently emerged as a popular framework for cooperative Multi-Agent Reinforcement Learning (MARL), where agents can use additional global state information to guide training in a centralized way and make their own decisions only based on decentralized local policies. Despite the encouraging results achieved, CTDE makes an independence assumption on agent policies, which limits agents to adopt global cooperative information from each other during centralized training. Therefore, we argue that existing CTDE methods cannot fully utilize global information for training, leading to an inefficient joint-policy exploration and even suboptimal results. In this paper, we introduce a novel Centralized Advising and Decentralized Pruning (CADP) framework for multi-agent reinforcement learning that not only enables an efficacious message exchange among agents during training but also guarantees independent policies for execution. First, CADP endows agents with an explicit communication channel to seek and take advice from different agents for more centralized training. To further ensure decentralized execution, we propose a smooth model pruning mechanism to progressively constrain agent communication into a closed one without degradation in agent cooperation capability. Empirical evaluations on StarCraft II micromanagement and Google Research Football benchmarks demonstrate that the proposed framework achieves superior performance compared with the state-of-the-art counterparts. Our code will be made publicly available.
[ "cs.AI", "cs.LG", "cs.MA" ]
false
2305.17387
2023-05-27T06:46:08Z
Learning from Integral Losses in Physics Informed Neural Networks
[ "Ehsan Saleh", "Saba Ghaffari", "Timothy Bretl", "Luke Olson", "Matthew West" ]
This work proposes a solution for the problem of training physics informed networks under partial integro-differential equations. These equations require infinite or a large number of neural evaluations to construct a single residual for training. As a result, accurate evaluation may be impractical, and we show that naive approximations at replacing these integrals with unbiased estimates lead to biased loss functions and solutions. To overcome this bias, we investigate three types of solutions: the deterministic sampling approach, the double-sampling trick, and the delayed target method. We consider three classes of PDEs for benchmarking; one defining a Poisson problem with singular charges and weak solutions, another involving weak solutions on electro-magnetic fields and a Maxwell equation, and a third one defining a Smoluchowski coagulation problem. Our numerical results confirm the existence of the aforementioned bias in practice, and also show that our proposed delayed target approach can lead to accurate solutions with comparable quality to ones estimated with a large number of samples. Our implementation is open-source and available at https://github.com/ehsansaleh/btspinn.
[ "cs.LG", "cs.AI", "cs.NA", "math.NA" ]
false
2305.17417
2023-05-27T08:53:26Z
Modeling Dynamic Heterogeneous Graph and Node Importance for Future Citation Prediction
[ "Hao Geng", "Deqing Wang", "Fuzhen Zhuang", "Xuehua Ming", "Chenguang Du", "Ting Jiang", "Haolong Guo", "Rui Liu" ]
Accurate citation count prediction for newly published papers could help editors and readers rapidly identify papers that will be influential in the future. Though many approaches have been proposed to predict a paper's future citations, most ignore the dynamic heterogeneous graph structure or node importance in academic networks. To cope with this problem, we propose a Dynamic heterogeneous Graph and Node Importance network (DGNI) learning framework, which fully leverages the dynamic heterogeneous graph and node importance information to predict the future citation trends of newly published papers. First, a dynamic heterogeneous network embedding module is provided to capture the dynamic evolutionary trends of the whole academic network. Then, a node importance embedding module is proposed to capture the global consistency relationship and determine each paper's node importance. Finally, the dynamic evolutionary trend embeddings and node importance embeddings computed above are combined to jointly predict the future citation counts of each paper via a log-normal distribution model based on the multi-faceted paper node representations. Extensive experiments on two large-scale datasets demonstrate that our model significantly improves all indicators compared to the SOTA models.
[ "cs.DL", "cs.LG", "physics.soc-ph" ]
false
2305.17557
2023-05-27T19:16:55Z
Fair Clustering via Hierarchical Fair-Dirichlet Process
[ "Abhisek Chakraborty", "Anirban Bhattacharya", "Debdeep Pati" ]
The advent of ML-driven decision-making and policy formation has led to an increasing focus on algorithmic fairness. As clustering is one of the most commonly used unsupervised machine learning approaches, there has naturally been a proliferation of literature on {\em fair clustering}. A popular notion of fairness in clustering mandates the clusters to be {\em balanced}, i.e., each level of a protected attribute must be approximately equally represented in each cluster. Building upon the original framework, this literature has rapidly expanded in various aspects. In this article, we offer a novel model-based formulation of fair clustering, complementing the existing literature which is almost exclusively based on optimizing appropriate objective functions.
[ "stat.ML", "cs.CY", "cs.LG" ]
false
2305.19379
2023-05-27T07:43:19Z
Inter Subject Emotion Recognition Using Spatio-Temporal Features From EEG Signal
[ "Mohammad Asif", "Diya Srivastava", "Aditya Gupta", "Uma Shanker Tiwary" ]
Inter-subject or subject-independent emotion recognition has been a challenging task in affective computing. This work presents an easy-to-implement emotion recognition model that classifies emotions from EEG signals in a subject-independent manner. It is based on the well-known EEGNet architecture, which is widely used in EEG-related BCIs. We used the Dataset on Emotion using Naturalistic Stimuli (DENS). The dataset contains Emotional Events -- precise information about the timings of the emotions that participants felt. The model is a combination of regular, depthwise, and separable convolution layers of a CNN to classify the emotions. The model has the capacity to learn the spatial features of the EEG channels and the temporal features of the EEG signals' variability over time. The model is evaluated on the valence space ratings and achieves an accuracy of 73.04%.
[ "cs.HC", "cs.LG", "eess.SP" ]
false
2305.17531
2023-05-27T17:22:32Z
Probing reaction channels via reinforcement learning
[ "Senwei Liang", "Aditya N. Singh", "Yuanran Zhu", "David T. Limmer", "Chao Yang" ]
We propose a reinforcement learning based method to identify important configurations that connect reactant and product states along chemical reaction paths. By shooting multiple trajectories from these configurations, we can generate an ensemble of configurations that concentrate on the transition path ensemble. This configuration ensemble can be effectively employed in a neural network-based partial differential equation solver to obtain an approximate solution of a restricted Backward Kolmogorov equation, even when the dimension of the problem is very high. The resulting solution, known as the committor function, encodes mechanistic information for the reaction and can in turn be used to evaluate reaction rates.
[ "physics.chem-ph", "cs.AI", "cs.LG", "cs.NA", "math.NA" ]
false
2305.17611
2023-05-28T02:38:53Z
Bayesian Decision Making to Localize Visual Queries in 2D
[ "Syed Asjad", "Aniket Gupta", "Hanumant Singh" ]
This report describes our approach for the EGO4D 2023 Visual Query 2D Localization Challenge. Our method aims to reduce the number of False Positives (FP) that occur because of high similarity between the visual crop and the proposed bounding boxes from the baseline's Region Proposal Network (RPN). Our method uses a transformer to determine similarity in higher dimensions which is used as our prior belief. The results are then combined together with the similarity in lower dimensions from the Siamese Head, acting as our measurement, to generate a posterior which is then used to determine the final similarity of the visual crop with the proposed bounding box. Our code is publicly available $\href{https://github.com/s-m-asjad/EGO4D_VQ2D}{here}$.
[ "cs.CV" ]
false
2305.17654
2023-05-28T07:41:10Z
MixDehazeNet : Mix Structure Block For Image Dehazing Network
[ "LiPing Lu", "Qian Xiong", "DuanFeng Chu", "BingRong Xu" ]
Image dehazing is a typical task in the low-level vision field. Previous studies verified the effectiveness of the large convolutional kernel and attention mechanism in dehazing. However, there are two drawbacks: the multi-scale properties of an image are readily ignored when a large convolutional kernel is introduced, and the standard series connection of attention modules does not sufficiently account for an uneven haze distribution. In this paper, we propose a novel framework named Mix Structure Image Dehazing Network (MixDehazeNet), which addresses the two issues mentioned above. Specifically, it mainly consists of two parts: the multi-scale parallel large convolution kernel module and the enhanced parallel attention module. Compared with a single large kernel, multi-scale parallel large kernels are more capable of taking partial texture into account during the dehazing phase. In addition, an enhanced parallel attention module is developed, in which parallel connections of attention perform better at dehazing an uneven haze distribution. Extensive experiments on three benchmarks demonstrate the effectiveness of our proposed methods. For example, compared with previous state-of-the-art methods, MixDehazeNet achieves a significant improvement (42.62 dB PSNR) on the SOTS indoor dataset. The code is released at https://github.com/AmeryXiong/MixDehazeNet.
[ "cs.CV" ]
false
2305.17695
2023-05-28T11:39:51Z
k-NNN: Nearest Neighbors of Neighbors for Anomaly Detection
[ "Ori Nizan", "Ayellet Tal" ]
Anomaly detection aims at identifying images that deviate significantly from the norm. We focus on algorithms that embed the normal training examples in a feature space and, given a test image, detect anomalies based on the distance of its features to the k nearest training neighbors. We propose a new operator that takes into account the varying structure and importance of the features in the embedding space. Interestingly, this is done by considering not only the nearest neighbors, but also the neighbors of these neighbors (k-NNN). We show that by simply replacing the nearest-neighbor component in existing algorithms with our k-NNN operator, while leaving the rest of the algorithms untouched, each algorithm's own results are improved. This is the case both for common homogeneous datasets, such as flowers or nuts of a specific type, and for more diverse datasets.
[ "cs.CV" ]
false
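The neighbors-of-neighbors idea described in the abstract above lends itself to a compact illustration. The sketch below is not the authors' implementation; the equal-weight pooling of first-hop and second-hop distances is an assumption made purely for exposition.

```python
# Minimal sketch of a neighbors-of-neighbors anomaly score (illustrative only;
# the equal-weight pooling below is an assumption, not the paper's operator).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knnn_score(train_feats: np.ndarray, test_feats: np.ndarray, k: int = 5) -> np.ndarray:
    """Score each test embedding using its k nearest training neighbors and
    the neighbors of those neighbors; larger scores mean more anomalous."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(train_feats)

    # First hop: k nearest training neighbors of each test point.
    d_test, idx_test = nn.kneighbors(test_feats, n_neighbors=k)

    scores = []
    for dists, idxs in zip(d_test, idx_test):
        # Second hop: neighbors of each neighbor (skip the neighbor itself at column 0).
        d_hop2, _ = nn.kneighbors(train_feats[idxs], n_neighbors=k + 1)
        scores.append(0.5 * dists.mean() + 0.5 * d_hop2[:, 1:].mean())
    return np.asarray(scores)
```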
2305.17710
2023-05-28T12:31:27Z
OccCasNet: Occlusion-aware Cascade Cost Volume for Light Field Depth Estimation
[ "Wentao Chao", "Fuqing Duan", "Xuechun Wang", "Yingqian Wang", "Guanghui Wang" ]
Light field (LF) depth estimation is a crucial task with numerous practical applications. However, mainstream methods based on the multi-view stereo (MVS) are resource-intensive and time-consuming as they need to construct a finer cost volume. To address this issue and achieve a better trade-off between accuracy and efficiency, we propose an occlusion-aware cascade cost volume for LF depth (disparity) estimation. Our cascaded strategy reduces the sampling number while keeping the sampling interval constant during the construction of a finer cost volume. We also introduce occlusion maps to enhance accuracy in constructing the occlusion-aware cost volume. Specifically, we first obtain the coarse disparity map through the coarse disparity estimation network. Then, the sub-aperture images (SAIs) of side views are warped to the center view based on the initial disparity map. Next, we propose photo-consistency constraints between the warped SAIs and the center SAI to generate occlusion maps for each SAI. Finally, we introduce the coarse disparity map and occlusion maps to construct an occlusion-aware refined cost volume, enabling the refined disparity estimation network to yield a more precise disparity map. Extensive experiments demonstrate the effectiveness of our method. Compared with state-of-the-art methods, our method achieves a superior balance between accuracy and efficiency and ranks first in terms of MSE and Q25 metrics among published methods on the HCI 4D benchmark. The code and model of the proposed method are available at https://github.com/chaowentao/OccCasNet.
[ "cs.CV" ]
false
2305.17763
2023-05-28T16:18:41Z
NeurOCS: Neural NOCS Supervision for Monocular 3D Object Localization
[ "Zhixiang Min", "Bingbing Zhuang", "Samuel Schulter", "Buyu Liu", "Enrique Dunn", "Manmohan Chandraker" ]
Monocular 3D object localization in driving scenes is a crucial task, but challenging due to its ill-posed nature. Estimating 3D coordinates for each pixel on the object surface holds great potential as it provides dense 2D-3D geometric constraints for the underlying PnP problem. However, high-quality ground truth supervision is not available in driving scenes due to sparsity and various artifacts of Lidar data, as well as the practical infeasibility of collecting per-instance CAD models. In this work, we present NeurOCS, a framework that uses instance masks and 3D boxes as input to learn 3D object shapes by means of differentiable rendering, which further serves as supervision for learning dense object coordinates. Our approach rests on insights in learning a category-level shape prior directly from real driving scenes, while properly handling single-view ambiguities. Furthermore, we study and make critical design choices to learn object coordinates more effectively from an object-centric view. Altogether, our framework leads to new state-of-the-art in monocular 3D localization that ranks 1st on the KITTI-Object benchmark among published monocular methods.
[ "cs.CV" ]
false
2305.17768
2023-05-28T16:28:49Z
AIMS: All-Inclusive Multi-Level Segmentation
[ "Lu Qi", "Jason Kuen", "Weidong Guo", "Jiuxiang Gu", "Zhe Lin", "Bo Du", "Yu Xu", "Ming-Hsuan Yang" ]
Despite the progress of image segmentation for accurate visual entity segmentation, completing the diverse requirements of image editing applications for different-level region-of-interest selections remains unsolved. In this paper, we propose a new task, All-Inclusive Multi-Level Segmentation (AIMS), which segments visual regions into three levels: part, entity, and relation (two entities with some semantic relationships). We also build a unified AIMS model through multi-dataset multi-task training to address the two major challenges of annotation inconsistency and task correlation. Specifically, we propose task complementarity, association, and prompt mask encoder for three-level predictions. Extensive experiments demonstrate the effectiveness and generalization capacity of our method compared to other state-of-the-art methods on a single dataset or the concurrent work on segmenting anything. We will make our code and training model publicly available.
[ "cs.CV" ]
false
2305.17785
2023-05-28T18:06:46Z
Lighting and Rotation Invariant Real-time Vehicle Wheel Detector based on YOLOv5
[ "Michael Shenoda" ]
Creating an object detector in computer vision involves some common challenges when it is initially developed based on a Convolutional Neural Network (CNN) architecture. These challenges are more apparent when creating a model that needs to adapt to images captured under various camera orientations, lighting conditions, and environmental changes. Obtaining initial training samples that cover all these conditions can be an enormous challenge, with a considerable time and cost burden. While this problem can exist when creating any type of object detector, some object types are less common and have no publicly available pre-labeled image datasets. Sometimes public datasets are neither reliable nor comprehensive for a rare object type. The vehicle wheel is one such example, chosen here to demonstrate the approach of creating a lighting- and rotation-invariant real-time detector based on the YOLOv5 architecture. The objective is to provide a simple approach that could be used as a reference for developing other types of real-time object detectors.
[ "cs.CV" ]
false
2305.17786
2023-05-28T18:17:31Z
Real-time Object Detection: YOLOv1 Re-Implementation in PyTorch
[ "Michael Shenoda" ]
Real-time object detection is a crucial problem to solve when it comes to computer vision systems that need to make appropriate decisions based on detections in a timely manner. I chose the YOLO v1 architecture and implemented it using the PyTorch framework, with the goal of becoming familiar with the entire object detection pipeline. I attempted different techniques to modify the original architecture to improve the results. Finally, I compare the metrics of my implementation to those of the original.
[ "cs.CV" ]
false
2305.17791
2023-05-28T18:34:59Z
LowDINO -- A Low Parameter Self Supervised Learning Model
[ "Sai Krishna Prathapaneni", "Shvejan Shashank", "Srikar Reddy K" ]
This research aims to explore the possibility of designing a neural network architecture that allows small networks to adopt the properties of huge networks, which have shown success in self-supervised learning (SSL), for downstream tasks like image classification, segmentation, etc. Previous studies have shown that using convolutional neural networks (ConvNets) can provide inherent inductive bias, which is crucial for learning representations in deep learning models. To reduce the number of parameters, attention mechanisms are utilized through the use of MobileViT blocks, resulting in a model with fewer than 5 million parameters. The model is trained using self-distillation with a momentum encoder, and a student-teacher architecture is employed, where the teacher weights use vision transformers (ViTs) from recent SOTA SSL models. The model is trained on the ImageNet1k dataset. This research provides an approach for designing smaller, more efficient neural network architectures that can perform SSL tasks comparably to heavy models.
[ "cs.CV" ]
false
2305.17820
2023-05-28T22:47:54Z
Analysis of ROC for Edge Detectors
[ "Kai Yi Ji" ]
This paper presents an evaluation of edge detectors using receiver operating characteristic (ROC) analysis on the BIPED dataset. Our study examines the benefits and drawbacks of applying this technique in Matlab. We observed that while ROC analysis is suitable for certain edge filters, for filters such as the Laplacian, Laplacian of Gaussian, and Canny it presents challenges when accurately measuring their performance using ROC metrics. To address this issue, we introduce customization techniques to enhance the performance of these filters, enabling more accurate evaluation. Through our customization efforts, we achieved improved results, ultimately facilitating a comprehensive assessment of the edge detectors.
[ "cs.CV" ]
false
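Although the study above is carried out in Matlab, the underlying ROC computation for a thresholded edge map against a binary ground-truth edge mask is standard; a minimal Python sketch with assumed array inputs is given below. Sweeping the threshold over a filter's output traces out the ROC curve.

```python
# Illustrative sketch: one ROC operating point for a thresholded edge-strength
# map against a binary ground-truth edge mask (sweep the threshold for a curve).
import numpy as np

def roc_point(edge_strength: np.ndarray, gt_edges: np.ndarray, thresh: float):
    """Return (FPR, TPR) for a given threshold; gt_edges is a boolean mask."""
    pred = edge_strength >= thresh
    tp = np.logical_and(pred, gt_edges).sum()
    fp = np.logical_and(pred, ~gt_edges).sum()
    fn = np.logical_and(~pred, gt_edges).sum()
    tn = np.logical_and(~pred, ~gt_edges).sum()
    tpr = tp / max(tp + fn, 1)
    fpr = fp / max(fp + tn, 1)
    return fpr, tpr
```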
2308.05179
2023-05-28T15:51:35Z
JutePestDetect: An Intelligent Approach for Jute Pest Identification Using Fine-Tuned Transfer Learning
[ "Md. Simul Hasan Talukder", "Mohammad Raziuddin Chowdhury", "Md Sakib Ullah Sourav", "Abdullah Al Rakin", "Shabbir Ahmed Shuvo", "Rejwan Bin Sulaiman", "Musarrat Saberin Nipun", "Muntarin Islam", "Mst Rumpa Islam", "Md Aminul Islam", "Zubaer Haque" ]
In certain Asian countries, Jute is one of the primary sources of income and Gross Domestic Product (GDP) for the agricultural sector. Like many other crops, Jute is prone to pest infestations, and pest identification is typically done visually in countries like Bangladesh, India, Myanmar, and China. This method is time-consuming, challenging, and somewhat imprecise, which poses a substantial financial risk. To address this issue, the study proposes a high-performing and resilient transfer learning (TL) based JutePestDetect model to identify jute pests at an early stage. First, we prepared a jute pest dataset containing 17 classes and around 380 photos per pest class, which were evaluated after manual and automatic pre-processing and cleaning, such as background removal and resizing. Subsequently, five prominent pre-trained models (DenseNet201, InceptionV3, MobileNetV2, VGG19, and ResNet50) were selected from a previous study to design the JutePestDetect model. Each model was revised by replacing the classification layer with a global average pooling layer and incorporating a dropout layer for regularization. To evaluate the models' performance, various metrics such as precision, recall, F1 score, ROC curve, and confusion matrix were employed. These analyses provided additional insights for determining the efficacy of the models. Among them, the customized, regularized DenseNet201-based JutePestDetect model outperformed the others, achieving an impressive accuracy of 99%. As a result, our proposed method and strategy offer an enhanced approach to pest identification for Jute, which can significantly benefit farmers worldwide.
[ "cs.CV" ]
false
2305.17624
2023-05-28T04:05:24Z
SimpSON: Simplifying Photo Cleanup with Single-Click Distracting Object Segmentation Network
[ "Chuong Huynh", "Yuqian Zhou", "Zhe Lin", "Connelly Barnes", "Eli Shechtman", "Sohrab Amirghodsi", "Abhinav Shrivastava" ]
In photo editing, it is common practice to remove visual distractions to improve the overall image quality and highlight the primary subject. However, manually selecting and removing these small and dense distracting regions can be a laborious and time-consuming task. In this paper, we propose an interactive distractor selection method that is optimized to achieve the task with just a single click. Our method surpasses the precision and recall achieved by the traditional method of running panoptic segmentation and then selecting the segments containing the clicks. We also showcase how a transformer-based module can be used to identify more distracting regions similar to the user's click position. Our experiments demonstrate that the model can effectively and accurately segment unknown distracting objects interactively and in groups. By significantly simplifying the photo cleaning and retouching process, our proposed model provides inspiration for exploring rare object segmentation and group selection with a single click.
[ "cs.CV", "cs.AI" ]
false
2305.17652
2023-05-28T07:16:44Z
ConaCLIP: Exploring Distillation of Fully-Connected Knowledge Interaction Graph for Lightweight Text-Image Retrieval
[ "Jiapeng Wang", "Chengyu Wang", "Xiaodan Wang", "Jun Huang", "Lianwen Jin" ]
Large-scale pre-trained text-image models with dual-encoder architectures (such as CLIP) are typically adopted for various vision-language applications, including text-image retrieval. However, these models are still less practical on edge devices or in real-time situations, due to the substantial indexing and inference time and the large consumption of computational resources. Although knowledge distillation techniques have been widely utilized for uni-modal model compression, how to expand them to the situation where the numbers of modalities and teachers/students are doubled has rarely been studied. In this paper, we conduct comprehensive experiments on this topic and propose the fully-Connected knowledge interaction graph (Cona) technique for cross-modal pre-training distillation. Based on our findings, the resulting ConaCLIP achieves SOTA performance on the widely-used Flickr30K and MSCOCO benchmarks under the lightweight setting. An industry application of our method on an e-commerce platform further demonstrates the significant effectiveness of ConaCLIP.
[ "cs.CV", "cs.CL" ]
false
2305.17714
2023-05-28T12:57:20Z
An Open-Source Gloss-Based Baseline for Spoken to Signed Language Translation
[ "Amit Moryossef", "Mathias Müller", "Anne Göhring", "Zifan Jiang", "Yoav Goldberg", "Sarah Ebling" ]
Sign language translation systems are complex and require many components. As a result, it is very hard to compare methods across publications. We present an open-source implementation of a text-to-gloss-to-pose-to-video pipeline approach, demonstrating conversion from German to Swiss German Sign Language, French to French Sign Language of Switzerland, and Italian to Italian Sign Language of Switzerland. We propose three different components for the text-to-gloss translation: a lemmatizer, a rule-based word reordering and dropping component, and a neural machine translation system. Gloss-to-pose conversion occurs using data from a lexicon for three different signed languages, with skeletal poses extracted from videos. To generate a sentence, the text-to-gloss system is first run, and the pose representations of the resulting signs are stitched together.
[ "cs.CL", "cs.CV" ]
false
2305.17748
2023-05-28T15:04:26Z
Image Hash Minimization for Tamper Detection
[ "Subhajit Maity", "Ram Kumar Karsh" ]
Tamper detection using image hashes is a very common problem these days. Considerable research and many advancements have already been made to address this problem. However, most of the existing methods lack accuracy when the tampered area is small, and they require long image hashes. In this paper, we propose a novel method to objectively minimize the hash length while enhancing performance for small tampered areas.
[ "cs.CV", "eess.IV" ]
false
2305.17784
2023-05-28T17:59:26Z
ConvGenVisMo: Evaluation of Conversational Generative Vision Models
[ "Narjes Nikzad Khasmakhi", "Meysam Asgari-Chenaghlu", "Nabiha Asghar", "Philipp Schaer", "Dietlind Zühlke" ]
Conversational generative vision models (CGVMs) like Visual ChatGPT (Wu et al., 2023) have recently emerged from the synthesis of computer vision and natural language processing techniques. These models enable more natural and interactive communication between humans and machines, because they can understand verbal inputs from users and generate responses in natural language along with visual outputs. To make informed decisions about the usage and deployment of these models, it is important to analyze their performance through a suitable evaluation framework on realistic datasets. In this paper, we present ConvGenVisMo, a framework for the novel task of evaluating CGVMs. ConvGenVisMo introduces a new benchmark evaluation dataset for this task, and also provides a suite of existing and new automated evaluation metrics to evaluate the outputs. All ConvGenVisMo assets, including the dataset and the evaluation code, will be made available publicly on GitHub.
[ "cs.CV", "cs.AI" ]
false
2305.17828
2023-05-28T23:42:35Z
Counter-Hypothetical Particle Filters for Single Object Pose Tracking
[ "Elizabeth A. Olson", "Jana Pavlasek", "Jasmine A. Berry", "Odest Chadwicke Jenkins" ]
Particle filtering is a common technique for six degree of freedom (6D) pose estimation due to its ability to tractably represent belief over object pose. However, the particle filter is prone to particle deprivation due to the high-dimensional nature of 6D pose. When particle deprivation occurs, it can cause mode collapse of the underlying belief distribution during importance sampling. If the region surrounding the true state suffers from mode collapse, recovering its belief is challenging since the area is no longer represented in the probability mass formed by the particles. Previous methods mitigate this problem by randomizing and resetting particles in the belief distribution, but determining the frequency of reinvigoration has relied on hand-tuning abstract heuristics. In this paper, we estimate the necessary reinvigoration rate at each time step by introducing a Counter-Hypothetical likelihood function, which is used alongside the standard likelihood. Inspired by the notions of plausibility and implausibility from Evidential Reasoning, the addition of our Counter-Hypothetical likelihood function assigns a level of doubt to each particle. The competing cumulative values of confidence and doubt across the particle set are used to estimate the level of failure within the filter, in order to determine the portion of particles to be reinvigorated. We demonstrate the effectiveness of our method on the rigid body object 6D pose tracking task.
[ "cs.RO", "cs.CV" ]
false
2305.18373
2023-05-28T04:49:01Z
KAFA: Rethinking Image Ad Understanding with Knowledge-Augmented Feature Adaptation of Vision-Language Models
[ "Zhiwei Jia", "Pradyumna Narayana", "Arjun R. Akula", "Garima Pruthi", "Hao Su", "Sugato Basu", "Varun Jampani" ]
Image ad understanding is a crucial task with wide real-world applications. Although highly challenging with the involvement of diverse atypical scenes, real-world entities, and reasoning over scene-texts, how to interpret image ads is relatively under-explored, especially in the era of foundational vision-language models (VLMs) featuring impressive generalizability and adaptability. In this paper, we perform the first empirical study of image ad understanding through the lens of pre-trained VLMs. We benchmark and reveal practical challenges in adapting these VLMs to image ad understanding. We propose a simple feature adaptation strategy to effectively fuse multimodal information for image ads and further empower it with knowledge of real-world entities. We hope our study draws more attention to image ad understanding which is broadly relevant to the advertising industry.
[ "cs.CV", "cs.CL" ]
true
2305.18424
2023-05-28T20:38:13Z
Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning
[ "Patrik Okanovic", "Roger Waleffe", "Vasilis Mageirakos", "Konstantinos E. Nikolakakis", "Amin Karbasi", "Dionysis Kalogerias", "Nezihe Merve Gürel", "Theodoros Rekatsinas" ]
Methods for carefully selecting or generating a small set of training data to learn from, i.e., data pruning, coreset selection, and data distillation, have been shown to be effective in reducing the ever-increasing cost of training neural networks. Behind this success are rigorously designed strategies for identifying informative training examples out of large datasets. However, these strategies come with additional computational costs associated with subset selection or data distillation before training begins, and furthermore, many are shown to even under-perform random sampling in high data compression regimes. As such, many data pruning, coreset selection, or distillation methods may not reduce 'time-to-accuracy', which has become a critical efficiency measure of training deep neural networks over large datasets. In this work, we revisit a powerful yet overlooked random sampling strategy to address these challenges and introduce an approach called Repeated Sampling of Random Subsets (RSRS or RS2), where we randomly sample the subset of training data for each epoch of model training. We test RS2 against thirty state-of-the-art data pruning and data distillation methods across four datasets including ImageNet. Our results demonstrate that RS2 significantly reduces time-to-accuracy compared to existing techniques. For example, when training on ImageNet in the high-compression regime (using less than 10% of the dataset each epoch), RS2 yields accuracy improvements up to 29% compared to competing pruning methods while offering a runtime reduction of 7x. Beyond the above meta-study, we provide a convergence analysis for RS2 and discuss its generalization capability. The primary goal of our work is to establish RS2 as a competitive baseline for future data selection or distillation techniques aimed at efficient training.
[ "cs.LG", "cs.CV" ]
false
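The per-epoch random subset idea described above is easy to sketch. The following is illustrative PyTorch pseudocode under assumed hyperparameters (a 10% keep fraction, sampling without replacement each epoch), not the authors' released implementation.

```python
# Illustrative PyTorch sketch of repeated random subset sampling (RS2-style):
# a fresh random subset is drawn for every epoch. The 10% keep fraction and
# other hyperparameters are assumptions for exposition.
import torch
from torch.utils.data import DataLoader, Subset

def train_rs2(model, dataset, optimizer, loss_fn,
              epochs: int = 10, keep_frac: float = 0.1, batch_size: int = 128):
    n_keep = max(1, int(len(dataset) * keep_frac))
    for _ in range(epochs):
        # Sample a new subset without replacement at the start of each epoch.
        subset = Subset(dataset, torch.randperm(len(dataset))[:n_keep].tolist())
        for x, y in DataLoader(subset, batch_size=batch_size, shuffle=True):
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
```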
2305.18433
2023-05-28T23:54:52Z
Cognitively Inspired Cross-Modal Data Generation Using Diffusion Models
[ "Zizhao Hu", "Mohammad Rostami" ]
Most existing cross-modal generative methods based on diffusion models use guidance to provide control over the latent space to enable conditional generation across different modalities. Such methods focus on providing guidance through separately-trained models, each for one modality. As a result, these methods suffer from cross-modal information loss and are limited to unidirectional conditional generation. Inspired by how humans synchronously acquire multi-modal information and learn the correlation between modalities, we explore a multi-modal diffusion model training and sampling scheme that uses channel-wise image conditioning to learn cross-modality correlation during the training phase to better mimic the learning process in the brain. Our empirical results demonstrate that our approach can achieve data generation conditioned on all correlated modalities.
[ "cs.LG", "cs.CV" ]
false
2305.19146
2023-05-28T16:52:25Z
ASU-CNN: An Efficient Deep Architecture for Image Classification and Feature Visualizations
[ "Jamshaid Ul Rahman", "Faiza Makhdoom", "Dianchen Lu" ]
Activation functions play a decisive role in determining the capacity of Deep Neural Networks, as they enable neural networks to capture the inherent nonlinearities present in the data fed to them. Prior research on activation functions primarily focused on the utility of monotonic or non-oscillatory functions, until the Growing Cosine Unit broke the taboo for a number of applications. In this paper, a Convolutional Neural Network model named ASU-CNN is proposed, which utilizes the recently designed activation function ASU across its layers. The effect of this non-monotonic and oscillatory function is inspected through feature map visualizations from different convolutional layers. The proposed network is optimized by Adam with a fine-tuned adjustment of the learning rate. The network achieved promising results on both training and testing data for the classification of CIFAR-10. The experimental results affirm the computational feasibility and efficacy of the proposed model for performing tasks related to the field of computer vision.
[ "cs.CV", "cs.LG" ]
false
2306.00835
2023-05-28T10:46:18Z
Reconstructing Sea Surface Temperature Images: A Masked Autoencoder Approach for Cloud Masking and Reconstruction
[ "Angelina Agabin", "J. Xavier Prochaska" ]
This thesis presents a new algorithm to mitigate cloud masking in the analysis of sea surface temperature (SST) data generated by remote sensing technologies. Clouds interfere with the analysis of all remote sensing data using wavelengths shorter than 12 microns, significantly limiting the quantity of usable data and creating a biased geographical distribution (towards equatorial and coastal regions). To address this issue, we propose an unsupervised machine learning algorithm called Enki which uses a Vision Transformer with Masked Autoencoding to reconstruct masked pixels. We train four different models of Enki with varying mask ratios (t) of 10%, 35%, 50%, and 75% on the generated Ocean General Circulation Model (OGCM) dataset referred to as LLC4320. To evaluate performance, we reconstruct a validation set of LLC4320 SST images with random "clouds" corrupting p=10%, 20%, 30%, 40%, 50% of the images with individual patches of 4x4 pixels. We consistently find that at all levels of p there is one or multiple models that reconstruct the images with a mean RMSE of less than 0.03K, i.e. lower than the estimated sensor error of VIIRS data. Similarly, at the individual patch level, the reconstructions have an RMSE 8x smaller than the fluctuations in the patch. And, as anticipated, reconstruction errors are larger for images with a higher degree of complexity. Our analysis also reveals that patches along the image border have systematically higher reconstruction error; we recommend ignoring these in production. We conclude that Enki shows great promise to surpass in-painting as a means of reconstructing cloud-masked regions. Future research will develop Enki to reconstruct real-world data.
[ "cs.CV", "physics.ao-ph" ]
false
2305.18387
2023-05-28T10:52:03Z
Augmenting Character Designers Creativity Using Generative Adversarial Networks
[ "Mohammad Lataifeh", "Xavier Carrasco", "Ashraf Elnagar", "Naveed Ahmed" ]
Recent advances in Generative Adversarial Networks (GANs) continue to attract the attention of researchers in different fields due to the wide range of applications devised to take advantage of their key features. Most recent GANs are focused on realism; however, generating hyper-realistic output is not a priority for some domains, as in the case of this work. The generated outcomes are used here as cognitive components to augment character designers' creativity while conceptualizing new characters for different multimedia projects. To select the best-suited GANs for such a creative context, we first present a comparison between different GAN architectures and their performance when trained from scratch on a new visual characters dataset using a single Graphics Processing Unit. We also explore alternative techniques, such as transfer learning and data augmentation, to overcome computational resource limitations, a challenge faced by many researchers in the domain. Additionally, mixed methods are used to evaluate the cognitive value of the generated visuals on character designers' agency in conceptualizing new characters. The results discussed proved highly effective for this context, as demonstrated by early adaptations in the character design process. As an extension of this work, the presented approach will be further evaluated as a novel co-design process between humans and machines to investigate where and how the generated concepts are interacting with and influencing the design process outcome.
[ "cs.HC", "cs.CV", "cs.LG" ]
false
2305.18398
2023-05-28T13:35:50Z
Mitigating Inappropriateness in Image Generation: Can there be Value in Reflecting the World's Ugliness?
[ "Manuel Brack", "Felix Friedrich", "Patrick Schramowski", "Kristian Kersting" ]
Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the web, they also reproduce inappropriate human behavior. Specifically, we demonstrate inappropriate degeneration at a large scale for various generative text-to-image models, thus motivating the need to monitor and moderate them at deployment. To this end, we evaluate mitigation strategies at inference to suppress the generation of inappropriate content. Our findings show that we can use models' representations of the world's ugliness to align them with human preferences.
[ "cs.CV", "cs.AI", "cs.LG" ]
false
2305.19129
2023-05-28T20:26:06Z
Key-Value Transformer
[ "Ali Borji" ]
Transformers have emerged as the prevailing standard solution for various AI tasks, including computer vision and natural language processing. The widely adopted Query, Key, and Value formulation (QKV) has played a significant role in this. Nevertheless, no research has examined the essentiality of these three components for transformer performance. Therefore, we conducted an evaluation of the key-value formulation (KV), which generates symmetric attention maps, along with an asymmetric version that incorporates a 2D positional encoding into the attention matrix. Remarkably, this transformer requires fewer parameters and less computation than the original one. Through experiments encompassing three task types -- synthetic tasks (such as reversing or sorting a list), vision (MNIST or CIFAR classification), and NLP (character generation and translation) -- we discovered that the KV transformer occasionally outperforms the QKV transformer. However, it also exhibits instances of underperformance compared to QKV, making it challenging to draw a definitive conclusion. Nonetheless, we consider the reported results to be encouraging and anticipate that they may pave the way for more efficient transformers in the future.
[ "cs.CV", "cs.AI", "cs.LG" ]
false
2305.17607
2023-05-28T02:09:08Z
More than Classification: A Unified Framework for Event Temporal Relation Extraction
[ "Quzhe Huang", "Yutong Hu", "Shengqi Zhu", "Yansong Feng", "Chang Liu", "Dongyan Zhao" ]
Event temporal relation extraction~(ETRE) is usually formulated as a multi-label classification task, where each type of relation is simply treated as a one-hot label. This formulation ignores the meaning of relations and wipes out their intrinsic dependency. After examining the relation definitions in various ETRE tasks, we observe that all relations can be interpreted using the start and end time points of events. For example, relation \textit{Includes} could be interpreted as event 1 starting no later than event 2 and ending no earlier than event 2. In this paper, we propose a unified event temporal relation extraction framework, which transforms temporal relations into logical expressions of time points and completes the ETRE by predicting the relations between certain time point pairs. Experiments on TB-Dense and MATRES show significant improvements over a strong baseline and outperform the state-of-the-art model by 0.3\% on both datasets. By representing all relations in a unified framework, we can leverage the relations with sufficient data to assist the learning of other relations, thus achieving stable improvement in low-data scenarios. When the relation definitions are changed, our method can quickly adapt to the new ones by simply modifying the logic expressions that map time points to new event relations. The code is released at \url{https://github.com/AndrewZhe/A-Unified-Framework-for-ETRE}.
[ "cs.CL" ]
false
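The time-point interpretation described in the abstract above can be made concrete with a small sketch. Only the "Includes" definition is taken directly from the abstract; the remaining relation definitions below are illustrative assumptions and may differ from the paper's actual logical expressions.

```python
# Illustrative sketch of reading off a temporal relation from event start/end
# time points. Only the "Includes" case follows the abstract's definition; the
# other cases are assumptions for exposition.
def interpret_relation(start1: float, end1: float, start2: float, end2: float) -> str:
    if end1 < start2:
        return "Before"
    if end2 < start1:
        return "After"
    if start1 <= start2 and end1 >= end2:
        return "Includes"   # event 1 starts no later and ends no earlier than event 2
    if start2 <= start1 and end2 >= end1:
        return "Is_Included"
    return "Overlap"
```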
2305.17653
2023-05-28T07:27:12Z
Prompt-Guided Retrieval Augmentation for Non-Knowledge-Intensive Tasks
[ "Zhicheng Guo", "Sijie Cheng", "Yile Wang", "Peng Li", "Yang Liu" ]
Retrieval-augmented methods have received increasing attention to support downstream tasks by leveraging useful information from external resources. Recent studies mainly focus on exploring retrieval to solve knowledge-intensive (KI) tasks. However, the potential of retrieval for most non-knowledge-intensive (NKI) tasks remains under-explored. There are two main challenges to leveraging retrieval-augmented methods for NKI tasks: 1) the demand for diverse relevance score functions and 2) the dilemma between training cost and task performance. To address these challenges, we propose a two-stage framework for NKI tasks, named PGRA. In the first stage, we adopt a task-agnostic retriever to build a shared static index and select candidate evidence efficiently. In the second stage, we design a prompt-guided reranker to rerank the nearest evidence according to task-specific relevance for the reader. Experimental results show that PGRA outperforms other state-of-the-art retrieval-augmented methods. Our analyses further investigate the influence factors to model performance and demonstrate the generality of PGRA. Codes are available at https://github.com/THUNLP-MT/PGRA.
[ "cs.CL" ]
false
2305.17660
2023-05-28T08:01:40Z
Plug-and-Play Document Modules for Pre-trained Models
[ "Chaojun Xiao", "Zhengyan Zhang", "Xu Han", "Chi-Min Chan", "Yankai Lin", "Zhiyuan Liu", "Xiangyang Li", "Zhonghua Li", "Zhao Cao", "Maosong Sun" ]
Large-scale pre-trained models (PTMs) have been widely used in document-oriented NLP tasks, such as question answering. However, the encoding-task coupling requirement results in the repeated encoding of the same documents for different tasks and queries, which is highly computationally inefficient. To this end, we target to decouple document encoding from downstream tasks, and propose to represent each document as a plug-and-play document module, i.e., a document plugin, for PTMs (PlugD). By inserting document plugins into the backbone PTM for downstream tasks, we can encode a document one time to handle multiple tasks, which is more efficient than conventional encoding-task coupling methods that simultaneously encode documents and input queries using task-specific encoders. Extensive experiments on 8 datasets of 4 typical NLP tasks show that PlugD enables models to encode documents once and for all across different scenarios. Especially, PlugD can save $69\%$ computational costs while achieving comparable performance to state-of-the-art encoding-task coupling methods. Additionally, we show that PlugD can serve as an effective post-processing way to inject knowledge into task-specific models, improving model performance without any additional model training.
[ "cs.CL" ]
false
2305.17663
2023-05-28T08:17:07Z
Lexical Retrieval Hypothesis in Multimodal Context
[ "Po-Ya Angela Wang", "Pin-Er Chen", "Hsin-Yu Chou", "Yu-Hsiang Tseng", "Shu-Kai Hsieh" ]
Multimodal corpora have become an essential language resource for language science and grounded natural language processing (NLP) systems due to the growing need to understand and interpret human communication across various channels. In this paper, we first present our efforts in building the first Multimodal Corpus for Languages in Taiwan (MultiMoco). Based on the corpus, we conduct a case study investigating the Lexical Retrieval Hypothesis (LRH), specifically examining whether the hand gestures co-occurring with speech constants facilitate lexical retrieval or serve other discourse functions. With detailed annotations on eight parliamentary interpellations in Taiwan Mandarin, we explore the co-occurrence between speech constants and non-verbal features (i.e., head movement, face movement, hand gesture, and function of hand gesture). Our findings suggest that while hand gestures do serve as facilitators for lexical retrieval in some cases, they also serve the purpose of information emphasis. This study highlights the potential of the MultiMoco Corpus to provide an important resource for in-depth analysis and further research in multimodal communication studies.
[ "cs.CL" ]
false
2305.17670
2023-05-28T09:22:44Z
Stochastic Bridges as Effective Regularizers for Parameter-Efficient Tuning
[ "Weize Chen", "Xu Han", "Yankai Lin", "Zhiyuan Liu", "Maosong Sun", "Jie Zhou" ]
Parameter-efficient tuning methods (PETs) have achieved promising results in tuning large pre-trained language models (PLMs). By formalizing frozen PLMs and additional tunable parameters as systems and controls respectively, PETs can be theoretically grounded to optimal control and further viewed as optimizing the terminal cost and running cost in the optimal control literature. Despite the elegance of this theoretical grounding, in practice, existing PETs often ignore the running cost and only optimize the terminal cost, i.e., focus on optimizing the loss function of the output state, regardless of the running cost that depends on the intermediate states. Since it is non-trivial to directly model the intermediate states and design a running cost function, we propose to use latent stochastic bridges to regularize the intermediate states and use the regularization as the running cost of PETs. As the first work to propose regularized PETs that use stochastic bridges as the regularizers (running costs) for the intermediate states, we show the effectiveness and generality of this regularization across different tasks, PLMs and PETs. In view of the great potential and capacity, we believe more sophisticated regularizers can be designed for PETs and better performance can be achieved in the future. The code is released at \url{https://github.com/thunlp/stochastic-bridge-pet/tree/main}.
[ "cs.CL" ]
false
2305.17679
2023-05-28T10:04:15Z
RuSentNE-2023: Evaluating Entity-Oriented Sentiment Analysis on Russian News Texts
[ "Anton Golubev", "Nicolay Rusnachenko", "Natalia Loukachevitch" ]
The paper describes the RuSentNE-2023 evaluation devoted to targeted sentiment analysis in Russian news texts. The task is to predict sentiment towards a named entity in a single sentence. The dataset for the RuSentNE-2023 evaluation is based on the Russian news corpus RuSentNE, which has rich sentiment-related annotation. The corpus is annotated with named entities and sentiments towards these entities, along with related effects and emotional states. The evaluation was organized using the CodaLab competition framework. The main evaluation measure was the macro-averaged F-measure over the positive and negative classes. The best result achieved was a 66% macro F-measure (positive + negative classes). We also tested ChatGPT on the test set from our evaluation and found that its zero-shot answers reached an F-measure of 60%, which corresponds to 4th place in the evaluation; ChatGPT also provided detailed explanations of its conclusions. This can be considered quite high for a zero-shot application.
[ "cs.CL", "I.2.7" ]
false
2305.17690
2023-05-28T10:55:31Z
HaVQA: A Dataset for Visual Question Answering and Multimodal Research in Hausa Language
[ "Shantipriya Parida", "Idris Abdulmumin", "Shamsuddeen Hassan Muhammad", "Aneesh Bose", "Guneet Singh Kohli", "Ibrahim Said Ahmad", "Ketan Kotwal", "Sayan Deb Sarkar", "Ondřej Bojar", "Habeebah Adamu Kakudi" ]
This paper presents HaVQA, the first multimodal dataset for visual question-answering (VQA) tasks in the Hausa language. The dataset was created by manually translating 6,022 English question-answer pairs, which are associated with 1,555 unique images from the Visual Genome dataset. As a result, the dataset provides 12,044 gold standard English-Hausa parallel sentences that were translated in a fashion that guarantees their semantic match with the corresponding visual information. We conducted several baseline experiments on the dataset, including visual question answering, visual question elicitation, text-only and multimodal machine translation.
[ "cs.CL" ]
false
2305.17696
2023-05-28T11:51:20Z
SQuARe: A Large-Scale Dataset of Sensitive Questions and Acceptable Responses Created Through Human-Machine Collaboration
[ "Hwaran Lee", "Seokhee Hong", "Joonsuk Park", "Takyoung Kim", "Meeyoung Cha", "Yejin Choi", "Byoung Pil Kim", "Gunhee Kim", "Eun-Ju Lee", "Yong Lim", "Alice Oh", "Sangchul Park", "Jung-Woo Ha" ]
The potential social harms that large language models pose, such as generating offensive content and reinforcing biases, are steeply rising. Existing works focus on coping with this concern while interacting with ill-intentioned users, such as those who explicitly make hate speech or elicit harmful responses. However, discussions on sensitive issues can become toxic even if the users are well-intentioned. For safer models in such scenarios, we present the Sensitive Questions and Acceptable Response (SQuARe) dataset, a large-scale Korean dataset of 49k sensitive questions with 42k acceptable and 46k non-acceptable responses. The dataset was constructed leveraging HyperCLOVA in a human-in-the-loop manner based on real news headlines. Experiments show that acceptable response generation significantly improves for HyperCLOVA and GPT-3, demonstrating the efficacy of this dataset.
[ "cs.CL" ]
false
2305.17698
2023-05-28T11:58:07Z
Neural Machine Translation with Dynamic Graph Convolutional Decoder
[ "Lei Li", "Kai Fan", "Lingyu Yang", "Hongjia Li", "Chun Yuan" ]
Existing wisdom demonstrates the significance of syntactic knowledge for the improvement of neural machine translation models. However, most previous works merely focus on leveraging the source syntax in the well-known encoder-decoder framework. In sharp contrast, this paper proposes an end-to-end translation architecture from the (graph \& sequence) structural inputs to the (graph \& sequence) outputs, where the target translation and its corresponding syntactic graph are jointly modeled and generated. We propose a customized Dynamic Spatial-Temporal Graph Convolutional Decoder (Dyn-STGCD), which is designed for consuming source feature representations and their syntactic graph, and auto-regressively generating the target syntactic graph and tokens simultaneously. We conduct extensive experiments on five widely acknowledged translation benchmarks, verifying that our proposal achieves consistent improvements over baselines and other syntax-aware variants.
[ "cs.CL" ]
false
2305.17699
2023-05-28T12:01:34Z
Decoupling Pseudo Label Disambiguation and Representation Learning for Generalized Intent Discovery
[ "Yutao Mou", "Xiaoshuai Song", "Keqing He", "Chen Zeng", "Pei Wang", "Jingang Wang", "Yunsen Xian", "Weiran Xu" ]
Generalized intent discovery aims to extend a closed-set in-domain intent classifier to an open-world intent set including in-domain and out-of-domain intents. The key challenges lie in pseudo label disambiguation and representation learning. Previous methods suffer from a coupling of pseudo label disambiguation and representation learning; that is, the reliability of pseudo labels relies on representation learning, and representation learning is in turn restricted by the pseudo labels. In this paper, we propose a decoupled prototype learning framework (DPL) to decouple pseudo label disambiguation and representation learning. Specifically, we first introduce prototypical contrastive representation learning (PCL) to obtain discriminative representations. We then adopt a prototype-based label disambiguation method (PLD) to obtain pseudo labels. We theoretically prove that PCL and PLD work in a collaborative fashion and facilitate pseudo label disambiguation. Experiments and analysis on three benchmark datasets show the effectiveness of our method.
[ "cs.CL" ]
false
2305.17709
2023-05-28T12:30:23Z
Parallel Data Helps Neural Entity Coreference Resolution
[ "Gongbo Tang", "Christian Hardmeier" ]
Coreference resolution is the task of finding expressions that refer to the same entity in a text. Coreference models are generally trained on monolingual annotated data but annotating coreference is expensive and challenging. Hardmeier et al.(2013) have shown that parallel data contains latent anaphoric knowledge, but it has not been explored in end-to-end neural models yet. In this paper, we propose a simple yet effective model to exploit coreference knowledge from parallel data. In addition to the conventional modules learning coreference from annotations, we introduce an unsupervised module to capture cross-lingual coreference knowledge. Our proposed cross-lingual model achieves consistent improvements, up to 1.74 percentage points, on the OntoNotes 5.0 English dataset using 9 different synthetic parallel datasets. These experimental results confirm that parallel data can provide additional coreference knowledge which is beneficial to coreference resolution tasks.
[ "cs.CL" ]
false
2305.17721
2023-05-28T13:19:12Z
Rethinking Masked Language Modeling for Chinese Spelling Correction
[ "Hongqiu Wu", "Shaohua Zhang", "Yuchen Zhang", "Hai Zhao" ]
In this paper, we study Chinese Spelling Correction (CSC) as a joint decision made by two separate models: a language model and an error model. Through empirical analysis, we find that fine-tuning BERT tends to over-fit the error model while under-fitting the language model, resulting in poor generalization to out-of-distribution error patterns. Given that BERT is the backbone of most CSC models, this phenomenon has a significant negative impact. To address this issue, we are releasing a multi-domain benchmark LEMON, with higher quality and diversity than existing benchmarks, to allow a comprehensive assessment of the open-domain generalization of CSC models. Then, we demonstrate that a very simple strategy, randomly masking 20\% of the non-error tokens from the input sequence during fine-tuning, is sufficient for learning a much better language model without sacrificing the error model. This technique can be applied to any model architecture and achieves new state-of-the-art results on SIGHAN, ECSpell, and LEMON.
[ "cs.CL" ]
false
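The masking strategy described above (randomly masking 20% of the non-error tokens during fine-tuning) can be sketched as follows. The function is illustrative only: the token IDs, the [MASK] id, and the error positions are assumed to come from an existing BERT-style CSC pipeline, and this is not the released code.

```python
# Illustrative sketch of randomly masking 20% of the non-error tokens in an
# input sequence during fine-tuning. Inputs are assumed to come from a
# BERT-style tokenizer; this is not the authors' implementation.
import random

def mask_non_error_tokens(input_ids, error_positions, mask_id, mask_prob=0.2):
    masked = list(input_ids)
    non_error = [i for i in range(len(masked)) if i not in set(error_positions)]
    for i in random.sample(non_error, k=int(len(non_error) * mask_prob)):
        masked[i] = mask_id
    return masked
```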
2305.17729
2023-05-28T13:59:58Z
Tri-level Joint Natural Language Understanding for Multi-turn Conversational Datasets
[ "Henry Weld", "Sijia Hu", "Siqu Long", "Josiah Poon", "Soyeon Caren Han" ]
Natural language understanding typically maps single utterances to a dual-level semantic frame: sentence-level intent and word-level slot labels. The best performing models force explicit interaction between intent detection and slot filling. We present a novel tri-level joint natural language understanding approach that adds a domain level and explicitly exchanges semantic information between all levels. This approach enables the use of multi-turn datasets, which are a more natural conversational environment than single utterances. We evaluate our model on two multi-turn datasets, for which we are the first to conduct joint slot filling and intent detection. Our model outperforms state-of-the-art joint models in slot filling and intent detection on multi-turn datasets. We provide an analysis of explicit interaction locations between the layers. We conclude that including domain information improves model performance.
[ "cs.CL" ]
false
2305.17750
2023-05-28T15:14:54Z
Reliable and Interpretable Drift Detection in Streams of Short Texts
[ "Ella Rabinovich", "Matan Vetzler", "Samuel Ackerman", "Ateret Anaby-Tavor" ]
Data drift is a change in model input data and is one of the key factors leading to machine learning model performance degradation over time. Monitoring drift helps detect these issues and prevent their harmful consequences. Meaningful drift interpretation is a fundamental step towards effective re-training of the model. In this study we propose an end-to-end framework for reliable, model-agnostic change-point detection and interpretation in large task-oriented dialog systems, proven effective in multiple customer deployments. We evaluate our approach and demonstrate its benefits with a novel variant of an intent classification training dataset, simulating customer requests to a dialog system. We make the data publicly available.
[ "cs.CL" ]
false
2305.17779
2023-05-28T17:22:04Z
Generating EDU Extracts for Plan-Guided Summary Re-Ranking
[ "Griffin Adams", "Alexander R. Fabbri", "Faisal Ladhak", "Kathleen McKeown", "Noémie Elhadad" ]
Two-step approaches, in which summary candidates are generated-then-reranked to return a single summary, can improve ROUGE scores over the standard single-step approach. Yet, standard decoding methods (i.e., beam search, nucleus sampling, and diverse beam search) produce candidates with redundant, and often low quality, content. In this paper, we design a novel method to generate candidates for re-ranking that addresses these issues. We ground each candidate abstract on its own unique content plan and generate distinct plan-guided abstracts using a model's top beam. More concretely, a standard language model (a BART LM) auto-regressively generates elemental discourse unit (EDU) content plans with an extractive copy mechanism. The top K beams from the content plan generator are then used to guide a separate LM, which produces a single abstractive candidate for each distinct plan. We apply an existing re-ranker (BRIO) to abstractive candidates generated from our method, as well as baseline decoding methods. We show large relevance improvements over previously published methods on widely used single document news article corpora, with ROUGE-2 F1 gains of 0.88, 2.01, and 0.38 on CNN / Dailymail, NYT, and Xsum, respectively. A human evaluation on CNN / DM validates these results. Similarly, on 1k samples from CNN / DM, we show that prompting GPT-3 to follow EDU plans outperforms sampling-based methods by 1.05 ROUGE-2 F1 points. Code to generate and realize plans is available at https://github.com/griff4692/edu-sum.
[ "cs.CL" ]
false
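The record above (2305.17779) contrasts plan-guided candidate generation with standard decoding methods such as diverse beam search. Below is a minimal sketch of that diverse-beam-search baseline using the Hugging Face transformers generate API; the model choice, article text, and hyperparameters are assumptions, and the paper's EDU content-plan generator and BRIO re-ranker are not implemented.

```python
# Hypothetical baseline: generate several summary candidates with diverse beam
# search, as one of the decoding methods the paper compares against.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "facebook/bart-large-cnn"  # assumed summarization checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

article = "The city council approved a new transit plan on Tuesday after months of debate..."
inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")

candidates = model.generate(
    **inputs,
    num_beams=8,
    num_beam_groups=4,        # diverse beam search: beams split into groups
    diversity_penalty=1.0,    # penalize groups for repeating each other
    num_return_sequences=8,   # one candidate summary per returned beam
    max_length=80,
)
for summary in tokenizer.batch_decode(candidates, skip_special_tokens=True):
    print(summary)
```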
2305.17804
2023-05-28T19:36:50Z
Targeted Data Generation: Finding and Fixing Model Weaknesses
[ "Zexue He", "Marco Tulio Ribeiro", "Fereshte Khani" ]
Even when aggregate accuracy is high, state-of-the-art NLP models often fail systematically on specific subgroups of data, resulting in unfair outcomes and eroding user trust. Additional data collection may not help in addressing these weaknesses, as such challenging subgroups may be unknown to users, and underrepresented in the existing and new data. We propose Targeted Data Generation (TDG), a framework that automatically identifies challenging subgroups, and generates new data for those subgroups using large language models (LLMs) with a human in the loop. TDG estimates the expected benefit and potential harm of data augmentation for each subgroup, and selects the ones most likely to improve within group performance without hurting overall performance. In our experiments, TDG significantly improves the accuracy on challenging subgroups for state-of-the-art sentiment analysis and natural language inference models, while also improving overall test accuracy.
[ "cs.CL" ]
false
2305.17812
2023-05-28T20:49:52Z
Tab-CoT: Zero-shot Tabular Chain of Thought
[ "Ziqi Jin", "Wei Lu" ]
The chain-of-thought (CoT) prompting methods have been successful in various natural language processing (NLP) tasks thanks to their ability to unveil the underlying complex reasoning processes. Such reasoning processes typically exhibit implicitly structured steps. Recent efforts have also started investigating methods to encourage more explicitly structured reasoning procedures to be captured. In this work, we propose Tab-CoT, a novel tabular-format CoT prompting method, which allows the complex reasoning process to be explicitly modelled in a highly structured manner. Despite its simplicity, we show that our approach is capable of performing reasoning across multiple dimensions (i.e., both rows and columns). We demonstrate our approach's strong zero-shot and few-shot capabilities through extensive experiments on a range of reasoning tasks.
[ "cs.CL" ]
false
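The record above (2305.17812) proposes prompting an LLM to lay out its reasoning as a table. The snippet below is a hypothetical sketch of such a zero-shot tabular prompt; the column headers and the example question are assumptions rather than the paper's exact template.

```python
# Hypothetical zero-shot tabular chain-of-thought prompt builder.
def tab_cot_prompt(question: str) -> str:
    """Build a prompt that invites the model to continue a reasoning table."""
    header = "|step|subquestion|process|result|"
    return f"{question}\n{header}\n"

if __name__ == "__main__":
    q = "A farmer has 12 cows, buys 3 more, then sells 5. How many cows remain?"
    print(tab_cot_prompt(q))
    # The LLM is expected to continue the table row by row, e.g.
    # |1|How many after buying?|12 + 3|15|
    # |2|How many after selling?|15 - 5|10|
```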
2305.17740
2023-05-28T14:48:38Z
Breaking Language Barriers with a LEAP: Learning Strategies for Polyglot LLMs
[ "Akshay Nambi", "Vaibhav Balloli", "Mercy Ranjit", "Tanuja Ganu", "Kabir Ahuja", "Sunayana Sitaram", "Kalika Bali" ]
Large language models (LLMs) are at the forefront of transforming numerous domains globally. However, their inclusivity and effectiveness remain limited for non-Latin scripts and low-resource languages. This paper tackles the imperative challenge of enhancing the multilingual performance of LLMs, specifically focusing on Generative models. Through systematic investigation and evaluation of diverse languages using popular question-answering (QA) datasets, we present novel techniques that unlock the true potential of LLMs in a polyglot landscape. Our approach encompasses three key strategies that yield remarkable improvements in multilingual proficiency. First, by meticulously optimizing prompts tailored for polyglot LLMs, we unlock their latent capabilities, resulting in substantial performance boosts across languages. Second, we introduce a new hybrid approach that synergizes GPT generation with multilingual embeddings and achieves significant multilingual performance improvement on critical tasks like QA and retrieval. Finally, to further propel the performance of polyglot LLMs, we introduce a novel learning algorithm that dynamically selects the optimal prompt strategy, LLM model, and embeddings per query. This dynamic adaptation maximizes the efficacy of LLMs across languages, outperforming best static and random strategies. Our results show substantial advancements in multilingual understanding and generation across a diverse range of languages.
[ "cs.CL", "cs.AI" ]
false
2305.17817
2023-05-28T22:36:35Z
Transfer Learning for Power Outage Detection Task with Limited Training Data
[ "Olukunle Owolabi" ]
Early detection of power outages is crucial for maintaining a reliable power distribution system. This research investigates the use of transfer learning and language models in detecting outages with limited labeled data. By leveraging pretraining and transfer learning, models can generalize to unseen classes. Using a curated balanced dataset of social media tweets related to power outages, we conducted experiments using zero-shot and few-shot learning. Our hypothesis is that pretrained Language Models could achieve high performance on outage detection tasks with limited labeled data, compared to baseline models. Results show that while classical models outperform zero-shot Language Models, few-shot fine-tuning significantly improves their performance. For example, with 10% fine-tuning, BERT achieves 81.3% accuracy (+15.3%), and GPT achieves 74.5% accuracy (+8.5%). This has practical implications for analyzing and localizing outages in scenarios with limited data availability. Our evaluation provides insights into the potential of few-shot fine-tuning with Language Models for power outage detection, highlighting their strengths and limitations. This research contributes to the knowledge base of leveraging advanced natural language processing techniques for managing critical infrastructure.
[ "cs.CL", "stat.AP" ]
false
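The record above (2305.17817) compares zero-shot and few-shot language models for outage detection in tweets. The sketch below illustrates a generic zero-shot baseline of that kind with an off-the-shelf NLI-based classifier from transformers; the model, candidate labels, and example tweet are assumptions and not the paper's exact setup.

```python
# Hypothetical zero-shot outage detection on a single tweet.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
tweet = "Half the neighborhood lost electricity after the storm tonight."
result = classifier(tweet, candidate_labels=["power outage", "not a power outage"])
print(result["labels"][0], result["scores"][0])  # top label and its score
```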
2305.17826
2023-05-28T23:35:17Z
NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models
[ "Kai Mei", "Zheng Li", "Zhenting Wang", "Yang Zhang", "Shiqing Ma" ]
Prompt-based learning is vulnerable to backdoor attacks. Existing backdoor attacks against prompt-based models consider injecting backdoors into the entire embedding layers or word embedding vectors. Such attacks can be easily affected by retraining on downstream tasks and with different prompting strategies, limiting the transferability of backdoor attacks. In this work, we propose transferable backdoor attacks against prompt-based models, called NOTABLE, which is independent of downstream tasks and prompting strategies. Specifically, NOTABLE injects backdoors into the encoders of PLMs by utilizing an adaptive verbalizer to bind triggers to specific words (i.e., anchors). It activates the backdoor by pasting input with triggers to reach adversary-desired anchors, achieving independence from downstream tasks and prompting strategies. We conduct experiments on six NLP tasks, three popular models, and three prompting strategies. Empirical results show that NOTABLE achieves superior attack performance (i.e., attack success rate over 90% on all the datasets), and outperforms two state-of-the-art baselines. Evaluations on three defenses show the robustness of NOTABLE. Our code can be found at https://github.com/RU-System-Software-and-Security/Notable.
[ "cs.CL", "cs.CR" ]
false
2306.01768
2023-05-28T20:25:20Z
A Quantitative Review on Language Model Efficiency Research
[ "Meng Jiang", "Hy Dang", "Lingbo Tong" ]
Language models (LMs) are being scaled and becoming powerful. Improving their efficiency is one of the core research topics in neural information processing systems. Tay et al. (2022) provided a comprehensive overview of efficient Transformers that have become an indispensable staple in the field of NLP. However, in the section of "On Evaluation", they left an open question "which fundamental efficient Transformer one should consider," answered by "still a mystery" because "many research papers select their own benchmarks." Unfortunately, there was no quantitative analysis of the performance of Transformers on any benchmark. Moreover, state space models (SSMs) have demonstrated their ability to model long-range sequences with non-attention mechanisms, which was not discussed in the prior review. This article presents a meta-analysis of the results from a set of papers on efficient Transformers as well as those on SSMs. It provides a quantitative review of LM efficiency research and gives suggestions for future research.
[ "cs.LG", "cs.CL" ]
false
2305.17619
2023-05-28T03:29:59Z
AI Coach Assist: An Automated Approach for Call Recommendation in Contact Centers for Agent Coaching
[ "Md Tahmid Rahman Laskar", "Cheng Chen", "Xue-Yong Fu", "Mahsa Azizi", "Shashi Bhushan", "Simon Corston-Oliver" ]
In recent years, the utilization of Artificial Intelligence (AI) in the contact center industry has been on the rise. One area where AI can have a significant impact is in the coaching of contact center agents. By analyzing call transcripts using Natural Language Processing (NLP) techniques, it would be possible to quickly determine which calls are most relevant for coaching purposes. In this paper, we present AI Coach Assist, which leverages pre-trained transformer-based language models to determine whether a given call is coachable or not based on the quality assurance (QA) questions asked by the contact center managers or supervisors. The system was trained and evaluated on a large dataset collected from real-world contact centers and provides an effective way to recommend calls to the contact center managers that are more likely to contain coachable moments. Our experimental findings demonstrate the potential of AI Coach Assist to improve the coaching process, resulting in enhanced performance of contact center agents.
[ "cs.CL", "cs.AI", "cs.LG" ]
false
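The record above (2305.17619) describes classifying call transcripts as coachable or not with a pre-trained transformer. Below is a hypothetical sketch of such a binary transcript classifier; the base model, label names, and transcript are assumptions, and the fine-tuning loop on QA-annotated calls is omitted.

```python
# Hypothetical coachability classifier head on a pre-trained encoder.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

transcript = "Agent: Thanks for calling... Customer: I've been waiting two weeks for a refund."
inputs = tokenizer(transcript, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
label = ["not_coachable", "coachable"][int(logits.argmax(dim=-1))]
print(label)  # the classification head is untrained here, so this is meaningless until fine-tuned
```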
2305.17627
2023-05-28T04:25:04Z
Robust Natural Language Understanding with Residual Attention Debiasing
[ "Fei Wang", "James Y. Huang", "Tianyi Yan", "Wenxuan Zhou", "Muhao Chen" ]
Natural language understanding (NLU) models often suffer from unintended dataset biases. Among bias mitigation methods, ensemble-based debiasing methods, especially product-of-experts (PoE), have stood out for their impressive empirical success. However, previous ensemble-based debiasing methods typically apply debiasing on top-level logits without directly addressing biased attention patterns. Attention serves as the main media of feature interaction and aggregation in PLMs and plays a crucial role in providing robust prediction. In this paper, we propose REsidual Attention Debiasing (READ), an end-to-end debiasing method that mitigates unintended biases from attention. Experiments on three NLU tasks show that READ significantly improves the performance of BERT-based models on OOD data with shortcuts removed, including +12.9% accuracy on HANS, +11.0% accuracy on FEVER-Symmetric, and +2.7% F1 on PAWS. Detailed analyses demonstrate the crucial role of unbiased attention in robust NLU models and that READ effectively mitigates biases in attention. Code is available at https://github.com/luka-group/READ.
[ "cs.CL", "cs.AI", "cs.LG" ]
false
2305.17651
2023-05-28T07:09:33Z
DPHuBERT: Joint Distillation and Pruning of Self-Supervised Speech Models
[ "Yifan Peng", "Yui Sudo", "Shakeel Muhammad", "Shinji Watanabe" ]
Self-supervised learning (SSL) has achieved notable success in many speech processing tasks, but the large model size and heavy computational cost hinder the deployment. Knowledge distillation trains a small student model to mimic the behavior of a large teacher model. However, the student architecture usually needs to be manually designed and will remain fixed during training, which requires prior knowledge and can lead to suboptimal performance. Inspired by recent success of task-specific structured pruning, we propose DPHuBERT, a novel task-agnostic compression method for speech SSL based on joint distillation and pruning. Experiments on SUPERB show that DPHuBERT outperforms pure distillation methods in almost all tasks. Moreover, DPHuBERT requires little training time and performs well with limited training data, making it suitable for resource-constrained applications. Our method can also be applied to various speech SSL models. Our code and models will be publicly available.
[ "cs.CL", "cs.SD", "eess.AS" ]
false
2305.17733
2023-05-28T14:15:19Z
Investigating Pre-trained Audio Encoders in the Low-Resource Condition
[ "Hao Yang", "Jinming Zhao", "Gholamreza Haffari", "Ehsan Shareghi" ]
Pre-trained speech encoders have been central to pushing state-of-the-art results across various speech understanding and generation tasks. Nonetheless, the capabilities of these encoders in low-resource settings are yet to be thoroughly explored. To address this, we conduct a comprehensive set of experiments using a representative set of 3 state-of-the-art encoders (Wav2vec2, WavLM, Whisper) in the low-resource setting across 7 speech understanding and generation tasks. We provide various quantitative and qualitative analyses on task performance, convergence speed, and representational properties of the encoders. We observe a connection between the pre-training protocols of these encoders and the way in which they capture information in their internal layers. In particular, we observe that the Whisper encoder exhibits the greatest low-resource capabilities on content-driven tasks in terms of performance and convergence speed.
[ "cs.CL", "cs.SD", "eess.AS" ]
false
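The record above (2305.17733) probes the layer-wise representations of pre-trained speech encoders. The sketch below shows, under stated assumptions, how hidden states could be extracted from one such encoder with transformers; the model choice and the random waveform are placeholders, and the paper's seven downstream tasks are not implemented.

```python
# Hypothetical layer-wise feature extraction from a pre-trained speech encoder.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

name = "facebook/wav2vec2-base"  # assumed encoder checkpoint
extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
encoder = Wav2Vec2Model.from_pretrained(name)

waveform = torch.randn(16000)  # 1 second of placeholder 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**inputs, output_hidden_states=True)
# One hidden-state tensor per layer; lightweight probes can be trained on these.
print(len(outputs.hidden_states), outputs.last_hidden_state.shape)
```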
2305.17739
2023-05-28T14:46:54Z
Range-Based Equal Error Rate for Spoof Localization
[ "Lin Zhang", "Xin Wang", "Erica Cooper", "Nicholas Evans", "Junichi Yamagishi" ]
Spoof localization, also called segment-level detection, is a crucial task that aims to locate spoofs in partially spoofed audio. The equal error rate (EER) is widely used to measure performance for such biometric scenarios. Although EER is the only threshold-free metric, it is usually calculated in a point-based way that uses scores and references with a pre-defined temporal resolution and counts the number of misclassified segments. Such point-based measurement overly relies on this resolution and may not accurately measure misclassified ranges. To properly measure misclassified ranges and better evaluate spoof localization performance, we upgrade point-based EER to range-based EER. Then, we adapt the binary search algorithm for calculating range-based EER and compare it with the classical point-based EER. Our analyses suggest utilizing either range-based EER, or point-based EER with a proper temporal resolution can fairly and properly evaluate the performance of spoof localization.
[ "cs.SD", "cs.CL", "eess.AS" ]
false
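The record above (2305.17739) upgrades the classical point-based EER to a range-based EER for spoof localization. As a reference point, the sketch below computes the standard point-based EER from segment scores with scikit-learn; the toy labels and scores are assumptions, and the paper's range-based variant and binary-search procedure are not reproduced.

```python
# Point-based EER on toy segment-level scores (not the paper's range-based EER).
import numpy as np
from sklearn.metrics import roc_curve

def point_based_eer(labels, scores):
    """EER = operating point where the false positive rate equals the false negative rate."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))  # threshold closest to FPR == FNR
    return (fpr[idx] + fnr[idx]) / 2.0

labels = np.array([0, 0, 1, 1, 1, 0, 1, 0])  # 1 = spoofed segment (toy reference)
scores = np.array([0.1, 0.4, 0.8, 0.7, 0.6, 0.3, 0.9, 0.5])
print(f"point-based EER: {point_based_eer(labels, scores):.3f}")
```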
2305.17782
2023-05-28T17:48:48Z
RASR2: The RWTH ASR Toolkit for Generic Sequence-to-sequence Speech Recognition
[ "Wei Zhou", "Eugen Beck", "Simon Berger", "Ralf Schlüter", "Hermann Ney" ]
Modern public ASR tools usually provide rich support for training various sequence-to-sequence (S2S) models, but rather simple support for decoding open-vocabulary scenarios only. For closed-vocabulary scenarios, public tools supporting lexical-constrained decoding are usually only for classical ASR, or do not support all S2S models. To eliminate this restriction on research possibilities such as modeling unit choice, we present RASR2 in this work, a research-oriented generic S2S decoder implemented in C++. It offers strong flexibility and compatibility for various S2S models, language models, label units/topologies and neural network architectures. It provides efficient decoding for both open- and closed-vocabulary scenarios based on a generalized search framework with rich support for different search modes and settings. We evaluate RASR2 with a wide range of experiments on both the Switchboard and Librispeech corpora. Our source code is publicly available online.
[ "cs.CL", "cs.SD", "eess.AS" ]
false
2305.18410
2023-05-28T17:07:46Z
Understanding Breast Cancer Survival: Using Causality and Language Models on Multi-omics Data
[ "Mugariya Farooq", "Shahad Hardan", "Aigerim Zhumbhayeva", "Yujia Zheng", "Preslav Nakov", "Kun Zhang" ]
The need for more usable and explainable machine learning models in healthcare increases the importance of developing and utilizing causal discovery algorithms, which aim to discover causal relations by analyzing observational data. Explainable approaches aid clinicians and biologists in predicting the prognosis of diseases and suggesting proper treatments. However, very little research has been conducted at the crossroads between causal discovery, genomics, and breast cancer, and we aim to bridge this gap. Moreover, evaluation of causal discovery methods on real data is in general notoriously difficult because ground-truth causal relations are usually unknown, and accordingly, in this paper, we also propose to address the evaluation problem with large language models. In particular, we exploit suitable causal discovery algorithms to investigate how various perturbations in the genome can affect the survival of patients diagnosed with breast cancer. We used three main causal discovery algorithms: PC, Greedy Equivalence Search (GES), and a Generalized Precision Matrix-based one. We experiment with a subset of The Cancer Genome Atlas, which contains information about mutations, copy number variations, protein levels, and gene expressions for 705 breast cancer patients. Using these causal discovery algorithms, our findings reveal important factors related to the vital status of patients. However, the reliability of these results remains a concern in the medical domain. Accordingly, as another contribution of the work, the results are validated through language models trained on biomedical literature, such as BlueBERT, and other large language models trained on medical corpora. Our results demonstrate that causal discovery algorithms and language models can be properly utilized to reveal reliable causal relations for clinical applications.
[ "cs.LG", "cs.CL", "q-bio.GN", "stat.ME" ]
false
2305.18419
2023-05-28T19:31:45Z
Semantic Segmentation with Bidirectional Language Models Improves Long-form ASR
[ "W. Ronny Huang", "Hao Zhang", "Shankar Kumar", "Shuo-yiin Chang", "Tara N. Sainath" ]
We propose a method of segmenting long-form speech by separating semantically complete sentences within the utterance. This prevents the ASR decoder from needlessly processing faraway context while also preventing it from missing relevant context within the current sentence. Semantically complete sentence boundaries are typically demarcated by punctuation in written text; but unfortunately, spoken real-world utterances rarely contain punctuation. We address this limitation by distilling punctuation knowledge from a bidirectional teacher language model (LM) trained on written, punctuated text. We compare our segmenter, which is distilled from the LM teacher, against a segmenter distilled from an acoustic-pause-based teacher used in other works, on a streaming ASR pipeline. The pipeline with our segmenter achieves a 3.2% relative WER gain along with a 60 ms median end-of-segment latency reduction on a YouTube captioning task.
[ "cs.CL", "cs.LG", "cs.SD", "eess.AS" ]
false
2305.17608
2023-05-28T02:12:00Z
Reward Collapse in Aligning Large Language Models
[ "Ziang Song", "Tianle Cai", "Jason D. Lee", "Weijie J. Su" ]
The extraordinary capabilities of large language models (LLMs) such as ChatGPT and GPT-4 are in part unleashed by aligning them with reward models that are trained on human preferences, which are often represented as rankings of responses to prompts. In this paper, we document the phenomenon of reward collapse, an empirical observation where the prevailing ranking-based approach results in an identical reward distribution regardless of the prompts during the terminal phase of training. This outcome is undesirable as open-ended prompts like "write a short story about your best friend" should yield a continuous range of rewards for their completions, while specific prompts like "what is the capital of New Zealand" should generate either high or low rewards. Our theoretical investigation reveals that reward collapse is primarily due to the insufficiency of the ranking-based objective function to incorporate prompt-related information during optimization. This insight allows us to derive closed-form expressions for the reward distribution associated with a set of utility functions in an asymptotic regime. To overcome reward collapse, we introduce a prompt-aware optimization scheme that provably admits a prompt-dependent reward distribution within the interpolating regime. Our experimental results suggest that our proposed prompt-aware utility functions significantly alleviate reward collapse during the training of reward models.
[ "cs.LG", "cs.AI", "cs.CL", "math.OC", "stat.ML" ]
false
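The record above (2305.17608) attributes reward collapse to the standard ranking-based reward-model objective. The sketch below is a toy PyTorch rendering of that pairwise ranking loss; the tiny reward model and random embeddings are assumptions, and the paper's prompt-aware utility functions are not implemented.

```python
# Toy pairwise ranking objective for reward-model training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.scorer = nn.Linear(hidden, 1)  # maps a response embedding to a scalar reward

    def forward(self, x):                    # x: (batch, hidden)
        return self.scorer(x).squeeze(-1)    # (batch,)

def pairwise_ranking_loss(r_chosen, r_rejected):
    """-log sigmoid(r_chosen - r_rejected), averaged over the batch."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

model = RewardModel()
chosen, rejected = torch.randn(4, 16), torch.randn(4, 16)  # placeholder embeddings of preferred / dispreferred responses
loss = pairwise_ranking_loss(model(chosen), model(rejected))
loss.backward()
print(float(loss))
```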
2305.17623
2023-05-28T03:59:37Z
On the Value of Myopic Behavior in Policy Reuse
[ "Kang Xu", "Chenjia Bai", "Shuang Qiu", "Haoran He", "Bin Zhao", "Zhen Wang", "Wei Li", "Xuelong Li" ]
Leveraging learned strategies in unfamiliar scenarios is fundamental to human intelligence. In reinforcement learning, rationally reusing the policies acquired from other tasks or human experts is critical for tackling problems that are difficult to learn from scratch. In this work, we present a framework called Selective Myopic bEhavior Control (SMEC), which results from the insight that the short-term behaviors of prior policies are sharable across tasks. By evaluating the behaviors of prior policies via a hybrid value function architecture, SMEC adaptively aggregates the sharable short-term behaviors of prior policies and the long-term behaviors of the task policy, leading to coordinated decisions. Empirical results on a collection of manipulation and locomotion tasks demonstrate that SMEC outperforms existing methods, and validate the ability of SMEC to leverage related prior policies.
[ "cs.LG" ]
false